Nov 22 07:09:58 crc systemd[1]: Starting Kubernetes Kubelet...
Nov 22 07:09:58 crc restorecon[4682]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Nov 22 07:09:58 crc restorecon[4682]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Nov 22 07:09:58 crc restorecon[4682]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Nov 22 07:09:58 crc restorecon[4682]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 22 07:09:58 crc restorecon[4682]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 22 07:09:58 crc restorecon[4682]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 22 07:09:58 crc restorecon[4682]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 22 07:09:58 crc restorecon[4682]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 22 07:09:58 crc restorecon[4682]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Nov 22 07:09:58 crc restorecon[4682]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 22 07:09:58 crc restorecon[4682]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Nov 22 07:09:58 crc restorecon[4682]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Nov 22 07:09:58 crc restorecon[4682]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 22 07:09:58 crc restorecon[4682]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 22 07:09:58 crc restorecon[4682]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 22 07:09:58 crc restorecon[4682]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 22 07:09:58 crc restorecon[4682]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Nov 22 07:09:58 crc restorecon[4682]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Nov 22 07:09:58 crc restorecon[4682]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 22 07:09:58 crc restorecon[4682]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 22 07:09:58 crc restorecon[4682]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 22 07:09:58 crc restorecon[4682]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 22 07:09:58 crc restorecon[4682]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 22 07:09:58 crc restorecon[4682]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 22 07:09:58 crc restorecon[4682]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 22 07:09:58 crc restorecon[4682]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 22 07:09:58 crc restorecon[4682]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 22 07:09:58 crc restorecon[4682]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 22 07:09:58 crc restorecon[4682]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 22 07:09:58 crc restorecon[4682]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 22 07:09:58 crc restorecon[4682]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 22 07:09:58 crc restorecon[4682]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 22 07:09:58 crc restorecon[4682]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Nov 22 07:09:58 crc restorecon[4682]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Nov 22 07:09:58 crc restorecon[4682]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Nov 22 07:09:58 crc restorecon[4682]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Nov 22 07:09:58 crc restorecon[4682]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Nov 22 07:09:58 crc restorecon[4682]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Nov 22 07:09:58 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 22 07:09:58 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 22 07:09:58 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 22 07:09:58 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 22 07:09:58 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 22 07:09:58 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 22 07:09:58 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 22 07:09:58 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 22 07:09:58 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 22 07:09:58 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 22 07:09:58 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 22 07:09:58 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 22 07:09:58 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 22 07:09:58 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 22 07:09:58 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 22 07:09:58 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 22 07:09:58 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 22 07:09:58 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 22 07:09:58 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 22 07:09:58 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 22 07:09:58 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 22 07:09:58 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 22 07:09:58 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 22 07:09:58 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 22 07:09:58 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 22 07:09:58 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 22 07:09:58 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 22 07:09:58 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 22 07:09:58 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 22 07:09:58 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 22 07:09:58 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 22 07:09:58 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 22 07:09:58 crc restorecon[4682]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Nov 22 07:09:58 crc restorecon[4682]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Nov 22 07:09:58 crc restorecon[4682]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Nov 22 07:09:58 crc restorecon[4682]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Nov 22 07:09:58 crc restorecon[4682]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Nov 22 07:09:58 crc restorecon[4682]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Nov 22 07:09:58 crc restorecon[4682]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Nov 22 07:09:58 crc restorecon[4682]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 22 07:09:58 crc restorecon[4682]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 22 07:09:58 crc restorecon[4682]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 22 07:09:58 crc restorecon[4682]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 22 07:09:58 crc restorecon[4682]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 22 07:09:58 crc restorecon[4682]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 22 07:09:58 crc restorecon[4682]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 22 07:09:58 crc restorecon[4682]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 22 07:09:58 crc restorecon[4682]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 22 07:09:58 crc restorecon[4682]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Nov 22 07:09:58 crc restorecon[4682]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Nov 22 07:09:58 crc restorecon[4682]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 22 07:09:58 crc restorecon[4682]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Nov 22 07:09:58 crc restorecon[4682]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Nov 22 07:09:58 crc restorecon[4682]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Nov 22 07:09:58 crc restorecon[4682]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Nov 22 07:09:58 crc restorecon[4682]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 22 07:09:58 crc restorecon[4682]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 22 07:09:58 crc restorecon[4682]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 22 07:09:58 crc restorecon[4682]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 22 07:09:58 crc restorecon[4682]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 22 07:09:58 crc restorecon[4682]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 22 07:09:58 crc restorecon[4682]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 22 07:09:58 crc restorecon[4682]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 22 07:09:58 crc restorecon[4682]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 22 07:09:58 crc restorecon[4682]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 22 07:09:58 crc restorecon[4682]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 22 07:09:58 crc restorecon[4682]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 22 07:09:58 crc restorecon[4682]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 22 07:09:58 crc restorecon[4682]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 22 07:09:58 crc restorecon[4682]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 22 07:09:58 crc restorecon[4682]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 22 07:09:58 crc restorecon[4682]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by
admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 22 07:09:59 crc restorecon[4682]: 
/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 22 07:09:59 crc restorecon[4682]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 22 07:09:59 crc restorecon[4682]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 22 07:09:59 crc restorecon[4682]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c97,c980 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:09:59 crc restorecon[4682]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:09:59 crc restorecon[4682]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:09:59 crc restorecon[4682]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:09:59 crc restorecon[4682]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:09:59 crc restorecon[4682]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to
system_u:object_r:container_file_t:s0:c336,c787 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 22 07:09:59 crc restorecon[4682]: 
/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:09:59 crc restorecon[4682]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:09:59 crc restorecon[4682]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:09:59 crc restorecon[4682]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:09:59 crc restorecon[4682]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:09:59 crc restorecon[4682]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:09:59 crc restorecon[4682]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:09:59 crc restorecon[4682]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:09:59 crc restorecon[4682]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:09:59 crc restorecon[4682]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:09:59 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 
07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:10:00 crc 
restorecon[4682]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 22 07:10:00 crc restorecon[4682]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Nov 22 07:10:00 crc restorecon[4682]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 22 07:10:00 crc restorecon[4682]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c37,c572 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 22 07:10:00 crc restorecon[4682]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 
07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]:
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c133,c223 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 22 07:10:00 crc restorecon[4682]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c682,c947 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 22 07:10:00 crc restorecon[4682]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 22 07:10:00 crc restorecon[4682]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Nov 22 07:10:04 crc kubenswrapper[4853]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 22 07:10:04 crc kubenswrapper[4853]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Nov 22 07:10:04 crc kubenswrapper[4853]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 22 07:10:04 crc kubenswrapper[4853]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Nov 22 07:10:04 crc kubenswrapper[4853]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Nov 22 07:10:04 crc kubenswrapper[4853]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 22 07:10:04 crc kubenswrapper[4853]: I1122 07:10:04.875151 4853 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.880277 4853 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.880312 4853 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.880320 4853 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.880326 4853 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.880332 4853 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.880340 4853 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.880347 4853 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.880354 4853 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.880361 4853 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.880367 4853 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.880373 4853 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.880379 4853 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.880384 4853 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.880392 4853 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.880400 4853 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.880407 4853 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.880415 4853 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.880421 4853 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.880427 4853 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.880433 4853 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.880438 4853 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.880444 4853 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.880449 4853 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.880454 4853 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.880465 4853 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.880470 4853 feature_gate.go:330] unrecognized feature gate: OVNObservability Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.880476 4853 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.880481 4853 feature_gate.go:330] unrecognized feature gate: PlatformOperators Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.880486 4853 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.880491 4853 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.880498 4853 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.880505 4853 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.880511 4853 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.880518 4853 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.880525 4853 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.880531 4853 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.880536 4853 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.880542 4853 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.880547 4853 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.880553 4853 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.880559 4853 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.880565 4853 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.880570 4853 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.880575 4853 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.880581 4853 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.880586 4853 feature_gate.go:330] unrecognized feature gate: InsightsConfig Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.880592 4853 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.880598 4853 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.880605 4853 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.880612 4853 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.880618 4853 feature_gate.go:330] unrecognized feature gate: Example Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.880625 4853 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.880632 4853 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.880638 4853 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.880644 4853 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.880650 4853 feature_gate.go:330] unrecognized feature gate: SignatureStores Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.880656 4853 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.880663 4853 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.880669 4853 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.880675 4853 feature_gate.go:330] unrecognized feature gate: PinnedImages Nov 22 07:10:04 crc 
kubenswrapper[4853]: W1122 07:10:04.880682 4853 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.880688 4853 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.880694 4853 feature_gate.go:330] unrecognized feature gate: NewOLM Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.880700 4853 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.880705 4853 feature_gate.go:330] unrecognized feature gate: GatewayAPI Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.880714 4853 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.880721 4853 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.880727 4853 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.880733 4853 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.880738 4853 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.880763 4853 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Nov 22 07:10:04 crc kubenswrapper[4853]: I1122 07:10:04.880886 4853 flags.go:64] FLAG: --address="0.0.0.0" Nov 22 07:10:04 crc kubenswrapper[4853]: I1122 07:10:04.880898 4853 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Nov 22 07:10:04 crc kubenswrapper[4853]: I1122 07:10:04.880908 4853 flags.go:64] FLAG: --anonymous-auth="true" Nov 22 07:10:04 crc kubenswrapper[4853]: I1122 07:10:04.880917 4853 flags.go:64] FLAG: --application-metrics-count-limit="100" Nov 22 07:10:04 crc kubenswrapper[4853]: I1122 07:10:04.880925 4853 flags.go:64] FLAG: --authentication-token-webhook="false" Nov 22 07:10:04 crc kubenswrapper[4853]: I1122 07:10:04.880931 4853 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Nov 22 07:10:04 crc kubenswrapper[4853]: I1122 07:10:04.880939 4853 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Nov 22 07:10:04 crc kubenswrapper[4853]: I1122 07:10:04.880949 4853 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Nov 22 07:10:04 crc kubenswrapper[4853]: I1122 07:10:04.880955 4853 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Nov 22 07:10:04 crc kubenswrapper[4853]: I1122 07:10:04.880961 4853 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Nov 22 07:10:04 crc kubenswrapper[4853]: I1122 07:10:04.880968 4853 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Nov 22 07:10:04 crc kubenswrapper[4853]: I1122 07:10:04.880974 4853 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Nov 22 07:10:04 crc kubenswrapper[4853]: I1122 07:10:04.880981 4853 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Nov 22 07:10:04 crc kubenswrapper[4853]: I1122 07:10:04.880987 4853 flags.go:64] FLAG: --cgroup-root="" Nov 22 07:10:04 crc kubenswrapper[4853]: I1122 07:10:04.880993 4853 flags.go:64] FLAG: --cgroups-per-qos="true" Nov 22 07:10:04 crc kubenswrapper[4853]: I1122 07:10:04.880999 4853 flags.go:64] FLAG: --client-ca-file="" Nov 22 07:10:04 crc kubenswrapper[4853]: I1122 07:10:04.881005 4853 flags.go:64] FLAG: --cloud-config="" Nov 22 
07:10:04 crc kubenswrapper[4853]: I1122 07:10:04.881011 4853 flags.go:64] FLAG: --cloud-provider="" Nov 22 07:10:04 crc kubenswrapper[4853]: I1122 07:10:04.881016 4853 flags.go:64] FLAG: --cluster-dns="[]" Nov 22 07:10:04 crc kubenswrapper[4853]: I1122 07:10:04.881024 4853 flags.go:64] FLAG: --cluster-domain="" Nov 22 07:10:04 crc kubenswrapper[4853]: I1122 07:10:04.881031 4853 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Nov 22 07:10:04 crc kubenswrapper[4853]: I1122 07:10:04.881037 4853 flags.go:64] FLAG: --config-dir="" Nov 22 07:10:04 crc kubenswrapper[4853]: I1122 07:10:04.881043 4853 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Nov 22 07:10:04 crc kubenswrapper[4853]: I1122 07:10:04.881050 4853 flags.go:64] FLAG: --container-log-max-files="5" Nov 22 07:10:04 crc kubenswrapper[4853]: I1122 07:10:04.881058 4853 flags.go:64] FLAG: --container-log-max-size="10Mi" Nov 22 07:10:04 crc kubenswrapper[4853]: I1122 07:10:04.881064 4853 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Nov 22 07:10:04 crc kubenswrapper[4853]: I1122 07:10:04.881070 4853 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Nov 22 07:10:04 crc kubenswrapper[4853]: I1122 07:10:04.881076 4853 flags.go:64] FLAG: --containerd-namespace="k8s.io" Nov 22 07:10:04 crc kubenswrapper[4853]: I1122 07:10:04.881083 4853 flags.go:64] FLAG: --contention-profiling="false" Nov 22 07:10:04 crc kubenswrapper[4853]: I1122 07:10:04.881089 4853 flags.go:64] FLAG: --cpu-cfs-quota="true" Nov 22 07:10:04 crc kubenswrapper[4853]: I1122 07:10:04.881095 4853 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Nov 22 07:10:04 crc kubenswrapper[4853]: I1122 07:10:04.881101 4853 flags.go:64] FLAG: --cpu-manager-policy="none" Nov 22 07:10:04 crc kubenswrapper[4853]: I1122 07:10:04.881108 4853 flags.go:64] FLAG: --cpu-manager-policy-options="" Nov 22 07:10:04 crc kubenswrapper[4853]: I1122 07:10:04.881115 4853 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Nov 22 07:10:04 crc kubenswrapper[4853]: I1122 07:10:04.881121 4853 flags.go:64] FLAG: --enable-controller-attach-detach="true" Nov 22 07:10:04 crc kubenswrapper[4853]: I1122 07:10:04.881128 4853 flags.go:64] FLAG: --enable-debugging-handlers="true" Nov 22 07:10:04 crc kubenswrapper[4853]: I1122 07:10:04.881134 4853 flags.go:64] FLAG: --enable-load-reader="false" Nov 22 07:10:04 crc kubenswrapper[4853]: I1122 07:10:04.881140 4853 flags.go:64] FLAG: --enable-server="true" Nov 22 07:10:04 crc kubenswrapper[4853]: I1122 07:10:04.881146 4853 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Nov 22 07:10:04 crc kubenswrapper[4853]: I1122 07:10:04.881154 4853 flags.go:64] FLAG: --event-burst="100" Nov 22 07:10:04 crc kubenswrapper[4853]: I1122 07:10:04.881160 4853 flags.go:64] FLAG: --event-qps="50" Nov 22 07:10:04 crc kubenswrapper[4853]: I1122 07:10:04.881166 4853 flags.go:64] FLAG: --event-storage-age-limit="default=0" Nov 22 07:10:04 crc kubenswrapper[4853]: I1122 07:10:04.881172 4853 flags.go:64] FLAG: --event-storage-event-limit="default=0" Nov 22 07:10:04 crc kubenswrapper[4853]: I1122 07:10:04.881181 4853 flags.go:64] FLAG: --eviction-hard="" Nov 22 07:10:04 crc kubenswrapper[4853]: I1122 07:10:04.881189 4853 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Nov 22 07:10:04 crc kubenswrapper[4853]: I1122 07:10:04.881195 4853 flags.go:64] FLAG: --eviction-minimum-reclaim="" Nov 22 07:10:04 crc kubenswrapper[4853]: I1122 07:10:04.881200 4853 flags.go:64] FLAG: 
--eviction-pressure-transition-period="5m0s" Nov 22 07:10:04 crc kubenswrapper[4853]: I1122 07:10:04.881207 4853 flags.go:64] FLAG: --eviction-soft="" Nov 22 07:10:04 crc kubenswrapper[4853]: I1122 07:10:04.881213 4853 flags.go:64] FLAG: --eviction-soft-grace-period="" Nov 22 07:10:04 crc kubenswrapper[4853]: I1122 07:10:04.881219 4853 flags.go:64] FLAG: --exit-on-lock-contention="false" Nov 22 07:10:04 crc kubenswrapper[4853]: I1122 07:10:04.881225 4853 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Nov 22 07:10:04 crc kubenswrapper[4853]: I1122 07:10:04.881231 4853 flags.go:64] FLAG: --experimental-mounter-path="" Nov 22 07:10:04 crc kubenswrapper[4853]: I1122 07:10:04.881237 4853 flags.go:64] FLAG: --fail-cgroupv1="false" Nov 22 07:10:04 crc kubenswrapper[4853]: I1122 07:10:04.881243 4853 flags.go:64] FLAG: --fail-swap-on="true" Nov 22 07:10:04 crc kubenswrapper[4853]: I1122 07:10:04.881250 4853 flags.go:64] FLAG: --feature-gates="" Nov 22 07:10:04 crc kubenswrapper[4853]: I1122 07:10:04.881257 4853 flags.go:64] FLAG: --file-check-frequency="20s" Nov 22 07:10:04 crc kubenswrapper[4853]: I1122 07:10:04.881263 4853 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Nov 22 07:10:04 crc kubenswrapper[4853]: I1122 07:10:04.881269 4853 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Nov 22 07:10:04 crc kubenswrapper[4853]: I1122 07:10:04.881275 4853 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Nov 22 07:10:04 crc kubenswrapper[4853]: I1122 07:10:04.881282 4853 flags.go:64] FLAG: --healthz-port="10248" Nov 22 07:10:04 crc kubenswrapper[4853]: I1122 07:10:04.881295 4853 flags.go:64] FLAG: --help="false" Nov 22 07:10:04 crc kubenswrapper[4853]: I1122 07:10:04.881301 4853 flags.go:64] FLAG: --hostname-override="" Nov 22 07:10:04 crc kubenswrapper[4853]: I1122 07:10:04.881307 4853 flags.go:64] FLAG: --housekeeping-interval="10s" Nov 22 07:10:04 crc kubenswrapper[4853]: I1122 07:10:04.881313 4853 flags.go:64] FLAG: --http-check-frequency="20s" Nov 22 07:10:04 crc kubenswrapper[4853]: I1122 07:10:04.881321 4853 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Nov 22 07:10:04 crc kubenswrapper[4853]: I1122 07:10:04.881327 4853 flags.go:64] FLAG: --image-credential-provider-config="" Nov 22 07:10:04 crc kubenswrapper[4853]: I1122 07:10:04.881333 4853 flags.go:64] FLAG: --image-gc-high-threshold="85" Nov 22 07:10:04 crc kubenswrapper[4853]: I1122 07:10:04.881339 4853 flags.go:64] FLAG: --image-gc-low-threshold="80" Nov 22 07:10:04 crc kubenswrapper[4853]: I1122 07:10:04.881345 4853 flags.go:64] FLAG: --image-service-endpoint="" Nov 22 07:10:04 crc kubenswrapper[4853]: I1122 07:10:04.881350 4853 flags.go:64] FLAG: --kernel-memcg-notification="false" Nov 22 07:10:04 crc kubenswrapper[4853]: I1122 07:10:04.881357 4853 flags.go:64] FLAG: --kube-api-burst="100" Nov 22 07:10:04 crc kubenswrapper[4853]: I1122 07:10:04.881363 4853 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Nov 22 07:10:04 crc kubenswrapper[4853]: I1122 07:10:04.881369 4853 flags.go:64] FLAG: --kube-api-qps="50" Nov 22 07:10:04 crc kubenswrapper[4853]: I1122 07:10:04.881375 4853 flags.go:64] FLAG: --kube-reserved="" Nov 22 07:10:04 crc kubenswrapper[4853]: I1122 07:10:04.881381 4853 flags.go:64] FLAG: --kube-reserved-cgroup="" Nov 22 07:10:04 crc kubenswrapper[4853]: I1122 07:10:04.881387 4853 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Nov 22 07:10:04 crc kubenswrapper[4853]: I1122 07:10:04.881393 4853 flags.go:64] FLAG: 
--kubelet-cgroups="" Nov 22 07:10:04 crc kubenswrapper[4853]: I1122 07:10:04.881399 4853 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Nov 22 07:10:04 crc kubenswrapper[4853]: I1122 07:10:04.881405 4853 flags.go:64] FLAG: --lock-file="" Nov 22 07:10:04 crc kubenswrapper[4853]: I1122 07:10:04.881415 4853 flags.go:64] FLAG: --log-cadvisor-usage="false" Nov 22 07:10:04 crc kubenswrapper[4853]: I1122 07:10:04.881421 4853 flags.go:64] FLAG: --log-flush-frequency="5s" Nov 22 07:10:04 crc kubenswrapper[4853]: I1122 07:10:04.881427 4853 flags.go:64] FLAG: --log-json-info-buffer-size="0" Nov 22 07:10:04 crc kubenswrapper[4853]: I1122 07:10:04.881441 4853 flags.go:64] FLAG: --log-json-split-stream="false" Nov 22 07:10:04 crc kubenswrapper[4853]: I1122 07:10:04.881447 4853 flags.go:64] FLAG: --log-text-info-buffer-size="0" Nov 22 07:10:04 crc kubenswrapper[4853]: I1122 07:10:04.881453 4853 flags.go:64] FLAG: --log-text-split-stream="false" Nov 22 07:10:04 crc kubenswrapper[4853]: I1122 07:10:04.881459 4853 flags.go:64] FLAG: --logging-format="text" Nov 22 07:10:04 crc kubenswrapper[4853]: I1122 07:10:04.881465 4853 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Nov 22 07:10:04 crc kubenswrapper[4853]: I1122 07:10:04.881472 4853 flags.go:64] FLAG: --make-iptables-util-chains="true" Nov 22 07:10:04 crc kubenswrapper[4853]: I1122 07:10:04.881478 4853 flags.go:64] FLAG: --manifest-url="" Nov 22 07:10:04 crc kubenswrapper[4853]: I1122 07:10:04.881484 4853 flags.go:64] FLAG: --manifest-url-header="" Nov 22 07:10:04 crc kubenswrapper[4853]: I1122 07:10:04.881492 4853 flags.go:64] FLAG: --max-housekeeping-interval="15s" Nov 22 07:10:04 crc kubenswrapper[4853]: I1122 07:10:04.881498 4853 flags.go:64] FLAG: --max-open-files="1000000" Nov 22 07:10:04 crc kubenswrapper[4853]: I1122 07:10:04.881505 4853 flags.go:64] FLAG: --max-pods="110" Nov 22 07:10:04 crc kubenswrapper[4853]: I1122 07:10:04.881512 4853 flags.go:64] FLAG: --maximum-dead-containers="-1" Nov 22 07:10:04 crc kubenswrapper[4853]: I1122 07:10:04.881519 4853 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Nov 22 07:10:04 crc kubenswrapper[4853]: I1122 07:10:04.881526 4853 flags.go:64] FLAG: --memory-manager-policy="None" Nov 22 07:10:04 crc kubenswrapper[4853]: I1122 07:10:04.881533 4853 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Nov 22 07:10:04 crc kubenswrapper[4853]: I1122 07:10:04.881540 4853 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Nov 22 07:10:04 crc kubenswrapper[4853]: I1122 07:10:04.881546 4853 flags.go:64] FLAG: --node-ip="192.168.126.11" Nov 22 07:10:04 crc kubenswrapper[4853]: I1122 07:10:04.881552 4853 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Nov 22 07:10:04 crc kubenswrapper[4853]: I1122 07:10:04.881566 4853 flags.go:64] FLAG: --node-status-max-images="50" Nov 22 07:10:04 crc kubenswrapper[4853]: I1122 07:10:04.881572 4853 flags.go:64] FLAG: --node-status-update-frequency="10s" Nov 22 07:10:04 crc kubenswrapper[4853]: I1122 07:10:04.881578 4853 flags.go:64] FLAG: --oom-score-adj="-999" Nov 22 07:10:04 crc kubenswrapper[4853]: I1122 07:10:04.881584 4853 flags.go:64] FLAG: --pod-cidr="" Nov 22 07:10:04 crc kubenswrapper[4853]: I1122 07:10:04.881591 4853 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Nov 22 07:10:04 crc 
kubenswrapper[4853]: I1122 07:10:04.881602 4853 flags.go:64] FLAG: --pod-manifest-path="" Nov 22 07:10:04 crc kubenswrapper[4853]: I1122 07:10:04.881609 4853 flags.go:64] FLAG: --pod-max-pids="-1" Nov 22 07:10:04 crc kubenswrapper[4853]: I1122 07:10:04.881617 4853 flags.go:64] FLAG: --pods-per-core="0" Nov 22 07:10:04 crc kubenswrapper[4853]: I1122 07:10:04.881624 4853 flags.go:64] FLAG: --port="10250" Nov 22 07:10:04 crc kubenswrapper[4853]: I1122 07:10:04.881631 4853 flags.go:64] FLAG: --protect-kernel-defaults="false" Nov 22 07:10:04 crc kubenswrapper[4853]: I1122 07:10:04.881638 4853 flags.go:64] FLAG: --provider-id="" Nov 22 07:10:04 crc kubenswrapper[4853]: I1122 07:10:04.881646 4853 flags.go:64] FLAG: --qos-reserved="" Nov 22 07:10:04 crc kubenswrapper[4853]: I1122 07:10:04.881652 4853 flags.go:64] FLAG: --read-only-port="10255" Nov 22 07:10:04 crc kubenswrapper[4853]: I1122 07:10:04.881660 4853 flags.go:64] FLAG: --register-node="true" Nov 22 07:10:04 crc kubenswrapper[4853]: I1122 07:10:04.881667 4853 flags.go:64] FLAG: --register-schedulable="true" Nov 22 07:10:04 crc kubenswrapper[4853]: I1122 07:10:04.881676 4853 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Nov 22 07:10:04 crc kubenswrapper[4853]: I1122 07:10:04.881690 4853 flags.go:64] FLAG: --registry-burst="10" Nov 22 07:10:04 crc kubenswrapper[4853]: I1122 07:10:04.881697 4853 flags.go:64] FLAG: --registry-qps="5" Nov 22 07:10:04 crc kubenswrapper[4853]: I1122 07:10:04.881704 4853 flags.go:64] FLAG: --reserved-cpus="" Nov 22 07:10:04 crc kubenswrapper[4853]: I1122 07:10:04.881711 4853 flags.go:64] FLAG: --reserved-memory="" Nov 22 07:10:04 crc kubenswrapper[4853]: I1122 07:10:04.881721 4853 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Nov 22 07:10:04 crc kubenswrapper[4853]: I1122 07:10:04.881728 4853 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Nov 22 07:10:04 crc kubenswrapper[4853]: I1122 07:10:04.881734 4853 flags.go:64] FLAG: --rotate-certificates="false" Nov 22 07:10:04 crc kubenswrapper[4853]: I1122 07:10:04.881740 4853 flags.go:64] FLAG: --rotate-server-certificates="false" Nov 22 07:10:04 crc kubenswrapper[4853]: I1122 07:10:04.881772 4853 flags.go:64] FLAG: --runonce="false" Nov 22 07:10:04 crc kubenswrapper[4853]: I1122 07:10:04.881781 4853 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Nov 22 07:10:04 crc kubenswrapper[4853]: I1122 07:10:04.881790 4853 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Nov 22 07:10:04 crc kubenswrapper[4853]: I1122 07:10:04.881798 4853 flags.go:64] FLAG: --seccomp-default="false" Nov 22 07:10:04 crc kubenswrapper[4853]: I1122 07:10:04.881806 4853 flags.go:64] FLAG: --serialize-image-pulls="true" Nov 22 07:10:04 crc kubenswrapper[4853]: I1122 07:10:04.881814 4853 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Nov 22 07:10:04 crc kubenswrapper[4853]: I1122 07:10:04.881824 4853 flags.go:64] FLAG: --storage-driver-db="cadvisor" Nov 22 07:10:04 crc kubenswrapper[4853]: I1122 07:10:04.881832 4853 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Nov 22 07:10:04 crc kubenswrapper[4853]: I1122 07:10:04.881841 4853 flags.go:64] FLAG: --storage-driver-password="root" Nov 22 07:10:04 crc kubenswrapper[4853]: I1122 07:10:04.881848 4853 flags.go:64] FLAG: --storage-driver-secure="false" Nov 22 07:10:04 crc kubenswrapper[4853]: I1122 07:10:04.881856 4853 flags.go:64] FLAG: --storage-driver-table="stats" Nov 22 07:10:04 crc kubenswrapper[4853]: I1122 07:10:04.881863 4853 flags.go:64] FLAG: 
--storage-driver-user="root" Nov 22 07:10:04 crc kubenswrapper[4853]: I1122 07:10:04.881871 4853 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Nov 22 07:10:04 crc kubenswrapper[4853]: I1122 07:10:04.881878 4853 flags.go:64] FLAG: --sync-frequency="1m0s" Nov 22 07:10:04 crc kubenswrapper[4853]: I1122 07:10:04.881884 4853 flags.go:64] FLAG: --system-cgroups="" Nov 22 07:10:04 crc kubenswrapper[4853]: I1122 07:10:04.881890 4853 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Nov 22 07:10:04 crc kubenswrapper[4853]: I1122 07:10:04.881900 4853 flags.go:64] FLAG: --system-reserved-cgroup="" Nov 22 07:10:04 crc kubenswrapper[4853]: I1122 07:10:04.881907 4853 flags.go:64] FLAG: --tls-cert-file="" Nov 22 07:10:04 crc kubenswrapper[4853]: I1122 07:10:04.881913 4853 flags.go:64] FLAG: --tls-cipher-suites="[]" Nov 22 07:10:04 crc kubenswrapper[4853]: I1122 07:10:04.881923 4853 flags.go:64] FLAG: --tls-min-version="" Nov 22 07:10:04 crc kubenswrapper[4853]: I1122 07:10:04.881929 4853 flags.go:64] FLAG: --tls-private-key-file="" Nov 22 07:10:04 crc kubenswrapper[4853]: I1122 07:10:04.881935 4853 flags.go:64] FLAG: --topology-manager-policy="none" Nov 22 07:10:04 crc kubenswrapper[4853]: I1122 07:10:04.881941 4853 flags.go:64] FLAG: --topology-manager-policy-options="" Nov 22 07:10:04 crc kubenswrapper[4853]: I1122 07:10:04.881948 4853 flags.go:64] FLAG: --topology-manager-scope="container" Nov 22 07:10:04 crc kubenswrapper[4853]: I1122 07:10:04.881954 4853 flags.go:64] FLAG: --v="2" Nov 22 07:10:04 crc kubenswrapper[4853]: I1122 07:10:04.881963 4853 flags.go:64] FLAG: --version="false" Nov 22 07:10:04 crc kubenswrapper[4853]: I1122 07:10:04.881971 4853 flags.go:64] FLAG: --vmodule="" Nov 22 07:10:04 crc kubenswrapper[4853]: I1122 07:10:04.881983 4853 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Nov 22 07:10:04 crc kubenswrapper[4853]: I1122 07:10:04.881990 4853 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.882170 4853 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.882184 4853 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.882193 4853 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.882200 4853 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.882208 4853 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.882217 4853 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.882224 4853 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.882232 4853 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.882240 4853 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.882247 4853 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.882254 4853 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Nov 22 
07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.882260 4853 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.882266 4853 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.882272 4853 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.882278 4853 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.882284 4853 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.882290 4853 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.882295 4853 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.882302 4853 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.882309 4853 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.882315 4853 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.882321 4853 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.882327 4853 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.882333 4853 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.882339 4853 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.882344 4853 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.882350 4853 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.882355 4853 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.882361 4853 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.882366 4853 feature_gate.go:330] unrecognized feature gate: OVNObservability Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.882371 4853 feature_gate.go:330] unrecognized feature gate: SignatureStores Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.882377 4853 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.882382 4853 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.882387 4853 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.882394 4853 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.882399 4853 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 
07:10:04.882404 4853 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.882412 4853 feature_gate.go:330] unrecognized feature gate: NewOLM Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.882419 4853 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.882426 4853 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.882432 4853 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.882438 4853 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.882444 4853 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.882450 4853 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.882457 4853 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.882464 4853 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.882470 4853 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.882476 4853 feature_gate.go:330] unrecognized feature gate: PinnedImages Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.882482 4853 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.882488 4853 feature_gate.go:330] unrecognized feature gate: Example Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.882493 4853 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.882499 4853 feature_gate.go:330] unrecognized feature gate: InsightsConfig Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.882504 4853 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.882510 4853 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.882515 4853 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.882520 4853 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.882526 4853 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.882531 4853 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.882536 4853 feature_gate.go:330] unrecognized feature gate: GatewayAPI Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.882541 4853 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.882547 4853 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.882552 4853 feature_gate.go:330] 
unrecognized feature gate: InsightsConfigAPI Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.882557 4853 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.882563 4853 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.882568 4853 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.882573 4853 feature_gate.go:330] unrecognized feature gate: PlatformOperators Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.882578 4853 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.882584 4853 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.882592 4853 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.882602 4853 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.882619 4853 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Nov 22 07:10:04 crc kubenswrapper[4853]: I1122 07:10:04.882632 4853 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Nov 22 07:10:04 crc kubenswrapper[4853]: I1122 07:10:04.927415 4853 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Nov 22 07:10:04 crc kubenswrapper[4853]: I1122 07:10:04.927474 4853 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.927632 4853 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.927660 4853 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.927670 4853 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.927678 4853 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.927688 4853 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.927698 4853 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.927719 4853 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
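The long runs of "unrecognized feature gate" warnings are expected noise here: names such as GatewayAPI or AdminNetworkPolicy appear to be OpenShift cluster-level gates, and the kubelet's own parser keeps only the gates it knows, which is what the resolved map logged above shows. That map corresponds to the featureGates field of the same KubeletConfiguration file (the flag dump shows --feature-gates itself is empty, so these values come from the config file). A sketch of the stanza that would yield exactly this map, under the same v1beta1 schema assumption:

apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# mirrors the "feature gates: {map[...]}" line logged above
featureGates:
  CloudDualStackNodeIPs: true
  DisableKubeletCloudCredentialProviders: true
  DynamicResourceAllocation: false
  EventedPLEG: false
  KMSv1: true
  MaxUnavailableStatefulSet: false
  NodeSwap: false
  ProcMountType: false
  RouteExternalCertificate: false
  ServiceAccountTokenNodeBinding: false
  TranslateStreamCloseWebsocketRequests: false
  UserNamespacesPodSecurityStandards: false
  UserNamespacesSupport: false
  ValidatingAdmissionPolicy: true
  VolumeAttributesClass: false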
Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.927730 4853 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.927739 4853 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.927776 4853 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.927786 4853 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.927805 4853 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.927814 4853 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.927822 4853 feature_gate.go:330] unrecognized feature gate: PinnedImages Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.927831 4853 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.927838 4853 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.927846 4853 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.927854 4853 feature_gate.go:330] unrecognized feature gate: InsightsConfig Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.927861 4853 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.927870 4853 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.927878 4853 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.927887 4853 feature_gate.go:330] unrecognized feature gate: OVNObservability Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.927895 4853 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.927904 4853 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.927912 4853 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.927923 4853 feature_gate.go:330] unrecognized feature gate: PlatformOperators Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.927931 4853 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.927939 4853 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.927946 4853 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.927954 4853 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.927964 4853 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.927974 4853 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.927984 4853 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.927993 4853 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.928001 4853 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.928009 4853 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.928017 4853 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.928024 4853 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.928033 4853 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.928040 4853 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.928048 4853 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.928056 4853 feature_gate.go:330] unrecognized feature gate: GatewayAPI Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.928063 4853 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.928071 4853 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.928078 4853 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.928086 4853 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.928094 4853 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.928101 4853 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.928109 4853 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.928117 4853 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.928128 4853 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.928138 4853 feature_gate.go:330] unrecognized feature gate: NewOLM Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.928147 4853 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.928155 4853 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.928162 4853 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.928170 4853 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.928179 4853 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.928187 4853 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.928195 4853 feature_gate.go:330] unrecognized feature gate: SignatureStores Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.928203 4853 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.928211 4853 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.928221 4853 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.928229 4853 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.928236 4853 feature_gate.go:330] unrecognized feature gate: Example Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.928244 4853 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.928252 4853 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.928261 4853 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.928271 4853 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.928281 4853 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.928291 4853 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.928301 4853 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Nov 22 07:10:04 crc kubenswrapper[4853]: I1122 07:10:04.928317 4853 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.928602 4853 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.928619 4853 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Nov 22 
07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.928627 4853 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.928635 4853 feature_gate.go:330] unrecognized feature gate: Example Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.928644 4853 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.928652 4853 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.928660 4853 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.928670 4853 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.928682 4853 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.928693 4853 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.928704 4853 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.928712 4853 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.928721 4853 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.928731 4853 feature_gate.go:330] unrecognized feature gate: SignatureStores Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.928739 4853 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.928775 4853 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.928785 4853 feature_gate.go:330] unrecognized feature gate: NewOLM Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.928794 4853 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.928803 4853 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.928811 4853 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.928820 4853 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.928829 4853 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.928837 4853 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.928845 4853 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.928853 4853 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.928879 4853 feature_gate.go:330] unrecognized feature gate: PinnedImages Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.928886 4853 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Nov 22 07:10:04 crc 
kubenswrapper[4853]: W1122 07:10:04.928894 4853 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.928901 4853 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.928909 4853 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.928917 4853 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.928927 4853 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.928937 4853 feature_gate.go:330] unrecognized feature gate: InsightsConfig Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.928946 4853 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.928957 4853 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.928966 4853 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.928975 4853 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.928983 4853 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.928992 4853 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.928999 4853 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.929007 4853 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.929026 4853 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.929034 4853 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.929041 4853 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.929050 4853 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.929058 4853 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.929066 4853 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.929074 4853 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.929082 4853 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.929089 4853 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.929097 4853 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.929104 4853 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Nov 22 07:10:04 crc kubenswrapper[4853]: 
W1122 07:10:04.929112 4853 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.929120 4853 feature_gate.go:330] unrecognized feature gate: PlatformOperators Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.929127 4853 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.929134 4853 feature_gate.go:330] unrecognized feature gate: OVNObservability Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.929142 4853 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.929150 4853 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.929157 4853 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.929165 4853 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.929172 4853 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.929181 4853 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.929188 4853 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.929196 4853 feature_gate.go:330] unrecognized feature gate: GatewayAPI Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.929204 4853 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.929211 4853 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.929219 4853 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.929238 4853 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.929246 4853 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.929253 4853 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Nov 22 07:10:04 crc kubenswrapper[4853]: W1122 07:10:04.929261 4853 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Nov 22 07:10:04 crc kubenswrapper[4853]: I1122 07:10:04.929273 4853 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Nov 22 07:10:04 crc kubenswrapper[4853]: I1122 07:10:04.929565 4853 server.go:940] "Client rotation is on, will bootstrap in background" Nov 22 07:10:04 crc kubenswrapper[4853]: I1122 07:10:04.948304 4853 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Nov 22 07:10:04 crc kubenswrapper[4853]: I1122 07:10:04.948454 4853 certificate_store.go:130] Loading cert/key pair from 
"/var/lib/kubelet/pki/kubelet-client-current.pem". Nov 22 07:10:04 crc kubenswrapper[4853]: I1122 07:10:04.950610 4853 server.go:997] "Starting client certificate rotation" Nov 22 07:10:04 crc kubenswrapper[4853]: I1122 07:10:04.950665 4853 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Nov 22 07:10:04 crc kubenswrapper[4853]: I1122 07:10:04.950896 4853 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2026-01-15 18:32:44.454063503 +0000 UTC Nov 22 07:10:04 crc kubenswrapper[4853]: I1122 07:10:04.951034 4853 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 1307h22m39.503033567s for next certificate rotation Nov 22 07:10:05 crc kubenswrapper[4853]: I1122 07:10:05.052405 4853 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Nov 22 07:10:05 crc kubenswrapper[4853]: I1122 07:10:05.055767 4853 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Nov 22 07:10:05 crc kubenswrapper[4853]: I1122 07:10:05.312484 4853 log.go:25] "Validated CRI v1 runtime API" Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.557641 4853 log.go:25] "Validated CRI v1 image API" Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.574576 4853 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.587712 4853 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2025-11-22-07-04-25-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.587831 4853 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:41 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:50 fsType:tmpfs blockSize:0}] Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.630500 4853 manager.go:217] Machine: {Timestamp:2025-11-22 07:10:15.622942873 +0000 UTC m=+14.463565519 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2799998 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:362c9708-b683-4c02-a83b-39323a200ef4 BootID:d74141ce-7696-4d74-b510-3a9c2c375ecd Filesystems:[{Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} 
{Device:/run/user/1000 DeviceMajor:0 DeviceMinor:41 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:50 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:31:9b:41 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:31:9b:41 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:44:2e:33 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:3e:7d:f5 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:b5:41:f0 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:56:5d:85 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:92:3f:54:e6:da:ab Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:9e:8b:8b:e8:ee:62 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 
Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.631185 4853 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.631521 4853 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.642799 4853 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.643239 4853 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.643309 4853 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.645119 4853 topology_manager.go:138] "Creating topology manager with none policy" Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.645236 4853 container_manager_linux.go:303] "Creating device plugin manager" Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.645961 4853 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.646015 4853 server.go:66] "Creating device plugin registration server" version="v1beta1" 
socket="/var/lib/kubelet/device-plugins/kubelet.sock" Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.646373 4853 state_mem.go:36] "Initialized new in-memory state store" Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.646542 4853 server.go:1245] "Using root directory" path="/var/lib/kubelet" Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.651461 4853 kubelet.go:418] "Attempting to sync node with API server" Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.651512 4853 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.651543 4853 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.651567 4853 kubelet.go:324] "Adding apiserver pod source" Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.651588 4853 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.661247 4853 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.662472 4853 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.666969 4853 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.677732 4853 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.677813 4853 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.677830 4853 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.677847 4853 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.677872 4853 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.677907 4853 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.677925 4853 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.677952 4853 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Nov 22 07:10:15 crc kubenswrapper[4853]: W1122 07:10:15.677801 4853 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.251:6443: connect: connection refused Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.677973 4853 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.678064 4853 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.678086 4853 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.678101 4853 plugins.go:603] "Loaded volume plugin" 
pluginName="kubernetes.io/local-volume" Nov 22 07:10:15 crc kubenswrapper[4853]: W1122 07:10:15.677807 4853 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.251:6443: connect: connection refused Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.678139 4853 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Nov 22 07:10:15 crc kubenswrapper[4853]: E1122 07:10:15.678128 4853 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.251:6443: connect: connection refused" logger="UnhandledError" Nov 22 07:10:15 crc kubenswrapper[4853]: E1122 07:10:15.678186 4853 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.251:6443: connect: connection refused" logger="UnhandledError" Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.678829 4853 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.251:6443: connect: connection refused Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.679004 4853 server.go:1280] "Started kubelet" Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.679289 4853 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.679542 4853 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.680248 4853 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 22 07:10:15 crc systemd[1]: Started Kubernetes Kubelet. 
Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.681683 4853 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.681731 4853 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.681857 4853 server.go:460] "Adding debug handlers to kubelet server" Nov 22 07:10:15 crc kubenswrapper[4853]: E1122 07:10:15.682313 4853 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.682418 4853 volume_manager.go:287] "The desired_state_of_world populator starts" Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.682434 4853 volume_manager.go:289] "Starting Kubelet Volume Manager" Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.682611 4853 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Nov 22 07:10:15 crc kubenswrapper[4853]: W1122 07:10:15.684122 4853 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.251:6443: connect: connection refused Nov 22 07:10:15 crc kubenswrapper[4853]: E1122 07:10:15.684229 4853 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.251:6443: connect: connection refused" logger="UnhandledError" Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.685627 4853 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 03:31:50.375970228 +0000 UTC Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.686767 4853 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 716h21m34.689229103s for next certificate rotation Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.688429 4853 factory.go:55] Registering systemd factory Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.688471 4853 factory.go:221] Registration of the systemd container factory successfully Nov 22 07:10:15 crc kubenswrapper[4853]: E1122 07:10:15.687597 4853 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.251:6443: connect: connection refused" interval="200ms" Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.695430 4853 factory.go:153] Registering CRI-O factory Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.695715 4853 factory.go:221] Registration of the crio container factory successfully Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.697587 4853 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.697834 4853 factory.go:103] Registering Raw factory Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.698222 4853 manager.go:1196] Started watching for new ooms in manager Nov 22 07:10:15 crc 
kubenswrapper[4853]: I1122 07:10:15.702258 4853 manager.go:319] Starting recovery of all containers Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.711868 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.712617 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Nov 22 07:10:15 crc kubenswrapper[4853]: E1122 07:10:15.701516 4853 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.251:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.187a4295e80135b2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-11-22 07:10:15.67892421 +0000 UTC m=+14.519546876,LastTimestamp:2025-11-22 07:10:15.67892421 +0000 UTC m=+14.519546876,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.712646 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.712805 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.712825 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.715607 4853 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.715669 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.715697 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.715722 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.715771 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.715797 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.715819 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.715840 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.715859 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.715881 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.715900 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.715921 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.715939 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.715956 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" 
volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.715976 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.716056 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.716075 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.716118 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.716143 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.716159 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.716177 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.716196 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.716239 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.716291 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.716309 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" 
volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.716327 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.716345 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.716414 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.716435 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.716454 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.716567 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.716606 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.716630 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.716664 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.716687 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.716705 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" 
volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.716723 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.716742 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.716807 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.716829 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.716847 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.716864 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.716885 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.716903 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.716920 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.716936 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.716952 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.716968 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.717038 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.717093 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.717143 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.717160 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.717178 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.717213 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.717229 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.717265 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.717280 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.717295 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" 
volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.717310 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.717325 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.717340 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.717355 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.717371 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.717403 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.717422 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.717440 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.717455 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.717471 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.717486 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" 
volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.717502 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.717534 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.717555 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.717572 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.717589 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.717604 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.717621 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.717642 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.717672 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.717687 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.717702 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.717718 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.717733 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.717788 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.717804 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.717819 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.717852 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.717870 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.717900 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.717915 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.717947 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.717963 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext=""
Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.717977 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext=""
Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.717991 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext=""
Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.718009 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext=""
Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.718027 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext=""
Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.718043 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext=""
Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.718059 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext=""
Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.718074 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext=""
Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.718090 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext=""
Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.718106 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext=""
Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.718146 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext=""
Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.718169 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext=""
Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.718186 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext=""
Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.718204 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext=""
Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.718221 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext=""
Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.718239 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext=""
Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.718255 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext=""
Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.718272 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext=""
Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.718289 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext=""
Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.718306 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext=""
Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.718321 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext=""
Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.718337 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext=""
Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.718352 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext=""
Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.718368 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext=""
Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.718385 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext=""
Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.718400 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext=""
Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.718416 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext=""
Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.718432 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext=""
Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.718447 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext=""
Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.718485 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext=""
Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.718501 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext=""
Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.718516 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext=""
Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.718533 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext=""
Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.718579 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext=""
Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.718608 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext=""
Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.718623 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext=""
Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.718639 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext=""
Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.718667 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext=""
Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.718682 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext=""
Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.718696 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext=""
Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.718711 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext=""
Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.718725 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext=""
Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.718769 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext=""
Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.718793 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext=""
Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.718808 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext=""
Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.718824 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext=""
Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.718839 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext=""
Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.718867 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext=""
Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.718882 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext=""
Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.718899 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext=""
Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.718936 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext=""
Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.718952 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext=""
Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.718967 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext=""
Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.718982 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext=""
Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.718998 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext=""
Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.719013 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext=""
Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.719029 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext=""
Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.719044 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext=""
Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.719118 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext=""
Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.719133 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext=""
Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.719146 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext=""
Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.719165 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext=""
Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.719180 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext=""
Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.719196 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext=""
Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.719212 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext=""
Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.719226 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext=""
Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.719242 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext=""
Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.719256 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext=""
Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.719272 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext=""
Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.719287 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext=""
Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.719305 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext=""
Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.719321 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext=""
Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.719335 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext=""
Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.719351 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext=""
Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.719366 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext=""
Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.719386 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext=""
Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.719401 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext=""
Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.719417 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext=""
Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.719433 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext=""
Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.719452 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext=""
Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.719469 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext=""
Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.719484 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext=""
Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.719499 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext=""
Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.719515 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext=""
Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.719530 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext=""
Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.719546 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext=""
Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.719565 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext=""
Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.719580 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext=""
Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.719597 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext=""
Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.719614 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext=""
Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.719632 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext=""
Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.719649 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext=""
Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.719666 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext=""
Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.719683 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext=""
Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.719700 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext=""
Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.719716 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext=""
Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.719734 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext=""
Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.719774 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext=""
Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.719796 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext=""
Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.719813 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext=""
Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.719829 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext=""
Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.719847 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext=""
Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.719864 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext=""
Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.719879 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext=""
Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.719894 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext=""
Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.719909 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext=""
Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.719923 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext=""
Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.719938 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext=""
Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.719955 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext=""
Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.719973 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext=""
Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.719990 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext=""
Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.720008 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext=""
Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.720030 4853 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext=""
Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.720046 4853 reconstruct.go:97] "Volume reconstruction finished"
Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.720057 4853 reconciler.go:26] "Reconciler: start to sync state"
Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.732388 4853 manager.go:324] Recovery completed
Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.743147 4853 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.746139 4853 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.746274 4853 status_manager.go:217] "Starting to sync pod status with apiserver"
Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.746379 4853 kubelet.go:2335] "Starting kubelet main sync loop"
Nov 22 07:10:15 crc kubenswrapper[4853]: E1122 07:10:15.746500 4853 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Nov 22 07:10:15 crc kubenswrapper[4853]: W1122 07:10:15.748011 4853 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.251:6443: connect: connection refused
Nov 22 07:10:15 crc kubenswrapper[4853]: E1122 07:10:15.748101 4853 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.251:6443: connect: connection refused" logger="UnhandledError"
Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.753049 4853 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.755321 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.755392 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.755405 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.756318 4853 cpu_manager.go:225] "Starting CPU manager" policy="none"
Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.756342 4853 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s"
Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.756376 4853 state_mem.go:36] "Initialized new in-memory state store"
Nov 22 07:10:15 crc kubenswrapper[4853]: E1122 07:10:15.782673 4853 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Nov 22 07:10:15 crc kubenswrapper[4853]: E1122 07:10:15.846978 4853 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Nov 22 07:10:15 crc kubenswrapper[4853]: E1122 07:10:15.883362 4853 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Nov 22 07:10:15 crc kubenswrapper[4853]: E1122 07:10:15.891482 4853 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.251:6443: connect: connection refused" interval="400ms"
Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.967205 4853 policy_none.go:49] "None policy: Start"
Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.969237 4853 memory_manager.go:170] "Starting memorymanager" policy="None"
Nov 22 07:10:15 crc kubenswrapper[4853]: I1122 07:10:15.969322 4853 state_mem.go:35] "Initializing new in-memory state store"
Nov 22 07:10:15 crc kubenswrapper[4853]: E1122 07:10:15.983638 4853 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Nov 22 07:10:16 crc kubenswrapper[4853]: E1122 07:10:16.047091 4853 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Nov 22 07:10:16 crc kubenswrapper[4853]: E1122 07:10:16.084515 4853 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Nov 22 07:10:16 crc kubenswrapper[4853]: I1122 07:10:16.100385 4853 manager.go:334] "Starting Device Plugin manager"
Nov 22 07:10:16 crc kubenswrapper[4853]: I1122 07:10:16.100500 4853 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Nov 22 07:10:16 crc kubenswrapper[4853]: I1122 07:10:16.100531 4853 server.go:79] "Starting device plugin registration server"
Nov 22 07:10:16 crc kubenswrapper[4853]: I1122 07:10:16.101483 4853 eviction_manager.go:189] "Eviction manager: starting control loop"
Nov 22 07:10:16 crc kubenswrapper[4853]: I1122 07:10:16.101523 4853 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Nov 22 07:10:16 crc kubenswrapper[4853]: I1122 07:10:16.101704 4853 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry"
Nov 22 07:10:16 crc kubenswrapper[4853]: I1122 07:10:16.101944 4853 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts"
Nov 22 07:10:16 crc kubenswrapper[4853]: I1122 07:10:16.101963 4853 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Nov 22 07:10:16 crc kubenswrapper[4853]: E1122 07:10:16.116618 4853 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Nov 22 07:10:16 crc kubenswrapper[4853]: I1122 07:10:16.202684 4853 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 22 07:10:16 crc kubenswrapper[4853]: I1122 07:10:16.204451 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 22 07:10:16 crc kubenswrapper[4853]: I1122 07:10:16.204506 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 22 07:10:16 crc kubenswrapper[4853]: I1122 07:10:16.204525 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 22 07:10:16 crc kubenswrapper[4853]: I1122 07:10:16.204563 4853 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Nov 22 07:10:16 crc kubenswrapper[4853]: E1122 07:10:16.205159 4853 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.251:6443: connect: connection refused" node="crc"
Nov 22 07:10:16 crc kubenswrapper[4853]: E1122 07:10:16.292657 4853 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.251:6443: connect: connection refused" interval="800ms"
Nov 22 07:10:16 crc kubenswrapper[4853]: I1122 07:10:16.405876 4853 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 22 07:10:16 crc kubenswrapper[4853]: I1122 07:10:16.407577 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 22 07:10:16 crc kubenswrapper[4853]: I1122 07:10:16.407618 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 22 07:10:16 crc kubenswrapper[4853]: I1122 07:10:16.407632 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 22 07:10:16 crc kubenswrapper[4853]: I1122 07:10:16.407678 4853 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Nov 22 07:10:16 crc kubenswrapper[4853]: E1122 07:10:16.408277 4853 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.251:6443: connect: connection refused" node="crc"
Nov 22 07:10:16 crc kubenswrapper[4853]: I1122 07:10:16.448133 4853 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc"]
Nov 22 07:10:16 crc kubenswrapper[4853]: I1122 07:10:16.448320 4853 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 22 07:10:16 crc kubenswrapper[4853]: I1122 07:10:16.449881 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 22 07:10:16 crc kubenswrapper[4853]: I1122 07:10:16.449930 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 22 07:10:16 crc kubenswrapper[4853]: I1122 07:10:16.449943 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 22 07:10:16 crc kubenswrapper[4853]: I1122 07:10:16.450148 4853 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 22 07:10:16 crc kubenswrapper[4853]: I1122 07:10:16.450427 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc"
Nov 22 07:10:16 crc kubenswrapper[4853]: I1122 07:10:16.450480 4853 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 22 07:10:16 crc kubenswrapper[4853]: I1122 07:10:16.451107 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 22 07:10:16 crc kubenswrapper[4853]: I1122 07:10:16.451128 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 22 07:10:16 crc kubenswrapper[4853]: I1122 07:10:16.451167 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 22 07:10:16 crc kubenswrapper[4853]: I1122 07:10:16.451254 4853 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 22 07:10:16 crc kubenswrapper[4853]: I1122 07:10:16.451608 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 22 07:10:16 crc kubenswrapper[4853]: I1122 07:10:16.451637 4853 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 22 07:10:16 crc kubenswrapper[4853]: I1122 07:10:16.452421 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 22 07:10:16 crc kubenswrapper[4853]: I1122 07:10:16.452456 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 22 07:10:16 crc kubenswrapper[4853]: I1122 07:10:16.452471 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 22 07:10:16 crc kubenswrapper[4853]: I1122 07:10:16.452511 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 22 07:10:16 crc kubenswrapper[4853]: I1122 07:10:16.452531 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 22 07:10:16 crc kubenswrapper[4853]: I1122 07:10:16.452539 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 22 07:10:16 crc kubenswrapper[4853]: I1122 07:10:16.453022 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 22 07:10:16 crc kubenswrapper[4853]: I1122 07:10:16.453061 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 22 07:10:16 crc kubenswrapper[4853]: I1122 07:10:16.453074 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 22 07:10:16 crc kubenswrapper[4853]: I1122 07:10:16.453255 4853 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 22 07:10:16 crc kubenswrapper[4853]: I1122 07:10:16.453702 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Nov 22 07:10:16 crc kubenswrapper[4853]: I1122 07:10:16.453990 4853 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 22 07:10:16 crc kubenswrapper[4853]: I1122 07:10:16.454584 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 22 07:10:16 crc kubenswrapper[4853]: I1122 07:10:16.454624 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 22 07:10:16 crc kubenswrapper[4853]: I1122 07:10:16.454637 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 22 07:10:16 crc kubenswrapper[4853]: I1122 07:10:16.454897 4853 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 22 07:10:16 crc kubenswrapper[4853]: I1122 07:10:16.455136 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Nov 22 07:10:16 crc kubenswrapper[4853]: I1122 07:10:16.455206 4853 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 22 07:10:16 crc kubenswrapper[4853]: I1122 07:10:16.455633 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 22 07:10:16 crc kubenswrapper[4853]: I1122 07:10:16.455668 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 22 07:10:16 crc kubenswrapper[4853]: I1122 07:10:16.455686 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 22 07:10:16 crc kubenswrapper[4853]: I1122 07:10:16.456387 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 22 07:10:16 crc kubenswrapper[4853]: I1122 07:10:16.456429 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 22 07:10:16 crc kubenswrapper[4853]: I1122 07:10:16.456441 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 22 07:10:16 crc kubenswrapper[4853]: I1122 07:10:16.456513 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 22 07:10:16 crc kubenswrapper[4853]: I1122 07:10:16.456554 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 22 07:10:16 crc kubenswrapper[4853]: I1122 07:10:16.456574 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 22 07:10:16 crc kubenswrapper[4853]: I1122 07:10:16.456694 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Nov 22 07:10:16 crc kubenswrapper[4853]: I1122 07:10:16.456741 4853 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 22 07:10:16 crc kubenswrapper[4853]: I1122 07:10:16.457582 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 22 07:10:16 crc kubenswrapper[4853]: I1122 07:10:16.457615 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 22 07:10:16 crc kubenswrapper[4853]: I1122 07:10:16.457627 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 22 07:10:16 crc kubenswrapper[4853]: W1122 07:10:16.515515 4853 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.251:6443: connect: connection refused
Nov 22 07:10:16 crc kubenswrapper[4853]: E1122 07:10:16.515662 4853 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.251:6443: connect: connection refused" logger="UnhandledError"
Nov 22 07:10:16 crc kubenswrapper[4853]: I1122 07:10:16.529626 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Nov 22 07:10:16 crc kubenswrapper[4853]: I1122 07:10:16.529689 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Nov 22 07:10:16 crc kubenswrapper[4853]: I1122 07:10:16.529709 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Nov 22 07:10:16 crc kubenswrapper[4853]: I1122 07:10:16.529727 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 22 07:10:16 crc kubenswrapper[4853]: I1122 07:10:16.529768 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 22 07:10:16 crc kubenswrapper[4853]: I1122 07:10:16.529787 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Nov 22 07:10:16 crc kubenswrapper[4853]: I1122 07:10:16.529839 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Nov 22 07:10:16 crc kubenswrapper[4853]: I1122 07:10:16.529881 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Nov 22 07:10:16 crc kubenswrapper[4853]: I1122 07:10:16.529904 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 22 07:10:16 crc kubenswrapper[4853]: I1122 07:10:16.529938 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Nov 22 07:10:16 crc kubenswrapper[4853]: I1122 07:10:16.529963 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Nov 22 07:10:16 crc kubenswrapper[4853]: I1122 07:10:16.530068 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Nov 22 07:10:16 crc kubenswrapper[4853]: I1122 07:10:16.530164 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Nov 22 07:10:16 crc kubenswrapper[4853]: I1122 07:10:16.530187 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Nov 22 07:10:16 crc kubenswrapper[4853]: I1122 07:10:16.530202 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Nov 22 07:10:16 crc kubenswrapper[4853]: I1122 07:10:16.631253 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Nov 22 07:10:16 crc kubenswrapper[4853]: I1122 07:10:16.631314 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 22 07:10:16 crc kubenswrapper[4853]: I1122 07:10:16.631332 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 22 07:10:16 crc kubenswrapper[4853]: I1122 07:10:16.631348 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Nov 22 07:10:16 crc kubenswrapper[4853]: I1122 07:10:16.631367 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Nov 22 07:10:16 crc kubenswrapper[4853]: I1122 07:10:16.631383 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Nov 22 07:10:16 crc kubenswrapper[4853]: I1122 07:10:16.631441 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 22 07:10:16 crc kubenswrapper[4853]: I1122 07:10:16.631460 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Nov 22 07:10:16 crc kubenswrapper[4853]: I1122 07:10:16.631478 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Nov 22 07:10:16 crc kubenswrapper[4853]: I1122 07:10:16.631549 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 22 07:10:16 crc kubenswrapper[4853]: I1122 07:10:16.631544 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 22 07:10:16 crc kubenswrapper[4853]: I1122 07:10:16.631627 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Nov 22 07:10:16 crc kubenswrapper[4853]: I1122 07:10:16.631647 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Nov 22 07:10:16 crc kubenswrapper[4853]: I1122 07:10:16.631641 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Nov 22 07:10:16 crc kubenswrapper[4853]: I1122 07:10:16.631739 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Nov 22 07:10:16 crc kubenswrapper[4853]: I1122 07:10:16.631591 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Nov 22 07:10:16 crc kubenswrapper[4853]: I1122 07:10:16.631639 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 22 07:10:16 crc kubenswrapper[4853]: I1122 07:10:16.631769 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Nov 22 07:10:16 crc kubenswrapper[4853]: I1122 07:10:16.631666 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Nov 22 07:10:16 crc kubenswrapper[4853]: I1122 07:10:16.631846 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Nov 22 07:10:16 crc kubenswrapper[4853]: I1122 07:10:16.631977 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Nov 22 07:10:16 crc kubenswrapper[4853]: I1122 07:10:16.631974 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Nov 22 07:10:16 crc kubenswrapper[4853]: I1122 07:10:16.632097 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Nov 22 07:10:16 crc kubenswrapper[4853]: I1122 07:10:16.632133 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Nov 22 07:10:16 crc kubenswrapper[4853]: I1122 07:10:16.632167 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Nov 22 07:10:16 crc kubenswrapper[4853]: I1122 07:10:16.632184 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Nov 22 07:10:16 crc kubenswrapper[4853]: I1122 07:10:16.632188 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Nov 22 07:10:16 crc kubenswrapper[4853]: I1122 07:10:16.632208 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Nov 22 07:10:16 crc kubenswrapper[4853]: I1122 07:10:16.632232 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Nov 22 07:10:16 crc kubenswrapper[4853]: I1122 07:10:16.632255 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Nov 22 07:10:16 crc kubenswrapper[4853]: I1122 07:10:16.679980 4853 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.251:6443: connect: connection refused
Nov 22 07:10:16 crc kubenswrapper[4853]: I1122 07:10:16.782606 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 22 07:10:16 crc kubenswrapper[4853]: I1122 07:10:16.788547 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc"
Nov 22 07:10:16 crc kubenswrapper[4853]: I1122 07:10:16.808458 4853 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 22 07:10:16 crc kubenswrapper[4853]: W1122 07:10:16.808678 4853 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.251:6443: connect: connection refused
Nov 22 07:10:16 crc kubenswrapper[4853]: E1122 07:10:16.808804 4853 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.251:6443: connect: connection refused" logger="UnhandledError"
Nov 22 07:10:16 crc kubenswrapper[4853]: I1122 07:10:16.809681 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Nov 22 07:10:16 crc kubenswrapper[4853]: I1122 07:10:16.809729 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 22 07:10:16 crc kubenswrapper[4853]: I1122 07:10:16.809786 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 22 07:10:16 crc kubenswrapper[4853]: I1122 07:10:16.809796 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 22 07:10:16 crc kubenswrapper[4853]: I1122 07:10:16.809819 4853 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Nov 22 07:10:16 crc kubenswrapper[4853]: E1122 07:10:16.810112 4853 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.251:6443: connect: connection refused" node="crc"
Nov 22 07:10:16 crc kubenswrapper[4853]: I1122 07:10:16.826236 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Nov 22 07:10:16 crc kubenswrapper[4853]: I1122 07:10:16.838974 4853 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 22 07:10:16 crc kubenswrapper[4853]: W1122 07:10:16.882820 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-0485a6ea52713455e808d03864e00f5930ab4cdc82b9b5d2e73cc7c3001382c2 WatchSource:0}: Error finding container 0485a6ea52713455e808d03864e00f5930ab4cdc82b9b5d2e73cc7c3001382c2: Status 404 returned error can't find the container with id 0485a6ea52713455e808d03864e00f5930ab4cdc82b9b5d2e73cc7c3001382c2 Nov 22 07:10:16 crc kubenswrapper[4853]: W1122 07:10:16.884951 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-7313c08eadaca29b4dae665768bbb7361116f61a281386a1b9814ed267c2794d WatchSource:0}: Error finding container 7313c08eadaca29b4dae665768bbb7361116f61a281386a1b9814ed267c2794d: Status 404 returned error can't find the container with id 7313c08eadaca29b4dae665768bbb7361116f61a281386a1b9814ed267c2794d Nov 22 07:10:16 crc kubenswrapper[4853]: W1122 07:10:16.893148 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-12ad4c0bc55fd262d74bb909a20dca2e00610903caec457c6546539f7e3334b8 WatchSource:0}: Error finding container 12ad4c0bc55fd262d74bb909a20dca2e00610903caec457c6546539f7e3334b8: Status 404 returned error can't find the container with id 12ad4c0bc55fd262d74bb909a20dca2e00610903caec457c6546539f7e3334b8 Nov 22 07:10:16 crc kubenswrapper[4853]: W1122 07:10:16.894507 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-61f27685d3fc960013b9f6229a076768e25da86d500cf273d7745e94f73ac7f5 WatchSource:0}: Error finding container 61f27685d3fc960013b9f6229a076768e25da86d500cf273d7745e94f73ac7f5: Status 404 returned error can't find the container with id 61f27685d3fc960013b9f6229a076768e25da86d500cf273d7745e94f73ac7f5 Nov 22 07:10:16 crc kubenswrapper[4853]: W1122 07:10:16.902312 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-3348ca312eecc6f413340938223461caebee6c79a61481cf8ba60e633f91acb5 WatchSource:0}: Error finding container 3348ca312eecc6f413340938223461caebee6c79a61481cf8ba60e633f91acb5: Status 404 returned error can't find the container with id 3348ca312eecc6f413340938223461caebee6c79a61481cf8ba60e633f91acb5 Nov 22 07:10:16 crc kubenswrapper[4853]: W1122 07:10:16.986113 4853 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.251:6443: connect: connection refused Nov 22 07:10:16 crc kubenswrapper[4853]: E1122 07:10:16.986205 4853 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.251:6443: connect: connection refused" logger="UnhandledError" Nov 22 07:10:17 crc kubenswrapper[4853]: E1122 
07:10:17.093786 4853 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.251:6443: connect: connection refused" interval="1.6s" Nov 22 07:10:17 crc kubenswrapper[4853]: W1122 07:10:17.275871 4853 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.251:6443: connect: connection refused Nov 22 07:10:17 crc kubenswrapper[4853]: E1122 07:10:17.276006 4853 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.251:6443: connect: connection refused" logger="UnhandledError" Nov 22 07:10:17 crc kubenswrapper[4853]: I1122 07:10:17.610360 4853 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 07:10:17 crc kubenswrapper[4853]: I1122 07:10:17.612430 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:10:17 crc kubenswrapper[4853]: I1122 07:10:17.612488 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:10:17 crc kubenswrapper[4853]: I1122 07:10:17.612509 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:10:17 crc kubenswrapper[4853]: I1122 07:10:17.612558 4853 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 22 07:10:17 crc kubenswrapper[4853]: E1122 07:10:17.613372 4853 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.251:6443: connect: connection refused" node="crc" Nov 22 07:10:17 crc kubenswrapper[4853]: I1122 07:10:17.680548 4853 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.251:6443: connect: connection refused Nov 22 07:10:17 crc kubenswrapper[4853]: I1122 07:10:17.755676 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"3348ca312eecc6f413340938223461caebee6c79a61481cf8ba60e633f91acb5"} Nov 22 07:10:17 crc kubenswrapper[4853]: I1122 07:10:17.756795 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"61f27685d3fc960013b9f6229a076768e25da86d500cf273d7745e94f73ac7f5"} Nov 22 07:10:17 crc kubenswrapper[4853]: I1122 07:10:17.758790 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"12ad4c0bc55fd262d74bb909a20dca2e00610903caec457c6546539f7e3334b8"} Nov 22 07:10:17 crc kubenswrapper[4853]: I1122 07:10:17.760032 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" 
event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"7313c08eadaca29b4dae665768bbb7361116f61a281386a1b9814ed267c2794d"} Nov 22 07:10:17 crc kubenswrapper[4853]: I1122 07:10:17.761045 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"0485a6ea52713455e808d03864e00f5930ab4cdc82b9b5d2e73cc7c3001382c2"} Nov 22 07:10:18 crc kubenswrapper[4853]: I1122 07:10:18.680200 4853 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.251:6443: connect: connection refused Nov 22 07:10:18 crc kubenswrapper[4853]: E1122 07:10:18.695255 4853 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.251:6443: connect: connection refused" interval="3.2s" Nov 22 07:10:19 crc kubenswrapper[4853]: I1122 07:10:19.214598 4853 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 07:10:19 crc kubenswrapper[4853]: I1122 07:10:19.217519 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:10:19 crc kubenswrapper[4853]: I1122 07:10:19.217590 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:10:19 crc kubenswrapper[4853]: I1122 07:10:19.217616 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:10:19 crc kubenswrapper[4853]: I1122 07:10:19.217655 4853 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 22 07:10:19 crc kubenswrapper[4853]: E1122 07:10:19.218614 4853 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.251:6443: connect: connection refused" node="crc" Nov 22 07:10:19 crc kubenswrapper[4853]: W1122 07:10:19.315335 4853 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.251:6443: connect: connection refused Nov 22 07:10:19 crc kubenswrapper[4853]: E1122 07:10:19.315507 4853 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.251:6443: connect: connection refused" logger="UnhandledError" Nov 22 07:10:19 crc kubenswrapper[4853]: W1122 07:10:19.470593 4853 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.251:6443: connect: connection refused Nov 22 07:10:19 crc kubenswrapper[4853]: E1122 07:10:19.470719 4853 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get 
\"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.251:6443: connect: connection refused" logger="UnhandledError" Nov 22 07:10:19 crc kubenswrapper[4853]: W1122 07:10:19.551952 4853 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.251:6443: connect: connection refused Nov 22 07:10:19 crc kubenswrapper[4853]: E1122 07:10:19.552101 4853 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.251:6443: connect: connection refused" logger="UnhandledError" Nov 22 07:10:19 crc kubenswrapper[4853]: I1122 07:10:19.679861 4853 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.251:6443: connect: connection refused Nov 22 07:10:19 crc kubenswrapper[4853]: I1122 07:10:19.769889 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"b9736c38776f22909b8e8a832683d1b881f020257f7183397286e0dd23c973a3"} Nov 22 07:10:19 crc kubenswrapper[4853]: I1122 07:10:19.772718 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"742d02f24a7096b76f82453ab15e531d4a378b4bbf0bb36eb1a1cb4ae1873165"} Nov 22 07:10:19 crc kubenswrapper[4853]: I1122 07:10:19.772963 4853 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 07:10:19 crc kubenswrapper[4853]: I1122 07:10:19.774910 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:10:19 crc kubenswrapper[4853]: I1122 07:10:19.774988 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:10:19 crc kubenswrapper[4853]: I1122 07:10:19.775007 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:10:19 crc kubenswrapper[4853]: I1122 07:10:19.775918 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"13ca1fba1325e0ae8556def3ba0c98aa0aff6e3b9ed2aab66a5c950d2bace44e"} Nov 22 07:10:19 crc kubenswrapper[4853]: I1122 07:10:19.778082 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"6143a86b720cb8a29f4ac3d68bf2693f92fb50f528cdd7269e022115795ef14b"} Nov 22 07:10:19 crc kubenswrapper[4853]: I1122 07:10:19.780380 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"6a97a0d92e5fdd8906fa4f150c4b7d95fdd1f3b92ac45a46e0a6846fd4a15bb0"} Nov 22 07:10:19 crc kubenswrapper[4853]: W1122 
07:10:19.840374 4853 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.251:6443: connect: connection refused Nov 22 07:10:19 crc kubenswrapper[4853]: E1122 07:10:19.840548 4853 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.251:6443: connect: connection refused" logger="UnhandledError" Nov 22 07:10:20 crc kubenswrapper[4853]: I1122 07:10:20.680265 4853 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.251:6443: connect: connection refused Nov 22 07:10:20 crc kubenswrapper[4853]: I1122 07:10:20.785727 4853 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="13ca1fba1325e0ae8556def3ba0c98aa0aff6e3b9ed2aab66a5c950d2bace44e" exitCode=0 Nov 22 07:10:20 crc kubenswrapper[4853]: I1122 07:10:20.785846 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"13ca1fba1325e0ae8556def3ba0c98aa0aff6e3b9ed2aab66a5c950d2bace44e"} Nov 22 07:10:20 crc kubenswrapper[4853]: I1122 07:10:20.786260 4853 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 07:10:20 crc kubenswrapper[4853]: I1122 07:10:20.788072 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:10:20 crc kubenswrapper[4853]: I1122 07:10:20.788133 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:10:20 crc kubenswrapper[4853]: I1122 07:10:20.788159 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:10:20 crc kubenswrapper[4853]: I1122 07:10:20.788731 4853 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="6143a86b720cb8a29f4ac3d68bf2693f92fb50f528cdd7269e022115795ef14b" exitCode=0 Nov 22 07:10:20 crc kubenswrapper[4853]: I1122 07:10:20.788837 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"6143a86b720cb8a29f4ac3d68bf2693f92fb50f528cdd7269e022115795ef14b"} Nov 22 07:10:20 crc kubenswrapper[4853]: I1122 07:10:20.788893 4853 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 07:10:20 crc kubenswrapper[4853]: I1122 07:10:20.790270 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:10:20 crc kubenswrapper[4853]: I1122 07:10:20.790315 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:10:20 crc kubenswrapper[4853]: I1122 07:10:20.790336 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:10:20 crc kubenswrapper[4853]: I1122 07:10:20.790736 
4853 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 07:10:20 crc kubenswrapper[4853]: I1122 07:10:20.792273 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:10:20 crc kubenswrapper[4853]: I1122 07:10:20.792339 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:10:20 crc kubenswrapper[4853]: I1122 07:10:20.792364 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:10:20 crc kubenswrapper[4853]: I1122 07:10:20.793578 4853 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="6a97a0d92e5fdd8906fa4f150c4b7d95fdd1f3b92ac45a46e0a6846fd4a15bb0" exitCode=0 Nov 22 07:10:20 crc kubenswrapper[4853]: I1122 07:10:20.793710 4853 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 07:10:20 crc kubenswrapper[4853]: I1122 07:10:20.793711 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"6a97a0d92e5fdd8906fa4f150c4b7d95fdd1f3b92ac45a46e0a6846fd4a15bb0"} Nov 22 07:10:20 crc kubenswrapper[4853]: I1122 07:10:20.795057 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:10:20 crc kubenswrapper[4853]: I1122 07:10:20.795104 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:10:20 crc kubenswrapper[4853]: I1122 07:10:20.795118 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:10:20 crc kubenswrapper[4853]: I1122 07:10:20.796360 4853 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="742d02f24a7096b76f82453ab15e531d4a378b4bbf0bb36eb1a1cb4ae1873165" exitCode=0 Nov 22 07:10:20 crc kubenswrapper[4853]: I1122 07:10:20.796421 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"742d02f24a7096b76f82453ab15e531d4a378b4bbf0bb36eb1a1cb4ae1873165"} Nov 22 07:10:20 crc kubenswrapper[4853]: I1122 07:10:20.796524 4853 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 07:10:20 crc kubenswrapper[4853]: I1122 07:10:20.798081 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:10:20 crc kubenswrapper[4853]: I1122 07:10:20.798137 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:10:20 crc kubenswrapper[4853]: I1122 07:10:20.798183 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:10:21 crc kubenswrapper[4853]: E1122 07:10:21.381511 4853 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.251:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.187a4295e80135b2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-11-22 07:10:15.67892421 +0000 UTC m=+14.519546876,LastTimestamp:2025-11-22 07:10:15.67892421 +0000 UTC m=+14.519546876,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Nov 22 07:10:21 crc kubenswrapper[4853]: I1122 07:10:21.680508 4853 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.251:6443: connect: connection refused Nov 22 07:10:21 crc kubenswrapper[4853]: I1122 07:10:21.801426 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"666c3902a5cb3b755fe3b5861568b744fb3ffbd28f72d7fd22d18b387486de03"} Nov 22 07:10:21 crc kubenswrapper[4853]: I1122 07:10:21.804221 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"0368eb3431a7b8e0225b3728f72d601ce3f9639f06175b96818fd798b33e4143"} Nov 22 07:10:21 crc kubenswrapper[4853]: I1122 07:10:21.807033 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"94704fd95e75549ea29613f00da78d4517cc64896cf1274c9c6f784a61f4ad4a"} Nov 22 07:10:21 crc kubenswrapper[4853]: I1122 07:10:21.809867 4853 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="2253bab71d535b61f7489679566abc1acf210e0d5f69773fedbb02befab17c2d" exitCode=0 Nov 22 07:10:21 crc kubenswrapper[4853]: I1122 07:10:21.809984 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"2253bab71d535b61f7489679566abc1acf210e0d5f69773fedbb02befab17c2d"} Nov 22 07:10:21 crc kubenswrapper[4853]: I1122 07:10:21.812034 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"96b61660fcd78f2f81eab2c1a0cb32018a5b32d49581167e33e7dca8ba5ddf2c"} Nov 22 07:10:21 crc kubenswrapper[4853]: E1122 07:10:21.896312 4853 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.251:6443: connect: connection refused" interval="6.4s" Nov 22 07:10:22 crc kubenswrapper[4853]: I1122 07:10:22.419207 4853 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 07:10:22 crc kubenswrapper[4853]: I1122 07:10:22.421030 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:10:22 crc kubenswrapper[4853]: I1122 07:10:22.421157 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:10:22 crc kubenswrapper[4853]: I1122 07:10:22.421224 4853 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:10:22 crc kubenswrapper[4853]: I1122 07:10:22.421310 4853 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 22 07:10:22 crc kubenswrapper[4853]: E1122 07:10:22.421757 4853 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.251:6443: connect: connection refused" node="crc" Nov 22 07:10:22 crc kubenswrapper[4853]: I1122 07:10:22.680591 4853 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.251:6443: connect: connection refused Nov 22 07:10:23 crc kubenswrapper[4853]: W1122 07:10:23.104970 4853 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.251:6443: connect: connection refused Nov 22 07:10:23 crc kubenswrapper[4853]: E1122 07:10:23.105115 4853 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.251:6443: connect: connection refused" logger="UnhandledError" Nov 22 07:10:23 crc kubenswrapper[4853]: I1122 07:10:23.680569 4853 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.251:6443: connect: connection refused Nov 22 07:10:23 crc kubenswrapper[4853]: I1122 07:10:23.820952 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"30f542bc05dc23bc8dff5029894d08a6fea8333de31a66432814b7bbc412c4fa"} Nov 22 07:10:23 crc kubenswrapper[4853]: I1122 07:10:23.821013 4853 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 07:10:23 crc kubenswrapper[4853]: I1122 07:10:23.821994 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:10:23 crc kubenswrapper[4853]: I1122 07:10:23.822059 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:10:23 crc kubenswrapper[4853]: I1122 07:10:23.822081 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:10:23 crc kubenswrapper[4853]: W1122 07:10:23.828289 4853 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.251:6443: connect: connection refused Nov 22 07:10:23 crc kubenswrapper[4853]: E1122 07:10:23.828410 4853 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 
38.102.83.251:6443: connect: connection refused" logger="UnhandledError" Nov 22 07:10:24 crc kubenswrapper[4853]: I1122 07:10:24.680018 4853 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.251:6443: connect: connection refused Nov 22 07:10:24 crc kubenswrapper[4853]: I1122 07:10:24.824932 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"21be0b3346d0dd20c21a867b8fbd30e6291d09bd1745febbc2face7610f6d831"} Nov 22 07:10:24 crc kubenswrapper[4853]: I1122 07:10:24.828409 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"8fc661089ab1f7286ead68fd510a0848c9df8dd41a8ed349d03b43b72e1a369a"} Nov 22 07:10:24 crc kubenswrapper[4853]: I1122 07:10:24.828478 4853 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 07:10:24 crc kubenswrapper[4853]: I1122 07:10:24.829973 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:10:24 crc kubenswrapper[4853]: I1122 07:10:24.830010 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:10:24 crc kubenswrapper[4853]: I1122 07:10:24.830023 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:10:25 crc kubenswrapper[4853]: W1122 07:10:25.001238 4853 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.251:6443: connect: connection refused Nov 22 07:10:25 crc kubenswrapper[4853]: E1122 07:10:25.001349 4853 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.251:6443: connect: connection refused" logger="UnhandledError" Nov 22 07:10:25 crc kubenswrapper[4853]: W1122 07:10:25.609939 4853 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.251:6443: connect: connection refused Nov 22 07:10:25 crc kubenswrapper[4853]: E1122 07:10:25.610115 4853 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.251:6443: connect: connection refused" logger="UnhandledError" Nov 22 07:10:25 crc kubenswrapper[4853]: I1122 07:10:25.679884 4853 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.251:6443: connect: connection refused Nov 22 07:10:25 crc kubenswrapper[4853]: I1122 07:10:25.835994 4853 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"cdc0676190f39e3a925b593e663ec1af1fa253b8177b7cb81fd49515e0c55c3e"} Nov 22 07:10:25 crc kubenswrapper[4853]: I1122 07:10:25.841612 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"d5d8d77818159ca98019aac809db4b1b2460723bc56b9b6728eec11faeefd026"} Nov 22 07:10:25 crc kubenswrapper[4853]: I1122 07:10:25.841783 4853 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 07:10:25 crc kubenswrapper[4853]: I1122 07:10:25.842902 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:10:25 crc kubenswrapper[4853]: I1122 07:10:25.842950 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:10:25 crc kubenswrapper[4853]: I1122 07:10:25.842962 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:10:25 crc kubenswrapper[4853]: I1122 07:10:25.844741 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"8fa71a988b649937f137ffb7533456570bf4fcd199134424fc7bff2b62335f1e"} Nov 22 07:10:25 crc kubenswrapper[4853]: I1122 07:10:25.848013 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"2a0d61ce2e58ce8d9abdd1fb9688722d8e25f8354553aef76b8cd4b3897135a6"} Nov 22 07:10:26 crc kubenswrapper[4853]: E1122 07:10:26.117445 4853 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Nov 22 07:10:26 crc kubenswrapper[4853]: I1122 07:10:26.680590 4853 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.251:6443: connect: connection refused Nov 22 07:10:26 crc kubenswrapper[4853]: I1122 07:10:26.856843 4853 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="8fa71a988b649937f137ffb7533456570bf4fcd199134424fc7bff2b62335f1e" exitCode=0 Nov 22 07:10:26 crc kubenswrapper[4853]: I1122 07:10:26.856975 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"8fa71a988b649937f137ffb7533456570bf4fcd199134424fc7bff2b62335f1e"} Nov 22 07:10:26 crc kubenswrapper[4853]: I1122 07:10:26.857035 4853 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 07:10:26 crc kubenswrapper[4853]: I1122 07:10:26.857506 4853 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 07:10:26 crc kubenswrapper[4853]: I1122 07:10:26.857960 4853 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 07:10:26 crc kubenswrapper[4853]: I1122 07:10:26.862234 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 22 07:10:26 crc kubenswrapper[4853]: I1122 07:10:26.862279 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:10:26 crc kubenswrapper[4853]: I1122 07:10:26.862294 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:10:26 crc kubenswrapper[4853]: I1122 07:10:26.862380 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:10:26 crc kubenswrapper[4853]: I1122 07:10:26.862488 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:10:26 crc kubenswrapper[4853]: I1122 07:10:26.862537 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:10:26 crc kubenswrapper[4853]: I1122 07:10:26.862499 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:10:26 crc kubenswrapper[4853]: I1122 07:10:26.862594 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:10:26 crc kubenswrapper[4853]: I1122 07:10:26.862609 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:10:27 crc kubenswrapper[4853]: I1122 07:10:27.409979 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 22 07:10:27 crc kubenswrapper[4853]: I1122 07:10:27.610865 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 22 07:10:27 crc kubenswrapper[4853]: I1122 07:10:27.680404 4853 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.251:6443: connect: connection refused Nov 22 07:10:27 crc kubenswrapper[4853]: I1122 07:10:27.867318 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"1c1ee716027cb8b663ee9faec3faed873df5079418b9b46574cf3b3a74048eef"} Nov 22 07:10:27 crc kubenswrapper[4853]: I1122 07:10:27.867370 4853 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 07:10:27 crc kubenswrapper[4853]: I1122 07:10:27.867418 4853 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 07:10:27 crc kubenswrapper[4853]: I1122 07:10:27.868521 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:10:27 crc kubenswrapper[4853]: I1122 07:10:27.868568 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:10:27 crc kubenswrapper[4853]: I1122 07:10:27.868590 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:10:27 crc kubenswrapper[4853]: I1122 07:10:27.868829 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:10:27 crc kubenswrapper[4853]: I1122 07:10:27.868883 4853 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:10:27 crc kubenswrapper[4853]: I1122 07:10:27.868908 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:10:27 crc kubenswrapper[4853]: I1122 07:10:27.964012 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 22 07:10:28 crc kubenswrapper[4853]: E1122 07:10:28.297882 4853 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.251:6443: connect: connection refused" interval="7s" Nov 22 07:10:28 crc kubenswrapper[4853]: I1122 07:10:28.680569 4853 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.251:6443: connect: connection refused Nov 22 07:10:28 crc kubenswrapper[4853]: I1122 07:10:28.822065 4853 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 07:10:28 crc kubenswrapper[4853]: I1122 07:10:28.824169 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:10:28 crc kubenswrapper[4853]: I1122 07:10:28.824315 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:10:28 crc kubenswrapper[4853]: I1122 07:10:28.824355 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:10:28 crc kubenswrapper[4853]: I1122 07:10:28.824416 4853 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 22 07:10:28 crc kubenswrapper[4853]: E1122 07:10:28.825451 4853 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.251:6443: connect: connection refused" node="crc" Nov 22 07:10:28 crc kubenswrapper[4853]: I1122 07:10:28.872987 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"ed9e89cbcdfc1ac8430f617ee31a367c03be0a407da8af967930fa821e6a12a5"} Nov 22 07:10:28 crc kubenswrapper[4853]: I1122 07:10:28.873074 4853 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 07:10:28 crc kubenswrapper[4853]: I1122 07:10:28.873163 4853 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 07:10:28 crc kubenswrapper[4853]: I1122 07:10:28.874275 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:10:28 crc kubenswrapper[4853]: I1122 07:10:28.874325 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:10:28 crc kubenswrapper[4853]: I1122 07:10:28.874336 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:10:28 crc kubenswrapper[4853]: I1122 07:10:28.874979 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:10:28 crc kubenswrapper[4853]: I1122 07:10:28.875037 4853 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:10:28 crc kubenswrapper[4853]: I1122 07:10:28.875056 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:10:29 crc kubenswrapper[4853]: I1122 07:10:29.680448 4853 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.251:6443: connect: connection refused Nov 22 07:10:29 crc kubenswrapper[4853]: I1122 07:10:29.879233 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"753693005f8490e9928780fd4f230a1c8aba453e1ccbca71e70299799bfa46ea"} Nov 22 07:10:29 crc kubenswrapper[4853]: I1122 07:10:29.882476 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"7c6c10ea9ed07ab397ea929e01c02644454362fa35faaca4bf427c6799a155c1"} Nov 22 07:10:30 crc kubenswrapper[4853]: W1122 07:10:30.320238 4853 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.251:6443: connect: connection refused Nov 22 07:10:30 crc kubenswrapper[4853]: E1122 07:10:30.320399 4853 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.251:6443: connect: connection refused" logger="UnhandledError" Nov 22 07:10:30 crc kubenswrapper[4853]: I1122 07:10:30.410375 4853 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Nov 22 07:10:30 crc kubenswrapper[4853]: I1122 07:10:30.410514 4853 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Nov 22 07:10:30 crc kubenswrapper[4853]: I1122 07:10:30.680489 4853 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.251:6443: connect: connection refused Nov 22 07:10:30 crc kubenswrapper[4853]: I1122 07:10:30.897474 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"64d65e4d54cc8971b6f6349f7d32243b7faa5bb5974b22ed904a36fa34bc69fe"} Nov 22 07:10:30 crc kubenswrapper[4853]: I1122 07:10:30.897616 4853 kubelet_node_status.go:401] "Setting node annotation to enable volume 
controller attach/detach" Nov 22 07:10:30 crc kubenswrapper[4853]: I1122 07:10:30.899190 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:10:30 crc kubenswrapper[4853]: I1122 07:10:30.899288 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:10:30 crc kubenswrapper[4853]: I1122 07:10:30.899314 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:10:31 crc kubenswrapper[4853]: I1122 07:10:31.172999 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 22 07:10:31 crc kubenswrapper[4853]: E1122 07:10:31.383573 4853 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.251:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.187a4295e80135b2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-11-22 07:10:15.67892421 +0000 UTC m=+14.519546876,LastTimestamp:2025-11-22 07:10:15.67892421 +0000 UTC m=+14.519546876,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Nov 22 07:10:31 crc kubenswrapper[4853]: I1122 07:10:31.679918 4853 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.251:6443: connect: connection refused Nov 22 07:10:31 crc kubenswrapper[4853]: I1122 07:10:31.864175 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 22 07:10:31 crc kubenswrapper[4853]: I1122 07:10:31.864584 4853 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="Get \"https://192.168.126.11:6443/livez\": dial tcp 192.168.126.11:6443: connect: connection refused" start-of-body= Nov 22 07:10:31 crc kubenswrapper[4853]: I1122 07:10:31.864717 4853 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.126.11:6443/livez\": dial tcp 192.168.126.11:6443: connect: connection refused" Nov 22 07:10:31 crc kubenswrapper[4853]: I1122 07:10:31.902988 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Nov 22 07:10:31 crc kubenswrapper[4853]: I1122 07:10:31.905481 4853 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="7c6c10ea9ed07ab397ea929e01c02644454362fa35faaca4bf427c6799a155c1" exitCode=255 Nov 22 07:10:31 crc kubenswrapper[4853]: I1122 07:10:31.905540 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"7c6c10ea9ed07ab397ea929e01c02644454362fa35faaca4bf427c6799a155c1"} Nov 22 07:10:31 crc kubenswrapper[4853]: I1122 07:10:31.905666 4853 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 07:10:31 crc kubenswrapper[4853]: I1122 07:10:31.907010 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:10:31 crc kubenswrapper[4853]: I1122 07:10:31.907045 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:10:31 crc kubenswrapper[4853]: I1122 07:10:31.907057 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:10:31 crc kubenswrapper[4853]: I1122 07:10:31.907712 4853 scope.go:117] "RemoveContainer" containerID="7c6c10ea9ed07ab397ea929e01c02644454362fa35faaca4bf427c6799a155c1" Nov 22 07:10:32 crc kubenswrapper[4853]: I1122 07:10:32.679665 4853 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.251:6443: connect: connection refused Nov 22 07:10:32 crc kubenswrapper[4853]: W1122 07:10:32.775798 4853 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.251:6443: connect: connection refused Nov 22 07:10:32 crc kubenswrapper[4853]: E1122 07:10:32.775909 4853 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.251:6443: connect: connection refused" logger="UnhandledError" Nov 22 07:10:32 crc kubenswrapper[4853]: I1122 07:10:32.914269 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"f0ba60480f038947ca4a2b9f963d38cb004714735d411218eaf6231614c4e21f"} Nov 22 07:10:32 crc kubenswrapper[4853]: I1122 07:10:32.916510 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Nov 22 07:10:32 crc kubenswrapper[4853]: I1122 07:10:32.920109 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"c6dbbb1c22bd116e5913c6bc774b4705f0e4342861ceae30dc0f53c30d6fa39f"} Nov 22 07:10:33 crc kubenswrapper[4853]: I1122 07:10:33.923104 4853 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 07:10:33 crc kubenswrapper[4853]: I1122 07:10:33.923201 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 22 07:10:33 crc kubenswrapper[4853]: I1122 07:10:33.928601 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:10:33 crc kubenswrapper[4853]: I1122 07:10:33.928673 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 22 07:10:33 crc kubenswrapper[4853]: I1122 07:10:33.928696 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:10:34 crc kubenswrapper[4853]: I1122 07:10:34.097153 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 22 07:10:34 crc kubenswrapper[4853]: I1122 07:10:34.097477 4853 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 07:10:34 crc kubenswrapper[4853]: I1122 07:10:34.099916 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:10:34 crc kubenswrapper[4853]: I1122 07:10:34.099992 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:10:34 crc kubenswrapper[4853]: I1122 07:10:34.100013 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:10:34 crc kubenswrapper[4853]: I1122 07:10:34.316199 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 22 07:10:34 crc kubenswrapper[4853]: I1122 07:10:34.635804 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 22 07:10:34 crc kubenswrapper[4853]: I1122 07:10:34.934623 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"9c83df4b798499177cfbd1372a939649b6cebb71b50b695af5ba7815cf450669"} Nov 22 07:10:34 crc kubenswrapper[4853]: I1122 07:10:34.934858 4853 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 07:10:34 crc kubenswrapper[4853]: I1122 07:10:34.934893 4853 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 07:10:34 crc kubenswrapper[4853]: I1122 07:10:34.935140 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 22 07:10:34 crc kubenswrapper[4853]: I1122 07:10:34.937040 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:10:34 crc kubenswrapper[4853]: I1122 07:10:34.937104 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:10:34 crc kubenswrapper[4853]: I1122 07:10:34.937166 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:10:34 crc kubenswrapper[4853]: I1122 07:10:34.937192 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:10:34 crc kubenswrapper[4853]: I1122 07:10:34.937112 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:10:34 crc kubenswrapper[4853]: I1122 07:10:34.937248 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:10:34 crc kubenswrapper[4853]: I1122 07:10:34.942347 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 22 07:10:35 
crc kubenswrapper[4853]: I1122 07:10:35.825855 4853 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 07:10:35 crc kubenswrapper[4853]: I1122 07:10:35.828083 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:10:35 crc kubenswrapper[4853]: I1122 07:10:35.828156 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:10:35 crc kubenswrapper[4853]: I1122 07:10:35.828175 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:10:35 crc kubenswrapper[4853]: I1122 07:10:35.828213 4853 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 22 07:10:35 crc kubenswrapper[4853]: I1122 07:10:35.938962 4853 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 07:10:35 crc kubenswrapper[4853]: I1122 07:10:35.939037 4853 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 07:10:35 crc kubenswrapper[4853]: I1122 07:10:35.938970 4853 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 07:10:35 crc kubenswrapper[4853]: I1122 07:10:35.940816 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:10:35 crc kubenswrapper[4853]: I1122 07:10:35.940913 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:10:35 crc kubenswrapper[4853]: I1122 07:10:35.940931 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:10:35 crc kubenswrapper[4853]: I1122 07:10:35.941037 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:10:35 crc kubenswrapper[4853]: I1122 07:10:35.941085 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:10:35 crc kubenswrapper[4853]: I1122 07:10:35.941105 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:10:35 crc kubenswrapper[4853]: I1122 07:10:35.941235 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:10:35 crc kubenswrapper[4853]: I1122 07:10:35.941261 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:10:35 crc kubenswrapper[4853]: I1122 07:10:35.941279 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:10:36 crc kubenswrapper[4853]: I1122 07:10:36.040683 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Nov 22 07:10:36 crc kubenswrapper[4853]: E1122 07:10:36.118194 4853 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Nov 22 07:10:36 crc kubenswrapper[4853]: I1122 07:10:36.942111 4853 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 07:10:36 crc kubenswrapper[4853]: I1122 07:10:36.942158 4853 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 07:10:36 
crc kubenswrapper[4853]: I1122 07:10:36.946645 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:10:36 crc kubenswrapper[4853]: I1122 07:10:36.946716 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:10:36 crc kubenswrapper[4853]: I1122 07:10:36.946788 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:10:36 crc kubenswrapper[4853]: I1122 07:10:36.946849 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:10:36 crc kubenswrapper[4853]: I1122 07:10:36.946919 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:10:36 crc kubenswrapper[4853]: I1122 07:10:36.946959 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:10:39 crc kubenswrapper[4853]: I1122 07:10:39.567425 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Nov 22 07:10:39 crc kubenswrapper[4853]: I1122 07:10:39.568551 4853 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 07:10:39 crc kubenswrapper[4853]: I1122 07:10:39.570185 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:10:39 crc kubenswrapper[4853]: I1122 07:10:39.570345 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:10:39 crc kubenswrapper[4853]: I1122 07:10:39.570438 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:10:40 crc kubenswrapper[4853]: I1122 07:10:40.410827 4853 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Nov 22 07:10:40 crc kubenswrapper[4853]: I1122 07:10:40.411812 4853 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Nov 22 07:10:42 crc kubenswrapper[4853]: W1122 07:10:42.989692 4853 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout Nov 22 07:10:42 crc kubenswrapper[4853]: I1122 07:10:42.989928 4853 trace.go:236] Trace[1619985482]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (22-Nov-2025 07:10:32.987) (total time: 10002ms): Nov 22 07:10:42 crc kubenswrapper[4853]: Trace[1619985482]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (07:10:42.989) Nov 22 07:10:42 crc kubenswrapper[4853]: Trace[1619985482]: 
[10.002205279s] [10.002205279s] END Nov 22 07:10:42 crc kubenswrapper[4853]: E1122 07:10:42.989967 4853 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Nov 22 07:10:43 crc kubenswrapper[4853]: W1122 07:10:43.199362 4853 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout Nov 22 07:10:43 crc kubenswrapper[4853]: I1122 07:10:43.199509 4853 trace.go:236] Trace[2006772603]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (22-Nov-2025 07:10:33.197) (total time: 10002ms): Nov 22 07:10:43 crc kubenswrapper[4853]: Trace[2006772603]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (07:10:43.199) Nov 22 07:10:43 crc kubenswrapper[4853]: Trace[2006772603]: [10.002024986s] [10.002024986s] END Nov 22 07:10:43 crc kubenswrapper[4853]: E1122 07:10:43.199549 4853 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Nov 22 07:10:43 crc kubenswrapper[4853]: I1122 07:10:43.680645 4853 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Nov 22 07:10:45 crc kubenswrapper[4853]: E1122 07:10:45.298844 4853 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="7s" Nov 22 07:10:45 crc kubenswrapper[4853]: E1122 07:10:45.830128 4853 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="crc" Nov 22 07:10:46 crc kubenswrapper[4853]: E1122 07:10:46.121621 4853 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Nov 22 07:10:46 crc kubenswrapper[4853]: I1122 07:10:46.864195 4853 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="Get \"https://192.168.126.11:6443/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Nov 22 07:10:46 crc kubenswrapper[4853]: I1122 07:10:46.864412 4853 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.126.11:6443/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Nov 22 
07:10:47 crc kubenswrapper[4853]: I1122 07:10:47.040936 4853 patch_prober.go:28] interesting pod/etcd-crc container/etcd namespace/openshift-etcd: Startup probe status=failure output="Get \"https://192.168.126.11:9980/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Nov 22 07:10:47 crc kubenswrapper[4853]: I1122 07:10:47.041129 4853 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-etcd/etcd-crc" podUID="2139d3e2895fc6797b9c76a1b4c9886d" containerName="etcd" probeResult="failure" output="Get \"https://192.168.126.11:9980/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 22 07:10:48 crc kubenswrapper[4853]: I1122 07:10:48.438902 4853 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Nov 22 07:10:48 crc kubenswrapper[4853]: I1122 07:10:48.439030 4853 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Nov 22 07:10:50 crc kubenswrapper[4853]: I1122 07:10:50.410979 4853 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Nov 22 07:10:50 crc kubenswrapper[4853]: I1122 07:10:50.411072 4853 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Nov 22 07:10:50 crc kubenswrapper[4853]: I1122 07:10:50.411139 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 22 07:10:50 crc kubenswrapper[4853]: I1122 07:10:50.411305 4853 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 07:10:50 crc kubenswrapper[4853]: I1122 07:10:50.412991 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:10:50 crc kubenswrapper[4853]: I1122 07:10:50.413070 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:10:50 crc kubenswrapper[4853]: I1122 07:10:50.413095 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:10:50 crc kubenswrapper[4853]: I1122 07:10:50.414045 4853 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="cluster-policy-controller" containerStatusID={"Type":"cri-o","ID":"94704fd95e75549ea29613f00da78d4517cc64896cf1274c9c6f784a61f4ad4a"} 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container cluster-policy-controller failed startup probe, will be restarted" Nov 22 07:10:50 crc kubenswrapper[4853]: I1122 07:10:50.414424 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" containerID="cri-o://94704fd95e75549ea29613f00da78d4517cc64896cf1274c9c6f784a61f4ad4a" gracePeriod=30 Nov 22 07:10:51 crc kubenswrapper[4853]: I1122 07:10:51.872677 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 22 07:10:51 crc kubenswrapper[4853]: I1122 07:10:51.873061 4853 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 07:10:51 crc kubenswrapper[4853]: I1122 07:10:51.875551 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:10:51 crc kubenswrapper[4853]: I1122 07:10:51.875615 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:10:51 crc kubenswrapper[4853]: I1122 07:10:51.875642 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:10:51 crc kubenswrapper[4853]: I1122 07:10:51.881441 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 22 07:10:51 crc kubenswrapper[4853]: I1122 07:10:51.987848 4853 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 22 07:10:51 crc kubenswrapper[4853]: I1122 07:10:51.987939 4853 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 07:10:51 crc kubenswrapper[4853]: I1122 07:10:51.989594 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:10:51 crc kubenswrapper[4853]: I1122 07:10:51.989679 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:10:51 crc kubenswrapper[4853]: I1122 07:10:51.989700 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:10:52 crc kubenswrapper[4853]: I1122 07:10:52.830321 4853 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 07:10:52 crc kubenswrapper[4853]: I1122 07:10:52.832903 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:10:52 crc kubenswrapper[4853]: I1122 07:10:52.832987 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:10:52 crc kubenswrapper[4853]: I1122 07:10:52.833010 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:10:52 crc kubenswrapper[4853]: I1122 07:10:52.833066 4853 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 22 07:10:52 crc kubenswrapper[4853]: E1122 07:10:52.839354 4853 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Nov 22 07:10:53 crc kubenswrapper[4853]: I1122 
07:10:53.422083 4853 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Nov 22 07:10:53 crc kubenswrapper[4853]: I1122 07:10:53.449834 4853 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Nov 22 07:10:53 crc kubenswrapper[4853]: I1122 07:10:53.514618 4853 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": EOF" start-of-body= Nov 22 07:10:53 crc kubenswrapper[4853]: I1122 07:10:53.514685 4853 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": EOF" Nov 22 07:10:53 crc kubenswrapper[4853]: I1122 07:10:53.514638 4853 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": EOF" start-of-body= Nov 22 07:10:53 crc kubenswrapper[4853]: I1122 07:10:53.514827 4853 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": EOF" Nov 22 07:10:53 crc kubenswrapper[4853]: I1122 07:10:53.518138 4853 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:45426->192.168.126.11:17697: read: connection reset by peer" start-of-body= Nov 22 07:10:53 crc kubenswrapper[4853]: I1122 07:10:53.518208 4853 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:45426->192.168.126.11:17697: read: connection reset by peer" Nov 22 07:10:53 crc kubenswrapper[4853]: I1122 07:10:53.998472 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/0.log" Nov 22 07:10:53 crc kubenswrapper[4853]: I1122 07:10:53.999520 4853 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="94704fd95e75549ea29613f00da78d4517cc64896cf1274c9c6f784a61f4ad4a" exitCode=255 Nov 22 07:10:53 crc kubenswrapper[4853]: I1122 07:10:53.999604 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"94704fd95e75549ea29613f00da78d4517cc64896cf1274c9c6f784a61f4ad4a"} Nov 22 07:10:54 crc kubenswrapper[4853]: I1122 07:10:54.002988 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Nov 22 07:10:54 crc kubenswrapper[4853]: I1122 07:10:54.003691 4853 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Nov 22 07:10:54 crc kubenswrapper[4853]: I1122 07:10:54.006032 4853 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="c6dbbb1c22bd116e5913c6bc774b4705f0e4342861ceae30dc0f53c30d6fa39f" exitCode=255 Nov 22 07:10:54 crc kubenswrapper[4853]: I1122 07:10:54.006100 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"c6dbbb1c22bd116e5913c6bc774b4705f0e4342861ceae30dc0f53c30d6fa39f"} Nov 22 07:10:54 crc kubenswrapper[4853]: I1122 07:10:54.006175 4853 scope.go:117] "RemoveContainer" containerID="7c6c10ea9ed07ab397ea929e01c02644454362fa35faaca4bf427c6799a155c1" Nov 22 07:10:54 crc kubenswrapper[4853]: I1122 07:10:54.006374 4853 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 07:10:54 crc kubenswrapper[4853]: I1122 07:10:54.007713 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:10:54 crc kubenswrapper[4853]: I1122 07:10:54.007983 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:10:54 crc kubenswrapper[4853]: I1122 07:10:54.008042 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:10:54 crc kubenswrapper[4853]: I1122 07:10:54.009598 4853 scope.go:117] "RemoveContainer" containerID="c6dbbb1c22bd116e5913c6bc774b4705f0e4342861ceae30dc0f53c30d6fa39f" Nov 22 07:10:54 crc kubenswrapper[4853]: E1122 07:10:54.010128 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Nov 22 07:10:54 crc kubenswrapper[4853]: I1122 07:10:54.496723 4853 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Nov 22 07:10:54 crc kubenswrapper[4853]: I1122 07:10:54.676214 4853 apiserver.go:52] "Watching apiserver" Nov 22 07:10:54 crc kubenswrapper[4853]: I1122 07:10:54.883200 4853 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Nov 22 07:10:54 crc kubenswrapper[4853]: I1122 07:10:54.883835 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"] Nov 22 07:10:54 crc kubenswrapper[4853]: I1122 07:10:54.884481 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 22 07:10:54 crc kubenswrapper[4853]: I1122 07:10:54.884617 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 07:10:54 crc kubenswrapper[4853]: I1122 07:10:54.884906 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 22 07:10:54 crc kubenswrapper[4853]: E1122 07:10:54.884898 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 07:10:54 crc kubenswrapper[4853]: I1122 07:10:54.884959 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 07:10:54 crc kubenswrapper[4853]: E1122 07:10:54.885146 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 07:10:54 crc kubenswrapper[4853]: I1122 07:10:54.885514 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:10:54 crc kubenswrapper[4853]: I1122 07:10:54.885589 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 22 07:10:54 crc kubenswrapper[4853]: E1122 07:10:54.885614 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 07:10:54 crc kubenswrapper[4853]: I1122 07:10:54.888710 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Nov 22 07:10:54 crc kubenswrapper[4853]: I1122 07:10:54.889134 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Nov 22 07:10:54 crc kubenswrapper[4853]: I1122 07:10:54.889199 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Nov 22 07:10:54 crc kubenswrapper[4853]: I1122 07:10:54.889386 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Nov 22 07:10:54 crc kubenswrapper[4853]: I1122 07:10:54.891808 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Nov 22 07:10:54 crc kubenswrapper[4853]: I1122 07:10:54.892168 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Nov 22 07:10:54 crc kubenswrapper[4853]: I1122 07:10:54.892674 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Nov 22 07:10:54 crc kubenswrapper[4853]: I1122 07:10:54.893724 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Nov 22 07:10:54 crc kubenswrapper[4853]: I1122 07:10:54.893895 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Nov 22 07:10:54 crc kubenswrapper[4853]: I1122 07:10:54.936996 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 07:10:54 crc kubenswrapper[4853]: I1122 07:10:54.958353 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 07:10:54 crc kubenswrapper[4853]: I1122 07:10:54.979342 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 07:10:54 crc kubenswrapper[4853]: I1122 07:10:54.983487 4853 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Nov 22 07:10:54 crc kubenswrapper[4853]: I1122 07:10:54.997548 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.011217 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/0.log" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.011719 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"0f0019c0473b7ccc974dc1659a7ae0565b39d69af17094c420625387ecc9d7f9"} Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.013194 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.014720 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.030718 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.032343 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.032391 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.032423 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.032458 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.032483 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.032507 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.032529 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: 
\"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.032552 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.032574 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.032598 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.032628 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.032655 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.032679 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.032702 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.032724 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.032774 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.032797 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.032823 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.032850 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.032874 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.032899 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.032921 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.032943 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.032971 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.032975 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.032995 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). 
InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.032998 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.033052 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.033065 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.033121 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.033108 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.033225 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.033262 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.033295 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.033322 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: 
\"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.033350 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.033382 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.033570 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.033604 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.033637 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.033661 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.033685 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.033712 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.033735 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.033782 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: 
\"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.033807 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.033830 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.033854 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.033877 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.033901 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.033924 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.033951 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.033982 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.034017 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.034045 4853 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.034072 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.034104 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.034133 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.034167 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.034194 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.034222 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.034254 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.034278 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.034303 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.034330 4853 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.034351 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.034376 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.034399 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.034425 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.034450 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.034477 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.034505 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.034534 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.034562 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 
07:10:55.034590 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.034616 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.034641 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.034668 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.034720 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.034770 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.034799 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.034826 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.034853 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.034877 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod 
\"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.034905 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.034930 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.034958 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.034987 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.035044 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.035072 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.035094 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.035119 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.035149 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.035181 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod 
\"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.035206 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.035229 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.035257 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.035288 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.035315 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.035338 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.035359 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.035381 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.035408 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.035434 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod 
\"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.035459 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.035482 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.035503 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.035530 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.035550 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.035576 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.035598 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.035619 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.035640 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.035659 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: 
\"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.035680 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.035703 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.035727 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.035772 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.035797 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.035822 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.035849 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.035876 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.035903 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.035929 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" 
(UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.035955 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.035984 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.036009 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.036032 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.036056 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.036083 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.036105 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.036129 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.036154 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.036179 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.036203 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.036226 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.036250 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.036272 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.036295 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.036326 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.036351 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.036377 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.036403 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.036428 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.036450 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.036475 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.036506 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.036528 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.036553 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.036576 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.036599 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.036622 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.036646 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.036669 4853 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.036696 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.036720 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.036814 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.036841 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.036861 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.036883 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.036903 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.036924 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.036949 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Nov 22 07:10:55 crc 
kubenswrapper[4853]: I1122 07:10:55.036972 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.036992 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.037012 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.037033 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.037053 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.037070 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.037089 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.037108 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.037128 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.037149 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod 
\"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.037169 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.037190 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.037208 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.037225 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.037244 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.037263 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.037282 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.037301 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.037319 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.037337 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod 
\"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.037355 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.037374 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.037396 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.037437 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.037367 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.037470 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.037497 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.037527 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.037554 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.037578 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.037605 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.037631 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.037655 4853 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.037678 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.037700 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.037720 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.037742 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.037798 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.037862 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.037897 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.037934 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.037962 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: 
\"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.037998 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.038031 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.038060 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.038087 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.038113 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.038140 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.038174 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.038200 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " 
pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.038226 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.038254 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.038371 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.038386 4853 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.038400 4853 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.038411 4853 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.038422 4853 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.033218 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.052649 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.052723 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.052867 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.053147 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.053089 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.033370 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.033476 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.033505 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.033677 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.033793 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). 
InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.033814 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.033943 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.034116 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.034191 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.034394 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.034451 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.034508 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.034382 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). 
InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.034606 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.034715 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.034807 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.034844 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.035883 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.053434 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.035791 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.035952 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.035965 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.037093 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.037098 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.037160 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.036903 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.037469 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.037531 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.037963 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.038308 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.038352 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:10:55 crc kubenswrapper[4853]: E1122 07:10:55.038554 4853 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.038629 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.038686 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.038712 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.038773 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.039006 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.039044 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.039236 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.039251 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.039427 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.039673 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.040192 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.040450 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.040477 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.040482 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.040588 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.040679 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.040733 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.040121 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.040887 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.040944 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.040988 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.041250 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.041262 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.041270 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.041544 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.041671 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). 
InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.041834 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.041906 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.041939 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.042063 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.042155 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.042363 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:10:55 crc kubenswrapper[4853]: E1122 07:10:55.042426 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:10:55.54239066 +0000 UTC m=+54.383013456 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.043063 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.043565 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.043934 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.044064 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.044112 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.044223 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.044502 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.044925 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.053894 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.044983 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.045062 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.045103 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.045147 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.045562 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.045694 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). 
InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.046262 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.046287 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.046465 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.046703 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.046928 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.046942 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.047091 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.047397 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.047425 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.047462 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.047711 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.047779 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.047821 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.048875 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.050894 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). 
InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.051057 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.051867 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.053348 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.033377 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.053856 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.053981 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.054039 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.054315 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:10:55 crc kubenswrapper[4853]: E1122 07:10:55.054577 4853 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.054537 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.058114 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.054934 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.055194 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.055241 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.055253 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:10:55 crc kubenswrapper[4853]: E1122 07:10:55.055359 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-22 07:10:55.55532676 +0000 UTC m=+54.395949386 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 22 07:10:55 crc kubenswrapper[4853]: E1122 07:10:55.058492 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-22 07:10:55.558378459 +0000 UTC m=+54.399001235 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.058771 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.059074 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.055419 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.055633 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.055692 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.055863 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.056695 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.056896 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.056945 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.056967 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.057160 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.057679 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.057480 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.057778 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.057808 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.060070 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.060384 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.060645 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.060738 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.061094 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.061111 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.061146 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.061228 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.061431 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.061643 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.062004 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.062473 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.063196 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.063047 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.063352 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.064262 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.064626 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.065156 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.065214 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.067233 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.068028 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.061354 4853 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Nov 22 07:10:55 crc kubenswrapper[4853]: E1122 07:10:55.072636 4853 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 22 07:10:55 crc kubenswrapper[4853]: E1122 07:10:55.072669 4853 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 22 07:10:55 crc kubenswrapper[4853]: E1122 07:10:55.072689 4853 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 22 07:10:55 crc kubenswrapper[4853]: E1122 07:10:55.072793 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-22 07:10:55.572763686 +0000 UTC m=+54.413386462 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.073002 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.073133 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.073990 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.074163 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.074226 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.074435 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.074898 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9003355-0dc6-42ad-950d-a7884e333f4c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f0019c0473b7ccc974dc1659a7ae0565b39d69af17094c420625387ecc9d7f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94704fd95e75549ea29613f00da78d4517cc64896cf1274c9c6f784a61f4ad4a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T07:10:53Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI1122 07:10:26.514729 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI1122 07:10:26.525241 1 observer_polling.go:159] Starting file observer\\\\nI1122 07:10:26.883412 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI1122 07:10:26.897202 1 dynamic_serving_content.go:116] \\\\\\"Loaded a new cert/key pair\\\\\\" name=\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\"\\\\nI1122 07:10:50.420853 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nI1122 07:10:51.525898 1 observer_polling.go:162] Shutting down file observer\\\\nF1122 07:10:53.463148 1 cmd.go:179] failed checking apiserver connectivity: client rate limiter Wait returned an error: context canceled\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:21Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9736c38776f22909b8e8a832683d1b881f020257f7183397286e0dd23c973a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30f542bc05dc23bc8dff5029894d08a6fea8333de31a66432814b7bbc412c4fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5d8d77818159ca98019aac809db4b1b2460723bc56b9b6728eec11faeefd026\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.075455 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.075931 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.077116 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.077287 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:10:55 crc kubenswrapper[4853]: E1122 07:10:55.077472 4853 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 22 07:10:55 crc kubenswrapper[4853]: E1122 07:10:55.077535 4853 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 22 07:10:55 crc kubenswrapper[4853]: E1122 07:10:55.077551 4853 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 22 07:10:55 crc kubenswrapper[4853]: E1122 07:10:55.077592 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-22 07:10:55.577579383 +0000 UTC m=+54.418202009 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.078260 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.078282 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.078293 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.078621 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.079109 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.079254 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.079799 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.079855 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.080113 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.080580 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.080831 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.080594 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). 
InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.080925 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.081117 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.081301 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.081461 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.081638 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.081689 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.081806 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.082319 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). 
InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.082268 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.083576 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.083888 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.084099 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.084325 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.084540 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.084964 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.085084 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.085444 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.085665 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.085777 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.085891 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.087930 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.088029 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.088162 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.091955 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.093053 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.103692 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.104891 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.113426 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.117278 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.118703 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.128651 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.139241 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.139328 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.139388 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.139400 4853 reconciler_common.go:293] "Volume 
detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.139412 4853 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.139422 4853 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.139433 4853 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.139442 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.139453 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.139462 4853 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.139473 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.139484 4853 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.139485 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.139494 4853 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.139537 4853 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.139547 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: 
\"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.139560 4853 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.139570 4853 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.139580 4853 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.139590 4853 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.139600 4853 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.139610 4853 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.139619 4853 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.139629 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.139638 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.139647 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.139658 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.139440 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.139679 4853 
reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.139829 4853 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.139844 4853 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.139862 4853 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.139875 4853 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.139911 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.139932 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.139944 4853 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.139976 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.139972 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.139990 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.140073 4853 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.140092 4853 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.140109 4853 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.140143 4853 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.140157 4853 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.140172 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.140186 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" 
(UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.140198 4853 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.140229 4853 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.140241 4853 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.140252 4853 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.140265 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.140278 4853 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.140310 4853 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.140321 4853 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.140333 4853 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.140346 4853 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.140358 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.140392 4853 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.140404 4853 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.140418 4853 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.140430 4853 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.140463 4853 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.140476 4853 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.140492 4853 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.140505 4853 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.140519 4853 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.140554 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.140566 4853 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.140581 4853 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.140598 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.140640 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.140651 4853 reconciler_common.go:293] "Volume detached for 
volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.140663 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.140674 4853 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.140703 4853 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.140717 4853 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.140730 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.140767 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.140779 4853 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.140790 4853 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.140804 4853 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.140816 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.140849 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.140862 4853 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.140873 4853 
reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.140884 4853 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.140894 4853 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.140940 4853 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.140952 4853 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.140964 4853 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.140977 4853 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.141012 4853 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.141027 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.141040 4853 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.141052 4853 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.141064 4853 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.141096 4853 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.141107 4853 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.141119 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.141130 4853 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.141142 4853 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.141173 4853 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.141184 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.141196 4853 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.141208 4853 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.141220 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.141253 4853 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.141265 4853 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.141278 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.141293 4853 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Nov 22 07:10:55 crc kubenswrapper[4853]: 
I1122 07:10:55.141304 4853 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.141336 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.141349 4853 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.141360 4853 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.141425 4853 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.141438 4853 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.141450 4853 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.141463 4853 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.141475 4853 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.141509 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.141547 4853 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.141559 4853 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.141571 4853 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.141609 4853 reconciler_common.go:293] "Volume 
detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.141621 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.141637 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.141663 4853 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.141679 4853 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.141694 4853 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.141709 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.141790 4853 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.141808 4853 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.141824 4853 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.141842 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.141857 4853 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.141873 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.141888 4853 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.141901 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.141916 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.141930 4853 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.141944 4853 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.141958 4853 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.141973 4853 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.142019 4853 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.142033 4853 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.142047 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.142059 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.142071 4853 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.142083 4853 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.142095 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: 
\"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.142107 4853 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.142119 4853 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.142131 4853 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.142142 4853 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.142154 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.142166 4853 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.142179 4853 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.142191 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.142203 4853 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.142214 4853 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.142227 4853 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.142238 4853 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.142252 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: 
\"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.142266 4853 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.142282 4853 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.142297 4853 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.142313 4853 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.142328 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.142343 4853 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.142359 4853 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.142407 4853 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.142423 4853 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.142434 4853 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.142445 4853 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.142458 4853 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.142469 4853 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") 
on node \"crc\" DevicePath \"\"" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.142484 4853 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.142497 4853 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.142510 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.142523 4853 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.142538 4853 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.142554 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.142568 4853 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.142583 4853 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.142600 4853 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.142624 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.142635 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.142648 4853 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.142659 4853 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: 
\"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.142692 4853 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.142703 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.142714 4853 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.142725 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.142737 4853 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.151940 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.204026 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.221382 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 22 07:10:55 crc kubenswrapper[4853]: W1122 07:10:55.225130 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-8efbe2eedcac7bf4f2fa8fb1b21f5d77f705614083cdfb2caca8fd00ef239e59 WatchSource:0}: Error finding container 8efbe2eedcac7bf4f2fa8fb1b21f5d77f705614083cdfb2caca8fd00ef239e59: Status 404 returned error can't find the container with id 8efbe2eedcac7bf4f2fa8fb1b21f5d77f705614083cdfb2caca8fd00ef239e59 Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.232145 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 22 07:10:55 crc kubenswrapper[4853]: W1122 07:10:55.238807 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-540d2d1607c652b8fdb691b6102d440c4750b0a273d5ce8b0b5f27ed52c005dd WatchSource:0}: Error finding container 540d2d1607c652b8fdb691b6102d440c4750b0a273d5ce8b0b5f27ed52c005dd: Status 404 returned error can't find the container with id 540d2d1607c652b8fdb691b6102d440c4750b0a273d5ce8b0b5f27ed52c005dd Nov 22 07:10:55 crc kubenswrapper[4853]: W1122 07:10:55.258982 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-cc6fe3538a30f132c199cfdebd5eca4a6eb697c0640d0e57434b92e9ccc0ee21 WatchSource:0}: Error finding container cc6fe3538a30f132c199cfdebd5eca4a6eb697c0640d0e57434b92e9ccc0ee21: Status 404 returned error can't find the container with id cc6fe3538a30f132c199cfdebd5eca4a6eb697c0640d0e57434b92e9ccc0ee21 Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.546550 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:10:55 crc kubenswrapper[4853]: E1122 07:10:55.546771 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:10:56.54671344 +0000 UTC m=+55.387336076 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.648088 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.648176 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.648222 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.648255 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 07:10:55 crc kubenswrapper[4853]: E1122 07:10:55.648435 4853 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 22 07:10:55 crc kubenswrapper[4853]: E1122 07:10:55.648518 4853 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 22 07:10:55 crc kubenswrapper[4853]: E1122 07:10:55.648542 4853 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 22 07:10:55 crc kubenswrapper[4853]: E1122 07:10:55.648556 4853 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 22 07:10:55 crc kubenswrapper[4853]: E1122 07:10:55.648455 4853 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 22 07:10:55 crc 
kubenswrapper[4853]: E1122 07:10:55.648458 4853 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 22 07:10:55 crc kubenswrapper[4853]: E1122 07:10:55.648806 4853 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 22 07:10:55 crc kubenswrapper[4853]: E1122 07:10:55.648619 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-22 07:10:56.64857311 +0000 UTC m=+55.489195926 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 22 07:10:55 crc kubenswrapper[4853]: E1122 07:10:55.648864 4853 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 22 07:10:55 crc kubenswrapper[4853]: E1122 07:10:55.648886 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-22 07:10:56.648862038 +0000 UTC m=+55.489484864 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 22 07:10:55 crc kubenswrapper[4853]: E1122 07:10:55.648930 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-22 07:10:56.648901799 +0000 UTC m=+55.489524665 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 22 07:10:55 crc kubenswrapper[4853]: E1122 07:10:55.649036 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-22 07:10:56.648987801 +0000 UTC m=+55.489610537 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.757288 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.758190 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.759309 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.760200 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.760932 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.761078 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.761992 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.762902 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.763712 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.764639 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.765417 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.766213 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.770709 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.772353 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.775209 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.777188 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" 
path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.780008 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.781992 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.783131 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.785961 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.787582 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.788940 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.791914 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.792190 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.793267 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.796529 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.797746 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.800250 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.801857 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.804155 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.805624 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.806678 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.808968 4853 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.809329 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Nov 22 07:10:55 crc 
kubenswrapper[4853]: I1122 07:10:55.811949 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.813140 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.813715 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.815993 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.817571 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.818322 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.819865 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.820122 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.820857 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.822131 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.823059 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.824483 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.825404 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.826621 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.827608 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.828865 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.830011 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.831392 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 
07:10:55.832168 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.832853 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.834183 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.835049 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.835739 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.836353 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.848174 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.860866 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9003355-0dc6-42ad-950d-a7884e333f4c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f0019c0473b7ccc974dc1659a7ae0565b39d69af17094c420625387ecc9d7f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94704fd95e75549ea29613f00da78d4517cc64896cf1274c9c6f784a61f4ad4a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T07:10:53Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI1122 07:10:26.514729 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI1122 07:10:26.525241 1 observer_polling.go:159] Starting file observer\\\\nI1122 07:10:26.883412 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI1122 07:10:26.897202 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI1122 07:10:50.420853 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nI1122 07:10:51.525898 1 observer_polling.go:162] Shutting down file observer\\\\nF1122 07:10:53.463148 1 cmd.go:179] failed checking apiserver connectivity: client rate limiter Wait returned an error: context 
canceled\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:21Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9736c38776f22909b8e8a832683d1b881f020257f7183397286e0dd23c973a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30f542bc05dc23bc8dff5029894d08a6fea8333de31a66432814b7bbc412c4fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5d8d77818159ca98019aac809db4b1b2460723bc56b9b6728eec11faeefd026\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 07:10:55 crc kubenswrapper[4853]: I1122 07:10:55.877886 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 07:10:56 crc kubenswrapper[4853]: I1122 07:10:56.022065 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"cc6fe3538a30f132c199cfdebd5eca4a6eb697c0640d0e57434b92e9ccc0ee21"} Nov 22 07:10:56 crc kubenswrapper[4853]: I1122 07:10:56.023447 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"540d2d1607c652b8fdb691b6102d440c4750b0a273d5ce8b0b5f27ed52c005dd"} Nov 22 07:10:56 crc kubenswrapper[4853]: I1122 07:10:56.024699 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"8efbe2eedcac7bf4f2fa8fb1b21f5d77f705614083cdfb2caca8fd00ef239e59"} Nov 22 07:10:56 crc kubenswrapper[4853]: E1122 07:10:56.032968 4853 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-crc\" already 
exists" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 22 07:10:56 crc kubenswrapper[4853]: I1122 07:10:56.055016 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Nov 22 07:10:56 crc kubenswrapper[4853]: I1122 07:10:56.068951 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9003355-0dc6-42ad-950d-a7884e333f4c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f0019c0473b7ccc974dc1659a7ae0565b39d69af17094c420625387ecc9d7f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94704fd95e75549ea29613f00da78d4517cc64896cf1274c9c6f784a61f4ad4a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T07:10:53Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI1122 07:10:26.514729 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI1122 07:10:26.525241 1 observer_polling.go:159] Starting file observer\\\\nI1122 07:10:26.883412 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI1122 07:10:26.897202 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI1122 07:10:50.420853 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nI1122 07:10:51.525898 1 observer_polling.go:162] Shutting down file observer\\\\nF1122 07:10:53.463148 1 cmd.go:179] failed checking apiserver connectivity: client rate limiter Wait returned an error: context canceled\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:21Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9736c38776f22909b8e8a832683d1b881f020257f7183397286e0dd23c973a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30f542bc05dc23bc8dff5029894d08a6fea8333de31a66432814b7bbc412c4fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5d8d77818159ca98019aac809db4b1b2460723bc56b9b6728eec11faeefd026\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Nov 22 07:10:56 crc kubenswrapper[4853]: I1122 07:10:56.071528 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc"
Nov 22 07:10:56 crc kubenswrapper[4853]: I1122 07:10:56.074827 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"]
Nov 22 07:10:56 crc kubenswrapper[4853]: I1122 07:10:56.090769 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted.
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 07:10:56 crc kubenswrapper[4853]: I1122 07:10:56.103897 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 07:10:56 crc kubenswrapper[4853]: I1122 07:10:56.119773 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 07:10:56 crc kubenswrapper[4853]: I1122 07:10:56.132842 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 07:10:56 crc kubenswrapper[4853]: I1122 07:10:56.150056 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 07:10:56 crc kubenswrapper[4853]: I1122 07:10:56.162024 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 07:10:56 crc kubenswrapper[4853]: I1122 07:10:56.176688 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9003355-0dc6-42ad-950d-a7884e333f4c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f0019c0473b7ccc974dc1659a7ae0565b39d69af17094c420625387ecc9d7f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94704fd95e75549ea29613f00da78d4517cc64896cf1274c9c6f784a61f4ad4a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T07:10:53Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI1122 07:10:26.514729 1 leaderelection.go:121] The leader election gives 
4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI1122 07:10:26.525241 1 observer_polling.go:159] Starting file observer\\\\nI1122 07:10:26.883412 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI1122 07:10:26.897202 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI1122 07:10:50.420853 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nI1122 07:10:51.525898 1 observer_polling.go:162] Shutting down file observer\\\\nF1122 07:10:53.463148 1 cmd.go:179] failed checking apiserver connectivity: client rate limiter Wait returned an error: context canceled\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:21Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9736c38776f22909b8e8a832683d1b881f020257f7183397286e0dd23c973a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30f542bc05dc23bc8dff5029894d08a6fea8333de31a66432814b7bbc412c4fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5d8d77818159ca98019aac809db4b1b2460723bc56b9b6728eec11faeefd026\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Nov 22 07:10:56 crc kubenswrapper[4853]: I1122 07:10:56.189263 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted.
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 07:10:56 crc kubenswrapper[4853]: I1122 07:10:56.204988 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 07:10:56 crc kubenswrapper[4853]: I1122 07:10:56.219831 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 07:10:56 crc kubenswrapper[4853]: I1122 07:10:56.231365 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Nov 22 07:10:56 crc kubenswrapper[4853]: I1122 07:10:56.255293 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8dcea4b-e887-437a-972d-e6e10d41f725\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://753693005f8490e9928780fd4f230a1c8aba453e1ccbca71e70299799bfa46ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64d65e4d54cc8971b6f6349f7d32243b7faa5bb5974b22ed904a36fa34bc69fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0ba60480f038947ca4a2b9f963d38cb004714735d411218eaf6231614c4e21f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c83df4b798499177cfbd1372a939649b6cebb71b50b695af5ba7815cf450669\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed9e89cbcdfc1ac8430f617ee31a367c03be0a407da8af967930fa821e6a12a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://742d02f24a7096b76f82453ab15e531d4a378b4bbf0bb36eb1a1cb4ae1873165\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://742d02f24a7096b76f82453ab15e531d4a378b4bbf0bb36eb1a1cb4ae1873165\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2253bab71d535b61f7489679566abc1acf210e0d5f69773fedbb02befab17c2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2253bab71d535b61f7489679566abc1acf210e0d5f69773fedbb02befab17c2d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:21Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8fa71a988b649937f137ffb7533456570bf4fcd199134424fc7bff2b62335f1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fa71a988b649937f137ffb7533456570bf4fcd199134424fc7bff2b62335f1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:16Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Nov 22 07:10:56 crc kubenswrapper[4853]: I1122 07:10:56.270507 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"message\\\":\\\"containers with unready
status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 07:10:56 crc kubenswrapper[4853]: I1122 07:10:56.283631 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 07:10:56 crc kubenswrapper[4853]: I1122 07:10:56.558670 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:10:56 crc kubenswrapper[4853]: E1122 07:10:56.558854 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:10:58.558820499 +0000 UTC m=+57.399443135 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:10:56 crc kubenswrapper[4853]: I1122 07:10:56.659388 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:10:56 crc kubenswrapper[4853]: I1122 07:10:56.659454 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 07:10:56 crc kubenswrapper[4853]: I1122 07:10:56.659483 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 07:10:56 crc kubenswrapper[4853]: I1122 07:10:56.659513 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:10:56 crc kubenswrapper[4853]: E1122 07:10:56.659663 4853 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 22 07:10:56 crc kubenswrapper[4853]: E1122 07:10:56.659721 4853 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 22 07:10:56 crc kubenswrapper[4853]: E1122 07:10:56.659740 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-22 07:10:58.659719185 +0000 UTC m=+57.500341811 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 22 07:10:56 crc kubenswrapper[4853]: E1122 07:10:56.659733 4853 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 22 07:10:56 crc kubenswrapper[4853]: E1122 07:10:56.660003 4853 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 22 07:10:56 crc kubenswrapper[4853]: E1122 07:10:56.660050 4853 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 22 07:10:56 crc kubenswrapper[4853]: E1122 07:10:56.660054 4853 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 22 07:10:56 crc kubenswrapper[4853]: E1122 07:10:56.660080 4853 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 22 07:10:56 crc kubenswrapper[4853]: E1122 07:10:56.660128 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-22 07:10:58.660104165 +0000 UTC m=+57.500726821 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 22 07:10:56 crc kubenswrapper[4853]: E1122 07:10:56.660188 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-22 07:10:58.660162156 +0000 UTC m=+57.500784822 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 22 07:10:56 crc kubenswrapper[4853]: E1122 07:10:56.660333 4853 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 22 07:10:56 crc kubenswrapper[4853]: E1122 07:10:56.660442 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-22 07:10:58.660420543 +0000 UTC m=+57.501043179 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 22 07:10:56 crc kubenswrapper[4853]: I1122 07:10:56.747370 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 07:10:56 crc kubenswrapper[4853]: I1122 07:10:56.747405 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:10:56 crc kubenswrapper[4853]: E1122 07:10:56.747540 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 07:10:56 crc kubenswrapper[4853]: I1122 07:10:56.747576 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 07:10:56 crc kubenswrapper[4853]: E1122 07:10:56.747704 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 07:10:56 crc kubenswrapper[4853]: E1122 07:10:56.747866 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 07:10:57 crc kubenswrapper[4853]: I1122 07:10:57.410264 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 22 07:10:57 crc kubenswrapper[4853]: I1122 07:10:57.416638 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 22 07:10:57 crc kubenswrapper[4853]: I1122 07:10:57.436986 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 07:10:57 crc kubenswrapper[4853]: I1122 07:10:57.450472 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 07:10:57 crc kubenswrapper[4853]: I1122 07:10:57.466976 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 07:10:57 crc kubenswrapper[4853]: I1122 07:10:57.490194 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8dcea4b-e887-437a-972d-e6e10d41f725\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://753693005f8490e9928780fd4f230a1c8aba453e1ccbca71e70299799bfa46ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64d65e4d54cc8971b6f6349f7d32243b7faa5bb5974b22ed904a36fa34bc69fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runn
ing\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0ba60480f038947ca4a2b9f963d38cb004714735d411218eaf6231614c4e21f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c83df4b798499177cfbd1372a939649b6cebb71b50b695af5ba7815cf450669\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed9e89cbcdfc1ac8430f617ee31a367c03be0a407da8af967930fa821e6a12a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://742d02f24a7096b76f82453ab15e531d4a378b4bbf0bb36eb1a1cb4ae1873165\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://742d02f24a7096b76f82453ab
15e531d4a378b4bbf0bb36eb1a1cb4ae1873165\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2253bab71d535b61f7489679566abc1acf210e0d5f69773fedbb02befab17c2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2253bab71d535b61f7489679566abc1acf210e0d5f69773fedbb02befab17c2d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:21Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8fa71a988b649937f137ffb7533456570bf4fcd199134424fc7bff2b62335f1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fa71a988b649937f137ffb7533456570bf4fcd199134424fc7bff2b62335f1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:16Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 07:10:57 crc kubenswrapper[4853]: I1122 07:10:57.508685 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"message\\\":\\\"containers with unready 
status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 07:10:57 crc kubenswrapper[4853]: I1122 07:10:57.527498 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 07:10:57 crc kubenswrapper[4853]: I1122 07:10:57.544695 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9003355-0dc6-42ad-950d-a7884e333f4c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f0019c0473b7ccc974dc1659a7ae0565b39d69af17094c420625387ecc9d7f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94704fd95e75549ea29613f00da78d4517cc64896cf1274c9c6f784a61f4ad4a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T07:10:53Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI1122 07:10:26.514729 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI1122 07:10:26.525241 1 observer_polling.go:159] Starting file observer\\\\nI1122 07:10:26.883412 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI1122 07:10:26.897202 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI1122 07:10:50.420853 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nI1122 07:10:51.525898 1 observer_polling.go:162] Shutting down file observer\\\\nF1122 07:10:53.463148 1 cmd.go:179] failed checking apiserver connectivity: client rate limiter Wait returned an error: context 
canceled\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:21Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9736c38776f22909b8e8a832683d1b881f020257f7183397286e0dd23c973a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30f542bc05dc23bc8dff5029894d08a6fea8333de31a66432814b7bbc412c4fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5d8d77818159ca98019aac809db4b1b2460723bc56b9b6728eec11faeefd026\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 07:10:57 crc kubenswrapper[4853]: I1122 07:10:57.564086 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 07:10:57 crc kubenswrapper[4853]: I1122 07:10:57.964273 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 22 07:10:58 crc kubenswrapper[4853]: I1122 07:10:58.032374 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"7a2fb699e2e8a016fa383c1d4ed2d4f0d69c26b79fb2f0dd735a84de2fd5a23d"} Nov 22 07:10:58 crc kubenswrapper[4853]: I1122 07:10:58.034009 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"a9c07602dc6c3b45809dd76a7079002e54e0bae24f7e3bc3470b463389303e74"} Nov 22 07:10:58 crc kubenswrapper[4853]: I1122 07:10:58.067145 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 07:10:58 crc kubenswrapper[4853]: I1122 07:10:58.089437 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 07:10:58 crc kubenswrapper[4853]: I1122 07:10:58.095514 4853 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Nov 22 07:10:58 crc kubenswrapper[4853]: I1122 07:10:58.104614 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 07:10:58 crc kubenswrapper[4853]: I1122 07:10:58.115986 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 07:10:58 crc kubenswrapper[4853]: I1122 07:10:58.142866 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8dcea4b-e887-437a-972d-e6e10d41f725\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://753693005f8490e9928780fd4f230a1c8aba453e1ccbca71e70299799bfa46ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64d65e4d54cc8971b6f6349f7d32243b7faa5bb5974b22ed904a36fa34bc69fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runn
ing\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0ba60480f038947ca4a2b9f963d38cb004714735d411218eaf6231614c4e21f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c83df4b798499177cfbd1372a939649b6cebb71b50b695af5ba7815cf450669\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed9e89cbcdfc1ac8430f617ee31a367c03be0a407da8af967930fa821e6a12a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://742d02f24a7096b76f82453ab15e531d4a378b4bbf0bb36eb1a1cb4ae1873165\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://742d02f24a7096b76f82453ab
15e531d4a378b4bbf0bb36eb1a1cb4ae1873165\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2253bab71d535b61f7489679566abc1acf210e0d5f69773fedbb02befab17c2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2253bab71d535b61f7489679566abc1acf210e0d5f69773fedbb02befab17c2d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:21Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8fa71a988b649937f137ffb7533456570bf4fcd199134424fc7bff2b62335f1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fa71a988b649937f137ffb7533456570bf4fcd199134424fc7bff2b62335f1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:16Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 07:10:58 crc kubenswrapper[4853]: I1122 07:10:58.157875 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9c07602dc6c3b45809dd76a7079002e54e0bae24f7e3bc3470b463389303e74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 07:10:58 crc kubenswrapper[4853]: I1122 07:10:58.170922 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 07:10:58 crc kubenswrapper[4853]: I1122 07:10:58.183435 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9003355-0dc6-42ad-950d-a7884e333f4c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f0019c0473b7ccc974dc1659a7ae0565b39d69af17094c420625387ecc9d7f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94704fd95e75549ea29613f00da78d4517cc64896cf1274c9c6f784a61f4ad4a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T07:10:53Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI1122 07:10:26.514729 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI1122 07:10:26.525241 1 observer_polling.go:159] Starting file observer\\\\nI1122 07:10:26.883412 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI1122 07:10:26.897202 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI1122 07:10:50.420853 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nI1122 07:10:51.525898 1 observer_polling.go:162] Shutting down file observer\\\\nF1122 07:10:53.463148 1 cmd.go:179] failed checking apiserver connectivity: client rate limiter Wait returned an error: context canceled\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:21Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9736c38776f22909b8e8a832683d1b881f020257f7183397286e0dd23c973a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30f542bc05dc23bc8dff5029894d08a6fea8333de31a66432814b7bbc412c4fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5d8d77818159ca98019aac809db4b1b2460723bc56b9b6728eec11faeefd026\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manage
r-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 07:10:58 crc kubenswrapper[4853]: I1122 07:10:58.578551 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:10:58 crc kubenswrapper[4853]: E1122 07:10:58.578926 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:11:02.57888246 +0000 UTC m=+61.419505126 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:10:58 crc kubenswrapper[4853]: I1122 07:10:58.680337 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 07:10:58 crc kubenswrapper[4853]: I1122 07:10:58.680414 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 07:10:58 crc kubenswrapper[4853]: I1122 07:10:58.680461 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:10:58 crc kubenswrapper[4853]: I1122 
07:10:58.680506 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:10:58 crc kubenswrapper[4853]: E1122 07:10:58.680627 4853 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 22 07:10:58 crc kubenswrapper[4853]: E1122 07:10:58.680674 4853 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 22 07:10:58 crc kubenswrapper[4853]: E1122 07:10:58.680734 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-22 07:11:02.680705989 +0000 UTC m=+61.521328635 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 22 07:10:58 crc kubenswrapper[4853]: E1122 07:10:58.680808 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-22 07:11:02.680787682 +0000 UTC m=+61.521410308 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 22 07:10:58 crc kubenswrapper[4853]: E1122 07:10:58.680676 4853 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 22 07:10:58 crc kubenswrapper[4853]: E1122 07:10:58.680862 4853 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 22 07:10:58 crc kubenswrapper[4853]: E1122 07:10:58.680939 4853 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 22 07:10:58 crc kubenswrapper[4853]: E1122 07:10:58.680964 4853 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 22 07:10:58 crc kubenswrapper[4853]: E1122 07:10:58.680955 4853 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 22 07:10:58 crc kubenswrapper[4853]: E1122 07:10:58.681020 4853 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 22 07:10:58 crc kubenswrapper[4853]: E1122 07:10:58.681085 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-22 07:11:02.681048478 +0000 UTC m=+61.521671144 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 22 07:10:58 crc kubenswrapper[4853]: E1122 07:10:58.681152 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-22 07:11:02.68111367 +0000 UTC m=+61.521736336 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 22 07:10:58 crc kubenswrapper[4853]: I1122 07:10:58.747437 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:10:58 crc kubenswrapper[4853]: I1122 07:10:58.747507 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 07:10:58 crc kubenswrapper[4853]: I1122 07:10:58.747450 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 07:10:58 crc kubenswrapper[4853]: E1122 07:10:58.747651 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 07:10:58 crc kubenswrapper[4853]: E1122 07:10:58.747783 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 07:10:58 crc kubenswrapper[4853]: E1122 07:10:58.747855 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 07:10:59 crc kubenswrapper[4853]: I1122 07:10:59.041671 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"b27716f9e6c912f968c4a340199cdc2dde0262d86b4832f84cae8daefa831909"} Nov 22 07:10:59 crc kubenswrapper[4853]: I1122 07:10:59.056984 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9003355-0dc6-42ad-950d-a7884e333f4c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f0019c0473b7ccc974dc1659a7ae0565b39d69af17094c420625387ecc9d7f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94704fd95e75549ea29613f00da78d4517cc64896cf1274c9c6f784a61f4ad4a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T07:10:53Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI1122 07:10:26.514729 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI1122 07:10:26.525241 1 observer_polling.go:159] Starting file observer\\\\nI1122 07:10:26.883412 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI1122 07:10:26.897202 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI1122 07:10:50.420853 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nI1122 07:10:51.525898 1 observer_polling.go:162] Shutting down file observer\\\\nF1122 07:10:53.463148 1 cmd.go:179] failed checking apiserver connectivity: client rate limiter Wait returned an error: context canceled\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:21Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9736c38776f22909b8e8a832683d1b881f020257f7183397286e0dd23c973a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30f542bc05dc23bc8dff5029894d08a6fea8333de31a66432814b7bbc412c4fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5d8d77818159ca98019aac809db4b1b2460723bc56b9b6728eec11faeefd026\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manage
r-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:10:59Z is after 2025-08-24T17:21:41Z" Nov 22 07:10:59 crc kubenswrapper[4853]: I1122 07:10:59.070580 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:10:59Z is after 2025-08-24T17:21:41Z" Nov 22 07:10:59 crc kubenswrapper[4853]: I1122 07:10:59.084021 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:10:59Z is after 2025-08-24T17:21:41Z" Nov 22 07:10:59 crc kubenswrapper[4853]: I1122 07:10:59.105615 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8dcea4b-e887-437a-972d-e6e10d41f725\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://753693005f8490e9928780fd4f230a1c8aba453e1ccbca71e70299799bfa46ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64d65e4d54cc8971b6f6349f7d32243b7faa5bb5974b22ed904a36fa34bc69fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-m
etrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0ba60480f038947ca4a2b9f963d38cb004714735d411218eaf6231614c4e21f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c83df4b798499177cfbd1372a939649b6cebb71b50b695af5ba7815cf450669\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed9e89cbcdfc1ac8430f617ee31a367c03be0a407da8af967930fa821e6a12a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://742d02f24a7096b76f82453ab15e531d4a378b4bbf0bb36eb1a1cb4ae1873165\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":f
alse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://742d02f24a7096b76f82453ab15e531d4a378b4bbf0bb36eb1a1cb4ae1873165\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2253bab71d535b61f7489679566abc1acf210e0d5f69773fedbb02befab17c2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2253bab71d535b61f7489679566abc1acf210e0d5f69773fedbb02befab17c2d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:21Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8fa71a988b649937f137ffb7533456570bf4fcd199134424fc7bff2b62335f1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fa71a988b649937f137ffb7533456570bf4fcd199134424fc7bff2b62335f1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:16Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:10:59Z is after 2025-08-24T17:21:41Z" Nov 22 07:10:59 crc kubenswrapper[4853]: I1122 07:10:59.120976 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9c07602dc6c3b45809dd76a7079002e54e0bae24f7e3bc3470b463389303e74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:10:59Z is after 2025-08-24T17:21:41Z" Nov 22 07:10:59 crc kubenswrapper[4853]: I1122 07:10:59.136142 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b27716f9e6c912f968c4a340199cdc2dde0262d86b4832f84cae8daefa831909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a2fb699e2e8a016fa383c1d4ed2d4f0d69c26b79fb2f0dd735a84de2fd5a23d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:10:59Z is after 2025-08-24T17:21:41Z" Nov 22 07:10:59 crc kubenswrapper[4853]: I1122 07:10:59.151144 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:10:59Z is after 2025-08-24T17:21:41Z" Nov 22 07:10:59 crc kubenswrapper[4853]: I1122 07:10:59.166298 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:10:59Z is after 2025-08-24T17:21:41Z" Nov 22 07:10:59 crc kubenswrapper[4853]: I1122 07:10:59.731820 4853 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 22 07:10:59 crc kubenswrapper[4853]: I1122 07:10:59.752305 4853 scope.go:117] "RemoveContainer" containerID="c6dbbb1c22bd116e5913c6bc774b4705f0e4342861ceae30dc0f53c30d6fa39f" Nov 22 07:10:59 crc kubenswrapper[4853]: E1122 07:10:59.752509 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Nov 22 07:10:59 crc kubenswrapper[4853]: I1122 07:10:59.759941 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Nov 22 07:10:59 crc kubenswrapper[4853]: I1122 07:10:59.767896 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9003355-0dc6-42ad-950d-a7884e333f4c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f0019c0473b7ccc974dc1659a7ae0565b39d69af17094c420625387ecc9d7f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94704fd95e75549ea29613f00da78d4517cc64896cf1274c9c6f784a61f4ad4a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T07:10:53Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI1122 07:10:26.514729 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI1122 07:10:26.525241 1 observer_polling.go:159] Starting file observer\\\\nI1122 07:10:26.883412 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI1122 07:10:26.897202 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI1122 07:10:50.420853 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nI1122 07:10:51.525898 1 observer_polling.go:162] Shutting down file observer\\\\nF1122 07:10:53.463148 1 cmd.go:179] failed checking apiserver connectivity: client rate limiter Wait returned an error: context canceled\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:21Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9736c38776f22909b8e8a832683d1b881f020257f7183397286e0dd23c973a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30f542bc05dc23bc8dff5029894d08a6fea8333de31a66432814b7bbc412c4fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5d8d77818159ca98019aac809db4b1b2460723bc56b9b6728eec11faeefd026\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manage
r-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:10:59Z is after 2025-08-24T17:21:41Z" Nov 22 07:10:59 crc kubenswrapper[4853]: I1122 07:10:59.785721 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:10:59Z is after 2025-08-24T17:21:41Z" Nov 22 07:10:59 crc kubenswrapper[4853]: I1122 07:10:59.812653 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8dcea4b-e887-437a-972d-e6e10d41f725\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://753693005f8490e9928780fd4f230a1c8aba453e1ccbca71e70299799bfa46ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64d65e4d54cc8971b6f6349f7d32243b7faa5bb5974b22ed904a36fa34bc69fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:30Z\\\"}},\\\"volumeM
ounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0ba60480f038947ca4a2b9f963d38cb004714735d411218eaf6231614c4e21f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c83df4b798499177cfbd1372a939649b6cebb71b50b695af5ba7815cf450669\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed9e89cbcdfc1ac8430f617ee31a367c03be0a407da8af967930fa821e6a12a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://742d02f24a7096b76f82453ab15e531d4a378b4bbf0bb36eb1a1cb4ae1873165\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://742d02f24a7096b76f82453ab15e531d4a378b4bbf0bb36eb1a1cb4ae1873165\\\",\\\"exitCode\\\":0,\\\"fi
nishedAt\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2253bab71d535b61f7489679566abc1acf210e0d5f69773fedbb02befab17c2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2253bab71d535b61f7489679566abc1acf210e0d5f69773fedbb02befab17c2d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:21Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8fa71a988b649937f137ffb7533456570bf4fcd199134424fc7bff2b62335f1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fa71a988b649937f137ffb7533456570bf4fcd199134424fc7bff2b62335f1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:16Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:10:59Z is after 2025-08-24T17:21:41Z" Nov 22 07:10:59 crc kubenswrapper[4853]: I1122 07:10:59.831991 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9c07602dc6c3b45809dd76a7079002e54e0bae24f7e3bc3470b463389303e74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:10:59Z is after 2025-08-24T17:21:41Z" Nov 22 07:10:59 crc kubenswrapper[4853]: I1122 07:10:59.839971 4853 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 07:10:59 crc kubenswrapper[4853]: I1122 07:10:59.842335 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:10:59 crc kubenswrapper[4853]: I1122 07:10:59.842372 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:10:59 crc kubenswrapper[4853]: I1122 07:10:59.842384 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:10:59 crc kubenswrapper[4853]: I1122 07:10:59.842464 4853 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 22 07:10:59 crc kubenswrapper[4853]: I1122 07:10:59.856920 4853 kubelet_node_status.go:115] "Node was previously registered" node="crc" Nov 22 07:10:59 crc kubenswrapper[4853]: I1122 07:10:59.857073 4853 kubelet_node_status.go:79] "Successfully registered node" node="crc" Nov 22 07:10:59 crc kubenswrapper[4853]: I1122 07:10:59.858162 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b27716f9e6c912f968c4a340199cdc2dde0262d86b4832f84cae8daefa831909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a2fb699e2e8a016fa383c1d4ed2d4f0d69c26b79fb2f0dd735a84de2fd5a23d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:10:59Z is after 2025-08-24T17:21:41Z" Nov 22 07:10:59 crc kubenswrapper[4853]: I1122 07:10:59.858304 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:10:59 crc kubenswrapper[4853]: I1122 07:10:59.858334 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:10:59 crc kubenswrapper[4853]: I1122 07:10:59.858343 4853 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Nov 22 07:10:59 crc kubenswrapper[4853]: I1122 07:10:59.858361 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:10:59 crc kubenswrapper[4853]: I1122 07:10:59.858373 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:10:59Z","lastTransitionTime":"2025-11-22T07:10:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:10:59 crc kubenswrapper[4853]: I1122 07:10:59.878626 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:10:59Z is after 2025-08-24T17:21:41Z" Nov 22 07:10:59 crc kubenswrapper[4853]: I1122 07:10:59.894924 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:10:59Z is after 2025-08-24T17:21:41Z" Nov 22 07:10:59 crc kubenswrapper[4853]: E1122 07:10:59.898781 4853 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:10:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:10:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:10:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:10:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d74141ce-7696-4d74-b510-3a9c2c375ecd\\\",\\\"systemUUID\\\":\\\"362c9708-b683-4c02-a83b-39323a200ef4\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:10:59Z is after 
2025-08-24T17:21:41Z" Nov 22 07:10:59 crc kubenswrapper[4853]: I1122 07:10:59.905213 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:10:59 crc kubenswrapper[4853]: I1122 07:10:59.905258 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:10:59 crc kubenswrapper[4853]: I1122 07:10:59.905268 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:10:59 crc kubenswrapper[4853]: I1122 07:10:59.905286 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:10:59 crc kubenswrapper[4853]: I1122 07:10:59.905301 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:10:59Z","lastTransitionTime":"2025-11-22T07:10:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:10:59 crc kubenswrapper[4853]: I1122 07:10:59.918135 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:10:59Z is after 2025-08-24T17:21:41Z" Nov 22 07:10:59 crc kubenswrapper[4853]: E1122 07:10:59.923248 4853 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:10:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:10:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:10:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:10:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d74141ce-7696-4d74-b510-3a9c2c375ecd\\\",\\\"systemUUID\\\":\\\"362c9708-b683-4c02-a83b-39323a200ef4\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:10:59Z is after 
2025-08-24T17:21:41Z" Nov 22 07:10:59 crc kubenswrapper[4853]: I1122 07:10:59.928181 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:10:59 crc kubenswrapper[4853]: I1122 07:10:59.928237 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:10:59 crc kubenswrapper[4853]: I1122 07:10:59.928249 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:10:59 crc kubenswrapper[4853]: I1122 07:10:59.928272 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:10:59 crc kubenswrapper[4853]: I1122 07:10:59.928286 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:10:59Z","lastTransitionTime":"2025-11-22T07:10:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:10:59 crc kubenswrapper[4853]: E1122 07:10:59.941765 4853 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:10:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:10:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:10:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:10:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d74141ce-7696-4d74-b510-3a9c2c375ecd\\\",\\\"systemUUID\\\":\\\"362c9708-b683-4c02-a83b-39323a200ef4\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:10:59Z is after 
2025-08-24T17:21:41Z"
Nov 22 07:10:59 crc kubenswrapper[4853]: I1122 07:10:59.945373 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 22 07:10:59 crc kubenswrapper[4853]: I1122 07:10:59.945436 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 22 07:10:59 crc kubenswrapper[4853]: I1122 07:10:59.945453 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 22 07:10:59 crc kubenswrapper[4853]: I1122 07:10:59.945473 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 22 07:10:59 crc kubenswrapper[4853]: I1122 07:10:59.945488 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:10:59Z","lastTransitionTime":"2025-11-22T07:10:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 22 07:10:59 crc kubenswrapper[4853]: E1122 07:10:59.958469 4853 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status [node status patch payload omitted: byte-for-byte identical to the previous attempt, including the full image list, nodeInfo, and runtimeHandlers sections] for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:10:59Z is after 2025-08-24T17:21:41Z"
Nov 22 07:10:59 crc kubenswrapper[4853]: I1122 07:10:59.962980 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 22 07:10:59 crc kubenswrapper[4853]: I1122 07:10:59.963031 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 22 07:10:59 crc kubenswrapper[4853]: I1122 07:10:59.963045 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 22 07:10:59 crc kubenswrapper[4853]: I1122 07:10:59.963069 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 22 07:10:59 crc kubenswrapper[4853]: I1122 07:10:59.963080 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:10:59Z","lastTransitionTime":"2025-11-22T07:10:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 22 07:10:59 crc kubenswrapper[4853]: E1122 07:10:59.979296 4853 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status [node status patch payload omitted: byte-for-byte identical to the previous two attempts] for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:10:59Z is after 2025-08-24T17:21:41Z"
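Every retry above fails identically: the API server cannot admit the node-status patch because the network-node-identity webhook at 127.0.0.1:9743 presents a serving certificate that expired on 2025-08-24, while the node clock reads 2025-11-22. The expiry can be confirmed independently of the kubelet; the sketch below is a minimal Go probe (standard library only, address taken from the log) that prints the certificate's validity window.

package main

import (
	"crypto/tls"
	"fmt"
	"log"
	"time"
)

func main() {
	// Address of the network-node-identity webhook, taken from the log above.
	// InsecureSkipVerify is deliberate: the goal is to read the expired
	// certificate, not to trust it.
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{InsecureSkipVerify: true})
	if err != nil {
		log.Fatalf("dial webhook: %v", err)
	}
	defer conn.Close()

	cert := conn.ConnectionState().PeerCertificates[0]
	fmt.Printf("subject:    %s\n", cert.Subject)
	fmt.Printf("not before: %s\n", cert.NotBefore.Format(time.RFC3339))
	fmt.Printf("not after:  %s\n", cert.NotAfter.Format(time.RFC3339))
	if time.Now().After(cert.NotAfter) {
		fmt.Println("certificate is expired, matching the x509 error recorded by the kubelet")
	}
}

Run on the node itself (the webhook listens on loopback); the printed "not after" timestamp should match the 2025-08-24T17:21:41Z seen in every webhook error in this log.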
Nov 22 07:10:59 crc kubenswrapper[4853]: E1122 07:10:59.979427 4853 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count"
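The kubelet does not retry the status patch indefinitely: it makes a fixed number of attempts per sync (nodeStatusUpdateRetry, five in the upstream kubelet) before logging the give-up message above and waiting for the next node-status interval. The following Go sketch is illustrative of that bounded-retry shape, not the kubelet's actual code; the constant and the simulated error mirror what this log shows.

package main

import (
	"errors"
	"fmt"
)

// nodeStatusUpdateRetry mirrors the kubelet's bounded retry of the node
// status patch (five attempts upstream); the value here is illustrative.
const nodeStatusUpdateRetry = 5

func updateNodeStatus(patch func() error) error {
	for i := 0; i < nodeStatusUpdateRetry; i++ {
		if err := patch(); err != nil {
			fmt.Printf("Error updating node status, will retry: %v\n", err)
			continue
		}
		return nil
	}
	return errors.New("update node status exceeds retry count")
}

func main() {
	// Simulate the failing admission webhook seen in the log: every attempt
	// returns the same TLS verification error, so all retries are spent.
	err := updateNodeStatus(func() error {
		return errors.New("x509: certificate has expired or is not yet valid")
	})
	fmt.Println(err)
}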
Nov 22 07:10:59 crc kubenswrapper[4853]: I1122 07:10:59.981723 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 22 07:10:59 crc kubenswrapper[4853]: I1122 07:10:59.981791 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 22 07:10:59 crc kubenswrapper[4853]: I1122 07:10:59.981806 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 22 07:10:59 crc kubenswrapper[4853]: I1122 07:10:59.981823 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 22 07:10:59 crc kubenswrapper[4853]: I1122 07:10:59.981835 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:10:59Z","lastTransitionTime":"2025-11-22T07:10:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 22 07:11:00 crc kubenswrapper[4853]: I1122 07:11:00.046632 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"347fc1ee4909acd862f23a9f25b4ac7ab7271ff03f0f726479d9b711820970c4"}
Nov 22 07:11:00 crc kubenswrapper[4853]: I1122 07:11:00.047571 4853 scope.go:117] "RemoveContainer" containerID="c6dbbb1c22bd116e5913c6bc774b4705f0e4342861ceae30dc0f53c30d6fa39f"
Nov 22 07:11:00 crc kubenswrapper[4853]: E1122 07:11:00.047794 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792"
Nov 22 07:11:00 crc kubenswrapper[4853]: I1122 07:11:00.069725 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:00Z is after 2025-08-24T17:21:41Z"
Nov 22 07:11:00 crc kubenswrapper[4853]: I1122 07:11:00.084472 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 22 07:11:00 crc kubenswrapper[4853]: I1122 07:11:00.084505 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 22 07:11:00 crc kubenswrapper[4853]: I1122 07:11:00.084514 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 22 07:11:00 crc kubenswrapper[4853]: I1122 07:11:00.084530 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 22 07:11:00 crc kubenswrapper[4853]: I1122 07:11:00.084540 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:00Z","lastTransitionTime":"2025-11-22T07:11:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
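The Ready=False condition repeated throughout this log is a separate problem from the webhook failure: the kubelet reports NetworkReady=false until a CNI network config appears in /etc/kubernetes/cni/net.d/, which on CRC is written by the network plugin (OVN-Kubernetes) once it is running. The sketch below performs the equivalent check by hand; the directory comes from the log message, while the glob patterns are an assumption about which config extensions count.

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	// Directory named in the KubeletNotReady message above; the network
	// plugin drops its config here once it has started on the node.
	const netDir = "/etc/kubernetes/cni/net.d"

	var confs []string
	for _, pat := range []string{"*.conf", "*.conflist", "*.json"} {
		matches, err := filepath.Glob(filepath.Join(netDir, pat))
		if err != nil {
			continue // Glob only errors on a malformed pattern
		}
		confs = append(confs, matches...)
	}

	if len(confs) == 0 {
		fmt.Println("no CNI configuration file found; kubelet will keep reporting NetworkReady=false")
		os.Exit(1)
	}
	for _, c := range confs {
		fmt.Println("found CNI config:", c)
	}
}

Once the network operator writes a config file here, the NodeNotReady events in this log stop and the Ready condition flips back to True.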
Nov 22 07:11:00 crc kubenswrapper[4853]: I1122 07:11:00.092335 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"19e6b5d3-a090-47b9-bddc-dd2aaccd4ff4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96b61660fcd78f2f81eab2c1a0cb32018a5b32d49581167e33e7dca8ba5ddf2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a0d61ce2e58ce8d9abdd1fb9688722d8e25f8354553aef76b8cd4b3897135a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21be0b3346d0dd20c21a867b8fbd30e6291d09bd1745febbc2face7610f6d831\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6dbbb1c22bd116e5913c6bc774b4705f0e4342861ceae30dc0f53c30d6fa39f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6dbbb1c22bd116e5913c6bc774b4705f0e4342861ceae30dc0f53c30d6fa39f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T07:10:53Z\\\",\\\"message\\\":\\\" envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1122 07:10:53.494379 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1122 07:10:53.495005 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1122 07:10:53.495036 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1122 07:10:53.495110 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1122 07:10:53.495131 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1122 07:10:53.496216 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1763795444\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1763795444\\\\\\\\\\\\\\\" (2025-11-22 06:10:44 +0000 UTC to 2026-11-22 06:10:44 +0000 UTC (now=2025-11-22 07:10:53.49617265 +0000 UTC))\\\\\\\"\\\\nI1122 07:10:53.496261 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1122 07:10:53.496336 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1122 07:10:53.497036 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1122 07:10:53.497137 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1122 07:10:53.498721 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI1122 07:10:53.500291 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF1122 07:10:53.501305 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:32Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c1ee716027cb8b663ee9faec3faed873df5079418b9b46574cf3b3a74048eef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13ca1fba1325e0ae8556def3ba0c98aa0aff6e3b9ed2aab66a5c950d2bace44e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13ca1fba1325e0ae8556def3ba0c98aa0aff6e3b9ed2aab66a5c950d2bace44e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:00Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:00 crc kubenswrapper[4853]: I1122 07:11:00.108943 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9003355-0dc6-42ad-950d-a7884e333f4c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f0019c0473b7ccc974dc1659a7ae0565b39d69af17094c420625387ecc9d7f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94704fd95e75549ea29613f00da78d4517cc64896cf1274c9c6f784a61f4ad4a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T07:10:53Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI1122 07:10:26.514729 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI1122 07:10:26.525241 1 observer_polling.go:159] Starting file observer\\\\nI1122 07:10:26.883412 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI1122 07:10:26.897202 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI1122 07:10:50.420853 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nI1122 07:10:51.525898 1 observer_polling.go:162] Shutting down file observer\\\\nF1122 07:10:53.463148 1 cmd.go:179] failed checking apiserver connectivity: client rate limiter Wait returned an error: context canceled\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:21Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9736c38776f22909b8e8a832683d1b881f020257f7183397286e0dd23c973a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30f542bc05dc23bc8dff5029894d08a6fea8333de31a66432814b7bbc412c4fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5d8d77818159ca98019aac809db4b1b2460723bc56b9b6728eec11faeefd026\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manage
r-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:00Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:00 crc kubenswrapper[4853]: I1122 07:11:00.121989 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b27716f9e6c912f968c4a340199cdc2dde0262d86b4832f84cae8daefa831909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a2fb699e2e8a016fa383c1d4ed2d4f0d69c26b79fb2f0dd735a84de2fd5a23d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"na
me\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:00Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:00 crc kubenswrapper[4853]: I1122 07:11:00.137365 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:00Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:00 crc kubenswrapper[4853]: I1122 07:11:00.160112 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:00Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:00 crc kubenswrapper[4853]: I1122 07:11:00.182929 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://347fc1ee4909acd862f23a9f25b4ac7ab7271ff03f0f726479d9b711820970c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:00Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:00 crc kubenswrapper[4853]: I1122 07:11:00.186882 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:00 
crc kubenswrapper[4853]: I1122 07:11:00.186915 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:00 crc kubenswrapper[4853]: I1122 07:11:00.186924 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:00 crc kubenswrapper[4853]: I1122 07:11:00.186937 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:00 crc kubenswrapper[4853]: I1122 07:11:00.186947 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:00Z","lastTransitionTime":"2025-11-22T07:11:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:00 crc kubenswrapper[4853]: I1122 07:11:00.229594 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8dcea4b-e887-437a-972d-e6e10d41f725\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://753693005f8490e9928780fd4f230a1c8aba453e1ccbca71e70299799bfa46ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64d65e4d54cc8971b6f6349f7d32243b7faa5bb5974b22ed904a36fa34bc69fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\
\"startedAt\\\":\\\"2025-11-22T07:10:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0ba60480f038947ca4a2b9f963d38cb004714735d411218eaf6231614c4e21f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c83df4b798499177cfbd1372a939649b6cebb71b50b695af5ba7815cf450669\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed9e89cbcdfc1ac8430f617ee31a367c03be0a407da8af967930fa821e6a12a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://742d02f24a7096b76f82453ab15e531d4a378b4bbf0bb36eb1a1cb4ae1873165\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://742d02f24a7096b76f82453ab15e531d4a37
8b4bbf0bb36eb1a1cb4ae1873165\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2253bab71d535b61f7489679566abc1acf210e0d5f69773fedbb02befab17c2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2253bab71d535b61f7489679566abc1acf210e0d5f69773fedbb02befab17c2d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:21Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8fa71a988b649937f137ffb7533456570bf4fcd199134424fc7bff2b62335f1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fa71a988b649937f137ffb7533456570bf4fcd199134424fc7bff2b62335f1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:16Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:00Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:00 crc kubenswrapper[4853]: I1122 07:11:00.245420 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9c07602dc6c3b45809dd76a7079002e54e0bae24f7e3bc3470b463389303e74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:00Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:00 crc kubenswrapper[4853]: I1122 07:11:00.289365 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:00 crc kubenswrapper[4853]: I1122 07:11:00.289402 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:00 crc kubenswrapper[4853]: I1122 07:11:00.289412 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:00 crc kubenswrapper[4853]: I1122 07:11:00.289427 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:00 crc kubenswrapper[4853]: I1122 07:11:00.289436 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:00Z","lastTransitionTime":"2025-11-22T07:11:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:00 crc kubenswrapper[4853]: I1122 07:11:00.395212 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:00 crc kubenswrapper[4853]: I1122 07:11:00.395262 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:00 crc kubenswrapper[4853]: I1122 07:11:00.395274 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:00 crc kubenswrapper[4853]: I1122 07:11:00.395290 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:00 crc kubenswrapper[4853]: I1122 07:11:00.395303 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:00Z","lastTransitionTime":"2025-11-22T07:11:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:00 crc kubenswrapper[4853]: I1122 07:11:00.498839 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:00 crc kubenswrapper[4853]: I1122 07:11:00.498895 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:00 crc kubenswrapper[4853]: I1122 07:11:00.498909 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:00 crc kubenswrapper[4853]: I1122 07:11:00.498928 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:00 crc kubenswrapper[4853]: I1122 07:11:00.498945 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:00Z","lastTransitionTime":"2025-11-22T07:11:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:00 crc kubenswrapper[4853]: I1122 07:11:00.569002 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-mlpz8"] Nov 22 07:11:00 crc kubenswrapper[4853]: I1122 07:11:00.569356 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-rvgxj"] Nov 22 07:11:00 crc kubenswrapper[4853]: I1122 07:11:00.569549 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-rvgxj" Nov 22 07:11:00 crc kubenswrapper[4853]: I1122 07:11:00.569597 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-mlpz8" Nov 22 07:11:00 crc kubenswrapper[4853]: I1122 07:11:00.573387 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Nov 22 07:11:00 crc kubenswrapper[4853]: I1122 07:11:00.573491 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Nov 22 07:11:00 crc kubenswrapper[4853]: I1122 07:11:00.576894 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Nov 22 07:11:00 crc kubenswrapper[4853]: I1122 07:11:00.577568 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Nov 22 07:11:00 crc kubenswrapper[4853]: I1122 07:11:00.577765 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Nov 22 07:11:00 crc kubenswrapper[4853]: I1122 07:11:00.580891 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Nov 22 07:11:00 crc kubenswrapper[4853]: I1122 07:11:00.580969 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Nov 22 07:11:00 crc kubenswrapper[4853]: I1122 07:11:00.585395 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Nov 22 07:11:00 crc kubenswrapper[4853]: I1122 07:11:00.594231 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rvgxj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbbe3472-17cc-48dd-8e46-393b00149429\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ct6dm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rvgxj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:00Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:00 crc kubenswrapper[4853]: I1122 07:11:00.601083 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:00 crc kubenswrapper[4853]: I1122 07:11:00.601112 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:00 crc kubenswrapper[4853]: I1122 07:11:00.601121 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:00 crc kubenswrapper[4853]: I1122 07:11:00.601136 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:00 crc kubenswrapper[4853]: I1122 07:11:00.601147 4853 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:00Z","lastTransitionTime":"2025-11-22T07:11:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:00 crc kubenswrapper[4853]: I1122 07:11:00.616152 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8dcea4b-e887-437a-972d-e6e10d41f725\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://753693005f8490e9928780fd4f230a1c8aba453e1ccbca71e70299799bfa46ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64d65e4d54cc8971b6f6349f7d32243b7faa5bb5974b22ed904a36fa34bc69fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0ba60480f038947ca4a2b9f963d38cb004714735d411218eaf6231614c4e21f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/open
shift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c83df4b798499177cfbd1372a939649b6cebb71b50b695af5ba7815cf450669\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed9e89cbcdfc1ac8430f617ee31a367c03be0a407da8af967930fa821e6a12a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://742d02f24a7096b76f82453ab15e531d4a378b4bbf0bb36eb1a1cb4ae1873165\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://742d02f24a7096b76f82453ab15e531d4a378b4bbf0bb36eb1a1cb4ae1873165\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2253bab71d535b61f7489679566abc1acf210e0d5f69773fedbb02befab17c2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd
6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2253bab71d535b61f7489679566abc1acf210e0d5f69773fedbb02befab17c2d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:21Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8fa71a988b649937f137ffb7533456570bf4fcd199134424fc7bff2b62335f1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fa71a988b649937f137ffb7533456570bf4fcd199134424fc7bff2b62335f1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:16Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:00Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:00 crc kubenswrapper[4853]: I1122 07:11:00.633876 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9c07602dc6c3b45809dd76a7079002e54e0bae24f7e3bc3470b463389303e74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:00Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:00 crc kubenswrapper[4853]: I1122 07:11:00.648673 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b27716f9e6c912f968c4a340199cdc2dde0262d86b4832f84cae8daefa831909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a2fb699e2e8a016fa383c1d4ed2d4f0d69c26b79fb2f0dd735a84de2fd5a23d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:00Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:00 crc kubenswrapper[4853]: I1122 07:11:00.680793 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:00Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:00 crc kubenswrapper[4853]: I1122 07:11:00.698418 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/dbbe3472-17cc-48dd-8e46-393b00149429-host-var-lib-cni-bin\") pod \"multus-rvgxj\" (UID: \"dbbe3472-17cc-48dd-8e46-393b00149429\") " pod="openshift-multus/multus-rvgxj" Nov 22 07:11:00 crc kubenswrapper[4853]: I1122 07:11:00.698468 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/dbbe3472-17cc-48dd-8e46-393b00149429-etc-kubernetes\") pod \"multus-rvgxj\" (UID: \"dbbe3472-17cc-48dd-8e46-393b00149429\") " pod="openshift-multus/multus-rvgxj" Nov 22 07:11:00 crc kubenswrapper[4853]: I1122 07:11:00.698499 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cvk87\" (UniqueName: \"kubernetes.io/projected/c09200ba-013f-45e3-b581-8523557344b8-kube-api-access-cvk87\") pod \"node-resolver-mlpz8\" (UID: \"c09200ba-013f-45e3-b581-8523557344b8\") " pod="openshift-dns/node-resolver-mlpz8" Nov 22 07:11:00 crc kubenswrapper[4853]: I1122 07:11:00.698522 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/dbbe3472-17cc-48dd-8e46-393b00149429-host-run-k8s-cni-cncf-io\") pod \"multus-rvgxj\" (UID: \"dbbe3472-17cc-48dd-8e46-393b00149429\") " pod="openshift-multus/multus-rvgxj" Nov 22 07:11:00 crc kubenswrapper[4853]: I1122 07:11:00.698544 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/dbbe3472-17cc-48dd-8e46-393b00149429-host-run-netns\") pod \"multus-rvgxj\" (UID: \"dbbe3472-17cc-48dd-8e46-393b00149429\") " pod="openshift-multus/multus-rvgxj" Nov 22 07:11:00 crc kubenswrapper[4853]: I1122 07:11:00.698563 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/c09200ba-013f-45e3-b581-8523557344b8-hosts-file\") pod \"node-resolver-mlpz8\" (UID: \"c09200ba-013f-45e3-b581-8523557344b8\") " pod="openshift-dns/node-resolver-mlpz8" Nov 22 07:11:00 crc kubenswrapper[4853]: I1122 07:11:00.698584 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/dbbe3472-17cc-48dd-8e46-393b00149429-multus-socket-dir-parent\") pod \"multus-rvgxj\" (UID: \"dbbe3472-17cc-48dd-8e46-393b00149429\") " pod="openshift-multus/multus-rvgxj" Nov 22 07:11:00 crc kubenswrapper[4853]: I1122 07:11:00.698665 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/dbbe3472-17cc-48dd-8e46-393b00149429-multus-daemon-config\") pod \"multus-rvgxj\" (UID: \"dbbe3472-17cc-48dd-8e46-393b00149429\") " pod="openshift-multus/multus-rvgxj" Nov 22 07:11:00 crc kubenswrapper[4853]: I1122 07:11:00.698718 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/dbbe3472-17cc-48dd-8e46-393b00149429-host-var-lib-cni-multus\") pod \"multus-rvgxj\" (UID: \"dbbe3472-17cc-48dd-8e46-393b00149429\") " pod="openshift-multus/multus-rvgxj" Nov 22 07:11:00 crc kubenswrapper[4853]: I1122 07:11:00.698808 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/dbbe3472-17cc-48dd-8e46-393b00149429-cni-binary-copy\") pod \"multus-rvgxj\" (UID: \"dbbe3472-17cc-48dd-8e46-393b00149429\") " pod="openshift-multus/multus-rvgxj" Nov 22 07:11:00 crc kubenswrapper[4853]: I1122 07:11:00.698839 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/dbbe3472-17cc-48dd-8e46-393b00149429-multus-conf-dir\") pod \"multus-rvgxj\" (UID: \"dbbe3472-17cc-48dd-8e46-393b00149429\") " pod="openshift-multus/multus-rvgxj" Nov 22 07:11:00 crc kubenswrapper[4853]: I1122 07:11:00.698862 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/dbbe3472-17cc-48dd-8e46-393b00149429-host-var-lib-kubelet\") pod \"multus-rvgxj\" (UID: \"dbbe3472-17cc-48dd-8e46-393b00149429\") " pod="openshift-multus/multus-rvgxj" Nov 22 07:11:00 crc kubenswrapper[4853]: I1122 07:11:00.698885 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" 
(UniqueName: \"kubernetes.io/host-path/dbbe3472-17cc-48dd-8e46-393b00149429-hostroot\") pod \"multus-rvgxj\" (UID: \"dbbe3472-17cc-48dd-8e46-393b00149429\") " pod="openshift-multus/multus-rvgxj" Nov 22 07:11:00 crc kubenswrapper[4853]: I1122 07:11:00.698922 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ct6dm\" (UniqueName: \"kubernetes.io/projected/dbbe3472-17cc-48dd-8e46-393b00149429-kube-api-access-ct6dm\") pod \"multus-rvgxj\" (UID: \"dbbe3472-17cc-48dd-8e46-393b00149429\") " pod="openshift-multus/multus-rvgxj" Nov 22 07:11:00 crc kubenswrapper[4853]: I1122 07:11:00.698967 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/dbbe3472-17cc-48dd-8e46-393b00149429-system-cni-dir\") pod \"multus-rvgxj\" (UID: \"dbbe3472-17cc-48dd-8e46-393b00149429\") " pod="openshift-multus/multus-rvgxj" Nov 22 07:11:00 crc kubenswrapper[4853]: I1122 07:11:00.698987 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/dbbe3472-17cc-48dd-8e46-393b00149429-os-release\") pod \"multus-rvgxj\" (UID: \"dbbe3472-17cc-48dd-8e46-393b00149429\") " pod="openshift-multus/multus-rvgxj" Nov 22 07:11:00 crc kubenswrapper[4853]: I1122 07:11:00.699082 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/dbbe3472-17cc-48dd-8e46-393b00149429-cnibin\") pod \"multus-rvgxj\" (UID: \"dbbe3472-17cc-48dd-8e46-393b00149429\") " pod="openshift-multus/multus-rvgxj" Nov 22 07:11:00 crc kubenswrapper[4853]: I1122 07:11:00.699183 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/dbbe3472-17cc-48dd-8e46-393b00149429-multus-cni-dir\") pod \"multus-rvgxj\" (UID: \"dbbe3472-17cc-48dd-8e46-393b00149429\") " pod="openshift-multus/multus-rvgxj" Nov 22 07:11:00 crc kubenswrapper[4853]: I1122 07:11:00.699206 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/dbbe3472-17cc-48dd-8e46-393b00149429-host-run-multus-certs\") pod \"multus-rvgxj\" (UID: \"dbbe3472-17cc-48dd-8e46-393b00149429\") " pod="openshift-multus/multus-rvgxj" Nov 22 07:11:00 crc kubenswrapper[4853]: I1122 07:11:00.704666 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:00 crc kubenswrapper[4853]: I1122 07:11:00.704695 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:00 crc kubenswrapper[4853]: I1122 07:11:00.704704 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:00 crc kubenswrapper[4853]: I1122 07:11:00.704718 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:00 crc kubenswrapper[4853]: I1122 07:11:00.704728 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:00Z","lastTransitionTime":"2025-11-22T07:11:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:00 crc kubenswrapper[4853]: I1122 07:11:00.738734 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:00Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:00 crc kubenswrapper[4853]: I1122 07:11:00.747335 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 07:11:00 crc kubenswrapper[4853]: E1122 07:11:00.747465 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 07:11:00 crc kubenswrapper[4853]: I1122 07:11:00.747904 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:11:00 crc kubenswrapper[4853]: E1122 07:11:00.747963 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 07:11:00 crc kubenswrapper[4853]: I1122 07:11:00.748003 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 07:11:00 crc kubenswrapper[4853]: E1122 07:11:00.748047 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 07:11:00 crc kubenswrapper[4853]: I1122 07:11:00.776832 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://347fc1ee4909acd862f23a9f25b4ac7ab7271ff03f0f726479d9b711820970c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:00Z is after 
2025-08-24T17:21:41Z" Nov 22 07:11:00 crc kubenswrapper[4853]: I1122 07:11:00.800196 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/dbbe3472-17cc-48dd-8e46-393b00149429-host-var-lib-cni-multus\") pod \"multus-rvgxj\" (UID: \"dbbe3472-17cc-48dd-8e46-393b00149429\") " pod="openshift-multus/multus-rvgxj" Nov 22 07:11:00 crc kubenswrapper[4853]: I1122 07:11:00.800234 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/dbbe3472-17cc-48dd-8e46-393b00149429-multus-daemon-config\") pod \"multus-rvgxj\" (UID: \"dbbe3472-17cc-48dd-8e46-393b00149429\") " pod="openshift-multus/multus-rvgxj" Nov 22 07:11:00 crc kubenswrapper[4853]: I1122 07:11:00.800261 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/dbbe3472-17cc-48dd-8e46-393b00149429-cni-binary-copy\") pod \"multus-rvgxj\" (UID: \"dbbe3472-17cc-48dd-8e46-393b00149429\") " pod="openshift-multus/multus-rvgxj" Nov 22 07:11:00 crc kubenswrapper[4853]: I1122 07:11:00.800278 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/dbbe3472-17cc-48dd-8e46-393b00149429-multus-conf-dir\") pod \"multus-rvgxj\" (UID: \"dbbe3472-17cc-48dd-8e46-393b00149429\") " pod="openshift-multus/multus-rvgxj" Nov 22 07:11:00 crc kubenswrapper[4853]: I1122 07:11:00.800294 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/dbbe3472-17cc-48dd-8e46-393b00149429-host-var-lib-kubelet\") pod \"multus-rvgxj\" (UID: \"dbbe3472-17cc-48dd-8e46-393b00149429\") " pod="openshift-multus/multus-rvgxj" Nov 22 07:11:00 crc kubenswrapper[4853]: I1122 07:11:00.800310 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/dbbe3472-17cc-48dd-8e46-393b00149429-hostroot\") pod \"multus-rvgxj\" (UID: \"dbbe3472-17cc-48dd-8e46-393b00149429\") " pod="openshift-multus/multus-rvgxj" Nov 22 07:11:00 crc kubenswrapper[4853]: I1122 07:11:00.800348 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/dbbe3472-17cc-48dd-8e46-393b00149429-system-cni-dir\") pod \"multus-rvgxj\" (UID: \"dbbe3472-17cc-48dd-8e46-393b00149429\") " pod="openshift-multus/multus-rvgxj" Nov 22 07:11:00 crc kubenswrapper[4853]: I1122 07:11:00.800353 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/dbbe3472-17cc-48dd-8e46-393b00149429-host-var-lib-cni-multus\") pod \"multus-rvgxj\" (UID: \"dbbe3472-17cc-48dd-8e46-393b00149429\") " pod="openshift-multus/multus-rvgxj" Nov 22 07:11:00 crc kubenswrapper[4853]: I1122 07:11:00.800364 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/dbbe3472-17cc-48dd-8e46-393b00149429-os-release\") pod \"multus-rvgxj\" (UID: \"dbbe3472-17cc-48dd-8e46-393b00149429\") " pod="openshift-multus/multus-rvgxj" Nov 22 07:11:00 crc kubenswrapper[4853]: I1122 07:11:00.800418 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: 
\"kubernetes.io/host-path/dbbe3472-17cc-48dd-8e46-393b00149429-multus-conf-dir\") pod \"multus-rvgxj\" (UID: \"dbbe3472-17cc-48dd-8e46-393b00149429\") " pod="openshift-multus/multus-rvgxj" Nov 22 07:11:00 crc kubenswrapper[4853]: I1122 07:11:00.800456 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/dbbe3472-17cc-48dd-8e46-393b00149429-host-var-lib-kubelet\") pod \"multus-rvgxj\" (UID: \"dbbe3472-17cc-48dd-8e46-393b00149429\") " pod="openshift-multus/multus-rvgxj" Nov 22 07:11:00 crc kubenswrapper[4853]: I1122 07:11:00.800452 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ct6dm\" (UniqueName: \"kubernetes.io/projected/dbbe3472-17cc-48dd-8e46-393b00149429-kube-api-access-ct6dm\") pod \"multus-rvgxj\" (UID: \"dbbe3472-17cc-48dd-8e46-393b00149429\") " pod="openshift-multus/multus-rvgxj" Nov 22 07:11:00 crc kubenswrapper[4853]: I1122 07:11:00.800496 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/dbbe3472-17cc-48dd-8e46-393b00149429-cnibin\") pod \"multus-rvgxj\" (UID: \"dbbe3472-17cc-48dd-8e46-393b00149429\") " pod="openshift-multus/multus-rvgxj" Nov 22 07:11:00 crc kubenswrapper[4853]: I1122 07:11:00.800524 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/dbbe3472-17cc-48dd-8e46-393b00149429-host-run-multus-certs\") pod \"multus-rvgxj\" (UID: \"dbbe3472-17cc-48dd-8e46-393b00149429\") " pod="openshift-multus/multus-rvgxj" Nov 22 07:11:00 crc kubenswrapper[4853]: I1122 07:11:00.800544 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/dbbe3472-17cc-48dd-8e46-393b00149429-multus-cni-dir\") pod \"multus-rvgxj\" (UID: \"dbbe3472-17cc-48dd-8e46-393b00149429\") " pod="openshift-multus/multus-rvgxj" Nov 22 07:11:00 crc kubenswrapper[4853]: I1122 07:11:00.800560 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/dbbe3472-17cc-48dd-8e46-393b00149429-etc-kubernetes\") pod \"multus-rvgxj\" (UID: \"dbbe3472-17cc-48dd-8e46-393b00149429\") " pod="openshift-multus/multus-rvgxj" Nov 22 07:11:00 crc kubenswrapper[4853]: I1122 07:11:00.800576 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cvk87\" (UniqueName: \"kubernetes.io/projected/c09200ba-013f-45e3-b581-8523557344b8-kube-api-access-cvk87\") pod \"node-resolver-mlpz8\" (UID: \"c09200ba-013f-45e3-b581-8523557344b8\") " pod="openshift-dns/node-resolver-mlpz8" Nov 22 07:11:00 crc kubenswrapper[4853]: I1122 07:11:00.800591 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/dbbe3472-17cc-48dd-8e46-393b00149429-host-run-k8s-cni-cncf-io\") pod \"multus-rvgxj\" (UID: \"dbbe3472-17cc-48dd-8e46-393b00149429\") " pod="openshift-multus/multus-rvgxj" Nov 22 07:11:00 crc kubenswrapper[4853]: I1122 07:11:00.800606 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/dbbe3472-17cc-48dd-8e46-393b00149429-host-run-netns\") pod \"multus-rvgxj\" (UID: \"dbbe3472-17cc-48dd-8e46-393b00149429\") " pod="openshift-multus/multus-rvgxj" Nov 22 07:11:00 crc 
kubenswrapper[4853]: I1122 07:11:00.800620 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/dbbe3472-17cc-48dd-8e46-393b00149429-host-var-lib-cni-bin\") pod \"multus-rvgxj\" (UID: \"dbbe3472-17cc-48dd-8e46-393b00149429\") " pod="openshift-multus/multus-rvgxj" Nov 22 07:11:00 crc kubenswrapper[4853]: I1122 07:11:00.800636 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/dbbe3472-17cc-48dd-8e46-393b00149429-multus-socket-dir-parent\") pod \"multus-rvgxj\" (UID: \"dbbe3472-17cc-48dd-8e46-393b00149429\") " pod="openshift-multus/multus-rvgxj" Nov 22 07:11:00 crc kubenswrapper[4853]: I1122 07:11:00.800652 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/c09200ba-013f-45e3-b581-8523557344b8-hosts-file\") pod \"node-resolver-mlpz8\" (UID: \"c09200ba-013f-45e3-b581-8523557344b8\") " pod="openshift-dns/node-resolver-mlpz8" Nov 22 07:11:00 crc kubenswrapper[4853]: I1122 07:11:00.800698 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/c09200ba-013f-45e3-b581-8523557344b8-hosts-file\") pod \"node-resolver-mlpz8\" (UID: \"c09200ba-013f-45e3-b581-8523557344b8\") " pod="openshift-dns/node-resolver-mlpz8" Nov 22 07:11:00 crc kubenswrapper[4853]: I1122 07:11:00.800725 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/dbbe3472-17cc-48dd-8e46-393b00149429-hostroot\") pod \"multus-rvgxj\" (UID: \"dbbe3472-17cc-48dd-8e46-393b00149429\") " pod="openshift-multus/multus-rvgxj" Nov 22 07:11:00 crc kubenswrapper[4853]: I1122 07:11:00.800740 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/dbbe3472-17cc-48dd-8e46-393b00149429-etc-kubernetes\") pod \"multus-rvgxj\" (UID: \"dbbe3472-17cc-48dd-8e46-393b00149429\") " pod="openshift-multus/multus-rvgxj" Nov 22 07:11:00 crc kubenswrapper[4853]: I1122 07:11:00.800816 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/dbbe3472-17cc-48dd-8e46-393b00149429-system-cni-dir\") pod \"multus-rvgxj\" (UID: \"dbbe3472-17cc-48dd-8e46-393b00149429\") " pod="openshift-multus/multus-rvgxj" Nov 22 07:11:00 crc kubenswrapper[4853]: I1122 07:11:00.800850 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/dbbe3472-17cc-48dd-8e46-393b00149429-cnibin\") pod \"multus-rvgxj\" (UID: \"dbbe3472-17cc-48dd-8e46-393b00149429\") " pod="openshift-multus/multus-rvgxj" Nov 22 07:11:00 crc kubenswrapper[4853]: I1122 07:11:00.800872 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/dbbe3472-17cc-48dd-8e46-393b00149429-host-run-multus-certs\") pod \"multus-rvgxj\" (UID: \"dbbe3472-17cc-48dd-8e46-393b00149429\") " pod="openshift-multus/multus-rvgxj" Nov 22 07:11:00 crc kubenswrapper[4853]: I1122 07:11:00.801013 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/dbbe3472-17cc-48dd-8e46-393b00149429-host-run-k8s-cni-cncf-io\") pod \"multus-rvgxj\" (UID: 
\"dbbe3472-17cc-48dd-8e46-393b00149429\") " pod="openshift-multus/multus-rvgxj" Nov 22 07:11:00 crc kubenswrapper[4853]: I1122 07:11:00.801026 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/dbbe3472-17cc-48dd-8e46-393b00149429-multus-cni-dir\") pod \"multus-rvgxj\" (UID: \"dbbe3472-17cc-48dd-8e46-393b00149429\") " pod="openshift-multus/multus-rvgxj" Nov 22 07:11:00 crc kubenswrapper[4853]: I1122 07:11:00.801053 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/dbbe3472-17cc-48dd-8e46-393b00149429-host-run-netns\") pod \"multus-rvgxj\" (UID: \"dbbe3472-17cc-48dd-8e46-393b00149429\") " pod="openshift-multus/multus-rvgxj" Nov 22 07:11:00 crc kubenswrapper[4853]: I1122 07:11:00.801055 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/dbbe3472-17cc-48dd-8e46-393b00149429-host-var-lib-cni-bin\") pod \"multus-rvgxj\" (UID: \"dbbe3472-17cc-48dd-8e46-393b00149429\") " pod="openshift-multus/multus-rvgxj" Nov 22 07:11:00 crc kubenswrapper[4853]: I1122 07:11:00.800436 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/dbbe3472-17cc-48dd-8e46-393b00149429-os-release\") pod \"multus-rvgxj\" (UID: \"dbbe3472-17cc-48dd-8e46-393b00149429\") " pod="openshift-multus/multus-rvgxj" Nov 22 07:11:00 crc kubenswrapper[4853]: I1122 07:11:00.801065 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/dbbe3472-17cc-48dd-8e46-393b00149429-multus-daemon-config\") pod \"multus-rvgxj\" (UID: \"dbbe3472-17cc-48dd-8e46-393b00149429\") " pod="openshift-multus/multus-rvgxj" Nov 22 07:11:00 crc kubenswrapper[4853]: I1122 07:11:00.801099 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/dbbe3472-17cc-48dd-8e46-393b00149429-multus-socket-dir-parent\") pod \"multus-rvgxj\" (UID: \"dbbe3472-17cc-48dd-8e46-393b00149429\") " pod="openshift-multus/multus-rvgxj" Nov 22 07:11:00 crc kubenswrapper[4853]: I1122 07:11:00.801165 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/dbbe3472-17cc-48dd-8e46-393b00149429-cni-binary-copy\") pod \"multus-rvgxj\" (UID: \"dbbe3472-17cc-48dd-8e46-393b00149429\") " pod="openshift-multus/multus-rvgxj" Nov 22 07:11:00 crc kubenswrapper[4853]: I1122 07:11:00.806662 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:00 crc kubenswrapper[4853]: I1122 07:11:00.806706 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:00 crc kubenswrapper[4853]: I1122 07:11:00.806725 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:00 crc kubenswrapper[4853]: I1122 07:11:00.806763 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:00 crc kubenswrapper[4853]: I1122 07:11:00.806776 4853 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:00Z","lastTransitionTime":"2025-11-22T07:11:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:00 crc kubenswrapper[4853]: I1122 07:11:00.823991 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"19e6b5d3-a090-47b9-bddc-dd2aaccd4ff4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96b61660fcd78f2f81eab2c1a0cb32018a5b32d49581167e33e7dca8ba5ddf2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a0d61ce2e58ce8d9abdd1fb9688722d8e25f8354553aef76b8cd4b3897135a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21be0b3346d0dd20c21a867b8fbd30e6291d09bd1745febbc2face7610f6d831\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apis
erver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6dbbb1c22bd116e5913c6bc774b4705f0e4342861ceae30dc0f53c30d6fa39f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6dbbb1c22bd116e5913c6bc774b4705f0e4342861ceae30dc0f53c30d6fa39f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T07:10:53Z\\\",\\\"message\\\":\\\" envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1122 07:10:53.494379 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1122 07:10:53.495005 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1122 07:10:53.495036 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1122 07:10:53.495110 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1122 07:10:53.495131 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1122 07:10:53.496216 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1763795444\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1763795444\\\\\\\\\\\\\\\" (2025-11-22 06:10:44 +0000 UTC to 2026-11-22 06:10:44 +0000 UTC (now=2025-11-22 07:10:53.49617265 +0000 UTC))\\\\\\\"\\\\nI1122 07:10:53.496261 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1122 07:10:53.496336 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1122 07:10:53.497036 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1122 07:10:53.497137 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1122 07:10:53.498721 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI1122 07:10:53.500291 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF1122 07:10:53.501305 1 cmd.go:182] pods 
\\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:32Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c1ee716027cb8b663ee9faec3faed873df5079418b9b46574cf3b3a74048eef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13ca1fba1325e0ae8556def3ba0c98aa0aff6e3b9ed2aab66a5c950d2bace44e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13ca1fba1325e0ae8556def3ba0c98aa0aff6e3b9ed2aab66a5c950d2bace44e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:00Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:00 crc kubenswrapper[4853]: I1122 07:11:00.832847 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ct6dm\" (UniqueName: \"kubernetes.io/projected/dbbe3472-17cc-48dd-8e46-393b00149429-kube-api-access-ct6dm\") pod \"multus-rvgxj\" (UID: \"dbbe3472-17cc-48dd-8e46-393b00149429\") " pod="openshift-multus/multus-rvgxj" Nov 22 07:11:00 crc kubenswrapper[4853]: I1122 07:11:00.842712 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cvk87\" (UniqueName: \"kubernetes.io/projected/c09200ba-013f-45e3-b581-8523557344b8-kube-api-access-cvk87\") pod 
\"node-resolver-mlpz8\" (UID: \"c09200ba-013f-45e3-b581-8523557344b8\") " pod="openshift-dns/node-resolver-mlpz8" Nov 22 07:11:00 crc kubenswrapper[4853]: I1122 07:11:00.848990 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9003355-0dc6-42ad-950d-a7884e333f4c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f0019c0473b7ccc974dc1659a7ae0565b39d69af17094c420625387ecc9d7f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94704fd95e75549ea29613f00da78d4517cc64896cf1274c9c6f784a61f4ad4a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T07:10:53Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI1122 07:10:26.514729 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI1122 07:10:26.525241 1 observer_polling.go:159] Starting file observer\\\\nI1122 07:10:26.883412 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI1122 07:10:26.897202 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI1122 07:10:50.420853 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nI1122 07:10:51.525898 1 observer_polling.go:162] Shutting down file observer\\\\nF1122 07:10:53.463148 1 cmd.go:179] failed checking apiserver connectivity: client rate limiter Wait returned an error: context canceled\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:21Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9736c38776f22909b8e8a832683d1b881f020257f7183397286e0dd23c973a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30f542bc05dc23bc8dff5029894d08a6fea8333de31a66432814b7bbc412c4fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5d8d77818159ca98019aac809db4b1b2460723bc56b9b6728eec11faeefd026\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manage
r-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:00Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:00 crc kubenswrapper[4853]: I1122 07:11:00.865500 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:00Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:00 crc kubenswrapper[4853]: I1122 07:11:00.877266 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mlpz8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c09200ba-013f-45e3-b581-8523557344b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvk87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mlpz8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:00Z is 
after 2025-08-24T17:21:41Z" Nov 22 07:11:00 crc kubenswrapper[4853]: I1122 07:11:00.885569 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-rvgxj" Nov 22 07:11:00 crc kubenswrapper[4853]: I1122 07:11:00.890248 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-mlpz8" Nov 22 07:11:00 crc kubenswrapper[4853]: I1122 07:11:00.891237 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"19e6b5d3-a090-47b9-bddc-dd2aaccd4ff4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96b61660fcd78f2f81eab2c1a0cb32018a5b32d49581167e33e7dca8ba5ddf2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a0d61ce2e58ce8d9abdd1fb9688722d8e25f8354553aef76b8cd4b3897135a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21be0b3346d0dd20c21a867b8fbd30e6291d09bd1745febbc2face7610f6d831\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserv
er-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6dbbb1c22bd116e5913c6bc774b4705f0e4342861ceae30dc0f53c30d6fa39f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6dbbb1c22bd116e5913c6bc774b4705f0e4342861ceae30dc0f53c30d6fa39f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T07:10:53Z\\\",\\\"message\\\":\\\" envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1122 07:10:53.494379 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1122 07:10:53.495005 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1122 07:10:53.495036 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1122 07:10:53.495110 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1122 07:10:53.495131 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1122 07:10:53.496216 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1763795444\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1763795444\\\\\\\\\\\\\\\" (2025-11-22 06:10:44 +0000 UTC to 2026-11-22 06:10:44 +0000 UTC (now=2025-11-22 07:10:53.49617265 +0000 UTC))\\\\\\\"\\\\nI1122 07:10:53.496261 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1122 07:10:53.496336 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1122 07:10:53.497036 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1122 07:10:53.497137 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1122 07:10:53.498721 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI1122 07:10:53.500291 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF1122 07:10:53.501305 1 cmd.go:182] pods 
\\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:32Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c1ee716027cb8b663ee9faec3faed873df5079418b9b46574cf3b3a74048eef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13ca1fba1325e0ae8556def3ba0c98aa0aff6e3b9ed2aab66a5c950d2bace44e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13ca1fba1325e0ae8556def3ba0c98aa0aff6e3b9ed2aab66a5c950d2bace44e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:00Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:00 crc kubenswrapper[4853]: I1122 07:11:00.911056 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:00 crc kubenswrapper[4853]: I1122 07:11:00.911104 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:00 crc kubenswrapper[4853]: I1122 07:11:00.911116 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:00 crc kubenswrapper[4853]: I1122 07:11:00.911144 4853 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeNotReady" Nov 22 07:11:00 crc kubenswrapper[4853]: I1122 07:11:00.911158 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:00Z","lastTransitionTime":"2025-11-22T07:11:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:00 crc kubenswrapper[4853]: I1122 07:11:00.924572 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9003355-0dc6-42ad-950d-a7884e333f4c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f0019c0473b7ccc974dc1659a7ae0565b39d69af17094c420625387ecc9d7f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94704fd95e75549ea29613f00da78d4517cc64896cf1274c9c6f784a61f4ad4a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T07:10:53Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI1122 07:10:26.514729 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI1122 07:10:26.525241 1 observer_polling.go:159] Starting file observer\\\\nI1122 07:10:26.883412 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI1122 07:10:26.897202 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI1122 07:10:50.420853 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nI1122 07:10:51.525898 1 observer_polling.go:162] Shutting down file observer\\\\nF1122 07:10:53.463148 1 cmd.go:179] failed checking apiserver connectivity: client rate limiter Wait returned an error: context canceled\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:21Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9736c38776f22909b8e8a832683d1b881f020257f7183397286e0dd23c973a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30f542bc05dc23bc8dff5029894d08a6fea8333de31a66432814b7bbc412c4fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5d8d77818159ca98019aac809db4b1b2460723bc56b9b6728eec11faeefd026\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manage
r-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:00Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:00 crc kubenswrapper[4853]: W1122 07:11:00.927848 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc09200ba_013f_45e3_b581_8523557344b8.slice/crio-4df09a1de5f6a991529a635756868c73f29b3d0bc1cc0f5d3b8890f342bea6cd WatchSource:0}: Error finding container 4df09a1de5f6a991529a635756868c73f29b3d0bc1cc0f5d3b8890f342bea6cd: Status 404 returned error can't find the container with id 4df09a1de5f6a991529a635756868c73f29b3d0bc1cc0f5d3b8890f342bea6cd Nov 22 07:11:00 crc kubenswrapper[4853]: I1122 07:11:00.945981 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:00Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:00 crc kubenswrapper[4853]: I1122 07:11:00.961841 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-fflvd"] Nov 22 07:11:00 crc kubenswrapper[4853]: I1122 07:11:00.962399 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-ckn94"] Nov 22 07:11:00 crc kubenswrapper[4853]: I1122 07:11:00.962607 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" Nov 22 07:11:00 crc kubenswrapper[4853]: I1122 07:11:00.963843 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-ckn94" Nov 22 07:11:00 crc kubenswrapper[4853]: I1122 07:11:00.970648 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Nov 22 07:11:00 crc kubenswrapper[4853]: I1122 07:11:00.971235 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Nov 22 07:11:00 crc kubenswrapper[4853]: I1122 07:11:00.975593 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Nov 22 07:11:00 crc kubenswrapper[4853]: I1122 07:11:00.976016 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Nov 22 07:11:00 crc kubenswrapper[4853]: I1122 07:11:00.976758 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Nov 22 07:11:00 crc kubenswrapper[4853]: I1122 07:11:00.976874 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mlpz8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c09200ba-013f-45e3-b581-8523557344b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvk87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mlpz8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:00Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:00 crc kubenswrapper[4853]: I1122 07:11:00.977234 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Nov 22 07:11:00 crc kubenswrapper[4853]: I1122 07:11:00.977416 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-pqtsz"] Nov 22 07:11:00 crc kubenswrapper[4853]: I1122 07:11:00.983046 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-pqtsz" Nov 22 07:11:00 crc kubenswrapper[4853]: I1122 07:11:00.985416 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Nov 22 07:11:00 crc kubenswrapper[4853]: I1122 07:11:00.986912 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Nov 22 07:11:00 crc kubenswrapper[4853]: I1122 07:11:00.986969 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Nov 22 07:11:00 crc kubenswrapper[4853]: I1122 07:11:00.986917 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Nov 22 07:11:00 crc kubenswrapper[4853]: I1122 07:11:00.987045 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Nov 22 07:11:00 crc kubenswrapper[4853]: I1122 07:11:00.987160 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Nov 22 07:11:00 crc kubenswrapper[4853]: I1122 07:11:00.987296 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Nov 22 07:11:00 crc kubenswrapper[4853]: I1122 07:11:00.987355 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Nov 22 07:11:01 crc kubenswrapper[4853]: I1122 07:11:01.002481 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://347fc1ee4909acd862f23a9f25b4ac7ab7271ff03f0f726479d9b711820970c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:00Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:01 crc kubenswrapper[4853]: I1122 07:11:01.016727 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:01 crc kubenswrapper[4853]: I1122 07:11:01.016844 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:01 crc kubenswrapper[4853]: I1122 07:11:01.016859 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:01 crc kubenswrapper[4853]: I1122 07:11:01.016879 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:01 crc kubenswrapper[4853]: I1122 07:11:01.016893 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:01Z","lastTransitionTime":"2025-11-22T07:11:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:01 crc kubenswrapper[4853]: I1122 07:11:01.033151 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rvgxj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbbe3472-17cc-48dd-8e46-393b00149429\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ct6dm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rvgxj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:01Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:01 crc kubenswrapper[4853]: I1122 07:11:01.064976 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-mlpz8" event={"ID":"c09200ba-013f-45e3-b581-8523557344b8","Type":"ContainerStarted","Data":"4df09a1de5f6a991529a635756868c73f29b3d0bc1cc0f5d3b8890f342bea6cd"} Nov 22 07:11:01 crc kubenswrapper[4853]: I1122 07:11:01.065285 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8dcea4b-e887-437a-972d-e6e10d41f725\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://753693005f8490e9928780fd4f230a1c8aba453e1ccbca71e70299799bfa46ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64d65e4d54cc8971b6f6349f7d32243b7faa5bb5974b22ed904a36fa34bc69fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0ba60480f038947ca4a2b9f963d38cb004714735d411218eaf6231614c4e21f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c83df4b798499177cfbd1372a939649b6cebb7
1b50b695af5ba7815cf450669\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed9e89cbcdfc1ac8430f617ee31a367c03be0a407da8af967930fa821e6a12a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://742d02f24a7096b76f82453ab15e531d4a378b4bbf0bb36eb1a1cb4ae1873165\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://742d02f24a7096b76f82453ab15e531d4a378b4bbf0bb36eb1a1cb4ae1873165\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2253bab71d535b61f7489679566abc1acf210e0d5f69773fedbb02befab17c2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2253bab71d535b61f7489679566abc1acf210e0d5f69773fedbb02befab17c2d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:21Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8fa71a988b649937f137ffb7533456570bf4fcd199134424fc7bff2b62335f1e\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fa71a988b649937f137ffb7533456570bf4fcd199134424fc7bff2b62335f1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:16Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:01Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:01 crc kubenswrapper[4853]: I1122 07:11:01.068780 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-rvgxj" event={"ID":"dbbe3472-17cc-48dd-8e46-393b00149429","Type":"ContainerStarted","Data":"f5dd7a2452ce21ccd142b48f8784923fce726c01cde53022237e557150e40435"} Nov 22 07:11:01 crc kubenswrapper[4853]: I1122 07:11:01.083219 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9c07602dc6c3b45809dd76a7079002e54e0bae24f7e3bc3470b463389303e74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:01Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:01 crc kubenswrapper[4853]: I1122 07:11:01.098130 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b27716f9e6c912f968c4a340199cdc2dde0262d86b4832f84cae8daefa831909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a2fb699e2e8a016fa383c1d4ed2d4f0d69c26b79fb2f0dd735a84de2fd5a23d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:01Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:01 crc kubenswrapper[4853]: I1122 07:11:01.104243 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/476c875a-2b87-419a-8042-0ba059620fd8-rootfs\") pod \"machine-config-daemon-fflvd\" (UID: \"476c875a-2b87-419a-8042-0ba059620fd8\") " pod="openshift-machine-config-operator/machine-config-daemon-fflvd" Nov 22 07:11:01 crc kubenswrapper[4853]: I1122 07:11:01.104292 
4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/893f7e02-580a-4093-ab42-ea73ffffcfe6-systemd-units\") pod \"ovnkube-node-pqtsz\" (UID: \"893f7e02-580a-4093-ab42-ea73ffffcfe6\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqtsz" Nov 22 07:11:01 crc kubenswrapper[4853]: I1122 07:11:01.104318 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/893f7e02-580a-4093-ab42-ea73ffffcfe6-var-lib-openvswitch\") pod \"ovnkube-node-pqtsz\" (UID: \"893f7e02-580a-4093-ab42-ea73ffffcfe6\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqtsz" Nov 22 07:11:01 crc kubenswrapper[4853]: I1122 07:11:01.104361 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zkhrj\" (UniqueName: \"kubernetes.io/projected/476c875a-2b87-419a-8042-0ba059620fd8-kube-api-access-zkhrj\") pod \"machine-config-daemon-fflvd\" (UID: \"476c875a-2b87-419a-8042-0ba059620fd8\") " pod="openshift-machine-config-operator/machine-config-daemon-fflvd" Nov 22 07:11:01 crc kubenswrapper[4853]: I1122 07:11:01.104388 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/893f7e02-580a-4093-ab42-ea73ffffcfe6-run-openvswitch\") pod \"ovnkube-node-pqtsz\" (UID: \"893f7e02-580a-4093-ab42-ea73ffffcfe6\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqtsz" Nov 22 07:11:01 crc kubenswrapper[4853]: I1122 07:11:01.104419 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/893f7e02-580a-4093-ab42-ea73ffffcfe6-host-slash\") pod \"ovnkube-node-pqtsz\" (UID: \"893f7e02-580a-4093-ab42-ea73ffffcfe6\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqtsz" Nov 22 07:11:01 crc kubenswrapper[4853]: I1122 07:11:01.104444 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/893f7e02-580a-4093-ab42-ea73ffffcfe6-log-socket\") pod \"ovnkube-node-pqtsz\" (UID: \"893f7e02-580a-4093-ab42-ea73ffffcfe6\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqtsz" Nov 22 07:11:01 crc kubenswrapper[4853]: I1122 07:11:01.104470 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/893f7e02-580a-4093-ab42-ea73ffffcfe6-host-cni-netd\") pod \"ovnkube-node-pqtsz\" (UID: \"893f7e02-580a-4093-ab42-ea73ffffcfe6\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqtsz" Nov 22 07:11:01 crc kubenswrapper[4853]: I1122 07:11:01.104514 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/893f7e02-580a-4093-ab42-ea73ffffcfe6-env-overrides\") pod \"ovnkube-node-pqtsz\" (UID: \"893f7e02-580a-4093-ab42-ea73ffffcfe6\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqtsz" Nov 22 07:11:01 crc kubenswrapper[4853]: I1122 07:11:01.104542 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/893f7e02-580a-4093-ab42-ea73ffffcfe6-host-run-netns\") pod \"ovnkube-node-pqtsz\" (UID: \"893f7e02-580a-4093-ab42-ea73ffffcfe6\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-pqtsz" Nov 22 07:11:01 crc kubenswrapper[4853]: I1122 07:11:01.104570 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/6f60b37f-d6f5-4145-a3e7-cfe92fca6d77-os-release\") pod \"multus-additional-cni-plugins-ckn94\" (UID: \"6f60b37f-d6f5-4145-a3e7-cfe92fca6d77\") " pod="openshift-multus/multus-additional-cni-plugins-ckn94" Nov 22 07:11:01 crc kubenswrapper[4853]: I1122 07:11:01.104643 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/893f7e02-580a-4093-ab42-ea73ffffcfe6-host-run-ovn-kubernetes\") pod \"ovnkube-node-pqtsz\" (UID: \"893f7e02-580a-4093-ab42-ea73ffffcfe6\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqtsz" Nov 22 07:11:01 crc kubenswrapper[4853]: I1122 07:11:01.104691 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/893f7e02-580a-4093-ab42-ea73ffffcfe6-ovnkube-config\") pod \"ovnkube-node-pqtsz\" (UID: \"893f7e02-580a-4093-ab42-ea73ffffcfe6\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqtsz" Nov 22 07:11:01 crc kubenswrapper[4853]: I1122 07:11:01.104724 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/6f60b37f-d6f5-4145-a3e7-cfe92fca6d77-system-cni-dir\") pod \"multus-additional-cni-plugins-ckn94\" (UID: \"6f60b37f-d6f5-4145-a3e7-cfe92fca6d77\") " pod="openshift-multus/multus-additional-cni-plugins-ckn94" Nov 22 07:11:01 crc kubenswrapper[4853]: I1122 07:11:01.104770 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/6f60b37f-d6f5-4145-a3e7-cfe92fca6d77-cni-binary-copy\") pod \"multus-additional-cni-plugins-ckn94\" (UID: \"6f60b37f-d6f5-4145-a3e7-cfe92fca6d77\") " pod="openshift-multus/multus-additional-cni-plugins-ckn94" Nov 22 07:11:01 crc kubenswrapper[4853]: I1122 07:11:01.104796 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/6f60b37f-d6f5-4145-a3e7-cfe92fca6d77-cnibin\") pod \"multus-additional-cni-plugins-ckn94\" (UID: \"6f60b37f-d6f5-4145-a3e7-cfe92fca6d77\") " pod="openshift-multus/multus-additional-cni-plugins-ckn94" Nov 22 07:11:01 crc kubenswrapper[4853]: I1122 07:11:01.104819 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/893f7e02-580a-4093-ab42-ea73ffffcfe6-run-systemd\") pod \"ovnkube-node-pqtsz\" (UID: \"893f7e02-580a-4093-ab42-ea73ffffcfe6\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqtsz" Nov 22 07:11:01 crc kubenswrapper[4853]: I1122 07:11:01.104851 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/893f7e02-580a-4093-ab42-ea73ffffcfe6-node-log\") pod \"ovnkube-node-pqtsz\" (UID: \"893f7e02-580a-4093-ab42-ea73ffffcfe6\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqtsz" Nov 22 07:11:01 crc kubenswrapper[4853]: I1122 07:11:01.104904 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" 
(UniqueName: \"kubernetes.io/configmap/476c875a-2b87-419a-8042-0ba059620fd8-mcd-auth-proxy-config\") pod \"machine-config-daemon-fflvd\" (UID: \"476c875a-2b87-419a-8042-0ba059620fd8\") " pod="openshift-machine-config-operator/machine-config-daemon-fflvd" Nov 22 07:11:01 crc kubenswrapper[4853]: I1122 07:11:01.104926 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/6f60b37f-d6f5-4145-a3e7-cfe92fca6d77-tuning-conf-dir\") pod \"multus-additional-cni-plugins-ckn94\" (UID: \"6f60b37f-d6f5-4145-a3e7-cfe92fca6d77\") " pod="openshift-multus/multus-additional-cni-plugins-ckn94" Nov 22 07:11:01 crc kubenswrapper[4853]: I1122 07:11:01.104967 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/476c875a-2b87-419a-8042-0ba059620fd8-proxy-tls\") pod \"machine-config-daemon-fflvd\" (UID: \"476c875a-2b87-419a-8042-0ba059620fd8\") " pod="openshift-machine-config-operator/machine-config-daemon-fflvd" Nov 22 07:11:01 crc kubenswrapper[4853]: I1122 07:11:01.104991 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/893f7e02-580a-4093-ab42-ea73ffffcfe6-ovnkube-script-lib\") pod \"ovnkube-node-pqtsz\" (UID: \"893f7e02-580a-4093-ab42-ea73ffffcfe6\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqtsz" Nov 22 07:11:01 crc kubenswrapper[4853]: I1122 07:11:01.105014 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/893f7e02-580a-4093-ab42-ea73ffffcfe6-host-kubelet\") pod \"ovnkube-node-pqtsz\" (UID: \"893f7e02-580a-4093-ab42-ea73ffffcfe6\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqtsz" Nov 22 07:11:01 crc kubenswrapper[4853]: I1122 07:11:01.105039 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/893f7e02-580a-4093-ab42-ea73ffffcfe6-run-ovn\") pod \"ovnkube-node-pqtsz\" (UID: \"893f7e02-580a-4093-ab42-ea73ffffcfe6\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqtsz" Nov 22 07:11:01 crc kubenswrapper[4853]: I1122 07:11:01.105065 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/893f7e02-580a-4093-ab42-ea73ffffcfe6-host-cni-bin\") pod \"ovnkube-node-pqtsz\" (UID: \"893f7e02-580a-4093-ab42-ea73ffffcfe6\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqtsz" Nov 22 07:11:01 crc kubenswrapper[4853]: I1122 07:11:01.105091 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/893f7e02-580a-4093-ab42-ea73ffffcfe6-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-pqtsz\" (UID: \"893f7e02-580a-4093-ab42-ea73ffffcfe6\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqtsz" Nov 22 07:11:01 crc kubenswrapper[4853]: I1122 07:11:01.105125 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-89zdr\" (UniqueName: \"kubernetes.io/projected/893f7e02-580a-4093-ab42-ea73ffffcfe6-kube-api-access-89zdr\") pod \"ovnkube-node-pqtsz\" (UID: \"893f7e02-580a-4093-ab42-ea73ffffcfe6\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-pqtsz" Nov 22 07:11:01 crc kubenswrapper[4853]: I1122 07:11:01.105155 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/6f60b37f-d6f5-4145-a3e7-cfe92fca6d77-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-ckn94\" (UID: \"6f60b37f-d6f5-4145-a3e7-cfe92fca6d77\") " pod="openshift-multus/multus-additional-cni-plugins-ckn94" Nov 22 07:11:01 crc kubenswrapper[4853]: I1122 07:11:01.105183 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fcjm2\" (UniqueName: \"kubernetes.io/projected/6f60b37f-d6f5-4145-a3e7-cfe92fca6d77-kube-api-access-fcjm2\") pod \"multus-additional-cni-plugins-ckn94\" (UID: \"6f60b37f-d6f5-4145-a3e7-cfe92fca6d77\") " pod="openshift-multus/multus-additional-cni-plugins-ckn94" Nov 22 07:11:01 crc kubenswrapper[4853]: I1122 07:11:01.105212 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/893f7e02-580a-4093-ab42-ea73ffffcfe6-etc-openvswitch\") pod \"ovnkube-node-pqtsz\" (UID: \"893f7e02-580a-4093-ab42-ea73ffffcfe6\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqtsz" Nov 22 07:11:01 crc kubenswrapper[4853]: I1122 07:11:01.105238 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/893f7e02-580a-4093-ab42-ea73ffffcfe6-ovn-node-metrics-cert\") pod \"ovnkube-node-pqtsz\" (UID: \"893f7e02-580a-4093-ab42-ea73ffffcfe6\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqtsz" Nov 22 07:11:01 crc kubenswrapper[4853]: I1122 07:11:01.114433 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:01Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:01 crc kubenswrapper[4853]: I1122 07:11:01.122055 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:01 crc kubenswrapper[4853]: I1122 07:11:01.122094 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:01 crc kubenswrapper[4853]: I1122 07:11:01.122107 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:01 crc kubenswrapper[4853]: I1122 07:11:01.122125 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:01 crc kubenswrapper[4853]: I1122 07:11:01.122137 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:01Z","lastTransitionTime":"2025-11-22T07:11:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:01 crc kubenswrapper[4853]: I1122 07:11:01.130836 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:01Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:01 crc kubenswrapper[4853]: I1122 07:11:01.144827 4853 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Nov 22 07:11:01 crc kubenswrapper[4853]: I1122 07:11:01.145655 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b27716f9e6c912f968c4a340199cdc2dde0262d86b4832f84cae8daefa831909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a2fb699e2e8a016fa383c1d4ed2d4f0d69c26b79fb2f0dd735a84de2fd5a23d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:01Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:01 crc kubenswrapper[4853]: I1122 07:11:01.160872 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://347fc1ee4909acd862f23a9f25b4ac7ab7271ff03f0f726479d9b711820970c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:01Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:01 crc kubenswrapper[4853]: I1122 07:11:01.174269 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"476c875a-2b87-419a-8042-0ba059620fd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zkhrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zkhrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fflvd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:01Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:01 crc kubenswrapper[4853]: I1122 07:11:01.186401 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9003355-0dc6-42ad-950d-a7884e333f4c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f0019c0473b7ccc974dc1659a7ae0565b39d69af17094c420625387ecc9d7f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94704fd95e75549ea29613f00da78d4517cc64896cf1274c9c6f784a61f4ad4a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T07:10:53Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI1122 07:10:26.514729 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI1122 07:10:26.525241 1 observer_polling.go:159] Starting file observer\\\\nI1122 07:10:26.883412 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI1122 07:10:26.897202 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI1122 07:10:50.420853 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nI1122 07:10:51.525898 1 observer_polling.go:162] Shutting down file observer\\\\nF1122 07:10:53.463148 1 cmd.go:179] failed checking apiserver connectivity: client rate limiter Wait returned an error: context 
canceled\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:21Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9736c38776f22909b8e8a832683d1b881f020257f7183397286e0dd23c973a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30f542bc05dc23bc8dff5029894d08a6fea8333de31a66432814b7bbc412c4fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5d8d77818159ca98019aac809db4b1b2460723bc56b9b6728eec11faeefd026\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:01Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:01 crc kubenswrapper[4853]: I1122 07:11:01.200003 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:01Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:01 crc kubenswrapper[4853]: I1122 07:11:01.206801 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/893f7e02-580a-4093-ab42-ea73ffffcfe6-ovnkube-config\") pod \"ovnkube-node-pqtsz\" (UID: \"893f7e02-580a-4093-ab42-ea73ffffcfe6\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqtsz" Nov 22 07:11:01 crc kubenswrapper[4853]: I1122 07:11:01.206885 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/6f60b37f-d6f5-4145-a3e7-cfe92fca6d77-system-cni-dir\") pod \"multus-additional-cni-plugins-ckn94\" (UID: \"6f60b37f-d6f5-4145-a3e7-cfe92fca6d77\") " pod="openshift-multus/multus-additional-cni-plugins-ckn94" Nov 22 07:11:01 crc kubenswrapper[4853]: I1122 07:11:01.206917 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: 
\"kubernetes.io/configmap/6f60b37f-d6f5-4145-a3e7-cfe92fca6d77-cni-binary-copy\") pod \"multus-additional-cni-plugins-ckn94\" (UID: \"6f60b37f-d6f5-4145-a3e7-cfe92fca6d77\") " pod="openshift-multus/multus-additional-cni-plugins-ckn94" Nov 22 07:11:01 crc kubenswrapper[4853]: I1122 07:11:01.206947 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/893f7e02-580a-4093-ab42-ea73ffffcfe6-run-systemd\") pod \"ovnkube-node-pqtsz\" (UID: \"893f7e02-580a-4093-ab42-ea73ffffcfe6\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqtsz" Nov 22 07:11:01 crc kubenswrapper[4853]: I1122 07:11:01.206974 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/6f60b37f-d6f5-4145-a3e7-cfe92fca6d77-cnibin\") pod \"multus-additional-cni-plugins-ckn94\" (UID: \"6f60b37f-d6f5-4145-a3e7-cfe92fca6d77\") " pod="openshift-multus/multus-additional-cni-plugins-ckn94" Nov 22 07:11:01 crc kubenswrapper[4853]: I1122 07:11:01.207019 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/476c875a-2b87-419a-8042-0ba059620fd8-mcd-auth-proxy-config\") pod \"machine-config-daemon-fflvd\" (UID: \"476c875a-2b87-419a-8042-0ba059620fd8\") " pod="openshift-machine-config-operator/machine-config-daemon-fflvd" Nov 22 07:11:01 crc kubenswrapper[4853]: I1122 07:11:01.207016 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/6f60b37f-d6f5-4145-a3e7-cfe92fca6d77-system-cni-dir\") pod \"multus-additional-cni-plugins-ckn94\" (UID: \"6f60b37f-d6f5-4145-a3e7-cfe92fca6d77\") " pod="openshift-multus/multus-additional-cni-plugins-ckn94" Nov 22 07:11:01 crc kubenswrapper[4853]: I1122 07:11:01.207046 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/6f60b37f-d6f5-4145-a3e7-cfe92fca6d77-tuning-conf-dir\") pod \"multus-additional-cni-plugins-ckn94\" (UID: \"6f60b37f-d6f5-4145-a3e7-cfe92fca6d77\") " pod="openshift-multus/multus-additional-cni-plugins-ckn94" Nov 22 07:11:01 crc kubenswrapper[4853]: I1122 07:11:01.207075 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/893f7e02-580a-4093-ab42-ea73ffffcfe6-node-log\") pod \"ovnkube-node-pqtsz\" (UID: \"893f7e02-580a-4093-ab42-ea73ffffcfe6\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqtsz" Nov 22 07:11:01 crc kubenswrapper[4853]: I1122 07:11:01.207113 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/476c875a-2b87-419a-8042-0ba059620fd8-proxy-tls\") pod \"machine-config-daemon-fflvd\" (UID: \"476c875a-2b87-419a-8042-0ba059620fd8\") " pod="openshift-machine-config-operator/machine-config-daemon-fflvd" Nov 22 07:11:01 crc kubenswrapper[4853]: I1122 07:11:01.207124 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/6f60b37f-d6f5-4145-a3e7-cfe92fca6d77-cnibin\") pod \"multus-additional-cni-plugins-ckn94\" (UID: \"6f60b37f-d6f5-4145-a3e7-cfe92fca6d77\") " pod="openshift-multus/multus-additional-cni-plugins-ckn94" Nov 22 07:11:01 crc kubenswrapper[4853]: I1122 07:11:01.207138 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/893f7e02-580a-4093-ab42-ea73ffffcfe6-ovnkube-script-lib\") pod \"ovnkube-node-pqtsz\" (UID: \"893f7e02-580a-4093-ab42-ea73ffffcfe6\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqtsz" Nov 22 07:11:01 crc kubenswrapper[4853]: I1122 07:11:01.207179 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/893f7e02-580a-4093-ab42-ea73ffffcfe6-run-systemd\") pod \"ovnkube-node-pqtsz\" (UID: \"893f7e02-580a-4093-ab42-ea73ffffcfe6\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqtsz" Nov 22 07:11:01 crc kubenswrapper[4853]: I1122 07:11:01.207214 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/893f7e02-580a-4093-ab42-ea73ffffcfe6-host-kubelet\") pod \"ovnkube-node-pqtsz\" (UID: \"893f7e02-580a-4093-ab42-ea73ffffcfe6\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqtsz" Nov 22 07:11:01 crc kubenswrapper[4853]: I1122 07:11:01.207238 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/893f7e02-580a-4093-ab42-ea73ffffcfe6-run-ovn\") pod \"ovnkube-node-pqtsz\" (UID: \"893f7e02-580a-4093-ab42-ea73ffffcfe6\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqtsz" Nov 22 07:11:01 crc kubenswrapper[4853]: I1122 07:11:01.207261 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/893f7e02-580a-4093-ab42-ea73ffffcfe6-run-ovn\") pod \"ovnkube-node-pqtsz\" (UID: \"893f7e02-580a-4093-ab42-ea73ffffcfe6\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqtsz" Nov 22 07:11:01 crc kubenswrapper[4853]: I1122 07:11:01.207285 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/893f7e02-580a-4093-ab42-ea73ffffcfe6-host-kubelet\") pod \"ovnkube-node-pqtsz\" (UID: \"893f7e02-580a-4093-ab42-ea73ffffcfe6\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqtsz" Nov 22 07:11:01 crc kubenswrapper[4853]: I1122 07:11:01.207289 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/893f7e02-580a-4093-ab42-ea73ffffcfe6-host-cni-bin\") pod \"ovnkube-node-pqtsz\" (UID: \"893f7e02-580a-4093-ab42-ea73ffffcfe6\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqtsz" Nov 22 07:11:01 crc kubenswrapper[4853]: I1122 07:11:01.207320 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/893f7e02-580a-4093-ab42-ea73ffffcfe6-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-pqtsz\" (UID: \"893f7e02-580a-4093-ab42-ea73ffffcfe6\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqtsz" Nov 22 07:11:01 crc kubenswrapper[4853]: I1122 07:11:01.207381 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/6f60b37f-d6f5-4145-a3e7-cfe92fca6d77-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-ckn94\" (UID: \"6f60b37f-d6f5-4145-a3e7-cfe92fca6d77\") " pod="openshift-multus/multus-additional-cni-plugins-ckn94" Nov 22 07:11:01 crc kubenswrapper[4853]: I1122 07:11:01.207405 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fcjm2\" (UniqueName: 
\"kubernetes.io/projected/6f60b37f-d6f5-4145-a3e7-cfe92fca6d77-kube-api-access-fcjm2\") pod \"multus-additional-cni-plugins-ckn94\" (UID: \"6f60b37f-d6f5-4145-a3e7-cfe92fca6d77\") " pod="openshift-multus/multus-additional-cni-plugins-ckn94" Nov 22 07:11:01 crc kubenswrapper[4853]: I1122 07:11:01.207426 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/893f7e02-580a-4093-ab42-ea73ffffcfe6-etc-openvswitch\") pod \"ovnkube-node-pqtsz\" (UID: \"893f7e02-580a-4093-ab42-ea73ffffcfe6\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqtsz" Nov 22 07:11:01 crc kubenswrapper[4853]: I1122 07:11:01.207444 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/893f7e02-580a-4093-ab42-ea73ffffcfe6-ovn-node-metrics-cert\") pod \"ovnkube-node-pqtsz\" (UID: \"893f7e02-580a-4093-ab42-ea73ffffcfe6\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqtsz" Nov 22 07:11:01 crc kubenswrapper[4853]: I1122 07:11:01.207464 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-89zdr\" (UniqueName: \"kubernetes.io/projected/893f7e02-580a-4093-ab42-ea73ffffcfe6-kube-api-access-89zdr\") pod \"ovnkube-node-pqtsz\" (UID: \"893f7e02-580a-4093-ab42-ea73ffffcfe6\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqtsz" Nov 22 07:11:01 crc kubenswrapper[4853]: I1122 07:11:01.207502 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/476c875a-2b87-419a-8042-0ba059620fd8-rootfs\") pod \"machine-config-daemon-fflvd\" (UID: \"476c875a-2b87-419a-8042-0ba059620fd8\") " pod="openshift-machine-config-operator/machine-config-daemon-fflvd" Nov 22 07:11:01 crc kubenswrapper[4853]: I1122 07:11:01.207404 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/6f60b37f-d6f5-4145-a3e7-cfe92fca6d77-tuning-conf-dir\") pod \"multus-additional-cni-plugins-ckn94\" (UID: \"6f60b37f-d6f5-4145-a3e7-cfe92fca6d77\") " pod="openshift-multus/multus-additional-cni-plugins-ckn94" Nov 22 07:11:01 crc kubenswrapper[4853]: I1122 07:11:01.207525 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/893f7e02-580a-4093-ab42-ea73ffffcfe6-systemd-units\") pod \"ovnkube-node-pqtsz\" (UID: \"893f7e02-580a-4093-ab42-ea73ffffcfe6\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqtsz" Nov 22 07:11:01 crc kubenswrapper[4853]: I1122 07:11:01.207552 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/893f7e02-580a-4093-ab42-ea73ffffcfe6-systemd-units\") pod \"ovnkube-node-pqtsz\" (UID: \"893f7e02-580a-4093-ab42-ea73ffffcfe6\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqtsz" Nov 22 07:11:01 crc kubenswrapper[4853]: I1122 07:11:01.207588 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/893f7e02-580a-4093-ab42-ea73ffffcfe6-etc-openvswitch\") pod \"ovnkube-node-pqtsz\" (UID: \"893f7e02-580a-4093-ab42-ea73ffffcfe6\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqtsz" Nov 22 07:11:01 crc kubenswrapper[4853]: I1122 07:11:01.207597 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/893f7e02-580a-4093-ab42-ea73ffffcfe6-var-lib-openvswitch\") pod \"ovnkube-node-pqtsz\" (UID: \"893f7e02-580a-4093-ab42-ea73ffffcfe6\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqtsz" Nov 22 07:11:01 crc kubenswrapper[4853]: I1122 07:11:01.207619 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/893f7e02-580a-4093-ab42-ea73ffffcfe6-node-log\") pod \"ovnkube-node-pqtsz\" (UID: \"893f7e02-580a-4093-ab42-ea73ffffcfe6\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqtsz" Nov 22 07:11:01 crc kubenswrapper[4853]: I1122 07:11:01.207633 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zkhrj\" (UniqueName: \"kubernetes.io/projected/476c875a-2b87-419a-8042-0ba059620fd8-kube-api-access-zkhrj\") pod \"machine-config-daemon-fflvd\" (UID: \"476c875a-2b87-419a-8042-0ba059620fd8\") " pod="openshift-machine-config-operator/machine-config-daemon-fflvd" Nov 22 07:11:01 crc kubenswrapper[4853]: I1122 07:11:01.207724 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/893f7e02-580a-4093-ab42-ea73ffffcfe6-run-openvswitch\") pod \"ovnkube-node-pqtsz\" (UID: \"893f7e02-580a-4093-ab42-ea73ffffcfe6\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqtsz" Nov 22 07:11:01 crc kubenswrapper[4853]: I1122 07:11:01.207790 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/893f7e02-580a-4093-ab42-ea73ffffcfe6-host-slash\") pod \"ovnkube-node-pqtsz\" (UID: \"893f7e02-580a-4093-ab42-ea73ffffcfe6\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqtsz" Nov 22 07:11:01 crc kubenswrapper[4853]: I1122 07:11:01.207810 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/893f7e02-580a-4093-ab42-ea73ffffcfe6-log-socket\") pod \"ovnkube-node-pqtsz\" (UID: \"893f7e02-580a-4093-ab42-ea73ffffcfe6\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqtsz" Nov 22 07:11:01 crc kubenswrapper[4853]: I1122 07:11:01.207831 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/893f7e02-580a-4093-ab42-ea73ffffcfe6-host-cni-netd\") pod \"ovnkube-node-pqtsz\" (UID: \"893f7e02-580a-4093-ab42-ea73ffffcfe6\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqtsz" Nov 22 07:11:01 crc kubenswrapper[4853]: I1122 07:11:01.207877 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/893f7e02-580a-4093-ab42-ea73ffffcfe6-env-overrides\") pod \"ovnkube-node-pqtsz\" (UID: \"893f7e02-580a-4093-ab42-ea73ffffcfe6\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqtsz" Nov 22 07:11:01 crc kubenswrapper[4853]: I1122 07:11:01.207942 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/893f7e02-580a-4093-ab42-ea73ffffcfe6-host-run-netns\") pod \"ovnkube-node-pqtsz\" (UID: \"893f7e02-580a-4093-ab42-ea73ffffcfe6\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqtsz" Nov 22 07:11:01 crc kubenswrapper[4853]: I1122 07:11:01.207944 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/893f7e02-580a-4093-ab42-ea73ffffcfe6-host-cni-bin\") pod 
\"ovnkube-node-pqtsz\" (UID: \"893f7e02-580a-4093-ab42-ea73ffffcfe6\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqtsz" Nov 22 07:11:01 crc kubenswrapper[4853]: I1122 07:11:01.207970 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/6f60b37f-d6f5-4145-a3e7-cfe92fca6d77-os-release\") pod \"multus-additional-cni-plugins-ckn94\" (UID: \"6f60b37f-d6f5-4145-a3e7-cfe92fca6d77\") " pod="openshift-multus/multus-additional-cni-plugins-ckn94" Nov 22 07:11:01 crc kubenswrapper[4853]: I1122 07:11:01.207983 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/893f7e02-580a-4093-ab42-ea73ffffcfe6-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-pqtsz\" (UID: \"893f7e02-580a-4093-ab42-ea73ffffcfe6\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqtsz" Nov 22 07:11:01 crc kubenswrapper[4853]: I1122 07:11:01.207997 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/893f7e02-580a-4093-ab42-ea73ffffcfe6-host-run-ovn-kubernetes\") pod \"ovnkube-node-pqtsz\" (UID: \"893f7e02-580a-4093-ab42-ea73ffffcfe6\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqtsz" Nov 22 07:11:01 crc kubenswrapper[4853]: I1122 07:11:01.208005 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/6f60b37f-d6f5-4145-a3e7-cfe92fca6d77-cni-binary-copy\") pod \"multus-additional-cni-plugins-ckn94\" (UID: \"6f60b37f-d6f5-4145-a3e7-cfe92fca6d77\") " pod="openshift-multus/multus-additional-cni-plugins-ckn94" Nov 22 07:11:01 crc kubenswrapper[4853]: I1122 07:11:01.208005 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/893f7e02-580a-4093-ab42-ea73ffffcfe6-ovnkube-config\") pod \"ovnkube-node-pqtsz\" (UID: \"893f7e02-580a-4093-ab42-ea73ffffcfe6\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqtsz" Nov 22 07:11:01 crc kubenswrapper[4853]: I1122 07:11:01.208052 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/893f7e02-580a-4093-ab42-ea73ffffcfe6-log-socket\") pod \"ovnkube-node-pqtsz\" (UID: \"893f7e02-580a-4093-ab42-ea73ffffcfe6\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqtsz" Nov 22 07:11:01 crc kubenswrapper[4853]: I1122 07:11:01.208074 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/893f7e02-580a-4093-ab42-ea73ffffcfe6-host-run-ovn-kubernetes\") pod \"ovnkube-node-pqtsz\" (UID: \"893f7e02-580a-4093-ab42-ea73ffffcfe6\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqtsz" Nov 22 07:11:01 crc kubenswrapper[4853]: I1122 07:11:01.208108 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/893f7e02-580a-4093-ab42-ea73ffffcfe6-host-cni-netd\") pod \"ovnkube-node-pqtsz\" (UID: \"893f7e02-580a-4093-ab42-ea73ffffcfe6\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqtsz" Nov 22 07:11:01 crc kubenswrapper[4853]: I1122 07:11:01.208139 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/476c875a-2b87-419a-8042-0ba059620fd8-mcd-auth-proxy-config\") pod 
\"machine-config-daemon-fflvd\" (UID: \"476c875a-2b87-419a-8042-0ba059620fd8\") " pod="openshift-machine-config-operator/machine-config-daemon-fflvd" Nov 22 07:11:01 crc kubenswrapper[4853]: I1122 07:11:01.208186 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/893f7e02-580a-4093-ab42-ea73ffffcfe6-host-run-netns\") pod \"ovnkube-node-pqtsz\" (UID: \"893f7e02-580a-4093-ab42-ea73ffffcfe6\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqtsz" Nov 22 07:11:01 crc kubenswrapper[4853]: I1122 07:11:01.208297 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/893f7e02-580a-4093-ab42-ea73ffffcfe6-ovnkube-script-lib\") pod \"ovnkube-node-pqtsz\" (UID: \"893f7e02-580a-4093-ab42-ea73ffffcfe6\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqtsz" Nov 22 07:11:01 crc kubenswrapper[4853]: I1122 07:11:01.208367 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/476c875a-2b87-419a-8042-0ba059620fd8-rootfs\") pod \"machine-config-daemon-fflvd\" (UID: \"476c875a-2b87-419a-8042-0ba059620fd8\") " pod="openshift-machine-config-operator/machine-config-daemon-fflvd" Nov 22 07:11:01 crc kubenswrapper[4853]: I1122 07:11:01.208383 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/6f60b37f-d6f5-4145-a3e7-cfe92fca6d77-os-release\") pod \"multus-additional-cni-plugins-ckn94\" (UID: \"6f60b37f-d6f5-4145-a3e7-cfe92fca6d77\") " pod="openshift-multus/multus-additional-cni-plugins-ckn94" Nov 22 07:11:01 crc kubenswrapper[4853]: I1122 07:11:01.208421 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/893f7e02-580a-4093-ab42-ea73ffffcfe6-host-slash\") pod \"ovnkube-node-pqtsz\" (UID: \"893f7e02-580a-4093-ab42-ea73ffffcfe6\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqtsz" Nov 22 07:11:01 crc kubenswrapper[4853]: I1122 07:11:01.208420 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/893f7e02-580a-4093-ab42-ea73ffffcfe6-run-openvswitch\") pod \"ovnkube-node-pqtsz\" (UID: \"893f7e02-580a-4093-ab42-ea73ffffcfe6\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqtsz" Nov 22 07:11:01 crc kubenswrapper[4853]: I1122 07:11:01.208459 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/893f7e02-580a-4093-ab42-ea73ffffcfe6-var-lib-openvswitch\") pod \"ovnkube-node-pqtsz\" (UID: \"893f7e02-580a-4093-ab42-ea73ffffcfe6\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqtsz" Nov 22 07:11:01 crc kubenswrapper[4853]: I1122 07:11:01.208654 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/6f60b37f-d6f5-4145-a3e7-cfe92fca6d77-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-ckn94\" (UID: \"6f60b37f-d6f5-4145-a3e7-cfe92fca6d77\") " pod="openshift-multus/multus-additional-cni-plugins-ckn94" Nov 22 07:11:01 crc kubenswrapper[4853]: I1122 07:11:01.208684 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/893f7e02-580a-4093-ab42-ea73ffffcfe6-env-overrides\") pod \"ovnkube-node-pqtsz\" (UID: 
\"893f7e02-580a-4093-ab42-ea73ffffcfe6\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqtsz" Nov 22 07:11:01 crc kubenswrapper[4853]: I1122 07:11:01.211212 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/893f7e02-580a-4093-ab42-ea73ffffcfe6-ovn-node-metrics-cert\") pod \"ovnkube-node-pqtsz\" (UID: \"893f7e02-580a-4093-ab42-ea73ffffcfe6\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqtsz" Nov 22 07:11:01 crc kubenswrapper[4853]: I1122 07:11:01.212400 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/476c875a-2b87-419a-8042-0ba059620fd8-proxy-tls\") pod \"machine-config-daemon-fflvd\" (UID: \"476c875a-2b87-419a-8042-0ba059620fd8\") " pod="openshift-machine-config-operator/machine-config-daemon-fflvd" Nov 22 07:11:01 crc kubenswrapper[4853]: I1122 07:11:01.220477 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8dcea4b-e887-437a-972d-e6e10d41f725\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://753693005f8490e9928780fd4f230a1c8aba453e1ccbca71e70299799bfa46ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64d65e4d54cc8971b6f6349f7d32243b7faa5bb5974b22ed904a36fa34bc69fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-reso
urces\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0ba60480f038947ca4a2b9f963d38cb004714735d411218eaf6231614c4e21f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c83df4b798499177cfbd1372a939649b6cebb71b50b695af5ba7815cf450669\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed9e89cbcdfc1ac8430f617ee31a367c03be0a407da8af967930fa821e6a12a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://742d02f24a7096b76f82453ab15e531d4a378b4bbf0bb36eb1a1cb4ae1873165\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://742d02f24a7096b76f82453ab15e531d4a378b4bbf0bb36eb1a1cb4ae1873165\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"reason\\\":\\\"Comp
leted\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2253bab71d535b61f7489679566abc1acf210e0d5f69773fedbb02befab17c2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2253bab71d535b61f7489679566abc1acf210e0d5f69773fedbb02befab17c2d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:21Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8fa71a988b649937f137ffb7533456570bf4fcd199134424fc7bff2b62335f1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fa71a988b649937f137ffb7533456570bf4fcd199134424fc7bff2b62335f1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:16Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:01Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:01 crc kubenswrapper[4853]: I1122 07:11:01.223727 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-89zdr\" (UniqueName: \"kubernetes.io/projected/893f7e02-580a-4093-ab42-ea73ffffcfe6-kube-api-access-89zdr\") pod \"ovnkube-node-pqtsz\" (UID: \"893f7e02-580a-4093-ab42-ea73ffffcfe6\") " pod="openshift-ovn-kubernetes/ovnkube-node-pqtsz" Nov 22 07:11:01 crc kubenswrapper[4853]: I1122 07:11:01.223851 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zkhrj\" (UniqueName: \"kubernetes.io/projected/476c875a-2b87-419a-8042-0ba059620fd8-kube-api-access-zkhrj\") pod \"machine-config-daemon-fflvd\" (UID: \"476c875a-2b87-419a-8042-0ba059620fd8\") " pod="openshift-machine-config-operator/machine-config-daemon-fflvd" Nov 22 07:11:01 crc kubenswrapper[4853]: I1122 07:11:01.224484 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" 
Nov 22 07:11:01 crc kubenswrapper[4853]: I1122 07:11:01.224513 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:01 crc kubenswrapper[4853]: I1122 07:11:01.224524 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:01 crc kubenswrapper[4853]: I1122 07:11:01.224542 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:01 crc kubenswrapper[4853]: I1122 07:11:01.224555 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:01Z","lastTransitionTime":"2025-11-22T07:11:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:01 crc kubenswrapper[4853]: I1122 07:11:01.227997 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fcjm2\" (UniqueName: \"kubernetes.io/projected/6f60b37f-d6f5-4145-a3e7-cfe92fca6d77-kube-api-access-fcjm2\") pod \"multus-additional-cni-plugins-ckn94\" (UID: \"6f60b37f-d6f5-4145-a3e7-cfe92fca6d77\") " pod="openshift-multus/multus-additional-cni-plugins-ckn94" Nov 22 07:11:01 crc kubenswrapper[4853]: I1122 07:11:01.236182 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9c07602dc6c3b45809dd76a7079002e54e0bae24f7e3bc3470b463389303e74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:01Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:01 crc kubenswrapper[4853]: I1122 07:11:01.249352 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:01Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:01 crc kubenswrapper[4853]: I1122 07:11:01.262405 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:01Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:01 crc kubenswrapper[4853]: I1122 07:11:01.274335 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rvgxj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbbe3472-17cc-48dd-8e46-393b00149429\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ct6dm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rvgxj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:01Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:01 crc kubenswrapper[4853]: I1122 07:11:01.290314 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ckn94" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f60b37f-d6f5-4145-a3e7-cfe92fca6d77\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"
name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\
\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ckn94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:01Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:01 crc kubenswrapper[4853]: I1122 07:11:01.296430 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" Nov 22 07:11:01 crc kubenswrapper[4853]: I1122 07:11:01.306494 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"19e6b5d3-a090-47b9-bddc-dd2aaccd4ff4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96b61660fcd78f2f81eab2c1a0cb32018a5b32d49581167e33e7dca8ba5ddf2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a0d61ce2e58ce8d9abdd1fb9688722d8e25f8354553aef76b8cd4b3897135a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21be0b3346d0dd20c21a867b8fbd30e6291d09bd1745febbc2face7610f6d831\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6dbbb1c22bd116e5913c6bc774b4705f0e4342861ceae30dc0f53c30d6fa39f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6dbbb1c22bd116e5913c6bc774b4705f0e4342861ceae30dc0f53c30d6fa39f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T07:10:53Z\\\",\\\"message\\\":\\\" envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" 
enabled=false\\\\nI1122 07:10:53.494379 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1122 07:10:53.495005 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1122 07:10:53.495036 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1122 07:10:53.495110 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1122 07:10:53.495131 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1122 07:10:53.496216 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1763795444\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1763795444\\\\\\\\\\\\\\\" (2025-11-22 06:10:44 +0000 UTC to 2026-11-22 06:10:44 +0000 UTC (now=2025-11-22 07:10:53.49617265 +0000 UTC))\\\\\\\"\\\\nI1122 07:10:53.496261 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1122 07:10:53.496336 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1122 07:10:53.497036 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1122 07:10:53.497137 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1122 07:10:53.498721 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI1122 07:10:53.500291 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF1122 07:10:53.501305 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:32Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c1ee716027cb8b663ee9faec3faed873df5079418b9b46574cf3b3a74048eef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13ca1fba1325e0ae8556def3ba0c98aa0aff6e3b9ed2aab66a5c950d2bace44e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13ca1fba1325e0ae8556def3ba0c98aa0aff6e3b9ed2aab66a5c950d2bace44e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:01Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:01 crc kubenswrapper[4853]: W1122 07:11:01.309443 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod476c875a_2b87_419a_8042_0ba059620fd8.slice/crio-27dc6098de83356bd01141ade45472adb01d8e83c887df337918791fa7685bc0 WatchSource:0}: Error finding container 27dc6098de83356bd01141ade45472adb01d8e83c887df337918791fa7685bc0: Status 404 returned error can't find the container with id 27dc6098de83356bd01141ade45472adb01d8e83c887df337918791fa7685bc0 Nov 22 07:11:01 crc kubenswrapper[4853]: I1122 07:11:01.314309 4853 util.go:30] "No sandbox for pod can be found. 
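Every "Failed to update status for pod" record above fails the same way: the status patch has to clear the pod.network-node-identity.openshift.io validating webhook on https://127.0.0.1:9743, and that webhook is serving a certificate whose NotAfter (2025-08-24T17:21:41Z) is months behind the node clock (2025-11-22), so Go's TLS verifier rejects it with the quoted x509 error. A minimal sketch of the same validity-window check, assuming the webhook is still listening on the address from the log:

```go
// certcheck.go — a sketch, not cluster tooling: dial a TLS endpoint without
// verification and report whether its peer certificate would pass the
// NotBefore/NotAfter window test that fails in the records above.
package main

import (
	"crypto/tls"
	"fmt"
	"os"
	"time"
)

func main() {
	addr := "127.0.0.1:9743" // webhook address taken from the log
	// InsecureSkipVerify is needed here precisely because normal
	// verification is what fails; we want to inspect the cert anyway.
	conn, err := tls.Dial("tcp", addr, &tls.Config{InsecureSkipVerify: true})
	if err != nil {
		fmt.Fprintln(os.Stderr, "dial:", err)
		os.Exit(1)
	}
	defer conn.Close()

	now := time.Now()
	for _, cert := range conn.ConnectionState().PeerCertificates {
		bad := now.Before(cert.NotBefore) || now.After(cert.NotAfter)
		fmt.Printf("subject=%q notBefore=%s notAfter=%s outsideWindow=%v\n",
			cert.Subject,
			cert.NotBefore.Format(time.RFC3339),
			cert.NotAfter.Format(time.RFC3339),
			bad)
	}
}
```

On a resumed CRC VM this is a common state after a long suspend; once the cluster's certificate rotation re-issues the expired cert, these patch failures presumably stop.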
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-ckn94" Nov 22 07:11:01 crc kubenswrapper[4853]: I1122 07:11:01.317452 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mlpz8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c09200ba-013f-45e3-b581-8523557344b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvk87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mlpz8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:01Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:01 crc kubenswrapper[4853]: I1122 07:11:01.323260 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-pqtsz" Nov 22 07:11:01 crc kubenswrapper[4853]: I1122 07:11:01.326977 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:01 crc kubenswrapper[4853]: I1122 07:11:01.327013 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:01 crc kubenswrapper[4853]: I1122 07:11:01.327024 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:01 crc kubenswrapper[4853]: I1122 07:11:01.327041 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:01 crc kubenswrapper[4853]: I1122 07:11:01.327052 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:01Z","lastTransitionTime":"2025-11-22T07:11:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:01 crc kubenswrapper[4853]: W1122 07:11:01.333451 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6f60b37f_d6f5_4145_a3e7_cfe92fca6d77.slice/crio-22d1a1cb9a39936590e70057622c2412da9a2c4c146c3aab6d34d73a3247c79d WatchSource:0}: Error finding container 22d1a1cb9a39936590e70057622c2412da9a2c4c146c3aab6d34d73a3247c79d: Status 404 returned error can't find the container with id 22d1a1cb9a39936590e70057622c2412da9a2c4c146c3aab6d34d73a3247c79d Nov 22 07:11:01 crc kubenswrapper[4853]: I1122 07:11:01.337580 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqtsz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"893f7e02-580a-4093-ab42-ea73ffffcfe6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pqtsz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:01Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:01 crc kubenswrapper[4853]: W1122 07:11:01.343876 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod893f7e02_580a_4093_ab42_ea73ffffcfe6.slice/crio-2abee242a2ef10fc8ab292ffe4ace663b8351bea615a8b07e23d54fa800f7783 WatchSource:0}: Error finding container 
2abee242a2ef10fc8ab292ffe4ace663b8351bea615a8b07e23d54fa800f7783: Status 404 returned error can't find the container with id 2abee242a2ef10fc8ab292ffe4ace663b8351bea615a8b07e23d54fa800f7783 Nov 22 07:11:01 crc kubenswrapper[4853]: I1122 07:11:01.434538 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:01 crc kubenswrapper[4853]: I1122 07:11:01.434604 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:01 crc kubenswrapper[4853]: I1122 07:11:01.434617 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:01 crc kubenswrapper[4853]: I1122 07:11:01.434637 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:01 crc kubenswrapper[4853]: I1122 07:11:01.434654 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:01Z","lastTransitionTime":"2025-11-22T07:11:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:01 crc kubenswrapper[4853]: I1122 07:11:01.537397 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:01 crc kubenswrapper[4853]: I1122 07:11:01.537438 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:01 crc kubenswrapper[4853]: I1122 07:11:01.537449 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:01 crc kubenswrapper[4853]: I1122 07:11:01.537466 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:01 crc kubenswrapper[4853]: I1122 07:11:01.537478 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:01Z","lastTransitionTime":"2025-11-22T07:11:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
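The "Node became not ready" records around here all carry the same KubeletNotReady message: the runtime reports NetworkReady=false because /etc/kubernetes/cni/net.d/ contains no CNI configuration yet, and the multus/ovnkube pods that would write it are themselves still in ContainerCreating or PodInitializing above. The gate itself is simple; a sketch of an equivalent check (the accepted extensions follow the reference CNI config loader and are an assumption here):

```go
// cnicheck.go — a sketch of the readiness gate behind the repeated
// NetworkReady=false condition: is there at least one CNI config file
// in the conf dir named in the log message?
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	dir := "/etc/kubernetes/cni/net.d" // path from the log message
	entries, err := os.ReadDir(dir)
	if err != nil {
		fmt.Println("network not ready:", err)
		return
	}
	var confs []string
	for _, e := range entries {
		if e.IsDir() {
			continue
		}
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json": // extensions the reference CNI loader accepts (assumed)
			confs = append(confs, e.Name())
		}
	}
	if len(confs) == 0 {
		fmt.Println("network not ready: no CNI configuration file in", dir)
		return
	}
	fmt.Println("CNI configs found:", confs)
}
```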
Has your network provider started?"} Nov 22 07:11:01 crc kubenswrapper[4853]: I1122 07:11:01.639319 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:01 crc kubenswrapper[4853]: I1122 07:11:01.639359 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:01 crc kubenswrapper[4853]: I1122 07:11:01.639369 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:01 crc kubenswrapper[4853]: I1122 07:11:01.639383 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:01 crc kubenswrapper[4853]: I1122 07:11:01.639394 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:01Z","lastTransitionTime":"2025-11-22T07:11:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:01 crc kubenswrapper[4853]: I1122 07:11:01.741779 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:01 crc kubenswrapper[4853]: I1122 07:11:01.741825 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:01 crc kubenswrapper[4853]: I1122 07:11:01.741837 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:01 crc kubenswrapper[4853]: I1122 07:11:01.741858 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:01 crc kubenswrapper[4853]: I1122 07:11:01.741871 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:01Z","lastTransitionTime":"2025-11-22T07:11:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:01 crc kubenswrapper[4853]: I1122 07:11:01.844655 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:01 crc kubenswrapper[4853]: I1122 07:11:01.844699 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:01 crc kubenswrapper[4853]: I1122 07:11:01.844708 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:01 crc kubenswrapper[4853]: I1122 07:11:01.844722 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:01 crc kubenswrapper[4853]: I1122 07:11:01.844731 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:01Z","lastTransitionTime":"2025-11-22T07:11:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
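The same five-record group (NodeHasSufficientMemory, NodeHasNoDiskPressure, NodeHasSufficientPID, NodeNotReady, then the setters.go:603 condition) repeats with klog timestamps roughly 100 ms apart (.327, .434, .537, .639, .741, .844, .947): the kubelet is re-recording node status on every sync pass while the node stays NotReady. A sketch for measuring that cadence from a saved copy of this journal, fed on stdin:

```go
// cadence.go — a sketch: pull the klog timestamp (e.g. "I1122 07:11:01.327052")
// from each line on stdin and print the gap between consecutive matches.
package main

import (
	"bufio"
	"fmt"
	"log"
	"os"
	"regexp"
	"time"
)

// severity letter, MMDD, then HH:MM:SS.micros — the klog header format.
var ts = regexp.MustCompile(`[IWEF](\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})`)

func main() {
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1<<20), 1<<20) // journal lines here are very long
	var prev time.Time
	for sc.Scan() {
		m := ts.FindStringSubmatch(sc.Text())
		if m == nil {
			continue // not a klog-formatted line
		}
		t, err := time.Parse("0102 15:04:05.000000", m[1]+" "+m[2])
		if err != nil {
			log.Fatal(err)
		}
		if !prev.IsZero() {
			fmt.Printf("%s  +%v\n", m[2], t.Sub(prev))
		}
		prev = t
	}
	if err := sc.Err(); err != nil {
		log.Fatal(err)
	}
}
```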
Has your network provider started?"} Nov 22 07:11:01 crc kubenswrapper[4853]: I1122 07:11:01.947798 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:01 crc kubenswrapper[4853]: I1122 07:11:01.947861 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:01 crc kubenswrapper[4853]: I1122 07:11:01.947873 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:01 crc kubenswrapper[4853]: I1122 07:11:01.947891 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:01 crc kubenswrapper[4853]: I1122 07:11:01.947902 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:01Z","lastTransitionTime":"2025-11-22T07:11:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:02 crc kubenswrapper[4853]: I1122 07:11:02.050968 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:02 crc kubenswrapper[4853]: I1122 07:11:02.051028 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:02 crc kubenswrapper[4853]: I1122 07:11:02.051042 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:02 crc kubenswrapper[4853]: I1122 07:11:02.051099 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:02 crc kubenswrapper[4853]: I1122 07:11:02.051125 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:02Z","lastTransitionTime":"2025-11-22T07:11:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:02 crc kubenswrapper[4853]: I1122 07:11:02.073463 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-rvgxj" event={"ID":"dbbe3472-17cc-48dd-8e46-393b00149429","Type":"ContainerStarted","Data":"5acc4b664aaf1d5b0471beb25334e65152bc3b9907f911f90e4e7899aaa1ce8d"} Nov 22 07:11:02 crc kubenswrapper[4853]: I1122 07:11:02.074349 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqtsz" event={"ID":"893f7e02-580a-4093-ab42-ea73ffffcfe6","Type":"ContainerStarted","Data":"2abee242a2ef10fc8ab292ffe4ace663b8351bea615a8b07e23d54fa800f7783"} Nov 22 07:11:02 crc kubenswrapper[4853]: I1122 07:11:02.075419 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ckn94" event={"ID":"6f60b37f-d6f5-4145-a3e7-cfe92fca6d77","Type":"ContainerStarted","Data":"22d1a1cb9a39936590e70057622c2412da9a2c4c146c3aab6d34d73a3247c79d"} Nov 22 07:11:02 crc kubenswrapper[4853]: I1122 07:11:02.076597 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" event={"ID":"476c875a-2b87-419a-8042-0ba059620fd8","Type":"ContainerStarted","Data":"27dc6098de83356bd01141ade45472adb01d8e83c887df337918791fa7685bc0"} Nov 22 07:11:02 crc kubenswrapper[4853]: I1122 07:11:02.154026 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:02 crc kubenswrapper[4853]: I1122 07:11:02.154076 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:02 crc kubenswrapper[4853]: I1122 07:11:02.154089 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:02 crc kubenswrapper[4853]: I1122 07:11:02.154107 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:02 crc kubenswrapper[4853]: I1122 07:11:02.154122 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:02Z","lastTransitionTime":"2025-11-22T07:11:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
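The four "SyncLoop (PLEG): event for pod ... ContainerStarted" records just above close the loop on the earlier cgroup warnings: the IDs beginning 27dc6098, 22d1a1cb and 2abee242 are the pod sandbox IDs whose cgroups cAdvisor spotted (and got a 404 for) moments before CRI-O finished registering them, and the one beginning 5acc4b66 is the new multus-rvgxj sandbox. Tracing one ID through a saved journal takes only a few lines; the file name here is an assumption:

```go
// idgrep.go — a sketch: print every journal line containing a given
// container/sandbox ID prefix, so a watch warning and its later PLEG
// ContainerStarted event line up.
package main

import (
	"bufio"
	"fmt"
	"log"
	"os"
	"strings"
)

func main() {
	if len(os.Args) != 3 {
		log.Fatalf("usage: %s <journal.txt> <id-prefix>", os.Args[0])
	}
	f, err := os.Open(os.Args[1])
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	sc.Buffer(make([]byte, 0, 1<<20), 1<<20) // journal lines here are very long
	for n := 1; sc.Scan(); n++ {
		if strings.Contains(sc.Text(), os.Args[2]) {
			fmt.Printf("%6d: %s\n", n, sc.Text())
		}
	}
	if err := sc.Err(); err != nil {
		log.Fatal(err)
	}
}
```

For example, `go run idgrep.go journal.txt 2abee242` should print both the "Failed to process watch event" warning and the ovnkube-node-pqtsz ContainerStarted event.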
Has your network provider started?"} Nov 22 07:11:02 crc kubenswrapper[4853]: I1122 07:11:02.257534 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:02 crc kubenswrapper[4853]: I1122 07:11:02.257587 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:02 crc kubenswrapper[4853]: I1122 07:11:02.257600 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:02 crc kubenswrapper[4853]: I1122 07:11:02.257621 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:02 crc kubenswrapper[4853]: I1122 07:11:02.257635 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:02Z","lastTransitionTime":"2025-11-22T07:11:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:02 crc kubenswrapper[4853]: I1122 07:11:02.360356 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:02 crc kubenswrapper[4853]: I1122 07:11:02.360404 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:02 crc kubenswrapper[4853]: I1122 07:11:02.360414 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:02 crc kubenswrapper[4853]: I1122 07:11:02.360432 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:02 crc kubenswrapper[4853]: I1122 07:11:02.360442 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:02Z","lastTransitionTime":"2025-11-22T07:11:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:02 crc kubenswrapper[4853]: I1122 07:11:02.463330 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:02 crc kubenswrapper[4853]: I1122 07:11:02.463383 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:02 crc kubenswrapper[4853]: I1122 07:11:02.463397 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:02 crc kubenswrapper[4853]: I1122 07:11:02.463419 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:02 crc kubenswrapper[4853]: I1122 07:11:02.463435 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:02Z","lastTransitionTime":"2025-11-22T07:11:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:02 crc kubenswrapper[4853]: I1122 07:11:02.477772 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-9nx9m"] Nov 22 07:11:02 crc kubenswrapper[4853]: I1122 07:11:02.478323 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-9nx9m" Nov 22 07:11:02 crc kubenswrapper[4853]: I1122 07:11:02.480627 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Nov 22 07:11:02 crc kubenswrapper[4853]: I1122 07:11:02.480930 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Nov 22 07:11:02 crc kubenswrapper[4853]: I1122 07:11:02.481034 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Nov 22 07:11:02 crc kubenswrapper[4853]: I1122 07:11:02.481103 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Nov 22 07:11:02 crc kubenswrapper[4853]: I1122 07:11:02.501811 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8dcea4b-e887-437a-972d-e6e10d41f725\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://753693005f8490e9928780fd4f230a1c8aba453e1ccbca71e70299799bfa46ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64d65e4d54cc8971b6f6349f7d32243b7faa5bb5974b22ed904a36fa34bc69fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"
name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0ba60480f038947ca4a2b9f963d38cb004714735d411218eaf6231614c4e21f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c83df4b798499177cfbd1372a939649b6cebb71b50b695af5ba7815cf450669\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed9e89cbcdfc1ac8430f617ee31a367c03be0a407da8af967930fa821e6a12a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://742d02f24a7096b76f82453ab15e531d4a378b4bbf0bb36eb1a1cb4ae1873165\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":
0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://742d02f24a7096b76f82453ab15e531d4a378b4bbf0bb36eb1a1cb4ae1873165\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2253bab71d535b61f7489679566abc1acf210e0d5f69773fedbb02befab17c2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2253bab71d535b61f7489679566abc1acf210e0d5f69773fedbb02befab17c2d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:21Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8fa71a988b649937f137ffb7533456570bf4fcd199134424fc7bff2b62335f1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fa71a988b649937f137ffb7533456570bf4fcd199134424fc7bff2b62335f1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:16Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:02Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:02 crc kubenswrapper[4853]: I1122 07:11:02.517488 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9c07602dc6c3b45809dd76a7079002e54e0bae24f7e3bc3470b463389303e74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:02Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:02 crc kubenswrapper[4853]: I1122 07:11:02.531130 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:02Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:02 crc kubenswrapper[4853]: I1122 07:11:02.545717 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:02Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:02 crc kubenswrapper[4853]: I1122 07:11:02.558785 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rvgxj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbbe3472-17cc-48dd-8e46-393b00149429\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ct6dm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rvgxj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:02Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:02 crc kubenswrapper[4853]: I1122 07:11:02.566692 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:02 crc kubenswrapper[4853]: I1122 07:11:02.566731 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:02 crc kubenswrapper[4853]: I1122 07:11:02.566740 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:02 crc kubenswrapper[4853]: I1122 07:11:02.566773 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:02 crc kubenswrapper[4853]: I1122 07:11:02.566784 4853 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:02Z","lastTransitionTime":"2025-11-22T07:11:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:02 crc kubenswrapper[4853]: I1122 07:11:02.572671 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ckn94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f60b37f-d6f5-4145-a3e7-cfe92fca6d77\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ckn94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:02Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:02 crc kubenswrapper[4853]: I1122 07:11:02.586960 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"19e6b5d3-a090-47b9-bddc-dd2aaccd4ff4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96b61660fcd78f2f81eab2c1a0cb32018a5b32d49581167e33e7dca8ba5ddf2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a0d61ce2e58ce8d9abdd1fb9688722d8e25f8354553aef76b8cd4b3897135a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21be0b3346d0dd20c21a867b8fbd30e6291d09bd1745febbc2face7610f6d831\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6dbbb1c22bd116e5913c6bc774b4705f0e4342861ceae30dc0f53c30d6fa39f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6dbbb1c22bd116e5913c6bc774b4705f0e4342861ceae30dc0f53c30d6fa39f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T07:10:53Z\\\",\\\"message\\\":\\\" envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1122 07:10:53.494379 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1122 07:10:53.495005 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1122 07:10:53.495036 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1122 07:10:53.495110 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1122 07:10:53.495131 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1122 07:10:53.496216 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1763795444\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1763795444\\\\\\\\\\\\\\\" (2025-11-22 06:10:44 +0000 UTC to 2026-11-22 06:10:44 +0000 UTC (now=2025-11-22 07:10:53.49617265 +0000 UTC))\\\\\\\"\\\\nI1122 07:10:53.496261 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1122 07:10:53.496336 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1122 07:10:53.497036 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1122 07:10:53.497137 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1122 07:10:53.498721 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI1122 07:10:53.500291 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF1122 07:10:53.501305 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:32Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c1ee716027cb8b663ee9faec3faed873df5079418b9b46574cf3b3a74048eef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13ca1fba1325e0ae8556def3ba0c98aa0aff6e3b9ed2aab66a5c950d2bace44e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13ca1fba1325e0ae8556def3ba0c98aa0aff6e3b9ed2aab66a5c950d2bace44e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:02Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:02 crc kubenswrapper[4853]: I1122 07:11:02.598330 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mlpz8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c09200ba-013f-45e3-b581-8523557344b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvk87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mlpz8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:02Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:02 crc kubenswrapper[4853]: I1122 07:11:02.617560 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqtsz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"893f7e02-580a-4093-ab42-ea73ffffcfe6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pqtsz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:02Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:02 crc kubenswrapper[4853]: I1122 07:11:02.625937 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 
22 07:11:02 crc kubenswrapper[4853]: I1122 07:11:02.626281 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/120cba0a-6e0b-40b3-8c15-46e7ff7c8641-serviceca\") pod \"node-ca-9nx9m\" (UID: \"120cba0a-6e0b-40b3-8c15-46e7ff7c8641\") " pod="openshift-image-registry/node-ca-9nx9m" Nov 22 07:11:02 crc kubenswrapper[4853]: I1122 07:11:02.626365 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4p7q\" (UniqueName: \"kubernetes.io/projected/120cba0a-6e0b-40b3-8c15-46e7ff7c8641-kube-api-access-c4p7q\") pod \"node-ca-9nx9m\" (UID: \"120cba0a-6e0b-40b3-8c15-46e7ff7c8641\") " pod="openshift-image-registry/node-ca-9nx9m" Nov 22 07:11:02 crc kubenswrapper[4853]: I1122 07:11:02.626432 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/120cba0a-6e0b-40b3-8c15-46e7ff7c8641-host\") pod \"node-ca-9nx9m\" (UID: \"120cba0a-6e0b-40b3-8c15-46e7ff7c8641\") " pod="openshift-image-registry/node-ca-9nx9m" Nov 22 07:11:02 crc kubenswrapper[4853]: E1122 07:11:02.626725 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:11:10.626683853 +0000 UTC m=+69.467306499 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:11:02 crc kubenswrapper[4853]: I1122 07:11:02.630565 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9nx9m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120cba0a-6e0b-40b3-8c15-46e7ff7c8641\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:02Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4p7q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9nx9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:02Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:02 crc kubenswrapper[4853]: I1122 07:11:02.646239 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b27716f9e6c912f968c4a340199cdc2dde0262d86b4832f84cae8daefa831909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a2fb699e2e8a016fa383c1d4ed2d4f0d69c26b79fb2f0dd735a84de2fd5a23d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc
32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:02Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:02 crc kubenswrapper[4853]: I1122 07:11:02.658925 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://347fc1ee4909acd862f23a9f25b4ac7ab7271ff03f0f726479d9b711820970c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:02Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:02 crc kubenswrapper[4853]: I1122 
07:11:02.669666 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:02 crc kubenswrapper[4853]: I1122 07:11:02.669702 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:02 crc kubenswrapper[4853]: I1122 07:11:02.669712 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:02 crc kubenswrapper[4853]: I1122 07:11:02.669726 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:02 crc kubenswrapper[4853]: I1122 07:11:02.669739 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:02Z","lastTransitionTime":"2025-11-22T07:11:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:02 crc kubenswrapper[4853]: I1122 07:11:02.671428 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"476c875a-2b87-419a-8042-0ba059620fd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zkhrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zkhrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fflvd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:02Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:02 crc kubenswrapper[4853]: I1122 07:11:02.682936 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9003355-0dc6-42ad-950d-a7884e333f4c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f0019c0473b7ccc974dc1659a7ae0565b39d69af17094c420625387ecc9d7f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94704fd95e75549ea29613f00da78d4517cc64896cf1274c9c6f784a61f4ad4a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T07:10:53Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI1122 07:10:26.514729 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI1122 07:10:26.525241 1 observer_polling.go:159] Starting file observer\\\\nI1122 07:10:26.883412 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI1122 07:10:26.897202 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI1122 07:10:50.420853 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nI1122 07:10:51.525898 1 observer_polling.go:162] Shutting down file observer\\\\nF1122 07:10:53.463148 1 cmd.go:179] failed checking apiserver connectivity: client rate limiter Wait returned an error: context 
canceled\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:21Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9736c38776f22909b8e8a832683d1b881f020257f7183397286e0dd23c973a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30f542bc05dc23bc8dff5029894d08a6fea8333de31a66432814b7bbc412c4fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5d8d77818159ca98019aac809db4b1b2460723bc56b9b6728eec11faeefd026\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:02Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:02 crc kubenswrapper[4853]: I1122 07:11:02.695586 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:02Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:02 crc kubenswrapper[4853]: I1122 07:11:02.727612 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:11:02 crc kubenswrapper[4853]: I1122 07:11:02.727683 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 07:11:02 crc kubenswrapper[4853]: I1122 07:11:02.727724 4853 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 07:11:02 crc kubenswrapper[4853]: I1122 07:11:02.727782 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:11:02 crc kubenswrapper[4853]: I1122 07:11:02.727812 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/120cba0a-6e0b-40b3-8c15-46e7ff7c8641-serviceca\") pod \"node-ca-9nx9m\" (UID: \"120cba0a-6e0b-40b3-8c15-46e7ff7c8641\") " pod="openshift-image-registry/node-ca-9nx9m" Nov 22 07:11:02 crc kubenswrapper[4853]: E1122 07:11:02.727818 4853 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 22 07:11:02 crc kubenswrapper[4853]: E1122 07:11:02.727931 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-22 07:11:10.727908787 +0000 UTC m=+69.568531413 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 22 07:11:02 crc kubenswrapper[4853]: I1122 07:11:02.727843 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c4p7q\" (UniqueName: \"kubernetes.io/projected/120cba0a-6e0b-40b3-8c15-46e7ff7c8641-kube-api-access-c4p7q\") pod \"node-ca-9nx9m\" (UID: \"120cba0a-6e0b-40b3-8c15-46e7ff7c8641\") " pod="openshift-image-registry/node-ca-9nx9m" Nov 22 07:11:02 crc kubenswrapper[4853]: E1122 07:11:02.727962 4853 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 22 07:11:02 crc kubenswrapper[4853]: E1122 07:11:02.728004 4853 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 22 07:11:02 crc kubenswrapper[4853]: E1122 07:11:02.728019 4853 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 22 07:11:02 crc kubenswrapper[4853]: I1122 07:11:02.728030 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: 
\"kubernetes.io/host-path/120cba0a-6e0b-40b3-8c15-46e7ff7c8641-host\") pod \"node-ca-9nx9m\" (UID: \"120cba0a-6e0b-40b3-8c15-46e7ff7c8641\") " pod="openshift-image-registry/node-ca-9nx9m" Nov 22 07:11:02 crc kubenswrapper[4853]: E1122 07:11:02.728093 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-22 07:11:10.728073481 +0000 UTC m=+69.568696107 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 22 07:11:02 crc kubenswrapper[4853]: E1122 07:11:02.728098 4853 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 22 07:11:02 crc kubenswrapper[4853]: E1122 07:11:02.728152 4853 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 22 07:11:02 crc kubenswrapper[4853]: I1122 07:11:02.728158 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/120cba0a-6e0b-40b3-8c15-46e7ff7c8641-host\") pod \"node-ca-9nx9m\" (UID: \"120cba0a-6e0b-40b3-8c15-46e7ff7c8641\") " pod="openshift-image-registry/node-ca-9nx9m" Nov 22 07:11:02 crc kubenswrapper[4853]: E1122 07:11:02.728173 4853 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 22 07:11:02 crc kubenswrapper[4853]: E1122 07:11:02.728109 4853 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 22 07:11:02 crc kubenswrapper[4853]: E1122 07:11:02.728246 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-22 07:11:10.728225765 +0000 UTC m=+69.568848591 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 22 07:11:02 crc kubenswrapper[4853]: E1122 07:11:02.728274 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2025-11-22 07:11:10.728263186 +0000 UTC m=+69.568886052 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 22 07:11:02 crc kubenswrapper[4853]: I1122 07:11:02.728997 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/120cba0a-6e0b-40b3-8c15-46e7ff7c8641-serviceca\") pod \"node-ca-9nx9m\" (UID: \"120cba0a-6e0b-40b3-8c15-46e7ff7c8641\") " pod="openshift-image-registry/node-ca-9nx9m" Nov 22 07:11:02 crc kubenswrapper[4853]: I1122 07:11:02.746176 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c4p7q\" (UniqueName: \"kubernetes.io/projected/120cba0a-6e0b-40b3-8c15-46e7ff7c8641-kube-api-access-c4p7q\") pod \"node-ca-9nx9m\" (UID: \"120cba0a-6e0b-40b3-8c15-46e7ff7c8641\") " pod="openshift-image-registry/node-ca-9nx9m" Nov 22 07:11:02 crc kubenswrapper[4853]: I1122 07:11:02.747080 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:11:02 crc kubenswrapper[4853]: E1122 07:11:02.747217 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 07:11:02 crc kubenswrapper[4853]: I1122 07:11:02.747304 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 07:11:02 crc kubenswrapper[4853]: I1122 07:11:02.747300 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 07:11:02 crc kubenswrapper[4853]: E1122 07:11:02.747461 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 07:11:02 crc kubenswrapper[4853]: E1122 07:11:02.747633 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 07:11:02 crc kubenswrapper[4853]: I1122 07:11:02.773005 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:02 crc kubenswrapper[4853]: I1122 07:11:02.773066 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:02 crc kubenswrapper[4853]: I1122 07:11:02.773086 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:02 crc kubenswrapper[4853]: I1122 07:11:02.773110 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:02 crc kubenswrapper[4853]: I1122 07:11:02.773126 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:02Z","lastTransitionTime":"2025-11-22T07:11:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:02 crc kubenswrapper[4853]: I1122 07:11:02.793339 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-9nx9m" Nov 22 07:11:02 crc kubenswrapper[4853]: W1122 07:11:02.807479 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod120cba0a_6e0b_40b3_8c15_46e7ff7c8641.slice/crio-555fb9004de22aae4f93b780dcb7ef096bab261536890b630eb86a187aa7c6ad WatchSource:0}: Error finding container 555fb9004de22aae4f93b780dcb7ef096bab261536890b630eb86a187aa7c6ad: Status 404 returned error can't find the container with id 555fb9004de22aae4f93b780dcb7ef096bab261536890b630eb86a187aa7c6ad Nov 22 07:11:02 crc kubenswrapper[4853]: I1122 07:11:02.876312 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:02 crc kubenswrapper[4853]: I1122 07:11:02.876357 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:02 crc kubenswrapper[4853]: I1122 07:11:02.876370 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:02 crc kubenswrapper[4853]: I1122 07:11:02.876389 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:02 crc kubenswrapper[4853]: I1122 07:11:02.876403 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:02Z","lastTransitionTime":"2025-11-22T07:11:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:02 crc kubenswrapper[4853]: I1122 07:11:02.981657 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:02 crc kubenswrapper[4853]: I1122 07:11:02.981739 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:02 crc kubenswrapper[4853]: I1122 07:11:02.981765 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:02 crc kubenswrapper[4853]: I1122 07:11:02.981782 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:02 crc kubenswrapper[4853]: I1122 07:11:02.981795 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:02Z","lastTransitionTime":"2025-11-22T07:11:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:03 crc kubenswrapper[4853]: I1122 07:11:03.082299 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ckn94" event={"ID":"6f60b37f-d6f5-4145-a3e7-cfe92fca6d77","Type":"ContainerStarted","Data":"3adc38c2534c8026e260a63060426102130ca1bcbcb4e3741aaf4e87170a4c7e"} Nov 22 07:11:03 crc kubenswrapper[4853]: I1122 07:11:03.084141 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:03 crc kubenswrapper[4853]: I1122 07:11:03.084184 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:03 crc kubenswrapper[4853]: I1122 07:11:03.084193 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:03 crc kubenswrapper[4853]: I1122 07:11:03.084212 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:03 crc kubenswrapper[4853]: I1122 07:11:03.084224 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:03Z","lastTransitionTime":"2025-11-22T07:11:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:03 crc kubenswrapper[4853]: I1122 07:11:03.084732 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-mlpz8" event={"ID":"c09200ba-013f-45e3-b581-8523557344b8","Type":"ContainerStarted","Data":"3feee491b899c2b2f58330fe2e87ba5404deca4249c5d39a6d3aa08acd10ef29"} Nov 22 07:11:03 crc kubenswrapper[4853]: I1122 07:11:03.086370 4853 generic.go:334] "Generic (PLEG): container finished" podID="893f7e02-580a-4093-ab42-ea73ffffcfe6" containerID="a7b9396399dd9787c1addcb4e2e7697d78fee817d56cafd408b08b2b7d0cc5f5" exitCode=0 Nov 22 07:11:03 crc kubenswrapper[4853]: I1122 07:11:03.086439 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqtsz" event={"ID":"893f7e02-580a-4093-ab42-ea73ffffcfe6","Type":"ContainerDied","Data":"a7b9396399dd9787c1addcb4e2e7697d78fee817d56cafd408b08b2b7d0cc5f5"} Nov 22 07:11:03 crc kubenswrapper[4853]: I1122 07:11:03.089563 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-9nx9m" event={"ID":"120cba0a-6e0b-40b3-8c15-46e7ff7c8641","Type":"ContainerStarted","Data":"555fb9004de22aae4f93b780dcb7ef096bab261536890b630eb86a187aa7c6ad"} Nov 22 07:11:03 crc kubenswrapper[4853]: I1122 07:11:03.092857 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" event={"ID":"476c875a-2b87-419a-8042-0ba059620fd8","Type":"ContainerStarted","Data":"28cf28a4f0e05df5ad55eff2ab13e375fb1d5725f1d0e3b85c7c9cd785cf4453"} Nov 22 07:11:03 crc kubenswrapper[4853]: I1122 07:11:03.102741 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqtsz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"893f7e02-580a-4093-ab42-ea73ffffcfe6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pqtsz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:03Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:03 crc kubenswrapper[4853]: I1122 07:11:03.114639 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9nx9m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"120cba0a-6e0b-40b3-8c15-46e7ff7c8641\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:02Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:02Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4p7q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9nx9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:03Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:03 crc kubenswrapper[4853]: I1122 07:11:03.129111 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"19e6b5d3-a090-47b9-bddc-dd2aaccd4ff4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96b61660fcd78f2f81eab2c1a0cb32018a5b32d49581167e33e7dca8ba5ddf2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a0d61ce2e58ce8d9abdd1fb9688722d8e25f8354553aef76b8cd4b3897135a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21be0b3346d0dd20c21a867b8fbd30e6291d09bd1745febbc2face7610f6d831\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6dbbb1c22bd116e5913c6bc774b4705f0e4342861ceae30dc0f53c30d6fa39f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6dbbb1c22bd116e5913c6bc774b4705f0e4342861ceae30dc0f53c30d6fa39f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T07:10:53Z\\\",\\\"message\\\":\\\" envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" 
enabled=false\\\\nI1122 07:10:53.494379 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1122 07:10:53.495005 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1122 07:10:53.495036 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1122 07:10:53.495110 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1122 07:10:53.495131 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1122 07:10:53.496216 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1763795444\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1763795444\\\\\\\\\\\\\\\" (2025-11-22 06:10:44 +0000 UTC to 2026-11-22 06:10:44 +0000 UTC (now=2025-11-22 07:10:53.49617265 +0000 UTC))\\\\\\\"\\\\nI1122 07:10:53.496261 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1122 07:10:53.496336 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1122 07:10:53.497036 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1122 07:10:53.497137 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1122 07:10:53.498721 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI1122 07:10:53.500291 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF1122 07:10:53.501305 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:32Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c1ee716027cb8b663ee9faec3faed873df5079418b9b46574cf3b3a74048eef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13ca1fba1325e0ae8556def3ba0c98aa0aff6e3b9ed2aab66a5c950d2bace44e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13ca1fba1325e0ae8556def3ba0c98aa0aff6e3b9ed2aab66a5c950d2bace44e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:03Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:03 crc kubenswrapper[4853]: I1122 07:11:03.142823 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mlpz8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c09200ba-013f-45e3-b581-8523557344b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvk87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mlpz8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:03Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:03 crc kubenswrapper[4853]: I1122 07:11:03.154769 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://347fc1ee4909acd862f23a9f25b4ac7ab7271ff03f0f726479d9b711820970c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:03Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:03 crc kubenswrapper[4853]: I1122 07:11:03.165929 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"476c875a-2b87-419a-8042-0ba059620fd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zkhrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zkhrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fflvd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:03Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:03 crc kubenswrapper[4853]: I1122 07:11:03.178395 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b27716f9e6c912f968c4a340199cdc2dde0262d86b4832f84cae8daefa831909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a2fb699e2e8a016fa383c1d4ed2d4f0d69c26b79fb2f0dd735a84de2fd5a23d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:03Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:03 crc kubenswrapper[4853]: I1122 07:11:03.187366 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:03 crc kubenswrapper[4853]: I1122 07:11:03.187424 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:03 crc kubenswrapper[4853]: I1122 07:11:03.187437 4853 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Nov 22 07:11:03 crc kubenswrapper[4853]: I1122 07:11:03.187457 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:03 crc kubenswrapper[4853]: I1122 07:11:03.187471 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:03Z","lastTransitionTime":"2025-11-22T07:11:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:03 crc kubenswrapper[4853]: I1122 07:11:03.195433 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:03Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:03 crc kubenswrapper[4853]: I1122 07:11:03.211514 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9003355-0dc6-42ad-950d-a7884e333f4c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f0019c0473b7ccc974dc1659a7ae0565b39d69af17094c420625387ecc9d7f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94704fd95e75549ea29613f00da78d4517cc64896cf1274c9c6f784a61f4ad4a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T07:10:53Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI1122 07:10:26.514729 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. 
The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI1122 07:10:26.525241 1 observer_polling.go:159] Starting file observer\\\\nI1122 07:10:26.883412 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI1122 07:10:26.897202 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI1122 07:10:50.420853 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nI1122 07:10:51.525898 1 observer_polling.go:162] Shutting down file observer\\\\nF1122 07:10:53.463148 1 cmd.go:179] failed checking apiserver connectivity: client rate limiter Wait returned an error: context canceled\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:21Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9736c38776f22909b8e8a832683d1b881f020257f7183397286e0dd23c973a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30f542bc05dc23bc8dff5029894d08a6fea8333de31a66432814b7bbc412c4fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5d8d77818159ca98019aac809db4b1b2460723bc56b9b6728eec11faeefd026\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffa
c0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:03Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:03 crc kubenswrapper[4853]: I1122 07:11:03.228194 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:03Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:03 crc kubenswrapper[4853]: I1122 07:11:03.246045 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:03Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:03 crc kubenswrapper[4853]: I1122 07:11:03.263804 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rvgxj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbbe3472-17cc-48dd-8e46-393b00149429\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ct6dm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rvgxj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:03Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:03 crc kubenswrapper[4853]: I1122 07:11:03.282096 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ckn94" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f60b37f-d6f5-4145-a3e7-cfe92fca6d77\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3adc38c2534c8026e260a63060426102130ca1bcbcb4e3741aaf4e87170a4c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":
\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ckn94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:03Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:03 crc kubenswrapper[4853]: I1122 07:11:03.289891 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:03 crc kubenswrapper[4853]: I1122 07:11:03.289945 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:03 crc kubenswrapper[4853]: I1122 07:11:03.289956 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:03 crc kubenswrapper[4853]: I1122 07:11:03.289976 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:03 crc kubenswrapper[4853]: I1122 07:11:03.289989 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:03Z","lastTransitionTime":"2025-11-22T07:11:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:03 crc kubenswrapper[4853]: I1122 07:11:03.306278 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8dcea4b-e887-437a-972d-e6e10d41f725\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://753693005f8490e9928780fd4f230a1c8aba453e1ccbca71e70299799bfa46ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64d65e4d54cc8971b6f6349f7d32243b7faa5bb5974b22ed904a36fa34bc69fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0ba60480f038947ca4a2b9f963d38cb004714735d411218eaf6231614c4e21f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c83df4b798499177cfbd1372a939649b6cebb71b50b695af5ba7815cf450669\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed9e89cbcdfc1ac8430f617ee31a367c03be0a407da8af967930fa821e6a12a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://742d02f24a7096b76f82453ab15e531d4a378b4bbf0bb36eb1a1cb4ae1873165\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://742d02f24a7096b76f82453ab15e531d4a378b4bbf0bb36eb1a1cb4ae1873165\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2253bab71d535b61f7489679566abc1acf210e0d5f69773fedbb02befab17c2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2253bab71d535b61f7489679566abc1acf210e0d5f69773fedbb02befab17c2d\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2025-11-22T07:10:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:21Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8fa71a988b649937f137ffb7533456570bf4fcd199134424fc7bff2b62335f1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fa71a988b649937f137ffb7533456570bf4fcd199134424fc7bff2b62335f1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:16Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:03Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:03 crc kubenswrapper[4853]: I1122 07:11:03.320152 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9c07602dc6c3b45809dd76a7079002e54e0bae24f7e3bc3470b463389303e74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:03Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:03 crc kubenswrapper[4853]: I1122 07:11:03.338464 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:03Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:03 crc kubenswrapper[4853]: I1122 07:11:03.354150 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9003355-0dc6-42ad-950d-a7884e333f4c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f0019c0473b7ccc974dc1659a7ae0565b39d69af17094c420625387ecc9d7f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94704fd95e75549ea29613f00da78d4517cc64896cf1274c9c6f784a61f4ad4a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T07:10:53Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI1122 07:10:26.514729 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. 
The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI1122 07:10:26.525241 1 observer_polling.go:159] Starting file observer\\\\nI1122 07:10:26.883412 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI1122 07:10:26.897202 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI1122 07:10:50.420853 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nI1122 07:10:51.525898 1 observer_polling.go:162] Shutting down file observer\\\\nF1122 07:10:53.463148 1 cmd.go:179] failed checking apiserver connectivity: client rate limiter Wait returned an error: context canceled\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:21Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9736c38776f22909b8e8a832683d1b881f020257f7183397286e0dd23c973a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30f542bc05dc23bc8dff5029894d08a6fea8333de31a66432814b7bbc412c4fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5d8d77818159ca98019aac809db4b1b2460723bc56b9b6728eec11faeefd026\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffa
c0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:03Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:03 crc kubenswrapper[4853]: I1122 07:11:03.367280 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9c07602dc6c3b45809dd76a7079002e54e0bae24f7e3bc3470b463389303e74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:03Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:03 crc kubenswrapper[4853]: I1122 07:11:03.382548 4853 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:03Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:03 crc kubenswrapper[4853]: I1122 07:11:03.392332 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:03 crc kubenswrapper[4853]: I1122 07:11:03.392379 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:03 crc kubenswrapper[4853]: I1122 07:11:03.392391 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:03 crc kubenswrapper[4853]: I1122 07:11:03.392410 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:03 crc kubenswrapper[4853]: I1122 07:11:03.392422 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:03Z","lastTransitionTime":"2025-11-22T07:11:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:03 crc kubenswrapper[4853]: I1122 07:11:03.397295 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:03Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:03 crc kubenswrapper[4853]: I1122 07:11:03.411388 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rvgxj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbbe3472-17cc-48dd-8e46-393b00149429\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5acc4b664aaf1d5b0471beb25334e65152bc3b9907f911f90e4e7899aaa1ce8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ct6dm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rvgxj\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:03Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:03 crc kubenswrapper[4853]: I1122 07:11:03.426723 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ckn94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f60b37f-d6f5-4145-a3e7-cfe92fca6d77\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3adc38c2534c8026e260a63060426102130ca1bcbcb4e3741aaf4e87170a4c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"
Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\
"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ckn94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:03Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:03 crc kubenswrapper[4853]: I1122 07:11:03.450925 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8dcea4b-e887-437a-972d-e6e10d41f725\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://753693005f8490e9928780fd4f230a1c8aba453e1ccbca71e70299799bfa46ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]
},{\\\"containerID\\\":\\\"cri-o://64d65e4d54cc8971b6f6349f7d32243b7faa5bb5974b22ed904a36fa34bc69fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0ba60480f038947ca4a2b9f963d38cb004714735d411218eaf6231614c4e21f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c83df4b798499177cfbd1372a939649b6cebb71b50b695af5ba7815cf450669\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed9e89cbcdfc1ac8430f617ee31a367c03be0a407da8af967930fa821e6a12a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://742d02f24a7096b76f82453ab15e
531d4a378b4bbf0bb36eb1a1cb4ae1873165\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://742d02f24a7096b76f82453ab15e531d4a378b4bbf0bb36eb1a1cb4ae1873165\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2253bab71d535b61f7489679566abc1acf210e0d5f69773fedbb02befab17c2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2253bab71d535b61f7489679566abc1acf210e0d5f69773fedbb02befab17c2d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:21Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8fa71a988b649937f137ffb7533456570bf4fcd199134424fc7bff2b62335f1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fa71a988b649937f137ffb7533456570bf4fcd199134424fc7bff2b62335f1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:16Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:03Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:03 crc kubenswrapper[4853]: I1122 07:11:03.464020 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mlpz8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c09200ba-013f-45e3-b581-8523557344b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3feee491b899c2b2f58330fe2e87ba5404deca4249c5d39a6d3aa08acd10ef29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvk87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mlpz8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:03Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:03 crc kubenswrapper[4853]: I1122 07:11:03.483737 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqtsz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"893f7e02-580a-4093-ab42-ea73ffffcfe6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7b9396399dd9787c1addcb4e2e7697d78fee817d56cafd408b08b2b7d0cc5f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7b9396399dd9787c1addcb4e2e7697d78fee817d56cafd408b08b2b7d0cc5f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00
Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pqtsz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:03Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:03 crc kubenswrapper[4853]: I1122 07:11:03.495596 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:03 crc kubenswrapper[4853]: I1122 07:11:03.495646 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:03 crc kubenswrapper[4853]: I1122 07:11:03.495657 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:03 crc kubenswrapper[4853]: I1122 07:11:03.495675 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:03 crc kubenswrapper[4853]: I1122 07:11:03.495686 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:03Z","lastTransitionTime":"2025-11-22T07:11:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:03 crc kubenswrapper[4853]: I1122 07:11:03.499474 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9nx9m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120cba0a-6e0b-40b3-8c15-46e7ff7c8641\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:02Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4p7q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9nx9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:03Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:03 crc kubenswrapper[4853]: I1122 07:11:03.513662 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"19e6b5d3-a090-47b9-bddc-dd2aaccd4ff4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96b61660fcd78f2f81eab2c1a0cb32018a5b32d49581167e33e7dca8ba5ddf2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a0d61ce2e58ce8d9abdd1fb9688722d8e25f8354553aef76b8cd4b3897135a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21be0b3346d0dd20c21a867b8fbd30e6291d09bd1745febbc2face7610f6d831\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6dbbb1c22bd116e5913c6bc774b4705f0e4342861ceae30dc0f53c30d6fa39f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6dbbb1c22bd116e5913c6bc774b4705f0e4342861ceae30dc0f53c30d6fa39f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T07:10:53Z\\\",\\\"message\\\":\\\" envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" 
enabled=false\\\\nI1122 07:10:53.494379 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1122 07:10:53.495005 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1122 07:10:53.495036 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1122 07:10:53.495110 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1122 07:10:53.495131 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1122 07:10:53.496216 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1763795444\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1763795444\\\\\\\\\\\\\\\" (2025-11-22 06:10:44 +0000 UTC to 2026-11-22 06:10:44 +0000 UTC (now=2025-11-22 07:10:53.49617265 +0000 UTC))\\\\\\\"\\\\nI1122 07:10:53.496261 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1122 07:10:53.496336 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1122 07:10:53.497036 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1122 07:10:53.497137 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1122 07:10:53.498721 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI1122 07:10:53.500291 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF1122 07:10:53.501305 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:32Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c1ee716027cb8b663ee9faec3faed873df5079418b9b46574cf3b3a74048eef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13ca1fba1325e0ae8556def3ba0c98aa0aff6e3b9ed2aab66a5c950d2bace44e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13ca1fba1325e0ae8556def3ba0c98aa0aff6e3b9ed2aab66a5c950d2bace44e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:03Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:03 crc kubenswrapper[4853]: I1122 07:11:03.528096 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b27716f9e6c912f968c4a340199cdc2dde0262d86b4832f84cae8daefa831909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a2fb699e2e8a016fa383c1d4ed2d4f0d69c26b79fb2f0dd735a84de2fd5a23d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:03Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:03 crc kubenswrapper[4853]: I1122 07:11:03.544070 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://347fc1ee4909acd862f23a9f25b4ac7ab7271ff03f0f726479d9b711820970c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:03Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:03 crc kubenswrapper[4853]: I1122 07:11:03.558122 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"476c875a-2b87-419a-8042-0ba059620fd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zkhrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zkhrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fflvd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:03Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:03 crc kubenswrapper[4853]: I1122 07:11:03.598449 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:03 crc kubenswrapper[4853]: I1122 07:11:03.598499 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:03 crc kubenswrapper[4853]: I1122 07:11:03.598513 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:03 crc kubenswrapper[4853]: I1122 07:11:03.598533 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:03 crc kubenswrapper[4853]: I1122 07:11:03.598574 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:03Z","lastTransitionTime":"2025-11-22T07:11:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Nov 22 07:11:03 crc kubenswrapper[4853]: I1122 07:11:03.707851 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 22 07:11:03 crc kubenswrapper[4853]: I1122 07:11:03.707895 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 22 07:11:03 crc kubenswrapper[4853]: I1122 07:11:03.707910 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 22 07:11:03 crc kubenswrapper[4853]: I1122 07:11:03.707928 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 22 07:11:03 crc kubenswrapper[4853]: I1122 07:11:03.707938 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:03Z","lastTransitionTime":"2025-11-22T07:11:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 22 07:11:03 crc kubenswrapper[4853]: I1122 07:11:03.811083 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 22 07:11:03 crc kubenswrapper[4853]: I1122 07:11:03.811129 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 22 07:11:03 crc kubenswrapper[4853]: I1122 07:11:03.811143 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 22 07:11:03 crc kubenswrapper[4853]: I1122 07:11:03.811168 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 22 07:11:03 crc kubenswrapper[4853]: I1122 07:11:03.811190 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:03Z","lastTransitionTime":"2025-11-22T07:11:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 22 07:11:03 crc kubenswrapper[4853]: I1122 07:11:03.914854 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 22 07:11:03 crc kubenswrapper[4853]: I1122 07:11:03.914902 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 22 07:11:03 crc kubenswrapper[4853]: I1122 07:11:03.914913 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 22 07:11:03 crc kubenswrapper[4853]: I1122 07:11:03.914936 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 22 07:11:03 crc kubenswrapper[4853]: I1122 07:11:03.914950 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:03Z","lastTransitionTime":"2025-11-22T07:11:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 22 07:11:04 crc kubenswrapper[4853]: I1122 07:11:04.018380 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 22 07:11:04 crc kubenswrapper[4853]: I1122 07:11:04.018462 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 22 07:11:04 crc kubenswrapper[4853]: I1122 07:11:04.018486 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 22 07:11:04 crc kubenswrapper[4853]: I1122 07:11:04.018511 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 22 07:11:04 crc kubenswrapper[4853]: I1122 07:11:04.018525 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:04Z","lastTransitionTime":"2025-11-22T07:11:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 22 07:11:04 crc kubenswrapper[4853]: I1122 07:11:04.108379 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" event={"ID":"476c875a-2b87-419a-8042-0ba059620fd8","Type":"ContainerStarted","Data":"babd614ecd140db7467335933695fcc122380ef2c6a3d4ae6e89b0ed29e87d69"}
Nov 22 07:11:04 crc kubenswrapper[4853]: I1122 07:11:04.121165 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 22 07:11:04 crc kubenswrapper[4853]: I1122 07:11:04.121215 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 22 07:11:04 crc kubenswrapper[4853]: I1122 07:11:04.121229 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 22 07:11:04 crc kubenswrapper[4853]: I1122 07:11:04.121251 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 22 07:11:04 crc kubenswrapper[4853]: I1122 07:11:04.121264 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:04Z","lastTransitionTime":"2025-11-22T07:11:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 22 07:11:04 crc kubenswrapper[4853]: I1122 07:11:04.225971 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 22 07:11:04 crc kubenswrapper[4853]: I1122 07:11:04.226017 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 22 07:11:04 crc kubenswrapper[4853]: I1122 07:11:04.226026 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 22 07:11:04 crc kubenswrapper[4853]: I1122 07:11:04.226042 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 22 07:11:04 crc kubenswrapper[4853]: I1122 07:11:04.226052 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:04Z","lastTransitionTime":"2025-11-22T07:11:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 22 07:11:04 crc kubenswrapper[4853]: I1122 07:11:04.329369 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 22 07:11:04 crc kubenswrapper[4853]: I1122 07:11:04.329447 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 22 07:11:04 crc kubenswrapper[4853]: I1122 07:11:04.329469 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 22 07:11:04 crc kubenswrapper[4853]: I1122 07:11:04.329500 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 22 07:11:04 crc kubenswrapper[4853]: I1122 07:11:04.329520 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:04Z","lastTransitionTime":"2025-11-22T07:11:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 22 07:11:04 crc kubenswrapper[4853]: I1122 07:11:04.432316 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 22 07:11:04 crc kubenswrapper[4853]: I1122 07:11:04.432361 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 22 07:11:04 crc kubenswrapper[4853]: I1122 07:11:04.432373 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 22 07:11:04 crc kubenswrapper[4853]: I1122 07:11:04.432391 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 22 07:11:04 crc kubenswrapper[4853]: I1122 07:11:04.432403 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:04Z","lastTransitionTime":"2025-11-22T07:11:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 22 07:11:04 crc kubenswrapper[4853]: I1122 07:11:04.538698 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 22 07:11:04 crc kubenswrapper[4853]: I1122 07:11:04.538778 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 22 07:11:04 crc kubenswrapper[4853]: I1122 07:11:04.538793 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 22 07:11:04 crc kubenswrapper[4853]: I1122 07:11:04.538816 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 22 07:11:04 crc kubenswrapper[4853]: I1122 07:11:04.538832 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:04Z","lastTransitionTime":"2025-11-22T07:11:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 22 07:11:04 crc kubenswrapper[4853]: I1122 07:11:04.641301 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 22 07:11:04 crc kubenswrapper[4853]: I1122 07:11:04.641354 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 22 07:11:04 crc kubenswrapper[4853]: I1122 07:11:04.641364 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 22 07:11:04 crc kubenswrapper[4853]: I1122 07:11:04.641382 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 22 07:11:04 crc kubenswrapper[4853]: I1122 07:11:04.641393 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:04Z","lastTransitionTime":"2025-11-22T07:11:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 22 07:11:04 crc kubenswrapper[4853]: I1122 07:11:04.744134 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 22 07:11:04 crc kubenswrapper[4853]: I1122 07:11:04.744189 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 22 07:11:04 crc kubenswrapper[4853]: I1122 07:11:04.744200 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 22 07:11:04 crc kubenswrapper[4853]: I1122 07:11:04.744221 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 22 07:11:04 crc kubenswrapper[4853]: I1122 07:11:04.744235 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:04Z","lastTransitionTime":"2025-11-22T07:11:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 22 07:11:04 crc kubenswrapper[4853]: I1122 07:11:04.747361 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 22 07:11:04 crc kubenswrapper[4853]: I1122 07:11:04.747397 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 22 07:11:04 crc kubenswrapper[4853]: I1122 07:11:04.747407 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 22 07:11:04 crc kubenswrapper[4853]: E1122 07:11:04.747486 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 22 07:11:04 crc kubenswrapper[4853]: E1122 07:11:04.747577 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 22 07:11:04 crc kubenswrapper[4853]: E1122 07:11:04.747740 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 22 07:11:04 crc kubenswrapper[4853]: I1122 07:11:04.846438 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 22 07:11:04 crc kubenswrapper[4853]: I1122 07:11:04.846513 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 22 07:11:04 crc kubenswrapper[4853]: I1122 07:11:04.846537 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 22 07:11:04 crc kubenswrapper[4853]: I1122 07:11:04.846570 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 22 07:11:04 crc kubenswrapper[4853]: I1122 07:11:04.846595 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:04Z","lastTransitionTime":"2025-11-22T07:11:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 22 07:11:04 crc kubenswrapper[4853]: I1122 07:11:04.949553 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 22 07:11:04 crc kubenswrapper[4853]: I1122 07:11:04.949607 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 22 07:11:04 crc kubenswrapper[4853]: I1122 07:11:04.949617 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 22 07:11:04 crc kubenswrapper[4853]: I1122 07:11:04.949638 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 22 07:11:04 crc kubenswrapper[4853]: I1122 07:11:04.949651 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:04Z","lastTransitionTime":"2025-11-22T07:11:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 22 07:11:05 crc kubenswrapper[4853]: I1122 07:11:05.052184 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 22 07:11:05 crc kubenswrapper[4853]: I1122 07:11:05.052232 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 22 07:11:05 crc kubenswrapper[4853]: I1122 07:11:05.052247 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 22 07:11:05 crc kubenswrapper[4853]: I1122 07:11:05.052268 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 22 07:11:05 crc kubenswrapper[4853]: I1122 07:11:05.052279 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:05Z","lastTransitionTime":"2025-11-22T07:11:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:05 crc kubenswrapper[4853]: I1122 07:11:05.113228 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqtsz" event={"ID":"893f7e02-580a-4093-ab42-ea73ffffcfe6","Type":"ContainerStarted","Data":"1a08eb68f9d8b071226ac64744097fe0097ab78f86242872c4e0aacdd213adbc"} Nov 22 07:11:05 crc kubenswrapper[4853]: I1122 07:11:05.114898 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-9nx9m" event={"ID":"120cba0a-6e0b-40b3-8c15-46e7ff7c8641","Type":"ContainerStarted","Data":"c7b7b92862fc6cca812cb5c0aa9b320fbbd11f8b458d591b56cacab985a79edd"} Nov 22 07:11:05 crc kubenswrapper[4853]: I1122 07:11:05.125556 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9nx9m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120cba0a-6e0b-40b3-8c15-46e7ff7c8641\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7b7b92862fc6cca812cb5c0aa9b320fbbd11f8b458d591b56cacab985a79edd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4p7q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9nx9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:05Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:05 crc kubenswrapper[4853]: I1122 07:11:05.150890 4853 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"19e6b5d3-a090-47b9-bddc-dd2aaccd4ff4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96b61660fcd78f2f81eab2c1a0cb32018a5b32d49581167e33e7dca8ba5ddf2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a0d61ce2e58ce8d9abdd1fb9688722d8e25f8354553aef76b8cd4b3897135a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21be0b3346d0dd20c21a867b8fbd30e6291d09bd1745febbc2face7610f6d831\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc
/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6dbbb1c22bd116e5913c6bc774b4705f0e4342861ceae30dc0f53c30d6fa39f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6dbbb1c22bd116e5913c6bc774b4705f0e4342861ceae30dc0f53c30d6fa39f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T07:10:53Z\\\",\\\"message\\\":\\\" envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1122 07:10:53.494379 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1122 07:10:53.495005 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1122 07:10:53.495036 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1122 07:10:53.495110 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1122 07:10:53.495131 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1122 07:10:53.496216 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1763795444\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1763795444\\\\\\\\\\\\\\\" (2025-11-22 06:10:44 +0000 UTC to 2026-11-22 06:10:44 +0000 UTC (now=2025-11-22 07:10:53.49617265 +0000 UTC))\\\\\\\"\\\\nI1122 07:10:53.496261 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1122 07:10:53.496336 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1122 07:10:53.497036 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1122 07:10:53.497137 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1122 07:10:53.498721 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI1122 07:10:53.500291 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF1122 07:10:53.501305 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:32Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c1ee716027cb8b663ee9faec3faed873df5079418b9b46574cf3b3a74048eef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13ca1fba1325e0ae8556def3ba0c98aa0aff6e3b9ed2aab66a5c950d2bace44e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13ca1fba1325e0ae8556def3ba0c98aa0aff6e3b9ed2aab66a5c950d2bace44e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:05Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:05 crc kubenswrapper[4853]: I1122 07:11:05.154911 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:05 crc kubenswrapper[4853]: I1122 07:11:05.154972 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:05 crc kubenswrapper[4853]: I1122 07:11:05.154985 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:05 crc kubenswrapper[4853]: I1122 07:11:05.155004 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:05 crc kubenswrapper[4853]: I1122 07:11:05.155015 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:05Z","lastTransitionTime":"2025-11-22T07:11:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:05 crc kubenswrapper[4853]: I1122 07:11:05.175990 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mlpz8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c09200ba-013f-45e3-b581-8523557344b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3feee491b899c2b2f58330fe2e87ba5404deca4249c5d39a6d3aa08acd10ef29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvk87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mlpz8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:05Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:05 crc kubenswrapper[4853]: I1122 07:11:05.205454 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqtsz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"893f7e02-580a-4093-ab42-ea73ffffcfe6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7b9396399dd9787c1addcb4e2e7697d78fee817d56cafd408b08b2b7d0cc5f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7b9396399dd9787c1addcb4e2e7697d78fee817d56cafd408b08b2b7d0cc5f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pqtsz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:05Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:05 crc kubenswrapper[4853]: I1122 07:11:05.220793 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"476c875a-2b87-419a-8042-0ba059620fd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://babd614ecd140db7467335933695fcc122380ef2c6a3d4ae6e89b0ed29e87d69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zkhrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28cf28a4f0e05df5ad55eff2ab13e375fb1d5725f1d0e3b85c7c9cd785cf4453\\\",\\\"image\\\":\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zkhrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fflvd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:05Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:05 crc kubenswrapper[4853]: I1122 07:11:05.235047 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b27716f9e6c912f968c4a340199cdc2dde0262d86b4832f84cae8daefa831909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a2fb699e2e8a016fa383c1d4ed2d4f0d69c26b79fb2f0dd735a84de2fd5a23d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:05Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:05 crc kubenswrapper[4853]: I1122 07:11:05.253623 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://347fc1ee4909acd862f23a9f25b4ac7ab7271ff03f0f726479d9b711820970c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:05Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:05 crc kubenswrapper[4853]: I1122 07:11:05.257674 4853 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:05 crc kubenswrapper[4853]: I1122 07:11:05.257710 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:05 crc kubenswrapper[4853]: I1122 07:11:05.257721 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:05 crc kubenswrapper[4853]: I1122 07:11:05.257738 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:05 crc kubenswrapper[4853]: I1122 07:11:05.257765 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:05Z","lastTransitionTime":"2025-11-22T07:11:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:05 crc kubenswrapper[4853]: I1122 07:11:05.268469 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9003355-0dc6-42ad-950d-a7884e333f4c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f0019c0473b7ccc974dc1659a7ae0565b39d69af17094c420625387ecc9d7f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94704fd95e75549ea29613f00da78d4517cc64896cf1274c9c6f784a61f4ad4a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T07:10:53Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI1122 07:10:26.514729 1 
leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI1122 07:10:26.525241 1 observer_polling.go:159] Starting file observer\\\\nI1122 07:10:26.883412 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI1122 07:10:26.897202 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI1122 07:10:50.420853 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nI1122 07:10:51.525898 1 observer_polling.go:162] Shutting down file observer\\\\nF1122 07:10:53.463148 1 cmd.go:179] failed checking apiserver connectivity: client rate limiter Wait returned an error: context canceled\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:21Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9736c38776f22909b8e8a832683d1b881f020257f7183397286e0dd23c973a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30f542bc05dc23bc8dff5029894d08a6fea8333de31a66432814b7bbc412c4fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5d8d77818159ca98019aac809db4b1b2460723bc56b9b6728eec11faeefd026\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/opens
hift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:05Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:05 crc kubenswrapper[4853]: I1122 07:11:05.282125 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:05Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:05 crc kubenswrapper[4853]: I1122 07:11:05.294711 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:05Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:05 crc kubenswrapper[4853]: I1122 07:11:05.306863 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rvgxj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbbe3472-17cc-48dd-8e46-393b00149429\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5acc4b664aaf1d5b0471beb25334e65152bc3b9907f911f90e4e7899aaa1ce8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ct6dm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rvgxj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:05Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:05 crc kubenswrapper[4853]: I1122 07:11:05.321853 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ckn94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f60b37f-d6f5-4145-a3e7-cfe92fca6d77\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3adc38c2534c8026e260a63060426102130ca1bcbcb4e3741aaf4e87170a4c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"}
,{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ckn94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:05Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:05 crc kubenswrapper[4853]: I1122 07:11:05.343114 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8dcea4b-e887-437a-972d-e6e10d41f725\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://753693005f8490e9928780fd4f230a1c8aba453e1ccbca71e70299799bfa46ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64d65e4d54cc8971b6f6349f7d32243b7faa5bb5974b22ed904a36fa34bc69fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0ba60480f038947ca4a2b9f963d38cb004714735d411218eaf6231614c4e21f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c83df4b798499177cfbd1372a939649b6cebb7
1b50b695af5ba7815cf450669\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed9e89cbcdfc1ac8430f617ee31a367c03be0a407da8af967930fa821e6a12a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://742d02f24a7096b76f82453ab15e531d4a378b4bbf0bb36eb1a1cb4ae1873165\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://742d02f24a7096b76f82453ab15e531d4a378b4bbf0bb36eb1a1cb4ae1873165\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2253bab71d535b61f7489679566abc1acf210e0d5f69773fedbb02befab17c2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2253bab71d535b61f7489679566abc1acf210e0d5f69773fedbb02befab17c2d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:21Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8fa71a988b649937f137ffb7533456570bf4fcd199134424fc7bff2b62335f1e\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fa71a988b649937f137ffb7533456570bf4fcd199134424fc7bff2b62335f1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:16Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:05Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:05 crc kubenswrapper[4853]: I1122 07:11:05.356712 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9c07602dc6c3b45809dd76a7079002e54e0bae24f7e3bc3470b463389303e74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:05Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:05 crc kubenswrapper[4853]: I1122 07:11:05.360394 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:05 crc kubenswrapper[4853]: I1122 07:11:05.360438 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:05 crc kubenswrapper[4853]: I1122 07:11:05.360451 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:05 crc kubenswrapper[4853]: I1122 07:11:05.360472 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:05 crc kubenswrapper[4853]: I1122 07:11:05.360486 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:05Z","lastTransitionTime":"2025-11-22T07:11:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:05 crc kubenswrapper[4853]: I1122 07:11:05.371853 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:05Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:05 crc kubenswrapper[4853]: I1122 07:11:05.386240 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"19e6b5d3-a090-47b9-bddc-dd2aaccd4ff4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96b61660fcd78f2f81eab2c1a0cb32018a5b32d49581167e33e7dca8ba5ddf2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a0d61ce2e58ce8d9abdd1fb9688722d8e25f8354553aef76b8cd4b3897135a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21be0b3346d0dd20c21a867b8fbd30e6291d09bd1745febbc2face7610f6d831\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6dbbb1c22bd116e5913c6bc774b4705f0e4342861ceae30dc0f53c30d6fa39f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6dbbb1c22bd116e5913c6bc774b4705f0e4342861ceae30dc0f53c30d6fa39f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T07:10:53Z\\\",\\\"message\\\":\\\" envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" 
enabled=false\\\\nI1122 07:10:53.494379 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1122 07:10:53.495005 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1122 07:10:53.495036 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1122 07:10:53.495110 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1122 07:10:53.495131 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1122 07:10:53.496216 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1763795444\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1763795444\\\\\\\\\\\\\\\" (2025-11-22 06:10:44 +0000 UTC to 2026-11-22 06:10:44 +0000 UTC (now=2025-11-22 07:10:53.49617265 +0000 UTC))\\\\\\\"\\\\nI1122 07:10:53.496261 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1122 07:10:53.496336 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1122 07:10:53.497036 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1122 07:10:53.497137 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1122 07:10:53.498721 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI1122 07:10:53.500291 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF1122 07:10:53.501305 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:32Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c1ee716027cb8b663ee9faec3faed873df5079418b9b46574cf3b3a74048eef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13ca1fba1325e0ae8556def3ba0c98aa0aff6e3b9ed2aab66a5c950d2bace44e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13ca1fba1325e0ae8556def3ba0c98aa0aff6e3b9ed2aab66a5c950d2bace44e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:05Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:05 crc kubenswrapper[4853]: I1122 07:11:05.398082 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mlpz8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c09200ba-013f-45e3-b581-8523557344b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3feee491b899c2b2f58330fe2e87ba5404deca4249c5d39a6d3aa08acd10ef29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvk87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mlpz8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:05Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:05 crc kubenswrapper[4853]: I1122 07:11:05.419131 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqtsz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"893f7e02-580a-4093-ab42-ea73ffffcfe6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7b9396399dd9787c1addcb4e2e7697d78fee817d56cafd408b08b2b7d0cc5f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7b9396399dd9787c1addcb4e2e7697d78fee817d56cafd408b08b2b7d0cc5f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00
Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pqtsz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:05Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:05 crc kubenswrapper[4853]: I1122 07:11:05.431081 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9nx9m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120cba0a-6e0b-40b3-8c15-46e7ff7c8641\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7b7b92862fc6cca812cb5c0aa9b320fbbd11f8b458d591b56cacab985a79edd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4p7q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9nx9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:05Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:05 crc kubenswrapper[4853]: I1122 07:11:05.446618 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b27716f9e6c912f968c4a340199cdc2dde0262d86b4832f84cae8daefa831909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a2fb699e2e8a016fa383c1d4ed2d4f0d69c26b79fb2f0dd735a84de2fd5a23d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:05Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:05 crc kubenswrapper[4853]: I1122 07:11:05.459677 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://347fc1ee4909acd862f23a9f25b4ac7ab7271ff03f0f726479d9b711820970c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:05Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:05 crc kubenswrapper[4853]: I1122 07:11:05.464273 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:05 crc kubenswrapper[4853]: I1122 07:11:05.464503 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:05 crc kubenswrapper[4853]: I1122 07:11:05.464538 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:05 crc kubenswrapper[4853]: I1122 07:11:05.464566 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:05 crc kubenswrapper[4853]: I1122 07:11:05.464598 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:05Z","lastTransitionTime":"2025-11-22T07:11:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:05 crc kubenswrapper[4853]: I1122 07:11:05.479220 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"476c875a-2b87-419a-8042-0ba059620fd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://babd614ecd140db7467335933695fcc122380ef2c6a3d4ae6e89b0ed29e87d69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zkhrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28cf28a4f0e05df5ad55eff2ab13e375fb1d5725f1d0e3b85c7c9cd785cf4453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zkhrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fflvd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:05Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:05 crc kubenswrapper[4853]: I1122 07:11:05.495147 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9003355-0dc6-42ad-950d-a7884e333f4c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f0019c0473b7ccc974dc1659a7ae0565b39d69af17094c420625387ecc9d7f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94704fd95e75549ea29613f00da78d4517cc64896cf1274c9c6f784a61f4ad4a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T07:10:53Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI1122 07:10:26.514729 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI1122 07:10:26.525241 1 observer_polling.go:159] Starting file observer\\\\nI1122 07:10:26.883412 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI1122 07:10:26.897202 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI1122 07:10:50.420853 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nI1122 07:10:51.525898 1 observer_polling.go:162] Shutting down file observer\\\\nF1122 07:10:53.463148 1 cmd.go:179] failed checking apiserver connectivity: client rate limiter Wait returned an error: context canceled\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:21Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9736c38776f22909b8e8a832683d1b881f020257f7183397286e0dd23c973a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30f542bc05dc23bc8dff5029894d08a6fea8333de31a66432814b7bbc412c4fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5d8d77818159ca98019aac809db4b1b2460723bc56b9b6728eec11faeefd026\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manage
r-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:05Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:05 crc kubenswrapper[4853]: I1122 07:11:05.509448 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:05Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:05 crc kubenswrapper[4853]: I1122 07:11:05.529793 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8dcea4b-e887-437a-972d-e6e10d41f725\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://753693005f8490e9928780fd4f230a1c8aba453e1ccbca71e70299799bfa46ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64d65e4d54cc8971b6f6349f7d32243b7faa5bb5974b22ed904a36fa34bc69fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:30Z\\\"}},\\\"volumeM
ounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0ba60480f038947ca4a2b9f963d38cb004714735d411218eaf6231614c4e21f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c83df4b798499177cfbd1372a939649b6cebb71b50b695af5ba7815cf450669\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed9e89cbcdfc1ac8430f617ee31a367c03be0a407da8af967930fa821e6a12a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://742d02f24a7096b76f82453ab15e531d4a378b4bbf0bb36eb1a1cb4ae1873165\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://742d02f24a7096b76f82453ab15e531d4a378b4bbf0bb36eb1a1cb4ae1873165\\\",\\\"exitCode\\\":0,\\\"fi
nishedAt\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2253bab71d535b61f7489679566abc1acf210e0d5f69773fedbb02befab17c2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2253bab71d535b61f7489679566abc1acf210e0d5f69773fedbb02befab17c2d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:21Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8fa71a988b649937f137ffb7533456570bf4fcd199134424fc7bff2b62335f1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fa71a988b649937f137ffb7533456570bf4fcd199134424fc7bff2b62335f1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:16Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:05Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:05 crc kubenswrapper[4853]: I1122 07:11:05.543678 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9c07602dc6c3b45809dd76a7079002e54e0bae24f7e3bc3470b463389303e74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:05Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:05 crc kubenswrapper[4853]: I1122 07:11:05.556946 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:05Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:05 crc kubenswrapper[4853]: I1122 07:11:05.568226 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:05 crc kubenswrapper[4853]: I1122 07:11:05.568301 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:05 crc kubenswrapper[4853]: I1122 07:11:05.568316 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:05 crc kubenswrapper[4853]: I1122 07:11:05.568341 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:05 crc kubenswrapper[4853]: I1122 07:11:05.568354 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:05Z","lastTransitionTime":"2025-11-22T07:11:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:05 crc kubenswrapper[4853]: I1122 07:11:05.572570 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:05Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:05 crc kubenswrapper[4853]: I1122 07:11:05.586924 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rvgxj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbbe3472-17cc-48dd-8e46-393b00149429\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5acc4b664aaf1d5b0471beb25334e65152bc3b9907f911f90e4e7899aaa1ce8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ct6dm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rvgxj\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:05Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:05 crc kubenswrapper[4853]: I1122 07:11:05.608039 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ckn94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f60b37f-d6f5-4145-a3e7-cfe92fca6d77\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3adc38c2534c8026e260a63060426102130ca1bcbcb4e3741aaf4e87170a4c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"
Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\
"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ckn94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:05Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:05 crc kubenswrapper[4853]: I1122 07:11:05.671305 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:05 crc kubenswrapper[4853]: I1122 07:11:05.671500 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:05 crc kubenswrapper[4853]: I1122 07:11:05.671570 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:05 crc kubenswrapper[4853]: I1122 07:11:05.671649 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:05 crc kubenswrapper[4853]: I1122 07:11:05.671719 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:05Z","lastTransitionTime":"2025-11-22T07:11:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:05 crc kubenswrapper[4853]: I1122 07:11:05.766314 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ckn94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f60b37f-d6f5-4145-a3e7-cfe92fca6d77\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3adc38c2534c8026e260a63060426102130ca1bcbcb4e3741aaf4e87170a4c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":tru
e,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ckn94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:05Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:05 crc kubenswrapper[4853]: I1122 07:11:05.774970 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:05 crc kubenswrapper[4853]: I1122 07:11:05.775020 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:05 crc kubenswrapper[4853]: I1122 07:11:05.775034 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:05 crc kubenswrapper[4853]: I1122 07:11:05.775058 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:05 crc kubenswrapper[4853]: I1122 07:11:05.775073 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:05Z","lastTransitionTime":"2025-11-22T07:11:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:05 crc kubenswrapper[4853]: I1122 07:11:05.788581 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8dcea4b-e887-437a-972d-e6e10d41f725\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://753693005f8490e9928780fd4f230a1c8aba453e1ccbca71e70299799bfa46ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64d65e4d54cc8971b6f6349f7d32243b7faa5bb5974b22ed904a36fa34bc69fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0ba60480f038947ca4a2b9f963d38cb004714735d411218eaf6231614c4e21f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c83df4b798499177cfbd1372a939649b6cebb71b50b695af5ba7815cf450669\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed9e89cbcdfc1ac8430f617ee31a367c03be0a407da8af967930fa821e6a12a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://742d02f24a7096b76f82453ab15e531d4a378b4bbf0bb36eb1a1cb4ae1873165\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://742d02f24a7096b76f82453ab15e531d4a378b4bbf0bb36eb1a1cb4ae1873165\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2253bab71d535b61f7489679566abc1acf210e0d5f69773fedbb02befab17c2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2253bab71d535b61f7489679566abc1acf210e0d5f69773fedbb02befab17c2d\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2025-11-22T07:10:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:21Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8fa71a988b649937f137ffb7533456570bf4fcd199134424fc7bff2b62335f1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fa71a988b649937f137ffb7533456570bf4fcd199134424fc7bff2b62335f1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:16Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:05Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:05 crc kubenswrapper[4853]: I1122 07:11:05.802807 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9c07602dc6c3b45809dd76a7079002e54e0bae24f7e3bc3470b463389303e74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:05Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:05 crc kubenswrapper[4853]: I1122 07:11:05.816562 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:05Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:05 crc kubenswrapper[4853]: I1122 07:11:05.831460 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:05Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:05 crc kubenswrapper[4853]: I1122 07:11:05.848088 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rvgxj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbbe3472-17cc-48dd-8e46-393b00149429\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5acc4b664aaf1d5b0471beb25334e65152bc3b9907f911f90e4e7899aaa1ce8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ct6dm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rvgxj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:05Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:05 crc kubenswrapper[4853]: I1122 07:11:05.864978 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"19e6b5d3-a090-47b9-bddc-dd2aaccd4ff4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96b61660fcd78f2f81eab2c1a0cb32018a5b32d49581167e33e7dca8ba5ddf2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a0d61ce2e58ce8d9abdd1fb9688722d8e25f8354553aef76b8cd4b3897135a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21be0b3346d0dd20c21a867b8fbd30e6291d09bd1745febbc2face7610f6d831\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6dbbb1c22bd116e5913c6bc774b4705f0e4342861ceae30dc0f53c30d6fa39f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6dbbb1c22bd116e5913c6bc774b4705f0e4342861ceae30dc0f53c30d6fa39f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T07:10:53Z\\\",\\\"message\\\":\\\" envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" 
enabled=false\\\\nI1122 07:10:53.494379 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1122 07:10:53.495005 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1122 07:10:53.495036 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1122 07:10:53.495110 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1122 07:10:53.495131 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1122 07:10:53.496216 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1763795444\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1763795444\\\\\\\\\\\\\\\" (2025-11-22 06:10:44 +0000 UTC to 2026-11-22 06:10:44 +0000 UTC (now=2025-11-22 07:10:53.49617265 +0000 UTC))\\\\\\\"\\\\nI1122 07:10:53.496261 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1122 07:10:53.496336 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1122 07:10:53.497036 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1122 07:10:53.497137 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1122 07:10:53.498721 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI1122 07:10:53.500291 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF1122 07:10:53.501305 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:32Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c1ee716027cb8b663ee9faec3faed873df5079418b9b46574cf3b3a74048eef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13ca1fba1325e0ae8556def3ba0c98aa0aff6e3b9ed2aab66a5c950d2bace44e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13ca1fba1325e0ae8556def3ba0c98aa0aff6e3b9ed2aab66a5c950d2bace44e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:05Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:05 crc kubenswrapper[4853]: I1122 07:11:05.878472 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:05 crc kubenswrapper[4853]: I1122 07:11:05.878527 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:05 crc kubenswrapper[4853]: I1122 07:11:05.878538 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:05 crc kubenswrapper[4853]: I1122 07:11:05.878559 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:05 crc kubenswrapper[4853]: I1122 07:11:05.878574 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:05Z","lastTransitionTime":"2025-11-22T07:11:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:05 crc kubenswrapper[4853]: I1122 07:11:05.879370 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mlpz8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c09200ba-013f-45e3-b581-8523557344b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3feee491b899c2b2f58330fe2e87ba5404deca4249c5d39a6d3aa08acd10ef29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvk87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mlpz8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:05Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:05 crc kubenswrapper[4853]: I1122 07:11:05.900258 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqtsz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"893f7e02-580a-4093-ab42-ea73ffffcfe6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7b9396399dd9787c1addcb4e2e7697d78fee817d56cafd408b08b2b7d0cc5f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7b9396399dd9787c1addcb4e2e7697d78fee817d56cafd408b08b2b7d0cc5f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pqtsz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:05Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:05 crc kubenswrapper[4853]: I1122 07:11:05.911868 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9nx9m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120cba0a-6e0b-40b3-8c15-46e7ff7c8641\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7b7b92862fc6cca812cb5c0aa9b320fbbd11f8b458d591b56cacab985a79edd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4p7q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\
\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9nx9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:05Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:05 crc kubenswrapper[4853]: I1122 07:11:05.924904 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b27716f9e6c912f968c4a340199cdc2dde0262d86b4832f84cae8daefa831909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a2fb699e2e8a016fa383c1d4ed2d4f0d69c26b79fb2f0dd735a84de2fd5a23d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:05Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:05 crc kubenswrapper[4853]: I1122 07:11:05.938365 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://347fc1ee4909acd862f23a9f25b4ac7ab7271ff03f0f726479d9b711820970c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:05Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:05 crc kubenswrapper[4853]: I1122 07:11:05.950620 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"476c875a-2b87-419a-8042-0ba059620fd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://babd614ecd140db7467335933695fcc122380ef2c6a3d4ae6e89b0ed29e87d69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zkhrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28cf28a4f0e05df5ad55eff2ab13e375fb1d5725f1d0e3b85c7c9cd785cf4453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zkhrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fflvd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:05Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:05 crc kubenswrapper[4853]: I1122 07:11:05.964404 4853 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9003355-0dc6-42ad-950d-a7884e333f4c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f0019c0473b7ccc974dc1659a7ae0565b39d69af17094c420625387ecc9d7f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94704fd95e75549ea29613f00da78d4517cc64896cf1274c9c6f784a61f4ad4a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T07:10:53Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI1122 07:10:26.514729 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI1122 07:10:26.525241 1 observer_polling.go:159] Starting file observer\\\\nI1122 07:10:26.883412 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI1122 07:10:26.897202 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI1122 07:10:50.420853 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nI1122 07:10:51.525898 1 observer_polling.go:162] Shutting down file observer\\\\nF1122 07:10:53.463148 1 cmd.go:179] failed checking apiserver connectivity: client rate limiter Wait returned an error: context canceled\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:21Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9736c38776f22909b8e8a832683d1b881f020257f7183397286e0dd23c973a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30f542bc05dc23bc8dff5029894d08a6fea8333de31a66432814b7bbc412c4fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5d8d77818159ca98019aac809db4b1b2460723bc56b9b6728eec11faeefd026\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manage
r-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:05Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:05 crc kubenswrapper[4853]: I1122 07:11:05.977933 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:05Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:05 crc kubenswrapper[4853]: I1122 07:11:05.981645 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:05 crc kubenswrapper[4853]: I1122 07:11:05.981676 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:05 crc kubenswrapper[4853]: I1122 07:11:05.981685 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:05 crc kubenswrapper[4853]: I1122 07:11:05.981704 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:05 crc kubenswrapper[4853]: I1122 07:11:05.981716 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:05Z","lastTransitionTime":"2025-11-22T07:11:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:06 crc kubenswrapper[4853]: I1122 07:11:06.084525 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:06 crc kubenswrapper[4853]: I1122 07:11:06.084585 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:06 crc kubenswrapper[4853]: I1122 07:11:06.084601 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:06 crc kubenswrapper[4853]: I1122 07:11:06.084627 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:06 crc kubenswrapper[4853]: I1122 07:11:06.084648 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:06Z","lastTransitionTime":"2025-11-22T07:11:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:06 crc kubenswrapper[4853]: I1122 07:11:06.118323 4853 generic.go:334] "Generic (PLEG): container finished" podID="6f60b37f-d6f5-4145-a3e7-cfe92fca6d77" containerID="3adc38c2534c8026e260a63060426102130ca1bcbcb4e3741aaf4e87170a4c7e" exitCode=0 Nov 22 07:11:06 crc kubenswrapper[4853]: I1122 07:11:06.118416 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ckn94" event={"ID":"6f60b37f-d6f5-4145-a3e7-cfe92fca6d77","Type":"ContainerDied","Data":"3adc38c2534c8026e260a63060426102130ca1bcbcb4e3741aaf4e87170a4c7e"} Nov 22 07:11:06 crc kubenswrapper[4853]: I1122 07:11:06.127245 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqtsz" event={"ID":"893f7e02-580a-4093-ab42-ea73ffffcfe6","Type":"ContainerStarted","Data":"ccf377912acc39f2d30013ef3bd58a8268b927fd89da2306fcf00cb89c30893e"} Nov 22 07:11:06 crc kubenswrapper[4853]: I1122 07:11:06.140149 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b27716f9e6c912f968c4a340199cdc2dde0262d86b4832f84cae8daefa831909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a2fb699e2e8a016fa383c1d4ed2d4f0d69c26b79fb2f0dd735a84de2fd5a23d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mo
untPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:06Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:06 crc kubenswrapper[4853]: I1122 07:11:06.157057 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://347fc1ee4909acd862f23a9f25b4ac7ab7271ff03f0f726479d9b711820970c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:06Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:06 crc kubenswrapper[4853]: I1122 07:11:06.173401 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"476c875a-2b87-419a-8042-0ba059620fd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://babd614ecd140db7467335933695fcc122380ef2c6a3d4ae6e89b0ed29e87d69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zkhrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28cf28a4f0e05df5ad55eff2ab13e375fb1d5725f1d0e3b85c7c9cd785cf4453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zkhrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fflvd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:06Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:06 crc kubenswrapper[4853]: I1122 07:11:06.188785 4853 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:06 crc kubenswrapper[4853]: I1122 07:11:06.188857 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:06 crc kubenswrapper[4853]: I1122 07:11:06.188870 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:06 crc kubenswrapper[4853]: I1122 07:11:06.188892 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:06 crc kubenswrapper[4853]: I1122 07:11:06.188904 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:06Z","lastTransitionTime":"2025-11-22T07:11:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:06 crc kubenswrapper[4853]: I1122 07:11:06.194816 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:06Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:06 crc kubenswrapper[4853]: I1122 07:11:06.213492 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9003355-0dc6-42ad-950d-a7884e333f4c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f0019c0473b7ccc974dc1659a7ae0565b39d69af17094c420625387ecc9d7f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94704fd95e75549ea29613f00da78d4517cc64896cf1274c9c6f784a61f4ad4a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T07:10:53Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI1122 07:10:26.514729 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. 
The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI1122 07:10:26.525241 1 observer_polling.go:159] Starting file observer\\\\nI1122 07:10:26.883412 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI1122 07:10:26.897202 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI1122 07:10:50.420853 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nI1122 07:10:51.525898 1 observer_polling.go:162] Shutting down file observer\\\\nF1122 07:10:53.463148 1 cmd.go:179] failed checking apiserver connectivity: client rate limiter Wait returned an error: context canceled\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:21Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9736c38776f22909b8e8a832683d1b881f020257f7183397286e0dd23c973a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30f542bc05dc23bc8dff5029894d08a6fea8333de31a66432814b7bbc412c4fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5d8d77818159ca98019aac809db4b1b2460723bc56b9b6728eec11faeefd026\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffa
c0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:06Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:06 crc kubenswrapper[4853]: I1122 07:11:06.232955 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9c07602dc6c3b45809dd76a7079002e54e0bae24f7e3bc3470b463389303e74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:06Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:06 crc kubenswrapper[4853]: I1122 07:11:06.259232 4853 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:06Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:06 crc kubenswrapper[4853]: I1122 07:11:06.274641 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:06Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:06 crc kubenswrapper[4853]: I1122 07:11:06.290935 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rvgxj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbbe3472-17cc-48dd-8e46-393b00149429\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5acc4b664aaf1d5b0471beb25334e65152bc3b9907f911f90e4e7899aaa1ce8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"syste
m-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ct6dm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rvgxj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:06Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:06 crc kubenswrapper[4853]: I1122 07:11:06.291946 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:06 crc kubenswrapper[4853]: I1122 07:11:06.292013 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:06 crc kubenswrapper[4853]: I1122 07:11:06.292028 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:06 crc kubenswrapper[4853]: I1122 07:11:06.292054 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:06 crc kubenswrapper[4853]: I1122 07:11:06.292073 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:06Z","lastTransitionTime":"2025-11-22T07:11:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:06 crc kubenswrapper[4853]: I1122 07:11:06.307439 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ckn94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f60b37f-d6f5-4145-a3e7-cfe92fca6d77\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3adc38c2534c8026e260a63060426102130ca1bcbcb4e3741aaf4e87170a4c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3adc38c2534c8026e260a63060426102130ca1bcbcb4e3741aaf4e87170a4c7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets
/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ckn94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:06Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:06 crc kubenswrapper[4853]: I1122 07:11:06.329279 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8dcea4b-e887-437a-972d-e6e10d41f725\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://753693005f8490e9928780fd4f230a1c8aba453e1ccbca71e70299799bfa46ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64d65e4d54cc8971b6
f6349f7d32243b7faa5bb5974b22ed904a36fa34bc69fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0ba60480f038947ca4a2b9f963d38cb004714735d411218eaf6231614c4e21f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c83df4b798499177cfbd1372a939649b6cebb71b50b695af5ba7815cf450669\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed9e89cbcdfc1ac8430f617ee31a367c03be0a407da8af967930fa821e6a12a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://742d02f24a7096b76f82453ab15e531d4a378b4bbf0bb36eb1a1cb4ae1873165\\\",\\\"image\\\
":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://742d02f24a7096b76f82453ab15e531d4a378b4bbf0bb36eb1a1cb4ae1873165\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2253bab71d535b61f7489679566abc1acf210e0d5f69773fedbb02befab17c2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2253bab71d535b61f7489679566abc1acf210e0d5f69773fedbb02befab17c2d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:21Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8fa71a988b649937f137ffb7533456570bf4fcd199134424fc7bff2b62335f1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fa71a988b649937f137ffb7533456570bf4fcd199134424fc7bff2b62335f1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:16Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:06Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:06 crc kubenswrapper[4853]: I1122 07:11:06.342168 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mlpz8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c09200ba-013f-45e3-b581-8523557344b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3feee491b899c2b2f58330fe2e87ba5404deca4249c5d39a6d3aa08acd10ef29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvk87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mlpz8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:06Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:06 crc kubenswrapper[4853]: I1122 07:11:06.362833 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqtsz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"893f7e02-580a-4093-ab42-ea73ffffcfe6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7b9396399dd9787c1addcb4e2e7697d78fee817d56cafd408b08b2b7d0cc5f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7b9396399dd9787c1addcb4e2e7697d78fee817d56cafd408b08b2b7d0cc5f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00
Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pqtsz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:06Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:06 crc kubenswrapper[4853]: I1122 07:11:06.376926 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9nx9m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120cba0a-6e0b-40b3-8c15-46e7ff7c8641\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7b7b92862fc6cca812cb5c0aa9b320fbbd11f8b458d591b56cacab985a79edd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4p7q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9nx9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:06Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:06 crc kubenswrapper[4853]: I1122 07:11:06.390158 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"19e6b5d3-a090-47b9-bddc-dd2aaccd4ff4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96b61660fcd78f2f81eab2c1a0cb32018a5b32d49581167e33e7dca8ba5ddf2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a0d61ce2e58ce8d9abdd1fb9688722d8e25f8354553aef76b8cd4b3897135a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21be0b3346d0dd20c21a867b8fbd30e6291d09bd1745febbc2face7610f6d831\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6dbbb1c22bd116e5913c6bc774b4705f0e4342861ceae30dc0f53c30d6fa39f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6dbbb1c22bd116e5913c6bc774b4705f0e4342861ceae30dc0f53c30d6fa39f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T07:10:53Z\\\",\\\"message\\\":\\\" envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1122 07:10:53.494379 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1122 07:10:53.495005 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1122 07:10:53.495036 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1122 07:10:53.495110 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1122 07:10:53.495131 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1122 07:10:53.496216 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1763795444\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1763795444\\\\\\\\\\\\\\\" (2025-11-22 06:10:44 +0000 UTC to 2026-11-22 06:10:44 +0000 UTC (now=2025-11-22 07:10:53.49617265 +0000 UTC))\\\\\\\"\\\\nI1122 07:10:53.496261 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1122 07:10:53.496336 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1122 07:10:53.497036 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1122 07:10:53.497137 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1122 07:10:53.498721 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI1122 07:10:53.500291 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF1122 07:10:53.501305 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:32Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c1ee716027cb8b663ee9faec3faed873df5079418b9b46574cf3b3a74048eef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13ca1fba1325e0ae8556def3ba0c98aa0aff6e3b9ed2aab66a5c950d2bace44e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13ca1fba1325e0ae8556def3ba0c98aa0aff6e3b9ed2aab66a5c950d2bace44e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:06Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:06 crc kubenswrapper[4853]: I1122 07:11:06.395117 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:06 crc kubenswrapper[4853]: I1122 07:11:06.395157 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:06 crc kubenswrapper[4853]: I1122 07:11:06.395167 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:06 crc kubenswrapper[4853]: I1122 07:11:06.395182 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:06 crc kubenswrapper[4853]: I1122 07:11:06.395193 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:06Z","lastTransitionTime":"2025-11-22T07:11:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:06 crc kubenswrapper[4853]: I1122 07:11:06.498622 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:06 crc kubenswrapper[4853]: I1122 07:11:06.498674 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:06 crc kubenswrapper[4853]: I1122 07:11:06.498689 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:06 crc kubenswrapper[4853]: I1122 07:11:06.498710 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:06 crc kubenswrapper[4853]: I1122 07:11:06.498728 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:06Z","lastTransitionTime":"2025-11-22T07:11:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:06 crc kubenswrapper[4853]: I1122 07:11:06.602696 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:06 crc kubenswrapper[4853]: I1122 07:11:06.602781 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:06 crc kubenswrapper[4853]: I1122 07:11:06.602794 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:06 crc kubenswrapper[4853]: I1122 07:11:06.602811 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:06 crc kubenswrapper[4853]: I1122 07:11:06.602826 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:06Z","lastTransitionTime":"2025-11-22T07:11:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:06 crc kubenswrapper[4853]: I1122 07:11:06.706415 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:06 crc kubenswrapper[4853]: I1122 07:11:06.706476 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:06 crc kubenswrapper[4853]: I1122 07:11:06.706487 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:06 crc kubenswrapper[4853]: I1122 07:11:06.706507 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:06 crc kubenswrapper[4853]: I1122 07:11:06.706518 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:06Z","lastTransitionTime":"2025-11-22T07:11:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:06 crc kubenswrapper[4853]: I1122 07:11:06.747089 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:11:06 crc kubenswrapper[4853]: I1122 07:11:06.747105 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 07:11:06 crc kubenswrapper[4853]: E1122 07:11:06.747258 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 07:11:06 crc kubenswrapper[4853]: I1122 07:11:06.747105 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 07:11:06 crc kubenswrapper[4853]: E1122 07:11:06.747394 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 07:11:06 crc kubenswrapper[4853]: E1122 07:11:06.747429 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 07:11:06 crc kubenswrapper[4853]: I1122 07:11:06.808619 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:06 crc kubenswrapper[4853]: I1122 07:11:06.809014 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:06 crc kubenswrapper[4853]: I1122 07:11:06.809028 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:06 crc kubenswrapper[4853]: I1122 07:11:06.809049 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:06 crc kubenswrapper[4853]: I1122 07:11:06.809065 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:06Z","lastTransitionTime":"2025-11-22T07:11:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:06 crc kubenswrapper[4853]: I1122 07:11:06.912536 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:06 crc kubenswrapper[4853]: I1122 07:11:06.912584 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:06 crc kubenswrapper[4853]: I1122 07:11:06.912597 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:06 crc kubenswrapper[4853]: I1122 07:11:06.912619 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:06 crc kubenswrapper[4853]: I1122 07:11:06.912639 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:06Z","lastTransitionTime":"2025-11-22T07:11:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:07 crc kubenswrapper[4853]: I1122 07:11:07.015161 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:07 crc kubenswrapper[4853]: I1122 07:11:07.015200 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:07 crc kubenswrapper[4853]: I1122 07:11:07.015210 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:07 crc kubenswrapper[4853]: I1122 07:11:07.015228 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:07 crc kubenswrapper[4853]: I1122 07:11:07.015240 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:07Z","lastTransitionTime":"2025-11-22T07:11:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:07 crc kubenswrapper[4853]: I1122 07:11:07.117787 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:07 crc kubenswrapper[4853]: I1122 07:11:07.117827 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:07 crc kubenswrapper[4853]: I1122 07:11:07.117837 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:07 crc kubenswrapper[4853]: I1122 07:11:07.117855 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:07 crc kubenswrapper[4853]: I1122 07:11:07.117865 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:07Z","lastTransitionTime":"2025-11-22T07:11:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:07 crc kubenswrapper[4853]: I1122 07:11:07.220820 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:07 crc kubenswrapper[4853]: I1122 07:11:07.220863 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:07 crc kubenswrapper[4853]: I1122 07:11:07.220872 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:07 crc kubenswrapper[4853]: I1122 07:11:07.220891 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:07 crc kubenswrapper[4853]: I1122 07:11:07.220904 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:07Z","lastTransitionTime":"2025-11-22T07:11:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:07 crc kubenswrapper[4853]: I1122 07:11:07.324361 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:07 crc kubenswrapper[4853]: I1122 07:11:07.324423 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:07 crc kubenswrapper[4853]: I1122 07:11:07.324433 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:07 crc kubenswrapper[4853]: I1122 07:11:07.324451 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:07 crc kubenswrapper[4853]: I1122 07:11:07.324461 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:07Z","lastTransitionTime":"2025-11-22T07:11:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:07 crc kubenswrapper[4853]: I1122 07:11:07.427347 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:07 crc kubenswrapper[4853]: I1122 07:11:07.427394 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:07 crc kubenswrapper[4853]: I1122 07:11:07.427405 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:07 crc kubenswrapper[4853]: I1122 07:11:07.427425 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:07 crc kubenswrapper[4853]: I1122 07:11:07.427436 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:07Z","lastTransitionTime":"2025-11-22T07:11:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:07 crc kubenswrapper[4853]: I1122 07:11:07.529737 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ckn94" event={"ID":"6f60b37f-d6f5-4145-a3e7-cfe92fca6d77","Type":"ContainerStarted","Data":"db0b80c2471d5179ede5c4055fcebaabe08892f1534218952c2a19ab599be022"} Nov 22 07:11:07 crc kubenswrapper[4853]: I1122 07:11:07.531227 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:07 crc kubenswrapper[4853]: I1122 07:11:07.531266 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:07 crc kubenswrapper[4853]: I1122 07:11:07.531292 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:07 crc kubenswrapper[4853]: I1122 07:11:07.531317 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:07 crc kubenswrapper[4853]: I1122 07:11:07.531336 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:07Z","lastTransitionTime":"2025-11-22T07:11:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:07 crc kubenswrapper[4853]: I1122 07:11:07.534212 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqtsz" event={"ID":"893f7e02-580a-4093-ab42-ea73ffffcfe6","Type":"ContainerStarted","Data":"959a487fe36b2bf7615c5ed65bc8255b52332f6e2d973912a5e028ff35324f2f"} Nov 22 07:11:07 crc kubenswrapper[4853]: I1122 07:11:07.618590 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 22 07:11:07 crc kubenswrapper[4853]: I1122 07:11:07.630108 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Nov 22 07:11:07 crc kubenswrapper[4853]: I1122 07:11:07.634186 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:07 crc kubenswrapper[4853]: I1122 07:11:07.634224 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:07 crc kubenswrapper[4853]: I1122 07:11:07.634237 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:07 crc kubenswrapper[4853]: I1122 07:11:07.634254 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:07 crc kubenswrapper[4853]: I1122 07:11:07.634267 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:07Z","lastTransitionTime":"2025-11-22T07:11:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:07 crc kubenswrapper[4853]: I1122 07:11:07.635817 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:07Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:07 crc kubenswrapper[4853]: I1122 07:11:07.654293 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:07Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:07 crc kubenswrapper[4853]: I1122 07:11:07.671116 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rvgxj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbbe3472-17cc-48dd-8e46-393b00149429\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5acc4b664aaf1d5b0471beb25334e65152bc3b9907f911f90e4e7899aaa1ce8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"syste
m-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ct6dm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rvgxj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:07Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:07 crc kubenswrapper[4853]: I1122 07:11:07.690162 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ckn94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f60b37f-d6f5-4145-a3e7-cfe92fca6d77\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3adc38c2534c8026e260a63060426102130ca1bcbcb4e3741aaf4e87170a4c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3adc38c2534c8026e260a63060426102130ca1bcbcb4e3741aaf4e87170a4c7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reaso
n\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ckn94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:07Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:07 crc 
kubenswrapper[4853]: I1122 07:11:07.737954 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:07 crc kubenswrapper[4853]: I1122 07:11:07.738072 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:07 crc kubenswrapper[4853]: I1122 07:11:07.738087 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:07 crc kubenswrapper[4853]: I1122 07:11:07.738110 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:07 crc kubenswrapper[4853]: I1122 07:11:07.738123 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:07Z","lastTransitionTime":"2025-11-22T07:11:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:07 crc kubenswrapper[4853]: I1122 07:11:07.751780 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8dcea4b-e887-437a-972d-e6e10d41f725\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://753693005f8490e9928780fd4f230a1c8aba453e1ccbca71e70299799bfa46ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64d65e4d54cc8971b6f6349f7d32243b7faa5bb5974b22ed904a36fa34bc69fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54
b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0ba60480f038947ca4a2b9f963d38cb004714735d411218eaf6231614c4e21f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c83df4b798499177cfbd1372a939649b6cebb71b50b695af5ba7815cf450669\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed9e89cbcdfc1ac8430f617ee31a367c03be0a407da8af967930fa821e6a12a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://742d02f24a7096b76f82453ab15e531d4a378b4bbf0bb36eb1a1cb4ae1873165\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\
\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://742d02f24a7096b76f82453ab15e531d4a378b4bbf0bb36eb1a1cb4ae1873165\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2253bab71d535b61f7489679566abc1acf210e0d5f69773fedbb02befab17c2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2253bab71d535b61f7489679566abc1acf210e0d5f69773fedbb02befab17c2d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:21Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8fa71a988b649937f137ffb7533456570bf4fcd199134424fc7bff2b62335f1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fa71a988b649937f137ffb7533456570bf4fcd199134424fc7bff2b62335f1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:16Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:07Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:07 crc kubenswrapper[4853]: I1122 07:11:07.795565 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9c07602dc6c3b45809dd76a7079002e54e0bae24f7e3bc3470b463389303e74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:07Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:07 crc kubenswrapper[4853]: I1122 07:11:07.817511 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqtsz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"893f7e02-580a-4093-ab42-ea73ffffcfe6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7b9396399dd9787c1addcb4e2e7697d78fee817d56cafd408b08b2b7d0cc5f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7b9396399dd9787c1addcb4e2e7697d78fee817d56cafd408b08b2b7d0cc5f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pqtsz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:07Z 
is after 2025-08-24T17:21:41Z" Nov 22 07:11:07 crc kubenswrapper[4853]: I1122 07:11:07.829586 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9nx9m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120cba0a-6e0b-40b3-8c15-46e7ff7c8641\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7b7b92862fc6cca812cb5c0aa9b320fbbd11f8b458d591b56cacab985a79edd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4p7q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9nx9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:07Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:07 crc kubenswrapper[4853]: I1122 07:11:07.841646 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:07 crc kubenswrapper[4853]: I1122 07:11:07.841695 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:07 crc kubenswrapper[4853]: I1122 07:11:07.841705 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:07 crc kubenswrapper[4853]: I1122 07:11:07.841722 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:07 crc kubenswrapper[4853]: I1122 07:11:07.841734 4853 setters.go:603] "Node became not ready" 
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:07Z","lastTransitionTime":"2025-11-22T07:11:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:07 crc kubenswrapper[4853]: I1122 07:11:07.846644 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"19e6b5d3-a090-47b9-bddc-dd2aaccd4ff4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96b61660fcd78f2f81eab2c1a0cb32018a5b32d49581167e33e7dca8ba5ddf2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a0d61ce2e58ce8d9abdd1fb9688722d8e25f8354553aef76b8cd4b3897135a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21be0b3346d0dd20c21a867b8fbd30e6291d09bd1745febbc2face7610f6d831\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluste
r-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6dbbb1c22bd116e5913c6bc774b4705f0e4342861ceae30dc0f53c30d6fa39f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6dbbb1c22bd116e5913c6bc774b4705f0e4342861ceae30dc0f53c30d6fa39f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T07:10:53Z\\\",\\\"message\\\":\\\" envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1122 07:10:53.494379 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1122 07:10:53.495005 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1122 07:10:53.495036 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1122 07:10:53.495110 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1122 07:10:53.495131 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1122 07:10:53.496216 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1763795444\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1763795444\\\\\\\\\\\\\\\" (2025-11-22 06:10:44 +0000 UTC to 2026-11-22 06:10:44 +0000 UTC (now=2025-11-22 07:10:53.49617265 +0000 UTC))\\\\\\\"\\\\nI1122 07:10:53.496261 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1122 07:10:53.496336 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1122 07:10:53.497036 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1122 07:10:53.497137 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1122 07:10:53.498721 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI1122 07:10:53.500291 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF1122 07:10:53.501305 1 cmd.go:182] pods 
\\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:32Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c1ee716027cb8b663ee9faec3faed873df5079418b9b46574cf3b3a74048eef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13ca1fba1325e0ae8556def3ba0c98aa0aff6e3b9ed2aab66a5c950d2bace44e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13ca1fba1325e0ae8556def3ba0c98aa0aff6e3b9ed2aab66a5c950d2bace44e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:07Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:07 crc kubenswrapper[4853]: I1122 07:11:07.859114 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mlpz8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c09200ba-013f-45e3-b581-8523557344b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3feee491b899c2b2f58330fe2e87ba5404deca4249c5d39a6d3aa08acd10ef29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvk87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mlpz8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:07Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:07 crc kubenswrapper[4853]: I1122 07:11:07.872522 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://347fc1ee4909acd862f23a9f25b4ac7ab7271ff03f0f726479d9b711820970c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:07Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:07 crc kubenswrapper[4853]: I1122 07:11:07.887107 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"476c875a-2b87-419a-8042-0ba059620fd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://babd614ecd140db7467335933695fcc122380ef2c6a3d4ae6e89b0ed29e87d69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zkhrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28cf28a4f0e05df5ad55eff2ab13e375fb1d5725f1d0e3b85c7c9cd785cf4453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zkhrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fflvd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:07Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:07 crc kubenswrapper[4853]: I1122 07:11:07.901410 4853 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b27716f9e6c912f968c4a340199cdc2dde0262d86b4832f84cae8daefa831909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a2fb699e2e8a016fa383c1d4ed2d4f0d69c26b79fb2f0dd735a84de2fd5a23d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:07Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:07 crc kubenswrapper[4853]: I1122 07:11:07.914015 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:07Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:07 crc kubenswrapper[4853]: I1122 07:11:07.927652 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9003355-0dc6-42ad-950d-a7884e333f4c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f0019c0473b7ccc974dc1659a7ae0565b39d69af17094c420625387ecc9d7f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94704fd95e75549ea29613f00da78d4517cc64896cf1274c9c6f784a61f4ad4a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T07:10:53Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI1122 07:10:26.514729 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI1122 07:10:26.525241 1 observer_polling.go:159] Starting file observer\\\\nI1122 07:10:26.883412 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI1122 07:10:26.897202 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI1122 07:10:50.420853 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nI1122 07:10:51.525898 1 observer_polling.go:162] Shutting down file observer\\\\nF1122 07:10:53.463148 1 cmd.go:179] failed checking apiserver connectivity: client rate limiter Wait returned an error: context 
canceled\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:21Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9736c38776f22909b8e8a832683d1b881f020257f7183397286e0dd23c973a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30f542bc05dc23bc8dff5029894d08a6fea8333de31a66432814b7bbc412c4fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5d8d77818159ca98019aac809db4b1b2460723bc56b9b6728eec11faeefd026\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:07Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:07 crc kubenswrapper[4853]: I1122 07:11:07.944293 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:07 crc kubenswrapper[4853]: I1122 07:11:07.944344 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:07 crc kubenswrapper[4853]: I1122 07:11:07.944356 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:07 crc kubenswrapper[4853]: I1122 07:11:07.944377 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:07 crc kubenswrapper[4853]: I1122 07:11:07.944394 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:07Z","lastTransitionTime":"2025-11-22T07:11:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:07 crc kubenswrapper[4853]: I1122 07:11:07.968351 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 22 07:11:07 crc kubenswrapper[4853]: I1122 07:11:07.985200 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"19e6b5d3-a090-47b9-bddc-dd2aaccd4ff4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96b61660fcd78f2f81eab2c1a0cb32018a5b32d49581167e33e7dca8ba5ddf2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a0d61ce2e58ce8d9abdd1fb9688722d8e25f8354553aef76b8cd4b3897135a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21be0b3346d0dd20c21a867b8fbd30e6291d09bd1745febbc2face7610f6d831\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6dbbb1c22bd116e5913c6bc774b4705f0e4342861ceae30dc0f53c30d6fa39f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6dbbb1c22bd116e5913c6bc774b4705f0e4342861ceae30dc0f53c30d6fa39f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T07:10:53Z\\\",\\\"message\\\":\\\" envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" 
enabled=false\\\\nI1122 07:10:53.494379 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1122 07:10:53.495005 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1122 07:10:53.495036 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1122 07:10:53.495110 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1122 07:10:53.495131 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1122 07:10:53.496216 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1763795444\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1763795444\\\\\\\\\\\\\\\" (2025-11-22 06:10:44 +0000 UTC to 2026-11-22 06:10:44 +0000 UTC (now=2025-11-22 07:10:53.49617265 +0000 UTC))\\\\\\\"\\\\nI1122 07:10:53.496261 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1122 07:10:53.496336 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1122 07:10:53.497036 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1122 07:10:53.497137 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1122 07:10:53.498721 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI1122 07:10:53.500291 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF1122 07:10:53.501305 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:32Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c1ee716027cb8b663ee9faec3faed873df5079418b9b46574cf3b3a74048eef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13ca1fba1325e0ae8556def3ba0c98aa0aff6e3b9ed2aab66a5c950d2bace44e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13ca1fba1325e0ae8556def3ba0c98aa0aff6e3b9ed2aab66a5c950d2bace44e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:07Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:08 crc kubenswrapper[4853]: I1122 07:11:08.000218 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mlpz8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c09200ba-013f-45e3-b581-8523557344b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3feee491b899c2b2f58330fe2e87ba5404deca4249c5d39a6d3aa08acd10ef29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvk87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mlpz8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:07Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:08 crc kubenswrapper[4853]: I1122 07:11:08.028972 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqtsz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"893f7e02-580a-4093-ab42-ea73ffffcfe6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7b9396399dd9787c1addcb4e2e7697d78fee817d56cafd408b08b2b7d0cc5f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7b9396399dd9787c1addcb4e2e7697d78fee817d56cafd408b08b2b7d0cc5f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00
Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pqtsz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:08Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:08 crc kubenswrapper[4853]: I1122 07:11:08.041910 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9nx9m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120cba0a-6e0b-40b3-8c15-46e7ff7c8641\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7b7b92862fc6cca812cb5c0aa9b320fbbd11f8b458d591b56cacab985a79edd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4p7q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9nx9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:08Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:08 crc kubenswrapper[4853]: I1122 07:11:08.047241 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:08 crc kubenswrapper[4853]: I1122 07:11:08.047307 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:08 crc kubenswrapper[4853]: I1122 07:11:08.047322 4853 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:08 crc kubenswrapper[4853]: I1122 07:11:08.047343 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:08 crc kubenswrapper[4853]: I1122 07:11:08.047357 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:08Z","lastTransitionTime":"2025-11-22T07:11:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:08 crc kubenswrapper[4853]: I1122 07:11:08.055009 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63551235-20f7-4ccc-a132-9f5302e01da5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0368eb3431a7b8e0225b3728f72d601ce3f9639f06175b96818fd798b33e4143\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fc661089ab1f7286ead68fd510a0848c9df8dd41a8ed349d03b43b72e1a369a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdc0676190f39e3a925b593e663ec1af1fa253b
8177b7cb81fd49515e0c55c3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a97a0d92e5fdd8906fa4f150c4b7d95fdd1f3b92ac45a46e0a6846fd4a15bb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a97a0d92e5fdd8906fa4f150c4b7d95fdd1f3b92ac45a46e0a6846fd4a15bb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:19Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:16Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:08Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:08 crc kubenswrapper[4853]: I1122 07:11:08.075488 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b27716f9e6c912f968c4a340199cdc2dde0262d86b4832f84cae8daefa831909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a2fb699e2e8a016fa383c1d4ed2d4f0d69c26b79fb2f0dd735a84de2fd5a23d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:08Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:08 crc kubenswrapper[4853]: I1122 07:11:08.087900 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://347fc1ee4909acd862f23a9f25b4ac7ab7271ff03f0f726479d9b711820970c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:08Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:08 crc kubenswrapper[4853]: I1122 07:11:08.100805 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"476c875a-2b87-419a-8042-0ba059620fd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://babd614ecd140db7467335933695fcc122380ef2c6a3d4ae6e89b0ed29e87d69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zkhrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28cf28a4f0e05df5ad55eff2ab13e375fb1d5725f1d0e3b85c7c9cd785cf4453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zkhrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fflvd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:08Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:08 crc kubenswrapper[4853]: I1122 07:11:08.121315 4853 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9003355-0dc6-42ad-950d-a7884e333f4c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f0019c0473b7ccc974dc1659a7ae0565b39d69af17094c420625387ecc9d7f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94704fd95e75549ea29613f00da78d4517cc64896cf1274c9c6f784a61f4ad4a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T07:10:53Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI1122 07:10:26.514729 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI1122 07:10:26.525241 1 observer_polling.go:159] Starting file observer\\\\nI1122 07:10:26.883412 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI1122 07:10:26.897202 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI1122 07:10:50.420853 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nI1122 07:10:51.525898 1 observer_polling.go:162] Shutting down file observer\\\\nF1122 07:10:53.463148 1 cmd.go:179] failed checking apiserver connectivity: client rate limiter Wait returned an error: context canceled\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:21Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9736c38776f22909b8e8a832683d1b881f020257f7183397286e0dd23c973a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30f542bc05dc23bc8dff5029894d08a6fea8333de31a66432814b7bbc412c4fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5d8d77818159ca98019aac809db4b1b2460723bc56b9b6728eec11faeefd026\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager
-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:08Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:08 crc kubenswrapper[4853]: I1122 07:11:08.137270 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:08Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:08 crc kubenswrapper[4853]: I1122 07:11:08.149740 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:08 crc kubenswrapper[4853]: I1122 07:11:08.149792 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:08 crc kubenswrapper[4853]: I1122 07:11:08.149803 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:08 crc kubenswrapper[4853]: I1122 07:11:08.149816 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:08 crc kubenswrapper[4853]: I1122 07:11:08.149827 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:08Z","lastTransitionTime":"2025-11-22T07:11:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:08 crc kubenswrapper[4853]: I1122 07:11:08.161356 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8dcea4b-e887-437a-972d-e6e10d41f725\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://753693005f8490e9928780fd4f230a1c8aba453e1ccbca71e70299799bfa46ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64d65e4d54cc8971b6f6349f7d32243b7faa5bb5974b22ed904a36fa34bc69fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0ba60480f038947ca4a2b9f963d38cb004714735d411218eaf6231614c4e21f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c83df4b798499177cfbd1372a939649b6cebb71b50b695af5ba7815cf450669\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed9e89cbcdfc1ac8430f617ee31a367c03be0a407da8af967930fa821e6a12a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://742d02f24a7096b76f82453ab15e531d4a378b4bbf0bb36eb1a1cb4ae1873165\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://742d02f24a7096b76f82453ab15e531d4a378b4bbf0bb36eb1a1cb4ae1873165\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2253bab71d535b61f7489679566abc1acf210e0d5f69773fedbb02befab17c2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2253bab71d535b61f7489679566abc1acf210e0d5f69773fedbb02befab17c2d\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2025-11-22T07:10:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:21Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8fa71a988b649937f137ffb7533456570bf4fcd199134424fc7bff2b62335f1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fa71a988b649937f137ffb7533456570bf4fcd199134424fc7bff2b62335f1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:16Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:08Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:08 crc kubenswrapper[4853]: I1122 07:11:08.179070 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9c07602dc6c3b45809dd76a7079002e54e0bae24f7e3bc3470b463389303e74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:08Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:08 crc kubenswrapper[4853]: I1122 07:11:08.193261 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:08Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:08 crc kubenswrapper[4853]: I1122 07:11:08.207315 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:08Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:08 crc kubenswrapper[4853]: I1122 07:11:08.224270 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rvgxj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbbe3472-17cc-48dd-8e46-393b00149429\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5acc4b664aaf1d5b0471beb25334e65152bc3b9907f911f90e4e7899aaa1ce8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ct6dm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rvgxj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:08Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:08 crc kubenswrapper[4853]: I1122 07:11:08.243004 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ckn94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f60b37f-d6f5-4145-a3e7-cfe92fca6d77\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3adc38c2534c8026e260a63060426102130ca1bcbcb4e3741aaf4e87170a4c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3adc38c2534c8026e260a63060426102130ca1bcbcb4e3741aaf4e87170a4c7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reaso
n\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ckn94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:08Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:08 crc 
kubenswrapper[4853]: I1122 07:11:08.252625 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:08 crc kubenswrapper[4853]: I1122 07:11:08.252658 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:08 crc kubenswrapper[4853]: I1122 07:11:08.252667 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:08 crc kubenswrapper[4853]: I1122 07:11:08.252684 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:08 crc kubenswrapper[4853]: I1122 07:11:08.252695 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:08Z","lastTransitionTime":"2025-11-22T07:11:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:08 crc kubenswrapper[4853]: I1122 07:11:08.356388 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:08 crc kubenswrapper[4853]: I1122 07:11:08.356433 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:08 crc kubenswrapper[4853]: I1122 07:11:08.356445 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:08 crc kubenswrapper[4853]: I1122 07:11:08.356470 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:08 crc kubenswrapper[4853]: I1122 07:11:08.356481 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:08Z","lastTransitionTime":"2025-11-22T07:11:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:08 crc kubenswrapper[4853]: I1122 07:11:08.459492 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:08 crc kubenswrapper[4853]: I1122 07:11:08.459577 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:08 crc kubenswrapper[4853]: I1122 07:11:08.459590 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:08 crc kubenswrapper[4853]: I1122 07:11:08.459610 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:08 crc kubenswrapper[4853]: I1122 07:11:08.459622 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:08Z","lastTransitionTime":"2025-11-22T07:11:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:08 crc kubenswrapper[4853]: I1122 07:11:08.555393 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"476c875a-2b87-419a-8042-0ba059620fd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://babd614ecd140db7467335933695fcc122380ef2c6a3d4ae6e89b0ed29e87d69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zkhrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28cf28a4f0e05df5ad55eff2ab13e375fb1d5725f1d0e3b85c7c9cd785cf4453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zkhrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fflvd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:08Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:08 crc kubenswrapper[4853]: I1122 07:11:08.562322 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:08 crc kubenswrapper[4853]: I1122 07:11:08.562389 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:08 crc kubenswrapper[4853]: I1122 07:11:08.562413 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:08 crc kubenswrapper[4853]: I1122 07:11:08.562442 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:08 crc kubenswrapper[4853]: I1122 07:11:08.562465 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:08Z","lastTransitionTime":"2025-11-22T07:11:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:08 crc kubenswrapper[4853]: I1122 07:11:08.574410 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63551235-20f7-4ccc-a132-9f5302e01da5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0368eb3431a7b8e0225b3728f72d601ce3f9639f06175b96818fd798b33e4143\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fc661089ab1f7286ead68fd510a0848c9df8dd41a8ed349d03b43b72e1a369a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/opens
hift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdc0676190f39e3a925b593e663ec1af1fa253b8177b7cb81fd49515e0c55c3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a97a0d92e5fdd8906fa4f150c4b7d95fdd1f3b92ac45a46e0a6846fd4a15bb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a97a0d92e5fdd8906fa4f150c4b7d95fdd1f3b92ac45a46e0a6846fd4a15bb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:19Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:16Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:08Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:08 crc kubenswrapper[4853]: I1122 07:11:08.591362 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b27716f9e6c912f968c4a340199cdc2dde0262d86b4832f84cae8daefa831909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a2fb699e2e8a016fa383c1d4ed2d4f0d69c26b79fb2f0dd735a84de2fd5a23d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:08Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:08 crc kubenswrapper[4853]: I1122 07:11:08.607883 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://347fc1ee4909acd862f23a9f25b4ac7ab7271ff03f0f726479d9b711820970c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:08Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:08 crc kubenswrapper[4853]: I1122 07:11:08.626969 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9003355-0dc6-42ad-950d-a7884e333f4c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f0019c0473b7ccc974dc1659a7ae0565b39d69af17094c420625387ecc9d7f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94704fd95e75549ea29613f00da78d4517cc64896cf1274c9c6f784a61f4ad4a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T07:10:53Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI1122 07:10:26.514729 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI1122 07:10:26.525241 1 observer_polling.go:159] Starting file observer\\\\nI1122 07:10:26.883412 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI1122 07:10:26.897202 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI1122 07:10:50.420853 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nI1122 07:10:51.525898 1 observer_polling.go:162] Shutting down file observer\\\\nF1122 07:10:53.463148 1 cmd.go:179] failed checking apiserver connectivity: client rate limiter Wait returned an error: context canceled\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:21Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9736c38776f22909b8e8a832683d1b881f020257f7183397286e0dd23c973a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30f542bc05dc23bc8dff5029894d08a6fea8333de31a66432814b7bbc412c4fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5d8d77818159ca98019aac809db4b1b2460723bc56b9b6728eec11faeefd026\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager
-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:08Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:08 crc kubenswrapper[4853]: I1122 07:11:08.641052 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:08Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:08 crc kubenswrapper[4853]: I1122 07:11:08.656253 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:08Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:08 crc kubenswrapper[4853]: I1122 07:11:08.665177 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:08 crc kubenswrapper[4853]: I1122 07:11:08.665233 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:08 crc kubenswrapper[4853]: I1122 07:11:08.665244 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:08 crc kubenswrapper[4853]: I1122 07:11:08.665267 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:08 crc kubenswrapper[4853]: I1122 07:11:08.665279 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:08Z","lastTransitionTime":"2025-11-22T07:11:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:08 crc kubenswrapper[4853]: I1122 07:11:08.671374 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rvgxj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbbe3472-17cc-48dd-8e46-393b00149429\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5acc4b664aaf1d5b0471beb25334e65152bc3b9907f911f90e4e7899aaa1ce8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ct6dm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rvgxj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:08Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:08 crc kubenswrapper[4853]: I1122 07:11:08.691507 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ckn94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f60b37f-d6f5-4145-a3e7-cfe92fca6d77\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3adc38c2534c8026e260a63060426102130ca1bcbcb4e3741aaf4e87170a4c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3adc38c2534c8026e260a63060426102130ca1bcbcb4e3741aaf4e87170a4c7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db0b80c2471d5179ede5c4055fcebaabe08892f1534218952c2a19ab599be022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64
b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ckn94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:08Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:08 crc kubenswrapper[4853]: I1122 07:11:08.716094 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8dcea4b-e887-437a-972d-e6e10d41f725\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://753693005f8490e9928780fd4f230a1c8aba453e1ccbca71e70299799bfa46ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64d65e4d54cc8971b6f6349f7d32243b7faa5bb5974b22ed904a36fa34bc69fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0ba60480f038947ca4a2b9f963d38cb004714735d411218eaf6231614c4e21f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c83df4b798499177cfbd1372a939649b6cebb71b50b695af5ba7815cf450669\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed9e89cbcdfc1ac8430f617ee31a367c03be0a407da8af967930fa821e6a12a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://742d02f24a7096b76f82453ab15e531d4a378b4bbf0bb36eb1a1cb4ae1873165\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://742d02f24a7096b76f82453ab15e531d4a378b4bbf0bb36eb1a1cb4ae1873165\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2253bab71d535b61f7489679566abc1acf210e0d5f69773fedbb02befab17c2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":f
alse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2253bab71d535b61f7489679566abc1acf210e0d5f69773fedbb02befab17c2d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:21Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8fa71a988b649937f137ffb7533456570bf4fcd199134424fc7bff2b62335f1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fa71a988b649937f137ffb7533456570bf4fcd199134424fc7bff2b62335f1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:16Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:08Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:08 crc kubenswrapper[4853]: I1122 07:11:08.734127 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9c07602dc6c3b45809dd76a7079002e54e0bae24f7e3bc3470b463389303e74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:08Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:08 crc kubenswrapper[4853]: I1122 07:11:08.747607 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 07:11:08 crc kubenswrapper[4853]: I1122 07:11:08.747708 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:11:08 crc kubenswrapper[4853]: E1122 07:11:08.747785 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 07:11:08 crc kubenswrapper[4853]: I1122 07:11:08.747871 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 07:11:08 crc kubenswrapper[4853]: E1122 07:11:08.747945 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 07:11:08 crc kubenswrapper[4853]: E1122 07:11:08.748149 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 07:11:08 crc kubenswrapper[4853]: I1122 07:11:08.750511 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:08Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:08 crc kubenswrapper[4853]: I1122 07:11:08.762534 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9nx9m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120cba0a-6e0b-40b3-8c15-46e7ff7c8641\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7b7b92862fc6cca812cb5c0aa9b320fbbd11f8b458d591b56cacab985a79edd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4p7q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9nx9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:08Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:08 crc kubenswrapper[4853]: I1122 07:11:08.768066 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:08 crc kubenswrapper[4853]: I1122 07:11:08.768119 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:08 crc kubenswrapper[4853]: I1122 07:11:08.768131 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:08 crc kubenswrapper[4853]: I1122 07:11:08.768156 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:08 crc kubenswrapper[4853]: I1122 07:11:08.768170 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:08Z","lastTransitionTime":"2025-11-22T07:11:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:08 crc kubenswrapper[4853]: I1122 07:11:08.780986 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"19e6b5d3-a090-47b9-bddc-dd2aaccd4ff4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96b61660fcd78f2f81eab2c1a0cb32018a5b32d49581167e33e7dca8ba5ddf2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a0d61ce2e58ce8d9abdd1fb9688722d8e25f8354553aef76b8cd4b3897135a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21be0b3346d0dd20c21a867b8fbd30e6291d09bd1745febbc2face7610f6d831\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6dbbb1c22bd116e5913c6bc774b4705f0e4342861ceae30dc0f53c30d6fa39f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6dbbb1c22bd116e5913c6bc774b4705f0e4342861ceae30dc0f53c30d6fa39f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T07:10:53Z\\\",\\\"message\\\":\\\" envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" 
enabled=false\\\\nI1122 07:10:53.494379 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1122 07:10:53.495005 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1122 07:10:53.495036 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1122 07:10:53.495110 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1122 07:10:53.495131 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1122 07:10:53.496216 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1763795444\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1763795444\\\\\\\\\\\\\\\" (2025-11-22 06:10:44 +0000 UTC to 2026-11-22 06:10:44 +0000 UTC (now=2025-11-22 07:10:53.49617265 +0000 UTC))\\\\\\\"\\\\nI1122 07:10:53.496261 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1122 07:10:53.496336 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1122 07:10:53.497036 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1122 07:10:53.497137 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1122 07:10:53.498721 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI1122 07:10:53.500291 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF1122 07:10:53.501305 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:32Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c1ee716027cb8b663ee9faec3faed873df5079418b9b46574cf3b3a74048eef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13ca1fba1325e0ae8556def3ba0c98aa0aff6e3b9ed2aab66a5c950d2bace44e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13ca1fba1325e0ae8556def3ba0c98aa0aff6e3b9ed2aab66a5c950d2bace44e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:08Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:08 crc kubenswrapper[4853]: I1122 07:11:08.796189 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mlpz8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c09200ba-013f-45e3-b581-8523557344b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3feee491b899c2b2f58330fe2e87ba5404deca4249c5d39a6d3aa08acd10ef29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvk87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mlpz8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:08Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:08 crc kubenswrapper[4853]: I1122 07:11:08.819974 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqtsz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"893f7e02-580a-4093-ab42-ea73ffffcfe6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7b9396399dd9787c1addcb4e2e7697d78fee817d56cafd408b08b2b7d0cc5f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7b9396399dd9787c1addcb4e2e7697d78fee817d56cafd408b08b2b7d0cc5f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00
Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pqtsz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:08Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:08 crc kubenswrapper[4853]: I1122 07:11:08.870964 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:08 crc kubenswrapper[4853]: I1122 07:11:08.871008 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:08 crc kubenswrapper[4853]: I1122 07:11:08.871017 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:08 crc kubenswrapper[4853]: I1122 07:11:08.871033 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:08 crc kubenswrapper[4853]: I1122 07:11:08.871045 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:08Z","lastTransitionTime":"2025-11-22T07:11:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:08 crc kubenswrapper[4853]: I1122 07:11:08.975067 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:08 crc kubenswrapper[4853]: I1122 07:11:08.975124 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:08 crc kubenswrapper[4853]: I1122 07:11:08.975135 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:08 crc kubenswrapper[4853]: I1122 07:11:08.975153 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:08 crc kubenswrapper[4853]: I1122 07:11:08.975163 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:08Z","lastTransitionTime":"2025-11-22T07:11:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:09 crc kubenswrapper[4853]: I1122 07:11:09.078092 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:09 crc kubenswrapper[4853]: I1122 07:11:09.078145 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:09 crc kubenswrapper[4853]: I1122 07:11:09.078156 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:09 crc kubenswrapper[4853]: I1122 07:11:09.078180 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:09 crc kubenswrapper[4853]: I1122 07:11:09.078197 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:09Z","lastTransitionTime":"2025-11-22T07:11:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:09 crc kubenswrapper[4853]: I1122 07:11:09.185330 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:09 crc kubenswrapper[4853]: I1122 07:11:09.185383 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:09 crc kubenswrapper[4853]: I1122 07:11:09.185395 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:09 crc kubenswrapper[4853]: I1122 07:11:09.185414 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:09 crc kubenswrapper[4853]: I1122 07:11:09.185427 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:09Z","lastTransitionTime":"2025-11-22T07:11:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:09 crc kubenswrapper[4853]: I1122 07:11:09.288558 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:09 crc kubenswrapper[4853]: I1122 07:11:09.288628 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:09 crc kubenswrapper[4853]: I1122 07:11:09.288646 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:09 crc kubenswrapper[4853]: I1122 07:11:09.288704 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:09 crc kubenswrapper[4853]: I1122 07:11:09.288722 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:09Z","lastTransitionTime":"2025-11-22T07:11:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:09 crc kubenswrapper[4853]: I1122 07:11:09.391895 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:09 crc kubenswrapper[4853]: I1122 07:11:09.391944 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:09 crc kubenswrapper[4853]: I1122 07:11:09.391963 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:09 crc kubenswrapper[4853]: I1122 07:11:09.391984 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:09 crc kubenswrapper[4853]: I1122 07:11:09.391998 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:09Z","lastTransitionTime":"2025-11-22T07:11:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:09 crc kubenswrapper[4853]: I1122 07:11:09.494468 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:09 crc kubenswrapper[4853]: I1122 07:11:09.494524 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:09 crc kubenswrapper[4853]: I1122 07:11:09.494541 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:09 crc kubenswrapper[4853]: I1122 07:11:09.494563 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:09 crc kubenswrapper[4853]: I1122 07:11:09.494582 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:09Z","lastTransitionTime":"2025-11-22T07:11:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:09 crc kubenswrapper[4853]: I1122 07:11:09.544527 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqtsz" event={"ID":"893f7e02-580a-4093-ab42-ea73ffffcfe6","Type":"ContainerStarted","Data":"979f6b48ff6c91deca2bb0784d1ec745c4c0d70abdfd6f519eb82058fec05c5f"} Nov 22 07:11:09 crc kubenswrapper[4853]: I1122 07:11:09.597471 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:09 crc kubenswrapper[4853]: I1122 07:11:09.597515 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:09 crc kubenswrapper[4853]: I1122 07:11:09.597525 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:09 crc kubenswrapper[4853]: I1122 07:11:09.597543 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:09 crc kubenswrapper[4853]: I1122 07:11:09.597554 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:09Z","lastTransitionTime":"2025-11-22T07:11:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:09 crc kubenswrapper[4853]: I1122 07:11:09.700660 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:09 crc kubenswrapper[4853]: I1122 07:11:09.700743 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:09 crc kubenswrapper[4853]: I1122 07:11:09.700814 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:09 crc kubenswrapper[4853]: I1122 07:11:09.700845 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:09 crc kubenswrapper[4853]: I1122 07:11:09.700870 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:09Z","lastTransitionTime":"2025-11-22T07:11:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:09 crc kubenswrapper[4853]: I1122 07:11:09.803906 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:09 crc kubenswrapper[4853]: I1122 07:11:09.803988 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:09 crc kubenswrapper[4853]: I1122 07:11:09.804004 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:09 crc kubenswrapper[4853]: I1122 07:11:09.804022 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:09 crc kubenswrapper[4853]: I1122 07:11:09.804032 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:09Z","lastTransitionTime":"2025-11-22T07:11:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:09 crc kubenswrapper[4853]: I1122 07:11:09.907370 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:09 crc kubenswrapper[4853]: I1122 07:11:09.907432 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:09 crc kubenswrapper[4853]: I1122 07:11:09.907452 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:09 crc kubenswrapper[4853]: I1122 07:11:09.907478 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:09 crc kubenswrapper[4853]: I1122 07:11:09.907492 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:09Z","lastTransitionTime":"2025-11-22T07:11:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:10 crc kubenswrapper[4853]: I1122 07:11:10.009795 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:10 crc kubenswrapper[4853]: I1122 07:11:10.009832 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:10 crc kubenswrapper[4853]: I1122 07:11:10.009840 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:10 crc kubenswrapper[4853]: I1122 07:11:10.009857 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:10 crc kubenswrapper[4853]: I1122 07:11:10.009867 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:10Z","lastTransitionTime":"2025-11-22T07:11:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:10 crc kubenswrapper[4853]: I1122 07:11:10.112557 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:10 crc kubenswrapper[4853]: I1122 07:11:10.112602 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:10 crc kubenswrapper[4853]: I1122 07:11:10.112613 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:10 crc kubenswrapper[4853]: I1122 07:11:10.112630 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:10 crc kubenswrapper[4853]: I1122 07:11:10.112643 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:10Z","lastTransitionTime":"2025-11-22T07:11:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:10 crc kubenswrapper[4853]: I1122 07:11:10.138693 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:10 crc kubenswrapper[4853]: I1122 07:11:10.138740 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:10 crc kubenswrapper[4853]: I1122 07:11:10.138766 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:10 crc kubenswrapper[4853]: I1122 07:11:10.138789 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:10 crc kubenswrapper[4853]: I1122 07:11:10.138803 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:10Z","lastTransitionTime":"2025-11-22T07:11:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:10 crc kubenswrapper[4853]: E1122 07:11:10.154968 4853 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:11:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:11:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:11:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:11:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d74141ce-7696-4d74-b510-3a9c2c375ecd\\\",\\\"systemUUID\\\":\\\"362c9708-b683-4c02-a83b-39323a200ef4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:10Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:10 crc kubenswrapper[4853]: I1122 07:11:10.159561 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:10 crc kubenswrapper[4853]: I1122 07:11:10.159616 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 22 07:11:10 crc kubenswrapper[4853]: I1122 07:11:10.159630 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:10 crc kubenswrapper[4853]: I1122 07:11:10.159655 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:10 crc kubenswrapper[4853]: I1122 07:11:10.159674 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:10Z","lastTransitionTime":"2025-11-22T07:11:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:10 crc kubenswrapper[4853]: E1122 07:11:10.173999 4853 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:11:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:11:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:11:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:11:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d74141ce-7696-4d74-b510-3a9c2c375ecd\\\",\\\"systemUUID\\\":\\\"362c9708-b683-4c02-a83b-39323a200ef4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:10Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:10 crc kubenswrapper[4853]: I1122 07:11:10.177839 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:10 crc kubenswrapper[4853]: I1122 07:11:10.177882 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 22 07:11:10 crc kubenswrapper[4853]: I1122 07:11:10.177895 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:10 crc kubenswrapper[4853]: I1122 07:11:10.177915 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:10 crc kubenswrapper[4853]: I1122 07:11:10.177928 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:10Z","lastTransitionTime":"2025-11-22T07:11:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:10 crc kubenswrapper[4853]: E1122 07:11:10.190123 4853 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:11:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:11:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:11:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:11:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d74141ce-7696-4d74-b510-3a9c2c375ecd\\\",\\\"systemUUID\\\":\\\"362c9708-b683-4c02-a83b-39323a200ef4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:10Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:10 crc kubenswrapper[4853]: I1122 07:11:10.193844 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:10 crc kubenswrapper[4853]: I1122 07:11:10.193892 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 22 07:11:10 crc kubenswrapper[4853]: I1122 07:11:10.193904 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:10 crc kubenswrapper[4853]: I1122 07:11:10.193920 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:10 crc kubenswrapper[4853]: I1122 07:11:10.193929 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:10Z","lastTransitionTime":"2025-11-22T07:11:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:10 crc kubenswrapper[4853]: E1122 07:11:10.206725 4853 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:11:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:11:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:11:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:11:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d74141ce-7696-4d74-b510-3a9c2c375ecd\\\",\\\"systemUUID\\\":\\\"362c9708-b683-4c02-a83b-39323a200ef4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:10Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:10 crc kubenswrapper[4853]: I1122 07:11:10.210729 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:10 crc kubenswrapper[4853]: I1122 07:11:10.210798 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 22 07:11:10 crc kubenswrapper[4853]: I1122 07:11:10.210808 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:10 crc kubenswrapper[4853]: I1122 07:11:10.210835 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:10 crc kubenswrapper[4853]: I1122 07:11:10.210847 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:10Z","lastTransitionTime":"2025-11-22T07:11:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:10 crc kubenswrapper[4853]: E1122 07:11:10.225078 4853 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:11:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:11:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:11:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:11:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d74141ce-7696-4d74-b510-3a9c2c375ecd\\\",\\\"systemUUID\\\":\\\"362c9708-b683-4c02-a83b-39323a200ef4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:10Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:10 crc kubenswrapper[4853]: E1122 07:11:10.225187 4853 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 22 07:11:10 crc kubenswrapper[4853]: I1122 07:11:10.227504 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
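All four patch attempts above fail at the TLS handshake, before the payload is ever evaluated: the serving certificate of the node.network-node-identity.openshift.io webhook expired on 2025-08-24T17:21:41Z, while the node clock reads 2025-11-22T07:11:10Z. A minimal Go sketch for confirming this from the node, assuming only that the webhook listens on 127.0.0.1:9743 as the Post URL in the log shows; InsecureSkipVerify is deliberate, since verification is exactly what fails on an expired certificate:

package main

import (
	"crypto/tls"
	"fmt"
	"log"
)

func main() {
	// Endpoint taken from the webhook Post URL in the log above.
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{
		// Skip verification on purpose: we want to read the expired
		// certificate, not validate it.
		InsecureSkipVerify: true,
	})
	if err != nil {
		log.Fatalf("dial webhook: %v", err)
	}
	defer conn.Close()
	for _, cert := range conn.ConnectionState().PeerCertificates {
		fmt.Printf("subject=%s notBefore=%s notAfter=%s\n",
			cert.Subject, cert.NotBefore, cert.NotAfter)
	}
}

Run against this node it should report notAfter=2025-08-24T17:21:41Z, matching the x509 error string in every failed patch.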
event="NodeHasSufficientMemory" Nov 22 07:11:10 crc kubenswrapper[4853]: I1122 07:11:10.227565 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:10 crc kubenswrapper[4853]: I1122 07:11:10.227583 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:10 crc kubenswrapper[4853]: I1122 07:11:10.227687 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:10 crc kubenswrapper[4853]: I1122 07:11:10.227704 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:10Z","lastTransitionTime":"2025-11-22T07:11:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:10 crc kubenswrapper[4853]: I1122 07:11:10.330162 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:10 crc kubenswrapper[4853]: I1122 07:11:10.330209 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:10 crc kubenswrapper[4853]: I1122 07:11:10.330220 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:10 crc kubenswrapper[4853]: I1122 07:11:10.330242 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:10 crc kubenswrapper[4853]: I1122 07:11:10.330254 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:10Z","lastTransitionTime":"2025-11-22T07:11:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:10 crc kubenswrapper[4853]: I1122 07:11:10.432666 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:10 crc kubenswrapper[4853]: I1122 07:11:10.433009 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:10 crc kubenswrapper[4853]: I1122 07:11:10.433020 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:10 crc kubenswrapper[4853]: I1122 07:11:10.433038 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:10 crc kubenswrapper[4853]: I1122 07:11:10.433048 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:10Z","lastTransitionTime":"2025-11-22T07:11:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Nov 22 07:11:10 crc kubenswrapper[4853]: I1122 07:11:10.536470 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:10 crc kubenswrapper[4853]: I1122 07:11:10.536524 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:10 crc kubenswrapper[4853]: I1122 07:11:10.536543 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:10 crc kubenswrapper[4853]: I1122 07:11:10.536567 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:10 crc kubenswrapper[4853]: I1122 07:11:10.536584 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:10Z","lastTransitionTime":"2025-11-22T07:11:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 22 07:11:10 crc kubenswrapper[4853]: I1122 07:11:10.551741 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqtsz" event={"ID":"893f7e02-580a-4093-ab42-ea73ffffcfe6","Type":"ContainerStarted","Data":"34d7101a46b49cedde91b3502f1bc962179253a1bfbe5f21599ce89a91ab905d"}
Nov 22 07:11:10 crc kubenswrapper[4853]: I1122 07:11:10.551833 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqtsz" event={"ID":"893f7e02-580a-4093-ab42-ea73ffffcfe6","Type":"ContainerStarted","Data":"28ab876013e6c4eb3a4806b0184f1d6615a478a34b6d8ee790d29bccd77f33df"}
Nov 22 07:11:10 crc kubenswrapper[4853]: I1122 07:11:10.553965 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ckn94" event={"ID":"6f60b37f-d6f5-4145-a3e7-cfe92fca6d77","Type":"ContainerDied","Data":"db0b80c2471d5179ede5c4055fcebaabe08892f1534218952c2a19ab599be022"}
Nov 22 07:11:10 crc kubenswrapper[4853]: I1122 07:11:10.554038 4853 generic.go:334] "Generic (PLEG): container finished" podID="6f60b37f-d6f5-4145-a3e7-cfe92fca6d77" containerID="db0b80c2471d5179ede5c4055fcebaabe08892f1534218952c2a19ab599be022" exitCode=0
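The "Failed to update status for pod" entries that follow are the same certificate problem on the pod path: kubelet status updates are strategic-merge PATCH calls against the status subresource, and on this cluster both variants are intercepted by the network-node-identity webhooks (node.network-node-identity.openshift.io and pod.network-node-identity.openshift.io). For orientation, a client-go sketch of how such a patch is issued; the kubeconfig path and the trimmed patch body are illustrative stand-ins shaped after the payloads in this log, not the kubelet's actual code path:

package main

import (
	"context"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Illustrative kubeconfig path; the kubelet uses its own credentials.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	// $setElementOrder/conditions is the strategic-merge directive that pins
	// the order of the merged conditions list; it heads every payload above.
	patch := []byte(`{"status":{"$setElementOrder/conditions":[` +
		`{"type":"MemoryPressure"},{"type":"DiskPressure"},` +
		`{"type":"PIDPressure"},{"type":"Ready"}],` +
		`"conditions":[{"type":"Ready","status":"False","reason":"KubeletNotReady"}]}}`)

	// "status" targets the subresource; the admission webhook sees this call,
	// which is why an expired webhook certificate blocks every update.
	_, err = cs.CoreV1().Nodes().Patch(context.TODO(), "crc",
		types.StrategicMergePatchType, patch, metav1.PatchOptions{}, "status")
	if err != nil {
		log.Fatalf("patch node status: %v", err)
	}
}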
Nov 22 07:11:10 crc kubenswrapper[4853]: I1122 07:11:10.571370 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:10Z is after 2025-08-24T17:21:41Z"
Nov 22 07:11:10 crc kubenswrapper[4853]: I1122 07:11:10.589601 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9003355-0dc6-42ad-950d-a7884e333f4c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f0019c0473b7ccc974dc1659a7ae0565b39d69af17094c420625387ecc9d7f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94704fd95e75549ea29613f00da78d4517cc64896cf1274c9c6f784a61f4ad4a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T07:10:53Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI1122 07:10:26.514729 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI1122 07:10:26.525241 1 observer_polling.go:159] Starting file observer\\\\nI1122 07:10:26.883412 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI1122 07:10:26.897202 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI1122 07:10:50.420853 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nI1122 07:10:51.525898 1 observer_polling.go:162] Shutting down file observer\\\\nF1122 07:10:53.463148 1 cmd.go:179] failed checking apiserver connectivity: client rate limiter Wait returned an error: context canceled\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:21Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9736c38776f22909b8e8a832683d1b881f020257f7183397286e0dd23c973a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30f542bc05dc23bc8dff5029894d08a6fea8333de31a66432814b7bbc412c4fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5d8d77818159ca98019aac809db4b1b2460723bc56b9b6728eec11faeefd026\\\",\\\"image\\\":\\\"quay.i
o/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:10Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:10 crc kubenswrapper[4853]: I1122 07:11:10.603960 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:10Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:10 crc kubenswrapper[4853]: I1122 07:11:10.624038 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:10Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:10 crc kubenswrapper[4853]: I1122 07:11:10.639692 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:10 crc kubenswrapper[4853]: I1122 07:11:10.639743 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:10 crc kubenswrapper[4853]: I1122 07:11:10.639775 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:10 crc kubenswrapper[4853]: I1122 07:11:10.639796 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:10 crc kubenswrapper[4853]: I1122 07:11:10.639808 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:10Z","lastTransitionTime":"2025-11-22T07:11:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:10 crc kubenswrapper[4853]: I1122 07:11:10.641648 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rvgxj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbbe3472-17cc-48dd-8e46-393b00149429\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5acc4b664aaf1d5b0471beb25334e65152bc3b9907f911f90e4e7899aaa1ce8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ct6dm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rvgxj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:10Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:10 crc kubenswrapper[4853]: I1122 07:11:10.653032 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:11:10 crc kubenswrapper[4853]: E1122 07:11:10.653310 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:11:26.653281986 +0000 UTC m=+85.493904612 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:11:10 crc kubenswrapper[4853]: I1122 07:11:10.657652 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ckn94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f60b37f-d6f5-4145-a3e7-cfe92fca6d77\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3adc38c2534c8026e260a63060426102130ca1bcbcb4e3741aaf4e87170a4c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3adc38c2534c8026e260a63060426102130ca1bcbcb4e3741aaf4e87170a4c7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db0b80c2471d5179ede5c4055fcebaabe08892f1534218952c2a19ab599be022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db0b80c2471d5179ede5c4055fcebaabe08892f1534218952c2a19ab599be022\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-
22T07:11:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ckn94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:10Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:10 crc kubenswrapper[4853]: I1122 07:11:10.676056 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8dcea4b-e887-437a-972d-e6e10d41f725\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://753693005f8490e9928780fd4f230a1c8aba453e1ccbca71e70299799bfa46ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64d65e4d54cc8971b6f6349f7d32243b7faa5bb5974b22ed904a36fa34bc69fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0ba60480f038947ca4a2b9f963d38cb004714735d411218eaf6231614c4e21f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageI
D\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c83df4b798499177cfbd1372a939649b6cebb71b50b695af5ba7815cf450669\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed9e89cbcdfc1ac8430f617ee31a367c03be0a407da8af967930fa821e6a12a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://742d02f24a7096b76f82453ab15e531d4a378b4bbf0bb36eb1a1cb4ae1873165\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://742d02f24a7096b76f82453ab15e531d4a378b4bbf0bb36eb1a1cb4ae1873165\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2253bab71d535b61f7489679566abc1acf210e0d5f69773fedbb02befab17c2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2253bab71d535b61f7489679566abc1acf210e0d5f69773fedbb02befab17c2d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:21Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8fa71a988b649937f137ffb7533456570bf4fcd199134424fc7bff2b62335f1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fa71a988b649937f137ffb7533456570bf4fcd199134424fc7bff2b62335f1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:16Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:10Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:10 crc kubenswrapper[4853]: I1122 07:11:10.691119 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9c07602dc6c3b45809dd76a7079002e54e0bae24f7e3bc3470b463389303e74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:10Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:10 crc kubenswrapper[4853]: I1122 07:11:10.711455 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqtsz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"893f7e02-580a-4093-ab42-ea73ffffcfe6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7b9396399dd9787c1addcb4e2e7697d78fee817d56cafd408b08b2b7d0cc5f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7b9396399dd9787c1addcb4e2e7697d78fee817d56cafd408b08b2b7d0cc5f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pqtsz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:10Z 
is after 2025-08-24T17:21:41Z" Nov 22 07:11:10 crc kubenswrapper[4853]: I1122 07:11:10.724396 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9nx9m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120cba0a-6e0b-40b3-8c15-46e7ff7c8641\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7b7b92862fc6cca812cb5c0aa9b320fbbd11f8b458d591b56cacab985a79edd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4p7q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9nx9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:10Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:10 crc kubenswrapper[4853]: I1122 07:11:10.739292 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"19e6b5d3-a090-47b9-bddc-dd2aaccd4ff4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96b61660fcd78f2f81eab2c1a0cb32018a5b32d49581167e33e7dca8ba5ddf2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a0d61ce2e58ce8d9abdd1fb9688722d8e25f8354553aef76b8cd4b3897135a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21be0b3346d0dd20c21a867b8fbd30e6291d09bd1745febbc2face7610f6d831\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6dbbb1c22bd116e5913c6bc774b4705f0e4342861ceae30dc0f53c30d6fa39f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://c6dbbb1c22bd116e5913c6bc774b4705f0e4342861ceae30dc0f53c30d6fa39f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T07:10:53Z\\\",\\\"message\\\":\\\" envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1122 07:10:53.494379 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1122 07:10:53.495005 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1122 07:10:53.495036 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1122 07:10:53.495110 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1122 07:10:53.495131 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1122 07:10:53.496216 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1763795444\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1763795444\\\\\\\\\\\\\\\" (2025-11-22 06:10:44 +0000 UTC to 2026-11-22 06:10:44 +0000 UTC (now=2025-11-22 07:10:53.49617265 +0000 UTC))\\\\\\\"\\\\nI1122 07:10:53.496261 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1122 07:10:53.496336 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1122 07:10:53.497036 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1122 07:10:53.497137 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1122 07:10:53.498721 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI1122 07:10:53.500291 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF1122 07:10:53.501305 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:32Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c1ee716027cb8b663ee9faec3faed873df5079418b9b46574cf3b3a74048eef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13ca1fba1325e0ae8556def3ba0c98aa0aff6e3b9ed2aab66a5c950d2bace44e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13ca1fba1325e0ae8556def3ba0c98aa0aff6e3b9ed2aab66a5c950d2bace44e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:10Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:10 crc kubenswrapper[4853]: I1122 07:11:10.742241 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:10 crc kubenswrapper[4853]: I1122 07:11:10.742290 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:10 crc kubenswrapper[4853]: I1122 07:11:10.742302 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:10 crc kubenswrapper[4853]: I1122 07:11:10.742321 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:10 crc kubenswrapper[4853]: I1122 07:11:10.742332 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:10Z","lastTransitionTime":"2025-11-22T07:11:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:10 crc kubenswrapper[4853]: I1122 07:11:10.747101 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:11:10 crc kubenswrapper[4853]: I1122 07:11:10.747205 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 07:11:10 crc kubenswrapper[4853]: I1122 07:11:10.747108 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 07:11:10 crc kubenswrapper[4853]: E1122 07:11:10.747238 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 07:11:10 crc kubenswrapper[4853]: E1122 07:11:10.747357 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 07:11:10 crc kubenswrapper[4853]: E1122 07:11:10.747446 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 07:11:10 crc kubenswrapper[4853]: I1122 07:11:10.750964 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mlpz8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c09200ba-013f-45e3-b581-8523557344b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3feee491b899c2b2f58330fe2e87ba5404deca4249c5d39a6d3aa08acd10ef29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvk87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mlpz8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:10Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:10 crc kubenswrapper[4853]: I1122 07:11:10.753839 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:11:10 crc kubenswrapper[4853]: I1122 07:11:10.753895 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") 
" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 07:11:10 crc kubenswrapper[4853]: I1122 07:11:10.753939 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 07:11:10 crc kubenswrapper[4853]: E1122 07:11:10.753955 4853 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 22 07:11:10 crc kubenswrapper[4853]: I1122 07:11:10.753971 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:11:10 crc kubenswrapper[4853]: E1122 07:11:10.754029 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-22 07:11:26.754011376 +0000 UTC m=+85.594634012 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 22 07:11:10 crc kubenswrapper[4853]: E1122 07:11:10.754087 4853 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 22 07:11:10 crc kubenswrapper[4853]: E1122 07:11:10.754095 4853 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 22 07:11:10 crc kubenswrapper[4853]: E1122 07:11:10.754123 4853 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 22 07:11:10 crc kubenswrapper[4853]: E1122 07:11:10.754129 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-22 07:11:26.754117429 +0000 UTC m=+85.594740075 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 22 07:11:10 crc kubenswrapper[4853]: E1122 07:11:10.754140 4853 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 22 07:11:10 crc kubenswrapper[4853]: E1122 07:11:10.754095 4853 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 22 07:11:10 crc kubenswrapper[4853]: E1122 07:11:10.754174 4853 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 22 07:11:10 crc kubenswrapper[4853]: E1122 07:11:10.754190 4853 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 22 07:11:10 crc kubenswrapper[4853]: E1122 07:11:10.754207 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-22 07:11:26.754186261 +0000 UTC m=+85.594809047 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 22 07:11:10 crc kubenswrapper[4853]: E1122 07:11:10.754232 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-22 07:11:26.754218322 +0000 UTC m=+85.594841178 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 22 07:11:10 crc kubenswrapper[4853]: I1122 07:11:10.762153 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://347fc1ee4909acd862f23a9f25b4ac7ab7271ff03f0f726479d9b711820970c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:10Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:10 crc kubenswrapper[4853]: I1122 07:11:10.774980 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"476c875a-2b87-419a-8042-0ba059620fd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://babd614ecd140db7467335933695fcc122380ef2c6a3d4ae6e89b0ed29e87d69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zkhrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28cf28a4f0e05df5ad55eff2ab13e375fb1d5725f1d0e3b85c7c9cd785cf4453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zkhrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fflvd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:10Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:10 crc kubenswrapper[4853]: I1122 07:11:10.786848 4853 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63551235-20f7-4ccc-a132-9f5302e01da5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0368eb3431a7b8e0225b3728f72d601ce3f9639f06175b96818fd798b33e4143\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fc661089ab1f7286ead68fd510a0848c9df8dd41a8ed349d03b43b72e1a369a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdc0676190f39e3a925b593e663ec1af1fa253b8177b7cb81fd49515e0c55c3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":
[{\\\"containerID\\\":\\\"cri-o://6a97a0d92e5fdd8906fa4f150c4b7d95fdd1f3b92ac45a46e0a6846fd4a15bb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a97a0d92e5fdd8906fa4f150c4b7d95fdd1f3b92ac45a46e0a6846fd4a15bb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:19Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:16Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:10Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:10 crc kubenswrapper[4853]: I1122 07:11:10.799216 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b27716f9e6c912f968c4a340199cdc2dde0262d86b4832f84cae8daefa831909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a2fb699e2e8a016fa383c1d4ed2d4f0d69c26b79fb2f0dd735a84de2fd5a23d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d209948
2919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:10Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:10 crc kubenswrapper[4853]: I1122 07:11:10.845766 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:10 crc kubenswrapper[4853]: I1122 07:11:10.845817 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:10 crc kubenswrapper[4853]: I1122 07:11:10.845829 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:10 crc kubenswrapper[4853]: I1122 07:11:10.845853 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:10 crc kubenswrapper[4853]: I1122 07:11:10.845868 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:10Z","lastTransitionTime":"2025-11-22T07:11:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:10 crc kubenswrapper[4853]: I1122 07:11:10.948960 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:10 crc kubenswrapper[4853]: I1122 07:11:10.949363 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:10 crc kubenswrapper[4853]: I1122 07:11:10.949376 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:10 crc kubenswrapper[4853]: I1122 07:11:10.949414 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:10 crc kubenswrapper[4853]: I1122 07:11:10.949428 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:10Z","lastTransitionTime":"2025-11-22T07:11:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:11 crc kubenswrapper[4853]: I1122 07:11:11.052816 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:11 crc kubenswrapper[4853]: I1122 07:11:11.052861 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:11 crc kubenswrapper[4853]: I1122 07:11:11.052872 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:11 crc kubenswrapper[4853]: I1122 07:11:11.052893 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:11 crc kubenswrapper[4853]: I1122 07:11:11.052907 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:11Z","lastTransitionTime":"2025-11-22T07:11:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:11 crc kubenswrapper[4853]: I1122 07:11:11.155656 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:11 crc kubenswrapper[4853]: I1122 07:11:11.155712 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:11 crc kubenswrapper[4853]: I1122 07:11:11.155726 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:11 crc kubenswrapper[4853]: I1122 07:11:11.155781 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:11 crc kubenswrapper[4853]: I1122 07:11:11.155813 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:11Z","lastTransitionTime":"2025-11-22T07:11:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:11 crc kubenswrapper[4853]: I1122 07:11:11.259151 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:11 crc kubenswrapper[4853]: I1122 07:11:11.259200 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:11 crc kubenswrapper[4853]: I1122 07:11:11.259211 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:11 crc kubenswrapper[4853]: I1122 07:11:11.259230 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:11 crc kubenswrapper[4853]: I1122 07:11:11.259242 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:11Z","lastTransitionTime":"2025-11-22T07:11:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:11 crc kubenswrapper[4853]: I1122 07:11:11.365957 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:11 crc kubenswrapper[4853]: I1122 07:11:11.366003 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:11 crc kubenswrapper[4853]: I1122 07:11:11.366012 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:11 crc kubenswrapper[4853]: I1122 07:11:11.366031 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:11 crc kubenswrapper[4853]: I1122 07:11:11.366042 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:11Z","lastTransitionTime":"2025-11-22T07:11:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:11 crc kubenswrapper[4853]: I1122 07:11:11.468881 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:11 crc kubenswrapper[4853]: I1122 07:11:11.468940 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:11 crc kubenswrapper[4853]: I1122 07:11:11.468951 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:11 crc kubenswrapper[4853]: I1122 07:11:11.468972 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:11 crc kubenswrapper[4853]: I1122 07:11:11.468985 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:11Z","lastTransitionTime":"2025-11-22T07:11:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:11 crc kubenswrapper[4853]: I1122 07:11:11.561439 4853 generic.go:334] "Generic (PLEG): container finished" podID="6f60b37f-d6f5-4145-a3e7-cfe92fca6d77" containerID="157ea229b1effe5885acafb7b48db5695864ff001b1293c053aefb494f544f7a" exitCode=0 Nov 22 07:11:11 crc kubenswrapper[4853]: I1122 07:11:11.561511 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ckn94" event={"ID":"6f60b37f-d6f5-4145-a3e7-cfe92fca6d77","Type":"ContainerDied","Data":"157ea229b1effe5885acafb7b48db5695864ff001b1293c053aefb494f544f7a"} Nov 22 07:11:11 crc kubenswrapper[4853]: I1122 07:11:11.572232 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:11 crc kubenswrapper[4853]: I1122 07:11:11.572311 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:11 crc kubenswrapper[4853]: I1122 07:11:11.572336 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:11 crc kubenswrapper[4853]: I1122 07:11:11.572376 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:11 crc kubenswrapper[4853]: I1122 07:11:11.572400 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:11Z","lastTransitionTime":"2025-11-22T07:11:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:11 crc kubenswrapper[4853]: I1122 07:11:11.584324 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8dcea4b-e887-437a-972d-e6e10d41f725\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://753693005f8490e9928780fd4f230a1c8aba453e1ccbca71e70299799bfa46ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64d65e4d54cc8971b6f6349f7d32243b7faa5bb5974b22ed904a36fa34bc69fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0ba60480f038947ca4a2b9f963d38cb004714735d411218eaf6231614c4e21f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c83df4b798499177cfbd1372a939649b6cebb7
1b50b695af5ba7815cf450669\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed9e89cbcdfc1ac8430f617ee31a367c03be0a407da8af967930fa821e6a12a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://742d02f24a7096b76f82453ab15e531d4a378b4bbf0bb36eb1a1cb4ae1873165\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://742d02f24a7096b76f82453ab15e531d4a378b4bbf0bb36eb1a1cb4ae1873165\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2253bab71d535b61f7489679566abc1acf210e0d5f69773fedbb02befab17c2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2253bab71d535b61f7489679566abc1acf210e0d5f69773fedbb02befab17c2d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:21Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8fa71a988b649937f137ffb7533456570bf4fcd199134424fc7bff2b62335f1e\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fa71a988b649937f137ffb7533456570bf4fcd199134424fc7bff2b62335f1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:16Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:11Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:11 crc kubenswrapper[4853]: I1122 07:11:11.600302 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9c07602dc6c3b45809dd76a7079002e54e0bae24f7e3bc3470b463389303e74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:11Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:11 crc kubenswrapper[4853]: I1122 07:11:11.614652 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:11Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:11 crc kubenswrapper[4853]: I1122 07:11:11.632134 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:11Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:11 crc kubenswrapper[4853]: I1122 07:11:11.650762 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rvgxj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbbe3472-17cc-48dd-8e46-393b00149429\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5acc4b664aaf1d5b0471beb25334e65152bc3b9907f911f90e4e7899aaa1ce8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ct6dm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rvgxj\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:11Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:11 crc kubenswrapper[4853]: I1122 07:11:11.668038 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ckn94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f60b37f-d6f5-4145-a3e7-cfe92fca6d77\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3adc38c2534c8026e260a63060426102130ca1bcbcb4e3741aaf4e87170a4c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3adc38c2534c8026e260a63060426102130ca1bcbcb4e3741aaf4e87170a4c7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\
":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db0b80c2471d5179ede5c4055fcebaabe08892f1534218952c2a19ab599be022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db0b80c2471d5179ede5c4055fcebaabe08892f1534218952c2a19ab599be022\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://157ea229b1effe5885acafb7b48db5695864ff001b1293c053aefb494f544f7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://157ea229b1effe5885acafb7b48db5695864ff001b1293c053aefb494f544f7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary
-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ckn94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:11Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:11 crc kubenswrapper[4853]: I1122 07:11:11.675038 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:11 crc kubenswrapper[4853]: I1122 07:11:11.675102 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:11 crc kubenswrapper[4853]: I1122 07:11:11.675117 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:11 crc kubenswrapper[4853]: I1122 07:11:11.675139 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:11 crc kubenswrapper[4853]: I1122 07:11:11.675159 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:11Z","lastTransitionTime":"2025-11-22T07:11:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:11 crc kubenswrapper[4853]: I1122 07:11:11.684564 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"19e6b5d3-a090-47b9-bddc-dd2aaccd4ff4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96b61660fcd78f2f81eab2c1a0cb32018a5b32d49581167e33e7dca8ba5ddf2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a0d61ce2e58ce8d9abdd1fb9688722d8e25f8354553aef76b8cd4b3897135a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21be0b3346d0dd20c21a867b8fbd30e6291d09bd1745febbc2face7610f6d831\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f
7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6dbbb1c22bd116e5913c6bc774b4705f0e4342861ceae30dc0f53c30d6fa39f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6dbbb1c22bd116e5913c6bc774b4705f0e4342861ceae30dc0f53c30d6fa39f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T07:10:53Z\\\",\\\"message\\\":\\\" envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1122 07:10:53.494379 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1122 07:10:53.495005 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1122 07:10:53.495036 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1122 07:10:53.495110 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1122 07:10:53.495131 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1122 07:10:53.496216 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1763795444\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1763795444\\\\\\\\\\\\\\\" (2025-11-22 06:10:44 +0000 UTC to 2026-11-22 06:10:44 +0000 UTC (now=2025-11-22 07:10:53.49617265 +0000 UTC))\\\\\\\"\\\\nI1122 07:10:53.496261 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1122 07:10:53.496336 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1122 07:10:53.497036 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1122 07:10:53.497137 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1122 07:10:53.498721 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI1122 07:10:53.500291 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF1122 07:10:53.501305 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:32Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c1ee716027cb8b663ee9faec3faed873df5079418b9b46574cf3b3a74048eef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13ca1fba1325e0ae8556def3ba0c98aa0aff6e3b9ed2aab66a5c950d2bace44e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13ca1fba1325e0ae8556def3ba0c98aa0aff6e3b9ed2aab66a5c950d2bace44e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:11Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:11 crc kubenswrapper[4853]: I1122 07:11:11.698708 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mlpz8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c09200ba-013f-45e3-b581-8523557344b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3feee491b899c2b2f58330fe2e87ba5404deca4249c5d39a6d3aa08acd10ef29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvk87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mlpz8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:11Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:11 crc kubenswrapper[4853]: I1122 07:11:11.719564 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqtsz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"893f7e02-580a-4093-ab42-ea73ffffcfe6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7b9396399dd9787c1addcb4e2e7697d78fee817d56cafd408b08b2b7d0cc5f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7b9396399dd9787c1addcb4e2e7697d78fee817d56cafd408b08b2b7d0cc5f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00
Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pqtsz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:11Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:11 crc kubenswrapper[4853]: I1122 07:11:11.733049 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9nx9m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120cba0a-6e0b-40b3-8c15-46e7ff7c8641\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7b7b92862fc6cca812cb5c0aa9b320fbbd11f8b458d591b56cacab985a79edd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4p7q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9nx9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:11Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:11 crc kubenswrapper[4853]: I1122 07:11:11.752884 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63551235-20f7-4ccc-a132-9f5302e01da5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0368eb3431a7b8e0225b3728f72d601ce3f9639f06175b96818fd798b33e4143\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fc661089ab1f7286ead68fd510a0848c9df8dd41a8ed349d03b43b72e1a369a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdc0676190f39e3a925b593e663ec1af1fa253b8177b7cb81fd49515e0c55c3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a97a0d92e5fdd8906fa4f150c4b7d95fdd1f3b92ac45a46e0a6846fd4a15bb0\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a97a0d92e5fdd8906fa4f150c4b7d95fdd1f3b92ac45a46e0a6846fd4a15bb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:19Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:16Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:11Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:11 crc kubenswrapper[4853]: I1122 07:11:11.770945 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b27716f9e6c912f968c4a340199cdc2dde0262d86b4832f84cae8daefa831909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a2fb699e2e8a016fa383c1d4ed2d4f0d69c26b79fb2f0dd735a84de2fd5a23d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:11Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:11 crc kubenswrapper[4853]: I1122 07:11:11.777530 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:11 crc kubenswrapper[4853]: I1122 07:11:11.777566 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:11 crc kubenswrapper[4853]: I1122 07:11:11.777575 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:11 crc kubenswrapper[4853]: I1122 07:11:11.777592 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:11 crc kubenswrapper[4853]: I1122 07:11:11.777602 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:11Z","lastTransitionTime":"2025-11-22T07:11:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:11 crc kubenswrapper[4853]: I1122 07:11:11.788798 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://347fc1ee4909acd862f23a9f25b4ac7ab7271ff03f0f726479d9b711820970c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:11Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:11 crc kubenswrapper[4853]: I1122 07:11:11.804525 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"476c875a-2b87-419a-8042-0ba059620fd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://babd614ecd140db7467335933695fcc122380ef2c6a3d4ae6e89b0ed29e87d69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zkhrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28cf28a4f0e05df5ad55eff2ab13e375fb1d5725f1d0e3b85c7c9cd785cf4453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zkhrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fflvd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:11Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:11 crc kubenswrapper[4853]: I1122 07:11:11.817774 4853 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9003355-0dc6-42ad-950d-a7884e333f4c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f0019c0473b7ccc974dc1659a7ae0565b39d69af17094c420625387ecc9d7f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94704fd95e75549ea29613f00da78d4517cc64896cf1274c9c6f784a61f4ad4a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T07:10:53Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI1122 07:10:26.514729 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI1122 07:10:26.525241 1 observer_polling.go:159] Starting file observer\\\\nI1122 07:10:26.883412 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI1122 07:10:26.897202 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI1122 07:10:50.420853 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nI1122 07:10:51.525898 1 observer_polling.go:162] Shutting down file observer\\\\nF1122 07:10:53.463148 1 cmd.go:179] failed checking apiserver connectivity: client rate limiter Wait returned an error: context canceled\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:21Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9736c38776f22909b8e8a832683d1b881f020257f7183397286e0dd23c973a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30f542bc05dc23bc8dff5029894d08a6fea8333de31a66432814b7bbc412c4fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5d8d77818159ca98019aac809db4b1b2460723bc56b9b6728eec11faeefd026\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager
-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:11Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:11 crc kubenswrapper[4853]: I1122 07:11:11.831318 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:11Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:11 crc kubenswrapper[4853]: I1122 07:11:11.880481 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:11 crc kubenswrapper[4853]: I1122 07:11:11.880515 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:11 crc kubenswrapper[4853]: I1122 07:11:11.880526 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:11 crc kubenswrapper[4853]: I1122 07:11:11.880543 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:11 crc kubenswrapper[4853]: I1122 07:11:11.880553 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:11Z","lastTransitionTime":"2025-11-22T07:11:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:11 crc kubenswrapper[4853]: I1122 07:11:11.983621 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:11 crc kubenswrapper[4853]: I1122 07:11:11.983667 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:11 crc kubenswrapper[4853]: I1122 07:11:11.983691 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:11 crc kubenswrapper[4853]: I1122 07:11:11.983717 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:11 crc kubenswrapper[4853]: I1122 07:11:11.983734 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:11Z","lastTransitionTime":"2025-11-22T07:11:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:12 crc kubenswrapper[4853]: I1122 07:11:12.086597 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:12 crc kubenswrapper[4853]: I1122 07:11:12.086648 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:12 crc kubenswrapper[4853]: I1122 07:11:12.086659 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:12 crc kubenswrapper[4853]: I1122 07:11:12.086681 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:12 crc kubenswrapper[4853]: I1122 07:11:12.086696 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:12Z","lastTransitionTime":"2025-11-22T07:11:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:12 crc kubenswrapper[4853]: I1122 07:11:12.189902 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:12 crc kubenswrapper[4853]: I1122 07:11:12.189960 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:12 crc kubenswrapper[4853]: I1122 07:11:12.189975 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:12 crc kubenswrapper[4853]: I1122 07:11:12.189996 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:12 crc kubenswrapper[4853]: I1122 07:11:12.190008 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:12Z","lastTransitionTime":"2025-11-22T07:11:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:12 crc kubenswrapper[4853]: I1122 07:11:12.293701 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:12 crc kubenswrapper[4853]: I1122 07:11:12.293766 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:12 crc kubenswrapper[4853]: I1122 07:11:12.293776 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:12 crc kubenswrapper[4853]: I1122 07:11:12.293800 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:12 crc kubenswrapper[4853]: I1122 07:11:12.293811 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:12Z","lastTransitionTime":"2025-11-22T07:11:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:12 crc kubenswrapper[4853]: I1122 07:11:12.397609 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:12 crc kubenswrapper[4853]: I1122 07:11:12.397672 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:12 crc kubenswrapper[4853]: I1122 07:11:12.397690 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:12 crc kubenswrapper[4853]: I1122 07:11:12.397713 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:12 crc kubenswrapper[4853]: I1122 07:11:12.397734 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:12Z","lastTransitionTime":"2025-11-22T07:11:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:12 crc kubenswrapper[4853]: I1122 07:11:12.501013 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:12 crc kubenswrapper[4853]: I1122 07:11:12.501075 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:12 crc kubenswrapper[4853]: I1122 07:11:12.501093 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:12 crc kubenswrapper[4853]: I1122 07:11:12.501119 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:12 crc kubenswrapper[4853]: I1122 07:11:12.501132 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:12Z","lastTransitionTime":"2025-11-22T07:11:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:12 crc kubenswrapper[4853]: I1122 07:11:12.603953 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:12 crc kubenswrapper[4853]: I1122 07:11:12.604003 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:12 crc kubenswrapper[4853]: I1122 07:11:12.604014 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:12 crc kubenswrapper[4853]: I1122 07:11:12.604033 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:12 crc kubenswrapper[4853]: I1122 07:11:12.604045 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:12Z","lastTransitionTime":"2025-11-22T07:11:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:12 crc kubenswrapper[4853]: I1122 07:11:12.707822 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:12 crc kubenswrapper[4853]: I1122 07:11:12.707872 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:12 crc kubenswrapper[4853]: I1122 07:11:12.707880 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:12 crc kubenswrapper[4853]: I1122 07:11:12.707898 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:12 crc kubenswrapper[4853]: I1122 07:11:12.707914 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:12Z","lastTransitionTime":"2025-11-22T07:11:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:12 crc kubenswrapper[4853]: I1122 07:11:12.747875 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 07:11:12 crc kubenswrapper[4853]: I1122 07:11:12.747936 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:11:12 crc kubenswrapper[4853]: E1122 07:11:12.748121 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 07:11:12 crc kubenswrapper[4853]: I1122 07:11:12.748178 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 07:11:12 crc kubenswrapper[4853]: E1122 07:11:12.748987 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 07:11:12 crc kubenswrapper[4853]: E1122 07:11:12.749051 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 07:11:12 crc kubenswrapper[4853]: I1122 07:11:12.811273 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:12 crc kubenswrapper[4853]: I1122 07:11:12.811314 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:12 crc kubenswrapper[4853]: I1122 07:11:12.811324 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:12 crc kubenswrapper[4853]: I1122 07:11:12.811342 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:12 crc kubenswrapper[4853]: I1122 07:11:12.811354 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:12Z","lastTransitionTime":"2025-11-22T07:11:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:12 crc kubenswrapper[4853]: I1122 07:11:12.914793 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:12 crc kubenswrapper[4853]: I1122 07:11:12.914848 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:12 crc kubenswrapper[4853]: I1122 07:11:12.914868 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:12 crc kubenswrapper[4853]: I1122 07:11:12.914894 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:12 crc kubenswrapper[4853]: I1122 07:11:12.914910 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:12Z","lastTransitionTime":"2025-11-22T07:11:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:13 crc kubenswrapper[4853]: I1122 07:11:13.017866 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:13 crc kubenswrapper[4853]: I1122 07:11:13.017913 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:13 crc kubenswrapper[4853]: I1122 07:11:13.017926 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:13 crc kubenswrapper[4853]: I1122 07:11:13.017945 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:13 crc kubenswrapper[4853]: I1122 07:11:13.017957 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:13Z","lastTransitionTime":"2025-11-22T07:11:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:13 crc kubenswrapper[4853]: I1122 07:11:13.121166 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:13 crc kubenswrapper[4853]: I1122 07:11:13.121733 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:13 crc kubenswrapper[4853]: I1122 07:11:13.121872 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:13 crc kubenswrapper[4853]: I1122 07:11:13.121949 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:13 crc kubenswrapper[4853]: I1122 07:11:13.122013 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:13Z","lastTransitionTime":"2025-11-22T07:11:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:13 crc kubenswrapper[4853]: I1122 07:11:13.225400 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:13 crc kubenswrapper[4853]: I1122 07:11:13.225444 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:13 crc kubenswrapper[4853]: I1122 07:11:13.225453 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:13 crc kubenswrapper[4853]: I1122 07:11:13.225473 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:13 crc kubenswrapper[4853]: I1122 07:11:13.225491 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:13Z","lastTransitionTime":"2025-11-22T07:11:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:13 crc kubenswrapper[4853]: I1122 07:11:13.328170 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:13 crc kubenswrapper[4853]: I1122 07:11:13.328221 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:13 crc kubenswrapper[4853]: I1122 07:11:13.328234 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:13 crc kubenswrapper[4853]: I1122 07:11:13.328256 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:13 crc kubenswrapper[4853]: I1122 07:11:13.328272 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:13Z","lastTransitionTime":"2025-11-22T07:11:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:13 crc kubenswrapper[4853]: I1122 07:11:13.431778 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:13 crc kubenswrapper[4853]: I1122 07:11:13.431827 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:13 crc kubenswrapper[4853]: I1122 07:11:13.431841 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:13 crc kubenswrapper[4853]: I1122 07:11:13.431858 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:13 crc kubenswrapper[4853]: I1122 07:11:13.431871 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:13Z","lastTransitionTime":"2025-11-22T07:11:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:13 crc kubenswrapper[4853]: I1122 07:11:13.535047 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:13 crc kubenswrapper[4853]: I1122 07:11:13.535110 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:13 crc kubenswrapper[4853]: I1122 07:11:13.535132 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:13 crc kubenswrapper[4853]: I1122 07:11:13.535156 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:13 crc kubenswrapper[4853]: I1122 07:11:13.535174 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:13Z","lastTransitionTime":"2025-11-22T07:11:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:13 crc kubenswrapper[4853]: I1122 07:11:13.571819 4853 generic.go:334] "Generic (PLEG): container finished" podID="6f60b37f-d6f5-4145-a3e7-cfe92fca6d77" containerID="203cba48e280628c33b464c336957085420ca2d345b1acfb5c8b8968ed2317e0" exitCode=0 Nov 22 07:11:13 crc kubenswrapper[4853]: I1122 07:11:13.571946 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ckn94" event={"ID":"6f60b37f-d6f5-4145-a3e7-cfe92fca6d77","Type":"ContainerDied","Data":"203cba48e280628c33b464c336957085420ca2d345b1acfb5c8b8968ed2317e0"} Nov 22 07:11:13 crc kubenswrapper[4853]: I1122 07:11:13.578940 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqtsz" event={"ID":"893f7e02-580a-4093-ab42-ea73ffffcfe6","Type":"ContainerStarted","Data":"902e41148618e1099d651f2a4cbfa00ca1dd11533b086564f090dd498ecc1b95"} Nov 22 07:11:13 crc kubenswrapper[4853]: I1122 07:11:13.597319 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9003355-0dc6-42ad-950d-a7884e333f4c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f0019c0473b7ccc974dc1659a7ae0565b39d69af17094c420625387ecc9d7f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94704fd95e75549ea29613f00da78d4517cc64896cf1274c9c6f784a61f4ad4a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T07:10:53Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI1122 07:10:26.514729 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI1122 07:10:26.525241 1 observer_polling.go:159] Starting file observer\\\\nI1122 07:10:26.883412 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI1122 07:10:26.897202 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI1122 07:10:50.420853 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nI1122 07:10:51.525898 1 observer_polling.go:162] Shutting down file observer\\\\nF1122 07:10:53.463148 1 cmd.go:179] failed checking apiserver connectivity: client rate limiter Wait returned an error: context canceled\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:21Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9736c38776f22909b8e8a832683d1b881f020257f7183397286e0dd23c973a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30f542bc05dc23bc8dff5029894d08a6fea8333de31a66432814b7bbc412c4fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5d8d77818159ca98019aac809db4b1b2460723bc56b9b6728eec11faeefd026\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager
-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:13Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:13 crc kubenswrapper[4853]: I1122 07:11:13.616169 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:13Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:13 crc kubenswrapper[4853]: I1122 07:11:13.638188 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:13 crc kubenswrapper[4853]: I1122 07:11:13.638246 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:13 crc kubenswrapper[4853]: I1122 07:11:13.638261 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:13 crc kubenswrapper[4853]: I1122 07:11:13.638285 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:13 crc kubenswrapper[4853]: I1122 07:11:13.638297 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:13Z","lastTransitionTime":"2025-11-22T07:11:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:13 crc kubenswrapper[4853]: I1122 07:11:13.647114 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8dcea4b-e887-437a-972d-e6e10d41f725\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://753693005f8490e9928780fd4f230a1c8aba453e1ccbca71e70299799bfa46ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64d65e4d54cc8971b6f6349f7d32243b7faa5bb5974b22ed904a36fa34bc69fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0ba60480f038947ca4a2b9f963d38cb004714735d411218eaf6231614c4e21f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c83df4b798499177cfbd1372a939649b6cebb71b50b695af5ba7815cf450669\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed9e89cbcdfc1ac8430f617ee31a367c03be0a407da8af967930fa821e6a12a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://742d02f24a7096b76f82453ab15e531d4a378b4bbf0bb36eb1a1cb4ae1873165\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://742d02f24a7096b76f82453ab15e531d4a378b4bbf0bb36eb1a1cb4ae1873165\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2253bab71d535b61f7489679566abc1acf210e0d5f69773fedbb02befab17c2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2253bab71d535b61f7489679566abc1acf210e0d5f69773fedbb02befab17c2d\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2025-11-22T07:10:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:21Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8fa71a988b649937f137ffb7533456570bf4fcd199134424fc7bff2b62335f1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fa71a988b649937f137ffb7533456570bf4fcd199134424fc7bff2b62335f1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:16Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:13Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:13 crc kubenswrapper[4853]: I1122 07:11:13.669556 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9c07602dc6c3b45809dd76a7079002e54e0bae24f7e3bc3470b463389303e74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:13Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:13 crc kubenswrapper[4853]: I1122 07:11:13.691177 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:13Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:13 crc kubenswrapper[4853]: I1122 07:11:13.709054 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:13Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:13 crc kubenswrapper[4853]: I1122 07:11:13.727004 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rvgxj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbbe3472-17cc-48dd-8e46-393b00149429\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5acc4b664aaf1d5b0471beb25334e65152bc3b9907f911f90e4e7899aaa1ce8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ct6dm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rvgxj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:13Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:13 crc kubenswrapper[4853]: I1122 07:11:13.734929 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nhlw4"] Nov 22 07:11:13 crc kubenswrapper[4853]: I1122 07:11:13.735562 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nhlw4" Nov 22 07:11:13 crc kubenswrapper[4853]: I1122 07:11:13.737263 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Nov 22 07:11:13 crc kubenswrapper[4853]: I1122 07:11:13.738315 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Nov 22 07:11:13 crc kubenswrapper[4853]: I1122 07:11:13.741002 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:13 crc kubenswrapper[4853]: I1122 07:11:13.741062 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:13 crc kubenswrapper[4853]: I1122 07:11:13.741082 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:13 crc kubenswrapper[4853]: I1122 07:11:13.741110 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:13 crc kubenswrapper[4853]: I1122 07:11:13.741132 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:13Z","lastTransitionTime":"2025-11-22T07:11:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:13 crc kubenswrapper[4853]: I1122 07:11:13.747288 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ckn94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f60b37f-d6f5-4145-a3e7-cfe92fca6d77\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3adc38c2534c8026e260a63060426102130ca1bcbcb4e3741aaf4e87170a4c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3adc38c2534c8026e260a63060426102130ca1bcbcb4e3741aaf4e87170a4c7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db0b80c2471d5179ede5c4055fcebaabe08892f1534218952c2a19ab599be022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db0b80c2471d5179ede5c4055fcebaabe08892f1534218952c2a19ab599be022\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://157ea229b1effe5885acafb7b48db5695864ff001b1293c053aefb494f544f7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://157ea229b1effe5885acafb7b48db5695864ff001b1293c053aefb494f544f7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://203cba48e280628c33b464c336957085420ca2d345b1acfb5c8b8968ed2317e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://203cba48e280628c33b464c336957085420ca2d345b1acfb5c8b8968ed2317e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:
11:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ckn94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:13Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:13 crc kubenswrapper[4853]: I1122 07:11:13.767283 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"19e6b5d3-a090-47b9-bddc-dd2aaccd4ff4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96b61660fcd78f2f81eab2c1a0cb32018a5b32d49581167e33e7dca8ba5ddf2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a0d61ce2e58ce8d9abdd1fb9688722d8e25f8354553aef76b8cd4b3897135a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21be0b3346d0dd20c21a867b8fbd30e6291d09bd1745febbc2face7610f6d831\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6dbbb1c22bd116e5913c6bc774b4705f0e4342861ceae30dc0f53c30d6fa39f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://c6dbbb1c22bd116e5913c6bc774b4705f0e4342861ceae30dc0f53c30d6fa39f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T07:10:53Z\\\",\\\"message\\\":\\\" envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1122 07:10:53.494379 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1122 07:10:53.495005 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1122 07:10:53.495036 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1122 07:10:53.495110 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1122 07:10:53.495131 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1122 07:10:53.496216 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1763795444\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1763795444\\\\\\\\\\\\\\\" (2025-11-22 06:10:44 +0000 UTC to 2026-11-22 06:10:44 +0000 UTC (now=2025-11-22 07:10:53.49617265 +0000 UTC))\\\\\\\"\\\\nI1122 07:10:53.496261 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1122 07:10:53.496336 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1122 07:10:53.497036 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1122 07:10:53.497137 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1122 07:10:53.498721 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI1122 07:10:53.500291 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF1122 07:10:53.501305 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:32Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c1ee716027cb8b663ee9faec3faed873df5079418b9b46574cf3b3a74048eef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13ca1fba1325e0ae8556def3ba0c98aa0aff6e3b9ed2aab66a5c950d2bace44e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13ca1fba1325e0ae8556def3ba0c98aa0aff6e3b9ed2aab66a5c950d2bace44e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:13Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:13 crc kubenswrapper[4853]: I1122 07:11:13.782106 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mlpz8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c09200ba-013f-45e3-b581-8523557344b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3feee491b899c2b2f58330fe2e87ba5404deca4249c5d39a6d3aa08acd10ef29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvk87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mlpz8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:13Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:13 crc kubenswrapper[4853]: I1122 07:11:13.789175 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/81cf3334-f910-4d46-be00-b3cd66ba8ed4-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-nhlw4\" (UID: \"81cf3334-f910-4d46-be00-b3cd66ba8ed4\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nhlw4" Nov 22 07:11:13 crc kubenswrapper[4853]: I1122 07:11:13.789259 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/81cf3334-f910-4d46-be00-b3cd66ba8ed4-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-nhlw4\" (UID: \"81cf3334-f910-4d46-be00-b3cd66ba8ed4\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nhlw4" Nov 22 07:11:13 crc kubenswrapper[4853]: I1122 07:11:13.789288 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"env-overrides\" (UniqueName: \"kubernetes.io/configmap/81cf3334-f910-4d46-be00-b3cd66ba8ed4-env-overrides\") pod \"ovnkube-control-plane-749d76644c-nhlw4\" (UID: \"81cf3334-f910-4d46-be00-b3cd66ba8ed4\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nhlw4" Nov 22 07:11:13 crc kubenswrapper[4853]: I1122 07:11:13.789315 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6ckff\" (UniqueName: \"kubernetes.io/projected/81cf3334-f910-4d46-be00-b3cd66ba8ed4-kube-api-access-6ckff\") pod \"ovnkube-control-plane-749d76644c-nhlw4\" (UID: \"81cf3334-f910-4d46-be00-b3cd66ba8ed4\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nhlw4" Nov 22 07:11:13 crc kubenswrapper[4853]: I1122 07:11:13.803412 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqtsz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"893f7e02-580a-4093-ab42-ea73ffffcfe6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7b9396399dd9787c1addcb4e2e7697d78fee817d56cafd408b08b2b7d0cc5f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7b9396399dd9787c1addcb4e2e7697d78fee817d56cafd408b08b2b7d0cc5f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pqtsz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:13Z 
is after 2025-08-24T17:21:41Z" Nov 22 07:11:13 crc kubenswrapper[4853]: I1122 07:11:13.815083 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9nx9m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120cba0a-6e0b-40b3-8c15-46e7ff7c8641\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7b7b92862fc6cca812cb5c0aa9b320fbbd11f8b458d591b56cacab985a79edd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4p7q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9nx9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:13Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:13 crc kubenswrapper[4853]: I1122 07:11:13.826035 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63551235-20f7-4ccc-a132-9f5302e01da5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0368eb3431a7b8e0225b3728f72d601ce3f9639f06175b96818fd798b33e4143\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fc661089ab1f7286ead68fd510a0848c9df8dd41a8ed349d03b43b72e1a369a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdc0676190f39e3a925b593e663ec1af1fa253b8177b7cb81fd49515e0c55c3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a97a0d92e5fdd8906fa4f150c4b7d95fdd1f3b92ac45a46e0a6846fd4a15bb0\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a97a0d92e5fdd8906fa4f150c4b7d95fdd1f3b92ac45a46e0a6846fd4a15bb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:19Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:16Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:13Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:13 crc kubenswrapper[4853]: I1122 07:11:13.839111 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b27716f9e6c912f968c4a340199cdc2dde0262d86b4832f84cae8daefa831909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a2fb699e2e8a016fa383c1d4ed2d4f0d69c26b79fb2f0dd735a84de2fd5a23d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:13Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:13 crc kubenswrapper[4853]: I1122 07:11:13.842903 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:13 crc kubenswrapper[4853]: I1122 07:11:13.842951 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:13 crc kubenswrapper[4853]: I1122 07:11:13.842961 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:13 crc kubenswrapper[4853]: I1122 07:11:13.842977 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:13 crc kubenswrapper[4853]: I1122 07:11:13.842988 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:13Z","lastTransitionTime":"2025-11-22T07:11:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:13 crc kubenswrapper[4853]: I1122 07:11:13.851254 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://347fc1ee4909acd862f23a9f25b4ac7ab7271ff03f0f726479d9b711820970c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:13Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:13 crc kubenswrapper[4853]: I1122 07:11:13.865487 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"476c875a-2b87-419a-8042-0ba059620fd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://babd614ecd140db7467335933695fcc122380ef2c6a3d4ae6e89b0ed29e87d69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zkhrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28cf28a4f0e05df5ad55eff2ab13e375fb1d5725f1d0e3b85c7c9cd785cf4453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zkhrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fflvd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:13Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:13 crc kubenswrapper[4853]: I1122 07:11:13.882219 4853 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ckn94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f60b37f-d6f5-4145-a3e7-cfe92fca6d77\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3adc38c2534c8026e260a63060426102130ca1bcbcb4e3741aaf4e87170a4c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3adc38c2534c8026e260a63060426102130ca1bcbcb4e3741aaf4e87170a4c7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db0b80c
2471d5179ede5c4055fcebaabe08892f1534218952c2a19ab599be022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db0b80c2471d5179ede5c4055fcebaabe08892f1534218952c2a19ab599be022\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://157ea229b1effe5885acafb7b48db5695864ff001b1293c053aefb494f544f7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://157ea229b1effe5885acafb7b48db5695864ff001b1293c053aefb494f544f7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://203cba48e280628c33b464c336957085420ca2d345b1acfb5c8b8968ed2317e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://203cba48e280628c33b464c336957085420ca2d345b1acfb5c8b8968ed2317e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ckn94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:13Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:13 crc kubenswrapper[4853]: I1122 07:11:13.890100 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6ckff\" (UniqueName: \"kubernetes.io/projected/81cf3334-f910-4d46-be00-b3cd66ba8ed4-kube-api-access-6ckff\") pod \"ovnkube-control-plane-749d76644c-nhlw4\" (UID: \"81cf3334-f910-4d46-be00-b3cd66ba8ed4\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nhlw4" Nov 22 07:11:13 crc kubenswrapper[4853]: I1122 07:11:13.890220 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/81cf3334-f910-4d46-be00-b3cd66ba8ed4-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-nhlw4\" (UID: \"81cf3334-f910-4d46-be00-b3cd66ba8ed4\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nhlw4" Nov 22 07:11:13 crc kubenswrapper[4853]: I1122 07:11:13.890297 4853 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/81cf3334-f910-4d46-be00-b3cd66ba8ed4-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-nhlw4\" (UID: \"81cf3334-f910-4d46-be00-b3cd66ba8ed4\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nhlw4" Nov 22 07:11:13 crc kubenswrapper[4853]: I1122 07:11:13.890450 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/81cf3334-f910-4d46-be00-b3cd66ba8ed4-env-overrides\") pod \"ovnkube-control-plane-749d76644c-nhlw4\" (UID: \"81cf3334-f910-4d46-be00-b3cd66ba8ed4\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nhlw4" Nov 22 07:11:13 crc kubenswrapper[4853]: I1122 07:11:13.891634 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/81cf3334-f910-4d46-be00-b3cd66ba8ed4-env-overrides\") pod \"ovnkube-control-plane-749d76644c-nhlw4\" (UID: \"81cf3334-f910-4d46-be00-b3cd66ba8ed4\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nhlw4" Nov 22 07:11:13 crc kubenswrapper[4853]: I1122 07:11:13.891979 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/81cf3334-f910-4d46-be00-b3cd66ba8ed4-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-nhlw4\" (UID: \"81cf3334-f910-4d46-be00-b3cd66ba8ed4\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nhlw4" Nov 22 07:11:13 crc kubenswrapper[4853]: I1122 07:11:13.899573 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/81cf3334-f910-4d46-be00-b3cd66ba8ed4-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-nhlw4\" (UID: \"81cf3334-f910-4d46-be00-b3cd66ba8ed4\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nhlw4" Nov 22 07:11:13 crc kubenswrapper[4853]: I1122 07:11:13.911723 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6ckff\" (UniqueName: \"kubernetes.io/projected/81cf3334-f910-4d46-be00-b3cd66ba8ed4-kube-api-access-6ckff\") pod \"ovnkube-control-plane-749d76644c-nhlw4\" (UID: \"81cf3334-f910-4d46-be00-b3cd66ba8ed4\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nhlw4" Nov 22 07:11:13 crc kubenswrapper[4853]: I1122 07:11:13.913687 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8dcea4b-e887-437a-972d-e6e10d41f725\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://753693005f8490e9928780fd4f230a1c8aba453e1ccbca71e70299799bfa46ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64d65e4d54cc8971b6f6349f7d32243b7faa5bb5974b22ed904a36fa34bc69fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0ba60480f038947ca4a2b9f963d38cb004714735d411218eaf6231614c4e21f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c83df4b798499177cfbd1372a939649b6cebb7
1b50b695af5ba7815cf450669\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed9e89cbcdfc1ac8430f617ee31a367c03be0a407da8af967930fa821e6a12a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://742d02f24a7096b76f82453ab15e531d4a378b4bbf0bb36eb1a1cb4ae1873165\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://742d02f24a7096b76f82453ab15e531d4a378b4bbf0bb36eb1a1cb4ae1873165\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2253bab71d535b61f7489679566abc1acf210e0d5f69773fedbb02befab17c2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2253bab71d535b61f7489679566abc1acf210e0d5f69773fedbb02befab17c2d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:21Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8fa71a988b649937f137ffb7533456570bf4fcd199134424fc7bff2b62335f1e\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fa71a988b649937f137ffb7533456570bf4fcd199134424fc7bff2b62335f1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:16Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:13Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:13 crc kubenswrapper[4853]: I1122 07:11:13.933015 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9c07602dc6c3b45809dd76a7079002e54e0bae24f7e3bc3470b463389303e74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:13Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:13 crc kubenswrapper[4853]: I1122 07:11:13.947367 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:13 crc kubenswrapper[4853]: I1122 07:11:13.947414 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:13 crc kubenswrapper[4853]: I1122 07:11:13.947424 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:13 crc kubenswrapper[4853]: I1122 07:11:13.947441 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:13 crc kubenswrapper[4853]: I1122 07:11:13.947452 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:13Z","lastTransitionTime":"2025-11-22T07:11:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:13 crc kubenswrapper[4853]: I1122 07:11:13.949589 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:13Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:13 crc kubenswrapper[4853]: I1122 07:11:13.965251 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:13Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:13 crc kubenswrapper[4853]: I1122 07:11:13.990639 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rvgxj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbbe3472-17cc-48dd-8e46-393b00149429\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5acc4b664aaf1d5b0471beb25334e65152bc3b9907f911f90e4e7899aaa1ce8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ct6dm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rvgxj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:13Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:14 crc kubenswrapper[4853]: I1122 07:11:14.009115 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"19e6b5d3-a090-47b9-bddc-dd2aaccd4ff4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96b61660fcd78f2f81eab2c1a0cb32018a5b32d49581167e33e7dca8ba5ddf2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a0d61ce2e58ce8d9abdd1fb9688722d8e25f8354553aef76b8cd4b3897135a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21be0b3346d0dd20c21a867b8fbd30e6291d09bd1745febbc2face7610f6d831\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6dbbb1c22bd116e5913c6bc774b4705f0e4342861ceae30dc0f53c30d6fa39f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6dbbb1c22bd116e5913c6bc774b4705f0e4342861ceae30dc0f53c30d6fa39f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T07:10:53Z\\\",\\\"message\\\":\\\" envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" 
enabled=false\\\\nI1122 07:10:53.494379 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1122 07:10:53.495005 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1122 07:10:53.495036 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1122 07:10:53.495110 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1122 07:10:53.495131 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1122 07:10:53.496216 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1763795444\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1763795444\\\\\\\\\\\\\\\" (2025-11-22 06:10:44 +0000 UTC to 2026-11-22 06:10:44 +0000 UTC (now=2025-11-22 07:10:53.49617265 +0000 UTC))\\\\\\\"\\\\nI1122 07:10:53.496261 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1122 07:10:53.496336 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1122 07:10:53.497036 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1122 07:10:53.497137 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1122 07:10:53.498721 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI1122 07:10:53.500291 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF1122 07:10:53.501305 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:32Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c1ee716027cb8b663ee9faec3faed873df5079418b9b46574cf3b3a74048eef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13ca1fba1325e0ae8556def3ba0c98aa0aff6e3b9ed2aab66a5c950d2bace44e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13ca1fba1325e0ae8556def3ba0c98aa0aff6e3b9ed2aab66a5c950d2bace44e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:14Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:14 crc kubenswrapper[4853]: I1122 07:11:14.022522 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mlpz8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c09200ba-013f-45e3-b581-8523557344b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3feee491b899c2b2f58330fe2e87ba5404deca4249c5d39a6d3aa08acd10ef29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvk87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mlpz8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:14Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:14 crc kubenswrapper[4853]: I1122 07:11:14.050167 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:14 crc kubenswrapper[4853]: I1122 07:11:14.050245 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:14 crc kubenswrapper[4853]: I1122 07:11:14.050256 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:14 crc kubenswrapper[4853]: I1122 07:11:14.050274 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:14 crc kubenswrapper[4853]: I1122 07:11:14.050286 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:14Z","lastTransitionTime":"2025-11-22T07:11:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:14 crc kubenswrapper[4853]: I1122 07:11:14.052139 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqtsz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"893f7e02-580a-4093-ab42-ea73ffffcfe6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"
ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",
\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"c
ri-o://a7b9396399dd9787c1addcb4e2e7697d78fee817d56cafd408b08b2b7d0cc5f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7b9396399dd9787c1addcb4e2e7697d78fee817d56cafd408b08b2b7d0cc5f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pqtsz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:14Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:14 crc kubenswrapper[4853]: I1122 07:11:14.053224 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nhlw4" Nov 22 07:11:14 crc kubenswrapper[4853]: I1122 07:11:14.066320 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9nx9m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"120cba0a-6e0b-40b3-8c15-46e7ff7c8641\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7b7b92862fc6cca812cb5c0aa9b320fbbd11f8b458d591b56cacab985a79edd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4p7q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9nx9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:14Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:14 crc kubenswrapper[4853]: I1122 07:11:14.080363 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63551235-20f7-4ccc-a132-9f5302e01da5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0368eb3431a7b8e0225b3728f72d601ce3f9639f06175b96818fd798b33e4143\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fc661089ab1f7286ead68fd510a0848c9df8dd41a8ed349d03b43b72e1a369a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdc0676190f39e3a925b593e663ec1af1fa253b8177b7cb81fd49515e0c55c3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a97a0d92e5fdd8906fa4f150c4b7d95fdd1f3b92ac45a46e0a6846fd4a15bb0\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a97a0d92e5fdd8906fa4f150c4b7d95fdd1f3b92ac45a46e0a6846fd4a15bb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:19Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:16Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:14Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:14 crc kubenswrapper[4853]: I1122 07:11:14.098739 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b27716f9e6c912f968c4a340199cdc2dde0262d86b4832f84cae8daefa831909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a2fb699e2e8a016fa383c1d4ed2d4f0d69c26b79fb2f0dd735a84de2fd5a23d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:14Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:14 crc kubenswrapper[4853]: I1122 07:11:14.113540 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://347fc1ee4909acd862f23a9f25b4ac7ab7271ff03f0f726479d9b711820970c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:14Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:14 crc kubenswrapper[4853]: I1122 07:11:14.127283 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"476c875a-2b87-419a-8042-0ba059620fd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://babd614ecd140db7467335933695fcc122380ef2c6a3d4ae6e89b0ed29e87d69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zkhrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28cf28a4f0e05df5ad55eff2ab13e375fb1d5725f1d0e3b85c7c9cd785cf4453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zkhrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fflvd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:14Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:14 crc kubenswrapper[4853]: I1122 07:11:14.140131 4853 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nhlw4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81cf3334-f910-4d46-be00-b3cd66ba8ed4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ckff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ckff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nhlw4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:14Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:14 crc kubenswrapper[4853]: I1122 07:11:14.153050 4853 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:14 crc kubenswrapper[4853]: I1122 07:11:14.153125 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:14 crc kubenswrapper[4853]: I1122 07:11:14.153141 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:14 crc kubenswrapper[4853]: I1122 07:11:14.153163 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:14 crc kubenswrapper[4853]: I1122 07:11:14.153192 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:14Z","lastTransitionTime":"2025-11-22T07:11:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:14 crc kubenswrapper[4853]: I1122 07:11:14.156280 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9003355-0dc6-42ad-950d-a7884e333f4c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f0019c0473b7ccc974dc1659a7ae0565b39d69af17094c420625387ecc9d7f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94704fd95e75549ea29613f00da78d4517cc64896cf1274c9c6f784a61f4ad4a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T07:10:53Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI1122 07:10:26.514729 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI1122 07:10:26.525241 1 observer_polling.go:159] Starting file observer\\\\nI1122 07:10:26.883412 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI1122 07:10:26.897202 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI1122 07:10:50.420853 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nI1122 07:10:51.525898 1 observer_polling.go:162] Shutting down file observer\\\\nF1122 07:10:53.463148 1 cmd.go:179] failed checking apiserver connectivity: client rate limiter Wait returned an error: context canceled\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:21Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9736c38776f22909b8e8a832683d1b881f020257f7183397286e0dd23c973a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30f542bc05dc23bc8dff5029894d08a6fea8333de31a66432814b7bbc412c4fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5d8d77818159ca98019aac809db4b1b2460723bc56b9b6728eec11faeefd026\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager
-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:14Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:14 crc kubenswrapper[4853]: I1122 07:11:14.170888 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:14Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:14 crc kubenswrapper[4853]: I1122 07:11:14.256239 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:14 crc kubenswrapper[4853]: I1122 07:11:14.256291 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:14 crc kubenswrapper[4853]: I1122 07:11:14.256304 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:14 crc kubenswrapper[4853]: I1122 07:11:14.256325 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:14 crc kubenswrapper[4853]: I1122 07:11:14.256342 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:14Z","lastTransitionTime":"2025-11-22T07:11:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:14 crc kubenswrapper[4853]: W1122 07:11:14.318885 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod81cf3334_f910_4d46_be00_b3cd66ba8ed4.slice/crio-271459c2dab1e8a60f1031f1e456e8f50d12564a7fe493230d371a6cd8adca1b WatchSource:0}: Error finding container 271459c2dab1e8a60f1031f1e456e8f50d12564a7fe493230d371a6cd8adca1b: Status 404 returned error can't find the container with id 271459c2dab1e8a60f1031f1e456e8f50d12564a7fe493230d371a6cd8adca1b Nov 22 07:11:14 crc kubenswrapper[4853]: I1122 07:11:14.379717 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:14 crc kubenswrapper[4853]: I1122 07:11:14.379799 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:14 crc kubenswrapper[4853]: I1122 07:11:14.379814 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:14 crc kubenswrapper[4853]: I1122 07:11:14.379834 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:14 crc kubenswrapper[4853]: I1122 07:11:14.379847 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:14Z","lastTransitionTime":"2025-11-22T07:11:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:14 crc kubenswrapper[4853]: I1122 07:11:14.484019 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:14 crc kubenswrapper[4853]: I1122 07:11:14.484081 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:14 crc kubenswrapper[4853]: I1122 07:11:14.484095 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:14 crc kubenswrapper[4853]: I1122 07:11:14.484118 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:14 crc kubenswrapper[4853]: I1122 07:11:14.484134 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:14Z","lastTransitionTime":"2025-11-22T07:11:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:14 crc kubenswrapper[4853]: I1122 07:11:14.584707 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nhlw4" event={"ID":"81cf3334-f910-4d46-be00-b3cd66ba8ed4","Type":"ContainerStarted","Data":"271459c2dab1e8a60f1031f1e456e8f50d12564a7fe493230d371a6cd8adca1b"} Nov 22 07:11:14 crc kubenswrapper[4853]: I1122 07:11:14.586100 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:14 crc kubenswrapper[4853]: I1122 07:11:14.586155 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:14 crc kubenswrapper[4853]: I1122 07:11:14.586166 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:14 crc kubenswrapper[4853]: I1122 07:11:14.586185 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:14 crc kubenswrapper[4853]: I1122 07:11:14.586196 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:14Z","lastTransitionTime":"2025-11-22T07:11:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:14 crc kubenswrapper[4853]: I1122 07:11:14.689082 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:14 crc kubenswrapper[4853]: I1122 07:11:14.689147 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:14 crc kubenswrapper[4853]: I1122 07:11:14.689166 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:14 crc kubenswrapper[4853]: I1122 07:11:14.689193 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:14 crc kubenswrapper[4853]: I1122 07:11:14.689214 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:14Z","lastTransitionTime":"2025-11-22T07:11:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:14 crc kubenswrapper[4853]: I1122 07:11:14.747430 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:11:14 crc kubenswrapper[4853]: I1122 07:11:14.747481 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 07:11:14 crc kubenswrapper[4853]: I1122 07:11:14.747458 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 07:11:14 crc kubenswrapper[4853]: E1122 07:11:14.747608 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 07:11:14 crc kubenswrapper[4853]: E1122 07:11:14.747720 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 07:11:14 crc kubenswrapper[4853]: E1122 07:11:14.747904 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 07:11:14 crc kubenswrapper[4853]: I1122 07:11:14.748650 4853 scope.go:117] "RemoveContainer" containerID="c6dbbb1c22bd116e5913c6bc774b4705f0e4342861ceae30dc0f53c30d6fa39f" Nov 22 07:11:14 crc kubenswrapper[4853]: I1122 07:11:14.792078 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:14 crc kubenswrapper[4853]: I1122 07:11:14.792162 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:14 crc kubenswrapper[4853]: I1122 07:11:14.792178 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:14 crc kubenswrapper[4853]: I1122 07:11:14.792200 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:14 crc kubenswrapper[4853]: I1122 07:11:14.792217 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:14Z","lastTransitionTime":"2025-11-22T07:11:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:14 crc kubenswrapper[4853]: I1122 07:11:14.895693 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:14 crc kubenswrapper[4853]: I1122 07:11:14.895774 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:14 crc kubenswrapper[4853]: I1122 07:11:14.895798 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:14 crc kubenswrapper[4853]: I1122 07:11:14.895818 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:14 crc kubenswrapper[4853]: I1122 07:11:14.895829 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:14Z","lastTransitionTime":"2025-11-22T07:11:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:14 crc kubenswrapper[4853]: I1122 07:11:14.998984 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:14 crc kubenswrapper[4853]: I1122 07:11:14.999080 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:14 crc kubenswrapper[4853]: I1122 07:11:14.999093 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:14 crc kubenswrapper[4853]: I1122 07:11:14.999119 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:14 crc kubenswrapper[4853]: I1122 07:11:14.999134 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:14Z","lastTransitionTime":"2025-11-22T07:11:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:15 crc kubenswrapper[4853]: I1122 07:11:15.101388 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:15 crc kubenswrapper[4853]: I1122 07:11:15.101438 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:15 crc kubenswrapper[4853]: I1122 07:11:15.101449 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:15 crc kubenswrapper[4853]: I1122 07:11:15.101468 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:15 crc kubenswrapper[4853]: I1122 07:11:15.101480 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:15Z","lastTransitionTime":"2025-11-22T07:11:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:15 crc kubenswrapper[4853]: I1122 07:11:15.203921 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:15 crc kubenswrapper[4853]: I1122 07:11:15.203990 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:15 crc kubenswrapper[4853]: I1122 07:11:15.204005 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:15 crc kubenswrapper[4853]: I1122 07:11:15.204026 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:15 crc kubenswrapper[4853]: I1122 07:11:15.204044 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:15Z","lastTransitionTime":"2025-11-22T07:11:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:15 crc kubenswrapper[4853]: I1122 07:11:15.227021 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-pd6gs"] Nov 22 07:11:15 crc kubenswrapper[4853]: I1122 07:11:15.232942 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pd6gs" Nov 22 07:11:15 crc kubenswrapper[4853]: E1122 07:11:15.233076 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-pd6gs" podUID="9cc2bf97-eb39-4b0c-abda-99b49bb530fd" Nov 22 07:11:15 crc kubenswrapper[4853]: I1122 07:11:15.250930 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9003355-0dc6-42ad-950d-a7884e333f4c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f0019c0473b7ccc974dc1659a7ae0565b39d69af17094c420625387ecc9d7f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94704fd95e75549ea29613f00da78d4517cc64896cf1274c9c6f784a61f4ad4a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T07:10:53Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI1122 07:10:26.514729 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI1122 07:10:26.525241 1 observer_polling.go:159] Starting file observer\\\\nI1122 07:10:26.883412 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI1122 07:10:26.897202 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI1122 07:10:50.420853 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nI1122 07:10:51.525898 1 observer_polling.go:162] Shutting down file observer\\\\nF1122 07:10:53.463148 1 cmd.go:179] failed checking apiserver connectivity: client rate limiter Wait returned an error: context canceled\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:21Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9736c38776f22909b8e8a832683d1b881f020257f7183397286e0dd23c973a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30f542bc05dc23bc8dff5029894d08a6fea8333de31a66432814b7bbc412c4fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5d8d77818159ca98019aac809db4b1b2460723bc56b9b6728eec11faeefd026\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager
-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:15Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:15 crc kubenswrapper[4853]: I1122 07:11:15.270847 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:15Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:15 crc kubenswrapper[4853]: I1122 07:11:15.301327 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8dcea4b-e887-437a-972d-e6e10d41f725\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://753693005f8490e9928780fd4f230a1c8aba453e1ccbca71e70299799bfa46ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64d65e4d54cc8971b6f6349f7d32243b7faa5bb5974b22ed904a36fa34bc69fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:30Z\\\"}},\\\"volumeM
ounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0ba60480f038947ca4a2b9f963d38cb004714735d411218eaf6231614c4e21f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c83df4b798499177cfbd1372a939649b6cebb71b50b695af5ba7815cf450669\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed9e89cbcdfc1ac8430f617ee31a367c03be0a407da8af967930fa821e6a12a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://742d02f24a7096b76f82453ab15e531d4a378b4bbf0bb36eb1a1cb4ae1873165\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://742d02f24a7096b76f82453ab15e531d4a378b4bbf0bb36eb1a1cb4ae1873165\\\",\\\"exitCode\\\":0,\\\"fi
nishedAt\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2253bab71d535b61f7489679566abc1acf210e0d5f69773fedbb02befab17c2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2253bab71d535b61f7489679566abc1acf210e0d5f69773fedbb02befab17c2d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:21Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8fa71a988b649937f137ffb7533456570bf4fcd199134424fc7bff2b62335f1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fa71a988b649937f137ffb7533456570bf4fcd199134424fc7bff2b62335f1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:16Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:15Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:15 crc kubenswrapper[4853]: I1122 07:11:15.304585 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9cc2bf97-eb39-4b0c-abda-99b49bb530fd-metrics-certs\") pod \"network-metrics-daemon-pd6gs\" (UID: \"9cc2bf97-eb39-4b0c-abda-99b49bb530fd\") " pod="openshift-multus/network-metrics-daemon-pd6gs" Nov 22 07:11:15 crc kubenswrapper[4853]: I1122 07:11:15.304664 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bzn4t\" (UniqueName: \"kubernetes.io/projected/9cc2bf97-eb39-4b0c-abda-99b49bb530fd-kube-api-access-bzn4t\") pod \"network-metrics-daemon-pd6gs\" (UID: \"9cc2bf97-eb39-4b0c-abda-99b49bb530fd\") " pod="openshift-multus/network-metrics-daemon-pd6gs" Nov 22 07:11:15 crc kubenswrapper[4853]: I1122 07:11:15.306518 4853 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:15 crc kubenswrapper[4853]: I1122 07:11:15.306547 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:15 crc kubenswrapper[4853]: I1122 07:11:15.306557 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:15 crc kubenswrapper[4853]: I1122 07:11:15.306577 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:15 crc kubenswrapper[4853]: I1122 07:11:15.306589 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:15Z","lastTransitionTime":"2025-11-22T07:11:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:15 crc kubenswrapper[4853]: I1122 07:11:15.335288 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9c07602dc6c3b45809dd76a7079002e54e0bae24f7e3bc3470b463389303e74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:15Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:15 crc kubenswrapper[4853]: I1122 07:11:15.361035 4853 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:15Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:15 crc kubenswrapper[4853]: I1122 07:11:15.377879 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:15Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:15 crc kubenswrapper[4853]: I1122 07:11:15.393232 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rvgxj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbbe3472-17cc-48dd-8e46-393b00149429\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5acc4b664aaf1d5b0471beb25334e65152bc3b9907f911f90e4e7899aaa1ce8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"syste
m-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ct6dm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rvgxj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:15Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:15 crc kubenswrapper[4853]: I1122 07:11:15.406163 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bzn4t\" (UniqueName: \"kubernetes.io/projected/9cc2bf97-eb39-4b0c-abda-99b49bb530fd-kube-api-access-bzn4t\") pod \"network-metrics-daemon-pd6gs\" (UID: \"9cc2bf97-eb39-4b0c-abda-99b49bb530fd\") " pod="openshift-multus/network-metrics-daemon-pd6gs" Nov 22 07:11:15 crc kubenswrapper[4853]: I1122 07:11:15.406255 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9cc2bf97-eb39-4b0c-abda-99b49bb530fd-metrics-certs\") pod \"network-metrics-daemon-pd6gs\" (UID: \"9cc2bf97-eb39-4b0c-abda-99b49bb530fd\") " pod="openshift-multus/network-metrics-daemon-pd6gs" Nov 22 07:11:15 crc kubenswrapper[4853]: E1122 07:11:15.406391 4853 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 22 07:11:15 crc kubenswrapper[4853]: E1122 07:11:15.406471 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9cc2bf97-eb39-4b0c-abda-99b49bb530fd-metrics-certs podName:9cc2bf97-eb39-4b0c-abda-99b49bb530fd nodeName:}" failed. No retries permitted until 2025-11-22 07:11:15.906448938 +0000 UTC m=+74.747071564 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9cc2bf97-eb39-4b0c-abda-99b49bb530fd-metrics-certs") pod "network-metrics-daemon-pd6gs" (UID: "9cc2bf97-eb39-4b0c-abda-99b49bb530fd") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 22 07:11:15 crc kubenswrapper[4853]: I1122 07:11:15.408688 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:15 crc kubenswrapper[4853]: I1122 07:11:15.408719 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:15 crc kubenswrapper[4853]: I1122 07:11:15.408732 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:15 crc kubenswrapper[4853]: I1122 07:11:15.408774 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:15 crc kubenswrapper[4853]: I1122 07:11:15.408790 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:15Z","lastTransitionTime":"2025-11-22T07:11:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:15 crc kubenswrapper[4853]: I1122 07:11:15.411055 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ckn94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f60b37f-d6f5-4145-a3e7-cfe92fca6d77\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3adc38c2534c8026e260a63060426102130ca1bcbcb4e3741aaf4e87170a4c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3adc38c2534c8026e260a63060426102130ca1bcbcb4e3741aaf4e87170a4c7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db0b80c2471d5179ede5c4055fcebaabe08892f1534218952c2a19ab599be022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db0b80c2471d5179ede5c4055fcebaabe08892f1534218952c2a19ab599be022\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://157ea229b1effe5885acafb7b48db5695864ff001b1293c053aefb494f544f7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://157ea229b1effe5885acafb7b48db5695864ff001b1293c053aefb494f544f7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://203cba48e280628c33b464c336957085420ca2d345b1acfb5c8b8968ed2317e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://203cba48e280628c33b464c336957085420ca2d345b1acfb5c8b8968ed2317e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disa
bled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ckn94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:15Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:15 crc kubenswrapper[4853]: I1122 07:11:15.425039 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-pd6gs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9cc2bf97-eb39-4b0c-abda-99b49bb530fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bzn4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bzn4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:15Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-pd6gs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:15Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:15 crc kubenswrapper[4853]: I1122 07:11:15.425217 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bzn4t\" (UniqueName: \"kubernetes.io/projected/9cc2bf97-eb39-4b0c-abda-99b49bb530fd-kube-api-access-bzn4t\") pod \"network-metrics-daemon-pd6gs\" (UID: \"9cc2bf97-eb39-4b0c-abda-99b49bb530fd\") " pod="openshift-multus/network-metrics-daemon-pd6gs" Nov 22 07:11:15 crc kubenswrapper[4853]: I1122 07:11:15.442640 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"19e6b5d3-a090-47b9-bddc-dd2aaccd4ff4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96b61660fcd78f2f81eab2c1a0cb32018a5b32d49581167e33e7dca8ba5ddf2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a0d61ce2e58ce8d9abdd1fb9688722d8e25f8354553aef76b8cd4b3897135a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21be0b3346d0dd20c21a867b8fbd30e6291d09bd1745febbc2face7610f6d831\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6dbbb1c22bd116e5913c6bc774b4705f0e4342861ceae30dc0f53c30d6fa39f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6dbbb1c22bd116e5913c6bc774b4705f0e4342861ceae30dc0f53c30d6fa39f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T07:10:53Z\\\",\\\"message\\\":\\\" envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" 
enabled=false\\\\nI1122 07:10:53.494379 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1122 07:10:53.495005 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1122 07:10:53.495036 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1122 07:10:53.495110 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1122 07:10:53.495131 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1122 07:10:53.496216 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1763795444\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1763795444\\\\\\\\\\\\\\\" (2025-11-22 06:10:44 +0000 UTC to 2026-11-22 06:10:44 +0000 UTC (now=2025-11-22 07:10:53.49617265 +0000 UTC))\\\\\\\"\\\\nI1122 07:10:53.496261 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1122 07:10:53.496336 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1122 07:10:53.497036 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1122 07:10:53.497137 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1122 07:10:53.498721 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI1122 07:10:53.500291 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF1122 07:10:53.501305 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:32Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c1ee716027cb8b663ee9faec3faed873df5079418b9b46574cf3b3a74048eef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13ca1fba1325e0ae8556def3ba0c98aa0aff6e3b9ed2aab66a5c950d2bace44e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13ca1fba1325e0ae8556def3ba0c98aa0aff6e3b9ed2aab66a5c950d2bace44e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:15Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:15 crc kubenswrapper[4853]: I1122 07:11:15.457089 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mlpz8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c09200ba-013f-45e3-b581-8523557344b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3feee491b899c2b2f58330fe2e87ba5404deca4249c5d39a6d3aa08acd10ef29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvk87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mlpz8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:15Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:15 crc kubenswrapper[4853]: I1122 07:11:15.480452 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqtsz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"893f7e02-580a-4093-ab42-ea73ffffcfe6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7b9396399dd9787c1addcb4e2e7697d78fee817d56cafd408b08b2b7d0cc5f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7b9396399dd9787c1addcb4e2e7697d78fee817d56cafd408b08b2b7d0cc5f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00
Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pqtsz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:15Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:15 crc kubenswrapper[4853]: I1122 07:11:15.502272 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9nx9m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120cba0a-6e0b-40b3-8c15-46e7ff7c8641\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7b7b92862fc6cca812cb5c0aa9b320fbbd11f8b458d591b56cacab985a79edd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4p7q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9nx9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:15Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:15 crc kubenswrapper[4853]: I1122 07:11:15.511663 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:15 crc kubenswrapper[4853]: I1122 07:11:15.511711 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:15 crc kubenswrapper[4853]: I1122 07:11:15.511721 4853 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:15 crc kubenswrapper[4853]: I1122 07:11:15.511738 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:15 crc kubenswrapper[4853]: I1122 07:11:15.511765 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:15Z","lastTransitionTime":"2025-11-22T07:11:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:15 crc kubenswrapper[4853]: I1122 07:11:15.520865 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63551235-20f7-4ccc-a132-9f5302e01da5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0368eb3431a7b8e0225b3728f72d601ce3f9639f06175b96818fd798b33e4143\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fc661089ab1f7286ead68fd510a0848c9df8dd41a8ed349d03b43b72e1a369a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdc0676190f39e3a925b593e663ec1af1fa253b
8177b7cb81fd49515e0c55c3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a97a0d92e5fdd8906fa4f150c4b7d95fdd1f3b92ac45a46e0a6846fd4a15bb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a97a0d92e5fdd8906fa4f150c4b7d95fdd1f3b92ac45a46e0a6846fd4a15bb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:19Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:16Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:15Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:15 crc kubenswrapper[4853]: I1122 07:11:15.537449 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b27716f9e6c912f968c4a340199cdc2dde0262d86b4832f84cae8daefa831909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a2fb699e2e8a016fa383c1d4ed2d4f0d69c26b79fb2f0dd735a84de2fd5a23d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:15Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:15 crc kubenswrapper[4853]: I1122 07:11:15.553639 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://347fc1ee4909acd862f23a9f25b4ac7ab7271ff03f0f726479d9b711820970c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:15Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:15 crc kubenswrapper[4853]: I1122 07:11:15.569471 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"476c875a-2b87-419a-8042-0ba059620fd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://babd614ecd140db7467335933695fcc122380ef2c6a3d4ae6e89b0ed29e87d69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zkhrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28cf28a4f0e05df5ad55eff2ab13e375fb1d5725f1d0e3b85c7c9cd785cf4453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zkhrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fflvd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:15Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:15 crc kubenswrapper[4853]: I1122 07:11:15.582059 4853 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nhlw4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81cf3334-f910-4d46-be00-b3cd66ba8ed4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ckff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ckff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nhlw4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:15Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:15 crc kubenswrapper[4853]: I1122 07:11:15.589808 4853 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nhlw4" event={"ID":"81cf3334-f910-4d46-be00-b3cd66ba8ed4","Type":"ContainerStarted","Data":"dbfa536576b192a75c2a3eef9dfde8c7f7de0a8e6edae3870f45f1c44cc48163"} Nov 22 07:11:15 crc kubenswrapper[4853]: I1122 07:11:15.593403 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Nov 22 07:11:15 crc kubenswrapper[4853]: I1122 07:11:15.596680 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"7e4d5ab3d780b4ec1360dd8a07fc7de5d4daec3a5f8334010949d4fd60b5516b"} Nov 22 07:11:15 crc kubenswrapper[4853]: I1122 07:11:15.600659 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ckn94" event={"ID":"6f60b37f-d6f5-4145-a3e7-cfe92fca6d77","Type":"ContainerStarted","Data":"9a10cb7826668319845471d465df1784a2cfba12ec3892e986bcf316c0c5e3c1"} Nov 22 07:11:15 crc kubenswrapper[4853]: I1122 07:11:15.615234 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:15 crc kubenswrapper[4853]: I1122 07:11:15.615274 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:15 crc kubenswrapper[4853]: I1122 07:11:15.615288 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:15 crc kubenswrapper[4853]: I1122 07:11:15.615307 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:15 crc kubenswrapper[4853]: I1122 07:11:15.615321 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:15Z","lastTransitionTime":"2025-11-22T07:11:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:15 crc kubenswrapper[4853]: I1122 07:11:15.718253 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:15 crc kubenswrapper[4853]: I1122 07:11:15.718289 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:15 crc kubenswrapper[4853]: I1122 07:11:15.718299 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:15 crc kubenswrapper[4853]: I1122 07:11:15.718315 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:15 crc kubenswrapper[4853]: I1122 07:11:15.718327 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:15Z","lastTransitionTime":"2025-11-22T07:11:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:15 crc kubenswrapper[4853]: I1122 07:11:15.762720 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-pd6gs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9cc2bf97-eb39-4b0c-abda-99b49bb530fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bzn4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bzn4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:15Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-pd6gs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:15Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:15 crc kubenswrapper[4853]: I1122 07:11:15.784469 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8dcea4b-e887-437a-972d-e6e10d41f725\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://753693005f8490e9928780fd4f230a1c8aba453e1ccbca71e70299799bfa46ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64d65e4d54cc8971b6f6349f7d32243b7faa5bb5974b22ed904a36fa34bc69fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0ba60480f038947ca4a2b9f963d38cb004714735d411218eaf6231614c4e21f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c83df4b798499177cfbd1372a939649b6cebb7
1b50b695af5ba7815cf450669\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed9e89cbcdfc1ac8430f617ee31a367c03be0a407da8af967930fa821e6a12a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://742d02f24a7096b76f82453ab15e531d4a378b4bbf0bb36eb1a1cb4ae1873165\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://742d02f24a7096b76f82453ab15e531d4a378b4bbf0bb36eb1a1cb4ae1873165\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2253bab71d535b61f7489679566abc1acf210e0d5f69773fedbb02befab17c2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2253bab71d535b61f7489679566abc1acf210e0d5f69773fedbb02befab17c2d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:21Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8fa71a988b649937f137ffb7533456570bf4fcd199134424fc7bff2b62335f1e\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fa71a988b649937f137ffb7533456570bf4fcd199134424fc7bff2b62335f1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:16Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:15Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:15 crc kubenswrapper[4853]: I1122 07:11:15.798029 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9c07602dc6c3b45809dd76a7079002e54e0bae24f7e3bc3470b463389303e74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:15Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:15 crc kubenswrapper[4853]: I1122 07:11:15.813287 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:15Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:15 crc kubenswrapper[4853]: I1122 07:11:15.821568 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:15 crc kubenswrapper[4853]: I1122 07:11:15.821619 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:15 crc kubenswrapper[4853]: I1122 07:11:15.821628 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:15 crc kubenswrapper[4853]: I1122 07:11:15.821795 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:15 crc kubenswrapper[4853]: I1122 07:11:15.821806 4853 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:15Z","lastTransitionTime":"2025-11-22T07:11:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:15 crc kubenswrapper[4853]: I1122 07:11:15.828545 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:15Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:15 crc kubenswrapper[4853]: I1122 07:11:15.849925 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rvgxj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbbe3472-17cc-48dd-8e46-393b00149429\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5acc4b664aaf1d5b0471beb25334e65152bc3b9907f911f90e4e7899aaa1ce8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ct6dm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rvgxj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:15Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:15 crc kubenswrapper[4853]: I1122 07:11:15.866546 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ckn94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f60b37f-d6f5-4145-a3e7-cfe92fca6d77\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3adc38c2534c8026e260a63060426102130ca1bcbcb4e3741aaf4e87170a4c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3adc38c2534c8026e260a63060426102130ca1bcbcb4e3741aaf4e87170a4c7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db0b80c2471d5179ede5c4055fcebaabe08892f1534218952c2a19ab599be022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db0b80c2471d5179ede5c4055fcebaabe08892f1534218952c2a19ab599be022\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://157ea229b1effe5885acafb7b48db5695864ff001b1293c053aefb494f544f7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://157ea229b1effe5885acafb7b48db5695864ff001b1293c053aefb494f544f7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://203cba48e280628c33b464c336957085420ca2d345b1acfb5c8b8968ed2317e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://203cba48e280628c33b464c336957085420ca2d345b1acfb5c8b8968ed2317e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disa
bled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ckn94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:15Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:15 crc kubenswrapper[4853]: I1122 07:11:15.891543 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"19e6b5d3-a090-47b9-bddc-dd2aaccd4ff4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96b61660fcd78f2f81eab2c1a0cb32018a5b32d49581167e33e7dca8ba5ddf2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a0d61ce2e58ce8d9abdd1fb9688722d8e25f8354553aef76b8cd4b3897135a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21be0b3346d0dd20c21a867b8fbd30e6291d09bd1745febbc2face7610f6d831\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6dbbb1c22bd116e5913c6bc774b4705f0e4342861ceae30dc0f53c30d6fa39f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6dbbb1c22bd116e5913c6bc774b4705f0e4342861ceae30dc0f53c30d6fa39f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T07:10:53Z\\\",\\\"message\\\":\\\" envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" 
enabled=false\\\\nI1122 07:10:53.494379 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1122 07:10:53.495005 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1122 07:10:53.495036 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1122 07:10:53.495110 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1122 07:10:53.495131 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1122 07:10:53.496216 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1763795444\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1763795444\\\\\\\\\\\\\\\" (2025-11-22 06:10:44 +0000 UTC to 2026-11-22 06:10:44 +0000 UTC (now=2025-11-22 07:10:53.49617265 +0000 UTC))\\\\\\\"\\\\nI1122 07:10:53.496261 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1122 07:10:53.496336 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1122 07:10:53.497036 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1122 07:10:53.497137 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1122 07:10:53.498721 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI1122 07:10:53.500291 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF1122 07:10:53.501305 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:32Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c1ee716027cb8b663ee9faec3faed873df5079418b9b46574cf3b3a74048eef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13ca1fba1325e0ae8556def3ba0c98aa0aff6e3b9ed2aab66a5c950d2bace44e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13ca1fba1325e0ae8556def3ba0c98aa0aff6e3b9ed2aab66a5c950d2bace44e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:15Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:15 crc kubenswrapper[4853]: I1122 07:11:15.913054 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9cc2bf97-eb39-4b0c-abda-99b49bb530fd-metrics-certs\") pod \"network-metrics-daemon-pd6gs\" (UID: \"9cc2bf97-eb39-4b0c-abda-99b49bb530fd\") " pod="openshift-multus/network-metrics-daemon-pd6gs" Nov 22 07:11:15 crc kubenswrapper[4853]: E1122 07:11:15.913241 4853 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 22 07:11:15 crc kubenswrapper[4853]: E1122 07:11:15.913316 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9cc2bf97-eb39-4b0c-abda-99b49bb530fd-metrics-certs podName:9cc2bf97-eb39-4b0c-abda-99b49bb530fd nodeName:}" failed. No retries permitted until 2025-11-22 07:11:16.913296275 +0000 UTC m=+75.753918901 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9cc2bf97-eb39-4b0c-abda-99b49bb530fd-metrics-certs") pod "network-metrics-daemon-pd6gs" (UID: "9cc2bf97-eb39-4b0c-abda-99b49bb530fd") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 22 07:11:15 crc kubenswrapper[4853]: I1122 07:11:15.914996 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mlpz8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c09200ba-013f-45e3-b581-8523557344b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3feee491b899c2b2f58330fe2e87ba5404deca4249c5d39a6d3aa08acd10ef29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvk87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mlpz8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:15Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:15 crc kubenswrapper[4853]: I1122 07:11:15.924356 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:15 crc kubenswrapper[4853]: I1122 07:11:15.924407 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:15 crc kubenswrapper[4853]: I1122 07:11:15.924421 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:15 crc kubenswrapper[4853]: I1122 07:11:15.924444 4853 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:15 crc kubenswrapper[4853]: I1122 07:11:15.924456 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:15Z","lastTransitionTime":"2025-11-22T07:11:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:15 crc kubenswrapper[4853]: I1122 07:11:15.937861 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqtsz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"893f7e02-580a-4093-ab42-ea73ffffcfe6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7b9396399dd9787c1addcb4e2e7697d78fee817d56cafd408b08b2b7d0cc5f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7b9396399dd9787c1addcb4e2e7697d78fee817d56cafd408b08b2b7d0cc5f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pqtsz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:15Z 
is after 2025-08-24T17:21:41Z" Nov 22 07:11:15 crc kubenswrapper[4853]: I1122 07:11:15.952592 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9nx9m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120cba0a-6e0b-40b3-8c15-46e7ff7c8641\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7b7b92862fc6cca812cb5c0aa9b320fbbd11f8b458d591b56cacab985a79edd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4p7q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9nx9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:15Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:15 crc kubenswrapper[4853]: I1122 07:11:15.967881 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63551235-20f7-4ccc-a132-9f5302e01da5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0368eb3431a7b8e0225b3728f72d601ce3f9639f06175b96818fd798b33e4143\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fc661089ab1f7286ead68fd510a0848c9df8dd41a8ed349d03b43b72e1a369a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdc0676190f39e3a925b593e663ec1af1fa253b8177b7cb81fd49515e0c55c3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a97a0d92e5fdd8906fa4f150c4b7d95fdd1f3b92ac45a46e0a6846fd4a15bb0\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a97a0d92e5fdd8906fa4f150c4b7d95fdd1f3b92ac45a46e0a6846fd4a15bb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:19Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:16Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:15Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:15 crc kubenswrapper[4853]: I1122 07:11:15.981839 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b27716f9e6c912f968c4a340199cdc2dde0262d86b4832f84cae8daefa831909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a2fb699e2e8a016fa383c1d4ed2d4f0d69c26b79fb2f0dd735a84de2fd5a23d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:15Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:15 crc kubenswrapper[4853]: I1122 07:11:15.994828 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://347fc1ee4909acd862f23a9f25b4ac7ab7271ff03f0f726479d9b711820970c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:15Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:16 crc kubenswrapper[4853]: I1122 07:11:16.007046 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"476c875a-2b87-419a-8042-0ba059620fd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://babd614ecd140db7467335933695fcc122380ef2c6a3d4ae6e89b0ed29e87d69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zkhrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28cf28a4f0e05df5ad55eff2ab13e375fb1d5725f1d0e3b85c7c9cd785cf4453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zkhrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fflvd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:16Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:16 crc kubenswrapper[4853]: I1122 07:11:16.019589 4853 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nhlw4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81cf3334-f910-4d46-be00-b3cd66ba8ed4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ckff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ckff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nhlw4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:16Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:16 crc kubenswrapper[4853]: I1122 07:11:16.027223 4853 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:16 crc kubenswrapper[4853]: I1122 07:11:16.027277 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:16 crc kubenswrapper[4853]: I1122 07:11:16.027290 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:16 crc kubenswrapper[4853]: I1122 07:11:16.027308 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:16 crc kubenswrapper[4853]: I1122 07:11:16.027321 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:16Z","lastTransitionTime":"2025-11-22T07:11:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:16 crc kubenswrapper[4853]: I1122 07:11:16.037555 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9003355-0dc6-42ad-950d-a7884e333f4c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f0019c0473b7ccc974dc1659a7ae0565b39d69af17094c420625387ecc9d7f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94704fd95e75549ea29613f00da78d4517cc64896cf1274c9c6f784a61f4ad4a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T07:10:53Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI1122 07:10:26.514729 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI1122 07:10:26.525241 1 observer_polling.go:159] Starting file observer\\\\nI1122 07:10:26.883412 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI1122 07:10:26.897202 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI1122 07:10:50.420853 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nI1122 07:10:51.525898 1 observer_polling.go:162] Shutting down file observer\\\\nF1122 07:10:53.463148 1 cmd.go:179] failed checking apiserver connectivity: client rate limiter Wait returned an error: context canceled\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:21Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9736c38776f22909b8e8a832683d1b881f020257f7183397286e0dd23c973a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30f542bc05dc23bc8dff5029894d08a6fea8333de31a66432814b7bbc412c4fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5d8d77818159ca98019aac809db4b1b2460723bc56b9b6728eec11faeefd026\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager
-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:16Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:16 crc kubenswrapper[4853]: I1122 07:11:16.053030 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:16Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:16 crc kubenswrapper[4853]: I1122 07:11:16.128968 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:16 crc kubenswrapper[4853]: I1122 07:11:16.129017 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:16 crc kubenswrapper[4853]: I1122 07:11:16.129086 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:16 crc kubenswrapper[4853]: I1122 07:11:16.129139 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:16 crc kubenswrapper[4853]: I1122 07:11:16.129157 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:16Z","lastTransitionTime":"2025-11-22T07:11:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:16 crc kubenswrapper[4853]: I1122 07:11:16.231495 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:16 crc kubenswrapper[4853]: I1122 07:11:16.231542 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:16 crc kubenswrapper[4853]: I1122 07:11:16.231552 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:16 crc kubenswrapper[4853]: I1122 07:11:16.231569 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:16 crc kubenswrapper[4853]: I1122 07:11:16.231579 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:16Z","lastTransitionTime":"2025-11-22T07:11:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:16 crc kubenswrapper[4853]: I1122 07:11:16.334651 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:16 crc kubenswrapper[4853]: I1122 07:11:16.334699 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:16 crc kubenswrapper[4853]: I1122 07:11:16.334712 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:16 crc kubenswrapper[4853]: I1122 07:11:16.334732 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:16 crc kubenswrapper[4853]: I1122 07:11:16.334764 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:16Z","lastTransitionTime":"2025-11-22T07:11:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:16 crc kubenswrapper[4853]: I1122 07:11:16.438798 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:16 crc kubenswrapper[4853]: I1122 07:11:16.438867 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:16 crc kubenswrapper[4853]: I1122 07:11:16.438878 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:16 crc kubenswrapper[4853]: I1122 07:11:16.438899 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:16 crc kubenswrapper[4853]: I1122 07:11:16.438911 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:16Z","lastTransitionTime":"2025-11-22T07:11:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:16 crc kubenswrapper[4853]: I1122 07:11:16.543901 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:16 crc kubenswrapper[4853]: I1122 07:11:16.543967 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:16 crc kubenswrapper[4853]: I1122 07:11:16.543986 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:16 crc kubenswrapper[4853]: I1122 07:11:16.544011 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:16 crc kubenswrapper[4853]: I1122 07:11:16.544031 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:16Z","lastTransitionTime":"2025-11-22T07:11:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:16 crc kubenswrapper[4853]: I1122 07:11:16.609618 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqtsz" event={"ID":"893f7e02-580a-4093-ab42-ea73ffffcfe6","Type":"ContainerStarted","Data":"f92a80784d0788a478ccde73bcf34dd9fc3e42dd5005138293506125bd3924f1"} Nov 22 07:11:16 crc kubenswrapper[4853]: I1122 07:11:16.610063 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 22 07:11:16 crc kubenswrapper[4853]: I1122 07:11:16.624163 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63551235-20f7-4ccc-a132-9f5302e01da5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0368eb3431a7b8e0225b3728f72d601ce3f9639f06175b96818fd798b33e4143\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fc661089ab1f7286ead68fd510a0848c9df8dd41a8ed349d03b43b72e1a369a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdc0676190f39e3a925b593e663ec1af1fa253b8177b7cb81fd49515e0c55c3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a97a0d92e5fdd8906fa4f150c4b7d95fdd1f3b92ac45a46e0a6846fd4a15bb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a97a0d92e5fdd8906fa4f150c4b7d95fdd1f3b92ac45a46e0a6846fd4a15bb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:19Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:16Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:16Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:16 crc kubenswrapper[4853]: I1122 07:11:16.642124 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b27716f9e6c912f968c4a340199cdc2dde0262d86b4832f84cae8daefa831909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a2fb699e2e8a016fa383c1d4ed2d4f0d69c26b79fb2f0dd735a84de2fd5a23d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:16Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:16 crc kubenswrapper[4853]: I1122 07:11:16.646837 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:16 crc kubenswrapper[4853]: I1122 07:11:16.646891 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:16 crc kubenswrapper[4853]: I1122 07:11:16.646903 4853 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Nov 22 07:11:16 crc kubenswrapper[4853]: I1122 07:11:16.646921 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:16 crc kubenswrapper[4853]: I1122 07:11:16.646931 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:16Z","lastTransitionTime":"2025-11-22T07:11:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:16 crc kubenswrapper[4853]: I1122 07:11:16.656899 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://347fc1ee4909acd862f23a9f25b4ac7ab7271ff03f0f726479d9b711820970c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:16Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:16 crc kubenswrapper[4853]: I1122 07:11:16.670089 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"476c875a-2b87-419a-8042-0ba059620fd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://babd614ecd140db7467335933695fcc122380ef2c6a3d4ae6e89b0ed29e87d69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zkhrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28cf28a4f0e05df5ad55eff2ab13e375fb1d5725f1d0e3b85c7c9cd785cf4453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zkhrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fflvd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:16Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:16 crc kubenswrapper[4853]: I1122 07:11:16.684493 4853 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nhlw4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81cf3334-f910-4d46-be00-b3cd66ba8ed4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ckff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ckff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nhlw4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:16Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:16 crc kubenswrapper[4853]: I1122 07:11:16.698811 4853 
status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9003355-0dc6-42ad-950d-a7884e333f4c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f0019c0473b7ccc974dc1659a7ae0565b39d69af17094c420625387ecc9d7f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94704fd95e75549ea29613f00da78d4517cc64896cf1274c9c6f784a61f4ad4a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T07:10:53Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI1122 07:10:26.514729 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI1122 07:10:26.525241 1 observer_polling.go:159] Starting file observer\\\\nI1122 07:10:26.883412 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI1122 07:10:26.897202 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI1122 07:10:50.420853 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nI1122 07:10:51.525898 1 observer_polling.go:162] Shutting down file observer\\\\nF1122 07:10:53.463148 1 cmd.go:179] failed checking apiserver connectivity: client rate limiter Wait returned an error: context canceled\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:21Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9736c38776f22909b8e8a832683d1b881f020257f7183397286e0dd23c973a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30f542bc05dc23bc8dff5029894d08a6fea8333de31a66432814b7bbc412c4fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5d8d77818159ca98019aac809db4b1b2460723bc56b9b6728eec11faeefd026\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager
-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:16Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:16 crc kubenswrapper[4853]: I1122 07:11:16.714859 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:16Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:16 crc kubenswrapper[4853]: I1122 07:11:16.736178 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8dcea4b-e887-437a-972d-e6e10d41f725\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://753693005f8490e9928780fd4f230a1c8aba453e1ccbca71e70299799bfa46ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64d65e4d54cc8971b6f6349f7d32243b7faa5bb5974b22ed904a36fa34bc69fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:30Z\\\"}},\\\"volumeM
ounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0ba60480f038947ca4a2b9f963d38cb004714735d411218eaf6231614c4e21f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c83df4b798499177cfbd1372a939649b6cebb71b50b695af5ba7815cf450669\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed9e89cbcdfc1ac8430f617ee31a367c03be0a407da8af967930fa821e6a12a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://742d02f24a7096b76f82453ab15e531d4a378b4bbf0bb36eb1a1cb4ae1873165\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://742d02f24a7096b76f82453ab15e531d4a378b4bbf0bb36eb1a1cb4ae1873165\\\",\\\"exitCode\\\":0,\\\"fi
nishedAt\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2253bab71d535b61f7489679566abc1acf210e0d5f69773fedbb02befab17c2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2253bab71d535b61f7489679566abc1acf210e0d5f69773fedbb02befab17c2d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:21Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8fa71a988b649937f137ffb7533456570bf4fcd199134424fc7bff2b62335f1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fa71a988b649937f137ffb7533456570bf4fcd199134424fc7bff2b62335f1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:16Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:16Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:16 crc kubenswrapper[4853]: I1122 07:11:16.747208 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pd6gs" Nov 22 07:11:16 crc kubenswrapper[4853]: I1122 07:11:16.747262 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:11:16 crc kubenswrapper[4853]: I1122 07:11:16.747306 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 07:11:16 crc kubenswrapper[4853]: E1122 07:11:16.747382 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pd6gs" podUID="9cc2bf97-eb39-4b0c-abda-99b49bb530fd" Nov 22 07:11:16 crc kubenswrapper[4853]: I1122 07:11:16.747396 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 07:11:16 crc kubenswrapper[4853]: E1122 07:11:16.747507 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 07:11:16 crc kubenswrapper[4853]: E1122 07:11:16.747577 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 07:11:16 crc kubenswrapper[4853]: E1122 07:11:16.747611 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 07:11:16 crc kubenswrapper[4853]: I1122 07:11:16.749072 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:16 crc kubenswrapper[4853]: I1122 07:11:16.749102 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:16 crc kubenswrapper[4853]: I1122 07:11:16.749119 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:16 crc kubenswrapper[4853]: I1122 07:11:16.749136 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:16 crc kubenswrapper[4853]: I1122 07:11:16.749148 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:16Z","lastTransitionTime":"2025-11-22T07:11:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
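Note: the NodeNotReady flapping and the "Error syncing pod, skipping" entries share one cause: the runtime reports NetworkReady=false until a CNI configuration file appears in /etc/kubernetes/cni/net.d/, which on this node is written by ovn-kubernetes once its pods come up (the ovnkube-node container start is logged just above). The standalone sketch below approximates that readiness condition; it is not the actual cri-o/ocicni code, and the accepted file extensions are an assumption.

// cni_conf_probe.go - a hedged approximation of the check behind the repeated
// "no CNI configuration file in /etc/kubernetes/cni/net.d/" message: the
// network counts as ready only once a CNI config exists in the conf dir.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	confDir := "/etc/kubernetes/cni/net.d" // directory named in the log message

	var found []string
	for _, pat := range []string{"*.conf", "*.conflist", "*.json"} {
		matches, err := filepath.Glob(filepath.Join(confDir, pat))
		if err != nil { // Glob only errors on a malformed pattern
			fmt.Fprintln(os.Stderr, "glob:", err)
			os.Exit(1)
		}
		found = append(found, matches...)
	}

	if len(found) == 0 {
		fmt.Println("NetworkReady=false: no CNI configuration file in", confDir)
		return
	}
	fmt.Println("CNI configs present:", found)
}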
Has your network provider started?"} Nov 22 07:11:16 crc kubenswrapper[4853]: I1122 07:11:16.751558 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9c07602dc6c3b45809dd76a7079002e54e0bae24f7e3bc3470b463389303e74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:16Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:16 crc kubenswrapper[4853]: I1122 07:11:16.766332 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:16Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:16 crc kubenswrapper[4853]: I1122 07:11:16.781233 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:16Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:16 crc kubenswrapper[4853]: I1122 07:11:16.794516 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rvgxj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbbe3472-17cc-48dd-8e46-393b00149429\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5acc4b664aaf1d5b0471beb25334e65152bc3b9907f911f90e4e7899aaa1ce8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ct6dm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rvgxj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:16Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:16 crc kubenswrapper[4853]: I1122 07:11:16.809310 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ckn94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f60b37f-d6f5-4145-a3e7-cfe92fca6d77\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3adc38c2534c8026e260a63060426102130ca1bcbcb4e3741aaf4e87170a4c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3adc38c2534c8026e260a63060426102130ca1bcbcb4e3741aaf4e87170a4c7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db0b80c2471d5179ede5c4055fcebaabe08892f1534218952c2a19ab599be022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db0b80c2471d5179ede5c4055fcebaabe08892f1534218952c2a19ab599be022\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://157ea229b1effe5885acafb7b48db5695864ff001b1293c053aefb494f544f7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://157ea229b1effe5885acafb7b48db5695864ff001b1293c053aefb494f544f7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://203cba48e280628c33b464c336957085420ca2d345b1acfb5c8b8968ed2317e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://203cba48e280628c33b464c336957085420ca2d345b1acfb5c8b8968ed2317e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disa
bled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ckn94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:16Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:16 crc kubenswrapper[4853]: I1122 07:11:16.819694 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-pd6gs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9cc2bf97-eb39-4b0c-abda-99b49bb530fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bzn4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bzn4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:15Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-pd6gs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:16Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:16 crc kubenswrapper[4853]: I1122 07:11:16.833769 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"19e6b5d3-a090-47b9-bddc-dd2aaccd4ff4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96b61660fcd78f2f81eab2c1a0cb32018a5b32d49581167e33e7dca8ba5ddf2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a0d61ce2e58ce8d9abdd1fb9688722d8e25f8354553aef76b8cd4b3897135a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21be0b3346d0dd20c21a867b8fbd30e6291d09bd1745febbc2face7610f6d831\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e4d5ab3d780b4ec1360dd8a07fc7de5d4daec3a5f8334010949d4fd60b5516b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6dbbb1c22bd116e5913c6bc774b4705f0e4342861ceae30dc0f53c30d6fa39f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T07:10:53Z\\\",\\\"message\\\":\\\" envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" 
enabled=false\\\\nI1122 07:10:53.494379 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1122 07:10:53.495005 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1122 07:10:53.495036 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1122 07:10:53.495110 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1122 07:10:53.495131 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1122 07:10:53.496216 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1763795444\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1763795444\\\\\\\\\\\\\\\" (2025-11-22 06:10:44 +0000 UTC to 2026-11-22 06:10:44 +0000 UTC (now=2025-11-22 07:10:53.49617265 +0000 UTC))\\\\\\\"\\\\nI1122 07:10:53.496261 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1122 07:10:53.496336 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1122 07:10:53.497036 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1122 07:10:53.497137 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1122 07:10:53.498721 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI1122 07:10:53.500291 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF1122 07:10:53.501305 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:32Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c1ee716027cb8b663ee9faec3faed873df5079418b9b46574cf3b3a74048eef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13ca1fba1325e0ae8556def3ba0c98aa0aff6e3b9ed2aab66a5c950d2bace44e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13ca1fba1325e0ae8556def3ba0c98aa0aff6e3b9ed2aab66a5c950d2bace44e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:16Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:16 crc kubenswrapper[4853]: I1122 07:11:16.845569 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mlpz8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c09200ba-013f-45e3-b581-8523557344b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3feee491b899c2b2f58330fe2e87ba5404deca4249c5d39a6d3aa08acd10ef29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvk87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mlpz8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:16Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:16 crc kubenswrapper[4853]: I1122 07:11:16.851785 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:16 crc kubenswrapper[4853]: I1122 07:11:16.851821 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:16 crc kubenswrapper[4853]: I1122 07:11:16.851830 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:16 crc kubenswrapper[4853]: I1122 07:11:16.851846 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:16 crc kubenswrapper[4853]: I1122 07:11:16.851857 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:16Z","lastTransitionTime":"2025-11-22T07:11:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:16 crc kubenswrapper[4853]: I1122 07:11:16.867779 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqtsz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"893f7e02-580a-4093-ab42-ea73ffffcfe6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"
ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",
\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"c
ri-o://a7b9396399dd9787c1addcb4e2e7697d78fee817d56cafd408b08b2b7d0cc5f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7b9396399dd9787c1addcb4e2e7697d78fee817d56cafd408b08b2b7d0cc5f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pqtsz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:16Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:16 crc kubenswrapper[4853]: I1122 07:11:16.881623 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9nx9m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120cba0a-6e0b-40b3-8c15-46e7ff7c8641\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7b7b92862fc6cca812cb5c0aa9b320fbbd11f8b458d591b56cacab985a79edd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"
name\\\":\\\"kube-api-access-c4p7q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9nx9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:16Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:16 crc kubenswrapper[4853]: I1122 07:11:16.897065 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:16Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:16 crc kubenswrapper[4853]: I1122 07:11:16.915464 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:16Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:16 crc kubenswrapper[4853]: I1122 07:11:16.924899 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9cc2bf97-eb39-4b0c-abda-99b49bb530fd-metrics-certs\") pod \"network-metrics-daemon-pd6gs\" (UID: \"9cc2bf97-eb39-4b0c-abda-99b49bb530fd\") " pod="openshift-multus/network-metrics-daemon-pd6gs" Nov 22 07:11:16 crc kubenswrapper[4853]: E1122 07:11:16.925059 4853 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 22 07:11:16 crc kubenswrapper[4853]: E1122 07:11:16.925112 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9cc2bf97-eb39-4b0c-abda-99b49bb530fd-metrics-certs podName:9cc2bf97-eb39-4b0c-abda-99b49bb530fd nodeName:}" failed. No retries permitted until 2025-11-22 07:11:18.925096616 +0000 UTC m=+77.765719242 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9cc2bf97-eb39-4b0c-abda-99b49bb530fd-metrics-certs") pod "network-metrics-daemon-pd6gs" (UID: "9cc2bf97-eb39-4b0c-abda-99b49bb530fd") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 22 07:11:16 crc kubenswrapper[4853]: I1122 07:11:16.931292 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rvgxj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbbe3472-17cc-48dd-8e46-393b00149429\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5acc4b664aaf1d5b0471beb25334e65152bc3b9907f911f90e4e7899aaa1ce8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"n
ame\\\":\\\"kube-api-access-ct6dm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rvgxj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:16Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:16 crc kubenswrapper[4853]: I1122 07:11:16.950093 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ckn94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f60b37f-d6f5-4145-a3e7-cfe92fca6d77\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3adc38c2534c8026e260a63060426102130ca1bcbcb4e3741aaf4e87170a4c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3adc38c2534c8026e260a63060426102130ca1bcbcb4e3741aaf4e87170a4c7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db0b80c2471d5179ede5c4055fcebaabe08892f1534218952c2a19ab599be022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db0b80c2471d5179ede5c4055fcebaabe08892f1534218952c2a19ab599be022\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://157ea229b1effe5885acafb7b48db5695864ff001b1293c053aefb494f544f7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://157ea229b1effe5885acafb7b48db5695864ff001b1293c053aefb494f544f7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://203cba48e280628c33b464c336957085420ca2d345b1acfb5c8b8968ed2317e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://203cba48e280628c33b464c336957085420ca2d345b1acfb5c8b8968ed2317e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a10cb7826668319845471d465df1784a2cfba12ec3892e986bcf316c0c5e3c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly
\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ckn94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:16Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:16 crc kubenswrapper[4853]: I1122 07:11:16.953717 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:16 crc kubenswrapper[4853]: I1122 07:11:16.953775 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:16 crc kubenswrapper[4853]: I1122 07:11:16.953787 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:16 crc kubenswrapper[4853]: I1122 07:11:16.953808 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:16 crc kubenswrapper[4853]: I1122 07:11:16.953820 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:16Z","lastTransitionTime":"2025-11-22T07:11:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:16 crc kubenswrapper[4853]: I1122 07:11:16.964584 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-pd6gs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9cc2bf97-eb39-4b0c-abda-99b49bb530fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bzn4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bzn4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:15Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-pd6gs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:16Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:16 crc kubenswrapper[4853]: I1122 07:11:16.991688 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8dcea4b-e887-437a-972d-e6e10d41f725\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://753693005f8490e9928780fd4f230a1c8aba453e1ccbca71e70299799bfa46ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64d65e4d54cc8971b6f6349f7d32243b7faa5bb5974b22ed904a36fa34bc69fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0ba60480f038947ca4a2b9f963d38cb004714735d411218eaf6231614c4e21f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c83df4b798499177cfbd1372a939649b6cebb7
1b50b695af5ba7815cf450669\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed9e89cbcdfc1ac8430f617ee31a367c03be0a407da8af967930fa821e6a12a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://742d02f24a7096b76f82453ab15e531d4a378b4bbf0bb36eb1a1cb4ae1873165\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://742d02f24a7096b76f82453ab15e531d4a378b4bbf0bb36eb1a1cb4ae1873165\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2253bab71d535b61f7489679566abc1acf210e0d5f69773fedbb02befab17c2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2253bab71d535b61f7489679566abc1acf210e0d5f69773fedbb02befab17c2d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:21Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8fa71a988b649937f137ffb7533456570bf4fcd199134424fc7bff2b62335f1e\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fa71a988b649937f137ffb7533456570bf4fcd199134424fc7bff2b62335f1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:16Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:16Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:17 crc kubenswrapper[4853]: I1122 07:11:17.008864 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9c07602dc6c3b45809dd76a7079002e54e0bae24f7e3bc3470b463389303e74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:17Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:17 crc kubenswrapper[4853]: I1122 07:11:17.029852 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqtsz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"893f7e02-580a-4093-ab42-ea73ffffcfe6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\
\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mo
untPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnl
y\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7b9396399dd9787c1addcb4e2e7697d78fee817d56cafd408b08b2b7d0cc5f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7b9396399dd9787c1addcb4e2e7697d78fee817d56cafd408b08b2b7d0cc5f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pqtsz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:17Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:17 crc kubenswrapper[4853]: I1122 07:11:17.044564 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9nx9m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"120cba0a-6e0b-40b3-8c15-46e7ff7c8641\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7b7b92862fc6cca812cb5c0aa9b320fbbd11f8b458d591b56cacab985a79edd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4p7q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9nx9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:17Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:17 crc kubenswrapper[4853]: I1122 07:11:17.056292 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:17 crc kubenswrapper[4853]: I1122 07:11:17.056329 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:17 crc kubenswrapper[4853]: I1122 07:11:17.056437 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:17 crc kubenswrapper[4853]: I1122 07:11:17.056458 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:17 crc kubenswrapper[4853]: I1122 07:11:17.056468 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:17Z","lastTransitionTime":"2025-11-22T07:11:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:17 crc kubenswrapper[4853]: I1122 07:11:17.063916 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"19e6b5d3-a090-47b9-bddc-dd2aaccd4ff4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96b61660fcd78f2f81eab2c1a0cb32018a5b32d49581167e33e7dca8ba5ddf2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a0d61ce2e58ce8d9abdd1fb9688722d8e25f8354553aef76b8cd4b3897135a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21be0b3346d0dd20c21a867b8fbd30e6291d09bd1745febbc2face7610f6d831\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f
7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e4d5ab3d780b4ec1360dd8a07fc7de5d4daec3a5f8334010949d4fd60b5516b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6dbbb1c22bd116e5913c6bc774b4705f0e4342861ceae30dc0f53c30d6fa39f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T07:10:53Z\\\",\\\"message\\\":\\\" envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1122 07:10:53.494379 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1122 07:10:53.495005 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1122 07:10:53.495036 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1122 07:10:53.495110 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1122 07:10:53.495131 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1122 07:10:53.496216 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1763795444\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1763795444\\\\\\\\\\\\\\\" (2025-11-22 06:10:44 +0000 UTC to 2026-11-22 06:10:44 +0000 UTC (now=2025-11-22 07:10:53.49617265 +0000 UTC))\\\\\\\"\\\\nI1122 07:10:53.496261 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1122 07:10:53.496336 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1122 07:10:53.497036 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1122 07:10:53.497137 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1122 07:10:53.498721 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI1122 07:10:53.500291 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF1122 07:10:53.501305 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:32Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c1ee716027cb8b663ee9faec3faed873df5079418b9b46574cf3b3a74048eef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13ca1fba1325e0ae8556def3ba0c98aa0aff6e3b9ed2aab66a5c950d2bace44e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13ca1fba1325e0ae8556def3ba0c98aa0aff6e3b9ed2aab66a5c950d2bace44e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:17Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:17 crc kubenswrapper[4853]: I1122 07:11:17.077543 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mlpz8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c09200ba-013f-45e3-b581-8523557344b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3feee491b899c2b2f58330fe2e87ba5404deca4249c5d39a6d3aa08acd10ef29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvk87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mlpz8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:17Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:17 crc kubenswrapper[4853]: I1122 07:11:17.091489 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://347fc1ee4909acd862f23a9f25b4ac7ab7271ff03f0f726479d9b711820970c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:17Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:17 crc kubenswrapper[4853]: I1122 07:11:17.103898 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"476c875a-2b87-419a-8042-0ba059620fd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://babd614ecd140db7467335933695fcc122380ef2c6a3d4ae6e89b0ed29e87d69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zkhrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28cf28a4f0e05df5ad55eff2ab13e375fb1d5725f1d0e3b85c7c9cd785cf4453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zkhrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fflvd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:17Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:17 crc kubenswrapper[4853]: I1122 07:11:17.115855 4853 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nhlw4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81cf3334-f910-4d46-be00-b3cd66ba8ed4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ckff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ckff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nhlw4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:17Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:17 crc kubenswrapper[4853]: I1122 07:11:17.132560 4853 
status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63551235-20f7-4ccc-a132-9f5302e01da5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0368eb3431a7b8e0225b3728f72d601ce3f9639f06175b96818fd798b33e4143\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fc661089ab1f7286ead68fd510a0848c9df8dd41a8ed349d03b43b72e1a369a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdc0676190f39e3a925b593e663ec1af1fa253b8177b7cb81fd49515e0c55c3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\
"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a97a0d92e5fdd8906fa4f150c4b7d95fdd1f3b92ac45a46e0a6846fd4a15bb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a97a0d92e5fdd8906fa4f150c4b7d95fdd1f3b92ac45a46e0a6846fd4a15bb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:19Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:16Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:17Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:17 crc kubenswrapper[4853]: I1122 07:11:17.149633 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b27716f9e6c912f968c4a340199cdc2dde0262d86b4832f84cae8daefa831909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a2fb699e2e8a016fa383c1d4ed2d4f0d69c26b79fb2f0dd735a84de2fd5a23d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-
dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:17Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:17 crc kubenswrapper[4853]: I1122 07:11:17.159443 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:17 crc kubenswrapper[4853]: I1122 07:11:17.159514 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:17 crc kubenswrapper[4853]: I1122 07:11:17.159529 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:17 crc kubenswrapper[4853]: I1122 07:11:17.159553 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:17 crc kubenswrapper[4853]: I1122 07:11:17.159566 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:17Z","lastTransitionTime":"2025-11-22T07:11:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:17 crc kubenswrapper[4853]: I1122 07:11:17.169601 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:17Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:17 crc kubenswrapper[4853]: I1122 07:11:17.186862 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9003355-0dc6-42ad-950d-a7884e333f4c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f0019c0473b7ccc974dc1659a7ae0565b39d69af17094c420625387ecc9d7f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94704fd95e75549ea29613f00da78d4517cc64896cf1274c9c6f784a61f4ad4a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T07:10:53Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI1122 07:10:26.514729 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI1122 07:10:26.525241 1 observer_polling.go:159] Starting file observer\\\\nI1122 07:10:26.883412 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI1122 07:10:26.897202 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI1122 07:10:50.420853 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nI1122 07:10:51.525898 1 observer_polling.go:162] Shutting down file observer\\\\nF1122 07:10:53.463148 1 cmd.go:179] failed checking apiserver connectivity: client rate limiter Wait returned an error: context canceled\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:21Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9736c38776f22909b8e8a832683d1b881f020257f7183397286e0dd23c973a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30f542bc05dc23bc8dff5029894d08a6fea8333de31a66432814b7bbc412c4fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5d8d77818159ca98019aac809db4b1b2460723bc56b9b6728eec11faeefd026\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager
-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:17Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:17 crc kubenswrapper[4853]: I1122 07:11:17.262301 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:17 crc kubenswrapper[4853]: I1122 07:11:17.262347 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:17 crc kubenswrapper[4853]: I1122 07:11:17.262358 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:17 crc kubenswrapper[4853]: I1122 07:11:17.262376 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:17 crc kubenswrapper[4853]: I1122 07:11:17.262387 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:17Z","lastTransitionTime":"2025-11-22T07:11:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:17 crc kubenswrapper[4853]: I1122 07:11:17.366288 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:17 crc kubenswrapper[4853]: I1122 07:11:17.366352 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:17 crc kubenswrapper[4853]: I1122 07:11:17.366366 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:17 crc kubenswrapper[4853]: I1122 07:11:17.366387 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:17 crc kubenswrapper[4853]: I1122 07:11:17.366402 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:17Z","lastTransitionTime":"2025-11-22T07:11:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:17 crc kubenswrapper[4853]: I1122 07:11:17.469678 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:17 crc kubenswrapper[4853]: I1122 07:11:17.469728 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:17 crc kubenswrapper[4853]: I1122 07:11:17.469737 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:17 crc kubenswrapper[4853]: I1122 07:11:17.469772 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:17 crc kubenswrapper[4853]: I1122 07:11:17.469784 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:17Z","lastTransitionTime":"2025-11-22T07:11:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:17 crc kubenswrapper[4853]: I1122 07:11:17.573385 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:17 crc kubenswrapper[4853]: I1122 07:11:17.573458 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:17 crc kubenswrapper[4853]: I1122 07:11:17.573481 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:17 crc kubenswrapper[4853]: I1122 07:11:17.573513 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:17 crc kubenswrapper[4853]: I1122 07:11:17.573537 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:17Z","lastTransitionTime":"2025-11-22T07:11:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:17 crc kubenswrapper[4853]: I1122 07:11:17.617771 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nhlw4" event={"ID":"81cf3334-f910-4d46-be00-b3cd66ba8ed4","Type":"ContainerStarted","Data":"86bcf649be20aa1cc1bce89663ea6b7b920ac2548fbf8a22c87636bdafc4b329"} Nov 22 07:11:17 crc kubenswrapper[4853]: I1122 07:11:17.639332 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"19e6b5d3-a090-47b9-bddc-dd2aaccd4ff4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96b61660fcd78f2f81eab2c1a0cb32018a5b32d49581167e33e7dca8ba5ddf2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a0d61ce2e58ce8d9abdd1fb9688722d8e25f8354553aef76b8cd4b3897135a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21be0b3346d0dd20c21a867b8fbd30e6291d09bd1745febbc2face7610f6d831\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-op
erator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e4d5ab3d780b4ec1360dd8a07fc7de5d4daec3a5f8334010949d4fd60b5516b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6dbbb1c22bd116e5913c6bc774b4705f0e4342861ceae30dc0f53c30d6fa39f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T07:10:53Z\\\",\\\"message\\\":\\\" envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1122 07:10:53.494379 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1122 07:10:53.495005 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1122 07:10:53.495036 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1122 07:10:53.495110 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1122 07:10:53.495131 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1122 07:10:53.496216 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1763795444\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1763795444\\\\\\\\\\\\\\\" (2025-11-22 06:10:44 +0000 UTC to 2026-11-22 06:10:44 +0000 UTC (now=2025-11-22 07:10:53.49617265 +0000 UTC))\\\\\\\"\\\\nI1122 07:10:53.496261 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1122 07:10:53.496336 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1122 07:10:53.497036 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1122 07:10:53.497137 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1122 07:10:53.498721 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI1122 07:10:53.500291 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF1122 07:10:53.501305 1 cmd.go:182] pods 
\\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:32Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c1ee716027cb8b663ee9faec3faed873df5079418b9b46574cf3b3a74048eef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13ca1fba1325e0ae8556def3ba0c98aa0aff6e3b9ed2aab66a5c950d2bace44e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13ca1fba1325e0ae8556def3ba0c98aa0aff6e3b9ed2aab66a5c950d2bace44e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:17Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:17 crc kubenswrapper[4853]: I1122 07:11:17.660822 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mlpz8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c09200ba-013f-45e3-b581-8523557344b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3feee491b899c2b2f58330fe2e87ba5404deca4249c5d39a6d3aa08acd10ef29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvk87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mlpz8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:17Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:17 crc kubenswrapper[4853]: I1122 07:11:17.676479 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:17 crc kubenswrapper[4853]: I1122 07:11:17.676533 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:17 crc kubenswrapper[4853]: I1122 07:11:17.676547 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:17 crc kubenswrapper[4853]: I1122 07:11:17.676566 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:17 crc kubenswrapper[4853]: I1122 07:11:17.676579 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:17Z","lastTransitionTime":"2025-11-22T07:11:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:17 crc kubenswrapper[4853]: I1122 07:11:17.687931 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqtsz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"893f7e02-580a-4093-ab42-ea73ffffcfe6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"
ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",
\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"c
ri-o://a7b9396399dd9787c1addcb4e2e7697d78fee817d56cafd408b08b2b7d0cc5f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7b9396399dd9787c1addcb4e2e7697d78fee817d56cafd408b08b2b7d0cc5f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pqtsz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:17Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:17 crc kubenswrapper[4853]: I1122 07:11:17.703213 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9nx9m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120cba0a-6e0b-40b3-8c15-46e7ff7c8641\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7b7b92862fc6cca812cb5c0aa9b320fbbd11f8b458d591b56cacab985a79edd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"
name\\\":\\\"kube-api-access-c4p7q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9nx9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:17Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:17 crc kubenswrapper[4853]: I1122 07:11:17.720311 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63551235-20f7-4ccc-a132-9f5302e01da5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0368eb3431a7b8e0225b3728f72d601ce3f9639f06175b96818fd798b33e4143\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fc661089ab1f7286ead68fd510a0848c9df8dd41a8ed349d03b43b72e1a369a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdc0676190f39e3a925b593e663ec1af1fa253b8177b7cb81fd49515e0c55c3e\\
\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a97a0d92e5fdd8906fa4f150c4b7d95fdd1f3b92ac45a46e0a6846fd4a15bb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a97a0d92e5fdd8906fa4f150c4b7d95fdd1f3b92ac45a46e0a6846fd4a15bb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:19Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:16Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:17Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:17 crc kubenswrapper[4853]: I1122 07:11:17.738092 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b27716f9e6c912f968c4a340199cdc2dde0262d86b4832f84cae8daefa831909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a2fb699e2e8a016fa383c1d4ed2d4f0d69c26b79fb2f0dd735a84de2fd5a23d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:17Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:17 crc kubenswrapper[4853]: I1122 07:11:17.761517 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://347fc1ee4909acd862f23a9f25b4ac7ab7271ff03f0f726479d9b711820970c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:17Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:17 crc kubenswrapper[4853]: I1122 07:11:17.778701 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"476c875a-2b87-419a-8042-0ba059620fd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://babd614ecd140db7467335933695fcc122380ef2c6a3d4ae6e89b0ed29e87d69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zkhrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28cf28a4f0e05df5ad55eff2ab13e375fb1d5725f1d0e3b85c7c9cd785cf4453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zkhrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fflvd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:17Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:17 crc kubenswrapper[4853]: I1122 07:11:17.779199 4853 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:17 crc kubenswrapper[4853]: I1122 07:11:17.779239 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:17 crc kubenswrapper[4853]: I1122 07:11:17.779253 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:17 crc kubenswrapper[4853]: I1122 07:11:17.779277 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:17 crc kubenswrapper[4853]: I1122 07:11:17.779296 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:17Z","lastTransitionTime":"2025-11-22T07:11:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:17 crc kubenswrapper[4853]: I1122 07:11:17.795036 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nhlw4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81cf3334-f910-4d46-be00-b3cd66ba8ed4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbfa536576b192a75c2a3eef9dfde8c7f7de0a8e6edae3870f45f1c44cc48163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ckff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86bcf649be20aa1cc1bce89663ea6b7b920ac2548fbf8a22c87636bdafc4b329\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ckff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nhlw4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:17Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:17 crc kubenswrapper[4853]: I1122 07:11:17.811281 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9003355-0dc6-42ad-950d-a7884e333f4c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f0019c0473b7ccc974dc1659a7ae0565b39d69af17094c420625387ecc9d7f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94704fd95e75549ea29613f00da78d4517cc64896cf1274c9c6f784a61f4ad4a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T07:10:53Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI1122 07:10:26.514729 1 
leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI1122 07:10:26.525241 1 observer_polling.go:159] Starting file observer\\\\nI1122 07:10:26.883412 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI1122 07:10:26.897202 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI1122 07:10:50.420853 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nI1122 07:10:51.525898 1 observer_polling.go:162] Shutting down file observer\\\\nF1122 07:10:53.463148 1 cmd.go:179] failed checking apiserver connectivity: client rate limiter Wait returned an error: context canceled\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:21Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9736c38776f22909b8e8a832683d1b881f020257f7183397286e0dd23c973a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30f542bc05dc23bc8dff5029894d08a6fea8333de31a66432814b7bbc412c4fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5d8d77818159ca98019aac809db4b1b2460723bc56b9b6728eec11faeefd026\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/opensh
ift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:17Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:17 crc kubenswrapper[4853]: I1122 07:11:17.827460 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:17Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:17 crc kubenswrapper[4853]: I1122 07:11:17.858140 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8dcea4b-e887-437a-972d-e6e10d41f725\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://753693005f8490e9928780fd4f230a1c8aba453e1ccbca71e70299799bfa46ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64d65e4d54cc8971b6f6349f7d32243b7faa5bb5974b22ed904a36fa34bc69fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:30Z\\\"}},\\\"volumeM
ounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0ba60480f038947ca4a2b9f963d38cb004714735d411218eaf6231614c4e21f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c83df4b798499177cfbd1372a939649b6cebb71b50b695af5ba7815cf450669\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed9e89cbcdfc1ac8430f617ee31a367c03be0a407da8af967930fa821e6a12a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://742d02f24a7096b76f82453ab15e531d4a378b4bbf0bb36eb1a1cb4ae1873165\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://742d02f24a7096b76f82453ab15e531d4a378b4bbf0bb36eb1a1cb4ae1873165\\\",\\\"exitCode\\\":0,\\\"fi
nishedAt\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2253bab71d535b61f7489679566abc1acf210e0d5f69773fedbb02befab17c2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2253bab71d535b61f7489679566abc1acf210e0d5f69773fedbb02befab17c2d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:21Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8fa71a988b649937f137ffb7533456570bf4fcd199134424fc7bff2b62335f1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fa71a988b649937f137ffb7533456570bf4fcd199134424fc7bff2b62335f1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:16Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:17Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:17 crc kubenswrapper[4853]: I1122 07:11:17.875850 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9c07602dc6c3b45809dd76a7079002e54e0bae24f7e3bc3470b463389303e74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:17Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:17 crc kubenswrapper[4853]: I1122 07:11:17.882065 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:17 crc kubenswrapper[4853]: I1122 07:11:17.882108 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:17 crc kubenswrapper[4853]: I1122 07:11:17.882121 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:17 crc kubenswrapper[4853]: I1122 07:11:17.882139 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:17 crc kubenswrapper[4853]: I1122 07:11:17.882154 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:17Z","lastTransitionTime":"2025-11-22T07:11:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:17 crc kubenswrapper[4853]: I1122 07:11:17.895979 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:17Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:17 crc kubenswrapper[4853]: I1122 07:11:17.911848 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:17Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:17 crc kubenswrapper[4853]: I1122 07:11:17.928231 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rvgxj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbbe3472-17cc-48dd-8e46-393b00149429\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5acc4b664aaf1d5b0471beb25334e65152bc3b9907f911f90e4e7899aaa1ce8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"syste
m-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ct6dm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rvgxj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:17Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:17 crc kubenswrapper[4853]: I1122 07:11:17.945156 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ckn94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f60b37f-d6f5-4145-a3e7-cfe92fca6d77\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3adc38c2534c8026e260a63060426102130ca1bcbcb4e3741aaf4e87170a4c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3adc38c2534c8026e260a63060426102130ca1bcbcb4e3741aaf4e87170a4c7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db0b80c2471d5179ede5c4055fcebaabe08892f1534218952c2a19ab599be022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db0b80c2471d5179ede5c4055fcebaabe08892f1534218952c2a19ab599be022\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://157ea229b1effe5885acafb7b48db5695864ff001b1293c053aefb494f544f7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://157ea229b1effe5885acafb7b48db5695864ff001b1293c053aefb494f544f7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://203cba48e280628c33b464c336957085420ca2d345b1acfb5c8b8968ed2317e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://203cba48e280628c33b464c336957085420ca2d345b1acfb5c8b8968ed2317e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a10cb7826668319845471d465df1784a2cfba12ec3892e986bcf316c0c5e3c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly
\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ckn94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:17Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:17 crc kubenswrapper[4853]: I1122 07:11:17.959673 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-pd6gs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9cc2bf97-eb39-4b0c-abda-99b49bb530fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bzn4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bzn4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:15Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-pd6gs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:17Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:17 crc kubenswrapper[4853]: I1122 07:11:17.979452 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqtsz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"893f7e02-580a-4093-ab42-ea73ffffcfe6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://959a487fe36b2bf7615c5ed65bc8255b52332f6e2d973912a5e028ff35324f2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://979f6b48ff6c91deca2bb0784d1ec745c4c0d70abdfd6f519eb82058fec05c5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34d7101a46b49cedde91b3502f1bc962179253a1bfbe5f21599ce89a91ab905d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28ab876013e6c4eb3a4806b0184f1d6615a478a34b6d8ee790d29bccd77f33df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccf377912acc39f2d30013ef3bd58a8268b927fd89da2306fcf00cb89c30893e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a08eb68f9d8b071226ac64744097fe0097ab78f86242872c4e0aacdd213adbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f92a80784d0788a478ccde73bcf34dd9fc3e42dd
5005138293506125bd3924f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://902e41148618e1099d651f2a4cbfa00ca1dd11533b086564f090dd498ecc1b95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccou
nt\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7b9396399dd9787c1addcb4e2e7697d78fee817d56cafd408b08b2b7d0cc5f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7b9396399dd9787c1addcb4e2e7697d78fee817d56cafd408b08b2b7d0cc5f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pqtsz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:17Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:17 crc kubenswrapper[4853]: I1122 07:11:17.984987 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:17 crc kubenswrapper[4853]: I1122 07:11:17.985035 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:17 crc kubenswrapper[4853]: I1122 07:11:17.985048 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:17 crc kubenswrapper[4853]: I1122 07:11:17.985070 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:17 crc kubenswrapper[4853]: I1122 07:11:17.985085 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:17Z","lastTransitionTime":"2025-11-22T07:11:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:17 crc kubenswrapper[4853]: I1122 07:11:17.994701 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9nx9m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120cba0a-6e0b-40b3-8c15-46e7ff7c8641\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7b7b92862fc6cca812cb5c0aa9b320fbbd11f8b458d591b56cacab985a79edd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4p7q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9nx9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:17Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:18 crc kubenswrapper[4853]: I1122 07:11:18.012963 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"19e6b5d3-a090-47b9-bddc-dd2aaccd4ff4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96b61660fcd78f2f81eab2c1a0cb32018a5b32d49581167e33e7dca8ba5ddf2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a0d61ce2e58ce8d9abdd1fb9688722d8e25f8354553aef76b8cd4b3897135a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21be0b3346d0dd20c21a867b8fbd30e6291d09bd1745febbc2face7610f6d831\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e4d5ab3d780b4ec1360dd8a07fc7de5d4daec3a5f8334010949d4fd60b5516b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://c6dbbb1c22bd116e5913c6bc774b4705f0e4342861ceae30dc0f53c30d6fa39f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T07:10:53Z\\\",\\\"message\\\":\\\" envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1122 07:10:53.494379 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1122 07:10:53.495005 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1122 07:10:53.495036 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1122 07:10:53.495110 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1122 07:10:53.495131 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1122 07:10:53.496216 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1763795444\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1763795444\\\\\\\\\\\\\\\" (2025-11-22 06:10:44 +0000 UTC to 2026-11-22 06:10:44 +0000 UTC (now=2025-11-22 07:10:53.49617265 +0000 UTC))\\\\\\\"\\\\nI1122 07:10:53.496261 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1122 07:10:53.496336 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1122 07:10:53.497036 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1122 07:10:53.497137 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1122 07:10:53.498721 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI1122 07:10:53.500291 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF1122 07:10:53.501305 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:32Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c1ee716027cb8b663ee9faec3faed873df5079418b9b46574cf3b3a74048eef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13ca1fba1325e0ae8556def3ba0c98aa0aff6e3b9ed2aab66a5c950d2bace44e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13ca1fba1325e0ae8556def3ba0c98aa0aff6e3b9ed2aab66a5c950d2bace44e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:18Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:18 crc kubenswrapper[4853]: I1122 07:11:18.030072 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mlpz8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c09200ba-013f-45e3-b581-8523557344b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3feee491b899c2b2f58330fe2e87ba5404deca4249c5d39a6d3aa08acd10ef29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvk87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mlpz8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:18Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:18 crc kubenswrapper[4853]: I1122 07:11:18.047334 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://347fc1ee4909acd862f23a9f25b4ac7ab7271ff03f0f726479d9b711820970c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:18Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:18 crc kubenswrapper[4853]: I1122 07:11:18.061961 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"476c875a-2b87-419a-8042-0ba059620fd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://babd614ecd140db7467335933695fcc122380ef2c6a3d4ae6e89b0ed29e87d69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zkhrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28cf28a4f0e05df5ad55eff2ab13e375fb1d5725f1d0e3b85c7c9cd785cf4453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zkhrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fflvd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:18Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:18 crc kubenswrapper[4853]: I1122 07:11:18.076200 4853 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nhlw4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81cf3334-f910-4d46-be00-b3cd66ba8ed4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbfa536576b192a75c2a3eef9dfde8c7f7de0a8e6edae3870f45f1c44cc48163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ckff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86bcf649be20aa1cc1bce89663ea6b7b920ac2548fbf8a22c87636bdafc4b329\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ckff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nhlw4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:18Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:18 crc kubenswrapper[4853]: I1122 07:11:18.091866 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63551235-20f7-4ccc-a132-9f5302e01da5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0368eb3431a7b8e0225b3728f72d601ce3f9639f06175b96818fd798b33e4143\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fc661089ab1f7286ead68fd510a0848c9df8dd41a8ed349d03b43b72e1a369a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdc0676190f39e3a925b593e663ec1af1fa253b8177b7cb81fd49515e0c55c3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"
},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a97a0d92e5fdd8906fa4f150c4b7d95fdd1f3b92ac45a46e0a6846fd4a15bb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a97a0d92e5fdd8906fa4f150c4b7d95fdd1f3b92ac45a46e0a6846fd4a15bb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:19Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:16Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:18Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:18 crc kubenswrapper[4853]: I1122 07:11:18.092011 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:18 crc kubenswrapper[4853]: I1122 07:11:18.092086 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:18 crc kubenswrapper[4853]: I1122 07:11:18.092102 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:18 crc kubenswrapper[4853]: I1122 07:11:18.092128 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:18 crc kubenswrapper[4853]: I1122 07:11:18.092144 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:18Z","lastTransitionTime":"2025-11-22T07:11:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:18 crc kubenswrapper[4853]: I1122 07:11:18.107511 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b27716f9e6c912f968c4a340199cdc2dde0262d86b4832f84cae8daefa831909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a2fb699e2e8a016fa383c1d4ed2d4f0d69c26b79fb2f0dd735a84de2fd5a23d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:18Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:18 crc kubenswrapper[4853]: I1122 07:11:18.122280 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:18Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:18 crc kubenswrapper[4853]: I1122 07:11:18.140844 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9003355-0dc6-42ad-950d-a7884e333f4c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f0019c0473b7ccc974dc1659a7ae0565b39d69af17094c420625387ecc9d7f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94704fd95e75549ea29613f00da78d4517cc64896cf1274c9c6f784a61f4ad4a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T07:10:53Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI1122 07:10:26.514729 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI1122 07:10:26.525241 1 observer_polling.go:159] Starting file observer\\\\nI1122 07:10:26.883412 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI1122 07:10:26.897202 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI1122 07:10:50.420853 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nI1122 07:10:51.525898 1 observer_polling.go:162] Shutting down file observer\\\\nF1122 07:10:53.463148 1 cmd.go:179] failed checking apiserver connectivity: client rate limiter Wait returned an error: context canceled\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:21Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9736c38776f22909b8e8a832683d1b881f020257f7183397286e0dd23c973a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30f542bc05dc23bc8dff5029894d08a6fea8333de31a66432814b7bbc412c4fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5d8d77818159ca98019aac809db4b1b2460723bc56b9b6728eec11faeefd026\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager
-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:18Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:18 crc kubenswrapper[4853]: I1122 07:11:18.152885 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:18Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:18 crc kubenswrapper[4853]: I1122 07:11:18.163796 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:18Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:18 crc kubenswrapper[4853]: I1122 07:11:18.176566 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rvgxj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbbe3472-17cc-48dd-8e46-393b00149429\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5acc4b664aaf1d5b0471beb25334e65152bc3b9907f911f90e4e7899aaa1ce8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ct6dm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rvgxj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:18Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:18 crc kubenswrapper[4853]: I1122 07:11:18.189552 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ckn94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f60b37f-d6f5-4145-a3e7-cfe92fca6d77\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3adc38c2534c8026e260a63060426102130ca1bcbcb4e3741aaf4e87170a4c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3adc38c2534c8026e260a63060426102130ca1bcbcb4e3741aaf4e87170a4c7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db0b80c2471d5179ede5c4055fcebaabe08892f1534218952c2a19ab599be022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db0b80c2471d5179ede5c4055fcebaabe08892f1534218952c2a19ab599be022\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://157ea229b1effe5885acafb7b48db5695864ff001b1293c053aefb494f544f7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://157ea229b1effe5885acafb7b48db5695864ff001b1293c053aefb494f544f7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://203cba48e280628c33b464c336957085420ca2d345b1acfb5c8b8968ed2317e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://203cba48e280628c33b464c336957085420ca2d345b1acfb5c8b8968ed2317e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a10cb7826668319845471d465df1784a2cfba12ec3892e986bcf316c0c5e3c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly
\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ckn94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:18Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:18 crc kubenswrapper[4853]: I1122 07:11:18.194504 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:18 crc kubenswrapper[4853]: I1122 07:11:18.194554 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:18 crc kubenswrapper[4853]: I1122 07:11:18.194564 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:18 crc kubenswrapper[4853]: I1122 07:11:18.194584 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:18 crc kubenswrapper[4853]: I1122 07:11:18.194596 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:18Z","lastTransitionTime":"2025-11-22T07:11:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:18 crc kubenswrapper[4853]: I1122 07:11:18.201598 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-pd6gs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9cc2bf97-eb39-4b0c-abda-99b49bb530fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bzn4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bzn4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:15Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-pd6gs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:18Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:18 crc kubenswrapper[4853]: I1122 07:11:18.224478 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8dcea4b-e887-437a-972d-e6e10d41f725\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://753693005f8490e9928780fd4f230a1c8aba453e1ccbca71e70299799bfa46ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64d65e4d54cc8971b6f6349f7d32243b7faa5bb5974b22ed904a36fa34bc69fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0ba60480f038947ca4a2b9f963d38cb004714735d411218eaf6231614c4e21f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c83df4b798499177cfbd1372a939649b6cebb7
1b50b695af5ba7815cf450669\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed9e89cbcdfc1ac8430f617ee31a367c03be0a407da8af967930fa821e6a12a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://742d02f24a7096b76f82453ab15e531d4a378b4bbf0bb36eb1a1cb4ae1873165\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://742d02f24a7096b76f82453ab15e531d4a378b4bbf0bb36eb1a1cb4ae1873165\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2253bab71d535b61f7489679566abc1acf210e0d5f69773fedbb02befab17c2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2253bab71d535b61f7489679566abc1acf210e0d5f69773fedbb02befab17c2d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:21Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8fa71a988b649937f137ffb7533456570bf4fcd199134424fc7bff2b62335f1e\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fa71a988b649937f137ffb7533456570bf4fcd199134424fc7bff2b62335f1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:16Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:18Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:18 crc kubenswrapper[4853]: I1122 07:11:18.236602 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9c07602dc6c3b45809dd76a7079002e54e0bae24f7e3bc3470b463389303e74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:18Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:18 crc kubenswrapper[4853]: I1122 07:11:18.297956 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:18 crc kubenswrapper[4853]: I1122 07:11:18.298019 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:18 crc kubenswrapper[4853]: I1122 07:11:18.298031 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:18 crc kubenswrapper[4853]: I1122 07:11:18.298051 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:18 crc kubenswrapper[4853]: I1122 07:11:18.298065 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:18Z","lastTransitionTime":"2025-11-22T07:11:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:18 crc kubenswrapper[4853]: I1122 07:11:18.400449 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:18 crc kubenswrapper[4853]: I1122 07:11:18.400485 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:18 crc kubenswrapper[4853]: I1122 07:11:18.400514 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:18 crc kubenswrapper[4853]: I1122 07:11:18.400531 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:18 crc kubenswrapper[4853]: I1122 07:11:18.400541 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:18Z","lastTransitionTime":"2025-11-22T07:11:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:18 crc kubenswrapper[4853]: I1122 07:11:18.504260 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:18 crc kubenswrapper[4853]: I1122 07:11:18.504361 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:18 crc kubenswrapper[4853]: I1122 07:11:18.504396 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:18 crc kubenswrapper[4853]: I1122 07:11:18.504437 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:18 crc kubenswrapper[4853]: I1122 07:11:18.504466 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:18Z","lastTransitionTime":"2025-11-22T07:11:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:18 crc kubenswrapper[4853]: I1122 07:11:18.608625 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:18 crc kubenswrapper[4853]: I1122 07:11:18.608674 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:18 crc kubenswrapper[4853]: I1122 07:11:18.608687 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:18 crc kubenswrapper[4853]: I1122 07:11:18.608706 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:18 crc kubenswrapper[4853]: I1122 07:11:18.608719 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:18Z","lastTransitionTime":"2025-11-22T07:11:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:18 crc kubenswrapper[4853]: I1122 07:11:18.711497 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:18 crc kubenswrapper[4853]: I1122 07:11:18.711555 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:18 crc kubenswrapper[4853]: I1122 07:11:18.711568 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:18 crc kubenswrapper[4853]: I1122 07:11:18.711589 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:18 crc kubenswrapper[4853]: I1122 07:11:18.711603 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:18Z","lastTransitionTime":"2025-11-22T07:11:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:18 crc kubenswrapper[4853]: I1122 07:11:18.747466 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 07:11:18 crc kubenswrapper[4853]: I1122 07:11:18.747466 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 07:11:18 crc kubenswrapper[4853]: I1122 07:11:18.747519 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pd6gs" Nov 22 07:11:18 crc kubenswrapper[4853]: I1122 07:11:18.747546 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:11:18 crc kubenswrapper[4853]: E1122 07:11:18.748139 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 07:11:18 crc kubenswrapper[4853]: E1122 07:11:18.748240 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pd6gs" podUID="9cc2bf97-eb39-4b0c-abda-99b49bb530fd" Nov 22 07:11:18 crc kubenswrapper[4853]: E1122 07:11:18.748183 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 07:11:18 crc kubenswrapper[4853]: E1122 07:11:18.748367 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 07:11:18 crc kubenswrapper[4853]: I1122 07:11:18.814243 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:18 crc kubenswrapper[4853]: I1122 07:11:18.814292 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:18 crc kubenswrapper[4853]: I1122 07:11:18.814303 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:18 crc kubenswrapper[4853]: I1122 07:11:18.814322 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:18 crc kubenswrapper[4853]: I1122 07:11:18.814335 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:18Z","lastTransitionTime":"2025-11-22T07:11:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:18 crc kubenswrapper[4853]: I1122 07:11:18.917373 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:18 crc kubenswrapper[4853]: I1122 07:11:18.917431 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:18 crc kubenswrapper[4853]: I1122 07:11:18.917479 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:18 crc kubenswrapper[4853]: I1122 07:11:18.917508 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:18 crc kubenswrapper[4853]: I1122 07:11:18.917532 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:18Z","lastTransitionTime":"2025-11-22T07:11:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:18 crc kubenswrapper[4853]: I1122 07:11:18.950450 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9cc2bf97-eb39-4b0c-abda-99b49bb530fd-metrics-certs\") pod \"network-metrics-daemon-pd6gs\" (UID: \"9cc2bf97-eb39-4b0c-abda-99b49bb530fd\") " pod="openshift-multus/network-metrics-daemon-pd6gs" Nov 22 07:11:18 crc kubenswrapper[4853]: E1122 07:11:18.950612 4853 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 22 07:11:18 crc kubenswrapper[4853]: E1122 07:11:18.950685 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9cc2bf97-eb39-4b0c-abda-99b49bb530fd-metrics-certs podName:9cc2bf97-eb39-4b0c-abda-99b49bb530fd nodeName:}" failed. No retries permitted until 2025-11-22 07:11:22.950663034 +0000 UTC m=+81.791285660 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9cc2bf97-eb39-4b0c-abda-99b49bb530fd-metrics-certs") pod "network-metrics-daemon-pd6gs" (UID: "9cc2bf97-eb39-4b0c-abda-99b49bb530fd") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 22 07:11:19 crc kubenswrapper[4853]: I1122 07:11:19.019655 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:19 crc kubenswrapper[4853]: I1122 07:11:19.019690 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:19 crc kubenswrapper[4853]: I1122 07:11:19.019699 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:19 crc kubenswrapper[4853]: I1122 07:11:19.019719 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:19 crc kubenswrapper[4853]: I1122 07:11:19.019730 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:19Z","lastTransitionTime":"2025-11-22T07:11:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:19 crc kubenswrapper[4853]: I1122 07:11:19.122690 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:19 crc kubenswrapper[4853]: I1122 07:11:19.122843 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:19 crc kubenswrapper[4853]: I1122 07:11:19.122876 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:19 crc kubenswrapper[4853]: I1122 07:11:19.122911 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:19 crc kubenswrapper[4853]: I1122 07:11:19.122931 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:19Z","lastTransitionTime":"2025-11-22T07:11:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:19 crc kubenswrapper[4853]: I1122 07:11:19.226152 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:19 crc kubenswrapper[4853]: I1122 07:11:19.226233 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:19 crc kubenswrapper[4853]: I1122 07:11:19.226257 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:19 crc kubenswrapper[4853]: I1122 07:11:19.226288 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:19 crc kubenswrapper[4853]: I1122 07:11:19.226309 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:19Z","lastTransitionTime":"2025-11-22T07:11:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:19 crc kubenswrapper[4853]: I1122 07:11:19.329019 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:19 crc kubenswrapper[4853]: I1122 07:11:19.329076 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:19 crc kubenswrapper[4853]: I1122 07:11:19.329092 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:19 crc kubenswrapper[4853]: I1122 07:11:19.329115 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:19 crc kubenswrapper[4853]: I1122 07:11:19.329131 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:19Z","lastTransitionTime":"2025-11-22T07:11:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:19 crc kubenswrapper[4853]: I1122 07:11:19.432601 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:19 crc kubenswrapper[4853]: I1122 07:11:19.432644 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:19 crc kubenswrapper[4853]: I1122 07:11:19.432654 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:19 crc kubenswrapper[4853]: I1122 07:11:19.432671 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:19 crc kubenswrapper[4853]: I1122 07:11:19.432683 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:19Z","lastTransitionTime":"2025-11-22T07:11:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:19 crc kubenswrapper[4853]: I1122 07:11:19.535993 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:19 crc kubenswrapper[4853]: I1122 07:11:19.536075 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:19 crc kubenswrapper[4853]: I1122 07:11:19.536102 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:19 crc kubenswrapper[4853]: I1122 07:11:19.536131 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:19 crc kubenswrapper[4853]: I1122 07:11:19.536151 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:19Z","lastTransitionTime":"2025-11-22T07:11:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:19 crc kubenswrapper[4853]: I1122 07:11:19.629820 4853 generic.go:334] "Generic (PLEG): container finished" podID="6f60b37f-d6f5-4145-a3e7-cfe92fca6d77" containerID="9a10cb7826668319845471d465df1784a2cfba12ec3892e986bcf316c0c5e3c1" exitCode=0 Nov 22 07:11:19 crc kubenswrapper[4853]: I1122 07:11:19.629928 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ckn94" event={"ID":"6f60b37f-d6f5-4145-a3e7-cfe92fca6d77","Type":"ContainerDied","Data":"9a10cb7826668319845471d465df1784a2cfba12ec3892e986bcf316c0c5e3c1"} Nov 22 07:11:19 crc kubenswrapper[4853]: I1122 07:11:19.639805 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:19 crc kubenswrapper[4853]: I1122 07:11:19.639877 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:19 crc kubenswrapper[4853]: I1122 07:11:19.639896 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:19 crc kubenswrapper[4853]: I1122 07:11:19.639947 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:19 crc kubenswrapper[4853]: I1122 07:11:19.639970 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:19Z","lastTransitionTime":"2025-11-22T07:11:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:19 crc kubenswrapper[4853]: I1122 07:11:19.653570 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9003355-0dc6-42ad-950d-a7884e333f4c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f0019c0473b7ccc974dc1659a7ae0565b39d69af17094c420625387ecc9d7f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94704fd95e75549ea29613f00da78d4517cc64896cf1274c9c6f784a61f4ad4a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T07:10:53Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI1122 07:10:26.514729 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI1122 07:10:26.525241 1 observer_polling.go:159] Starting file observer\\\\nI1122 07:10:26.883412 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI1122 07:10:26.897202 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI1122 07:10:50.420853 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nI1122 07:10:51.525898 1 observer_polling.go:162] Shutting down file observer\\\\nF1122 07:10:53.463148 1 cmd.go:179] failed checking apiserver connectivity: client rate limiter Wait returned an error: context canceled\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:21Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9736c38776f22909b8e8a832683d1b881f020257f7183397286e0dd23c973a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30f542bc05dc23bc8dff5029894d08a6fea8333de31a66432814b7bbc412c4fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5d8d77818159ca98019aac809db4b1b2460723bc56b9b6728eec11faeefd026\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager
-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:19Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:19 crc kubenswrapper[4853]: I1122 07:11:19.669705 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:19Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:19 crc kubenswrapper[4853]: I1122 07:11:19.686212 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-pd6gs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9cc2bf97-eb39-4b0c-abda-99b49bb530fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bzn4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bzn4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:15Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-pd6gs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:19Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:19 crc kubenswrapper[4853]: I1122 07:11:19.711286 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8dcea4b-e887-437a-972d-e6e10d41f725\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://753693005f8490e9928780fd4f230a1c8aba453e1ccbca71e70299799bfa46ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64d65e4d54cc8971b6f6349f7d32243b7faa5bb5974b22ed904a36fa34bc69fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0ba60480f038947ca4a2b9f963d38cb004714735d411218eaf6231614c4e21f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c83df4b798499177cfbd1372a939649b6cebb7
1b50b695af5ba7815cf450669\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed9e89cbcdfc1ac8430f617ee31a367c03be0a407da8af967930fa821e6a12a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://742d02f24a7096b76f82453ab15e531d4a378b4bbf0bb36eb1a1cb4ae1873165\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://742d02f24a7096b76f82453ab15e531d4a378b4bbf0bb36eb1a1cb4ae1873165\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2253bab71d535b61f7489679566abc1acf210e0d5f69773fedbb02befab17c2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2253bab71d535b61f7489679566abc1acf210e0d5f69773fedbb02befab17c2d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:21Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8fa71a988b649937f137ffb7533456570bf4fcd199134424fc7bff2b62335f1e\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fa71a988b649937f137ffb7533456570bf4fcd199134424fc7bff2b62335f1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:16Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:19Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:19 crc kubenswrapper[4853]: I1122 07:11:19.728362 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9c07602dc6c3b45809dd76a7079002e54e0bae24f7e3bc3470b463389303e74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:19Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:19 crc kubenswrapper[4853]: I1122 07:11:19.742609 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:19 crc kubenswrapper[4853]: I1122 07:11:19.742660 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:19 crc kubenswrapper[4853]: I1122 07:11:19.742677 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:19 crc kubenswrapper[4853]: I1122 07:11:19.742706 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:19 crc kubenswrapper[4853]: I1122 07:11:19.742725 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:19Z","lastTransitionTime":"2025-11-22T07:11:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:19 crc kubenswrapper[4853]: I1122 07:11:19.745119 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:19Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:19 crc kubenswrapper[4853]: I1122 07:11:19.764264 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:19Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:19 crc kubenswrapper[4853]: I1122 07:11:19.781219 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rvgxj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbbe3472-17cc-48dd-8e46-393b00149429\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5acc4b664aaf1d5b0471beb25334e65152bc3b9907f911f90e4e7899aaa1ce8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ct6dm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rvgxj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:19Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:19 crc kubenswrapper[4853]: I1122 07:11:19.801056 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ckn94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f60b37f-d6f5-4145-a3e7-cfe92fca6d77\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3adc38c2534c8026e260a63060426102130ca1bcbcb4e3741aaf4e87170a4c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3adc38c2534c8026e260a63060426102130ca1bcbcb4e3741aaf4e87170a4c7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db0b80c2471d5179ede5c4055fcebaabe08892f1534218952c2a19ab599be022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db0b80c2471d5179ede5c4055fcebaabe08892f1534218952c2a19ab599be022\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://157ea229b1effe5885acafb7b48db5695864ff001b1293c053aefb494f544f7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://157ea229b1effe5885acafb7b48db5695864ff001b1293c053aefb494f544f7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://203cba48e280628c33b464c336957085420ca2d345b1acfb5c8b8968ed2317e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://203cba48e280628c33b464c336957085420ca2d345b1acfb5c8b8968ed2317e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a10cb7826668319845471d465df1784a2cfba12ec3892e986bcf316c0c5e3c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a10cb7826668319845471d465df1784a2cfba12ec3892e986bcf316c0c5e3c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ckn94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:19Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:19 crc kubenswrapper[4853]: I1122 07:11:19.820788 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"19e6b5d3-a090-47b9-bddc-dd2aaccd4ff4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96b61660fcd78f2f81eab2c1a0cb32018a5b32d49581167e33e7dca8ba5ddf2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a0d61ce2e58ce8d9abdd1fb9688722d8e25f8354553aef76b8cd4b3897135a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21be0b3346d0dd20c21a867b8fbd30e6291d09bd1745febbc2face7610f6d831\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e4d5ab3d780b4ec1360dd8a07fc7de5d4daec3a5f8334010949d4fd60b5516b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6dbbb1c22bd116e5913c6bc774b4705f0e4342861ceae30dc0f53c30d6fa39f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T07:10:53Z\\\",\\\"message\\\":\\\" envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" 
enabled=false\\\\nI1122 07:10:53.494379 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1122 07:10:53.495005 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1122 07:10:53.495036 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1122 07:10:53.495110 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1122 07:10:53.495131 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1122 07:10:53.496216 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1763795444\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1763795444\\\\\\\\\\\\\\\" (2025-11-22 06:10:44 +0000 UTC to 2026-11-22 06:10:44 +0000 UTC (now=2025-11-22 07:10:53.49617265 +0000 UTC))\\\\\\\"\\\\nI1122 07:10:53.496261 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1122 07:10:53.496336 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1122 07:10:53.497036 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1122 07:10:53.497137 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1122 07:10:53.498721 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI1122 07:10:53.500291 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF1122 07:10:53.501305 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:32Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c1ee716027cb8b663ee9faec3faed873df5079418b9b46574cf3b3a74048eef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13ca1fba1325e0ae8556def3ba0c98aa0aff6e3b9ed2aab66a5c950d2bace44e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13ca1fba1325e0ae8556def3ba0c98aa0aff6e3b9ed2aab66a5c950d2bace44e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:19Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:19 crc kubenswrapper[4853]: I1122 07:11:19.837823 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mlpz8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c09200ba-013f-45e3-b581-8523557344b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3feee491b899c2b2f58330fe2e87ba5404deca4249c5d39a6d3aa08acd10ef29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvk87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mlpz8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:19Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:19 crc kubenswrapper[4853]: I1122 07:11:19.846432 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:19 crc kubenswrapper[4853]: I1122 07:11:19.846486 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:19 crc kubenswrapper[4853]: I1122 07:11:19.846499 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:19 crc kubenswrapper[4853]: I1122 07:11:19.846522 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:19 crc kubenswrapper[4853]: I1122 07:11:19.846537 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:19Z","lastTransitionTime":"2025-11-22T07:11:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:19 crc kubenswrapper[4853]: I1122 07:11:19.873122 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqtsz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"893f7e02-580a-4093-ab42-ea73ffffcfe6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://959a487fe36b2bf7615c5ed65bc8255b52332f6e2d973912a5e028ff35324f2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://979f6b48ff6c91deca2bb0784d1ec745c4c0d70abdfd6f519eb82058fec05c5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOn
ly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34d7101a46b49cedde91b3502f1bc962179253a1bfbe5f21599ce89a91ab905d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28ab876013e6c4eb3a4806b0184f1d6615a478a34b6d8ee790d29bccd77f33df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccf377912acc39f2d30013ef3bd58a8268b927fd89da2306fcf00cb89c30893e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a08eb68f9d8b071226ac64744097fe0097ab78f86242872c4e0aacdd213adbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d
2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f92a80784d0788a478ccde73bcf34dd9fc3e42dd5005138293506125bd3924f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\
"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://902e41148618e1099d651f2a4cbfa00ca1dd11533b086564f090dd498ecc1b95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7b9396399dd9787c1addcb4e2e7697d78fee817d56cafd408b08b2b7d0cc5f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7b9396399dd9787c1addcb4e2e7697d78fee817d56cafd408b08b2b7d0cc5f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pqtsz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:19Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:19 crc kubenswrapper[4853]: I1122 07:11:19.888656 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9nx9m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"120cba0a-6e0b-40b3-8c15-46e7ff7c8641\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7b7b92862fc6cca812cb5c0aa9b320fbbd11f8b458d591b56cacab985a79edd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4p7q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9nx9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:19Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:19 crc kubenswrapper[4853]: I1122 07:11:19.904827 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63551235-20f7-4ccc-a132-9f5302e01da5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0368eb3431a7b8e0225b3728f72d601ce3f9639f06175b96818fd798b33e4143\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fc661089ab1f7286ead68fd510a0848c9df8dd41a8ed349d03b43b72e1a369a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdc0676190f39e3a925b593e663ec1af1fa253b8177b7cb81fd49515e0c55c3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a97a0d92e5fdd8906fa4f150c4b7d95fdd1f3b92ac45a46e0a6846fd4a15bb0\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a97a0d92e5fdd8906fa4f150c4b7d95fdd1f3b92ac45a46e0a6846fd4a15bb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:19Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:16Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:19Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:19 crc kubenswrapper[4853]: I1122 07:11:19.923374 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b27716f9e6c912f968c4a340199cdc2dde0262d86b4832f84cae8daefa831909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a2fb699e2e8a016fa383c1d4ed2d4f0d69c26b79fb2f0dd735a84de2fd5a23d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:19Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:19 crc kubenswrapper[4853]: I1122 07:11:19.940478 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://347fc1ee4909acd862f23a9f25b4ac7ab7271ff03f0f726479d9b711820970c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:19Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:19 crc kubenswrapper[4853]: I1122 07:11:19.949522 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:19 crc kubenswrapper[4853]: I1122 07:11:19.949731 4853 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:19 crc kubenswrapper[4853]: I1122 07:11:19.949834 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:19 crc kubenswrapper[4853]: I1122 07:11:19.949977 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:19 crc kubenswrapper[4853]: I1122 07:11:19.950048 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:19Z","lastTransitionTime":"2025-11-22T07:11:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:19 crc kubenswrapper[4853]: I1122 07:11:19.959704 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"476c875a-2b87-419a-8042-0ba059620fd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://babd614ecd140db7467335933695fcc122380ef2c6a3d4ae6e89b0ed29e87d69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zkhrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28cf28a4f0e05df5ad55eff2ab13e375fb1d5725f1d0e3b85c7c9cd785cf4453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\
\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zkhrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fflvd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:19Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:19 crc kubenswrapper[4853]: I1122 07:11:19.973288 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nhlw4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81cf3334-f910-4d46-be00-b3cd66ba8ed4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbfa536576b192a75c2a3eef9dfde8c7f7de0a8e6edae3870f45f1c44cc48163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ckff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86bcf649be20aa1cc1bce89663ea6b7b920ac2548fbf8a22c87636bdafc4b329\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc
32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ckff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nhlw4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:19Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:20 crc kubenswrapper[4853]: I1122 07:11:20.053468 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:20 crc kubenswrapper[4853]: I1122 07:11:20.053538 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:20 crc kubenswrapper[4853]: I1122 07:11:20.053559 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:20 crc kubenswrapper[4853]: I1122 07:11:20.053589 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:20 crc kubenswrapper[4853]: I1122 07:11:20.053610 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:20Z","lastTransitionTime":"2025-11-22T07:11:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:20 crc kubenswrapper[4853]: I1122 07:11:20.156527 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:20 crc kubenswrapper[4853]: I1122 07:11:20.156598 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:20 crc kubenswrapper[4853]: I1122 07:11:20.156624 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:20 crc kubenswrapper[4853]: I1122 07:11:20.156663 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:20 crc kubenswrapper[4853]: I1122 07:11:20.156684 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:20Z","lastTransitionTime":"2025-11-22T07:11:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:20 crc kubenswrapper[4853]: I1122 07:11:20.259233 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:20 crc kubenswrapper[4853]: I1122 07:11:20.259284 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:20 crc kubenswrapper[4853]: I1122 07:11:20.259299 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:20 crc kubenswrapper[4853]: I1122 07:11:20.259320 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:20 crc kubenswrapper[4853]: I1122 07:11:20.259334 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:20Z","lastTransitionTime":"2025-11-22T07:11:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:20 crc kubenswrapper[4853]: I1122 07:11:20.361894 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:20 crc kubenswrapper[4853]: I1122 07:11:20.361933 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:20 crc kubenswrapper[4853]: I1122 07:11:20.361944 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:20 crc kubenswrapper[4853]: I1122 07:11:20.361963 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:20 crc kubenswrapper[4853]: I1122 07:11:20.361976 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:20Z","lastTransitionTime":"2025-11-22T07:11:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:20 crc kubenswrapper[4853]: I1122 07:11:20.385154 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:20 crc kubenswrapper[4853]: I1122 07:11:20.385217 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:20 crc kubenswrapper[4853]: I1122 07:11:20.385230 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:20 crc kubenswrapper[4853]: I1122 07:11:20.385250 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:20 crc kubenswrapper[4853]: I1122 07:11:20.385264 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:20Z","lastTransitionTime":"2025-11-22T07:11:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:20 crc kubenswrapper[4853]: E1122 07:11:20.402854 4853 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:11:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:11:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:11:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:11:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d74141ce-7696-4d74-b510-3a9c2c375ecd\\\",\\\"systemUUID\\\":\\\"362c9708-b683-4c02-a83b-39323a200ef4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:20Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:20 crc kubenswrapper[4853]: I1122 07:11:20.407438 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:20 crc kubenswrapper[4853]: I1122 07:11:20.407497 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 22 07:11:20 crc kubenswrapper[4853]: I1122 07:11:20.407517 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:20 crc kubenswrapper[4853]: I1122 07:11:20.407538 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:20 crc kubenswrapper[4853]: I1122 07:11:20.407552 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:20Z","lastTransitionTime":"2025-11-22T07:11:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:20 crc kubenswrapper[4853]: E1122 07:11:20.419879 4853 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:11:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:11:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:11:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:11:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d74141ce-7696-4d74-b510-3a9c2c375ecd\\\",\\\"systemUUID\\\":\\\"362c9708-b683-4c02-a83b-39323a200ef4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:20Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:20 crc kubenswrapper[4853]: I1122 07:11:20.424085 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:20 crc kubenswrapper[4853]: I1122 07:11:20.424272 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 22 07:11:20 crc kubenswrapper[4853]: I1122 07:11:20.424350 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:20 crc kubenswrapper[4853]: I1122 07:11:20.424427 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:20 crc kubenswrapper[4853]: I1122 07:11:20.424507 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:20Z","lastTransitionTime":"2025-11-22T07:11:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:20 crc kubenswrapper[4853]: E1122 07:11:20.438090 4853 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:11:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:11:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:11:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:11:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d74141ce-7696-4d74-b510-3a9c2c375ecd\\\",\\\"systemUUID\\\":\\\"362c9708-b683-4c02-a83b-39323a200ef4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:20Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:20 crc kubenswrapper[4853]: I1122 07:11:20.442025 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:20 crc kubenswrapper[4853]: I1122 07:11:20.442124 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
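Every failed patch in this stretch dies at the same point: the kubelet posts the node status to the node.network-node-identity.openshift.io webhook at https://127.0.0.1:9743/node, and TLS verification rejects the serving certificate because the node clock (2025-11-22) is past the certificate's notAfter date (2025-08-24T17:21:41Z). A minimal diagnostic sketch, not part of the log, that pulls that certificate and reports its expiry; it assumes it runs on the CRC host and that the third-party cryptography package is installed:

```python
#!/usr/bin/env python3
# Sketch: fetch the webhook's serving certificate and report its validity.
# Host/port are taken from the errors above; `cryptography` is an assumed dep.
import socket
import ssl
from datetime import datetime

from cryptography import x509

HOST, PORT = "127.0.0.1", 9743

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE  # we want the cert even though it is expired

with socket.create_connection((HOST, PORT), timeout=5) as sock:
    with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
        der = tls.getpeercert(binary_form=True)  # DER bytes, unverified

cert = x509.load_der_x509_certificate(der)
print("subject: ", cert.subject.rfc4514_string())
print("notAfter:", cert.not_valid_after)                    # expect 2025-08-24 17:21:41
print("expired: ", cert.not_valid_after < datetime.utcnow())
```

A gap like this one typically appears when the VM image sat unused past the certificate rotation window, so the certificates expired while the cluster was off.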
event="NodeHasNoDiskPressure" Nov 22 07:11:20 crc kubenswrapper[4853]: I1122 07:11:20.442194 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:20 crc kubenswrapper[4853]: I1122 07:11:20.442261 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:20 crc kubenswrapper[4853]: I1122 07:11:20.442328 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:20Z","lastTransitionTime":"2025-11-22T07:11:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:20 crc kubenswrapper[4853]: E1122 07:11:20.453686 4853 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:11:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:11:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:11:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:11:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d74141ce-7696-4d74-b510-3a9c2c375ecd\\\",\\\"systemUUID\\\":\\\"362c9708-b683-4c02-a83b-39323a200ef4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:20Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:20 crc kubenswrapper[4853]: I1122 07:11:20.458437 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:20 crc kubenswrapper[4853]: I1122 07:11:20.458487 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
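Interleaved with the webhook failures, the Ready condition keeps reporting NetworkReady=false with no CNI configuration file in /etc/kubernetes/cni/net.d/, so the node would stay NotReady even if the status patches went through. A small sketch, again not part of the log, for checking whether the network plugin has dropped its config yet; the directory comes from the message, everything else is illustrative:

```python
#!/usr/bin/env python3
# Sketch: list whatever CNI configuration exists; the kubelet reports
# NetworkReady=false until the network plugin writes a file here.
from pathlib import Path

CNI_DIR = Path("/etc/kubernetes/cni/net.d")  # path quoted in the log message

confs = sorted(CNI_DIR.iterdir()) if CNI_DIR.is_dir() else []
if not confs:
    print(f"no CNI configuration in {CNI_DIR}; network plugin has not started")
else:
    for p in confs:
        print(f"found {p.name} ({p.stat().st_size} bytes)")
```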
event="NodeHasNoDiskPressure" Nov 22 07:11:20 crc kubenswrapper[4853]: I1122 07:11:20.458504 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:20 crc kubenswrapper[4853]: I1122 07:11:20.458532 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:20 crc kubenswrapper[4853]: I1122 07:11:20.458549 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:20Z","lastTransitionTime":"2025-11-22T07:11:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:20 crc kubenswrapper[4853]: E1122 07:11:20.471700 4853 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:11:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:11:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:11:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:11:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d74141ce-7696-4d74-b510-3a9c2c375ecd\\\",\\\"systemUUID\\\":\\\"362c9708-b683-4c02-a83b-39323a200ef4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:20Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:20 crc kubenswrapper[4853]: E1122 07:11:20.471845 4853 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 22 07:11:20 crc kubenswrapper[4853]: I1122 07:11:20.473718 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 22 07:11:20 crc kubenswrapper[4853]: I1122 07:11:20.473775 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:20 crc kubenswrapper[4853]: I1122 07:11:20.473790 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:20 crc kubenswrapper[4853]: I1122 07:11:20.473811 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:20 crc kubenswrapper[4853]: I1122 07:11:20.473825 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:20Z","lastTransitionTime":"2025-11-22T07:11:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:20 crc kubenswrapper[4853]: I1122 07:11:20.577016 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:20 crc kubenswrapper[4853]: I1122 07:11:20.577074 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:20 crc kubenswrapper[4853]: I1122 07:11:20.577086 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:20 crc kubenswrapper[4853]: I1122 07:11:20.577110 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:20 crc kubenswrapper[4853]: I1122 07:11:20.577129 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:20Z","lastTransitionTime":"2025-11-22T07:11:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:20 crc kubenswrapper[4853]: I1122 07:11:20.636759 4853 generic.go:334] "Generic (PLEG): container finished" podID="6f60b37f-d6f5-4145-a3e7-cfe92fca6d77" containerID="dc93eb03f5f0cada6bf132a9de4446bc1b8ecafee3769bf18feeecea9d71478f" exitCode=0 Nov 22 07:11:20 crc kubenswrapper[4853]: I1122 07:11:20.636791 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ckn94" event={"ID":"6f60b37f-d6f5-4145-a3e7-cfe92fca6d77","Type":"ContainerDied","Data":"dc93eb03f5f0cada6bf132a9de4446bc1b8ecafee3769bf18feeecea9d71478f"} Nov 22 07:11:20 crc kubenswrapper[4853]: I1122 07:11:20.651151 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-pd6gs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9cc2bf97-eb39-4b0c-abda-99b49bb530fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bzn4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bzn4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:15Z\\\"}}\" for pod 
\"openshift-multus\"/\"network-metrics-daemon-pd6gs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:20Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:20 crc kubenswrapper[4853]: I1122 07:11:20.681636 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:20 crc kubenswrapper[4853]: I1122 07:11:20.681685 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:20 crc kubenswrapper[4853]: I1122 07:11:20.681695 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:20 crc kubenswrapper[4853]: I1122 07:11:20.681589 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8dcea4b-e887-437a-972d-e6e10d41f725\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://753693005f8490e9928780fd4f230a1c8aba453e1ccbca71e70299799bfa46ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64d65e4d54cc8971b6f6349f7d32243b7faa5bb5974b22ed904a36fa34bc69fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernete
s/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0ba60480f038947ca4a2b9f963d38cb004714735d411218eaf6231614c4e21f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c83df4b798499177cfbd1372a939649b6cebb71b50b695af5ba7815cf450669\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed9e89cbcdfc1ac8430f617ee31a367c03be0a407da8af967930fa821e6a12a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://742d02f24a7096b76f82453ab15e531d4a378b4bbf0bb36eb1a1cb4ae1873165\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://742d02f24a7096b76f82453ab15e531d4a378b4bbf0bb36eb1a1cb4ae1873165\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"re
ason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2253bab71d535b61f7489679566abc1acf210e0d5f69773fedbb02befab17c2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2253bab71d535b61f7489679566abc1acf210e0d5f69773fedbb02befab17c2d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:21Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8fa71a988b649937f137ffb7533456570bf4fcd199134424fc7bff2b62335f1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fa71a988b649937f137ffb7533456570bf4fcd199134424fc7bff2b62335f1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:16Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:20Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:20 crc kubenswrapper[4853]: I1122 07:11:20.681714 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:20 crc kubenswrapper[4853]: I1122 07:11:20.681887 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:20Z","lastTransitionTime":"2025-11-22T07:11:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:20 crc kubenswrapper[4853]: I1122 07:11:20.704595 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9c07602dc6c3b45809dd76a7079002e54e0bae24f7e3bc3470b463389303e74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:20Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:20 crc kubenswrapper[4853]: I1122 07:11:20.727869 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:20Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:20 crc kubenswrapper[4853]: I1122 07:11:20.744435 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:20Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:20 crc kubenswrapper[4853]: I1122 07:11:20.748012 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 07:11:20 crc kubenswrapper[4853]: I1122 07:11:20.748155 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pd6gs" Nov 22 07:11:20 crc kubenswrapper[4853]: E1122 07:11:20.748598 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 07:11:20 crc kubenswrapper[4853]: I1122 07:11:20.751795 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:11:20 crc kubenswrapper[4853]: I1122 07:11:20.751839 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 07:11:20 crc kubenswrapper[4853]: E1122 07:11:20.751963 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 07:11:20 crc kubenswrapper[4853]: E1122 07:11:20.752569 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 07:11:20 crc kubenswrapper[4853]: E1122 07:11:20.752648 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pd6gs" podUID="9cc2bf97-eb39-4b0c-abda-99b49bb530fd" Nov 22 07:11:20 crc kubenswrapper[4853]: I1122 07:11:20.770938 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rvgxj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbbe3472-17cc-48dd-8e46-393b00149429\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5acc4b664aaf1d5b0471beb25334e65152bc3b9907f911f90e4e7899aaa1ce8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recurs
iveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ct6dm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rvgxj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:20Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:20 crc kubenswrapper[4853]: I1122 07:11:20.785622 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:20 crc kubenswrapper[4853]: I1122 07:11:20.785682 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:20 crc kubenswrapper[4853]: I1122 07:11:20.785697 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:20 crc kubenswrapper[4853]: I1122 07:11:20.785722 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:20 crc kubenswrapper[4853]: I1122 07:11:20.785736 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:20Z","lastTransitionTime":"2025-11-22T07:11:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:20 crc kubenswrapper[4853]: I1122 07:11:20.793780 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ckn94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f60b37f-d6f5-4145-a3e7-cfe92fca6d77\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3adc38c2534c8026e260a63060426102130ca1bcbcb4e3741aaf4e87170a4c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3adc38c2534c8026e260a63060426102130ca1bcbcb4e3741aaf4e87170a4c7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db0b80c2471d5179ed
e5c4055fcebaabe08892f1534218952c2a19ab599be022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db0b80c2471d5179ede5c4055fcebaabe08892f1534218952c2a19ab599be022\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://157ea229b1effe5885acafb7b48db5695864ff001b1293c053aefb494f544f7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://157ea229b1effe5885acafb7b48db5695864ff001b1293c053aefb494f544f7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://203cba48e280628c33b464c336957085420ca2d345b1acfb5c8b8968ed2317e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://203cba48e280628c33b464c336957085420ca2d345b1acfb5c8b8968ed2317e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\
\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a10cb7826668319845471d465df1784a2cfba12ec3892e986bcf316c0c5e3c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a10cb7826668319845471d465df1784a2cfba12ec3892e986bcf316c0c5e3c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc93eb03f5f0cada6bf132a9de4446bc1b8ecafee3769bf18feeecea9d71478f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc93eb03f5f0cada6bf132a9de4446bc1b8ecafee3769bf18feeecea9d71478f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ckn94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:20Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:20 crc kubenswrapper[4853]: I1122 07:11:20.809266 4853 status_manager.go:875] "Failed to update status for 
pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"19e6b5d3-a090-47b9-bddc-dd2aaccd4ff4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96b61660fcd78f2f81eab2c1a0cb32018a5b32d49581167e33e7dca8ba5ddf2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a0d61ce2e58ce8d9abdd1fb9688722d8e25f8354553aef76b8cd4b3897135a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21be0b3346d0dd20c21a867b8fbd30e6291d09bd1745febbc2face7610f6d831\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\
"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e4d5ab3d780b4ec1360dd8a07fc7de5d4daec3a5f8334010949d4fd60b5516b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6dbbb1c22bd116e5913c6bc774b4705f0e4342861ceae30dc0f53c30d6fa39f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T07:10:53Z\\\",\\\"message\\\":\\\" envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1122 07:10:53.494379 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1122 07:10:53.495005 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1122 07:10:53.495036 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1122 07:10:53.495110 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1122 07:10:53.495131 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1122 07:10:53.496216 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1763795444\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1763795444\\\\\\\\\\\\\\\" (2025-11-22 06:10:44 +0000 UTC to 2026-11-22 06:10:44 +0000 UTC (now=2025-11-22 07:10:53.49617265 +0000 UTC))\\\\\\\"\\\\nI1122 07:10:53.496261 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1122 07:10:53.496336 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1122 07:10:53.497036 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1122 07:10:53.497137 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1122 07:10:53.498721 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI1122 07:10:53.500291 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF1122 07:10:53.501305 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:32Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c1ee716027cb8b663ee9faec3faed873df5079418b9b46574cf3b3a74048eef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13ca1fba1325e0ae8556def3ba0c98aa0aff6e3b9ed2aab66a5c950d2bace44e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13ca1fba1325e0ae8556def3ba0c98aa0aff6e3b9ed2aab66a5c950d2bace44e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:20Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:20 crc kubenswrapper[4853]: I1122 07:11:20.821384 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mlpz8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c09200ba-013f-45e3-b581-8523557344b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3feee491b899c2b2f58330fe2e87ba5404deca4249c5d39a6d3aa08acd10ef29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvk87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mlpz8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:20Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:20 crc kubenswrapper[4853]: I1122 07:11:20.841611 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqtsz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"893f7e02-580a-4093-ab42-ea73ffffcfe6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://959a487fe36b2bf7615c5ed65bc8255b52332f6e2d973912a5e028ff35324f2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://979f6b48ff6c91deca2bb0784d1ec745c4c0d70abdfd6f519eb82058fec05c5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34d7101a46b49cedde91b3502f1bc962179253a1bfbe5f21599ce89a91ab905d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\
",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28ab876013e6c4eb3a4806b0184f1d6615a478a34b6d8ee790d29bccd77f33df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccf377912acc39f2d30013ef3bd58a8268b927fd89da2306fcf00cb89c30893e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a08eb68f9d8b071226ac64744097fe0097ab78f86242872c4e0aacdd213adbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log
-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f92a80784d0788a478ccde73bcf34dd9fc3e42dd5005138293506125bd3924f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://902e41148618e1099d651f2a4cbfa00ca1dd11533b086564f090dd498ecc1b95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\
\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7b9396399dd9787c1addcb4e2e7697d78fee817d56cafd408b08b2b7d0cc5f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7b9396399dd9787c1addcb4e2e7697d78fee817d56cafd408b08b2b7d0cc5f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pqtsz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:20Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:20 crc kubenswrapper[4853]: I1122 07:11:20.853229 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9nx9m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"120cba0a-6e0b-40b3-8c15-46e7ff7c8641\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7b7b92862fc6cca812cb5c0aa9b320fbbd11f8b458d591b56cacab985a79edd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4p7q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9nx9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:20Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:20 crc kubenswrapper[4853]: I1122 07:11:20.869095 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63551235-20f7-4ccc-a132-9f5302e01da5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0368eb3431a7b8e0225b3728f72d601ce3f9639f06175b96818fd798b33e4143\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fc661089ab1f7286ead68fd510a0848c9df8dd41a8ed349d03b43b72e1a369a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdc0676190f39e3a925b593e663ec1af1fa253b8177b7cb81fd49515e0c55c3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a97a0d92e5fdd8906fa4f150c4b7d95fdd1f3b92ac45a46e0a6846fd4a15bb0\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a97a0d92e5fdd8906fa4f150c4b7d95fdd1f3b92ac45a46e0a6846fd4a15bb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:19Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:16Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:20Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:20 crc kubenswrapper[4853]: I1122 07:11:20.885190 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b27716f9e6c912f968c4a340199cdc2dde0262d86b4832f84cae8daefa831909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a2fb699e2e8a016fa383c1d4ed2d4f0d69c26b79fb2f0dd735a84de2fd5a23d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:20Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:20 crc kubenswrapper[4853]: I1122 07:11:20.888852 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:20 crc kubenswrapper[4853]: I1122 07:11:20.888884 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:20 crc kubenswrapper[4853]: I1122 07:11:20.888895 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:20 crc kubenswrapper[4853]: I1122 07:11:20.888915 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:20 crc kubenswrapper[4853]: I1122 07:11:20.888928 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:20Z","lastTransitionTime":"2025-11-22T07:11:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:20 crc kubenswrapper[4853]: I1122 07:11:20.900263 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://347fc1ee4909acd862f23a9f25b4ac7ab7271ff03f0f726479d9b711820970c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:20Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:20 crc kubenswrapper[4853]: I1122 07:11:20.911308 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"476c875a-2b87-419a-8042-0ba059620fd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://babd614ecd140db7467335933695fcc122380ef2c6a3d4ae6e89b0ed29e87d69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zkhrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28cf28a4f0e05df5ad55eff2ab13e375fb1d5725f1d0e3b85c7c9cd785cf4453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zkhrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fflvd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:20Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:20 crc kubenswrapper[4853]: I1122 07:11:20.931140 4853 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nhlw4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81cf3334-f910-4d46-be00-b3cd66ba8ed4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbfa536576b192a75c2a3eef9dfde8c7f7de0a8e6edae3870f45f1c44cc48163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ckff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86bcf649be20aa1cc1bce89663ea6b7b920ac2548fbf8a22c87636bdafc4b329\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ckff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nhlw4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:20Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:20 crc kubenswrapper[4853]: I1122 07:11:20.943969 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9003355-0dc6-42ad-950d-a7884e333f4c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f0019c0473b7ccc974dc1659a7ae0565b39d69af17094c420625387ecc9d7f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94704fd95e75549ea29613f00da78d4517cc64896cf1274c9c6f784a61f4ad4a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T07:10:53Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI1122 07:10:26.514729 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI1122 07:10:26.525241 1 observer_polling.go:159] Starting file observer\\\\nI1122 07:10:26.883412 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI1122 07:10:26.897202 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI1122 07:10:50.420853 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nI1122 07:10:51.525898 1 observer_polling.go:162] Shutting down file observer\\\\nF1122 07:10:53.463148 1 cmd.go:179] failed checking apiserver connectivity: client rate limiter Wait returned an error: context canceled\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:21Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9736c38776f22909b8e8a832683d1b881f020257f7183397286e0dd23c973a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30f542bc05dc23bc8dff5029894d08a6fea8333de31a66432814b7bbc412c4fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5d8d77818159ca98019aac809db4b1b2460723bc56b9b6728eec11faeefd026\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager
-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:20Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:20 crc kubenswrapper[4853]: I1122 07:11:20.963001 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:20Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:20 crc kubenswrapper[4853]: I1122 07:11:20.992429 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:20 crc kubenswrapper[4853]: I1122 07:11:20.992493 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:20 crc kubenswrapper[4853]: I1122 07:11:20.992509 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:20 crc kubenswrapper[4853]: I1122 07:11:20.992534 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:20 crc kubenswrapper[4853]: I1122 07:11:20.992550 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:20Z","lastTransitionTime":"2025-11-22T07:11:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:21 crc kubenswrapper[4853]: I1122 07:11:21.095995 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:21 crc kubenswrapper[4853]: I1122 07:11:21.096072 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:21 crc kubenswrapper[4853]: I1122 07:11:21.096086 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:21 crc kubenswrapper[4853]: I1122 07:11:21.096114 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:21 crc kubenswrapper[4853]: I1122 07:11:21.096137 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:21Z","lastTransitionTime":"2025-11-22T07:11:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:21 crc kubenswrapper[4853]: I1122 07:11:21.198330 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:21 crc kubenswrapper[4853]: I1122 07:11:21.198394 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:21 crc kubenswrapper[4853]: I1122 07:11:21.198408 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:21 crc kubenswrapper[4853]: I1122 07:11:21.198428 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:21 crc kubenswrapper[4853]: I1122 07:11:21.198442 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:21Z","lastTransitionTime":"2025-11-22T07:11:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:21 crc kubenswrapper[4853]: I1122 07:11:21.301412 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:21 crc kubenswrapper[4853]: I1122 07:11:21.301477 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:21 crc kubenswrapper[4853]: I1122 07:11:21.301494 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:21 crc kubenswrapper[4853]: I1122 07:11:21.301517 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:21 crc kubenswrapper[4853]: I1122 07:11:21.301534 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:21Z","lastTransitionTime":"2025-11-22T07:11:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:21 crc kubenswrapper[4853]: I1122 07:11:21.323916 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-pqtsz" Nov 22 07:11:21 crc kubenswrapper[4853]: I1122 07:11:21.323982 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-pqtsz" Nov 22 07:11:21 crc kubenswrapper[4853]: I1122 07:11:21.325085 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-pqtsz" Nov 22 07:11:21 crc kubenswrapper[4853]: I1122 07:11:21.344877 4853 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-pqtsz" podUID="893f7e02-580a-4093-ab42-ea73ffffcfe6" containerName="ovnkube-controller" probeResult="failure" output="" Nov 22 07:11:21 crc kubenswrapper[4853]: I1122 07:11:21.351962 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-pqtsz" Nov 22 07:11:21 crc kubenswrapper[4853]: I1122 07:11:21.359800 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-pqtsz" Nov 22 07:11:21 crc kubenswrapper[4853]: I1122 07:11:21.370706 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"19e6b5d3-a090-47b9-bddc-dd2aaccd4ff4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96b61660fcd78f2f81eab2c1a0cb32018a5b32d49581167e33e7dca8ba5ddf2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a0d61ce2e58ce8d9abdd1fb9688722d8e25f8354553aef76b8cd4b3897135a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21be0b3346d0dd20c21a867b8fbd30e6291d09bd1745febbc2face7610f6d831\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e4d5ab3d780b4ec1360dd8a07fc7de5d4daec3a5f8334010949d4fd60b5516b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6dbbb1c22bd116e5913c6bc774b4705f0e4342861ceae30dc0f53c30d6fa39f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T07:10:53Z\\\",\\\"message\\\":\\\" envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" 
enabled=false\\\\nI1122 07:10:53.494379 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1122 07:10:53.495005 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1122 07:10:53.495036 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1122 07:10:53.495110 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1122 07:10:53.495131 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1122 07:10:53.496216 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1763795444\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1763795444\\\\\\\\\\\\\\\" (2025-11-22 06:10:44 +0000 UTC to 2026-11-22 06:10:44 +0000 UTC (now=2025-11-22 07:10:53.49617265 +0000 UTC))\\\\\\\"\\\\nI1122 07:10:53.496261 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1122 07:10:53.496336 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1122 07:10:53.497036 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1122 07:10:53.497137 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1122 07:10:53.498721 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI1122 07:10:53.500291 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF1122 07:10:53.501305 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:32Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c1ee716027cb8b663ee9faec3faed873df5079418b9b46574cf3b3a74048eef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13ca1fba1325e0ae8556def3ba0c98aa0aff6e3b9ed2aab66a5c950d2bace44e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13ca1fba1325e0ae8556def3ba0c98aa0aff6e3b9ed2aab66a5c950d2bace44e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:21Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:21 crc kubenswrapper[4853]: I1122 07:11:21.384241 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mlpz8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c09200ba-013f-45e3-b581-8523557344b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3feee491b899c2b2f58330fe2e87ba5404deca4249c5d39a6d3aa08acd10ef29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvk87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mlpz8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:21Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:21 crc kubenswrapper[4853]: I1122 07:11:21.404561 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:21 crc kubenswrapper[4853]: I1122 07:11:21.404607 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:21 crc kubenswrapper[4853]: I1122 07:11:21.404657 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:21 crc kubenswrapper[4853]: I1122 07:11:21.404684 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:21 crc kubenswrapper[4853]: I1122 07:11:21.404702 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:21Z","lastTransitionTime":"2025-11-22T07:11:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:21 crc kubenswrapper[4853]: I1122 07:11:21.405613 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqtsz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"893f7e02-580a-4093-ab42-ea73ffffcfe6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://959a487fe36b2bf7615c5ed65bc8255b52332f6e2d973912a5e028ff35324f2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://979f6b48ff6c91deca2bb0784d1ec745c4c0d70abdfd6f519eb82058fec05c5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":tru
e,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34d7101a46b49cedde91b3502f1bc962179253a1bfbe5f21599ce89a91ab905d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28ab876013e6c4eb3a4806b0184f1d6615a478a34b6d8ee790d29bccd77f33df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccf377912acc39f2d30013ef3bd58a8268b927fd89da2306fcf00cb89c30893e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a08eb68f9d8b071226ac64744097fe0097ab78f86242872c4e0aacdd213adbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919
d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f92a80784d0788a478ccde73bcf34dd9fc3e42dd5005138293506125bd3924f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath
\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://902e41148618e1099d651f2a4cbfa00ca1dd11533b086564f090dd498ecc1b95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7b9396399dd9787c1addcb4e2e7697d78fee817d56cafd408b08b2b7d0cc5f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7b9396399dd9787c1addcb4e2e7697d78fee817d56cafd408b08b2b7d0cc5f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pqtsz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:21Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:21 crc kubenswrapper[4853]: I1122 07:11:21.415463 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9nx9m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"120cba0a-6e0b-40b3-8c15-46e7ff7c8641\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7b7b92862fc6cca812cb5c0aa9b320fbbd11f8b458d591b56cacab985a79edd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4p7q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9nx9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:21Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:21 crc kubenswrapper[4853]: I1122 07:11:21.428609 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nhlw4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"81cf3334-f910-4d46-be00-b3cd66ba8ed4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbfa536576b192a75c2a3eef9dfde8c7f7de0a8e6edae3870f45f1c44cc48163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ckff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86bcf649be20aa1cc1bce89663ea6b7b920ac2548fbf8a22c87636bdafc4b329\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ckff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nhlw4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:21Z is after 2025-08-24T17:21:41Z" Nov 22 
07:11:21 crc kubenswrapper[4853]: I1122 07:11:21.444905 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63551235-20f7-4ccc-a132-9f5302e01da5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0368eb3431a7b8e0225b3728f72d601ce3f9639f06175b96818fd798b33e4143\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fc661089ab1f7286ead68fd510a0848c9df8dd41a8ed349d03b43b72e1a369a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdc0676190f39e3a925b593e663ec1af1fa253b8177b7cb81fd49515e0c55c3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.
126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a97a0d92e5fdd8906fa4f150c4b7d95fdd1f3b92ac45a46e0a6846fd4a15bb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a97a0d92e5fdd8906fa4f150c4b7d95fdd1f3b92ac45a46e0a6846fd4a15bb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:19Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:16Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:21Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:21 crc kubenswrapper[4853]: I1122 07:11:21.462108 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b27716f9e6c912f968c4a340199cdc2dde0262d86b4832f84cae8daefa831909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a2fb699e2e8a016fa383c1d4ed2d4f0d69c26b79fb2f0dd735a84de2fd5a23d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\
\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:21Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:21 crc kubenswrapper[4853]: I1122 07:11:21.477061 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://347fc1ee4909acd862f23a9f25b4ac7ab7271ff03f0f726479d9b711820970c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:21Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:21 crc kubenswrapper[4853]: I1122 07:11:21.490200 4853 
status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"476c875a-2b87-419a-8042-0ba059620fd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://babd614ecd140db7467335933695fcc122380ef2c6a3d4ae6e89b0ed29e87d69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zkhrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28cf28a4f0e05df5ad55eff2ab13e375fb1d5725f1d0e3b85c7c9cd785cf4453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zkhrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fflvd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-11-22T07:11:21Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:21 crc kubenswrapper[4853]: I1122 07:11:21.503150 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9003355-0dc6-42ad-950d-a7884e333f4c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f0019c0473b7ccc974dc1659a7ae0565b39d69af17094c420625387ecc9d7f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94704fd95e75549ea29613f00da78d4517cc64896cf1274c9c6f784a61f4ad4a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T07:10:53Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI1122 07:10:26.514729 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI1122 07:10:26.525241 1 observer_polling.go:159] Starting file observer\\\\nI1122 07:10:26.883412 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI1122 07:10:26.897202 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI1122 07:10:50.420853 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nI1122 07:10:51.525898 1 observer_polling.go:162] Shutting down file observer\\\\nF1122 07:10:53.463148 1 cmd.go:179] failed checking apiserver connectivity: client rate limiter Wait returned an error: context canceled\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:21Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9736c38776f22909b8e8a832683d1b881f020257f7183397286e0dd23c973a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30f542bc05dc23bc8dff5029894d08a6fea8333de31a66432814b7bbc412c4fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5d8d77818159ca98019aac809db4b1b2460723bc56b9b6728eec11faeefd026\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager
-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:21Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:21 crc kubenswrapper[4853]: I1122 07:11:21.507008 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:21 crc kubenswrapper[4853]: I1122 07:11:21.507038 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:21 crc kubenswrapper[4853]: I1122 07:11:21.507048 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:21 crc kubenswrapper[4853]: I1122 07:11:21.507067 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:21 crc kubenswrapper[4853]: I1122 07:11:21.507078 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:21Z","lastTransitionTime":"2025-11-22T07:11:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:21 crc kubenswrapper[4853]: I1122 07:11:21.520638 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:21Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:21 crc kubenswrapper[4853]: I1122 07:11:21.540688 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rvgxj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbbe3472-17cc-48dd-8e46-393b00149429\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5acc4b664aaf1d5b0471beb25334e65152bc3b9907f911f90e4e7899aaa1ce8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ct6dm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rvgxj\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:21Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:21 crc kubenswrapper[4853]: I1122 07:11:21.560791 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ckn94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f60b37f-d6f5-4145-a3e7-cfe92fca6d77\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3adc38c2534c8026e260a63060426102130ca1bcbcb4e3741aaf4e87170a4c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3adc38c2534c8026e260a63060426102130ca1bcbcb4e3741aaf4e87170a4c7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\
"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db0b80c2471d5179ede5c4055fcebaabe08892f1534218952c2a19ab599be022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db0b80c2471d5179ede5c4055fcebaabe08892f1534218952c2a19ab599be022\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://157ea229b1effe5885acafb7b48db5695864ff001b1293c053aefb494f544f7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://157ea229b1effe5885acafb7b48db5695864ff001b1293c053aefb494f544f7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://203cba48e280628c33b464c336957085420ca2d345b1acfb5c8b8968ed2317e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://203cba48e280628c33b464c336957085420ca2d345b1
acfb5c8b8968ed2317e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a10cb7826668319845471d465df1784a2cfba12ec3892e986bcf316c0c5e3c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a10cb7826668319845471d465df1784a2cfba12ec3892e986bcf316c0c5e3c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc93eb03f5f0cada6bf132a9de4446bc1b8ecafee3769bf18feeecea9d71478f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc93eb03f5f0cada6bf132a9de4446bc1b8ecafee3769bf18feeecea9d71478f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ckn94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:21Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:21 crc kubenswrapper[4853]: I1122 07:11:21.576881 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-pd6gs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9cc2bf97-eb39-4b0c-abda-99b49bb530fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bzn4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bzn4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:15Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-pd6gs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:21Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:21 crc kubenswrapper[4853]: I1122 07:11:21.600590 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8dcea4b-e887-437a-972d-e6e10d41f725\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://753693005f8490e9928780fd4f230a1c8aba453e1ccbca71e70299799bfa46ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64d65e4d54cc8971b6f6349f7d32243b7faa5bb5974b22ed904a36fa34bc69fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0ba60480f038947ca4a2b9f963d38cb004714735d411218eaf6231614c4e21f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c83df4b798499177cfbd1372a
939649b6cebb71b50b695af5ba7815cf450669\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed9e89cbcdfc1ac8430f617ee31a367c03be0a407da8af967930fa821e6a12a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://742d02f24a7096b76f82453ab15e531d4a378b4bbf0bb36eb1a1cb4ae1873165\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://742d02f24a7096b76f82453ab15e531d4a378b4bbf0bb36eb1a1cb4ae1873165\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2253bab71d535b61f7489679566abc1acf210e0d5f69773fedbb02befab17c2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2253bab71d535b61f7489679566abc1acf210e0d5f69773fedbb02befab17c2d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:21Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8fa71a988b649937f137ffb7533456570bf4fcd199134424fc7bff2
b62335f1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fa71a988b649937f137ffb7533456570bf4fcd199134424fc7bff2b62335f1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:16Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:21Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:21 crc kubenswrapper[4853]: I1122 07:11:21.609697 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:21 crc kubenswrapper[4853]: I1122 07:11:21.609780 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:21 crc kubenswrapper[4853]: I1122 07:11:21.609796 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:21 crc kubenswrapper[4853]: I1122 07:11:21.609820 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:21 crc kubenswrapper[4853]: I1122 07:11:21.609835 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:21Z","lastTransitionTime":"2025-11-22T07:11:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:21 crc kubenswrapper[4853]: I1122 07:11:21.615557 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9c07602dc6c3b45809dd76a7079002e54e0bae24f7e3bc3470b463389303e74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:21Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:21 crc kubenswrapper[4853]: I1122 07:11:21.628274 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:21Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:21 crc kubenswrapper[4853]: I1122 07:11:21.642239 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:21Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:21 crc kubenswrapper[4853]: I1122 07:11:21.656209 4853 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-pqtsz" podUID="893f7e02-580a-4093-ab42-ea73ffffcfe6" containerName="ovnkube-controller" probeResult="failure" output="" Nov 22 07:11:21 crc kubenswrapper[4853]: I1122 07:11:21.666441 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqtsz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"893f7e02-580a-4093-ab42-ea73ffffcfe6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://959a487fe36b2bf7615c5ed65bc8255b52332f6e2d973912a5e028ff35324f2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://979f6b48ff6c91deca2bb0784d1ec745c4c0d70abdfd6f519eb82058fec05c5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34d7101a46b49cedde91b3502f1bc962179253a1bfbe5f21599ce89a91ab905d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28ab876013e6c4eb3a4806b0184f1d6615a478a34b6d8ee790d29bccd77f33df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccf377912acc39f2d30013ef3bd58a8268b927fd89da2306fcf00cb89c30893e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a08eb68f9d8b071226ac64744097fe0097ab78f86242872c4e0aacdd213adbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f92a80784d0788a478ccde73bcf34dd9fc3e42dd
5005138293506125bd3924f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://902e41148618e1099d651f2a4cbfa00ca1dd11533b086564f090dd498ecc1b95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7b9396399dd9787c1addcb4e2e7697d78fee817d56cafd408b08b2b7d0cc5f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7b9396399dd9787c1addcb4e2e7697d78fee817d56cafd408b08b2b7d0cc5f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pqtsz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:21Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:21 crc kubenswrapper[4853]: I1122 07:11:21.679904 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9nx9m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"120cba0a-6e0b-40b3-8c15-46e7ff7c8641\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7b7b92862fc6cca812cb5c0aa9b320fbbd11f8b458d591b56cacab985a79edd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4p7q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9nx9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:21Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:21 crc kubenswrapper[4853]: I1122 07:11:21.695819 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"19e6b5d3-a090-47b9-bddc-dd2aaccd4ff4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"message\\\":\\\"containers with 
unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96b61660fcd78f2f81eab2c1a0cb32018a5b32d49581167e33e7dca8ba5ddf2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a0d61ce2e58ce8d9abdd1fb9688722d8e25f8354553aef76b8cd4b3897135a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21be0b3346d0dd20c21a867b8fbd30e6291d09bd1745febbc2face7610f6d831\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e4d5ab3d780b4ec1360dd8a07fc7de5d4daec3a5f8334010949d4fd60b5516b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6dbbb1c22bd116e5913c6bc774b4705f0e4342861ceae30dc0f53c30d6fa39f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T07:10:53Z\\\",\\\"message\\\":\\\" envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" 
feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1122 07:10:53.494379 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1122 07:10:53.495005 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1122 07:10:53.495036 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1122 07:10:53.495110 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1122 07:10:53.495131 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1122 07:10:53.496216 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1763795444\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1763795444\\\\\\\\\\\\\\\" (2025-11-22 06:10:44 +0000 UTC to 2026-11-22 06:10:44 +0000 UTC (now=2025-11-22 07:10:53.49617265 +0000 UTC))\\\\\\\"\\\\nI1122 07:10:53.496261 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1122 07:10:53.496336 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1122 07:10:53.497036 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1122 07:10:53.497137 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1122 07:10:53.498721 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI1122 07:10:53.500291 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF1122 07:10:53.501305 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:32Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c1ee716027cb8b663ee9faec3faed873df5079418b9b46574cf3b3a74048eef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13ca1fba1325e0ae8556def3ba0c98aa0aff6e3b9ed2aab66a5c950d2bace44e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13ca1fba1325e0ae8556def3ba0c98aa0aff6e3b9ed2aab66a5c950d2bace44e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:21Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:21 crc kubenswrapper[4853]: I1122 07:11:21.709174 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mlpz8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c09200ba-013f-45e3-b581-8523557344b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3feee491b899c2b2f58330fe2e87ba5404deca4249c5d39a6d3aa08acd10ef29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvk87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mlpz8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:21Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:21 crc kubenswrapper[4853]: I1122 07:11:21.712020 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:21 crc kubenswrapper[4853]: I1122 07:11:21.712056 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:21 crc kubenswrapper[4853]: I1122 07:11:21.712068 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:21 crc kubenswrapper[4853]: I1122 07:11:21.712091 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:21 crc kubenswrapper[4853]: I1122 07:11:21.712105 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:21Z","lastTransitionTime":"2025-11-22T07:11:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:21 crc kubenswrapper[4853]: I1122 07:11:21.728498 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://347fc1ee4909acd862f23a9f25b4ac7ab7271ff03f0f726479d9b711820970c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:21Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:21 crc kubenswrapper[4853]: I1122 07:11:21.741534 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"476c875a-2b87-419a-8042-0ba059620fd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://babd614ecd140db7467335933695fcc122380ef2c6a3d4ae6e89b0ed29e87d69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zkhrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28cf28a4f0e05df5ad55eff2ab13e375fb1d5725f1d0e3b85c7c9cd785cf4453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zkhrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fflvd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:21Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:21 crc kubenswrapper[4853]: I1122 07:11:21.758190 4853 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nhlw4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81cf3334-f910-4d46-be00-b3cd66ba8ed4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbfa536576b192a75c2a3eef9dfde8c7f7de0a8e6edae3870f45f1c44cc48163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ckff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86bcf649be20aa1cc1bce89663ea6b7b920ac2548fbf8a22c87636bdafc4b329\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ckff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nhlw4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:21Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:21 crc kubenswrapper[4853]: I1122 07:11:21.771517 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63551235-20f7-4ccc-a132-9f5302e01da5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0368eb3431a7b8e0225b3728f72d601ce3f9639f06175b96818fd798b33e4143\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fc661089ab1f7286ead68fd510a0848c9df8dd41a8ed349d03b43b72e1a369a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdc0676190f39e3a925b593e663ec1af1fa253b8177b7cb81fd49515e0c55c3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"
},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a97a0d92e5fdd8906fa4f150c4b7d95fdd1f3b92ac45a46e0a6846fd4a15bb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a97a0d92e5fdd8906fa4f150c4b7d95fdd1f3b92ac45a46e0a6846fd4a15bb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:19Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:16Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:21Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:21 crc kubenswrapper[4853]: I1122 07:11:21.784923 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b27716f9e6c912f968c4a340199cdc2dde0262d86b4832f84cae8daefa831909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a2fb699e2e8a016fa383c1d4ed2d4f0d69c26b79fb2f0dd735a84de2fd5a23d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:21Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:21 crc kubenswrapper[4853]: I1122 07:11:21.799082 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:21Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:21 crc kubenswrapper[4853]: I1122 07:11:21.814965 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9003355-0dc6-42ad-950d-a7884e333f4c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f0019c0473b7ccc974dc1659a7ae0565b39d69af17094c420625387ecc9d7f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94704fd95e75549ea29613f00da78d4517cc64896cf1274c9c6f784a61f4ad4a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T07:10:53Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI1122 07:10:26.514729 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI1122 07:10:26.525241 1 observer_polling.go:159] Starting file observer\\\\nI1122 07:10:26.883412 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI1122 07:10:26.897202 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI1122 07:10:50.420853 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nI1122 07:10:51.525898 1 observer_polling.go:162] Shutting down file observer\\\\nF1122 07:10:53.463148 1 cmd.go:179] failed checking apiserver connectivity: client rate limiter Wait returned an error: context canceled\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:21Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9736c38776f22909b8e8a832683d1b881f020257f7183397286e0dd23c973a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30f542bc05dc23bc8dff5029894d08a6fea8333de31a66432814b7bbc412c4fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5d8d77818159ca98019aac809db4b1b2460723bc56b9b6728eec11faeefd026\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager
-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:21Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:21 crc kubenswrapper[4853]: I1122 07:11:21.815150 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:21 crc kubenswrapper[4853]: I1122 07:11:21.815192 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:21 crc kubenswrapper[4853]: I1122 07:11:21.815203 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:21 crc kubenswrapper[4853]: I1122 07:11:21.815233 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:21 crc kubenswrapper[4853]: I1122 07:11:21.815249 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:21Z","lastTransitionTime":"2025-11-22T07:11:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:21 crc kubenswrapper[4853]: I1122 07:11:21.829510 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:21Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:21 crc kubenswrapper[4853]: I1122 07:11:21.843785 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:21Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:21 crc kubenswrapper[4853]: I1122 07:11:21.856846 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rvgxj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbbe3472-17cc-48dd-8e46-393b00149429\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5acc4b664aaf1d5b0471beb25334e65152bc3b9907f911f90e4e7899aaa1ce8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"syste
m-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ct6dm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rvgxj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:21Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:21 crc kubenswrapper[4853]: I1122 07:11:21.876904 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ckn94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f60b37f-d6f5-4145-a3e7-cfe92fca6d77\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3adc38c2534c8026e260a63060426102130ca1bcbcb4e3741aaf4e87170a4c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3adc38c2534c8026e260a63060426102130ca1bcbcb4e3741aaf4e87170a4c7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db0b80c2471d5179ede5c4055fcebaabe08892f1534218952c2a19ab599be022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db0b80c2471d5179ede5c4055fcebaabe08892f1534218952c2a19ab599be022\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://157ea229b1effe5885acafb7b48db5695864ff001b1293c053aefb494f544f7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://157ea229b1effe5885acafb7b48db5695864ff001b1293c053aefb494f544f7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://203cba48e280628c33b464c336957085420ca2d345b1acfb5c8b8968ed2317e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://203cba48e280628c33b464c336957085420ca2d345b1acfb5c8b8968ed2317e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a10cb7826668319845471d465df1784a2cfba12ec3892e986bcf316c0c5e3c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a10cb7826668319845471d465df1784a2cfba12ec3892e986bcf316c0c5e3c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc93eb03f5f0cada6bf132a9de4446bc1b8ecafee3769bf18feeecea9d71478f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc93eb03f5f0cada6bf132a9de4446bc1b8ecafee3769bf18feeecea9d71478f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ckn94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:21Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:21 crc kubenswrapper[4853]: I1122 07:11:21.887611 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-pd6gs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9cc2bf97-eb39-4b0c-abda-99b49bb530fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bzn4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bzn4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:15Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-pd6gs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:21Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:21 crc kubenswrapper[4853]: I1122 07:11:21.918116 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:21 crc kubenswrapper[4853]: I1122 07:11:21.918107 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8dcea4b-e887-437a-972d-e6e10d41f725\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://753693005f8490e9928780fd4f230a1c8aba453e1ccbca71e70299799bfa46ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64d65e4d54cc8971b6f6349f7d32243b7faa5bb5974b22ed904a36fa34bc69fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0ba60480f038947ca4a2b9f963d38cb004714735d411218eaf6231614c4e21f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c83df4b798499177cfbd1372a939649b6cebb7
1b50b695af5ba7815cf450669\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed9e89cbcdfc1ac8430f617ee31a367c03be0a407da8af967930fa821e6a12a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://742d02f24a7096b76f82453ab15e531d4a378b4bbf0bb36eb1a1cb4ae1873165\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://742d02f24a7096b76f82453ab15e531d4a378b4bbf0bb36eb1a1cb4ae1873165\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2253bab71d535b61f7489679566abc1acf210e0d5f69773fedbb02befab17c2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2253bab71d535b61f7489679566abc1acf210e0d5f69773fedbb02befab17c2d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:21Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8fa71a988b649937f137ffb7533456570bf4fcd199134424fc7bff2b62335f1e\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fa71a988b649937f137ffb7533456570bf4fcd199134424fc7bff2b62335f1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:16Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:21Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:21 crc kubenswrapper[4853]: I1122 07:11:21.918186 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:21 crc kubenswrapper[4853]: I1122 07:11:21.918368 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:21 crc kubenswrapper[4853]: I1122 07:11:21.918411 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:21 crc kubenswrapper[4853]: I1122 07:11:21.918426 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:21Z","lastTransitionTime":"2025-11-22T07:11:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:21 crc kubenswrapper[4853]: I1122 07:11:21.938656 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9c07602dc6c3b45809dd76a7079002e54e0bae24f7e3bc3470b463389303e74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:21Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:22 crc kubenswrapper[4853]: I1122 07:11:22.022033 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:22 crc kubenswrapper[4853]: I1122 07:11:22.022520 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:22 crc kubenswrapper[4853]: I1122 07:11:22.022538 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:22 crc kubenswrapper[4853]: I1122 07:11:22.022556 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:22 crc kubenswrapper[4853]: I1122 07:11:22.022568 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:22Z","lastTransitionTime":"2025-11-22T07:11:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:22 crc kubenswrapper[4853]: I1122 07:11:22.125486 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:22 crc kubenswrapper[4853]: I1122 07:11:22.125526 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:22 crc kubenswrapper[4853]: I1122 07:11:22.125540 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:22 crc kubenswrapper[4853]: I1122 07:11:22.125560 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:22 crc kubenswrapper[4853]: I1122 07:11:22.125573 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:22Z","lastTransitionTime":"2025-11-22T07:11:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:22 crc kubenswrapper[4853]: I1122 07:11:22.228337 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:22 crc kubenswrapper[4853]: I1122 07:11:22.228416 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:22 crc kubenswrapper[4853]: I1122 07:11:22.228428 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:22 crc kubenswrapper[4853]: I1122 07:11:22.228447 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:22 crc kubenswrapper[4853]: I1122 07:11:22.228458 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:22Z","lastTransitionTime":"2025-11-22T07:11:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:22 crc kubenswrapper[4853]: I1122 07:11:22.331712 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:22 crc kubenswrapper[4853]: I1122 07:11:22.331816 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:22 crc kubenswrapper[4853]: I1122 07:11:22.331835 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:22 crc kubenswrapper[4853]: I1122 07:11:22.331862 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:22 crc kubenswrapper[4853]: I1122 07:11:22.331878 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:22Z","lastTransitionTime":"2025-11-22T07:11:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:22 crc kubenswrapper[4853]: I1122 07:11:22.435015 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:22 crc kubenswrapper[4853]: I1122 07:11:22.435064 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:22 crc kubenswrapper[4853]: I1122 07:11:22.435073 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:22 crc kubenswrapper[4853]: I1122 07:11:22.435092 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:22 crc kubenswrapper[4853]: I1122 07:11:22.435106 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:22Z","lastTransitionTime":"2025-11-22T07:11:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:22 crc kubenswrapper[4853]: I1122 07:11:22.538225 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:22 crc kubenswrapper[4853]: I1122 07:11:22.538282 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:22 crc kubenswrapper[4853]: I1122 07:11:22.538293 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:22 crc kubenswrapper[4853]: I1122 07:11:22.538315 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:22 crc kubenswrapper[4853]: I1122 07:11:22.538331 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:22Z","lastTransitionTime":"2025-11-22T07:11:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:22 crc kubenswrapper[4853]: I1122 07:11:22.641892 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:22 crc kubenswrapper[4853]: I1122 07:11:22.641939 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:22 crc kubenswrapper[4853]: I1122 07:11:22.641949 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:22 crc kubenswrapper[4853]: I1122 07:11:22.641969 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:22 crc kubenswrapper[4853]: I1122 07:11:22.641983 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:22Z","lastTransitionTime":"2025-11-22T07:11:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:22 crc kubenswrapper[4853]: I1122 07:11:22.649285 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ckn94" event={"ID":"6f60b37f-d6f5-4145-a3e7-cfe92fca6d77","Type":"ContainerStarted","Data":"bbf630e17f662af8880d1f34d5073a4f64e723987205f7bbb473a73808c7e935"} Nov 22 07:11:22 crc kubenswrapper[4853]: I1122 07:11:22.652333 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pqtsz_893f7e02-580a-4093-ab42-ea73ffffcfe6/ovnkube-controller/0.log" Nov 22 07:11:22 crc kubenswrapper[4853]: I1122 07:11:22.655568 4853 generic.go:334] "Generic (PLEG): container finished" podID="893f7e02-580a-4093-ab42-ea73ffffcfe6" containerID="f92a80784d0788a478ccde73bcf34dd9fc3e42dd5005138293506125bd3924f1" exitCode=1 Nov 22 07:11:22 crc kubenswrapper[4853]: I1122 07:11:22.655635 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqtsz" event={"ID":"893f7e02-580a-4093-ab42-ea73ffffcfe6","Type":"ContainerDied","Data":"f92a80784d0788a478ccde73bcf34dd9fc3e42dd5005138293506125bd3924f1"} Nov 22 07:11:22 crc kubenswrapper[4853]: I1122 07:11:22.656571 4853 scope.go:117] "RemoveContainer" containerID="f92a80784d0788a478ccde73bcf34dd9fc3e42dd5005138293506125bd3924f1" Nov 22 07:11:22 crc kubenswrapper[4853]: I1122 07:11:22.672393 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rvgxj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbbe3472-17cc-48dd-8e46-393b00149429\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5acc4b664aaf1d5b0471beb25334e65152bc3b9907f911f90e4e7899aaa1ce8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/mul
tus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ct6dm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rvgxj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:22Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:22 crc kubenswrapper[4853]: I1122 07:11:22.689770 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ckn94" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f60b37f-d6f5-4145-a3e7-cfe92fca6d77\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbf630e17f662af8880d1f34d5073a4f64e723987205f7bbb473a73808c7e935\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3adc38c2534c8026e260a63060426102130ca1bcbcb4e3741aaf4e87170a4c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3adc38c2534c8026e260a63060426102130ca1bcbcb4e3741aaf4e87170a4c7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db0b80c2471d5179ede5c4055fcebaabe08892f1534218952c2a19ab599be022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db0b80c2471d5179ede5c4055fcebaabe08892f1534218952c2a19ab599be022\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://157ea229b1effe5885acafb7b48db5695864ff001b1293c053aefb494f544f7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://157ea229b1effe5885acafb7b48db5695864ff001b1293c053aefb494f544f7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://203cba48e280628c33b464c336957085420ca2d345b1acfb5c8b8968ed2317e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://203cba48e280628c33b464c336957085420ca2d345b1acfb5c8b8968ed2317e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a10cb7826668319845471d465df1784a2cfba12ec3892e986bcf316c0c5e3c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a10cb7826668319845471d465df1784a2cfba12ec3892e986bcf316c0c5e3c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc93eb03f5f0cada6bf132a9de4446bc1b8ecafee3769bf18feeecea9d71478f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc93eb03f5f0cada6bf132a9de4446bc1b8ecafee3769bf18feeecea9d71478f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ckn94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:22Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:22 crc kubenswrapper[4853]: I1122 07:11:22.703768 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-pd6gs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9cc2bf97-eb39-4b0c-abda-99b49bb530fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bzn4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bzn4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:15Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-pd6gs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:22Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:22 crc kubenswrapper[4853]: I1122 07:11:22.727481 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8dcea4b-e887-437a-972d-e6e10d41f725\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://753693005f8490e9928780fd4f230a1c8aba453e1ccbca71e70299799bfa46ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64d65e4d54cc8971b6f6349f7d32243b7faa5bb5974b22ed904a36fa34bc69fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0ba60480f038947ca4a2b9f963d38cb004714735d411218eaf6231614c4e21f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c83df4b798499177cfbd1372a939649b6cebb7
1b50b695af5ba7815cf450669\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed9e89cbcdfc1ac8430f617ee31a367c03be0a407da8af967930fa821e6a12a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://742d02f24a7096b76f82453ab15e531d4a378b4bbf0bb36eb1a1cb4ae1873165\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://742d02f24a7096b76f82453ab15e531d4a378b4bbf0bb36eb1a1cb4ae1873165\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2253bab71d535b61f7489679566abc1acf210e0d5f69773fedbb02befab17c2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2253bab71d535b61f7489679566abc1acf210e0d5f69773fedbb02befab17c2d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:21Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8fa71a988b649937f137ffb7533456570bf4fcd199134424fc7bff2b62335f1e\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fa71a988b649937f137ffb7533456570bf4fcd199134424fc7bff2b62335f1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:16Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:22Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:22 crc kubenswrapper[4853]: I1122 07:11:22.743990 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9c07602dc6c3b45809dd76a7079002e54e0bae24f7e3bc3470b463389303e74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:22Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:22 crc kubenswrapper[4853]: I1122 07:11:22.744735 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:22 crc kubenswrapper[4853]: I1122 07:11:22.744807 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:22 crc kubenswrapper[4853]: I1122 07:11:22.744825 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:22 crc kubenswrapper[4853]: I1122 07:11:22.744849 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:22 crc kubenswrapper[4853]: I1122 07:11:22.744864 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:22Z","lastTransitionTime":"2025-11-22T07:11:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:22 crc kubenswrapper[4853]: I1122 07:11:22.746976 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:11:22 crc kubenswrapper[4853]: I1122 07:11:22.747003 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 07:11:22 crc kubenswrapper[4853]: I1122 07:11:22.747040 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pd6gs" Nov 22 07:11:22 crc kubenswrapper[4853]: E1122 07:11:22.747118 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 07:11:22 crc kubenswrapper[4853]: E1122 07:11:22.747218 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 07:11:22 crc kubenswrapper[4853]: E1122 07:11:22.747290 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-pd6gs" podUID="9cc2bf97-eb39-4b0c-abda-99b49bb530fd" Nov 22 07:11:22 crc kubenswrapper[4853]: I1122 07:11:22.747467 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 07:11:22 crc kubenswrapper[4853]: E1122 07:11:22.747677 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 07:11:22 crc kubenswrapper[4853]: I1122 07:11:22.759109 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:22Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:22 crc kubenswrapper[4853]: I1122 07:11:22.778228 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:22Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:22 crc kubenswrapper[4853]: I1122 07:11:22.794504 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"19e6b5d3-a090-47b9-bddc-dd2aaccd4ff4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96b61660fcd78f2f81eab2c1a0cb32018a5b32d49581167e33e7dca8ba5ddf2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a0d61ce2e58ce8d9abdd1fb9688722d8e25f8354553aef76b8cd4b3897135a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21be0b3346d0dd20c21a867b8fbd30e6291d09bd1745febbc2face7610f6d831\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e4d5ab3d780b4ec1360dd8a07fc7de5d4daec3a5f8334010949d4fd60b5516b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6dbbb1c22bd116e5913c6bc774b4705f0e4342861ceae30dc0f53c30d6fa39f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T07:10:53Z\\\",\\\"message\\\":\\\" envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" 
enabled=false\\\\nI1122 07:10:53.494379 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1122 07:10:53.495005 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1122 07:10:53.495036 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1122 07:10:53.495110 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1122 07:10:53.495131 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1122 07:10:53.496216 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1763795444\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1763795444\\\\\\\\\\\\\\\" (2025-11-22 06:10:44 +0000 UTC to 2026-11-22 06:10:44 +0000 UTC (now=2025-11-22 07:10:53.49617265 +0000 UTC))\\\\\\\"\\\\nI1122 07:10:53.496261 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1122 07:10:53.496336 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1122 07:10:53.497036 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1122 07:10:53.497137 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1122 07:10:53.498721 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI1122 07:10:53.500291 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF1122 07:10:53.501305 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:32Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c1ee716027cb8b663ee9faec3faed873df5079418b9b46574cf3b3a74048eef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13ca1fba1325e0ae8556def3ba0c98aa0aff6e3b9ed2aab66a5c950d2bace44e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13ca1fba1325e0ae8556def3ba0c98aa0aff6e3b9ed2aab66a5c950d2bace44e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:22Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:22 crc kubenswrapper[4853]: I1122 07:11:22.809313 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mlpz8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c09200ba-013f-45e3-b581-8523557344b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3feee491b899c2b2f58330fe2e87ba5404deca4249c5d39a6d3aa08acd10ef29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvk87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mlpz8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:22Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:22 crc kubenswrapper[4853]: I1122 07:11:22.831625 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqtsz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"893f7e02-580a-4093-ab42-ea73ffffcfe6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://959a487fe36b2bf7615c5ed65bc8255b52332f6e2d973912a5e028ff35324f2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://979f6b48ff6c91deca2bb0784d1ec745c4c0d70abdfd6f519eb82058fec05c5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34d7101a46b49cedde91b3502f1bc962179253a1bfbe5f21599ce89a91ab905d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28ab876013e6c4eb3a4806b0184f1d6615a478a34b6d8ee790d29bccd77f33df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccf377912acc39f2d30013ef3bd58a8268b927fd89da2306fcf00cb89c30893e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a08eb68f9d8b071226ac64744097fe0097ab78f86242872c4e0aacdd213adbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f92a80784d0788a478ccde73bcf34dd9fc3e42dd5005138293506125bd3924f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://902e41148618e1099d651f2a4cbfa00ca1dd11533b086564f090dd498ecc1b95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPat
h\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7b9396399dd9787c1addcb4e2e7697d78fee817d56cafd408b08b2b7d0cc5f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7b9396399dd9787c1addcb4e2e7697d78fee817d56cafd408b08b2b7d0cc5f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pqtsz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:22Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:22 crc kubenswrapper[4853]: I1122 07:11:22.845344 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9nx9m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"120cba0a-6e0b-40b3-8c15-46e7ff7c8641\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7b7b92862fc6cca812cb5c0aa9b320fbbd11f8b458d591b56cacab985a79edd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4p7q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9nx9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:22Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:22 crc kubenswrapper[4853]: I1122 07:11:22.846719 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:22 crc kubenswrapper[4853]: I1122 07:11:22.846788 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:22 crc kubenswrapper[4853]: I1122 07:11:22.846803 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:22 crc kubenswrapper[4853]: I1122 07:11:22.846823 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:22 crc kubenswrapper[4853]: I1122 07:11:22.846834 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:22Z","lastTransitionTime":"2025-11-22T07:11:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:22 crc kubenswrapper[4853]: I1122 07:11:22.859623 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nhlw4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81cf3334-f910-4d46-be00-b3cd66ba8ed4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbfa536576b192a75c2a3eef9dfde8c7f7de0a8e6edae3870f45f1c44cc48163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ckff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86bcf649be20aa1cc1bce89663ea6b7b920ac2548fbf8a22c87636bdafc4b329\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ckff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:13Z\\\"}}\" 
for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nhlw4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:22Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:22 crc kubenswrapper[4853]: I1122 07:11:22.874290 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63551235-20f7-4ccc-a132-9f5302e01da5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0368eb3431a7b8e0225b3728f72d601ce3f9639f06175b96818fd798b33e4143\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fc661089ab1f7286ead68fd510a0848c9df8dd41a8ed349d03b43b72e1a369a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdc0676190f39e3a925b593e663ec1af1fa253b8177b7cb81fd49515e0c55c3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recove
ry-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a97a0d92e5fdd8906fa4f150c4b7d95fdd1f3b92ac45a46e0a6846fd4a15bb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a97a0d92e5fdd8906fa4f150c4b7d95fdd1f3b92ac45a46e0a6846fd4a15bb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:19Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:16Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:22Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:22 crc kubenswrapper[4853]: I1122 07:11:22.889773 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b27716f9e6c912f968c4a340199cdc2dde0262d86b4832f84cae8daefa831909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a2fb699e2e8a016fa383c1d4ed2d4f0d69c26b79fb2f0dd735a84de2fd5a23d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:22Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:22 crc kubenswrapper[4853]: I1122 07:11:22.905015 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://347fc1ee4909acd862f23a9f25b4ac7ab7271ff03f0f726479d9b711820970c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:22Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:22 crc kubenswrapper[4853]: I1122 07:11:22.920574 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"476c875a-2b87-419a-8042-0ba059620fd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://babd614ecd140db7467335933695fcc122380ef2c6a3d4ae6e89b0ed29e87d69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zkhrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28cf28a4f0e05df5ad55eff2ab13e375fb1d5725f1d0e3b85c7c9cd785cf4453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zkhrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fflvd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:22Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:22 crc kubenswrapper[4853]: I1122 07:11:22.934810 4853 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9003355-0dc6-42ad-950d-a7884e333f4c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f0019c0473b7ccc974dc1659a7ae0565b39d69af17094c420625387ecc9d7f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94704fd95e75549ea29613f00da78d4517cc64896cf1274c9c6f784a61f4ad4a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T07:10:53Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI1122 07:10:26.514729 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI1122 07:10:26.525241 1 observer_polling.go:159] Starting file observer\\\\nI1122 07:10:26.883412 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI1122 07:10:26.897202 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI1122 07:10:50.420853 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nI1122 07:10:51.525898 1 observer_polling.go:162] Shutting down file observer\\\\nF1122 07:10:53.463148 1 cmd.go:179] failed checking apiserver connectivity: client rate limiter Wait returned an error: context canceled\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:21Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9736c38776f22909b8e8a832683d1b881f020257f7183397286e0dd23c973a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30f542bc05dc23bc8dff5029894d08a6fea8333de31a66432814b7bbc412c4fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5d8d77818159ca98019aac809db4b1b2460723bc56b9b6728eec11faeefd026\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager
-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:22Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:22 crc kubenswrapper[4853]: I1122 07:11:22.949402 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:22 crc kubenswrapper[4853]: I1122 07:11:22.949458 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:22 crc kubenswrapper[4853]: I1122 07:11:22.949469 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:22 crc kubenswrapper[4853]: I1122 07:11:22.949489 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:22 crc kubenswrapper[4853]: I1122 07:11:22.949508 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:22Z","lastTransitionTime":"2025-11-22T07:11:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:22 crc kubenswrapper[4853]: I1122 07:11:22.950199 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:22Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:22 crc kubenswrapper[4853]: I1122 07:11:22.968767 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"19e6b5d3-a090-47b9-bddc-dd2aaccd4ff4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96b61660fcd78f2f81eab2c1a0cb32018a5b32d49581167e33e7dca8ba5ddf2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a0d61ce2e58ce8d9abdd1fb9688722d8e25f8354553aef76b8cd4b3897135a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21be0b3346d0dd20c21a867b8fbd30e6291d09bd1745febbc2face7610f6d831\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e4d5ab3d780b4ec1360dd8a07fc7de5d4daec3a5f8334010949d4fd60b5516b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6dbbb1c22bd116e5913c6bc774b4705f0e4342861ceae30dc0f53c30d6fa39f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T07:10:53Z\\\",\\\"message\\\":\\\" envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" 
enabled=false\\\\nI1122 07:10:53.494379 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1122 07:10:53.495005 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1122 07:10:53.495036 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1122 07:10:53.495110 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1122 07:10:53.495131 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1122 07:10:53.496216 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1763795444\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1763795444\\\\\\\\\\\\\\\" (2025-11-22 06:10:44 +0000 UTC to 2026-11-22 06:10:44 +0000 UTC (now=2025-11-22 07:10:53.49617265 +0000 UTC))\\\\\\\"\\\\nI1122 07:10:53.496261 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1122 07:10:53.496336 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1122 07:10:53.497036 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1122 07:10:53.497137 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1122 07:10:53.498721 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI1122 07:10:53.500291 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF1122 07:10:53.501305 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:32Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c1ee716027cb8b663ee9faec3faed873df5079418b9b46574cf3b3a74048eef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13ca1fba1325e0ae8556def3ba0c98aa0aff6e3b9ed2aab66a5c950d2bace44e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13ca1fba1325e0ae8556def3ba0c98aa0aff6e3b9ed2aab66a5c950d2bace44e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:22Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:22 crc kubenswrapper[4853]: I1122 07:11:22.980718 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mlpz8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c09200ba-013f-45e3-b581-8523557344b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3feee491b899c2b2f58330fe2e87ba5404deca4249c5d39a6d3aa08acd10ef29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvk87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mlpz8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:22Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:23 crc kubenswrapper[4853]: I1122 07:11:23.002294 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqtsz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"893f7e02-580a-4093-ab42-ea73ffffcfe6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://959a487fe36b2bf7615c5ed65bc8255b52332f6e2d973912a5e028ff35324f2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://979f6b48ff6c91deca2bb0784d1ec745c4c0d70abdfd6f519eb82058fec05c5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34d7101a46b49cedde91b3502f1bc962179253a1bfbe5f21599ce89a91ab905d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28ab876013e6c4eb3a4806b0184f1d6615a478a34b6d8ee790d29bccd77f33df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccf377912acc39f2d30013ef3bd58a8268b927fd89da2306fcf00cb89c30893e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a08eb68f9d8b071226ac64744097fe0097ab78f86242872c4e0aacdd213adbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f92a80784d0788a478ccde73bcf34dd9fc3e42dd5005138293506125bd3924f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f92a80784d0788a478ccde73bcf34dd9fc3e42dd5005138293506125bd3924f1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-22T07:11:22Z\\\",\\\"message\\\":\\\"ndler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1122 07:11:22.106463 6240 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1122 07:11:22.106491 6240 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1122 07:11:22.106501 6240 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1122 07:11:22.106526 6240 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1122 07:11:22.106583 6240 factory.go:656] Stopping watch factory\\\\nI1122 07:11:22.106603 6240 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1122 07:11:22.106613 6240 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1122 07:11:22.106622 6240 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1122 07:11:22.106630 6240 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1122 07:11:22.106639 6240 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1122 07:11:22.106737 6240 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI1122 07:11:22.106800 6240 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://902e41148618e1099d651f2a4cbfa00ca1dd11533b086564f090dd498ecc1b95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7b9396399dd9787c1addcb4e2e7697d78fee817d56caf
d408b08b2b7d0cc5f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7b9396399dd9787c1addcb4e2e7697d78fee817d56cafd408b08b2b7d0cc5f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pqtsz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:22Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:23 crc kubenswrapper[4853]: I1122 07:11:23.002573 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9cc2bf97-eb39-4b0c-abda-99b49bb530fd-metrics-certs\") pod \"network-metrics-daemon-pd6gs\" (UID: \"9cc2bf97-eb39-4b0c-abda-99b49bb530fd\") " pod="openshift-multus/network-metrics-daemon-pd6gs" Nov 22 07:11:23 crc kubenswrapper[4853]: E1122 07:11:23.002810 4853 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 22 07:11:23 crc kubenswrapper[4853]: E1122 07:11:23.002931 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9cc2bf97-eb39-4b0c-abda-99b49bb530fd-metrics-certs podName:9cc2bf97-eb39-4b0c-abda-99b49bb530fd nodeName:}" failed. No retries permitted until 2025-11-22 07:11:31.002899062 +0000 UTC m=+89.843521838 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9cc2bf97-eb39-4b0c-abda-99b49bb530fd-metrics-certs") pod "network-metrics-daemon-pd6gs" (UID: "9cc2bf97-eb39-4b0c-abda-99b49bb530fd") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 22 07:11:23 crc kubenswrapper[4853]: I1122 07:11:23.014703 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9nx9m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120cba0a-6e0b-40b3-8c15-46e7ff7c8641\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7b7b92862fc6cca812cb5c0aa9b320fbbd11f8b458d591b56cacab985a79edd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4p7q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9nx9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:23Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:23 crc kubenswrapper[4853]: I1122 07:11:23.027949 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63551235-20f7-4ccc-a132-9f5302e01da5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0368eb3431a7b8e0225b3728f72d601ce3f9639f06175b96818fd798b33e4143\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fc661089ab1f7286ead68fd510a0848c9df8dd41a8ed349d03b43b72e1a369a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdc0676190f39e3a925b593e663ec1af1fa253b8177b7cb81fd49515e0c55c3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a97a0d92e5fdd8906fa4f150c4b7d95fdd1f3b92ac45a46e0a6846fd4a15bb0\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a97a0d92e5fdd8906fa4f150c4b7d95fdd1f3b92ac45a46e0a6846fd4a15bb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:19Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:16Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:23Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:23 crc kubenswrapper[4853]: I1122 07:11:23.042418 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b27716f9e6c912f968c4a340199cdc2dde0262d86b4832f84cae8daefa831909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a2fb699e2e8a016fa383c1d4ed2d4f0d69c26b79fb2f0dd735a84de2fd5a23d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:23Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:23 crc kubenswrapper[4853]: I1122 07:11:23.052435 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:23 crc kubenswrapper[4853]: I1122 07:11:23.052498 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:23 crc kubenswrapper[4853]: I1122 07:11:23.052511 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:23 crc kubenswrapper[4853]: I1122 07:11:23.052531 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:23 crc kubenswrapper[4853]: I1122 07:11:23.052549 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:23Z","lastTransitionTime":"2025-11-22T07:11:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:23 crc kubenswrapper[4853]: I1122 07:11:23.058169 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://347fc1ee4909acd862f23a9f25b4ac7ab7271ff03f0f726479d9b711820970c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:23Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:23 crc kubenswrapper[4853]: I1122 07:11:23.071243 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"476c875a-2b87-419a-8042-0ba059620fd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://babd614ecd140db7467335933695fcc122380ef2c6a3d4ae6e89b0ed29e87d69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zkhrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28cf28a4f0e05df5ad55eff2ab13e375fb1d5725f1d0e3b85c7c9cd785cf4453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zkhrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fflvd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:23Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:23 crc kubenswrapper[4853]: I1122 07:11:23.084798 4853 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nhlw4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81cf3334-f910-4d46-be00-b3cd66ba8ed4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbfa536576b192a75c2a3eef9dfde8c7f7de0a8e6edae3870f45f1c44cc48163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ckff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86bcf649be20aa1cc1bce89663ea6b7b920ac2548fbf8a22c87636bdafc4b329\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ckff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nhlw4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:23Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:23 crc kubenswrapper[4853]: I1122 07:11:23.099979 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9003355-0dc6-42ad-950d-a7884e333f4c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f0019c0473b7ccc974dc1659a7ae0565b39d69af17094c420625387ecc9d7f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94704fd95e75549ea29613f00da78d4517cc64896cf1274c9c6f784a61f4ad4a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T07:10:53Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI1122 07:10:26.514729 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI1122 07:10:26.525241 1 observer_polling.go:159] Starting file observer\\\\nI1122 07:10:26.883412 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI1122 07:10:26.897202 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI1122 07:10:50.420853 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nI1122 07:10:51.525898 1 observer_polling.go:162] Shutting down file observer\\\\nF1122 07:10:53.463148 1 cmd.go:179] failed checking apiserver connectivity: client rate limiter Wait returned an error: context canceled\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:21Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9736c38776f22909b8e8a832683d1b881f020257f7183397286e0dd23c973a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30f542bc05dc23bc8dff5029894d08a6fea8333de31a66432814b7bbc412c4fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5d8d77818159ca98019aac809db4b1b2460723bc56b9b6728eec11faeefd026\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager
-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:23Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:23 crc kubenswrapper[4853]: I1122 07:11:23.112639 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:23Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:23 crc kubenswrapper[4853]: I1122 07:11:23.134397 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8dcea4b-e887-437a-972d-e6e10d41f725\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://753693005f8490e9928780fd4f230a1c8aba453e1ccbca71e70299799bfa46ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64d65e4d54cc8971b6f6349f7d32243b7faa5bb5974b22ed904a36fa34bc69fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:30Z\\\"}},\\\"volumeM
ounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0ba60480f038947ca4a2b9f963d38cb004714735d411218eaf6231614c4e21f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c83df4b798499177cfbd1372a939649b6cebb71b50b695af5ba7815cf450669\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed9e89cbcdfc1ac8430f617ee31a367c03be0a407da8af967930fa821e6a12a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://742d02f24a7096b76f82453ab15e531d4a378b4bbf0bb36eb1a1cb4ae1873165\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://742d02f24a7096b76f82453ab15e531d4a378b4bbf0bb36eb1a1cb4ae1873165\\\",\\\"exitCode\\\":0,\\\"fi
nishedAt\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2253bab71d535b61f7489679566abc1acf210e0d5f69773fedbb02befab17c2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2253bab71d535b61f7489679566abc1acf210e0d5f69773fedbb02befab17c2d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:21Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8fa71a988b649937f137ffb7533456570bf4fcd199134424fc7bff2b62335f1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fa71a988b649937f137ffb7533456570bf4fcd199134424fc7bff2b62335f1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:16Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:23Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:23 crc kubenswrapper[4853]: I1122 07:11:23.147244 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9c07602dc6c3b45809dd76a7079002e54e0bae24f7e3bc3470b463389303e74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:23Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:23 crc kubenswrapper[4853]: I1122 07:11:23.155219 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:23 crc kubenswrapper[4853]: I1122 07:11:23.155256 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:23 crc kubenswrapper[4853]: I1122 07:11:23.155269 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:23 crc kubenswrapper[4853]: I1122 07:11:23.155290 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:23 crc kubenswrapper[4853]: I1122 07:11:23.155304 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:23Z","lastTransitionTime":"2025-11-22T07:11:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:23 crc kubenswrapper[4853]: I1122 07:11:23.160580 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:23Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:23 crc kubenswrapper[4853]: I1122 07:11:23.174020 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:23Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:23 crc kubenswrapper[4853]: I1122 07:11:23.190925 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rvgxj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbbe3472-17cc-48dd-8e46-393b00149429\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5acc4b664aaf1d5b0471beb25334e65152bc3b9907f911f90e4e7899aaa1ce8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"syste
m-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ct6dm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rvgxj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:23Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:23 crc kubenswrapper[4853]: I1122 07:11:23.206690 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ckn94" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f60b37f-d6f5-4145-a3e7-cfe92fca6d77\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbf630e17f662af8880d1f34d5073a4f64e723987205f7bbb473a73808c7e935\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3adc38c2534c8026e260a63060426102130ca1bcbcb4e3741aaf4e87170a4c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3adc38c2534c8026e260a63060426102130ca1bcbcb4e3741aaf4e87170a4c7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db0b80c2471d5179ede5c4055fcebaabe08892f1534218952c2a19ab599be022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db0b80c2471d5179ede5c4055fcebaabe08892f1534218952c2a19ab599be022\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://157ea229b1effe5885acafb7b48db5695864ff001b1293c053aefb494f544f7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://157ea229b1effe5885acafb7b48db5695864ff001b1293c053aefb494f544f7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://203cba48e280628c33b464c336957085420ca2d345b1acfb5c8b8968ed2317e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://203cba48e280628c33b464c336957085420ca2d345b1acfb5c8b8968ed2317e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a10cb7826668319845471d465df1784a2cfba12ec3892e986bcf316c0c5e3c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a10cb7826668319845471d465df1784a2cfba12ec3892e986bcf316c0c5e3c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc93eb03f5f0cada6bf132a9de4446bc1b8ecafee3769bf18feeecea9d71478f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc93eb03f5f0cada6bf132a9de4446bc1b8ecafee3769bf18feeecea9d71478f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ckn94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:23Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:23 crc kubenswrapper[4853]: I1122 07:11:23.218615 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-pd6gs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9cc2bf97-eb39-4b0c-abda-99b49bb530fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bzn4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bzn4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:15Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-pd6gs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:23Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:23 crc kubenswrapper[4853]: I1122 07:11:23.258336 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:23 crc kubenswrapper[4853]: I1122 07:11:23.258377 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:23 crc kubenswrapper[4853]: I1122 07:11:23.258387 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Nov 22 07:11:23 crc kubenswrapper[4853]: I1122 07:11:23.258406 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:23 crc kubenswrapper[4853]: I1122 07:11:23.258417 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:23Z","lastTransitionTime":"2025-11-22T07:11:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:23 crc kubenswrapper[4853]: I1122 07:11:23.360405 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:23 crc kubenswrapper[4853]: I1122 07:11:23.360435 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:23 crc kubenswrapper[4853]: I1122 07:11:23.360445 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:23 crc kubenswrapper[4853]: I1122 07:11:23.360458 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:23 crc kubenswrapper[4853]: I1122 07:11:23.360469 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:23Z","lastTransitionTime":"2025-11-22T07:11:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:23 crc kubenswrapper[4853]: I1122 07:11:23.463219 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:23 crc kubenswrapper[4853]: I1122 07:11:23.463301 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:23 crc kubenswrapper[4853]: I1122 07:11:23.463315 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:23 crc kubenswrapper[4853]: I1122 07:11:23.463332 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:23 crc kubenswrapper[4853]: I1122 07:11:23.463345 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:23Z","lastTransitionTime":"2025-11-22T07:11:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:23 crc kubenswrapper[4853]: I1122 07:11:23.567091 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:23 crc kubenswrapper[4853]: I1122 07:11:23.567196 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:23 crc kubenswrapper[4853]: I1122 07:11:23.567218 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:23 crc kubenswrapper[4853]: I1122 07:11:23.567251 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:23 crc kubenswrapper[4853]: I1122 07:11:23.567277 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:23Z","lastTransitionTime":"2025-11-22T07:11:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:23 crc kubenswrapper[4853]: I1122 07:11:23.671311 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:23 crc kubenswrapper[4853]: I1122 07:11:23.671402 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:23 crc kubenswrapper[4853]: I1122 07:11:23.671427 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:23 crc kubenswrapper[4853]: I1122 07:11:23.671458 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:23 crc kubenswrapper[4853]: I1122 07:11:23.671487 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:23Z","lastTransitionTime":"2025-11-22T07:11:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:23 crc kubenswrapper[4853]: I1122 07:11:23.775198 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:23 crc kubenswrapper[4853]: I1122 07:11:23.775261 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:23 crc kubenswrapper[4853]: I1122 07:11:23.775283 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:23 crc kubenswrapper[4853]: I1122 07:11:23.775306 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:23 crc kubenswrapper[4853]: I1122 07:11:23.775324 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:23Z","lastTransitionTime":"2025-11-22T07:11:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:23 crc kubenswrapper[4853]: I1122 07:11:23.885575 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:23 crc kubenswrapper[4853]: I1122 07:11:23.885625 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:23 crc kubenswrapper[4853]: I1122 07:11:23.885674 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:23 crc kubenswrapper[4853]: I1122 07:11:23.885728 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:23 crc kubenswrapper[4853]: I1122 07:11:23.885877 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:23Z","lastTransitionTime":"2025-11-22T07:11:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:23 crc kubenswrapper[4853]: I1122 07:11:23.990566 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:23 crc kubenswrapper[4853]: I1122 07:11:23.990601 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:23 crc kubenswrapper[4853]: I1122 07:11:23.990610 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:23 crc kubenswrapper[4853]: I1122 07:11:23.990649 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:23 crc kubenswrapper[4853]: I1122 07:11:23.990662 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:23Z","lastTransitionTime":"2025-11-22T07:11:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:24 crc kubenswrapper[4853]: I1122 07:11:24.093491 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:24 crc kubenswrapper[4853]: I1122 07:11:24.093533 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:24 crc kubenswrapper[4853]: I1122 07:11:24.093545 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:24 crc kubenswrapper[4853]: I1122 07:11:24.093566 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:24 crc kubenswrapper[4853]: I1122 07:11:24.093580 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:24Z","lastTransitionTime":"2025-11-22T07:11:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:24 crc kubenswrapper[4853]: I1122 07:11:24.196421 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:24 crc kubenswrapper[4853]: I1122 07:11:24.196469 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:24 crc kubenswrapper[4853]: I1122 07:11:24.196479 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:24 crc kubenswrapper[4853]: I1122 07:11:24.196501 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:24 crc kubenswrapper[4853]: I1122 07:11:24.196514 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:24Z","lastTransitionTime":"2025-11-22T07:11:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:24 crc kubenswrapper[4853]: I1122 07:11:24.299830 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:24 crc kubenswrapper[4853]: I1122 07:11:24.299896 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:24 crc kubenswrapper[4853]: I1122 07:11:24.299910 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:24 crc kubenswrapper[4853]: I1122 07:11:24.299932 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:24 crc kubenswrapper[4853]: I1122 07:11:24.300264 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:24Z","lastTransitionTime":"2025-11-22T07:11:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:24 crc kubenswrapper[4853]: I1122 07:11:24.403192 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:24 crc kubenswrapper[4853]: I1122 07:11:24.403236 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:24 crc kubenswrapper[4853]: I1122 07:11:24.403249 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:24 crc kubenswrapper[4853]: I1122 07:11:24.403266 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:24 crc kubenswrapper[4853]: I1122 07:11:24.403278 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:24Z","lastTransitionTime":"2025-11-22T07:11:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:24 crc kubenswrapper[4853]: I1122 07:11:24.506663 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:24 crc kubenswrapper[4853]: I1122 07:11:24.506712 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:24 crc kubenswrapper[4853]: I1122 07:11:24.506727 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:24 crc kubenswrapper[4853]: I1122 07:11:24.506773 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:24 crc kubenswrapper[4853]: I1122 07:11:24.506789 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:24Z","lastTransitionTime":"2025-11-22T07:11:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:24 crc kubenswrapper[4853]: I1122 07:11:24.609638 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:24 crc kubenswrapper[4853]: I1122 07:11:24.609675 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:24 crc kubenswrapper[4853]: I1122 07:11:24.609686 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:24 crc kubenswrapper[4853]: I1122 07:11:24.609704 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:24 crc kubenswrapper[4853]: I1122 07:11:24.609717 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:24Z","lastTransitionTime":"2025-11-22T07:11:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:24 crc kubenswrapper[4853]: I1122 07:11:24.672374 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pqtsz_893f7e02-580a-4093-ab42-ea73ffffcfe6/ovnkube-controller/0.log" Nov 22 07:11:24 crc kubenswrapper[4853]: I1122 07:11:24.676878 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqtsz" event={"ID":"893f7e02-580a-4093-ab42-ea73ffffcfe6","Type":"ContainerStarted","Data":"fb3ade875e1b6d2182a278be4708aa0e473e27bb2f8a06ba5dc97ca0d03f629b"} Nov 22 07:11:24 crc kubenswrapper[4853]: I1122 07:11:24.677341 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-pqtsz" Nov 22 07:11:24 crc kubenswrapper[4853]: I1122 07:11:24.693896 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:24Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:24 crc kubenswrapper[4853]: I1122 07:11:24.708412 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9003355-0dc6-42ad-950d-a7884e333f4c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f0019c0473b7ccc974dc1659a7ae0565b39d69af17094c420625387ecc9d7f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94704fd95e75549ea29613f00da78d4517cc64896cf1274c9c6f784a61f4ad4a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T07:10:53Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI1122 07:10:26.514729 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI1122 07:10:26.525241 1 observer_polling.go:159] Starting file observer\\\\nI1122 07:10:26.883412 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI1122 07:10:26.897202 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI1122 07:10:50.420853 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nI1122 07:10:51.525898 1 observer_polling.go:162] Shutting down file observer\\\\nF1122 07:10:53.463148 1 cmd.go:179] failed checking apiserver connectivity: client rate limiter Wait returned an error: context canceled\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:21Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9736c38776f22909b8e8a832683d1b881f020257f7183397286e0dd23c973a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30f542bc05dc23bc8dff5029894d08a6fea8333de31a66432814b7bbc412c4fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5d8d77818159ca98019aac809db4b1b2460723bc56b9b6728eec11faeefd026\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager
-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:24Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:24 crc kubenswrapper[4853]: I1122 07:11:24.712193 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:24 crc kubenswrapper[4853]: I1122 07:11:24.712244 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:24 crc kubenswrapper[4853]: I1122 07:11:24.712261 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:24 crc kubenswrapper[4853]: I1122 07:11:24.712285 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:24 crc kubenswrapper[4853]: I1122 07:11:24.712300 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:24Z","lastTransitionTime":"2025-11-22T07:11:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:24 crc kubenswrapper[4853]: I1122 07:11:24.724187 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:24Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:24 crc kubenswrapper[4853]: I1122 07:11:24.740179 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:24Z is after 2025-08-24T17:21:41Z"
Nov 22 07:11:24 crc kubenswrapper[4853]: I1122 07:11:24.747370 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 22 07:11:24 crc kubenswrapper[4853]: E1122 07:11:24.747481 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 22 07:11:24 crc kubenswrapper[4853]: I1122 07:11:24.747835 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 22 07:11:24 crc kubenswrapper[4853]: E1122 07:11:24.747932 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 22 07:11:24 crc kubenswrapper[4853]: I1122 07:11:24.748034 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pd6gs"
Nov 22 07:11:24 crc kubenswrapper[4853]: I1122 07:11:24.748088 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 22 07:11:24 crc kubenswrapper[4853]: E1122 07:11:24.748215 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pd6gs" podUID="9cc2bf97-eb39-4b0c-abda-99b49bb530fd"
Nov 22 07:11:24 crc kubenswrapper[4853]: E1122 07:11:24.748413 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 22 07:11:24 crc kubenswrapper[4853]: I1122 07:11:24.756075 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rvgxj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbbe3472-17cc-48dd-8e46-393b00149429\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5acc4b664aaf1d5b0471beb25334e65152bc3b9907f911f90e4e7899aaa1ce8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"}
,{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ct6dm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rvgxj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:24Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:24 crc kubenswrapper[4853]: I1122 07:11:24.774911 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ckn94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f60b37f-d6f5-4145-a3e7-cfe92fca6d77\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbf630e17f662af8880d1f34d5073a4f64e723987205f7bbb473a73808c7e935\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11
\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3adc38c2534c8026e260a63060426102130ca1bcbcb4e3741aaf4e87170a4c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3adc38c2534c8026e260a63060426102130ca1bcbcb4e3741aaf4e87170a4c7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db0b80c2471d5179ede5c4055fcebaabe08892f1534218952c2a19ab599be022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db0b80c2471d5179ede5c4055fcebaabe08892f1534218952c2a19ab599be022\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://157ea229b1effe5885acafb7b48db5695864ff001b1293c053aefb494f544f7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://157ea229b1effe5885acafb7b48db5695864ff001b1293c053aefb494f544f7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"start
edAt\\\":\\\"2025-11-22T07:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://203cba48e280628c33b464c336957085420ca2d345b1acfb5c8b8968ed2317e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://203cba48e280628c33b464c336957085420ca2d345b1acfb5c8b8968ed2317e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a10cb7826668319845471d465df1784a2cfba12ec3892e986bcf316c0c5e3c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a10cb7826668319845471d465df1784a2cfba12ec3892e986bcf316c0c5e3c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc93eb03f5f0cada6bf132a9de4446bc1b8ecafee3769bf18feeecea9d71478f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc93eb03f5f0cada6bf132a9de4446bc1b8ecafee3769bf18feeecea9d71478f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ckn94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:24Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:24 crc kubenswrapper[4853]: I1122 07:11:24.789540 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-pd6gs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9cc2bf97-eb39-4b0c-abda-99b49bb530fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bzn4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bzn4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:15Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-pd6gs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:24Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:24 crc kubenswrapper[4853]: I1122 07:11:24.811116 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8dcea4b-e887-437a-972d-e6e10d41f725\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://753693005f8490e9928780fd4f230a1c8aba453e1ccbca71e70299799bfa46ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64d65e4d54cc8971b6f6349f7d32243b7faa5bb5974b22ed904a36fa34bc69fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0ba60480f038947ca4a2b9f963d38cb004714735d411218eaf6231614c4e21f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c83df4b798499177cfbd1372a939649b6cebb7
1b50b695af5ba7815cf450669\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed9e89cbcdfc1ac8430f617ee31a367c03be0a407da8af967930fa821e6a12a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://742d02f24a7096b76f82453ab15e531d4a378b4bbf0bb36eb1a1cb4ae1873165\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://742d02f24a7096b76f82453ab15e531d4a378b4bbf0bb36eb1a1cb4ae1873165\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2253bab71d535b61f7489679566abc1acf210e0d5f69773fedbb02befab17c2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2253bab71d535b61f7489679566abc1acf210e0d5f69773fedbb02befab17c2d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:21Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8fa71a988b649937f137ffb7533456570bf4fcd199134424fc7bff2b62335f1e\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fa71a988b649937f137ffb7533456570bf4fcd199134424fc7bff2b62335f1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:16Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:24Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:24 crc kubenswrapper[4853]: I1122 07:11:24.814941 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:24 crc kubenswrapper[4853]: I1122 07:11:24.815000 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:24 crc kubenswrapper[4853]: I1122 07:11:24.815012 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:24 crc kubenswrapper[4853]: I1122 07:11:24.815030 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:24 crc kubenswrapper[4853]: I1122 07:11:24.815047 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:24Z","lastTransitionTime":"2025-11-22T07:11:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:24 crc kubenswrapper[4853]: I1122 07:11:24.829893 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9c07602dc6c3b45809dd76a7079002e54e0bae24f7e3bc3470b463389303e74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:24Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:24 crc kubenswrapper[4853]: I1122 07:11:24.848213 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqtsz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"893f7e02-580a-4093-ab42-ea73ffffcfe6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://959a487fe36b2bf7615c5ed65bc8255b52332f6e2d973912a5e028ff35324f2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://979f6b48ff6c91deca2bb0784d1ec745c4c0d70abdfd6f519eb82058fec05c5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34d7101a46b49cedde91b3502f1bc962179253a1bfbe5f21599ce89a91ab905d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28ab876013e6c4eb3a4806b0184f1d6615a478a34b6d8ee790d29bccd77f33df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccf377912acc39f2d30013ef3bd58a8268b927fd89da2306fcf00cb89c30893e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a08eb68f9d8b071226ac64744097fe0097ab78f86242872c4e0aacdd213adbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb3ade875e1b6d2182a278be4708aa0e473e27bb2f8a06ba5dc97ca0d03f629b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f92a80784d0788a478ccde73bcf34dd9fc3e42dd5005138293506125bd3924f1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-22T07:11:22Z\\\",\\\"message\\\":\\\"ndler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1122 07:11:22.106463 6240 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1122 07:11:22.106491 6240 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1122 07:11:22.106501 6240 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1122 07:11:22.106526 6240 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1122 07:11:22.106583 6240 factory.go:656] Stopping watch factory\\\\nI1122 07:11:22.106603 6240 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1122 07:11:22.106613 6240 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1122 07:11:22.106622 6240 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1122 07:11:22.106630 6240 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1122 07:11:22.106639 6240 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1122 07:11:22.106737 6240 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI1122 07:11:22.106800 6240 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://902e41148618e1099d651f2a4cbfa00ca1dd11533b086564f090dd498ecc1b95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\
\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7b9396399dd9787c1addcb4e2e7697d78fee817d56cafd408b08b2b7d0cc5f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7b9396399dd9787c1addcb4e2e7697d78fee817d56cafd408b08b2b7d0cc5f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pqtsz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:24Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:24 crc kubenswrapper[4853]: I1122 07:11:24.860987 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9nx9m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"120cba0a-6e0b-40b3-8c15-46e7ff7c8641\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7b7b92862fc6cca812cb5c0aa9b320fbbd11f8b458d591b56cacab985a79edd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4p7q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9nx9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:24Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:24 crc kubenswrapper[4853]: I1122 07:11:24.877963 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"19e6b5d3-a090-47b9-bddc-dd2aaccd4ff4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"message\\\":\\\"containers with 
unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96b61660fcd78f2f81eab2c1a0cb32018a5b32d49581167e33e7dca8ba5ddf2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a0d61ce2e58ce8d9abdd1fb9688722d8e25f8354553aef76b8cd4b3897135a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21be0b3346d0dd20c21a867b8fbd30e6291d09bd1745febbc2face7610f6d831\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e4d5ab3d780b4ec1360dd8a07fc7de5d4daec3a5f8334010949d4fd60b5516b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6dbbb1c22bd116e5913c6bc774b4705f0e4342861ceae30dc0f53c30d6fa39f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T07:10:53Z\\\",\\\"message\\\":\\\" envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" 
feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1122 07:10:53.494379 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1122 07:10:53.495005 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1122 07:10:53.495036 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1122 07:10:53.495110 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1122 07:10:53.495131 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1122 07:10:53.496216 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1763795444\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1763795444\\\\\\\\\\\\\\\" (2025-11-22 06:10:44 +0000 UTC to 2026-11-22 06:10:44 +0000 UTC (now=2025-11-22 07:10:53.49617265 +0000 UTC))\\\\\\\"\\\\nI1122 07:10:53.496261 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1122 07:10:53.496336 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1122 07:10:53.497036 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1122 07:10:53.497137 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1122 07:10:53.498721 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI1122 07:10:53.500291 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF1122 07:10:53.501305 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:32Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c1ee716027cb8b663ee9faec3faed873df5079418b9b46574cf3b3a74048eef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13ca1fba1325e0ae8556def3ba0c98aa0aff6e3b9ed2aab66a5c950d2bace44e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13ca1fba1325e0ae8556def3ba0c98aa0aff6e3b9ed2aab66a5c950d2bace44e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:24Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:24 crc kubenswrapper[4853]: I1122 07:11:24.891286 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mlpz8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c09200ba-013f-45e3-b581-8523557344b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3feee491b899c2b2f58330fe2e87ba5404deca4249c5d39a6d3aa08acd10ef29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvk87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mlpz8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:24Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:24 crc kubenswrapper[4853]: I1122 07:11:24.906743 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://347fc1ee4909acd862f23a9f25b4ac7ab7271ff03f0f726479d9b711820970c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:24Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:24 crc kubenswrapper[4853]: I1122 07:11:24.918207 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:24 crc kubenswrapper[4853]: I1122 07:11:24.918244 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:24 crc kubenswrapper[4853]: I1122 07:11:24.918254 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:24 crc kubenswrapper[4853]: I1122 07:11:24.918277 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:24 crc kubenswrapper[4853]: I1122 07:11:24.918292 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:24Z","lastTransitionTime":"2025-11-22T07:11:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:24 crc kubenswrapper[4853]: I1122 07:11:24.921461 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"476c875a-2b87-419a-8042-0ba059620fd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://babd614ecd140db7467335933695fcc122380ef2c6a3d4ae6e89b0ed29e87d69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zkhrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28cf28a4f0e05df5ad55eff2ab13e375fb1d5725f1d0e3b85c7c9cd785cf4453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zkhrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fflvd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:24Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:24 crc kubenswrapper[4853]: I1122 07:11:24.934819 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nhlw4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81cf3334-f910-4d46-be00-b3cd66ba8ed4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbfa536576b192a75c2a3eef9dfde8c7f7de0a8e6edae3870f45f1c44cc48163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ckff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86bcf649be20aa1cc1bce89663ea6b7b920ac2548fbf8a22c87636bdafc4b329\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ckff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:
13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nhlw4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:24Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:24 crc kubenswrapper[4853]: I1122 07:11:24.948270 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63551235-20f7-4ccc-a132-9f5302e01da5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0368eb3431a7b8e0225b3728f72d601ce3f9639f06175b96818fd798b33e4143\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fc661089ab1f7286ead68fd510a0848c9df8dd41a8ed349d03b43b72e1a369a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdc0676190f39e3a925b593e663ec1af1fa253b8177b7cb81fd49515e0c55c3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-sche
duler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a97a0d92e5fdd8906fa4f150c4b7d95fdd1f3b92ac45a46e0a6846fd4a15bb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a97a0d92e5fdd8906fa4f150c4b7d95fdd1f3b92ac45a46e0a6846fd4a15bb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:19Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:16Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:24Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:24 crc kubenswrapper[4853]: I1122 07:11:24.963870 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b27716f9e6c912f968c4a340199cdc2dde0262d86b4832f84cae8daefa831909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a2fb699e2e8a016fa383c1d4ed2d4f0d69c26b79fb2f0dd735a84de2fd5a23d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:24Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:25 crc kubenswrapper[4853]: I1122 07:11:25.021788 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:25 crc kubenswrapper[4853]: I1122 07:11:25.021845 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:25 crc kubenswrapper[4853]: I1122 07:11:25.021856 4853 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Nov 22 07:11:25 crc kubenswrapper[4853]: I1122 07:11:25.021884 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:25 crc kubenswrapper[4853]: I1122 07:11:25.021897 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:25Z","lastTransitionTime":"2025-11-22T07:11:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:25 crc kubenswrapper[4853]: I1122 07:11:25.124877 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:25 crc kubenswrapper[4853]: I1122 07:11:25.124928 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:25 crc kubenswrapper[4853]: I1122 07:11:25.124941 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:25 crc kubenswrapper[4853]: I1122 07:11:25.124962 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:25 crc kubenswrapper[4853]: I1122 07:11:25.124978 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:25Z","lastTransitionTime":"2025-11-22T07:11:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:25 crc kubenswrapper[4853]: I1122 07:11:25.228349 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:25 crc kubenswrapper[4853]: I1122 07:11:25.228411 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:25 crc kubenswrapper[4853]: I1122 07:11:25.228424 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:25 crc kubenswrapper[4853]: I1122 07:11:25.228442 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:25 crc kubenswrapper[4853]: I1122 07:11:25.228454 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:25Z","lastTransitionTime":"2025-11-22T07:11:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:25 crc kubenswrapper[4853]: I1122 07:11:25.331715 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:25 crc kubenswrapper[4853]: I1122 07:11:25.331771 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:25 crc kubenswrapper[4853]: I1122 07:11:25.331783 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:25 crc kubenswrapper[4853]: I1122 07:11:25.331802 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:25 crc kubenswrapper[4853]: I1122 07:11:25.331815 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:25Z","lastTransitionTime":"2025-11-22T07:11:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:25 crc kubenswrapper[4853]: I1122 07:11:25.434552 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:25 crc kubenswrapper[4853]: I1122 07:11:25.434944 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:25 crc kubenswrapper[4853]: I1122 07:11:25.435046 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:25 crc kubenswrapper[4853]: I1122 07:11:25.435153 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:25 crc kubenswrapper[4853]: I1122 07:11:25.435238 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:25Z","lastTransitionTime":"2025-11-22T07:11:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:25 crc kubenswrapper[4853]: I1122 07:11:25.538374 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:25 crc kubenswrapper[4853]: I1122 07:11:25.538806 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:25 crc kubenswrapper[4853]: I1122 07:11:25.538829 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:25 crc kubenswrapper[4853]: I1122 07:11:25.538845 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:25 crc kubenswrapper[4853]: I1122 07:11:25.538856 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:25Z","lastTransitionTime":"2025-11-22T07:11:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:25 crc kubenswrapper[4853]: I1122 07:11:25.641589 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:25 crc kubenswrapper[4853]: I1122 07:11:25.641641 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:25 crc kubenswrapper[4853]: I1122 07:11:25.641654 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:25 crc kubenswrapper[4853]: I1122 07:11:25.641673 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:25 crc kubenswrapper[4853]: I1122 07:11:25.641685 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:25Z","lastTransitionTime":"2025-11-22T07:11:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:25 crc kubenswrapper[4853]: I1122 07:11:25.682431 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pqtsz_893f7e02-580a-4093-ab42-ea73ffffcfe6/ovnkube-controller/1.log" Nov 22 07:11:25 crc kubenswrapper[4853]: I1122 07:11:25.683298 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pqtsz_893f7e02-580a-4093-ab42-ea73ffffcfe6/ovnkube-controller/0.log" Nov 22 07:11:25 crc kubenswrapper[4853]: I1122 07:11:25.686424 4853 generic.go:334] "Generic (PLEG): container finished" podID="893f7e02-580a-4093-ab42-ea73ffffcfe6" containerID="fb3ade875e1b6d2182a278be4708aa0e473e27bb2f8a06ba5dc97ca0d03f629b" exitCode=1 Nov 22 07:11:25 crc kubenswrapper[4853]: I1122 07:11:25.686484 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqtsz" event={"ID":"893f7e02-580a-4093-ab42-ea73ffffcfe6","Type":"ContainerDied","Data":"fb3ade875e1b6d2182a278be4708aa0e473e27bb2f8a06ba5dc97ca0d03f629b"} Nov 22 07:11:25 crc kubenswrapper[4853]: I1122 07:11:25.686532 4853 scope.go:117] "RemoveContainer" containerID="f92a80784d0788a478ccde73bcf34dd9fc3e42dd5005138293506125bd3924f1" Nov 22 07:11:25 crc kubenswrapper[4853]: I1122 07:11:25.687506 4853 scope.go:117] "RemoveContainer" containerID="fb3ade875e1b6d2182a278be4708aa0e473e27bb2f8a06ba5dc97ca0d03f629b" Nov 22 07:11:25 crc kubenswrapper[4853]: E1122 07:11:25.687731 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-pqtsz_openshift-ovn-kubernetes(893f7e02-580a-4093-ab42-ea73ffffcfe6)\"" pod="openshift-ovn-kubernetes/ovnkube-node-pqtsz" podUID="893f7e02-580a-4093-ab42-ea73ffffcfe6" Nov 22 07:11:25 crc kubenswrapper[4853]: I1122 07:11:25.709454 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:25Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:25 crc kubenswrapper[4853]: I1122 07:11:25.722808 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9003355-0dc6-42ad-950d-a7884e333f4c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f0019c0473b7ccc974dc1659a7ae0565b39d69af17094c420625387ecc9d7f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94704fd95e75549ea29613f00da78d4517cc64896cf1274c9c6f784a61f4ad4a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T07:10:53Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI1122 07:10:26.514729 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI1122 07:10:26.525241 1 observer_polling.go:159] Starting file observer\\\\nI1122 07:10:26.883412 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI1122 07:10:26.897202 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI1122 07:10:50.420853 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nI1122 07:10:51.525898 1 observer_polling.go:162] Shutting down file observer\\\\nF1122 07:10:53.463148 1 cmd.go:179] failed checking apiserver connectivity: client rate limiter Wait returned an error: context canceled\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:21Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9736c38776f22909b8e8a832683d1b881f020257f7183397286e0dd23c973a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30f542bc05dc23bc8dff5029894d08a6fea8333de31a66432814b7bbc412c4fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5d8d77818159ca98019aac809db4b1b2460723bc56b9b6728eec11faeefd026\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager
-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:25Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:25 crc kubenswrapper[4853]: I1122 07:11:25.735164 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:25Z is after 2025-08-24T17:21:41Z"
Nov 22 07:11:25 crc kubenswrapper[4853]: I1122 07:11:25.745041 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 22 07:11:25 crc kubenswrapper[4853]: I1122 07:11:25.745100 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 22 07:11:25 crc kubenswrapper[4853]: I1122 07:11:25.745117 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 22 07:11:25 crc kubenswrapper[4853]: I1122 07:11:25.745143 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 22 07:11:25 crc kubenswrapper[4853]: I1122 07:11:25.745157 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:25Z","lastTransitionTime":"2025-11-22T07:11:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:25 crc kubenswrapper[4853]: I1122 07:11:25.748781 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:25Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:25 crc kubenswrapper[4853]: I1122 07:11:25.768199 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rvgxj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbbe3472-17cc-48dd-8e46-393b00149429\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5acc4b664aaf1d5b0471beb25334e65152bc3b9907f911f90e4e7899aaa1ce8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ct6dm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rvgxj\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:25Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:25 crc kubenswrapper[4853]: I1122 07:11:25.785552 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ckn94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f60b37f-d6f5-4145-a3e7-cfe92fca6d77\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbf630e17f662af8880d1f34d5073a4f64e723987205f7bbb473a73808c7e935\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3adc38c2534c8026e260a63060426102130ca1bcbcb4e3741aaf4e87170a4c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3adc38c2534c8026e260a63060426102130ca1bcbcb4e3741aaf4e87170a4c7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db0b80c2471d5179ede5c4055fcebaabe08892f1534218952c2a19ab599be022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db0b80c2471d5179ede5c4055fcebaabe08892f1534218952c2a19ab599be022\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://157ea229b1effe5885acafb7b48db5695864ff001b1293c053aefb494f544f7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://157ea229b1effe5885acafb7b48db5695864ff001b1293c053aefb494f544f7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://203cba48e280628c33b464c336957085420ca2d345b1acfb5c8b8968ed2317e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://203cba48e280628c33b464c336957085420ca2d345b1acfb5c8b8968ed2317e0\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a10cb7826668319845471d465df1784a2cfba12ec3892e986bcf316c0c5e3c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a10cb7826668319845471d465df1784a2cfba12ec3892e986bcf316c0c5e3c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc93eb03f5f0cada6bf132a9de4446bc1b8ecafee3769bf18feeecea9d71478f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc93eb03f5f0cada6bf132a9de4446bc1b8ecafee3769bf18feeecea9d71478f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ckn94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-22T07:11:25Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:25 crc kubenswrapper[4853]: I1122 07:11:25.797931 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-pd6gs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9cc2bf97-eb39-4b0c-abda-99b49bb530fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bzn4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bzn4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:15Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-pd6gs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:25Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:25 crc kubenswrapper[4853]: I1122 07:11:25.818467 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8dcea4b-e887-437a-972d-e6e10d41f725\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://753693005f8490e9928780fd4f230a1c8aba453e1ccbca71e70299799bfa46ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64d65e4d54cc8971b6f6349f7d32243b7faa5bb5974b22ed904a36fa34bc69fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0ba60480f038947ca4a2b9f963d38cb004714735d411218eaf6231614c4e21f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c83df4b798499177cfbd1372a939649b6cebb7
1b50b695af5ba7815cf450669\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed9e89cbcdfc1ac8430f617ee31a367c03be0a407da8af967930fa821e6a12a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://742d02f24a7096b76f82453ab15e531d4a378b4bbf0bb36eb1a1cb4ae1873165\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://742d02f24a7096b76f82453ab15e531d4a378b4bbf0bb36eb1a1cb4ae1873165\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2253bab71d535b61f7489679566abc1acf210e0d5f69773fedbb02befab17c2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2253bab71d535b61f7489679566abc1acf210e0d5f69773fedbb02befab17c2d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:21Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8fa71a988b649937f137ffb7533456570bf4fcd199134424fc7bff2b62335f1e\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fa71a988b649937f137ffb7533456570bf4fcd199134424fc7bff2b62335f1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:16Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:25Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:25 crc kubenswrapper[4853]: I1122 07:11:25.833344 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9c07602dc6c3b45809dd76a7079002e54e0bae24f7e3bc3470b463389303e74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:25Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:25 crc kubenswrapper[4853]: I1122 07:11:25.847526 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:25 crc kubenswrapper[4853]: I1122 07:11:25.847570 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:25 crc kubenswrapper[4853]: I1122 07:11:25.847583 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:25 crc kubenswrapper[4853]: I1122 07:11:25.847602 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:25 crc kubenswrapper[4853]: I1122 07:11:25.847614 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:25Z","lastTransitionTime":"2025-11-22T07:11:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:25 crc kubenswrapper[4853]: I1122 07:11:25.853717 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqtsz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"893f7e02-580a-4093-ab42-ea73ffffcfe6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://959a487fe36b2bf7615c5ed65bc8255b52332f6e2d973912a5e028ff35324f2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://979f6b48ff6c91deca2bb0784d1ec745c4c0d70abdfd6f519eb82058fec05c5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34d7101a46b49cedde91b3502f1bc962179253a1bfbe5f21599ce89a91ab905d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28ab876013e6c4eb3a4806b0184f1d6615a478a34b6d8ee790d29bccd77f33df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccf377912acc39f2d30013ef3bd58a8268b927fd89da2306fcf00cb89c30893e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a08eb68f9d8b071226ac64744097fe0097ab78f86242872c4e0aacdd213adbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb3ade875e1b6d2182a278be4708aa0e473e27bb
2f8a06ba5dc97ca0d03f629b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f92a80784d0788a478ccde73bcf34dd9fc3e42dd5005138293506125bd3924f1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-22T07:11:22Z\\\",\\\"message\\\":\\\"ndler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1122 07:11:22.106463 6240 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1122 07:11:22.106491 6240 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1122 07:11:22.106501 6240 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1122 07:11:22.106526 6240 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1122 07:11:22.106583 6240 factory.go:656] Stopping watch factory\\\\nI1122 07:11:22.106603 6240 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1122 07:11:22.106613 6240 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1122 07:11:22.106622 6240 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1122 07:11:22.106630 6240 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1122 07:11:22.106639 6240 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1122 07:11:22.106737 6240 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI1122 07:11:22.106800 6240 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb3ade875e1b6d2182a278be4708aa0e473e27bb2f8a06ba5dc97ca0d03f629b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-22T07:11:24Z\\\",\\\"message\\\":\\\":160\\\\nI1122 07:11:24.426832 6516 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1122 07:11:24.426933 6516 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI1122 07:11:24.426998 6516 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1122 07:11:24.427601 6516 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1122 07:11:24.427608 6516 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1122 07:11:24.427783 6516 factory.go:656] Stopping watch factory\\\\nI1122 07:11:24.427832 6516 handler.go:208] Removed *v1.Node event handler 2\\\\nI1122 07:11:24.430415 6516 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI1122 07:11:24.430434 6516 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI1122 07:11:24.430512 6516 ovnkube.go:599] Stopped ovnkube\\\\nI1122 
07:11:24.430561 6516 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1122 07:11:24.430654 6516 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://902e41148618e1099d651f2a4cbfa00ca1dd11533b086564f090dd498ecc1b95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri
-o://a7b9396399dd9787c1addcb4e2e7697d78fee817d56cafd408b08b2b7d0cc5f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7b9396399dd9787c1addcb4e2e7697d78fee817d56cafd408b08b2b7d0cc5f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pqtsz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:25Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:25 crc kubenswrapper[4853]: I1122 07:11:25.865909 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9nx9m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"120cba0a-6e0b-40b3-8c15-46e7ff7c8641\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7b7b92862fc6cca812cb5c0aa9b320fbbd11f8b458d591b56cacab985a79edd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4p7q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9nx9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:25Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:25 crc kubenswrapper[4853]: I1122 07:11:25.880671 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"19e6b5d3-a090-47b9-bddc-dd2aaccd4ff4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"message\\\":\\\"containers with 
unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96b61660fcd78f2f81eab2c1a0cb32018a5b32d49581167e33e7dca8ba5ddf2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a0d61ce2e58ce8d9abdd1fb9688722d8e25f8354553aef76b8cd4b3897135a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21be0b3346d0dd20c21a867b8fbd30e6291d09bd1745febbc2face7610f6d831\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e4d5ab3d780b4ec1360dd8a07fc7de5d4daec3a5f8334010949d4fd60b5516b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6dbbb1c22bd116e5913c6bc774b4705f0e4342861ceae30dc0f53c30d6fa39f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T07:10:53Z\\\",\\\"message\\\":\\\" envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" 
feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1122 07:10:53.494379 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1122 07:10:53.495005 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1122 07:10:53.495036 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1122 07:10:53.495110 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1122 07:10:53.495131 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1122 07:10:53.496216 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1763795444\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1763795444\\\\\\\\\\\\\\\" (2025-11-22 06:10:44 +0000 UTC to 2026-11-22 06:10:44 +0000 UTC (now=2025-11-22 07:10:53.49617265 +0000 UTC))\\\\\\\"\\\\nI1122 07:10:53.496261 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1122 07:10:53.496336 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1122 07:10:53.497036 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1122 07:10:53.497137 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1122 07:10:53.498721 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI1122 07:10:53.500291 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF1122 07:10:53.501305 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:32Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c1ee716027cb8b663ee9faec3faed873df5079418b9b46574cf3b3a74048eef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13ca1fba1325e0ae8556def3ba0c98aa0aff6e3b9ed2aab66a5c950d2bace44e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13ca1fba1325e0ae8556def3ba0c98aa0aff6e3b9ed2aab66a5c950d2bace44e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:25Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:25 crc kubenswrapper[4853]: I1122 07:11:25.891649 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mlpz8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c09200ba-013f-45e3-b581-8523557344b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3feee491b899c2b2f58330fe2e87ba5404deca4249c5d39a6d3aa08acd10ef29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvk87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mlpz8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:25Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:25 crc kubenswrapper[4853]: I1122 07:11:25.904543 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://347fc1ee4909acd862f23a9f25b4ac7ab7271ff03f0f726479d9b711820970c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:25Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:25 crc kubenswrapper[4853]: I1122 07:11:25.919349 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"476c875a-2b87-419a-8042-0ba059620fd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://babd614ecd140db7467335933695fcc122380ef2c6a3d4ae6e89b0ed29e87d69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zkhrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28cf28a4f0e05df5ad55eff2ab13e375fb1d5725f1d0e3b85c7c9cd785cf4453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zkhrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fflvd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:25Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:25 crc kubenswrapper[4853]: I1122 07:11:25.933343 4853 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nhlw4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81cf3334-f910-4d46-be00-b3cd66ba8ed4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbfa536576b192a75c2a3eef9dfde8c7f7de0a8e6edae3870f45f1c44cc48163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ckff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86bcf649be20aa1cc1bce89663ea6b7b920ac2548fbf8a22c87636bdafc4b329\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ckff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nhlw4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:25Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:25 crc kubenswrapper[4853]: I1122 07:11:25.946021 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63551235-20f7-4ccc-a132-9f5302e01da5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0368eb3431a7b8e0225b3728f72d601ce3f9639f06175b96818fd798b33e4143\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fc661089ab1f7286ead68fd510a0848c9df8dd41a8ed349d03b43b72e1a369a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdc0676190f39e3a925b593e663ec1af1fa253b8177b7cb81fd49515e0c55c3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"
},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a97a0d92e5fdd8906fa4f150c4b7d95fdd1f3b92ac45a46e0a6846fd4a15bb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a97a0d92e5fdd8906fa4f150c4b7d95fdd1f3b92ac45a46e0a6846fd4a15bb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:19Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:16Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:25Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:25 crc kubenswrapper[4853]: I1122 07:11:25.950093 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:25 crc kubenswrapper[4853]: I1122 07:11:25.950137 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:25 crc kubenswrapper[4853]: I1122 07:11:25.950148 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:25 crc kubenswrapper[4853]: I1122 07:11:25.950167 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:25 crc kubenswrapper[4853]: I1122 07:11:25.950178 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:25Z","lastTransitionTime":"2025-11-22T07:11:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:25 crc kubenswrapper[4853]: I1122 07:11:25.961537 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b27716f9e6c912f968c4a340199cdc2dde0262d86b4832f84cae8daefa831909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a2fb699e2e8a016fa383c1d4ed2d4f0d69c26b79fb2f0dd735a84de2fd5a23d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:25Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:25 crc kubenswrapper[4853]: I1122 07:11:25.977006 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"19e6b5d3-a090-47b9-bddc-dd2aaccd4ff4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96b61660fcd78f2f81eab2c1a0cb32018a5b32d49581167e33e7dca8ba5ddf2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a0d61ce2e58ce8d9abdd1fb9688722d8e25f8354553aef76b8cd4b3897135a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21be0b3346d0dd20c21a867b8fbd30e6291d09bd1745febbc2face7610f6d831\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e4d5ab3d780b4ec1360dd8a07fc7de5d4daec3a5f8334010949d4fd60b5516b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6dbbb1c22bd116e5913c6bc774b4705f0e4342861ceae30dc0f53c30d6fa39f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T07:10:53Z\\\",\\\"message\\\":\\\" envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1122 07:10:53.494379 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1122 07:10:53.495005 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1122 07:10:53.495036 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1122 07:10:53.495110 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1122 07:10:53.495131 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1122 07:10:53.496216 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1763795444\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1763795444\\\\\\\\\\\\\\\" (2025-11-22 06:10:44 +0000 UTC to 2026-11-22 06:10:44 +0000 UTC (now=2025-11-22 07:10:53.49617265 +0000 UTC))\\\\\\\"\\\\nI1122 07:10:53.496261 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1122 07:10:53.496336 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1122 07:10:53.497036 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1122 07:10:53.497137 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1122 07:10:53.498721 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI1122 07:10:53.500291 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF1122 07:10:53.501305 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:32Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c1ee716027cb8b663ee9faec3faed873df5079418b9b46574cf3b3a74048eef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13ca1fba1325e0ae8556def3ba0c98aa0aff6e3b9ed2aab66a5c950d2bace44e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13ca1fba1325e0ae8556def3ba0c98aa0aff6e3b9ed2aab66a5c950d2bace44e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:25Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:25 crc kubenswrapper[4853]: I1122 07:11:25.989223 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mlpz8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c09200ba-013f-45e3-b581-8523557344b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3feee491b899c2b2f58330fe2e87ba5404deca4249c5d39a6d3aa08acd10ef29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvk87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mlpz8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:25Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:26 crc kubenswrapper[4853]: I1122 07:11:26.009536 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqtsz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"893f7e02-580a-4093-ab42-ea73ffffcfe6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://959a487fe36b2bf7615c5ed65bc8255b52332f6e2d973912a5e028ff35324f2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://979f6b48ff6c91deca2bb0784d1ec745c4c0d70abdfd6f519eb82058fec05c5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34d7101a46b49cedde91b3502f1bc962179253a1bfbe5f21599ce89a91ab905d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28ab876013e6c4eb3a4806b0184f1d6615a478a34b6d8ee790d29bccd77f33df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccf377912acc39f2d30013ef3bd58a8268b927fd89da2306fcf00cb89c30893e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a08eb68f9d8b071226ac64744097fe0097ab78f86242872c4e0aacdd213adbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb3ade875e1b6d2182a278be4708aa0e473e27bb2f8a06ba5dc97ca0d03f629b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f92a80784d0788a478ccde73bcf34dd9fc3e42dd5005138293506125bd3924f1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-22T07:11:22Z\\\",\\\"message\\\":\\\"ndler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1122 07:11:22.106463 6240 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1122 07:11:22.106491 6240 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1122 07:11:22.106501 6240 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1122 07:11:22.106526 6240 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1122 07:11:22.106583 6240 factory.go:656] Stopping watch factory\\\\nI1122 07:11:22.106603 6240 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1122 07:11:22.106613 6240 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1122 07:11:22.106622 6240 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1122 07:11:22.106630 6240 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1122 07:11:22.106639 6240 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1122 07:11:22.106737 6240 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI1122 07:11:22.106800 6240 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb3ade875e1b6d2182a278be4708aa0e473e27bb2f8a06ba5dc97ca0d03f629b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-22T07:11:24Z\\\",\\\"message\\\":\\\":160\\\\nI1122 07:11:24.426832 6516 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1122 07:11:24.426933 6516 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI1122 07:11:24.426998 6516 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1122 07:11:24.427601 6516 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1122 07:11:24.427608 6516 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1122 07:11:24.427783 6516 factory.go:656] Stopping watch factory\\\\nI1122 07:11:24.427832 6516 handler.go:208] Removed *v1.Node event handler 2\\\\nI1122 07:11:24.430415 6516 
shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI1122 07:11:24.430434 6516 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI1122 07:11:24.430512 6516 ovnkube.go:599] Stopped ovnkube\\\\nI1122 07:11:24.430561 6516 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1122 07:11:24.430654 6516 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://902e41148618e1099d651f2a4cbfa00ca1dd11533b086564f090dd498ecc1b95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\
\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7b9396399dd9787c1addcb4e2e7697d78fee817d56cafd408b08b2b7d0cc5f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7b9396399dd9787c1addcb4e2e7697d78fee817d56cafd408b08b2b7d0cc5f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pqtsz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:26Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:26 crc kubenswrapper[4853]: I1122 07:11:26.022371 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9nx9m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"120cba0a-6e0b-40b3-8c15-46e7ff7c8641\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7b7b92862fc6cca812cb5c0aa9b320fbbd11f8b458d591b56cacab985a79edd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4p7q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9nx9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:26Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:26 crc kubenswrapper[4853]: I1122 07:11:26.035159 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nhlw4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"81cf3334-f910-4d46-be00-b3cd66ba8ed4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbfa536576b192a75c2a3eef9dfde8c7f7de0a8e6edae3870f45f1c44cc48163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ckff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86bcf649be20aa1cc1bce89663ea6b7b920ac2548fbf8a22c87636bdafc4b329\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ckff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nhlw4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:26Z is after 2025-08-24T17:21:41Z" Nov 22 
07:11:26 crc kubenswrapper[4853]: I1122 07:11:26.049980 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63551235-20f7-4ccc-a132-9f5302e01da5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0368eb3431a7b8e0225b3728f72d601ce3f9639f06175b96818fd798b33e4143\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fc661089ab1f7286ead68fd510a0848c9df8dd41a8ed349d03b43b72e1a369a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdc0676190f39e3a925b593e663ec1af1fa253b8177b7cb81fd49515e0c55c3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.
126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a97a0d92e5fdd8906fa4f150c4b7d95fdd1f3b92ac45a46e0a6846fd4a15bb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a97a0d92e5fdd8906fa4f150c4b7d95fdd1f3b92ac45a46e0a6846fd4a15bb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:19Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:16Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:26Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:26 crc kubenswrapper[4853]: I1122 07:11:26.052771 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:26 crc kubenswrapper[4853]: I1122 07:11:26.052803 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:26 crc kubenswrapper[4853]: I1122 07:11:26.052814 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:26 crc kubenswrapper[4853]: I1122 07:11:26.052831 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:26 crc kubenswrapper[4853]: I1122 07:11:26.052845 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:26Z","lastTransitionTime":"2025-11-22T07:11:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:26 crc kubenswrapper[4853]: I1122 07:11:26.067406 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b27716f9e6c912f968c4a340199cdc2dde0262d86b4832f84cae8daefa831909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a2fb699e2e8a016fa383c1d4ed2d4f0d69c26b79fb2f0dd735a84de2fd5a23d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:26Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:26 crc kubenswrapper[4853]: I1122 07:11:26.079475 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://347fc1ee4909acd862f23a9f25b4ac7ab7271ff03f0f726479d9b711820970c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:26Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:26 crc kubenswrapper[4853]: I1122 07:11:26.089931 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"476c875a-2b87-419a-8042-0ba059620fd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://babd614ecd140db7467335933695fcc122380ef2c6a3d4ae6e89b0ed29e87d69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zkhrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28cf28a4f0e05df5ad55eff2ab13e375fb1d5725f1d0e3b85c7c9cd785cf4453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zkhrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fflvd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:26Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:26 crc kubenswrapper[4853]: I1122 07:11:26.110879 4853 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9003355-0dc6-42ad-950d-a7884e333f4c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f0019c0473b7ccc974dc1659a7ae0565b39d69af17094c420625387ecc9d7f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94704fd95e75549ea29613f00da78d4517cc64896cf1274c9c6f784a61f4ad4a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T07:10:53Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI1122 07:10:26.514729 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI1122 07:10:26.525241 1 observer_polling.go:159] Starting file observer\\\\nI1122 07:10:26.883412 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI1122 07:10:26.897202 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI1122 07:10:50.420853 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nI1122 07:10:51.525898 1 observer_polling.go:162] Shutting down file observer\\\\nF1122 07:10:53.463148 1 cmd.go:179] failed checking apiserver connectivity: client rate limiter Wait returned an error: context canceled\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:21Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9736c38776f22909b8e8a832683d1b881f020257f7183397286e0dd23c973a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30f542bc05dc23bc8dff5029894d08a6fea8333de31a66432814b7bbc412c4fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5d8d77818159ca98019aac809db4b1b2460723bc56b9b6728eec11faeefd026\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager
-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:26Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:26 crc kubenswrapper[4853]: I1122 07:11:26.134240 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:26Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:26 crc kubenswrapper[4853]: I1122 07:11:26.153851 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rvgxj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbbe3472-17cc-48dd-8e46-393b00149429\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5acc4b664aaf1d5b0471beb25334e65152bc3b9907f911f90e4e7899aaa1ce8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountP
ath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ct6dm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rvgxj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:26Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:26 crc kubenswrapper[4853]: I1122 07:11:26.155245 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:26 crc kubenswrapper[4853]: I1122 07:11:26.155289 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:26 crc kubenswrapper[4853]: I1122 07:11:26.155304 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:26 crc kubenswrapper[4853]: I1122 07:11:26.155331 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:26 crc kubenswrapper[4853]: I1122 07:11:26.155346 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:26Z","lastTransitionTime":"2025-11-22T07:11:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:26 crc kubenswrapper[4853]: I1122 07:11:26.173239 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ckn94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f60b37f-d6f5-4145-a3e7-cfe92fca6d77\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbf630e17f662af8880d1f34d5073a4f64e723987205f7bbb473a73808c7e935\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3adc38c2534c8026e260a63060426102130ca1bcbcb4e3741aaf4e87170a4c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3adc38c2534c8026e260a63060426102130ca1bcbcb4e3741aaf4e87170a4c7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db0b80c2471d5179ede5c4055fcebaabe08892f1534218952c2a19ab599be022\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db0b80c2471d5179ede5c4055fcebaabe08892f1534218952c2a19ab599be022\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://157ea229b1effe5885acafb7b48db5695864ff001b1293c053aefb494f544f7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://157ea229b1effe5885acafb7b48db5695864ff001b1293c053aefb494f544f7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://203cba48e280628c33b464c336957085420ca2d345b1acfb5c8b8968ed2317e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://203cba48e280628c33b464c336957085420ca2d345b1acfb5c8b8968ed2317e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a10cb7826668319845471d465df1784a2cfba12ec3892e986bcf316c0c5e3c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a10cb7826668319845471d465df1784a2cfba12ec3892e986bcf316c0c5e3c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc93eb03f5f0cada6bf132a9de4446bc1b8ecafee3769bf18feeecea9d71478f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc93eb03f5f0cada6bf132a9de4446bc1b8ecafee3769bf18feeecea9d71478f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ckn94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:26Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:26 crc kubenswrapper[4853]: I1122 07:11:26.187852 4853 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/network-metrics-daemon-pd6gs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9cc2bf97-eb39-4b0c-abda-99b49bb530fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bzn4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bzn4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:15Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-pd6gs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:26Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:26 crc kubenswrapper[4853]: I1122 07:11:26.215141 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8dcea4b-e887-437a-972d-e6e10d41f725\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://753693005f8490e9928780fd4f230a1c8aba453e1ccbca71e70299799bfa46ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64d65e4d54cc8971b6f6349f7d32243b7faa5bb5974b22ed904a36fa34bc69fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0ba60480f038947ca4a2b9f963d38cb004714735d411218eaf6231614c4e21f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c83df4b798499177cfbd1372a939649b6cebb7
1b50b695af5ba7815cf450669\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed9e89cbcdfc1ac8430f617ee31a367c03be0a407da8af967930fa821e6a12a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://742d02f24a7096b76f82453ab15e531d4a378b4bbf0bb36eb1a1cb4ae1873165\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://742d02f24a7096b76f82453ab15e531d4a378b4bbf0bb36eb1a1cb4ae1873165\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2253bab71d535b61f7489679566abc1acf210e0d5f69773fedbb02befab17c2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2253bab71d535b61f7489679566abc1acf210e0d5f69773fedbb02befab17c2d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:21Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8fa71a988b649937f137ffb7533456570bf4fcd199134424fc7bff2b62335f1e\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fa71a988b649937f137ffb7533456570bf4fcd199134424fc7bff2b62335f1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:16Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:26Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:26 crc kubenswrapper[4853]: I1122 07:11:26.230624 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9c07602dc6c3b45809dd76a7079002e54e0bae24f7e3bc3470b463389303e74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:26Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:26 crc kubenswrapper[4853]: I1122 07:11:26.246106 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:26Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:26 crc kubenswrapper[4853]: I1122 07:11:26.258650 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:26 crc kubenswrapper[4853]: I1122 07:11:26.258970 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:26 crc kubenswrapper[4853]: I1122 07:11:26.259080 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:26 crc kubenswrapper[4853]: I1122 07:11:26.259210 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:26 crc kubenswrapper[4853]: I1122 07:11:26.259320 4853 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:26Z","lastTransitionTime":"2025-11-22T07:11:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:26 crc kubenswrapper[4853]: I1122 07:11:26.263925 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:26Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:26 crc kubenswrapper[4853]: I1122 07:11:26.362779 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:26 crc kubenswrapper[4853]: I1122 07:11:26.362841 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:26 crc kubenswrapper[4853]: I1122 07:11:26.362854 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:26 crc kubenswrapper[4853]: I1122 07:11:26.362877 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:26 crc kubenswrapper[4853]: I1122 07:11:26.362888 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:26Z","lastTransitionTime":"2025-11-22T07:11:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:26 crc kubenswrapper[4853]: I1122 07:11:26.465824 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:26 crc kubenswrapper[4853]: I1122 07:11:26.465869 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:26 crc kubenswrapper[4853]: I1122 07:11:26.465880 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:26 crc kubenswrapper[4853]: I1122 07:11:26.465897 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:26 crc kubenswrapper[4853]: I1122 07:11:26.465907 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:26Z","lastTransitionTime":"2025-11-22T07:11:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:26 crc kubenswrapper[4853]: I1122 07:11:26.568239 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:26 crc kubenswrapper[4853]: I1122 07:11:26.568292 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:26 crc kubenswrapper[4853]: I1122 07:11:26.568305 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:26 crc kubenswrapper[4853]: I1122 07:11:26.568324 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:26 crc kubenswrapper[4853]: I1122 07:11:26.568340 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:26Z","lastTransitionTime":"2025-11-22T07:11:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:26 crc kubenswrapper[4853]: I1122 07:11:26.671480 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:26 crc kubenswrapper[4853]: I1122 07:11:26.671530 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:26 crc kubenswrapper[4853]: I1122 07:11:26.671543 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:26 crc kubenswrapper[4853]: I1122 07:11:26.671561 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:26 crc kubenswrapper[4853]: I1122 07:11:26.671574 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:26Z","lastTransitionTime":"2025-11-22T07:11:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:26 crc kubenswrapper[4853]: I1122 07:11:26.691401 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pqtsz_893f7e02-580a-4093-ab42-ea73ffffcfe6/ovnkube-controller/1.log" Nov 22 07:11:26 crc kubenswrapper[4853]: I1122 07:11:26.694770 4853 scope.go:117] "RemoveContainer" containerID="fb3ade875e1b6d2182a278be4708aa0e473e27bb2f8a06ba5dc97ca0d03f629b" Nov 22 07:11:26 crc kubenswrapper[4853]: E1122 07:11:26.694959 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-pqtsz_openshift-ovn-kubernetes(893f7e02-580a-4093-ab42-ea73ffffcfe6)\"" pod="openshift-ovn-kubernetes/ovnkube-node-pqtsz" podUID="893f7e02-580a-4093-ab42-ea73ffffcfe6" Nov 22 07:11:26 crc kubenswrapper[4853]: I1122 07:11:26.708323 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:26Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:26 crc kubenswrapper[4853]: I1122 07:11:26.724048 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rvgxj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbbe3472-17cc-48dd-8e46-393b00149429\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5acc4b664aaf1d5b0471beb25334e65152bc3b9907f911f90e4e7899aaa1ce8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ct6dm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rvgxj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:26Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:26 crc kubenswrapper[4853]: I1122 07:11:26.740791 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ckn94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f60b37f-d6f5-4145-a3e7-cfe92fca6d77\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbf630e17f662af8880d1f34d5073a4f64e723987205f7bbb473a73808c7e935\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\
"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3adc38c2534c8026e260a63060426102130ca1bcbcb4e3741aaf4e87170a4c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3adc38c2534c8026e260a63060426102130ca1bcbcb4e3741aaf4e87170a4c7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db0b80c2471d5179ede5c4055fcebaabe08892f1534218952c2a19ab599be022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db0b80c2471d5179ede5c4055fcebaabe08892f1534218952c2a19ab599be022\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://157ea229b1effe5885acafb7b48db5695864ff001b1293c053aefb494f544f7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://157ea229b1effe5885acafb7b48db5695864ff001b1293c053aefb494f544f7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:11Z\\\",\\\"reason\\\":\
\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://203cba48e280628c33b464c336957085420ca2d345b1acfb5c8b8968ed2317e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://203cba48e280628c33b464c336957085420ca2d345b1acfb5c8b8968ed2317e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a10cb7826668319845471d465df1784a2cfba12ec3892e986bcf316c0c5e3c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a10cb7826668319845471d465df1784a2cfba12ec3892e986bcf316c0c5e3c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc93eb03f5f0cada6bf132a9de4446bc1b8ecafee3769bf18feeecea9d71478f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\
",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc93eb03f5f0cada6bf132a9de4446bc1b8ecafee3769bf18feeecea9d71478f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ckn94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:26Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:26 crc kubenswrapper[4853]: I1122 07:11:26.743145 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:11:26 crc kubenswrapper[4853]: E1122 07:11:26.743411 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:11:58.743373346 +0000 UTC m=+117.583996132 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:11:26 crc kubenswrapper[4853]: I1122 07:11:26.747465 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pd6gs" Nov 22 07:11:26 crc kubenswrapper[4853]: I1122 07:11:26.747483 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 07:11:26 crc kubenswrapper[4853]: I1122 07:11:26.747519 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:11:26 crc kubenswrapper[4853]: I1122 07:11:26.747537 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 07:11:26 crc kubenswrapper[4853]: E1122 07:11:26.747597 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pd6gs" podUID="9cc2bf97-eb39-4b0c-abda-99b49bb530fd" Nov 22 07:11:26 crc kubenswrapper[4853]: E1122 07:11:26.747699 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 07:11:26 crc kubenswrapper[4853]: E1122 07:11:26.747820 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 07:11:26 crc kubenswrapper[4853]: E1122 07:11:26.747989 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 07:11:26 crc kubenswrapper[4853]: I1122 07:11:26.754767 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-pd6gs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9cc2bf97-eb39-4b0c-abda-99b49bb530fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bzn4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bzn4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:15Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-pd6gs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:26Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:26 crc kubenswrapper[4853]: I1122 07:11:26.774422 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 22 07:11:26 crc kubenswrapper[4853]: I1122 07:11:26.774482 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:26 crc kubenswrapper[4853]: I1122 07:11:26.774509 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:26 crc kubenswrapper[4853]: I1122 07:11:26.774532 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:26 crc kubenswrapper[4853]: I1122 07:11:26.774549 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:26Z","lastTransitionTime":"2025-11-22T07:11:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:26 crc kubenswrapper[4853]: I1122 07:11:26.777468 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8dcea4b-e887-437a-972d-e6e10d41f725\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://753693005f8490e9928780fd4f230a1c8aba453e1ccbca71e70299799bfa46ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64d65e4d54cc8971b6f6349f7d32243b7faa5bb5974b22ed904a36fa34bc69fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"sta
rted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0ba60480f038947ca4a2b9f963d38cb004714735d411218eaf6231614c4e21f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c83df4b798499177cfbd1372a939649b6cebb71b50b695af5ba7815cf450669\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed9e89cbcdfc1ac8430f617ee31a367c03be0a407da8af967930fa821e6a12a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://742d02f24a7096b76f82453ab15e531d4a378b4bbf0bb36eb1a1cb4ae1873165\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":
\\\"cri-o://742d02f24a7096b76f82453ab15e531d4a378b4bbf0bb36eb1a1cb4ae1873165\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2253bab71d535b61f7489679566abc1acf210e0d5f69773fedbb02befab17c2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2253bab71d535b61f7489679566abc1acf210e0d5f69773fedbb02befab17c2d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:21Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8fa71a988b649937f137ffb7533456570bf4fcd199134424fc7bff2b62335f1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fa71a988b649937f137ffb7533456570bf4fcd199134424fc7bff2b62335f1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:16Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:26Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:26 crc kubenswrapper[4853]: I1122 07:11:26.794060 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9c07602dc6c3b45809dd76a7079002e54e0bae24f7e3bc3470b463389303e74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:26Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:26 crc kubenswrapper[4853]: I1122 07:11:26.810651 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:26Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:26 crc kubenswrapper[4853]: I1122 07:11:26.825874 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9nx9m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120cba0a-6e0b-40b3-8c15-46e7ff7c8641\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7b7b92862fc6cca812cb5c0aa9b320fbbd11f8b458d591b56cacab985a79edd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4p7q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9nx9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:26Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:26 crc kubenswrapper[4853]: I1122 07:11:26.844180 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 07:11:26 crc kubenswrapper[4853]: I1122 07:11:26.844244 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:11:26 crc kubenswrapper[4853]: I1122 07:11:26.844333 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:11:26 crc kubenswrapper[4853]: I1122 07:11:26.844372 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 07:11:26 crc kubenswrapper[4853]: E1122 07:11:26.844380 4853 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 22 07:11:26 crc kubenswrapper[4853]: E1122 07:11:26.844394 4853 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 22 07:11:26 crc kubenswrapper[4853]: E1122 07:11:26.844489 4853 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 22 07:11:26 crc kubenswrapper[4853]: E1122 07:11:26.844512 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-22 07:11:58.844443855 +0000 UTC m=+117.685066491 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 22 07:11:26 crc kubenswrapper[4853]: E1122 07:11:26.844535 4853 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 22 07:11:26 crc kubenswrapper[4853]: E1122 07:11:26.844555 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-22 07:11:58.844538818 +0000 UTC m=+117.685161444 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 22 07:11:26 crc kubenswrapper[4853]: E1122 07:11:26.844559 4853 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 22 07:11:26 crc kubenswrapper[4853]: E1122 07:11:26.844610 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-22 07:11:58.844596109 +0000 UTC m=+117.685218745 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 22 07:11:26 crc kubenswrapper[4853]: E1122 07:11:26.844795 4853 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 22 07:11:26 crc kubenswrapper[4853]: E1122 07:11:26.844844 4853 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 22 07:11:26 crc kubenswrapper[4853]: E1122 07:11:26.844874 4853 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 22 07:11:26 crc kubenswrapper[4853]: E1122 07:11:26.844972 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-22 07:11:58.844941989 +0000 UTC m=+117.685564635 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 22 07:11:26 crc kubenswrapper[4853]: I1122 07:11:26.845124 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"19e6b5d3-a090-47b9-bddc-dd2aaccd4ff4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96b61660fcd78f2f81eab2c1a0cb32018a5b32d49581167e33e7dca8ba5ddf2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a0d61ce2e58ce8d9abdd1fb9688722d8e25f8354553aef76b8cd4b3897135a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21be0b3346d0dd20c21a867b8fbd30e6291d09bd1745febbc2face7610f6d831\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e4d5ab3d780b4ec1360dd8a07fc7de5d4daec3a5f8334010949d4fd60b5516b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6dbbb1c22bd116e5913c6bc774b4705f0e4342861ceae30dc0f53c30d6fa39f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T07:10:53Z\\\",\\\"message\\\":\\\" envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" 
enabled=false\\\\nI1122 07:10:53.494379 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1122 07:10:53.495005 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1122 07:10:53.495036 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1122 07:10:53.495110 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1122 07:10:53.495131 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1122 07:10:53.496216 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1763795444\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1763795444\\\\\\\\\\\\\\\" (2025-11-22 06:10:44 +0000 UTC to 2026-11-22 06:10:44 +0000 UTC (now=2025-11-22 07:10:53.49617265 +0000 UTC))\\\\\\\"\\\\nI1122 07:10:53.496261 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1122 07:10:53.496336 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1122 07:10:53.497036 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1122 07:10:53.497137 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1122 07:10:53.498721 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI1122 07:10:53.500291 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF1122 07:10:53.501305 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:32Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c1ee716027cb8b663ee9faec3faed873df5079418b9b46574cf3b3a74048eef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13ca1fba1325e0ae8556def3ba0c98aa0aff6e3b9ed2aab66a5c950d2bace44e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13ca1fba1325e0ae8556def3ba0c98aa0aff6e3b9ed2aab66a5c950d2bace44e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:26Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:26 crc kubenswrapper[4853]: I1122 07:11:26.860054 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mlpz8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c09200ba-013f-45e3-b581-8523557344b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3feee491b899c2b2f58330fe2e87ba5404deca4249c5d39a6d3aa08acd10ef29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvk87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mlpz8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:26Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:26 crc kubenswrapper[4853]: I1122 07:11:26.876817 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:26 crc kubenswrapper[4853]: I1122 07:11:26.876861 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:26 crc kubenswrapper[4853]: I1122 07:11:26.876873 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:26 crc kubenswrapper[4853]: I1122 07:11:26.876895 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:26 crc kubenswrapper[4853]: I1122 07:11:26.876909 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:26Z","lastTransitionTime":"2025-11-22T07:11:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:26 crc kubenswrapper[4853]: I1122 07:11:26.881783 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqtsz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"893f7e02-580a-4093-ab42-ea73ffffcfe6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://959a487fe36b2bf7615c5ed65bc8255b52332f6e2d973912a5e028ff35324f2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://979f6b48ff6c91deca2bb0784d1ec745c4c0d70abdfd6f519eb82058fec05c5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34d7101a46b49cedde91b3502f1bc962179253a1bfbe5f21599ce89a91ab905d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28ab876013e6c4eb3a4806b0184f1d6615a478a34b6d8ee790d29bccd77f33df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccf377912acc39f2d30013ef3bd58a8268b927fd89da2306fcf00cb89c30893e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a08eb68f9d8b071226ac64744097fe0097ab78f86242872c4e0aacdd213adbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d7732574532
65a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb3ade875e1b6d2182a278be4708aa0e473e27bb2f8a06ba5dc97ca0d03f629b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb3ade875e1b6d2182a278be4708aa0e473e27bb2f8a06ba5dc97ca0d03f629b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-22T07:11:24Z\\\",\\\"message\\\":\\\":160\\\\nI1122 07:11:24.426832 6516 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1122 07:11:24.426933 6516 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI1122 07:11:24.426998 6516 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1122 07:11:24.427601 6516 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1122 07:11:24.427608 6516 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1122 07:11:24.427783 6516 factory.go:656] Stopping watch factory\\\\nI1122 07:11:24.427832 6516 handler.go:208] Removed *v1.Node event handler 2\\\\nI1122 07:11:24.430415 6516 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI1122 07:11:24.430434 6516 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI1122 07:11:24.430512 6516 ovnkube.go:599] Stopped ovnkube\\\\nI1122 07:11:24.430561 6516 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1122 07:11:24.430654 6516 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:23Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=ovnkube-controller pod=ovnkube-node-pqtsz_openshift-ovn-kubernetes(893f7e02-580a-4093-ab42-ea73ffffcfe6)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://902e41148618e1099d651f2a4cbfa00ca1dd11533b086564f090dd498ecc1b95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7b9396399dd9787c1addcb4e2e7697d78fee817d56cafd408b08b2b7d0cc5f5\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7b9396399dd9787c1addcb4e2e7697d78fee817d56cafd408b08b2b7d0cc5f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pqtsz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:26Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:26 crc kubenswrapper[4853]: I1122 07:11:26.897982 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"476c875a-2b87-419a-8042-0ba059620fd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://babd614ecd140db7467335933695fcc122380ef2c6a3d4ae6e89b0ed29e87d69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\
\\"kube-api-access-zkhrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28cf28a4f0e05df5ad55eff2ab13e375fb1d5725f1d0e3b85c7c9cd785cf4453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zkhrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fflvd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:26Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:26 crc kubenswrapper[4853]: I1122 07:11:26.914383 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nhlw4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"81cf3334-f910-4d46-be00-b3cd66ba8ed4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbfa536576b192a75c2a3eef9dfde8c7f7de0a8e6edae3870f45f1c44cc48163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ckff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86bcf649be20aa1cc1bce89663ea6b7b920ac2548fbf8a22c87636bdafc4b329\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ckff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nhlw4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:26Z is after 2025-08-24T17:21:41Z" Nov 22 
07:11:26 crc kubenswrapper[4853]: I1122 07:11:26.930448 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63551235-20f7-4ccc-a132-9f5302e01da5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0368eb3431a7b8e0225b3728f72d601ce3f9639f06175b96818fd798b33e4143\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fc661089ab1f7286ead68fd510a0848c9df8dd41a8ed349d03b43b72e1a369a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdc0676190f39e3a925b593e663ec1af1fa253b8177b7cb81fd49515e0c55c3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.
126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a97a0d92e5fdd8906fa4f150c4b7d95fdd1f3b92ac45a46e0a6846fd4a15bb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a97a0d92e5fdd8906fa4f150c4b7d95fdd1f3b92ac45a46e0a6846fd4a15bb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:19Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:16Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:26Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:26 crc kubenswrapper[4853]: I1122 07:11:26.954241 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b27716f9e6c912f968c4a340199cdc2dde0262d86b4832f84cae8daefa831909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a2fb699e2e8a016fa383c1d4ed2d4f0d69c26b79fb2f0dd735a84de2fd5a23d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\
\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:26Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:26 crc kubenswrapper[4853]: I1122 07:11:26.970589 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://347fc1ee4909acd862f23a9f25b4ac7ab7271ff03f0f726479d9b711820970c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:26Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:26 crc kubenswrapper[4853]: I1122 07:11:26.980366 4853 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:26 crc kubenswrapper[4853]: I1122 07:11:26.980434 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:26 crc kubenswrapper[4853]: I1122 07:11:26.980446 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:26 crc kubenswrapper[4853]: I1122 07:11:26.980465 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:26 crc kubenswrapper[4853]: I1122 07:11:26.980477 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:26Z","lastTransitionTime":"2025-11-22T07:11:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:26 crc kubenswrapper[4853]: I1122 07:11:26.988647 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9003355-0dc6-42ad-950d-a7884e333f4c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f0019c0473b7ccc974dc1659a7ae0565b39d69af17094c420625387ecc9d7f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94704fd95e75549ea29613f00da78d4517cc64896cf1274c9c6f784a61f4ad4a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T07:10:53Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI1122 07:10:26.514729 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI1122 07:10:26.525241 1 observer_polling.go:159] Starting file observer\\\\nI1122 07:10:26.883412 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI1122 07:10:26.897202 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI1122 07:10:50.420853 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nI1122 07:10:51.525898 1 observer_polling.go:162] Shutting down file observer\\\\nF1122 07:10:53.463148 1 cmd.go:179] failed checking apiserver connectivity: client rate limiter Wait returned an error: context canceled\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:21Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9736c38776f22909b8e8a832683d1b881f020257f7183397286e0dd23c973a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30f542bc05dc23bc8dff5029894d08a6fea8333de31a66432814b7bbc412c4fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5d8d77818159ca98019aac809db4b1b2460723bc56b9b6728eec11faeefd026\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager
-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:26Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:27 crc kubenswrapper[4853]: I1122 07:11:27.010905 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:27Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:27 crc kubenswrapper[4853]: I1122 07:11:27.083918 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:27 crc kubenswrapper[4853]: I1122 07:11:27.083958 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:27 crc kubenswrapper[4853]: I1122 07:11:27.083968 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:27 crc kubenswrapper[4853]: I1122 07:11:27.083985 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:27 crc kubenswrapper[4853]: I1122 07:11:27.083996 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:27Z","lastTransitionTime":"2025-11-22T07:11:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:27 crc kubenswrapper[4853]: I1122 07:11:27.187106 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:27 crc kubenswrapper[4853]: I1122 07:11:27.187402 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:27 crc kubenswrapper[4853]: I1122 07:11:27.187505 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:27 crc kubenswrapper[4853]: I1122 07:11:27.187569 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:27 crc kubenswrapper[4853]: I1122 07:11:27.187634 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:27Z","lastTransitionTime":"2025-11-22T07:11:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:27 crc kubenswrapper[4853]: I1122 07:11:27.290655 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:27 crc kubenswrapper[4853]: I1122 07:11:27.291897 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:27 crc kubenswrapper[4853]: I1122 07:11:27.291934 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:27 crc kubenswrapper[4853]: I1122 07:11:27.291959 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:27 crc kubenswrapper[4853]: I1122 07:11:27.291972 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:27Z","lastTransitionTime":"2025-11-22T07:11:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:27 crc kubenswrapper[4853]: I1122 07:11:27.395730 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:27 crc kubenswrapper[4853]: I1122 07:11:27.395791 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:27 crc kubenswrapper[4853]: I1122 07:11:27.395802 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:27 crc kubenswrapper[4853]: I1122 07:11:27.395824 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:27 crc kubenswrapper[4853]: I1122 07:11:27.395836 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:27Z","lastTransitionTime":"2025-11-22T07:11:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:27 crc kubenswrapper[4853]: I1122 07:11:27.498462 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:27 crc kubenswrapper[4853]: I1122 07:11:27.498516 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:27 crc kubenswrapper[4853]: I1122 07:11:27.498568 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:27 crc kubenswrapper[4853]: I1122 07:11:27.498608 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:27 crc kubenswrapper[4853]: I1122 07:11:27.498620 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:27Z","lastTransitionTime":"2025-11-22T07:11:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:27 crc kubenswrapper[4853]: I1122 07:11:27.602290 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:27 crc kubenswrapper[4853]: I1122 07:11:27.602357 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:27 crc kubenswrapper[4853]: I1122 07:11:27.602374 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:27 crc kubenswrapper[4853]: I1122 07:11:27.602405 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:27 crc kubenswrapper[4853]: I1122 07:11:27.602420 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:27Z","lastTransitionTime":"2025-11-22T07:11:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:27 crc kubenswrapper[4853]: I1122 07:11:27.705242 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:27 crc kubenswrapper[4853]: I1122 07:11:27.705299 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:27 crc kubenswrapper[4853]: I1122 07:11:27.705314 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:27 crc kubenswrapper[4853]: I1122 07:11:27.705341 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:27 crc kubenswrapper[4853]: I1122 07:11:27.705355 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:27Z","lastTransitionTime":"2025-11-22T07:11:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:27 crc kubenswrapper[4853]: I1122 07:11:27.809174 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:27 crc kubenswrapper[4853]: I1122 07:11:27.809230 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:27 crc kubenswrapper[4853]: I1122 07:11:27.809246 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:27 crc kubenswrapper[4853]: I1122 07:11:27.809267 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:27 crc kubenswrapper[4853]: I1122 07:11:27.809281 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:27Z","lastTransitionTime":"2025-11-22T07:11:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:27 crc kubenswrapper[4853]: I1122 07:11:27.912154 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:27 crc kubenswrapper[4853]: I1122 07:11:27.912215 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:27 crc kubenswrapper[4853]: I1122 07:11:27.912228 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:27 crc kubenswrapper[4853]: I1122 07:11:27.912251 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:27 crc kubenswrapper[4853]: I1122 07:11:27.912265 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:27Z","lastTransitionTime":"2025-11-22T07:11:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:28 crc kubenswrapper[4853]: I1122 07:11:28.015133 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:28 crc kubenswrapper[4853]: I1122 07:11:28.015187 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:28 crc kubenswrapper[4853]: I1122 07:11:28.015200 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:28 crc kubenswrapper[4853]: I1122 07:11:28.015223 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:28 crc kubenswrapper[4853]: I1122 07:11:28.015239 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:28Z","lastTransitionTime":"2025-11-22T07:11:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:28 crc kubenswrapper[4853]: I1122 07:11:28.118838 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:28 crc kubenswrapper[4853]: I1122 07:11:28.118893 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:28 crc kubenswrapper[4853]: I1122 07:11:28.118907 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:28 crc kubenswrapper[4853]: I1122 07:11:28.118926 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:28 crc kubenswrapper[4853]: I1122 07:11:28.118941 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:28Z","lastTransitionTime":"2025-11-22T07:11:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:28 crc kubenswrapper[4853]: I1122 07:11:28.221707 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:28 crc kubenswrapper[4853]: I1122 07:11:28.222057 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:28 crc kubenswrapper[4853]: I1122 07:11:28.222136 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:28 crc kubenswrapper[4853]: I1122 07:11:28.222220 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:28 crc kubenswrapper[4853]: I1122 07:11:28.222333 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:28Z","lastTransitionTime":"2025-11-22T07:11:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:28 crc kubenswrapper[4853]: I1122 07:11:28.325545 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:28 crc kubenswrapper[4853]: I1122 07:11:28.325598 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:28 crc kubenswrapper[4853]: I1122 07:11:28.325612 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:28 crc kubenswrapper[4853]: I1122 07:11:28.325632 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:28 crc kubenswrapper[4853]: I1122 07:11:28.325646 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:28Z","lastTransitionTime":"2025-11-22T07:11:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:28 crc kubenswrapper[4853]: I1122 07:11:28.429727 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:28 crc kubenswrapper[4853]: I1122 07:11:28.429859 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:28 crc kubenswrapper[4853]: I1122 07:11:28.429877 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:28 crc kubenswrapper[4853]: I1122 07:11:28.430326 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:28 crc kubenswrapper[4853]: I1122 07:11:28.430389 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:28Z","lastTransitionTime":"2025-11-22T07:11:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:28 crc kubenswrapper[4853]: I1122 07:11:28.533114 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:28 crc kubenswrapper[4853]: I1122 07:11:28.533157 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:28 crc kubenswrapper[4853]: I1122 07:11:28.533166 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:28 crc kubenswrapper[4853]: I1122 07:11:28.533184 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:28 crc kubenswrapper[4853]: I1122 07:11:28.533196 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:28Z","lastTransitionTime":"2025-11-22T07:11:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:28 crc kubenswrapper[4853]: I1122 07:11:28.636039 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:28 crc kubenswrapper[4853]: I1122 07:11:28.636097 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:28 crc kubenswrapper[4853]: I1122 07:11:28.636110 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:28 crc kubenswrapper[4853]: I1122 07:11:28.636135 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:28 crc kubenswrapper[4853]: I1122 07:11:28.636148 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:28Z","lastTransitionTime":"2025-11-22T07:11:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:28 crc kubenswrapper[4853]: I1122 07:11:28.740194 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:28 crc kubenswrapper[4853]: I1122 07:11:28.740245 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:28 crc kubenswrapper[4853]: I1122 07:11:28.740257 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:28 crc kubenswrapper[4853]: I1122 07:11:28.740278 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:28 crc kubenswrapper[4853]: I1122 07:11:28.740295 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:28Z","lastTransitionTime":"2025-11-22T07:11:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:28 crc kubenswrapper[4853]: I1122 07:11:28.747728 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pd6gs" Nov 22 07:11:28 crc kubenswrapper[4853]: I1122 07:11:28.747782 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 07:11:28 crc kubenswrapper[4853]: I1122 07:11:28.747842 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 07:11:28 crc kubenswrapper[4853]: I1122 07:11:28.747863 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:11:28 crc kubenswrapper[4853]: E1122 07:11:28.747923 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 07:11:28 crc kubenswrapper[4853]: E1122 07:11:28.747993 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 07:11:28 crc kubenswrapper[4853]: E1122 07:11:28.748035 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 07:11:28 crc kubenswrapper[4853]: E1122 07:11:28.748103 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-pd6gs" podUID="9cc2bf97-eb39-4b0c-abda-99b49bb530fd" Nov 22 07:11:28 crc kubenswrapper[4853]: I1122 07:11:28.843505 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:28 crc kubenswrapper[4853]: I1122 07:11:28.843733 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:28 crc kubenswrapper[4853]: I1122 07:11:28.843745 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:28 crc kubenswrapper[4853]: I1122 07:11:28.843791 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:28 crc kubenswrapper[4853]: I1122 07:11:28.843805 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:28Z","lastTransitionTime":"2025-11-22T07:11:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:28 crc kubenswrapper[4853]: I1122 07:11:28.946886 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:28 crc kubenswrapper[4853]: I1122 07:11:28.946936 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:28 crc kubenswrapper[4853]: I1122 07:11:28.946947 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:28 crc kubenswrapper[4853]: I1122 07:11:28.946969 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:28 crc kubenswrapper[4853]: I1122 07:11:28.946990 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:28Z","lastTransitionTime":"2025-11-22T07:11:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:29 crc kubenswrapper[4853]: I1122 07:11:29.051227 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:29 crc kubenswrapper[4853]: I1122 07:11:29.051280 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:29 crc kubenswrapper[4853]: I1122 07:11:29.051294 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:29 crc kubenswrapper[4853]: I1122 07:11:29.051318 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:29 crc kubenswrapper[4853]: I1122 07:11:29.051332 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:29Z","lastTransitionTime":"2025-11-22T07:11:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:29 crc kubenswrapper[4853]: I1122 07:11:29.155317 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:29 crc kubenswrapper[4853]: I1122 07:11:29.155372 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:29 crc kubenswrapper[4853]: I1122 07:11:29.155382 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:29 crc kubenswrapper[4853]: I1122 07:11:29.155409 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:29 crc kubenswrapper[4853]: I1122 07:11:29.155422 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:29Z","lastTransitionTime":"2025-11-22T07:11:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:29 crc kubenswrapper[4853]: I1122 07:11:29.258898 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:29 crc kubenswrapper[4853]: I1122 07:11:29.258953 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:29 crc kubenswrapper[4853]: I1122 07:11:29.258966 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:29 crc kubenswrapper[4853]: I1122 07:11:29.258988 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:29 crc kubenswrapper[4853]: I1122 07:11:29.259001 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:29Z","lastTransitionTime":"2025-11-22T07:11:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:29 crc kubenswrapper[4853]: I1122 07:11:29.362012 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:29 crc kubenswrapper[4853]: I1122 07:11:29.362073 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:29 crc kubenswrapper[4853]: I1122 07:11:29.362089 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:29 crc kubenswrapper[4853]: I1122 07:11:29.362110 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:29 crc kubenswrapper[4853]: I1122 07:11:29.362125 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:29Z","lastTransitionTime":"2025-11-22T07:11:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:29 crc kubenswrapper[4853]: I1122 07:11:29.472400 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:29 crc kubenswrapper[4853]: I1122 07:11:29.472485 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:29 crc kubenswrapper[4853]: I1122 07:11:29.472506 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:29 crc kubenswrapper[4853]: I1122 07:11:29.472544 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:29 crc kubenswrapper[4853]: I1122 07:11:29.472565 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:29Z","lastTransitionTime":"2025-11-22T07:11:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:29 crc kubenswrapper[4853]: I1122 07:11:29.576210 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:29 crc kubenswrapper[4853]: I1122 07:11:29.576288 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:29 crc kubenswrapper[4853]: I1122 07:11:29.576307 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:29 crc kubenswrapper[4853]: I1122 07:11:29.576333 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:29 crc kubenswrapper[4853]: I1122 07:11:29.576355 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:29Z","lastTransitionTime":"2025-11-22T07:11:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:29 crc kubenswrapper[4853]: I1122 07:11:29.679199 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:29 crc kubenswrapper[4853]: I1122 07:11:29.679286 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:29 crc kubenswrapper[4853]: I1122 07:11:29.679315 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:29 crc kubenswrapper[4853]: I1122 07:11:29.679348 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:29 crc kubenswrapper[4853]: I1122 07:11:29.679371 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:29Z","lastTransitionTime":"2025-11-22T07:11:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:29 crc kubenswrapper[4853]: I1122 07:11:29.782124 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:29 crc kubenswrapper[4853]: I1122 07:11:29.782171 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:29 crc kubenswrapper[4853]: I1122 07:11:29.782183 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:29 crc kubenswrapper[4853]: I1122 07:11:29.782204 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:29 crc kubenswrapper[4853]: I1122 07:11:29.782217 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:29Z","lastTransitionTime":"2025-11-22T07:11:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:29 crc kubenswrapper[4853]: I1122 07:11:29.884648 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:29 crc kubenswrapper[4853]: I1122 07:11:29.884703 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:29 crc kubenswrapper[4853]: I1122 07:11:29.884714 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:29 crc kubenswrapper[4853]: I1122 07:11:29.884732 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:29 crc kubenswrapper[4853]: I1122 07:11:29.884765 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:29Z","lastTransitionTime":"2025-11-22T07:11:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:29 crc kubenswrapper[4853]: I1122 07:11:29.987313 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:29 crc kubenswrapper[4853]: I1122 07:11:29.987365 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:29 crc kubenswrapper[4853]: I1122 07:11:29.987383 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:29 crc kubenswrapper[4853]: I1122 07:11:29.987400 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:29 crc kubenswrapper[4853]: I1122 07:11:29.987413 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:29Z","lastTransitionTime":"2025-11-22T07:11:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:30 crc kubenswrapper[4853]: I1122 07:11:30.090428 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:30 crc kubenswrapper[4853]: I1122 07:11:30.090476 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:30 crc kubenswrapper[4853]: I1122 07:11:30.090486 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:30 crc kubenswrapper[4853]: I1122 07:11:30.090500 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:30 crc kubenswrapper[4853]: I1122 07:11:30.090510 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:30Z","lastTransitionTime":"2025-11-22T07:11:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:30 crc kubenswrapper[4853]: I1122 07:11:30.194242 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:30 crc kubenswrapper[4853]: I1122 07:11:30.194305 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:30 crc kubenswrapper[4853]: I1122 07:11:30.194317 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:30 crc kubenswrapper[4853]: I1122 07:11:30.194341 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:30 crc kubenswrapper[4853]: I1122 07:11:30.194352 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:30Z","lastTransitionTime":"2025-11-22T07:11:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:30 crc kubenswrapper[4853]: I1122 07:11:30.297117 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:30 crc kubenswrapper[4853]: I1122 07:11:30.297193 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:30 crc kubenswrapper[4853]: I1122 07:11:30.297211 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:30 crc kubenswrapper[4853]: I1122 07:11:30.297234 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:30 crc kubenswrapper[4853]: I1122 07:11:30.297249 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:30Z","lastTransitionTime":"2025-11-22T07:11:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:30 crc kubenswrapper[4853]: I1122 07:11:30.399773 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:30 crc kubenswrapper[4853]: I1122 07:11:30.399832 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:30 crc kubenswrapper[4853]: I1122 07:11:30.399845 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:30 crc kubenswrapper[4853]: I1122 07:11:30.399869 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:30 crc kubenswrapper[4853]: I1122 07:11:30.399879 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:30Z","lastTransitionTime":"2025-11-22T07:11:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:30 crc kubenswrapper[4853]: I1122 07:11:30.501905 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:30 crc kubenswrapper[4853]: I1122 07:11:30.501953 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:30 crc kubenswrapper[4853]: I1122 07:11:30.501966 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:30 crc kubenswrapper[4853]: I1122 07:11:30.501985 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:30 crc kubenswrapper[4853]: I1122 07:11:30.501998 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:30Z","lastTransitionTime":"2025-11-22T07:11:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:30 crc kubenswrapper[4853]: I1122 07:11:30.605707 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:30 crc kubenswrapper[4853]: I1122 07:11:30.605783 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:30 crc kubenswrapper[4853]: I1122 07:11:30.605799 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:30 crc kubenswrapper[4853]: I1122 07:11:30.605819 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:30 crc kubenswrapper[4853]: I1122 07:11:30.605833 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:30Z","lastTransitionTime":"2025-11-22T07:11:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:30 crc kubenswrapper[4853]: I1122 07:11:30.709769 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:30 crc kubenswrapper[4853]: I1122 07:11:30.709812 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:30 crc kubenswrapper[4853]: I1122 07:11:30.709822 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:30 crc kubenswrapper[4853]: I1122 07:11:30.709843 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:30 crc kubenswrapper[4853]: I1122 07:11:30.709854 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:30Z","lastTransitionTime":"2025-11-22T07:11:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:30 crc kubenswrapper[4853]: I1122 07:11:30.746676 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 07:11:30 crc kubenswrapper[4853]: I1122 07:11:30.746800 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:11:30 crc kubenswrapper[4853]: E1122 07:11:30.746847 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 07:11:30 crc kubenswrapper[4853]: I1122 07:11:30.746873 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 07:11:30 crc kubenswrapper[4853]: I1122 07:11:30.746810 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pd6gs" Nov 22 07:11:30 crc kubenswrapper[4853]: E1122 07:11:30.746968 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 07:11:30 crc kubenswrapper[4853]: E1122 07:11:30.747080 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 07:11:30 crc kubenswrapper[4853]: E1122 07:11:30.747239 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pd6gs" podUID="9cc2bf97-eb39-4b0c-abda-99b49bb530fd" Nov 22 07:11:30 crc kubenswrapper[4853]: I1122 07:11:30.781452 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 22 07:11:30 crc kubenswrapper[4853]: I1122 07:11:30.809702 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
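The "Error syncing pod" records above all funnel back to one root cause: the network plugin reports NetworkReady=false because /etc/kubernetes/cni/net.d/ contains no CNI configuration, so no pod sandbox can get a network. A minimal sketch of the kind of existence check involved, assuming the libcni convention of accepting .conf, .conflist, and .json files; this is illustrative, not the kubelet's source.

```go
// Sketch of the check implied by "no CNI configuration file in
// /etc/kubernetes/cni/net.d/": scan the conf dir for usable config files.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func cniConfFiles(dir string) ([]string, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return nil, err
	}
	var files []string
	for _, e := range entries {
		if e.IsDir() {
			continue
		}
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json": // extensions libcni conventionally accepts
			files = append(files, filepath.Join(dir, e.Name()))
		}
	}
	return files, nil
}

func main() {
	files, err := cniConfFiles("/etc/kubernetes/cni/net.d")
	if err != nil || len(files) == 0 {
		fmt.Println("network not ready: no CNI configuration file found; err:", err)
		return
	}
	fmt.Println("CNI configs:", files)
}
```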
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://347fc1ee4909acd862f23a9f25b4ac7ab7271ff03f0f726479d9b711820970c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:30Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:30 crc kubenswrapper[4853]: I1122 07:11:30.819424 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:30 crc kubenswrapper[4853]: I1122 07:11:30.819464 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:30 crc kubenswrapper[4853]: I1122 07:11:30.819474 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:30 crc kubenswrapper[4853]: I1122 07:11:30.819493 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:30 crc kubenswrapper[4853]: I1122 07:11:30.819505 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:30Z","lastTransitionTime":"2025-11-22T07:11:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:30 crc kubenswrapper[4853]: I1122 07:11:30.835014 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"476c875a-2b87-419a-8042-0ba059620fd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://babd614ecd140db7467335933695fcc122380ef2c6a3d4ae6e89b0ed29e87d69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zkhrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28cf28a4f0e05df5ad55eff2ab13e375fb1d5725f1d0e3b85c7c9cd785cf4453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zkhrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fflvd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:30Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:30 crc kubenswrapper[4853]: I1122 07:11:30.857126 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nhlw4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81cf3334-f910-4d46-be00-b3cd66ba8ed4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbfa536576b192a75c2a3eef9dfde8c7f7de0a8e6edae3870f45f1c44cc48163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ckff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86bcf649be20aa1cc1bce89663ea6b7b920ac2548fbf8a22c87636bdafc4b329\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ckff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:
13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nhlw4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:30Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:30 crc kubenswrapper[4853]: I1122 07:11:30.864120 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:30 crc kubenswrapper[4853]: I1122 07:11:30.864174 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:30 crc kubenswrapper[4853]: I1122 07:11:30.864191 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:30 crc kubenswrapper[4853]: I1122 07:11:30.864209 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:30 crc kubenswrapper[4853]: I1122 07:11:30.864222 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:30Z","lastTransitionTime":"2025-11-22T07:11:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:30 crc kubenswrapper[4853]: I1122 07:11:30.880808 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63551235-20f7-4ccc-a132-9f5302e01da5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0368eb3431a7b8e0225b3728f72d601ce3f9639f06175b96818fd798b33e4143\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fc661089ab1f7286ead68fd510a0848c9df8dd41a8ed349d03b43b72e1a369a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdc0676190f39e3a925b593e663ec1af1fa253b8177b7cb81fd49515e0c55c3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a97a0d92e5fdd8906fa4f150c4b7d95fdd1f3b92ac45a46e0a6846fd4a15bb0\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a97a0d92e5fdd8906fa4f150c4b7d95fdd1f3b92ac45a46e0a6846fd4a15bb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:19Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:16Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:30Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:30 crc kubenswrapper[4853]: E1122 07:11:30.886075 4853 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:11:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:11:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:11:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:11:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d74141ce-7696-4d74-b510-3a9c2c375ecd\\\",\\\"systemUUID\\\":\\\"362c9708-b683-4c02-a83b-39323a200ef4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:30Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:30 crc kubenswrapper[4853]: I1122 07:11:30.890229 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:30 crc kubenswrapper[4853]: I1122 07:11:30.890282 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 22 07:11:30 crc kubenswrapper[4853]: I1122 07:11:30.890293 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:30 crc kubenswrapper[4853]: I1122 07:11:30.890310 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:30 crc kubenswrapper[4853]: I1122 07:11:30.890321 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:30Z","lastTransitionTime":"2025-11-22T07:11:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:30 crc kubenswrapper[4853]: E1122 07:11:30.904301 4853 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:11:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:11:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:11:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:11:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d74141ce-7696-4d74-b510-3a9c2c375ecd\\\",\\\"systemUUID\\\":\\\"362c9708-b683-4c02-a83b-39323a200ef4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:30Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:30 crc kubenswrapper[4853]: I1122 07:11:30.907477 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b27716f9e6c912f968c4a340199cdc2dde0262d86b4832f84cae8daefa831909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a2fb699e2e8a016fa383c1d4ed2d4f0d69c26b79fb2f0dd735a84de2fd5a23d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:30Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:30 crc kubenswrapper[4853]: I1122 07:11:30.908614 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:30 crc kubenswrapper[4853]: I1122 07:11:30.908650 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:30 crc kubenswrapper[4853]: I1122 07:11:30.908659 4853 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Nov 22 07:11:30 crc kubenswrapper[4853]: I1122 07:11:30.908678 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:30 crc kubenswrapper[4853]: I1122 07:11:30.908688 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:30Z","lastTransitionTime":"2025-11-22T07:11:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:30 crc kubenswrapper[4853]: E1122 07:11:30.922698 4853 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:11:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:11:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:11:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:11:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d74141ce-7696-4d74-b510-3a9c2c375ecd\\\",\\\"systemUUID\\\":\\\"362c9708-b683-4c02-a83b-39323a200ef4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:30Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:30 crc kubenswrapper[4853]: I1122 07:11:30.924154 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:30Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:30 crc kubenswrapper[4853]: I1122 07:11:30.928368 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:30 crc kubenswrapper[4853]: I1122 07:11:30.928409 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:30 crc kubenswrapper[4853]: I1122 07:11:30.928421 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:30 crc kubenswrapper[4853]: I1122 07:11:30.928442 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:30 crc kubenswrapper[4853]: I1122 07:11:30.928455 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:30Z","lastTransitionTime":"2025-11-22T07:11:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:30 crc kubenswrapper[4853]: I1122 07:11:30.939852 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9003355-0dc6-42ad-950d-a7884e333f4c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f0019c0473b7ccc974dc1659a7ae0565b39d69af17094c420625387ecc9d7f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94704fd95e75549ea29613f00da78d4517cc64896cf1274c9c6f784a61f4ad4a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T07:10:53Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI1122 07:10:26.514729 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI1122 07:10:26.525241 1 observer_polling.go:159] Starting file observer\\\\nI1122 07:10:26.883412 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI1122 07:10:26.897202 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI1122 07:10:50.420853 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nI1122 07:10:51.525898 1 observer_polling.go:162] Shutting down file observer\\\\nF1122 07:10:53.463148 1 cmd.go:179] failed checking apiserver connectivity: client rate limiter Wait returned an error: context canceled\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:21Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9736c38776f22909b8e8a832683d1b881f020257f7183397286e0dd23c973a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30f542bc05dc23bc8dff5029894d08a6fea8333de31a66432814b7bbc412c4fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5d8d77818159ca98019aac809db4b1b2460723bc56b9b6728eec11faeefd026\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager
-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:30Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:30 crc kubenswrapper[4853]: E1122 07:11:30.941431 4853 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:11:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:11:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:11:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:11:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d74141ce-7696-4d74-b510-3a9c2c375ecd\\\",\\\"systemUUID\\\":\\\"362c9708-b683-4c02-a83b-39323a200ef4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:30Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:30 crc kubenswrapper[4853]: I1122 07:11:30.947241 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:30 crc kubenswrapper[4853]: I1122 07:11:30.947279 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 22 07:11:30 crc kubenswrapper[4853]: I1122 07:11:30.947291 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:30 crc kubenswrapper[4853]: I1122 07:11:30.947315 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:30 crc kubenswrapper[4853]: I1122 07:11:30.947334 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:30Z","lastTransitionTime":"2025-11-22T07:11:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:30 crc kubenswrapper[4853]: I1122 07:11:30.960245 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:30Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:30 crc kubenswrapper[4853]: E1122 07:11:30.963640 4853 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:11:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:11:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:11:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:11:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d74141ce-7696-4d74-b510-3a9c2c375ecd\\\",\\\"systemUUID\\\":\\\"362c9708-b683-4c02-a83b-39323a200ef4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:30Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:30 crc kubenswrapper[4853]: E1122 07:11:30.963896 4853 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 22 07:11:30 crc kubenswrapper[4853]: I1122 07:11:30.965845 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 22 07:11:30 crc kubenswrapper[4853]: I1122 07:11:30.965887 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:30 crc kubenswrapper[4853]: I1122 07:11:30.965899 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:30 crc kubenswrapper[4853]: I1122 07:11:30.965922 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:30 crc kubenswrapper[4853]: I1122 07:11:30.965937 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:30Z","lastTransitionTime":"2025-11-22T07:11:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:30 crc kubenswrapper[4853]: I1122 07:11:30.973769 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:30Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:30 crc kubenswrapper[4853]: I1122 07:11:30.990043 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rvgxj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbbe3472-17cc-48dd-8e46-393b00149429\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5acc4b664aaf1d5b0471beb25334e65152bc3b9907f911f90e4e7899aaa1ce8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ct6dm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rvgxj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:30Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:31 crc kubenswrapper[4853]: I1122 07:11:31.007766 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ckn94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f60b37f-d6f5-4145-a3e7-cfe92fca6d77\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbf630e17f662af8880d1f34d5073a4f64e723987205f7bbb473a73808c7e935\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\
"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3adc38c2534c8026e260a63060426102130ca1bcbcb4e3741aaf4e87170a4c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3adc38c2534c8026e260a63060426102130ca1bcbcb4e3741aaf4e87170a4c7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db0b80c2471d5179ede5c4055fcebaabe08892f1534218952c2a19ab599be022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db0b80c2471d5179ede5c4055fcebaabe08892f1534218952c2a19ab599be022\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://157ea229b1effe5885acafb7b48db5695864ff001b1293c053aefb494f544f7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://157ea229b1effe5885acafb7b48db5695864ff001b1293c053aefb494f544f7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:11Z\\\",\\\"reason\\\":\
\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://203cba48e280628c33b464c336957085420ca2d345b1acfb5c8b8968ed2317e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://203cba48e280628c33b464c336957085420ca2d345b1acfb5c8b8968ed2317e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a10cb7826668319845471d465df1784a2cfba12ec3892e986bcf316c0c5e3c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a10cb7826668319845471d465df1784a2cfba12ec3892e986bcf316c0c5e3c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc93eb03f5f0cada6bf132a9de4446bc1b8ecafee3769bf18feeecea9d71478f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\
",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc93eb03f5f0cada6bf132a9de4446bc1b8ecafee3769bf18feeecea9d71478f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ckn94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:31Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:31 crc kubenswrapper[4853]: I1122 07:11:31.021118 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-pd6gs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9cc2bf97-eb39-4b0c-abda-99b49bb530fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bzn4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bzn4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:15Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-pd6gs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:31Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:31 crc kubenswrapper[4853]: I1122 07:11:31.043830 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8dcea4b-e887-437a-972d-e6e10d41f725\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://753693005f8490e9928780fd4f230a1c8aba453e1ccbca71e70299799bfa46ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64d65e4d54cc8971b6f6349f7d32243b7faa5bb5974b22ed904a36fa34bc69fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0ba60480f038947ca4a2b9f963d38cb004714735d411218eaf6231614c4e21f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c83df4b798499177cfbd1372a939649b6cebb7
1b50b695af5ba7815cf450669\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed9e89cbcdfc1ac8430f617ee31a367c03be0a407da8af967930fa821e6a12a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://742d02f24a7096b76f82453ab15e531d4a378b4bbf0bb36eb1a1cb4ae1873165\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://742d02f24a7096b76f82453ab15e531d4a378b4bbf0bb36eb1a1cb4ae1873165\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2253bab71d535b61f7489679566abc1acf210e0d5f69773fedbb02befab17c2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2253bab71d535b61f7489679566abc1acf210e0d5f69773fedbb02befab17c2d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:21Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8fa71a988b649937f137ffb7533456570bf4fcd199134424fc7bff2b62335f1e\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fa71a988b649937f137ffb7533456570bf4fcd199134424fc7bff2b62335f1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:16Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:31Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:31 crc kubenswrapper[4853]: I1122 07:11:31.059293 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9c07602dc6c3b45809dd76a7079002e54e0bae24f7e3bc3470b463389303e74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:31Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:31 crc kubenswrapper[4853]: I1122 07:11:31.068658 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:31 crc kubenswrapper[4853]: I1122 07:11:31.068738 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:31 crc kubenswrapper[4853]: I1122 07:11:31.068770 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:31 crc kubenswrapper[4853]: I1122 07:11:31.068790 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:31 crc kubenswrapper[4853]: I1122 07:11:31.068807 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:31Z","lastTransitionTime":"2025-11-22T07:11:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:31 crc kubenswrapper[4853]: I1122 07:11:31.079319 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqtsz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"893f7e02-580a-4093-ab42-ea73ffffcfe6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://959a487fe36b2bf7615c5ed65bc8255b52332f6e2d973912a5e028ff35324f2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://979f6b48ff6c91deca2bb0784d1ec745c4c0d70abdfd6f519eb82058fec05c5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34d7101a46b49cedde91b3502f1bc962179253a1bfbe5f21599ce89a91ab905d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28ab876013e6c4eb3a4806b0184f1d6615a478a34b6d8ee790d29bccd77f33df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccf377912acc39f2d30013ef3bd58a8268b927fd89da2306fcf00cb89c30893e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a08eb68f9d8b071226ac64744097fe0097ab78f86242872c4e0aacdd213adbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb3ade875e1b6d2182a278be4708aa0e473e27bb
2f8a06ba5dc97ca0d03f629b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb3ade875e1b6d2182a278be4708aa0e473e27bb2f8a06ba5dc97ca0d03f629b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-22T07:11:24Z\\\",\\\"message\\\":\\\":160\\\\nI1122 07:11:24.426832 6516 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1122 07:11:24.426933 6516 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI1122 07:11:24.426998 6516 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1122 07:11:24.427601 6516 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1122 07:11:24.427608 6516 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1122 07:11:24.427783 6516 factory.go:656] Stopping watch factory\\\\nI1122 07:11:24.427832 6516 handler.go:208] Removed *v1.Node event handler 2\\\\nI1122 07:11:24.430415 6516 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI1122 07:11:24.430434 6516 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI1122 07:11:24.430512 6516 ovnkube.go:599] Stopped ovnkube\\\\nI1122 07:11:24.430561 6516 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1122 07:11:24.430654 6516 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:23Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-pqtsz_openshift-ovn-kubernetes(893f7e02-580a-4093-ab42-ea73ffffcfe6)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://902e41148618e1099d651f2a4cbfa00ca1dd11533b086564f090dd498ecc1b95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7b9396399dd9787c1addcb4e2e7697d78fee817d56cafd408b08b2b7d0cc5f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7b9396399dd9787c1addcb4e2e7697d78fee817d56cafd408b08b2b7d0cc5f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pqtsz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:31Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:31 crc kubenswrapper[4853]: I1122 07:11:31.090812 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9nx9m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120cba0a-6e0b-40b3-8c15-46e7ff7c8641\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7b7b92862fc6cca812cb5c0aa9b320fbbd11f8b458d591b56cacab985a79edd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4p7q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9nx9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:31Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:31 crc kubenswrapper[4853]: I1122 07:11:31.097893 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9cc2bf97-eb39-4b0c-abda-99b49bb530fd-metrics-certs\") pod \"network-metrics-daemon-pd6gs\" (UID: \"9cc2bf97-eb39-4b0c-abda-99b49bb530fd\") " pod="openshift-multus/network-metrics-daemon-pd6gs" Nov 22 07:11:31 crc kubenswrapper[4853]: E1122 07:11:31.098064 4853 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 22 07:11:31 crc kubenswrapper[4853]: E1122 07:11:31.098139 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9cc2bf97-eb39-4b0c-abda-99b49bb530fd-metrics-certs podName:9cc2bf97-eb39-4b0c-abda-99b49bb530fd nodeName:}" failed. No retries permitted until 2025-11-22 07:11:47.098119641 +0000 UTC m=+105.938742267 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9cc2bf97-eb39-4b0c-abda-99b49bb530fd-metrics-certs") pod "network-metrics-daemon-pd6gs" (UID: "9cc2bf97-eb39-4b0c-abda-99b49bb530fd") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 22 07:11:31 crc kubenswrapper[4853]: I1122 07:11:31.104304 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"19e6b5d3-a090-47b9-bddc-dd2aaccd4ff4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96b61660fcd78f2f81eab2c1a0cb32018a5b32d49581167e33e7dca8ba5ddf2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a0d61ce2e58ce8d9abdd1fb9688722d8e25f8354553aef76b8cd4b3897135a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21be0b3346d0dd20c21a867b8fbd30e6291d09bd1745febbc2face7610f6d831\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e4d5ab3d780b4ec1360dd8a07fc7de5d4daec3a5f8334010949d4fd60b5516b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6dbbb1c22bd116e5913c6bc774b4705f0e4342861ceae30dc0f53c30d6fa39f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T07:10:53Z\\\",\\\"message\\\":\\\" envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1122 07:10:53.494379 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1122 07:10:53.495005 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1122 07:10:53.495036 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1122 07:10:53.495110 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1122 07:10:53.495131 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1122 07:10:53.496216 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1763795444\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1763795444\\\\\\\\\\\\\\\" (2025-11-22 06:10:44 +0000 UTC to 2026-11-22 06:10:44 +0000 UTC (now=2025-11-22 07:10:53.49617265 +0000 UTC))\\\\\\\"\\\\nI1122 07:10:53.496261 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1122 07:10:53.496336 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1122 07:10:53.497036 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1122 07:10:53.497137 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1122 07:10:53.498721 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI1122 07:10:53.500291 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF1122 07:10:53.501305 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:32Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c1ee716027cb8b663ee9faec3faed873df5079418b9b46574cf3b3a74048eef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13ca1fba1325e0ae8556def3ba0c98aa0aff6e3b9ed2aab66a5c950d2bace44e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13ca1fba1325e0ae8556def3ba0c98aa0aff6e3b9ed2aab66a5c950d2bace44e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:31Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:31 crc kubenswrapper[4853]: I1122 07:11:31.114509 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mlpz8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c09200ba-013f-45e3-b581-8523557344b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3feee491b899c2b2f58330fe2e87ba5404deca4249c5d39a6d3aa08acd10ef29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvk87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mlpz8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:31Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:31 crc kubenswrapper[4853]: I1122 07:11:31.171858 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:31 crc kubenswrapper[4853]: I1122 07:11:31.171904 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:31 crc kubenswrapper[4853]: I1122 07:11:31.171916 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:31 crc kubenswrapper[4853]: I1122 07:11:31.171935 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:31 crc kubenswrapper[4853]: I1122 07:11:31.171945 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:31Z","lastTransitionTime":"2025-11-22T07:11:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:31 crc kubenswrapper[4853]: I1122 07:11:31.275446 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:31 crc kubenswrapper[4853]: I1122 07:11:31.275509 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:31 crc kubenswrapper[4853]: I1122 07:11:31.275525 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:31 crc kubenswrapper[4853]: I1122 07:11:31.275548 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:31 crc kubenswrapper[4853]: I1122 07:11:31.275561 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:31Z","lastTransitionTime":"2025-11-22T07:11:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:31 crc kubenswrapper[4853]: I1122 07:11:31.379587 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:31 crc kubenswrapper[4853]: I1122 07:11:31.379630 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:31 crc kubenswrapper[4853]: I1122 07:11:31.379641 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:31 crc kubenswrapper[4853]: I1122 07:11:31.379658 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:31 crc kubenswrapper[4853]: I1122 07:11:31.379676 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:31Z","lastTransitionTime":"2025-11-22T07:11:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:31 crc kubenswrapper[4853]: I1122 07:11:31.482848 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:31 crc kubenswrapper[4853]: I1122 07:11:31.482902 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:31 crc kubenswrapper[4853]: I1122 07:11:31.482942 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:31 crc kubenswrapper[4853]: I1122 07:11:31.482961 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:31 crc kubenswrapper[4853]: I1122 07:11:31.482973 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:31Z","lastTransitionTime":"2025-11-22T07:11:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:31 crc kubenswrapper[4853]: I1122 07:11:31.586466 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:31 crc kubenswrapper[4853]: I1122 07:11:31.586517 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:31 crc kubenswrapper[4853]: I1122 07:11:31.586528 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:31 crc kubenswrapper[4853]: I1122 07:11:31.586549 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:31 crc kubenswrapper[4853]: I1122 07:11:31.586562 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:31Z","lastTransitionTime":"2025-11-22T07:11:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:31 crc kubenswrapper[4853]: I1122 07:11:31.689967 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:31 crc kubenswrapper[4853]: I1122 07:11:31.690057 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:31 crc kubenswrapper[4853]: I1122 07:11:31.690071 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:31 crc kubenswrapper[4853]: I1122 07:11:31.690092 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:31 crc kubenswrapper[4853]: I1122 07:11:31.690107 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:31Z","lastTransitionTime":"2025-11-22T07:11:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:31 crc kubenswrapper[4853]: I1122 07:11:31.793031 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:31 crc kubenswrapper[4853]: I1122 07:11:31.793119 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:31 crc kubenswrapper[4853]: I1122 07:11:31.793135 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:31 crc kubenswrapper[4853]: I1122 07:11:31.793157 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:31 crc kubenswrapper[4853]: I1122 07:11:31.793196 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:31Z","lastTransitionTime":"2025-11-22T07:11:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:31 crc kubenswrapper[4853]: I1122 07:11:31.896093 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:31 crc kubenswrapper[4853]: I1122 07:11:31.896174 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:31 crc kubenswrapper[4853]: I1122 07:11:31.896190 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:31 crc kubenswrapper[4853]: I1122 07:11:31.896209 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:31 crc kubenswrapper[4853]: I1122 07:11:31.896220 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:31Z","lastTransitionTime":"2025-11-22T07:11:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:31 crc kubenswrapper[4853]: I1122 07:11:31.999531 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:31 crc kubenswrapper[4853]: I1122 07:11:31.999618 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:31 crc kubenswrapper[4853]: I1122 07:11:31.999663 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:31 crc kubenswrapper[4853]: I1122 07:11:31.999684 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:32 crc kubenswrapper[4853]: I1122 07:11:31.999696 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:31Z","lastTransitionTime":"2025-11-22T07:11:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:32 crc kubenswrapper[4853]: I1122 07:11:32.103210 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:32 crc kubenswrapper[4853]: I1122 07:11:32.103260 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:32 crc kubenswrapper[4853]: I1122 07:11:32.103271 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:32 crc kubenswrapper[4853]: I1122 07:11:32.103295 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:32 crc kubenswrapper[4853]: I1122 07:11:32.103306 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:32Z","lastTransitionTime":"2025-11-22T07:11:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:32 crc kubenswrapper[4853]: I1122 07:11:32.206822 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:32 crc kubenswrapper[4853]: I1122 07:11:32.206892 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:32 crc kubenswrapper[4853]: I1122 07:11:32.206905 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:32 crc kubenswrapper[4853]: I1122 07:11:32.206923 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:32 crc kubenswrapper[4853]: I1122 07:11:32.206936 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:32Z","lastTransitionTime":"2025-11-22T07:11:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:32 crc kubenswrapper[4853]: I1122 07:11:32.309485 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:32 crc kubenswrapper[4853]: I1122 07:11:32.309525 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:32 crc kubenswrapper[4853]: I1122 07:11:32.309537 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:32 crc kubenswrapper[4853]: I1122 07:11:32.309551 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:32 crc kubenswrapper[4853]: I1122 07:11:32.309562 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:32Z","lastTransitionTime":"2025-11-22T07:11:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:32 crc kubenswrapper[4853]: I1122 07:11:32.412554 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:32 crc kubenswrapper[4853]: I1122 07:11:32.412662 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:32 crc kubenswrapper[4853]: I1122 07:11:32.412691 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:32 crc kubenswrapper[4853]: I1122 07:11:32.412723 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:32 crc kubenswrapper[4853]: I1122 07:11:32.412815 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:32Z","lastTransitionTime":"2025-11-22T07:11:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:32 crc kubenswrapper[4853]: I1122 07:11:32.515348 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:32 crc kubenswrapper[4853]: I1122 07:11:32.515392 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:32 crc kubenswrapper[4853]: I1122 07:11:32.515403 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:32 crc kubenswrapper[4853]: I1122 07:11:32.515425 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:32 crc kubenswrapper[4853]: I1122 07:11:32.515436 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:32Z","lastTransitionTime":"2025-11-22T07:11:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:32 crc kubenswrapper[4853]: I1122 07:11:32.618288 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:32 crc kubenswrapper[4853]: I1122 07:11:32.618330 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:32 crc kubenswrapper[4853]: I1122 07:11:32.618340 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:32 crc kubenswrapper[4853]: I1122 07:11:32.618360 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:32 crc kubenswrapper[4853]: I1122 07:11:32.618371 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:32Z","lastTransitionTime":"2025-11-22T07:11:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:32 crc kubenswrapper[4853]: I1122 07:11:32.720360 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:32 crc kubenswrapper[4853]: I1122 07:11:32.720629 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:32 crc kubenswrapper[4853]: I1122 07:11:32.720664 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:32 crc kubenswrapper[4853]: I1122 07:11:32.720790 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:32 crc kubenswrapper[4853]: I1122 07:11:32.720812 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:32Z","lastTransitionTime":"2025-11-22T07:11:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:32 crc kubenswrapper[4853]: I1122 07:11:32.747222 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 07:11:32 crc kubenswrapper[4853]: I1122 07:11:32.747369 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pd6gs" Nov 22 07:11:32 crc kubenswrapper[4853]: E1122 07:11:32.747407 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 07:11:32 crc kubenswrapper[4853]: E1122 07:11:32.747571 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pd6gs" podUID="9cc2bf97-eb39-4b0c-abda-99b49bb530fd" Nov 22 07:11:32 crc kubenswrapper[4853]: I1122 07:11:32.747662 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 07:11:32 crc kubenswrapper[4853]: E1122 07:11:32.747729 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 07:11:32 crc kubenswrapper[4853]: I1122 07:11:32.747840 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:11:32 crc kubenswrapper[4853]: E1122 07:11:32.747914 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 07:11:32 crc kubenswrapper[4853]: I1122 07:11:32.824198 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:32 crc kubenswrapper[4853]: I1122 07:11:32.824271 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:32 crc kubenswrapper[4853]: I1122 07:11:32.824287 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:32 crc kubenswrapper[4853]: I1122 07:11:32.824313 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:32 crc kubenswrapper[4853]: I1122 07:11:32.824335 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:32Z","lastTransitionTime":"2025-11-22T07:11:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:32 crc kubenswrapper[4853]: I1122 07:11:32.926807 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:32 crc kubenswrapper[4853]: I1122 07:11:32.926859 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:32 crc kubenswrapper[4853]: I1122 07:11:32.926872 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:32 crc kubenswrapper[4853]: I1122 07:11:32.926888 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:32 crc kubenswrapper[4853]: I1122 07:11:32.926903 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:32Z","lastTransitionTime":"2025-11-22T07:11:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:33 crc kubenswrapper[4853]: I1122 07:11:33.030335 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:33 crc kubenswrapper[4853]: I1122 07:11:33.030419 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:33 crc kubenswrapper[4853]: I1122 07:11:33.030433 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:33 crc kubenswrapper[4853]: I1122 07:11:33.030458 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:33 crc kubenswrapper[4853]: I1122 07:11:33.030474 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:33Z","lastTransitionTime":"2025-11-22T07:11:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:33 crc kubenswrapper[4853]: I1122 07:11:33.133787 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:33 crc kubenswrapper[4853]: I1122 07:11:33.133847 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:33 crc kubenswrapper[4853]: I1122 07:11:33.133857 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:33 crc kubenswrapper[4853]: I1122 07:11:33.133877 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:33 crc kubenswrapper[4853]: I1122 07:11:33.133890 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:33Z","lastTransitionTime":"2025-11-22T07:11:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:33 crc kubenswrapper[4853]: I1122 07:11:33.237037 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:33 crc kubenswrapper[4853]: I1122 07:11:33.237088 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:33 crc kubenswrapper[4853]: I1122 07:11:33.237098 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:33 crc kubenswrapper[4853]: I1122 07:11:33.237115 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:33 crc kubenswrapper[4853]: I1122 07:11:33.237128 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:33Z","lastTransitionTime":"2025-11-22T07:11:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:33 crc kubenswrapper[4853]: I1122 07:11:33.339734 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:33 crc kubenswrapper[4853]: I1122 07:11:33.339799 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:33 crc kubenswrapper[4853]: I1122 07:11:33.339811 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:33 crc kubenswrapper[4853]: I1122 07:11:33.339835 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:33 crc kubenswrapper[4853]: I1122 07:11:33.339849 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:33Z","lastTransitionTime":"2025-11-22T07:11:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:33 crc kubenswrapper[4853]: I1122 07:11:33.442241 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:33 crc kubenswrapper[4853]: I1122 07:11:33.442301 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:33 crc kubenswrapper[4853]: I1122 07:11:33.442312 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:33 crc kubenswrapper[4853]: I1122 07:11:33.442333 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:33 crc kubenswrapper[4853]: I1122 07:11:33.442344 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:33Z","lastTransitionTime":"2025-11-22T07:11:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:33 crc kubenswrapper[4853]: I1122 07:11:33.545588 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:33 crc kubenswrapper[4853]: I1122 07:11:33.545651 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:33 crc kubenswrapper[4853]: I1122 07:11:33.545662 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:33 crc kubenswrapper[4853]: I1122 07:11:33.545679 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:33 crc kubenswrapper[4853]: I1122 07:11:33.545690 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:33Z","lastTransitionTime":"2025-11-22T07:11:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:33 crc kubenswrapper[4853]: I1122 07:11:33.648625 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:33 crc kubenswrapper[4853]: I1122 07:11:33.648673 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:33 crc kubenswrapper[4853]: I1122 07:11:33.648682 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:33 crc kubenswrapper[4853]: I1122 07:11:33.648699 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:33 crc kubenswrapper[4853]: I1122 07:11:33.648709 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:33Z","lastTransitionTime":"2025-11-22T07:11:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:33 crc kubenswrapper[4853]: I1122 07:11:33.752172 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:33 crc kubenswrapper[4853]: I1122 07:11:33.752217 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:33 crc kubenswrapper[4853]: I1122 07:11:33.752227 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:33 crc kubenswrapper[4853]: I1122 07:11:33.752246 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:33 crc kubenswrapper[4853]: I1122 07:11:33.752258 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:33Z","lastTransitionTime":"2025-11-22T07:11:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:33 crc kubenswrapper[4853]: I1122 07:11:33.855094 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:33 crc kubenswrapper[4853]: I1122 07:11:33.855153 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:33 crc kubenswrapper[4853]: I1122 07:11:33.855169 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:33 crc kubenswrapper[4853]: I1122 07:11:33.855192 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:33 crc kubenswrapper[4853]: I1122 07:11:33.855205 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:33Z","lastTransitionTime":"2025-11-22T07:11:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:33 crc kubenswrapper[4853]: I1122 07:11:33.959335 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:33 crc kubenswrapper[4853]: I1122 07:11:33.959390 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:33 crc kubenswrapper[4853]: I1122 07:11:33.959404 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:33 crc kubenswrapper[4853]: I1122 07:11:33.959426 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:33 crc kubenswrapper[4853]: I1122 07:11:33.959442 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:33Z","lastTransitionTime":"2025-11-22T07:11:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:34 crc kubenswrapper[4853]: I1122 07:11:34.062548 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:34 crc kubenswrapper[4853]: I1122 07:11:34.062615 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:34 crc kubenswrapper[4853]: I1122 07:11:34.062633 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:34 crc kubenswrapper[4853]: I1122 07:11:34.062656 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:34 crc kubenswrapper[4853]: I1122 07:11:34.062677 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:34Z","lastTransitionTime":"2025-11-22T07:11:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:34 crc kubenswrapper[4853]: I1122 07:11:34.166345 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:34 crc kubenswrapper[4853]: I1122 07:11:34.166410 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:34 crc kubenswrapper[4853]: I1122 07:11:34.166424 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:34 crc kubenswrapper[4853]: I1122 07:11:34.166444 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:34 crc kubenswrapper[4853]: I1122 07:11:34.166457 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:34Z","lastTransitionTime":"2025-11-22T07:11:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:34 crc kubenswrapper[4853]: I1122 07:11:34.269293 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:34 crc kubenswrapper[4853]: I1122 07:11:34.269355 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:34 crc kubenswrapper[4853]: I1122 07:11:34.269372 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:34 crc kubenswrapper[4853]: I1122 07:11:34.269391 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:34 crc kubenswrapper[4853]: I1122 07:11:34.269409 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:34Z","lastTransitionTime":"2025-11-22T07:11:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:34 crc kubenswrapper[4853]: I1122 07:11:34.373008 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:34 crc kubenswrapper[4853]: I1122 07:11:34.373064 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:34 crc kubenswrapper[4853]: I1122 07:11:34.373076 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:34 crc kubenswrapper[4853]: I1122 07:11:34.373098 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:34 crc kubenswrapper[4853]: I1122 07:11:34.373111 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:34Z","lastTransitionTime":"2025-11-22T07:11:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:34 crc kubenswrapper[4853]: I1122 07:11:34.475363 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:34 crc kubenswrapper[4853]: I1122 07:11:34.475412 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:34 crc kubenswrapper[4853]: I1122 07:11:34.475420 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:34 crc kubenswrapper[4853]: I1122 07:11:34.475433 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:34 crc kubenswrapper[4853]: I1122 07:11:34.475442 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:34Z","lastTransitionTime":"2025-11-22T07:11:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:34 crc kubenswrapper[4853]: I1122 07:11:34.578102 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:34 crc kubenswrapper[4853]: I1122 07:11:34.578161 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:34 crc kubenswrapper[4853]: I1122 07:11:34.578172 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:34 crc kubenswrapper[4853]: I1122 07:11:34.578192 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:34 crc kubenswrapper[4853]: I1122 07:11:34.578204 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:34Z","lastTransitionTime":"2025-11-22T07:11:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:34 crc kubenswrapper[4853]: I1122 07:11:34.682178 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:34 crc kubenswrapper[4853]: I1122 07:11:34.682234 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:34 crc kubenswrapper[4853]: I1122 07:11:34.682244 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:34 crc kubenswrapper[4853]: I1122 07:11:34.682266 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:34 crc kubenswrapper[4853]: I1122 07:11:34.682277 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:34Z","lastTransitionTime":"2025-11-22T07:11:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:34 crc kubenswrapper[4853]: I1122 07:11:34.747411 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 07:11:34 crc kubenswrapper[4853]: I1122 07:11:34.747513 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 07:11:34 crc kubenswrapper[4853]: E1122 07:11:34.747599 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 07:11:34 crc kubenswrapper[4853]: I1122 07:11:34.747554 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-pd6gs"
Nov 22 07:11:34 crc kubenswrapper[4853]: I1122 07:11:34.747513 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 22 07:11:34 crc kubenswrapper[4853]: E1122 07:11:34.747692 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 22 07:11:34 crc kubenswrapper[4853]: E1122 07:11:34.747713 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pd6gs" podUID="9cc2bf97-eb39-4b0c-abda-99b49bb530fd"
Nov 22 07:11:34 crc kubenswrapper[4853]: E1122 07:11:34.747794 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 22 07:11:34 crc kubenswrapper[4853]: I1122 07:11:34.785728 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 22 07:11:34 crc kubenswrapper[4853]: I1122 07:11:34.785805 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 22 07:11:34 crc kubenswrapper[4853]: I1122 07:11:34.785819 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 22 07:11:34 crc kubenswrapper[4853]: I1122 07:11:34.785839 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 22 07:11:34 crc kubenswrapper[4853]: I1122 07:11:34.785851 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:34Z","lastTransitionTime":"2025-11-22T07:11:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 22 07:11:34 crc kubenswrapper[4853]: I1122 07:11:34.889034 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 22 07:11:34 crc kubenswrapper[4853]: I1122 07:11:34.889079 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 22 07:11:34 crc kubenswrapper[4853]: I1122 07:11:34.889091 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 22 07:11:34 crc kubenswrapper[4853]: I1122 07:11:34.889111 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 22 07:11:34 crc kubenswrapper[4853]: I1122 07:11:34.889123 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:34Z","lastTransitionTime":"2025-11-22T07:11:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 22 07:11:34 crc kubenswrapper[4853]: I1122 07:11:34.991743 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 22 07:11:34 crc kubenswrapper[4853]: I1122 07:11:34.991807 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 22 07:11:34 crc kubenswrapper[4853]: I1122 07:11:34.991820 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 22 07:11:34 crc kubenswrapper[4853]: I1122 07:11:34.991842 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 22 07:11:34 crc kubenswrapper[4853]: I1122 07:11:34.992058 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:34Z","lastTransitionTime":"2025-11-22T07:11:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 22 07:11:35 crc kubenswrapper[4853]: I1122 07:11:35.095670 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 22 07:11:35 crc kubenswrapper[4853]: I1122 07:11:35.095733 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 22 07:11:35 crc kubenswrapper[4853]: I1122 07:11:35.095787 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 22 07:11:35 crc kubenswrapper[4853]: I1122 07:11:35.095821 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 22 07:11:35 crc kubenswrapper[4853]: I1122 07:11:35.095838 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:35Z","lastTransitionTime":"2025-11-22T07:11:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 22 07:11:35 crc kubenswrapper[4853]: I1122 07:11:35.199911 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 22 07:11:35 crc kubenswrapper[4853]: I1122 07:11:35.199976 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 22 07:11:35 crc kubenswrapper[4853]: I1122 07:11:35.199993 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 22 07:11:35 crc kubenswrapper[4853]: I1122 07:11:35.200022 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 22 07:11:35 crc kubenswrapper[4853]: I1122 07:11:35.200078 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:35Z","lastTransitionTime":"2025-11-22T07:11:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 22 07:11:35 crc kubenswrapper[4853]: I1122 07:11:35.302915 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 22 07:11:35 crc kubenswrapper[4853]: I1122 07:11:35.302970 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 22 07:11:35 crc kubenswrapper[4853]: I1122 07:11:35.302980 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 22 07:11:35 crc kubenswrapper[4853]: I1122 07:11:35.303000 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 22 07:11:35 crc kubenswrapper[4853]: I1122 07:11:35.303013 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:35Z","lastTransitionTime":"2025-11-22T07:11:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 22 07:11:35 crc kubenswrapper[4853]: I1122 07:11:35.405703 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 22 07:11:35 crc kubenswrapper[4853]: I1122 07:11:35.405785 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 22 07:11:35 crc kubenswrapper[4853]: I1122 07:11:35.405800 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 22 07:11:35 crc kubenswrapper[4853]: I1122 07:11:35.405827 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 22 07:11:35 crc kubenswrapper[4853]: I1122 07:11:35.405841 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:35Z","lastTransitionTime":"2025-11-22T07:11:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 22 07:11:35 crc kubenswrapper[4853]: I1122 07:11:35.509362 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 22 07:11:35 crc kubenswrapper[4853]: I1122 07:11:35.509435 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 22 07:11:35 crc kubenswrapper[4853]: I1122 07:11:35.509456 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 22 07:11:35 crc kubenswrapper[4853]: I1122 07:11:35.509477 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 22 07:11:35 crc kubenswrapper[4853]: I1122 07:11:35.509489 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:35Z","lastTransitionTime":"2025-11-22T07:11:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 22 07:11:35 crc kubenswrapper[4853]: I1122 07:11:35.613290 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 22 07:11:35 crc kubenswrapper[4853]: I1122 07:11:35.613384 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 22 07:11:35 crc kubenswrapper[4853]: I1122 07:11:35.613397 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 22 07:11:35 crc kubenswrapper[4853]: I1122 07:11:35.613418 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 22 07:11:35 crc kubenswrapper[4853]: I1122 07:11:35.613433 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:35Z","lastTransitionTime":"2025-11-22T07:11:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 22 07:11:35 crc kubenswrapper[4853]: I1122 07:11:35.722172 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 22 07:11:35 crc kubenswrapper[4853]: I1122 07:11:35.722254 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 22 07:11:35 crc kubenswrapper[4853]: I1122 07:11:35.722265 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 22 07:11:35 crc kubenswrapper[4853]: I1122 07:11:35.722292 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 22 07:11:35 crc kubenswrapper[4853]: I1122 07:11:35.722305 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:35Z","lastTransitionTime":"2025-11-22T07:11:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:35 crc kubenswrapper[4853]: I1122 07:11:35.762120 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63551235-20f7-4ccc-a132-9f5302e01da5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0368eb3431a7b8e0225b3728f72d601ce3f9639f06175b96818fd798b33e4143\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fc661089ab1f7286ead68fd510a0848c9df8dd41a8ed349d03b43b72e1a369a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdc0676190f39e3a925b593e663ec1af1fa253b8177b7cb81fd49515e0c55c3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"
cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a97a0d92e5fdd8906fa4f150c4b7d95fdd1f3b92ac45a46e0a6846fd4a15bb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a97a0d92e5fdd8906fa4f150c4b7d95fdd1f3b92ac45a46e0a6846fd4a15bb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:19Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:16Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:35Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:35 crc kubenswrapper[4853]: I1122 07:11:35.777213 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b27716f9e6c912f968c4a340199cdc2dde0262d86b4832f84cae8daefa831909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a2fb699e2e8a016fa383c1d4ed2d4f0d69c26b79fb2f0dd735a84de2fd5a23d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919
d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:35Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:35 crc kubenswrapper[4853]: I1122 07:11:35.791903 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://347fc1ee4909acd862f23a9f25b4ac7ab7271ff03f0f726479d9b711820970c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:35Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:35 crc 
kubenswrapper[4853]: I1122 07:11:35.804094 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"476c875a-2b87-419a-8042-0ba059620fd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://babd614ecd140db7467335933695fcc122380ef2c6a3d4ae6e89b0ed29e87d69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zkhrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28cf28a4f0e05df5ad55eff2ab13e375fb1d5725f1d0e3b85c7c9cd785cf4453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zkhrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fflvd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2025-11-22T07:11:35Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:35 crc kubenswrapper[4853]: I1122 07:11:35.819718 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nhlw4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81cf3334-f910-4d46-be00-b3cd66ba8ed4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbfa536576b192a75c2a3eef9dfde8c7f7de0a8e6edae3870f45f1c44cc48163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ckff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86bcf649be20aa1cc1bce89663ea6b7b920ac2548fbf8a22c87636bdafc4b329\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ckff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nhlw4\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:35Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:35 crc kubenswrapper[4853]: I1122 07:11:35.825794 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:35 crc kubenswrapper[4853]: I1122 07:11:35.825833 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:35 crc kubenswrapper[4853]: I1122 07:11:35.825844 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:35 crc kubenswrapper[4853]: I1122 07:11:35.825863 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:35 crc kubenswrapper[4853]: I1122 07:11:35.825876 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:35Z","lastTransitionTime":"2025-11-22T07:11:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:35 crc kubenswrapper[4853]: I1122 07:11:35.839599 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9003355-0dc6-42ad-950d-a7884e333f4c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f0019c0473b7ccc974dc1659a7ae0565b39d69af17094c420625387ecc9d7f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94704fd95e75549ea29613f00da78d4517cc64896cf1274c9c6f784a61f4ad4a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T07:10:53Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start 
--config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI1122 07:10:26.514729 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI1122 07:10:26.525241 1 observer_polling.go:159] Starting file observer\\\\nI1122 07:10:26.883412 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI1122 07:10:26.897202 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI1122 07:10:50.420853 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nI1122 07:10:51.525898 1 observer_polling.go:162] Shutting down file observer\\\\nF1122 07:10:53.463148 1 cmd.go:179] failed checking apiserver connectivity: client rate limiter Wait returned an error: context canceled\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:21Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9736c38776f22909b8e8a832683d1b881f020257f7183397286e0dd23c973a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30f542bc05dc23bc8dff5029894d08a6fea8333de31a66432814b7bbc412c4fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\
"cri-o://d5d8d77818159ca98019aac809db4b1b2460723bc56b9b6728eec11faeefd026\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:35Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:35 crc kubenswrapper[4853]: I1122 07:11:35.861188 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:35Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:35 crc kubenswrapper[4853]: I1122 07:11:35.880581 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-pd6gs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9cc2bf97-eb39-4b0c-abda-99b49bb530fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bzn4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bzn4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:15Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-pd6gs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:35Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:35 crc kubenswrapper[4853]: I1122 07:11:35.901437 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8dcea4b-e887-437a-972d-e6e10d41f725\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://753693005f8490e9928780fd4f230a1c8aba453e1ccbca71e70299799bfa46ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64d65e4d54cc8971b6f6349f7d32243b7faa5bb5974b22ed904a36fa34bc69fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0ba60480f038947ca4a2b9f963d38cb004714735d411218eaf6231614c4e21f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c83df4b798499177cfbd1372a939649b6cebb7
1b50b695af5ba7815cf450669\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed9e89cbcdfc1ac8430f617ee31a367c03be0a407da8af967930fa821e6a12a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://742d02f24a7096b76f82453ab15e531d4a378b4bbf0bb36eb1a1cb4ae1873165\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://742d02f24a7096b76f82453ab15e531d4a378b4bbf0bb36eb1a1cb4ae1873165\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2253bab71d535b61f7489679566abc1acf210e0d5f69773fedbb02befab17c2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2253bab71d535b61f7489679566abc1acf210e0d5f69773fedbb02befab17c2d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:21Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8fa71a988b649937f137ffb7533456570bf4fcd199134424fc7bff2b62335f1e\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fa71a988b649937f137ffb7533456570bf4fcd199134424fc7bff2b62335f1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:16Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:35Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:35 crc kubenswrapper[4853]: I1122 07:11:35.916420 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9c07602dc6c3b45809dd76a7079002e54e0bae24f7e3bc3470b463389303e74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:35Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:35 crc kubenswrapper[4853]: I1122 07:11:35.928149 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:35 crc kubenswrapper[4853]: I1122 07:11:35.928229 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:35 crc kubenswrapper[4853]: I1122 07:11:35.928242 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:35 crc kubenswrapper[4853]: I1122 07:11:35.928262 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:35 crc kubenswrapper[4853]: I1122 07:11:35.928274 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:35Z","lastTransitionTime":"2025-11-22T07:11:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:35 crc kubenswrapper[4853]: I1122 07:11:35.929903 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:35Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:35 crc kubenswrapper[4853]: I1122 07:11:35.940662 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:35Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:35 crc kubenswrapper[4853]: I1122 07:11:35.955570 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rvgxj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbbe3472-17cc-48dd-8e46-393b00149429\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5acc4b664aaf1d5b0471beb25334e65152bc3b9907f911f90e4e7899aaa1ce8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ct6dm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rvgxj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:35Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:35 crc kubenswrapper[4853]: I1122 07:11:35.968574 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ckn94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f60b37f-d6f5-4145-a3e7-cfe92fca6d77\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbf630e17f662af8880d1f34d5073a4f64e723987205f7bbb473a73808c7e935\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\
"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3adc38c2534c8026e260a63060426102130ca1bcbcb4e3741aaf4e87170a4c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3adc38c2534c8026e260a63060426102130ca1bcbcb4e3741aaf4e87170a4c7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db0b80c2471d5179ede5c4055fcebaabe08892f1534218952c2a19ab599be022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db0b80c2471d5179ede5c4055fcebaabe08892f1534218952c2a19ab599be022\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://157ea229b1effe5885acafb7b48db5695864ff001b1293c053aefb494f544f7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://157ea229b1effe5885acafb7b48db5695864ff001b1293c053aefb494f544f7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:11Z\\\",\\\"reason\\\":\
\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://203cba48e280628c33b464c336957085420ca2d345b1acfb5c8b8968ed2317e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://203cba48e280628c33b464c336957085420ca2d345b1acfb5c8b8968ed2317e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a10cb7826668319845471d465df1784a2cfba12ec3892e986bcf316c0c5e3c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a10cb7826668319845471d465df1784a2cfba12ec3892e986bcf316c0c5e3c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc93eb03f5f0cada6bf132a9de4446bc1b8ecafee3769bf18feeecea9d71478f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\
",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc93eb03f5f0cada6bf132a9de4446bc1b8ecafee3769bf18feeecea9d71478f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ckn94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:35Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:35 crc kubenswrapper[4853]: I1122 07:11:35.981890 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"19e6b5d3-a090-47b9-bddc-dd2aaccd4ff4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96b61660fcd78f2f81eab2c1a0cb32018a5b32d49581167e33e7dca8ba5ddf2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a0d61ce2e58ce8d9abdd1fb9688722d8e25f8354553aef76b8cd4b3897135a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d746
2\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21be0b3346d0dd20c21a867b8fbd30e6291d09bd1745febbc2face7610f6d831\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e4d5ab3d780b4ec1360dd8a07fc7de5d4daec3a5f8334010949d4fd60b5516b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6dbbb1c22bd116e5913c6bc774b4705f0e4342861ceae30dc0f53c30d6fa39f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T07:10:53Z\\\",\\\"message\\\":\\\" envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1122 07:10:53.494379 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1122 07:10:53.495005 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1122 07:10:53.495036 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1122 07:10:53.495110 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1122 07:10:53.495131 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1122 07:10:53.496216 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1763795444\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1763795444\\\\\\\\\\\\\\\" (2025-11-22 06:10:44 +0000 UTC to 2026-11-22 06:10:44 +0000 UTC (now=2025-11-22 07:10:53.49617265 +0000 UTC))\\\\\\\"\\\\nI1122 07:10:53.496261 1 secure_serving.go:213] Serving securely on 
[::]:17697\\\\nI1122 07:10:53.496336 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1122 07:10:53.497036 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1122 07:10:53.497137 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1122 07:10:53.498721 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI1122 07:10:53.500291 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF1122 07:10:53.501305 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:32Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c1ee716027cb8b663ee9faec3faed873df5079418b9b46574cf3b3a74048eef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13ca1fba1325e0ae8556def3ba0c98aa0aff6e3b9ed2aab66a5c950d2bace44e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13ca1fba1325e0ae8556def3ba0c98aa0aff6e3b9ed2aab66a5c950d2bace44e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:35Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:35 crc kubenswrapper[4853]: I1122 07:11:35.991816 4853 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-dns/node-resolver-mlpz8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c09200ba-013f-45e3-b581-8523557344b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3feee491b899c2b2f58330fe2e87ba5404deca4249c5d39a6d3aa08acd10ef29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvk87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mlpz8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:35Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:36 crc kubenswrapper[4853]: I1122 07:11:36.008176 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqtsz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"893f7e02-580a-4093-ab42-ea73ffffcfe6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://959a487fe36b2bf7615c5ed65bc8255b52332f6e2d973912a5e028ff35324f2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://979f6b48ff6c91deca2bb0784d1ec745c4c0d70abdfd6f519eb82058fec05c5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34d7101a46b49cedde91b3502f1bc962179253a1bfbe5f21599ce89a91ab905d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28ab876013e6c4eb3a4806b0184f1d6615a478a34b6d8ee790d29bccd77f33df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccf377912acc39f2d30013ef3bd58a8268b927fd89da2306fcf00cb89c30893e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a08eb68f9d8b071226ac64744097fe0097ab78f86242872c4e0aacdd213adbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fb3ade875e1b6d2182a278be4708aa0e473e27bb2f8a06ba5dc97ca0d03f629b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb3ade875e1b6d2182a278be4708aa0e473e27bb2f8a06ba5dc97ca0d03f629b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-22T07:11:24Z\\\",\\\"message\\\":\\\":160\\\\nI1122 07:11:24.426832 6516 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1122 07:11:24.426933 6516 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI1122 07:11:24.426998 6516 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1122 07:11:24.427601 6516 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1122 07:11:24.427608 6516 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1122 07:11:24.427783 6516 factory.go:656] Stopping watch factory\\\\nI1122 07:11:24.427832 6516 handler.go:208] Removed *v1.Node event handler 2\\\\nI1122 07:11:24.430415 6516 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI1122 07:11:24.430434 6516 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI1122 07:11:24.430512 6516 ovnkube.go:599] Stopped ovnkube\\\\nI1122 07:11:24.430561 6516 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1122 07:11:24.430654 6516 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:23Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-pqtsz_openshift-ovn-kubernetes(893f7e02-580a-4093-ab42-ea73ffffcfe6)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://902e41148618e1099d651f2a4cbfa00ca1dd11533b086564f090dd498ecc1b95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7b9396399dd9787c1addcb4e2e7697d78fee817d56cafd408b08b2b7d0cc5f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7b9396399dd9787c1addcb4e2e7697d78fee817d56cafd408b08b2b7d0cc5f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pqtsz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:36Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:36 crc kubenswrapper[4853]: I1122 07:11:36.022548 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9nx9m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120cba0a-6e0b-40b3-8c15-46e7ff7c8641\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7b7b92862fc6cca812cb5c0aa9b320fbbd11f8b458d591b56cacab985a79edd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4p7q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9nx9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:36Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:36 crc kubenswrapper[4853]: I1122 07:11:36.030953 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:36 crc kubenswrapper[4853]: I1122 07:11:36.031020 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:36 crc kubenswrapper[4853]: I1122 07:11:36.031039 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:36 crc kubenswrapper[4853]: I1122 07:11:36.031064 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:36 crc kubenswrapper[4853]: I1122 07:11:36.031081 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:36Z","lastTransitionTime":"2025-11-22T07:11:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:36 crc kubenswrapper[4853]: I1122 07:11:36.133053 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:36 crc kubenswrapper[4853]: I1122 07:11:36.133423 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:36 crc kubenswrapper[4853]: I1122 07:11:36.133454 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:36 crc kubenswrapper[4853]: I1122 07:11:36.133740 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:36 crc kubenswrapper[4853]: I1122 07:11:36.133815 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:36Z","lastTransitionTime":"2025-11-22T07:11:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:36 crc kubenswrapper[4853]: I1122 07:11:36.237520 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:36 crc kubenswrapper[4853]: I1122 07:11:36.237602 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:36 crc kubenswrapper[4853]: I1122 07:11:36.237621 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:36 crc kubenswrapper[4853]: I1122 07:11:36.237647 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:36 crc kubenswrapper[4853]: I1122 07:11:36.237666 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:36Z","lastTransitionTime":"2025-11-22T07:11:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:36 crc kubenswrapper[4853]: I1122 07:11:36.341113 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:36 crc kubenswrapper[4853]: I1122 07:11:36.341527 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:36 crc kubenswrapper[4853]: I1122 07:11:36.341548 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:36 crc kubenswrapper[4853]: I1122 07:11:36.341568 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:36 crc kubenswrapper[4853]: I1122 07:11:36.341582 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:36Z","lastTransitionTime":"2025-11-22T07:11:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:36 crc kubenswrapper[4853]: I1122 07:11:36.446083 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:36 crc kubenswrapper[4853]: I1122 07:11:36.446172 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:36 crc kubenswrapper[4853]: I1122 07:11:36.446192 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:36 crc kubenswrapper[4853]: I1122 07:11:36.446219 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:36 crc kubenswrapper[4853]: I1122 07:11:36.446240 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:36Z","lastTransitionTime":"2025-11-22T07:11:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:36 crc kubenswrapper[4853]: I1122 07:11:36.549550 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:36 crc kubenswrapper[4853]: I1122 07:11:36.549627 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:36 crc kubenswrapper[4853]: I1122 07:11:36.549651 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:36 crc kubenswrapper[4853]: I1122 07:11:36.549683 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:36 crc kubenswrapper[4853]: I1122 07:11:36.549707 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:36Z","lastTransitionTime":"2025-11-22T07:11:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:36 crc kubenswrapper[4853]: I1122 07:11:36.653499 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:36 crc kubenswrapper[4853]: I1122 07:11:36.653572 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:36 crc kubenswrapper[4853]: I1122 07:11:36.653586 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:36 crc kubenswrapper[4853]: I1122 07:11:36.653608 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:36 crc kubenswrapper[4853]: I1122 07:11:36.653625 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:36Z","lastTransitionTime":"2025-11-22T07:11:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:36 crc kubenswrapper[4853]: I1122 07:11:36.747547 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:11:36 crc kubenswrapper[4853]: I1122 07:11:36.747620 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pd6gs" Nov 22 07:11:36 crc kubenswrapper[4853]: I1122 07:11:36.748046 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 07:11:36 crc kubenswrapper[4853]: I1122 07:11:36.748091 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 07:11:36 crc kubenswrapper[4853]: E1122 07:11:36.748223 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 07:11:36 crc kubenswrapper[4853]: E1122 07:11:36.748401 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 07:11:36 crc kubenswrapper[4853]: E1122 07:11:36.748542 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 07:11:36 crc kubenswrapper[4853]: E1122 07:11:36.748824 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pd6gs" podUID="9cc2bf97-eb39-4b0c-abda-99b49bb530fd" Nov 22 07:11:36 crc kubenswrapper[4853]: I1122 07:11:36.748926 4853 scope.go:117] "RemoveContainer" containerID="fb3ade875e1b6d2182a278be4708aa0e473e27bb2f8a06ba5dc97ca0d03f629b" Nov 22 07:11:36 crc kubenswrapper[4853]: I1122 07:11:36.757248 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:36 crc kubenswrapper[4853]: I1122 07:11:36.757351 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:36 crc kubenswrapper[4853]: I1122 07:11:36.757374 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:36 crc kubenswrapper[4853]: I1122 07:11:36.757452 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:36 crc kubenswrapper[4853]: I1122 07:11:36.757510 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:36Z","lastTransitionTime":"2025-11-22T07:11:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:36 crc kubenswrapper[4853]: I1122 07:11:36.860816 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:36 crc kubenswrapper[4853]: I1122 07:11:36.860872 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:36 crc kubenswrapper[4853]: I1122 07:11:36.860882 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:36 crc kubenswrapper[4853]: I1122 07:11:36.860903 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:36 crc kubenswrapper[4853]: I1122 07:11:36.860917 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:36Z","lastTransitionTime":"2025-11-22T07:11:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:36 crc kubenswrapper[4853]: I1122 07:11:36.964240 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:36 crc kubenswrapper[4853]: I1122 07:11:36.964298 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:36 crc kubenswrapper[4853]: I1122 07:11:36.964312 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:36 crc kubenswrapper[4853]: I1122 07:11:36.964335 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:36 crc kubenswrapper[4853]: I1122 07:11:36.964350 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:36Z","lastTransitionTime":"2025-11-22T07:11:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:37 crc kubenswrapper[4853]: I1122 07:11:37.067480 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:37 crc kubenswrapper[4853]: I1122 07:11:37.069035 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:37 crc kubenswrapper[4853]: I1122 07:11:37.069700 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:37 crc kubenswrapper[4853]: I1122 07:11:37.069743 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:37 crc kubenswrapper[4853]: I1122 07:11:37.069818 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:37Z","lastTransitionTime":"2025-11-22T07:11:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:37 crc kubenswrapper[4853]: I1122 07:11:37.173397 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:37 crc kubenswrapper[4853]: I1122 07:11:37.173444 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:37 crc kubenswrapper[4853]: I1122 07:11:37.173456 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:37 crc kubenswrapper[4853]: I1122 07:11:37.173476 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:37 crc kubenswrapper[4853]: I1122 07:11:37.173491 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:37Z","lastTransitionTime":"2025-11-22T07:11:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:37 crc kubenswrapper[4853]: I1122 07:11:37.276482 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:37 crc kubenswrapper[4853]: I1122 07:11:37.276555 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:37 crc kubenswrapper[4853]: I1122 07:11:37.276570 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:37 crc kubenswrapper[4853]: I1122 07:11:37.276596 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:37 crc kubenswrapper[4853]: I1122 07:11:37.276612 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:37Z","lastTransitionTime":"2025-11-22T07:11:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:37 crc kubenswrapper[4853]: I1122 07:11:37.379760 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:37 crc kubenswrapper[4853]: I1122 07:11:37.379818 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:37 crc kubenswrapper[4853]: I1122 07:11:37.379831 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:37 crc kubenswrapper[4853]: I1122 07:11:37.379854 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:37 crc kubenswrapper[4853]: I1122 07:11:37.379869 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:37Z","lastTransitionTime":"2025-11-22T07:11:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:37 crc kubenswrapper[4853]: I1122 07:11:37.484196 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:37 crc kubenswrapper[4853]: I1122 07:11:37.484688 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:37 crc kubenswrapper[4853]: I1122 07:11:37.484715 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:37 crc kubenswrapper[4853]: I1122 07:11:37.484736 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:37 crc kubenswrapper[4853]: I1122 07:11:37.484769 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:37Z","lastTransitionTime":"2025-11-22T07:11:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:37 crc kubenswrapper[4853]: I1122 07:11:37.592382 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:37 crc kubenswrapper[4853]: I1122 07:11:37.592431 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:37 crc kubenswrapper[4853]: I1122 07:11:37.592439 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:37 crc kubenswrapper[4853]: I1122 07:11:37.592456 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:37 crc kubenswrapper[4853]: I1122 07:11:37.592469 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:37Z","lastTransitionTime":"2025-11-22T07:11:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:37 crc kubenswrapper[4853]: I1122 07:11:37.694723 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:37 crc kubenswrapper[4853]: I1122 07:11:37.694787 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:37 crc kubenswrapper[4853]: I1122 07:11:37.694800 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:37 crc kubenswrapper[4853]: I1122 07:11:37.694819 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:37 crc kubenswrapper[4853]: I1122 07:11:37.694832 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:37Z","lastTransitionTime":"2025-11-22T07:11:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:37 crc kubenswrapper[4853]: I1122 07:11:37.737656 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pqtsz_893f7e02-580a-4093-ab42-ea73ffffcfe6/ovnkube-controller/1.log" Nov 22 07:11:37 crc kubenswrapper[4853]: I1122 07:11:37.740953 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqtsz" event={"ID":"893f7e02-580a-4093-ab42-ea73ffffcfe6","Type":"ContainerStarted","Data":"0bfe75c62e217cccff97aad20cda18675013af6a3b1b10ef60227be8ea4965fc"} Nov 22 07:11:37 crc kubenswrapper[4853]: I1122 07:11:37.741512 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-pqtsz" Nov 22 07:11:37 crc kubenswrapper[4853]: I1122 07:11:37.756027 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63551235-20f7-4ccc-a132-9f5302e01da5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0368eb3431a7b8e0225b3728f72d601ce3f9639f06175b96818fd798b33e4143\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fc661089ab1f7286ead68fd510a0848c9df8dd41a8ed349d03b43b72e1a369a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdc0676190f39e3a925b593e663ec1af1fa2
53b8177b7cb81fd49515e0c55c3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a97a0d92e5fdd8906fa4f150c4b7d95fdd1f3b92ac45a46e0a6846fd4a15bb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a97a0d92e5fdd8906fa4f150c4b7d95fdd1f3b92ac45a46e0a6846fd4a15bb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:19Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:16Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:37Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:37 crc kubenswrapper[4853]: I1122 07:11:37.769871 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b27716f9e6c912f968c4a340199cdc2dde0262d86b4832f84cae8daefa831909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a2fb699e2e8a016fa383c1d4ed2d4f0d69c26b79fb2f0dd735a84de2fd5a23d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:37Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:37 crc kubenswrapper[4853]: I1122 07:11:37.793815 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://347fc1ee4909acd862f23a9f25b4ac7ab7271ff03f0f726479d9b711820970c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:37Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:37 crc kubenswrapper[4853]: I1122 07:11:37.798076 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:37 crc kubenswrapper[4853]: I1122 07:11:37.798222 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:37 crc kubenswrapper[4853]: I1122 07:11:37.798308 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:37 crc kubenswrapper[4853]: I1122 07:11:37.798450 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:37 crc kubenswrapper[4853]: I1122 07:11:37.798554 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:37Z","lastTransitionTime":"2025-11-22T07:11:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:37 crc kubenswrapper[4853]: I1122 07:11:37.821632 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"476c875a-2b87-419a-8042-0ba059620fd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://babd614ecd140db7467335933695fcc122380ef2c6a3d4ae6e89b0ed29e87d69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zkhrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28cf28a4f0e05df5ad55eff2ab13e375fb1d5725f1d0e3b85c7c9cd785cf4453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zkhrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fflvd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:37Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:37 crc kubenswrapper[4853]: I1122 07:11:37.835481 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nhlw4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81cf3334-f910-4d46-be00-b3cd66ba8ed4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbfa536576b192a75c2a3eef9dfde8c7f7de0a8e6edae3870f45f1c44cc48163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ckff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86bcf649be20aa1cc1bce89663ea6b7b920ac2548fbf8a22c87636bdafc4b329\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ckff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:
13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nhlw4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:37Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:37 crc kubenswrapper[4853]: I1122 07:11:37.850310 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9003355-0dc6-42ad-950d-a7884e333f4c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f0019c0473b7ccc974dc1659a7ae0565b39d69af17094c420625387ecc9d7f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94704fd95e75549ea29613f00da78d4517cc64896cf1274c9c6f784a61f4ad4a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T07:10:53Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI1122 07:10:26.514729 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI1122 07:10:26.525241 1 observer_polling.go:159] Starting file observer\\\\nI1122 07:10:26.883412 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI1122 07:10:26.897202 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI1122 07:10:50.420853 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nI1122 07:10:51.525898 1 observer_polling.go:162] Shutting down file observer\\\\nF1122 07:10:53.463148 1 cmd.go:179] failed checking apiserver connectivity: client rate limiter Wait returned an error: context canceled\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:21Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9736c38776f22909b8e8a832683d1b881f020257f7183397286e0dd23c973a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30f542bc05dc23bc8dff5029894d08a6fea8333de31a66432814b7bbc412c4fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5d8d77818159ca98019aac809db4b1b2460723bc56b9b6728eec11faeefd026\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager
-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:37Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:37 crc kubenswrapper[4853]: I1122 07:11:37.863471 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:37Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:37 crc kubenswrapper[4853]: I1122 07:11:37.876004 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-pd6gs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9cc2bf97-eb39-4b0c-abda-99b49bb530fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bzn4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bzn4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:15Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-pd6gs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:37Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:37 crc kubenswrapper[4853]: I1122 07:11:37.900666 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8dcea4b-e887-437a-972d-e6e10d41f725\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://753693005f8490e9928780fd4f230a1c8aba453e1ccbca71e70299799bfa46ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64d65e4d54cc8971b6f6349f7d32243b7faa5bb5974b22ed904a36fa34bc69fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0ba60480f038947ca4a2b9f963d38cb004714735d411218eaf6231614c4e21f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c83df4b798499177cfbd1372a939649b6cebb7
1b50b695af5ba7815cf450669\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed9e89cbcdfc1ac8430f617ee31a367c03be0a407da8af967930fa821e6a12a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://742d02f24a7096b76f82453ab15e531d4a378b4bbf0bb36eb1a1cb4ae1873165\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://742d02f24a7096b76f82453ab15e531d4a378b4bbf0bb36eb1a1cb4ae1873165\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2253bab71d535b61f7489679566abc1acf210e0d5f69773fedbb02befab17c2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2253bab71d535b61f7489679566abc1acf210e0d5f69773fedbb02befab17c2d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:21Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8fa71a988b649937f137ffb7533456570bf4fcd199134424fc7bff2b62335f1e\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fa71a988b649937f137ffb7533456570bf4fcd199134424fc7bff2b62335f1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:16Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:37Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:37 crc kubenswrapper[4853]: I1122 07:11:37.901488 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:37 crc kubenswrapper[4853]: I1122 07:11:37.901542 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:37 crc kubenswrapper[4853]: I1122 07:11:37.901553 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:37 crc kubenswrapper[4853]: I1122 07:11:37.901578 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:37 crc kubenswrapper[4853]: I1122 07:11:37.901600 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:37Z","lastTransitionTime":"2025-11-22T07:11:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:37 crc kubenswrapper[4853]: I1122 07:11:37.917233 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9c07602dc6c3b45809dd76a7079002e54e0bae24f7e3bc3470b463389303e74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:37Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:37 crc kubenswrapper[4853]: I1122 07:11:37.931694 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:37Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:37 crc kubenswrapper[4853]: I1122 07:11:37.945119 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:37Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:37 crc kubenswrapper[4853]: I1122 07:11:37.958975 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rvgxj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbbe3472-17cc-48dd-8e46-393b00149429\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5acc4b664aaf1d5b0471beb25334e65152bc3b9907f911f90e4e7899aaa1ce8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ct6dm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rvgxj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:37Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:37 crc kubenswrapper[4853]: I1122 07:11:37.974415 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ckn94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f60b37f-d6f5-4145-a3e7-cfe92fca6d77\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbf630e17f662af8880d1f34d5073a4f64e723987205f7bbb473a73808c7e935\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\
"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3adc38c2534c8026e260a63060426102130ca1bcbcb4e3741aaf4e87170a4c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3adc38c2534c8026e260a63060426102130ca1bcbcb4e3741aaf4e87170a4c7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db0b80c2471d5179ede5c4055fcebaabe08892f1534218952c2a19ab599be022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db0b80c2471d5179ede5c4055fcebaabe08892f1534218952c2a19ab599be022\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://157ea229b1effe5885acafb7b48db5695864ff001b1293c053aefb494f544f7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://157ea229b1effe5885acafb7b48db5695864ff001b1293c053aefb494f544f7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:11Z\\\",\\\"reason\\\":\
\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://203cba48e280628c33b464c336957085420ca2d345b1acfb5c8b8968ed2317e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://203cba48e280628c33b464c336957085420ca2d345b1acfb5c8b8968ed2317e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a10cb7826668319845471d465df1784a2cfba12ec3892e986bcf316c0c5e3c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a10cb7826668319845471d465df1784a2cfba12ec3892e986bcf316c0c5e3c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc93eb03f5f0cada6bf132a9de4446bc1b8ecafee3769bf18feeecea9d71478f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\
",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc93eb03f5f0cada6bf132a9de4446bc1b8ecafee3769bf18feeecea9d71478f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ckn94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:37Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:37 crc kubenswrapper[4853]: I1122 07:11:37.993245 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"19e6b5d3-a090-47b9-bddc-dd2aaccd4ff4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96b61660fcd78f2f81eab2c1a0cb32018a5b32d49581167e33e7dca8ba5ddf2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a0d61ce2e58ce8d9abdd1fb9688722d8e25f8354553aef76b8cd4b3897135a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d746
2\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21be0b3346d0dd20c21a867b8fbd30e6291d09bd1745febbc2face7610f6d831\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e4d5ab3d780b4ec1360dd8a07fc7de5d4daec3a5f8334010949d4fd60b5516b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6dbbb1c22bd116e5913c6bc774b4705f0e4342861ceae30dc0f53c30d6fa39f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T07:10:53Z\\\",\\\"message\\\":\\\" envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1122 07:10:53.494379 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1122 07:10:53.495005 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1122 07:10:53.495036 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1122 07:10:53.495110 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1122 07:10:53.495131 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1122 07:10:53.496216 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1763795444\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1763795444\\\\\\\\\\\\\\\" (2025-11-22 06:10:44 +0000 UTC to 2026-11-22 06:10:44 +0000 UTC (now=2025-11-22 07:10:53.49617265 +0000 UTC))\\\\\\\"\\\\nI1122 07:10:53.496261 1 secure_serving.go:213] Serving securely on 
[::]:17697\\\\nI1122 07:10:53.496336 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1122 07:10:53.497036 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1122 07:10:53.497137 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1122 07:10:53.498721 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI1122 07:10:53.500291 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF1122 07:10:53.501305 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:32Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c1ee716027cb8b663ee9faec3faed873df5079418b9b46574cf3b3a74048eef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13ca1fba1325e0ae8556def3ba0c98aa0aff6e3b9ed2aab66a5c950d2bace44e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13ca1fba1325e0ae8556def3ba0c98aa0aff6e3b9ed2aab66a5c950d2bace44e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:37Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:38 crc kubenswrapper[4853]: I1122 07:11:38.003688 4853 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-dns/node-resolver-mlpz8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c09200ba-013f-45e3-b581-8523557344b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3feee491b899c2b2f58330fe2e87ba5404deca4249c5d39a6d3aa08acd10ef29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvk87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mlpz8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:38Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:38 crc kubenswrapper[4853]: I1122 07:11:38.005501 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:38 crc kubenswrapper[4853]: I1122 07:11:38.005554 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:38 crc kubenswrapper[4853]: I1122 07:11:38.005568 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:38 crc kubenswrapper[4853]: I1122 07:11:38.005592 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:38 crc kubenswrapper[4853]: I1122 07:11:38.005608 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:38Z","lastTransitionTime":"2025-11-22T07:11:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:38 crc kubenswrapper[4853]: I1122 07:11:38.025401 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqtsz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"893f7e02-580a-4093-ab42-ea73ffffcfe6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://959a487fe36b2bf7615c5ed65bc8255b52332f6e2d973912a5e028ff35324f2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://979f6b48ff6c91deca2bb0784d1ec745c4c0d70abdfd6f519eb82058fec05c5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34d7101a46b49cedde91b3502f1bc962179253a1bfbe5f21599ce89a91ab905d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28ab876013e6c4eb3a4806b0184f1d6615a478a34b6d8ee790d29bccd77f33df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccf377912acc39f2d30013ef3bd58a8268b927fd89da2306fcf00cb89c30893e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a08eb68f9d8b071226ac64744097fe0097ab78f86242872c4e0aacdd213adbc\\\",\\\"image\\\":\\\"quay.io/openshift-release
-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bfe75c62e217cccff97aad20cda18675013af6a3b1b10ef60227be8ea4965fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb3ade875e1b6d2182a278be4708aa0e473e27bb2f8a06ba5dc97ca0d03f629b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-22T07:11:24Z\\\",\\\"message\\\":\\\":160\\\\nI1122 07:11:24.426832 6516 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1122 07:11:24.426933 6516 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI1122 07:11:24.426998 6516 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1122 07:11:24.427601 6516 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1122 07:11:24.427608 6516 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1122 07:11:24.427783 6516 factory.go:656] Stopping watch factory\\\\nI1122 07:11:24.427832 6516 handler.go:208] Removed *v1.Node event handler 2\\\\nI1122 07:11:24.430415 6516 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI1122 07:11:24.430434 6516 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI1122 07:11:24.430512 6516 ovnkube.go:599] Stopped ovnkube\\\\nI1122 07:11:24.430561 6516 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1122 07:11:24.430654 6516 ovnkube.go:137] failed to run 
ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:23Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://902e41148618e1099d651f2a4cbfa00ca1dd11533b086564f090dd498ecc1b95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"con
tainerID\\\":\\\"cri-o://a7b9396399dd9787c1addcb4e2e7697d78fee817d56cafd408b08b2b7d0cc5f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7b9396399dd9787c1addcb4e2e7697d78fee817d56cafd408b08b2b7d0cc5f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pqtsz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:38Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:38 crc kubenswrapper[4853]: I1122 07:11:38.039669 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9nx9m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"120cba0a-6e0b-40b3-8c15-46e7ff7c8641\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7b7b92862fc6cca812cb5c0aa9b320fbbd11f8b458d591b56cacab985a79edd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4p7q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9nx9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:38Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:38 crc kubenswrapper[4853]: I1122 07:11:38.108523 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:38 crc kubenswrapper[4853]: I1122 07:11:38.108598 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:38 crc kubenswrapper[4853]: I1122 07:11:38.108610 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:38 crc kubenswrapper[4853]: I1122 07:11:38.108631 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:38 crc kubenswrapper[4853]: I1122 07:11:38.108645 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:38Z","lastTransitionTime":"2025-11-22T07:11:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:38 crc kubenswrapper[4853]: I1122 07:11:38.212338 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:38 crc kubenswrapper[4853]: I1122 07:11:38.212421 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:38 crc kubenswrapper[4853]: I1122 07:11:38.212440 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:38 crc kubenswrapper[4853]: I1122 07:11:38.212925 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:38 crc kubenswrapper[4853]: I1122 07:11:38.213541 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:38Z","lastTransitionTime":"2025-11-22T07:11:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:38 crc kubenswrapper[4853]: I1122 07:11:38.316315 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:38 crc kubenswrapper[4853]: I1122 07:11:38.316364 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:38 crc kubenswrapper[4853]: I1122 07:11:38.316377 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:38 crc kubenswrapper[4853]: I1122 07:11:38.316399 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:38 crc kubenswrapper[4853]: I1122 07:11:38.316412 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:38Z","lastTransitionTime":"2025-11-22T07:11:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:38 crc kubenswrapper[4853]: I1122 07:11:38.420016 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:38 crc kubenswrapper[4853]: I1122 07:11:38.420081 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:38 crc kubenswrapper[4853]: I1122 07:11:38.420099 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:38 crc kubenswrapper[4853]: I1122 07:11:38.420123 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:38 crc kubenswrapper[4853]: I1122 07:11:38.420160 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:38Z","lastTransitionTime":"2025-11-22T07:11:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:38 crc kubenswrapper[4853]: I1122 07:11:38.524972 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:38 crc kubenswrapper[4853]: I1122 07:11:38.525086 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:38 crc kubenswrapper[4853]: I1122 07:11:38.525115 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:38 crc kubenswrapper[4853]: I1122 07:11:38.525150 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:38 crc kubenswrapper[4853]: I1122 07:11:38.525175 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:38Z","lastTransitionTime":"2025-11-22T07:11:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:38 crc kubenswrapper[4853]: I1122 07:11:38.629170 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:38 crc kubenswrapper[4853]: I1122 07:11:38.629322 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:38 crc kubenswrapper[4853]: I1122 07:11:38.629340 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:38 crc kubenswrapper[4853]: I1122 07:11:38.629364 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:38 crc kubenswrapper[4853]: I1122 07:11:38.629383 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:38Z","lastTransitionTime":"2025-11-22T07:11:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:38 crc kubenswrapper[4853]: I1122 07:11:38.732326 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:38 crc kubenswrapper[4853]: I1122 07:11:38.732383 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:38 crc kubenswrapper[4853]: I1122 07:11:38.732394 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:38 crc kubenswrapper[4853]: I1122 07:11:38.732412 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:38 crc kubenswrapper[4853]: I1122 07:11:38.732425 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:38Z","lastTransitionTime":"2025-11-22T07:11:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:38 crc kubenswrapper[4853]: I1122 07:11:38.746674 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 07:11:38 crc kubenswrapper[4853]: I1122 07:11:38.746822 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:11:38 crc kubenswrapper[4853]: E1122 07:11:38.747274 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 07:11:38 crc kubenswrapper[4853]: I1122 07:11:38.746835 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pd6gs" Nov 22 07:11:38 crc kubenswrapper[4853]: E1122 07:11:38.747508 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pd6gs" podUID="9cc2bf97-eb39-4b0c-abda-99b49bb530fd" Nov 22 07:11:38 crc kubenswrapper[4853]: I1122 07:11:38.746822 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 07:11:38 crc kubenswrapper[4853]: E1122 07:11:38.747771 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 07:11:38 crc kubenswrapper[4853]: E1122 07:11:38.747334 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 07:11:38 crc kubenswrapper[4853]: I1122 07:11:38.747914 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pqtsz_893f7e02-580a-4093-ab42-ea73ffffcfe6/ovnkube-controller/2.log" Nov 22 07:11:38 crc kubenswrapper[4853]: I1122 07:11:38.749047 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pqtsz_893f7e02-580a-4093-ab42-ea73ffffcfe6/ovnkube-controller/1.log" Nov 22 07:11:38 crc kubenswrapper[4853]: I1122 07:11:38.753399 4853 generic.go:334] "Generic (PLEG): container finished" podID="893f7e02-580a-4093-ab42-ea73ffffcfe6" containerID="0bfe75c62e217cccff97aad20cda18675013af6a3b1b10ef60227be8ea4965fc" exitCode=1 Nov 22 07:11:38 crc kubenswrapper[4853]: I1122 07:11:38.753559 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqtsz" event={"ID":"893f7e02-580a-4093-ab42-ea73ffffcfe6","Type":"ContainerDied","Data":"0bfe75c62e217cccff97aad20cda18675013af6a3b1b10ef60227be8ea4965fc"} Nov 22 07:11:38 crc kubenswrapper[4853]: I1122 07:11:38.753733 4853 scope.go:117] "RemoveContainer" containerID="fb3ade875e1b6d2182a278be4708aa0e473e27bb2f8a06ba5dc97ca0d03f629b" Nov 22 07:11:38 crc kubenswrapper[4853]: I1122 07:11:38.754553 4853 scope.go:117] "RemoveContainer" containerID="0bfe75c62e217cccff97aad20cda18675013af6a3b1b10ef60227be8ea4965fc" Nov 22 07:11:38 crc kubenswrapper[4853]: E1122 07:11:38.755089 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-pqtsz_openshift-ovn-kubernetes(893f7e02-580a-4093-ab42-ea73ffffcfe6)\"" pod="openshift-ovn-kubernetes/ovnkube-node-pqtsz" podUID="893f7e02-580a-4093-ab42-ea73ffffcfe6" Nov 22 07:11:38 crc kubenswrapper[4853]: I1122 07:11:38.774561 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rvgxj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbbe3472-17cc-48dd-8e46-393b00149429\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5acc4b664aaf1d5b0471beb25334e65152bc3b9907f911f90e4e7899aaa1ce8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ct6dm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rvgxj\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:38Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:38 crc kubenswrapper[4853]: I1122 07:11:38.797254 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ckn94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f60b37f-d6f5-4145-a3e7-cfe92fca6d77\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbf630e17f662af8880d1f34d5073a4f64e723987205f7bbb473a73808c7e935\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3adc38c2534c8026e260a63060426102130ca1bcbcb4e3741aaf4e87170a4c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3adc38c2534c8026e260a63060426102130ca1bcbcb4e3741aaf4e87170a4c7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db0b80c2471d5179ede5c4055fcebaabe08892f1534218952c2a19ab599be022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db0b80c2471d5179ede5c4055fcebaabe08892f1534218952c2a19ab599be022\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://157ea229b1effe5885acafb7b48db5695864ff001b1293c053aefb494f544f7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://157ea229b1effe5885acafb7b48db5695864ff001b1293c053aefb494f544f7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://203cba48e280628c33b464c336957085420ca2d345b1acfb5c8b8968ed2317e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://203cba48e280628c33b464c336957085420ca2d345b1acfb5c8b8968ed2317e0\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a10cb7826668319845471d465df1784a2cfba12ec3892e986bcf316c0c5e3c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a10cb7826668319845471d465df1784a2cfba12ec3892e986bcf316c0c5e3c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc93eb03f5f0cada6bf132a9de4446bc1b8ecafee3769bf18feeecea9d71478f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc93eb03f5f0cada6bf132a9de4446bc1b8ecafee3769bf18feeecea9d71478f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ckn94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-22T07:11:38Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:38 crc kubenswrapper[4853]: I1122 07:11:38.809714 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-pd6gs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9cc2bf97-eb39-4b0c-abda-99b49bb530fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bzn4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bzn4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:15Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-pd6gs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:38Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:38 crc kubenswrapper[4853]: I1122 07:11:38.833305 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8dcea4b-e887-437a-972d-e6e10d41f725\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://753693005f8490e9928780fd4f230a1c8aba453e1ccbca71e70299799bfa46ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64d65e4d54cc8971b6f6349f7d32243b7faa5bb5974b22ed904a36fa34bc69fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0ba60480f038947ca4a2b9f963d38cb004714735d411218eaf6231614c4e21f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c83df4b798499177cfbd1372a939649b6cebb7
1b50b695af5ba7815cf450669\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed9e89cbcdfc1ac8430f617ee31a367c03be0a407da8af967930fa821e6a12a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://742d02f24a7096b76f82453ab15e531d4a378b4bbf0bb36eb1a1cb4ae1873165\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://742d02f24a7096b76f82453ab15e531d4a378b4bbf0bb36eb1a1cb4ae1873165\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2253bab71d535b61f7489679566abc1acf210e0d5f69773fedbb02befab17c2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2253bab71d535b61f7489679566abc1acf210e0d5f69773fedbb02befab17c2d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:21Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8fa71a988b649937f137ffb7533456570bf4fcd199134424fc7bff2b62335f1e\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fa71a988b649937f137ffb7533456570bf4fcd199134424fc7bff2b62335f1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:16Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:38Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:38 crc kubenswrapper[4853]: I1122 07:11:38.835959 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:38 crc kubenswrapper[4853]: I1122 07:11:38.836010 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:38 crc kubenswrapper[4853]: I1122 07:11:38.836024 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:38 crc kubenswrapper[4853]: I1122 07:11:38.836046 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:38 crc kubenswrapper[4853]: I1122 07:11:38.836062 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:38Z","lastTransitionTime":"2025-11-22T07:11:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:38 crc kubenswrapper[4853]: I1122 07:11:38.847829 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9c07602dc6c3b45809dd76a7079002e54e0bae24f7e3bc3470b463389303e74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:38Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:38 crc kubenswrapper[4853]: I1122 07:11:38.865485 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:38Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:38 crc kubenswrapper[4853]: I1122 07:11:38.882942 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:38Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:38 crc kubenswrapper[4853]: I1122 07:11:38.902150 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"19e6b5d3-a090-47b9-bddc-dd2aaccd4ff4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96b61660fcd78f2f81eab2c1a0cb32018a5b32d49581167e33e7dca8ba5ddf2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a0d61ce2e58ce8d9abdd1fb9688722d8e25f8354553aef76b8cd4b3897135a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21be0b3346d0dd20c21a867b8fbd30e6291d09bd1745febbc2face7610f6d831\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e4d5ab3d780b4ec1360dd8a07fc7de5d4daec3a5f8334010949d4fd60b5516b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6dbbb1c22bd116e5913c6bc774b4705f0e4342861ceae30dc0f53c30d6fa39f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T07:10:53Z\\\",\\\"message\\\":\\\" envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1122 07:10:53.494379 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1122 07:10:53.495005 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1122 07:10:53.495036 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1122 07:10:53.495110 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1122 07:10:53.495131 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1122 07:10:53.496216 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1763795444\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1763795444\\\\\\\\\\\\\\\" (2025-11-22 06:10:44 +0000 UTC to 2026-11-22 06:10:44 +0000 UTC (now=2025-11-22 07:10:53.49617265 +0000 UTC))\\\\\\\"\\\\nI1122 07:10:53.496261 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1122 07:10:53.496336 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1122 07:10:53.497036 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1122 07:10:53.497137 1 
genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1122 07:10:53.498721 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI1122 07:10:53.500291 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF1122 07:10:53.501305 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:32Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c1ee716027cb8b663ee9faec3faed873df5079418b9b46574cf3b3a74048eef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13ca1fba1325e0ae8556def3ba0c98aa0aff6e3b9ed2aab66a5c950d2bace44e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13ca1fba1325e0ae8556def3ba0c98aa0aff6e3b9ed2aab66a5c950d2bace44e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:38Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:38 crc kubenswrapper[4853]: I1122 07:11:38.916586 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mlpz8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c09200ba-013f-45e3-b581-8523557344b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3feee491b899c2b2f58330fe2e87ba5404deca4249c5d39a6d3aa08acd10ef29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvk87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mlpz8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:38Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:38 crc kubenswrapper[4853]: I1122 07:11:38.939528 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:38 crc kubenswrapper[4853]: I1122 07:11:38.939599 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:38 crc kubenswrapper[4853]: I1122 07:11:38.939615 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:38 crc kubenswrapper[4853]: I1122 07:11:38.939638 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:38 crc kubenswrapper[4853]: I1122 07:11:38.939654 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:38Z","lastTransitionTime":"2025-11-22T07:11:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:38 crc kubenswrapper[4853]: I1122 07:11:38.943828 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqtsz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"893f7e02-580a-4093-ab42-ea73ffffcfe6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://959a487fe36b2bf7615c5ed65bc8255b52332f6e2d973912a5e028ff35324f2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://979f6b48ff6c91deca2bb0784d1ec745c4c0d70abdfd6f519eb82058fec05c5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34d7101a46b49cedde91b3502f1bc962179253a1bfbe5f21599ce89a91ab905d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28ab876013e6c4eb3a4806b0184f1d6615a478a34b6d8ee790d29bccd77f33df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccf377912acc39f2d30013ef3bd58a8268b927fd89da2306fcf00cb89c30893e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a08eb68f9d8b071226ac64744097fe0097ab78f86242872c4e0aacdd213adbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d7732574532
65a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bfe75c62e217cccff97aad20cda18675013af6a3b1b10ef60227be8ea4965fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb3ade875e1b6d2182a278be4708aa0e473e27bb2f8a06ba5dc97ca0d03f629b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-22T07:11:24Z\\\",\\\"message\\\":\\\":160\\\\nI1122 07:11:24.426832 6516 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1122 07:11:24.426933 6516 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI1122 07:11:24.426998 6516 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1122 07:11:24.427601 6516 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1122 07:11:24.427608 6516 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1122 07:11:24.427783 6516 factory.go:656] Stopping watch factory\\\\nI1122 07:11:24.427832 6516 handler.go:208] Removed *v1.Node event handler 2\\\\nI1122 07:11:24.430415 6516 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI1122 07:11:24.430434 6516 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI1122 07:11:24.430512 6516 ovnkube.go:599] Stopped ovnkube\\\\nI1122 07:11:24.430561 6516 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1122 07:11:24.430654 6516 ovnkube.go:137] failed to run 
ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:23Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0bfe75c62e217cccff97aad20cda18675013af6a3b1b10ef60227be8ea4965fc\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-22T07:11:38Z\\\",\\\"message\\\":\\\"r for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:38Z is after 2025-08-24T17:21:41Z]\\\\nI1122 07:11:38.093678 6733 services_controller.go:443] Built service openshift-etcd/etcd LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.5.253\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:2379, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}, services.lbConfig{vips:[]string{\\\\\\\"10.217.5.253\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:9979, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI1122 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://902e41148618e1099d651f2a4cbfa00ca1dd11533b086564f090dd498ecc1b95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7b9396399dd9787c1addcb4e2e7697d78fee817d56cafd408b08b2b7d0cc5f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099
482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7b9396399dd9787c1addcb4e2e7697d78fee817d56cafd408b08b2b7d0cc5f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pqtsz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:38Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:38 crc kubenswrapper[4853]: I1122 07:11:38.958436 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9nx9m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120cba0a-6e0b-40b3-8c15-46e7ff7c8641\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7b7b92862fc6cca812cb5c0aa9b320fbbd11f8b458d591b56cacab985a79edd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4p7q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.
11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9nx9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:38Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:38 crc kubenswrapper[4853]: I1122 07:11:38.974434 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nhlw4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81cf3334-f910-4d46-be00-b3cd66ba8ed4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbfa536576b192a75c2a3eef9dfde8c7f7de0a8e6edae3870f45f1c44cc48163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ckff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86bcf649be20aa1cc1bce89663ea6b7b920ac2548fbf8a22c87636bdafc4b329\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var
/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ckff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nhlw4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:38Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:38 crc kubenswrapper[4853]: I1122 07:11:38.991616 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63551235-20f7-4ccc-a132-9f5302e01da5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0368eb3431a7b8e0225b3728f72d601ce3f9639f06175b96818fd798b33e4143\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fc661089ab1f7286ead68fd510a0848c9df8dd41a8ed349d03b43b72e1a369a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\
"cri-o://cdc0676190f39e3a925b593e663ec1af1fa253b8177b7cb81fd49515e0c55c3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a97a0d92e5fdd8906fa4f150c4b7d95fdd1f3b92ac45a46e0a6846fd4a15bb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a97a0d92e5fdd8906fa4f150c4b7d95fdd1f3b92ac45a46e0a6846fd4a15bb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:19Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:16Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:38Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:39 crc kubenswrapper[4853]: I1122 07:11:39.020458 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b27716f9e6c912f968c4a340199cdc2dde0262d86b4832f84cae8daefa831909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a2fb699e2e8a016fa383c1d4ed2d4f0d69c26b79fb2f0dd735a84de2fd5a23d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:39Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:39 crc kubenswrapper[4853]: I1122 07:11:39.034520 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://347fc1ee4909acd862f23a9f25b4ac7ab7271ff03f0f726479d9b711820970c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:39Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:39 crc kubenswrapper[4853]: I1122 07:11:39.042958 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:39 crc kubenswrapper[4853]: I1122 07:11:39.043058 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:39 crc kubenswrapper[4853]: I1122 07:11:39.043083 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:39 crc kubenswrapper[4853]: I1122 07:11:39.043107 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:39 crc kubenswrapper[4853]: I1122 07:11:39.043125 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:39Z","lastTransitionTime":"2025-11-22T07:11:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:39 crc kubenswrapper[4853]: I1122 07:11:39.048570 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"476c875a-2b87-419a-8042-0ba059620fd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://babd614ecd140db7467335933695fcc122380ef2c6a3d4ae6e89b0ed29e87d69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zkhrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28cf28a4f0e05df5ad55eff2ab13e375fb1d5725f1d0e3b85c7c9cd785cf4453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zkhrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fflvd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:39Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:39 crc kubenswrapper[4853]: I1122 07:11:39.062066 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9003355-0dc6-42ad-950d-a7884e333f4c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f0019c0473b7ccc974dc1659a7ae0565b39d69af17094c420625387ecc9d7f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94704fd95e75549ea29613f00da78d4517cc64896cf1274c9c6f784a61f4ad4a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T07:10:53Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI1122 07:10:26.514729 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI1122 07:10:26.525241 1 observer_polling.go:159] Starting file observer\\\\nI1122 07:10:26.883412 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI1122 07:10:26.897202 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI1122 07:10:50.420853 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nI1122 07:10:51.525898 1 observer_polling.go:162] Shutting down file observer\\\\nF1122 07:10:53.463148 1 cmd.go:179] failed checking apiserver connectivity: client rate limiter Wait returned an error: context canceled\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:21Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9736c38776f22909b8e8a832683d1b881f020257f7183397286e0dd23c973a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30f542bc05dc23bc8dff5029894d08a6fea8333de31a66432814b7bbc412c4fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5d8d77818159ca98019aac809db4b1b2460723bc56b9b6728eec11faeefd026\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager
-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:39Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:39 crc kubenswrapper[4853]: I1122 07:11:39.074503 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:39Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:39 crc kubenswrapper[4853]: I1122 07:11:39.150490 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:39 crc kubenswrapper[4853]: I1122 07:11:39.150566 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:39 crc kubenswrapper[4853]: I1122 07:11:39.150582 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:39 crc kubenswrapper[4853]: I1122 07:11:39.150605 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:39 crc kubenswrapper[4853]: I1122 07:11:39.150621 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:39Z","lastTransitionTime":"2025-11-22T07:11:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:39 crc kubenswrapper[4853]: I1122 07:11:39.254643 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:39 crc kubenswrapper[4853]: I1122 07:11:39.254793 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:39 crc kubenswrapper[4853]: I1122 07:11:39.254814 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:39 crc kubenswrapper[4853]: I1122 07:11:39.254838 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:39 crc kubenswrapper[4853]: I1122 07:11:39.254855 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:39Z","lastTransitionTime":"2025-11-22T07:11:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:39 crc kubenswrapper[4853]: I1122 07:11:39.358205 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:39 crc kubenswrapper[4853]: I1122 07:11:39.358279 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:39 crc kubenswrapper[4853]: I1122 07:11:39.358297 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:39 crc kubenswrapper[4853]: I1122 07:11:39.358324 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:39 crc kubenswrapper[4853]: I1122 07:11:39.358346 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:39Z","lastTransitionTime":"2025-11-22T07:11:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:39 crc kubenswrapper[4853]: I1122 07:11:39.461958 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:39 crc kubenswrapper[4853]: I1122 07:11:39.462035 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:39 crc kubenswrapper[4853]: I1122 07:11:39.462053 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:39 crc kubenswrapper[4853]: I1122 07:11:39.462080 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:39 crc kubenswrapper[4853]: I1122 07:11:39.462100 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:39Z","lastTransitionTime":"2025-11-22T07:11:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:39 crc kubenswrapper[4853]: I1122 07:11:39.566015 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:39 crc kubenswrapper[4853]: I1122 07:11:39.566067 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:39 crc kubenswrapper[4853]: I1122 07:11:39.566077 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:39 crc kubenswrapper[4853]: I1122 07:11:39.566097 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:39 crc kubenswrapper[4853]: I1122 07:11:39.566109 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:39Z","lastTransitionTime":"2025-11-22T07:11:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:39 crc kubenswrapper[4853]: I1122 07:11:39.669982 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:39 crc kubenswrapper[4853]: I1122 07:11:39.670058 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:39 crc kubenswrapper[4853]: I1122 07:11:39.670077 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:39 crc kubenswrapper[4853]: I1122 07:11:39.670107 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:39 crc kubenswrapper[4853]: I1122 07:11:39.670128 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:39Z","lastTransitionTime":"2025-11-22T07:11:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:39 crc kubenswrapper[4853]: I1122 07:11:39.760123 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pqtsz_893f7e02-580a-4093-ab42-ea73ffffcfe6/ovnkube-controller/2.log" Nov 22 07:11:39 crc kubenswrapper[4853]: I1122 07:11:39.765436 4853 scope.go:117] "RemoveContainer" containerID="0bfe75c62e217cccff97aad20cda18675013af6a3b1b10ef60227be8ea4965fc" Nov 22 07:11:39 crc kubenswrapper[4853]: E1122 07:11:39.765730 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-pqtsz_openshift-ovn-kubernetes(893f7e02-580a-4093-ab42-ea73ffffcfe6)\"" pod="openshift-ovn-kubernetes/ovnkube-node-pqtsz" podUID="893f7e02-580a-4093-ab42-ea73ffffcfe6" Nov 22 07:11:39 crc kubenswrapper[4853]: I1122 07:11:39.772984 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:39 crc kubenswrapper[4853]: I1122 07:11:39.773088 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:39 crc kubenswrapper[4853]: I1122 07:11:39.773109 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:39 crc kubenswrapper[4853]: I1122 07:11:39.773138 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:39 crc kubenswrapper[4853]: I1122 07:11:39.773163 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:39Z","lastTransitionTime":"2025-11-22T07:11:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:39 crc kubenswrapper[4853]: I1122 07:11:39.786078 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9c07602dc6c3b45809dd76a7079002e54e0bae24f7e3bc3470b463389303e74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:39Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:39 crc kubenswrapper[4853]: I1122 07:11:39.808597 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:39Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:39 crc kubenswrapper[4853]: I1122 07:11:39.829512 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:39Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:39 crc kubenswrapper[4853]: I1122 07:11:39.847571 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rvgxj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbbe3472-17cc-48dd-8e46-393b00149429\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5acc4b664aaf1d5b0471beb25334e65152bc3b9907f911f90e4e7899aaa1ce8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ct6dm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rvgxj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:39Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:39 crc kubenswrapper[4853]: I1122 07:11:39.865019 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ckn94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f60b37f-d6f5-4145-a3e7-cfe92fca6d77\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbf630e17f662af8880d1f34d5073a4f64e723987205f7bbb473a73808c7e935\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\
"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3adc38c2534c8026e260a63060426102130ca1bcbcb4e3741aaf4e87170a4c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3adc38c2534c8026e260a63060426102130ca1bcbcb4e3741aaf4e87170a4c7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db0b80c2471d5179ede5c4055fcebaabe08892f1534218952c2a19ab599be022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db0b80c2471d5179ede5c4055fcebaabe08892f1534218952c2a19ab599be022\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://157ea229b1effe5885acafb7b48db5695864ff001b1293c053aefb494f544f7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://157ea229b1effe5885acafb7b48db5695864ff001b1293c053aefb494f544f7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:11Z\\\",\\\"reason\\\":\
\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://203cba48e280628c33b464c336957085420ca2d345b1acfb5c8b8968ed2317e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://203cba48e280628c33b464c336957085420ca2d345b1acfb5c8b8968ed2317e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a10cb7826668319845471d465df1784a2cfba12ec3892e986bcf316c0c5e3c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a10cb7826668319845471d465df1784a2cfba12ec3892e986bcf316c0c5e3c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc93eb03f5f0cada6bf132a9de4446bc1b8ecafee3769bf18feeecea9d71478f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\
",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc93eb03f5f0cada6bf132a9de4446bc1b8ecafee3769bf18feeecea9d71478f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ckn94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:39Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:39 crc kubenswrapper[4853]: I1122 07:11:39.875476 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:39 crc kubenswrapper[4853]: I1122 07:11:39.875558 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:39 crc kubenswrapper[4853]: I1122 07:11:39.875586 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:39 crc kubenswrapper[4853]: I1122 07:11:39.875617 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:39 crc kubenswrapper[4853]: I1122 07:11:39.875644 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:39Z","lastTransitionTime":"2025-11-22T07:11:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:39 crc kubenswrapper[4853]: I1122 07:11:39.882275 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-pd6gs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9cc2bf97-eb39-4b0c-abda-99b49bb530fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bzn4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bzn4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:15Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-pd6gs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:39Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:39 crc kubenswrapper[4853]: I1122 07:11:39.917317 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8dcea4b-e887-437a-972d-e6e10d41f725\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://753693005f8490e9928780fd4f230a1c8aba453e1ccbca71e70299799bfa46ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64d65e4d54cc8971b6f6349f7d32243b7faa5bb5974b22ed904a36fa34bc69fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0ba60480f038947ca4a2b9f963d38cb004714735d411218eaf6231614c4e21f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c83df4b798499177cfbd1372a939649b6cebb7
1b50b695af5ba7815cf450669\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed9e89cbcdfc1ac8430f617ee31a367c03be0a407da8af967930fa821e6a12a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://742d02f24a7096b76f82453ab15e531d4a378b4bbf0bb36eb1a1cb4ae1873165\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://742d02f24a7096b76f82453ab15e531d4a378b4bbf0bb36eb1a1cb4ae1873165\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2253bab71d535b61f7489679566abc1acf210e0d5f69773fedbb02befab17c2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2253bab71d535b61f7489679566abc1acf210e0d5f69773fedbb02befab17c2d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:21Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8fa71a988b649937f137ffb7533456570bf4fcd199134424fc7bff2b62335f1e\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fa71a988b649937f137ffb7533456570bf4fcd199134424fc7bff2b62335f1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:16Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:39Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:39 crc kubenswrapper[4853]: I1122 07:11:39.933978 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mlpz8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c09200ba-013f-45e3-b581-8523557344b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3feee491b899c2b2f58330fe2e87ba5404deca4249c5d39a6d3aa08acd10ef29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvk87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\
\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mlpz8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:39Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:39 crc kubenswrapper[4853]: I1122 07:11:39.957609 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqtsz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"893f7e02-580a-4093-ab42-ea73ffffcfe6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://959a487fe36b2bf7615c5ed65bc8255b52332f6e2d973912a5e028ff35324f2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://979f6b48ff6c91deca2bb0784d1ec745c4c0d70abdfd6f519eb82058fec05c5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34d7101a46b49cedde91b3502f1bc962179253a1bfbe5f21599ce89a91ab905d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28ab876013e6c4eb3a4806b0184f1d6615a478a34b6d8ee790d29bccd77f33df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccf377912acc39f2d30013ef3bd58a8268b927fd89da2306fcf00cb89c30893e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a08eb68f9d8b071226ac64744097fe0097ab78f86242872c4e0aacdd213adbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bfe75c62e217cccff97aad20cda18675013af6a
3b1b10ef60227be8ea4965fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0bfe75c62e217cccff97aad20cda18675013af6a3b1b10ef60227be8ea4965fc\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-22T07:11:38Z\\\",\\\"message\\\":\\\"r for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:38Z is after 2025-08-24T17:21:41Z]\\\\nI1122 07:11:38.093678 6733 services_controller.go:443] Built service openshift-etcd/etcd LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.5.253\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:2379, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}, services.lbConfig{vips:[]string{\\\\\\\"10.217.5.253\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:9979, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI1122 \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:37Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-pqtsz_openshift-ovn-kubernetes(893f7e02-580a-4093-ab42-ea73ffffcfe6)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://902e41148618e1099d651f2a4cbfa00ca1dd11533b086564f090dd498ecc1b95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7b9396399dd9787c1addcb4e2e7697d78fee817d56cafd408b08b2b7d0cc5f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7b9396399dd9787c1addcb4e2e7697d78fee817d56cafd408b08b2b7d0cc5f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pqtsz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:39Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:39 crc kubenswrapper[4853]: I1122 07:11:39.972428 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9nx9m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120cba0a-6e0b-40b3-8c15-46e7ff7c8641\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7b7b92862fc6cca812cb5c0aa9b320fbbd11f8b458d591b56cacab985a79edd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4p7q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9nx9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:39Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:39 crc kubenswrapper[4853]: I1122 07:11:39.978668 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:39 crc kubenswrapper[4853]: I1122 07:11:39.978718 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:39 crc kubenswrapper[4853]: I1122 07:11:39.978731 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:39 crc kubenswrapper[4853]: I1122 07:11:39.978771 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:39 crc kubenswrapper[4853]: I1122 07:11:39.978783 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:39Z","lastTransitionTime":"2025-11-22T07:11:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:39 crc kubenswrapper[4853]: I1122 07:11:39.993601 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"19e6b5d3-a090-47b9-bddc-dd2aaccd4ff4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96b61660fcd78f2f81eab2c1a0cb32018a5b32d49581167e33e7dca8ba5ddf2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a0d61ce2e58ce8d9abdd1fb9688722d8e25f8354553aef76b8cd4b3897135a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21be0b3346d0dd20c21a867b8fbd30e6291d09bd1745febbc2face7610f6d831\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e4d5ab3d780b4ec1360dd8a07fc7de5d4daec3a5f8334010949d4fd60b5516b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6dbbb1c22bd116e5913c6bc774b4705f0e4342861ceae30dc0f53c30d6fa39f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T07:10:53Z\\\",\\\"message\\\":\\\" envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1122 07:10:53.494379 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1122 07:10:53.495005 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1122 07:10:53.495036 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1122 07:10:53.495110 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1122 07:10:53.495131 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1122 07:10:53.496216 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1763795444\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1763795444\\\\\\\\\\\\\\\" (2025-11-22 06:10:44 +0000 UTC to 2026-11-22 06:10:44 +0000 UTC (now=2025-11-22 07:10:53.49617265 +0000 UTC))\\\\\\\"\\\\nI1122 07:10:53.496261 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1122 07:10:53.496336 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1122 07:10:53.497036 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1122 07:10:53.497137 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1122 07:10:53.498721 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI1122 07:10:53.500291 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF1122 07:10:53.501305 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:32Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c1ee716027cb8b663ee9faec3faed873df5079418b9b46574cf3b3a74048eef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13ca1fba1325e0ae8556def3ba0c98aa0aff6e3b9ed2aab66a5c950d2bace44e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13ca1fba1325e0ae8556def3ba0c98aa0aff6e3b9ed2aab66a5c950d2bace44e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:39Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:40 crc kubenswrapper[4853]: I1122 07:11:40.012414 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b27716f9e6c912f968c4a340199cdc2dde0262d86b4832f84cae8daefa831909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a2fb699e2e8a016fa383c1d4ed2d4f0d69c26b79fb2f0dd735a84de2fd5a23d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:40Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:40 crc kubenswrapper[4853]: I1122 07:11:40.033411 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://347fc1ee4909acd862f23a9f25b4ac7ab7271ff03f0f726479d9b711820970c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:40Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:40 crc kubenswrapper[4853]: I1122 07:11:40.049446 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"476c875a-2b87-419a-8042-0ba059620fd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://babd614ecd140db7467335933695fcc122380ef2c6a3d4ae6e89b0ed29e87d69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zkhrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28cf28a4f0e05df5ad55eff2ab13e375fb1d5725f1d0e3b85c7c9cd785cf4453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zkhrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fflvd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:40Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:40 crc kubenswrapper[4853]: I1122 07:11:40.066526 4853 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nhlw4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81cf3334-f910-4d46-be00-b3cd66ba8ed4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbfa536576b192a75c2a3eef9dfde8c7f7de0a8e6edae3870f45f1c44cc48163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ckff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86bcf649be20aa1cc1bce89663ea6b7b920ac2548fbf8a22c87636bdafc4b329\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ckff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nhlw4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:40Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:40 crc kubenswrapper[4853]: I1122 07:11:40.081325 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:40 crc kubenswrapper[4853]: I1122 07:11:40.081373 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:40 crc kubenswrapper[4853]: I1122 07:11:40.081383 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:40 crc kubenswrapper[4853]: I1122 07:11:40.081404 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:40 crc kubenswrapper[4853]: I1122 07:11:40.081416 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:40Z","lastTransitionTime":"2025-11-22T07:11:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:40 crc kubenswrapper[4853]: I1122 07:11:40.083402 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63551235-20f7-4ccc-a132-9f5302e01da5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0368eb3431a7b8e0225b3728f72d601ce3f9639f06175b96818fd798b33e4143\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fc661089ab1f7286ead68fd510a0848c9df8dd41a8ed349d03b43b72e1a369a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a938006
6b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdc0676190f39e3a925b593e663ec1af1fa253b8177b7cb81fd49515e0c55c3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a97a0d92e5fdd8906fa4f150c4b7d95fdd1f3b92ac45a46e0a6846fd4a15bb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a97a0d92e5fdd8906fa4f150c4b7d95fdd1f3b92ac45a46e0a6846fd4a15bb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:19Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:16Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:40Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:40 crc kubenswrapper[4853]: I1122 07:11:40.100680 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:40Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:40 crc kubenswrapper[4853]: I1122 07:11:40.115134 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9003355-0dc6-42ad-950d-a7884e333f4c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f0019c0473b7ccc974dc1659a7ae0565b39d69af17094c420625387ecc9d7f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94704fd95e75549ea29613f00da78d4517cc64896cf1274c9c6f784a61f4ad4a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T07:10:53Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ 
exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI1122 07:10:26.514729 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI1122 07:10:26.525241 1 observer_polling.go:159] Starting file observer\\\\nI1122 07:10:26.883412 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI1122 07:10:26.897202 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI1122 07:10:50.420853 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nI1122 07:10:51.525898 1 observer_polling.go:162] Shutting down file observer\\\\nF1122 07:10:53.463148 1 cmd.go:179] failed checking apiserver connectivity: client rate limiter Wait returned an error: context canceled\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:21Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9736c38776f22909b8e8a832683d1b881f020257f7183397286e0dd23c973a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30f542bc05dc23bc8dff5029894d08a6fea8333de31a66432814b7bbc412c4fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cer
t-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5d8d77818159ca98019aac809db4b1b2460723bc56b9b6728eec11faeefd026\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:40Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:40 crc kubenswrapper[4853]: I1122 07:11:40.184270 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:40 crc kubenswrapper[4853]: I1122 07:11:40.184354 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:40 crc kubenswrapper[4853]: I1122 07:11:40.184367 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:40 crc kubenswrapper[4853]: I1122 07:11:40.184381 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:40 crc kubenswrapper[4853]: I1122 07:11:40.184392 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:40Z","lastTransitionTime":"2025-11-22T07:11:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:40 crc kubenswrapper[4853]: I1122 07:11:40.287910 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:40 crc kubenswrapper[4853]: I1122 07:11:40.288453 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:40 crc kubenswrapper[4853]: I1122 07:11:40.288469 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:40 crc kubenswrapper[4853]: I1122 07:11:40.288500 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:40 crc kubenswrapper[4853]: I1122 07:11:40.288518 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:40Z","lastTransitionTime":"2025-11-22T07:11:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:40 crc kubenswrapper[4853]: I1122 07:11:40.393239 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:40 crc kubenswrapper[4853]: I1122 07:11:40.393312 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:40 crc kubenswrapper[4853]: I1122 07:11:40.393329 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:40 crc kubenswrapper[4853]: I1122 07:11:40.393354 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:40 crc kubenswrapper[4853]: I1122 07:11:40.393373 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:40Z","lastTransitionTime":"2025-11-22T07:11:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:40 crc kubenswrapper[4853]: I1122 07:11:40.496151 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:40 crc kubenswrapper[4853]: I1122 07:11:40.496214 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:40 crc kubenswrapper[4853]: I1122 07:11:40.496232 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:40 crc kubenswrapper[4853]: I1122 07:11:40.496259 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:40 crc kubenswrapper[4853]: I1122 07:11:40.496275 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:40Z","lastTransitionTime":"2025-11-22T07:11:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:40 crc kubenswrapper[4853]: I1122 07:11:40.599849 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:40 crc kubenswrapper[4853]: I1122 07:11:40.599909 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:40 crc kubenswrapper[4853]: I1122 07:11:40.599926 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:40 crc kubenswrapper[4853]: I1122 07:11:40.599951 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:40 crc kubenswrapper[4853]: I1122 07:11:40.599973 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:40Z","lastTransitionTime":"2025-11-22T07:11:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:40 crc kubenswrapper[4853]: I1122 07:11:40.701968 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:40 crc kubenswrapper[4853]: I1122 07:11:40.702017 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:40 crc kubenswrapper[4853]: I1122 07:11:40.702033 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:40 crc kubenswrapper[4853]: I1122 07:11:40.702056 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:40 crc kubenswrapper[4853]: I1122 07:11:40.702070 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:40Z","lastTransitionTime":"2025-11-22T07:11:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:40 crc kubenswrapper[4853]: I1122 07:11:40.747604 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 07:11:40 crc kubenswrapper[4853]: I1122 07:11:40.747724 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pd6gs" Nov 22 07:11:40 crc kubenswrapper[4853]: I1122 07:11:40.748251 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 07:11:40 crc kubenswrapper[4853]: I1122 07:11:40.748484 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:11:40 crc kubenswrapper[4853]: E1122 07:11:40.748553 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 07:11:40 crc kubenswrapper[4853]: E1122 07:11:40.748635 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pd6gs" podUID="9cc2bf97-eb39-4b0c-abda-99b49bb530fd" Nov 22 07:11:40 crc kubenswrapper[4853]: E1122 07:11:40.748690 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 07:11:40 crc kubenswrapper[4853]: E1122 07:11:40.748443 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 07:11:40 crc kubenswrapper[4853]: I1122 07:11:40.804107 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:40 crc kubenswrapper[4853]: I1122 07:11:40.804156 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:40 crc kubenswrapper[4853]: I1122 07:11:40.804169 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:40 crc kubenswrapper[4853]: I1122 07:11:40.804186 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:40 crc kubenswrapper[4853]: I1122 07:11:40.804199 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:40Z","lastTransitionTime":"2025-11-22T07:11:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:40 crc kubenswrapper[4853]: I1122 07:11:40.907616 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:40 crc kubenswrapper[4853]: I1122 07:11:40.908598 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:40 crc kubenswrapper[4853]: I1122 07:11:40.908678 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:40 crc kubenswrapper[4853]: I1122 07:11:40.908710 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:40 crc kubenswrapper[4853]: I1122 07:11:40.908778 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:40Z","lastTransitionTime":"2025-11-22T07:11:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:41 crc kubenswrapper[4853]: I1122 07:11:41.012740 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:41 crc kubenswrapper[4853]: I1122 07:11:41.012838 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:41 crc kubenswrapper[4853]: I1122 07:11:41.012872 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:41 crc kubenswrapper[4853]: I1122 07:11:41.012902 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:41 crc kubenswrapper[4853]: I1122 07:11:41.012922 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:41Z","lastTransitionTime":"2025-11-22T07:11:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:41 crc kubenswrapper[4853]: I1122 07:11:41.116958 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:41 crc kubenswrapper[4853]: I1122 07:11:41.117022 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:41 crc kubenswrapper[4853]: I1122 07:11:41.117055 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:41 crc kubenswrapper[4853]: I1122 07:11:41.117083 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:41 crc kubenswrapper[4853]: I1122 07:11:41.117103 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:41Z","lastTransitionTime":"2025-11-22T07:11:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:41 crc kubenswrapper[4853]: I1122 07:11:41.174111 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:41 crc kubenswrapper[4853]: I1122 07:11:41.174602 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:41 crc kubenswrapper[4853]: I1122 07:11:41.174829 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:41 crc kubenswrapper[4853]: I1122 07:11:41.175071 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:41 crc kubenswrapper[4853]: I1122 07:11:41.175282 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:41Z","lastTransitionTime":"2025-11-22T07:11:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:41 crc kubenswrapper[4853]: E1122 07:11:41.196387 4853 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:11:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:11:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:11:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:11:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d74141ce-7696-4d74-b510-3a9c2c375ecd\\\",\\\"systemUUID\\\":\\\"362c9708-b683-4c02-a83b-39323a200ef4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:41Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:41 crc kubenswrapper[4853]: I1122 07:11:41.202838 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:41 crc kubenswrapper[4853]: I1122 07:11:41.202927 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 22 07:11:41 crc kubenswrapper[4853]: I1122 07:11:41.202945 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:41 crc kubenswrapper[4853]: I1122 07:11:41.202973 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:41 crc kubenswrapper[4853]: I1122 07:11:41.202993 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:41Z","lastTransitionTime":"2025-11-22T07:11:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:41 crc kubenswrapper[4853]: E1122 07:11:41.226302 4853 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:11:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:11:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:11:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:11:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d74141ce-7696-4d74-b510-3a9c2c375ecd\\\",\\\"systemUUID\\\":\\\"362c9708-b683-4c02-a83b-39323a200ef4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:41Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:41 crc kubenswrapper[4853]: I1122 07:11:41.231952 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:41 crc kubenswrapper[4853]: I1122 07:11:41.232015 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 22 07:11:41 crc kubenswrapper[4853]: I1122 07:11:41.232040 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:41 crc kubenswrapper[4853]: I1122 07:11:41.232070 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:41 crc kubenswrapper[4853]: I1122 07:11:41.232093 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:41Z","lastTransitionTime":"2025-11-22T07:11:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:41 crc kubenswrapper[4853]: E1122 07:11:41.253196 4853 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:11:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:11:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:11:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:11:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d74141ce-7696-4d74-b510-3a9c2c375ecd\\\",\\\"systemUUID\\\":\\\"362c9708-b683-4c02-a83b-39323a200ef4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:41Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:41 crc kubenswrapper[4853]: I1122 07:11:41.258772 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:41 crc kubenswrapper[4853]: I1122 07:11:41.258828 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
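The condition repeated through these entries is the actual readiness blocker: kubelet reports NetworkReady=false because no CNI configuration file exists in /etc/kubernetes/cni/net.d/. As a rough sketch (not kubelet's own code; the accepted extensions mirror libcni's defaults and are an assumption here), a check equivalent to the one behind this message looks like:

```go
// cnicheck.go - minimal sketch of the kind of check behind
// "no CNI configuration file in /etc/kubernetes/cni/net.d/".
// Not kubelet's actual implementation; the extensions
// (.conf, .conflist, .json) follow libcni's defaults and are
// an assumption for illustration.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	confDir := "/etc/kubernetes/cni/net.d"
	var found []string
	for _, pattern := range []string{"*.conf", "*.conflist", "*.json"} {
		matches, err := filepath.Glob(filepath.Join(confDir, pattern))
		if err != nil {
			fmt.Fprintln(os.Stderr, "glob:", err)
			os.Exit(1)
		}
		found = append(found, matches...)
	}
	if len(found) == 0 {
		// This is the state the log shows: the network plugin has not
		// started (or cannot start), so NetworkReady stays false and
		// the node's Ready condition stays False.
		fmt.Println("no CNI configuration file in", confDir)
		os.Exit(1)
	}
	for _, f := range found {
		fmt.Println("found CNI config:", f)
	}
}
```

Until the network plugin writes a config file into that directory, the Ready condition above stays False and every pod needing a sandbox network is held back.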
event="NodeHasNoDiskPressure" Nov 22 07:11:41 crc kubenswrapper[4853]: I1122 07:11:41.258844 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:41 crc kubenswrapper[4853]: I1122 07:11:41.258868 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:41 crc kubenswrapper[4853]: I1122 07:11:41.258887 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:41Z","lastTransitionTime":"2025-11-22T07:11:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:41 crc kubenswrapper[4853]: E1122 07:11:41.279052 4853 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:11:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:11:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:11:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:11:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d74141ce-7696-4d74-b510-3a9c2c375ecd\\\",\\\"systemUUID\\\":\\\"362c9708-b683-4c02-a83b-39323a200ef4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:41Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:41 crc kubenswrapper[4853]: I1122 07:11:41.285157 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:41 crc kubenswrapper[4853]: I1122 07:11:41.285205 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
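The blob elided above is a strategic-merge patch against Node.status: the $setElementOrder/conditions key pins the ordering of the conditions list, and each condition entry is merged by its type key. A minimal sketch of the same patch shape (the field names match the log; the Go types are illustrative, not kubelet's own):

```go
// patchshape.go - sketch of the strategic-merge-patch shape seen in the
// "failed to patch status" payload above. Field names match the log;
// the types here are illustrative, not kubelet's.
package main

import (
	"encoding/json"
	"fmt"
)

type condition struct {
	LastHeartbeatTime  string `json:"lastHeartbeatTime"`
	LastTransitionTime string `json:"lastTransitionTime"`
	Message            string `json:"message"`
	Reason             string `json:"reason"`
	Status             string `json:"status"`
	Type               string `json:"type"`
}

func main() {
	patch := map[string]any{
		"status": map[string]any{
			// $setElementOrder pins the merge order of the conditions
			// list; each entry is matched on its "type" key.
			"$setElementOrder/conditions": []map[string]string{
				{"type": "MemoryPressure"}, {"type": "DiskPressure"},
				{"type": "PIDPressure"}, {"type": "Ready"},
			},
			"conditions": []condition{{
				LastHeartbeatTime:  "2025-11-22T07:11:41Z",
				LastTransitionTime: "2025-11-22T07:11:41Z",
				Message:            "container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?",
				Reason:             "KubeletNotReady",
				Status:             "False",
				Type:               "Ready",
			}},
		},
	}
	out, _ := json.MarshalIndent(patch, "", "  ")
	fmt.Println(string(out))
}
```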
event="NodeHasNoDiskPressure" Nov 22 07:11:41 crc kubenswrapper[4853]: I1122 07:11:41.285220 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:41 crc kubenswrapper[4853]: I1122 07:11:41.285238 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:41 crc kubenswrapper[4853]: I1122 07:11:41.285252 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:41Z","lastTransitionTime":"2025-11-22T07:11:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:41 crc kubenswrapper[4853]: E1122 07:11:41.305145 4853 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:11:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:11:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:11:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:11:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d74141ce-7696-4d74-b510-3a9c2c375ecd\\\",\\\"systemUUID\\\":\\\"362c9708-b683-4c02-a83b-39323a200ef4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:41Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:41 crc kubenswrapper[4853]: E1122 07:11:41.305935 4853 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 22 07:11:41 crc kubenswrapper[4853]: I1122 07:11:41.308038 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
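The patch never reaches the API server's merge logic at all: the node.network-node-identity.openshift.io admission webhook at https://127.0.0.1:9743 presents a serving certificate whose validity window ended 2025-08-24T17:21:41Z, so every attempt fails TLS verification and the kubelet gives up once its retry budget is spent. A sketch of the x509 validity-window check that produces this error text (a standalone illustration, not the Go TLS stack's code; the PEM path is hypothetical):

```go
// certwindow.go - sketch of the x509 validity-window check behind
// "certificate has expired or is not yet valid". Standalone
// reimplementation for illustration; the PEM path is hypothetical.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	pemBytes, err := os.ReadFile("/path/to/webhook-serving-cert.pem") // hypothetical path
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM block found")
		os.Exit(1)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	now := time.Now().UTC()
	switch {
	case now.Before(cert.NotBefore):
		fmt.Printf("certificate is not yet valid: current time %s is before %s\n",
			now.Format(time.RFC3339), cert.NotBefore.Format(time.RFC3339))
	case now.After(cert.NotAfter):
		// The case in this log: 2025-11-22T07:11:41Z is after 2025-08-24T17:21:41Z.
		fmt.Printf("certificate has expired: current time %s is after %s\n",
			now.Format(time.RFC3339), cert.NotAfter.Format(time.RFC3339))
	default:
		fmt.Printf("certificate valid until %s\n", cert.NotAfter.Format(time.RFC3339))
	}
}
```

The check itself is a pure time comparison against NotBefore/NotAfter; rotating or regenerating the expired webhook serving certificate is the direction of the fix.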
event="NodeHasSufficientMemory" Nov 22 07:11:41 crc kubenswrapper[4853]: I1122 07:11:41.308087 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:41 crc kubenswrapper[4853]: I1122 07:11:41.308101 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:41 crc kubenswrapper[4853]: I1122 07:11:41.308124 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:41 crc kubenswrapper[4853]: I1122 07:11:41.308136 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:41Z","lastTransitionTime":"2025-11-22T07:11:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:41 crc kubenswrapper[4853]: I1122 07:11:41.411244 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:41 crc kubenswrapper[4853]: I1122 07:11:41.411314 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:41 crc kubenswrapper[4853]: I1122 07:11:41.411337 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:41 crc kubenswrapper[4853]: I1122 07:11:41.411370 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:41 crc kubenswrapper[4853]: I1122 07:11:41.411395 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:41Z","lastTransitionTime":"2025-11-22T07:11:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:41 crc kubenswrapper[4853]: I1122 07:11:41.514955 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:41 crc kubenswrapper[4853]: I1122 07:11:41.515032 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:41 crc kubenswrapper[4853]: I1122 07:11:41.515056 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:41 crc kubenswrapper[4853]: I1122 07:11:41.515087 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:41 crc kubenswrapper[4853]: I1122 07:11:41.515111 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:41Z","lastTransitionTime":"2025-11-22T07:11:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:41 crc kubenswrapper[4853]: I1122 07:11:41.619368 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:41 crc kubenswrapper[4853]: I1122 07:11:41.619447 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:41 crc kubenswrapper[4853]: I1122 07:11:41.619465 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:41 crc kubenswrapper[4853]: I1122 07:11:41.619493 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:41 crc kubenswrapper[4853]: I1122 07:11:41.619512 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:41Z","lastTransitionTime":"2025-11-22T07:11:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:41 crc kubenswrapper[4853]: I1122 07:11:41.723406 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:41 crc kubenswrapper[4853]: I1122 07:11:41.723456 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:41 crc kubenswrapper[4853]: I1122 07:11:41.723471 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:41 crc kubenswrapper[4853]: I1122 07:11:41.723492 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:41 crc kubenswrapper[4853]: I1122 07:11:41.723506 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:41Z","lastTransitionTime":"2025-11-22T07:11:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:41 crc kubenswrapper[4853]: I1122 07:11:41.826888 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:41 crc kubenswrapper[4853]: I1122 07:11:41.826954 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:41 crc kubenswrapper[4853]: I1122 07:11:41.826973 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:41 crc kubenswrapper[4853]: I1122 07:11:41.827001 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:41 crc kubenswrapper[4853]: I1122 07:11:41.827020 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:41Z","lastTransitionTime":"2025-11-22T07:11:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Nov 22 07:11:41 crc kubenswrapper[4853]: I1122 07:11:41.931401 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 22 07:11:41 crc kubenswrapper[4853]: I1122 07:11:41.931481 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 22 07:11:41 crc kubenswrapper[4853]: I1122 07:11:41.931502 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 22 07:11:41 crc kubenswrapper[4853]: I1122 07:11:41.931529 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 22 07:11:41 crc kubenswrapper[4853]: I1122 07:11:41.931546 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:41Z","lastTransitionTime":"2025-11-22T07:11:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 22 07:11:42 crc kubenswrapper[4853]: I1122 07:11:42.747434 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 22 07:11:42 crc kubenswrapper[4853]: I1122 07:11:42.747524 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 22 07:11:42 crc kubenswrapper[4853]: E1122 07:11:42.747612 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 22 07:11:42 crc kubenswrapper[4853]: E1122 07:11:42.747791 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 22 07:11:42 crc kubenswrapper[4853]: I1122 07:11:42.747523 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pd6gs"
Nov 22 07:11:42 crc kubenswrapper[4853]: E1122 07:11:42.747977 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pd6gs" podUID="9cc2bf97-eb39-4b0c-abda-99b49bb530fd"
Nov 22 07:11:42 crc kubenswrapper[4853]: I1122 07:11:42.748079 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 22 07:11:42 crc kubenswrapper[4853]: E1122 07:11:42.748302 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 22 07:11:43 crc kubenswrapper[4853]: I1122 07:11:43.765305 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"]
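The setters.go:603 records above all serialize the same Ready=False condition payload. As a minimal illustration of that payload's shape (a hand-rolled stand-in for k8s.io/api/core/v1.NodeCondition so the sketch runs with only the standard library, not kubelet's actual code), the following Go program reproduces the condition={...} JSON seen in the log:

```go
package main

import (
	"encoding/json"
	"fmt"
	"time"
)

// NodeCondition mirrors the fields kubelet logs at setters.go:603.
// Local stand-in for k8s.io/api/core/v1.NodeCondition (an assumption
// made for this sketch; field order matches the logged JSON).
type NodeCondition struct {
	Type               string `json:"type"`
	Status             string `json:"status"`
	LastHeartbeatTime  string `json:"lastHeartbeatTime"`
	LastTransitionTime string `json:"lastTransitionTime"`
	Reason             string `json:"reason"`
	Message            string `json:"message"`
}

func main() {
	// Timestamp taken from the first logged occurrence above.
	now := time.Date(2025, 11, 22, 7, 11, 41, 0, time.UTC).Format(time.RFC3339)
	cond := NodeCondition{
		Type:               "Ready",
		Status:             "False",
		LastHeartbeatTime:  now,
		LastTransitionTime: now,
		Reason:             "KubeletNotReady",
		Message: "container runtime network not ready: NetworkReady=false " +
			"reason:NetworkPluginNotReady message:Network plugin returns error: " +
			"no CNI configuration file in /etc/kubernetes/cni/net.d/. " +
			"Has your network provider started?",
	}
	b, _ := json.Marshal(cond)
	fmt.Printf("condition=%s\n", b) // matches the condition={...} payload above
}
```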
Nov 22 07:11:44 crc kubenswrapper[4853]: I1122 07:11:44.746726 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 22 07:11:44 crc kubenswrapper[4853]: I1122 07:11:44.746853 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pd6gs"
Nov 22 07:11:44 crc kubenswrapper[4853]: I1122 07:11:44.746956 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 22 07:11:44 crc kubenswrapper[4853]: E1122 07:11:44.747059 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 22 07:11:44 crc kubenswrapper[4853]: I1122 07:11:44.747074 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 22 07:11:44 crc kubenswrapper[4853]: E1122 07:11:44.747245 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pd6gs" podUID="9cc2bf97-eb39-4b0c-abda-99b49bb530fd"
Nov 22 07:11:44 crc kubenswrapper[4853]: E1122 07:11:44.747399 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 22 07:11:44 crc kubenswrapper[4853]: E1122 07:11:44.747498 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 22 07:11:45 crc kubenswrapper[4853]: I1122 07:11:45.762672 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 22 07:11:45 crc kubenswrapper[4853]: I1122 07:11:45.762723 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 22 07:11:45 crc kubenswrapper[4853]: I1122 07:11:45.762733 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 22 07:11:45 crc kubenswrapper[4853]: I1122 07:11:45.762765 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 22 07:11:45 crc kubenswrapper[4853]: I1122 07:11:45.762775 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:45Z","lastTransitionTime":"2025-11-22T07:11:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
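The Ready=False loop persists because the runtime status check keeps finding no network configuration on disk. A rough Go sketch of that style of check follows; it is not CRI-O's or ocicni's actual loader (which is more involved), and only the directory path comes from the log message itself:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// hasCNIConfig reports whether dir contains at least one CNI network
// configuration file, using the usual CNI extensions (.conf, .conflist,
// .json). A simplified stand-in for the real ocicni/libcni loading logic.
func hasCNIConfig(dir string) (bool, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return false, err
	}
	for _, e := range entries {
		if e.IsDir() {
			continue
		}
		switch strings.ToLower(filepath.Ext(e.Name())) {
		case ".conf", ".conflist", ".json":
			return true, nil
		}
	}
	return false, nil
}

func main() {
	dir := "/etc/kubernetes/cni/net.d/" // path taken from the log message
	ok, err := hasCNIConfig(dir)
	if err != nil || !ok {
		// Same condition the kubelet keeps reporting above; pods that need a
		// sandbox ("No sandbox for pod can be found") stay unsynced until
		// this turns true.
		fmt.Printf("Network plugin returns error: no CNI configuration file in %s. Has your network provider started?\n", dir)
		return
	}
	fmt.Println("NetworkReady=true")
}
```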
Nov 22 07:11:45 crc kubenswrapper[4853]: I1122 07:11:45.763104 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63551235-20f7-4ccc-a132-9f5302e01da5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0368eb3431a7b8e0225b3728f72d601ce3f9639f06175b96818fd798b33e4143\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fc661089ab1f7286ead68fd510a0848c9df8dd41a8ed349d03b43b72e1a369a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdc0676190f39e3a925b593e663ec1af1fa253b8177b7cb81fd49515e0c55c3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a97a0d92e5fdd8906fa4f150c4b7d95fdd1f3b92ac45a46e0a6846fd4a15bb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a97a0d92e5fdd8906fa4f150c4b7d95fdd1f3b92ac45a46e0a6846fd4a15bb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:19Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:16Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:45Z is after 2025-08-24T17:21:41Z"
Nov 22 07:11:45 crc kubenswrapper[4853]: I1122 07:11:45.782808 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b27716f9e6c912f968c4a340199cdc2dde0262d86b4832f84cae8daefa831909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a2fb699e2e8a016fa383c1d4ed2d4f0d69c26b79fb2f0dd735a84de2fd5a23d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:45Z is after 2025-08-24T17:21:41Z"
Nov 22 07:11:45 crc kubenswrapper[4853]: I1122 07:11:45.797309 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://347fc1ee4909acd862f23a9f25b4ac7ab7271ff03f0f726479d9b711820970c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:45Z is after 2025-08-24T17:21:41Z"
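These status patches all fail for the same reason: the node-identity webhook's serving certificate is past its NotAfter date, so every TLS handshake to https://127.0.0.1:9743 is rejected. The wording "certificate has expired or is not yet valid: current time ... is after ..." matches Go's crypto/x509 validity error; a minimal reproduction against an arbitrary PEM certificate follows (the file name is illustrative, not from the log):

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	// webhook-cert.pem is an illustrative file name, not a path from the log.
	data, err := os.ReadFile("webhook-cert.pem")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	now := time.Now().UTC()
	switch {
	case now.After(cert.NotAfter):
		// Mirrors the crypto/x509 message surfaced in the kubelet records above.
		fmt.Printf("x509: certificate has expired or is not yet valid: current time %s is after %s\n",
			now.Format(time.RFC3339), cert.NotAfter.UTC().Format(time.RFC3339))
	case now.Before(cert.NotBefore):
		fmt.Printf("x509: certificate has expired or is not yet valid: current time %s is before %s\n",
			now.Format(time.RFC3339), cert.NotBefore.UTC().Format(time.RFC3339))
	default:
		fmt.Printf("certificate valid until %s\n", cert.NotAfter.UTC().Format(time.RFC3339))
	}
}
```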
Nov 22 07:11:45 crc kubenswrapper[4853]: I1122 07:11:45.815880 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"476c875a-2b87-419a-8042-0ba059620fd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://babd614ecd140db7467335933695fcc122380ef2c6a3d4ae6e89b0ed29e87d69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zkhrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28cf28a4f0e05df5ad55eff2ab13e375fb1d5725f1d0e3b85c7c9cd785cf4453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zkhrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fflvd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:45Z is after 2025-08-24T17:21:41Z"
Nov 22 07:11:45 crc kubenswrapper[4853]: I1122 07:11:45.832190 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nhlw4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81cf3334-f910-4d46-be00-b3cd66ba8ed4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbfa536576b192a75c2a3eef9dfde8c7f7de0a8e6edae3870f45f1c44cc48163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ckff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86bcf649be20aa1cc1bce89663ea6b7b920ac2548fbf8a22c87636bdafc4b329\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ckff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nhlw4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:45Z is after 2025-08-24T17:21:41Z"
Nov 22 07:11:45 crc kubenswrapper[4853]: I1122 07:11:45.854240 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9003355-0dc6-42ad-950d-a7884e333f4c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f0019c0473b7ccc974dc1659a7ae0565b39d69af17094c420625387ecc9d7f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94704fd95e75549ea29613f00da78d4517cc64896cf1274c9c6f784a61f4ad4a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T07:10:53Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI1122 07:10:26.514729 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI1122 07:10:26.525241 1 observer_polling.go:159] Starting file observer\\\\nI1122 07:10:26.883412 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI1122 07:10:26.897202 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI1122 07:10:50.420853 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nI1122 07:10:51.525898 1 observer_polling.go:162] Shutting down file observer\\\\nF1122 07:10:53.463148 1 cmd.go:179] failed checking apiserver connectivity: client rate limiter Wait returned an error: context canceled\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:21Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9736c38776f22909b8e8a832683d1b881f020257f7183397286e0dd23c973a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30f542bc05dc23bc8dff5029894d08a6fea8333de31a66432814b7bbc412c4fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5d8d77818159ca98019aac809db4b1b2460723bc56b9b6728eec11faeefd026\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager
-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:45Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:45 crc kubenswrapper[4853]: I1122 07:11:45.865484 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:45 crc kubenswrapper[4853]: I1122 07:11:45.865559 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:45 crc kubenswrapper[4853]: I1122 07:11:45.865583 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:45 crc kubenswrapper[4853]: I1122 07:11:45.865609 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:45 crc kubenswrapper[4853]: I1122 07:11:45.865628 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:45Z","lastTransitionTime":"2025-11-22T07:11:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:45 crc kubenswrapper[4853]: I1122 07:11:45.868992 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4cc3002-cfaf-47cf-b539-3815924af5c3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://666c3902a5cb3b755fe3b5861568b744fb3ffbd28f72d7fd22d18b387486de03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6143a86b720cb8a29f4ac3d68bf2693f92fb50f528cdd7269e022115795ef14b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6143a86b720cb8a29f4ac3d68bf2693f92fb50f528cdd7269e022115795ef14b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:45Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:45 crc kubenswrapper[4853]: I1122 07:11:45.885177 4853 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:45Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:45 crc kubenswrapper[4853]: I1122 07:11:45.909445 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8dcea4b-e887-437a-972d-e6e10d41f725\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://753693005f8490e9928780fd4f230a1c8aba453e1ccbca71e70299799bfa46ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64d65e4d54cc8971b6f6349f7d32243b7faa5bb5974b22ed904a36fa34bc69fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0ba60480f038947ca4a2b9f963d38cb004714735d411218eaf6231614c4e21f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c83df4b798499177cfbd1372a939649b6cebb7
1b50b695af5ba7815cf450669\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed9e89cbcdfc1ac8430f617ee31a367c03be0a407da8af967930fa821e6a12a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://742d02f24a7096b76f82453ab15e531d4a378b4bbf0bb36eb1a1cb4ae1873165\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://742d02f24a7096b76f82453ab15e531d4a378b4bbf0bb36eb1a1cb4ae1873165\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2253bab71d535b61f7489679566abc1acf210e0d5f69773fedbb02befab17c2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2253bab71d535b61f7489679566abc1acf210e0d5f69773fedbb02befab17c2d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:21Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8fa71a988b649937f137ffb7533456570bf4fcd199134424fc7bff2b62335f1e\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fa71a988b649937f137ffb7533456570bf4fcd199134424fc7bff2b62335f1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:16Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:45Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:45 crc kubenswrapper[4853]: I1122 07:11:45.930729 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9c07602dc6c3b45809dd76a7079002e54e0bae24f7e3bc3470b463389303e74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:45Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:45 crc kubenswrapper[4853]: I1122 07:11:45.952929 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:45Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:45 crc kubenswrapper[4853]: I1122 07:11:45.969276 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:45 crc kubenswrapper[4853]: I1122 07:11:45.969360 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:45 crc kubenswrapper[4853]: I1122 07:11:45.970100 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:45 crc kubenswrapper[4853]: I1122 07:11:45.970464 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:45 crc kubenswrapper[4853]: I1122 07:11:45.970707 4853 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:45Z","lastTransitionTime":"2025-11-22T07:11:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:45 crc kubenswrapper[4853]: I1122 07:11:45.973730 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:45Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:45 crc kubenswrapper[4853]: I1122 07:11:45.998691 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rvgxj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbbe3472-17cc-48dd-8e46-393b00149429\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5acc4b664aaf1d5b0471beb25334e65152bc3b9907f911f90e4e7899aaa1ce8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ct6dm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rvgxj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:45Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:46 crc kubenswrapper[4853]: I1122 07:11:46.024307 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ckn94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f60b37f-d6f5-4145-a3e7-cfe92fca6d77\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbf630e17f662af8880d1f34d5073a4f64e723987205f7bbb473a73808c7e935\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\
"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3adc38c2534c8026e260a63060426102130ca1bcbcb4e3741aaf4e87170a4c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3adc38c2534c8026e260a63060426102130ca1bcbcb4e3741aaf4e87170a4c7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db0b80c2471d5179ede5c4055fcebaabe08892f1534218952c2a19ab599be022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db0b80c2471d5179ede5c4055fcebaabe08892f1534218952c2a19ab599be022\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://157ea229b1effe5885acafb7b48db5695864ff001b1293c053aefb494f544f7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://157ea229b1effe5885acafb7b48db5695864ff001b1293c053aefb494f544f7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:11Z\\\",\\\"reason\\\":\
\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://203cba48e280628c33b464c336957085420ca2d345b1acfb5c8b8968ed2317e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://203cba48e280628c33b464c336957085420ca2d345b1acfb5c8b8968ed2317e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a10cb7826668319845471d465df1784a2cfba12ec3892e986bcf316c0c5e3c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a10cb7826668319845471d465df1784a2cfba12ec3892e986bcf316c0c5e3c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc93eb03f5f0cada6bf132a9de4446bc1b8ecafee3769bf18feeecea9d71478f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\
",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc93eb03f5f0cada6bf132a9de4446bc1b8ecafee3769bf18feeecea9d71478f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ckn94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:46Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:46 crc kubenswrapper[4853]: I1122 07:11:46.043356 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-pd6gs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9cc2bf97-eb39-4b0c-abda-99b49bb530fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bzn4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bzn4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:15Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-pd6gs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:46Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:46 crc kubenswrapper[4853]: I1122 07:11:46.067440 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"19e6b5d3-a090-47b9-bddc-dd2aaccd4ff4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96b61660fcd78f2f81eab2c1a0cb32018a5b32d49581167e33e7dca8ba5ddf2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a0d61ce2e58ce8d9abdd1fb9688722d8e25f8354553aef76b8cd4b3897135a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21be0b3346d0dd20c21a867b8fbd30e6291d09bd1745febbc2face7610f6d831\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e4d5ab3d780b4ec1360dd8a07fc7de5d4daec3a5f8334010949d4fd60b5516b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6dbbb1c22bd116e5913c6bc774b4705f0e4342861ceae30dc0f53c30d6fa39f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T07:10:53Z\\\",\\\"message\\\":\\\" envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1122 07:10:53.494379 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1122 07:10:53.495005 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1122 07:10:53.495036 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1122 07:10:53.495110 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1122 07:10:53.495131 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1122 07:10:53.496216 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1763795444\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1763795444\\\\\\\\\\\\\\\" (2025-11-22 06:10:44 +0000 UTC to 2026-11-22 06:10:44 +0000 UTC (now=2025-11-22 07:10:53.49617265 +0000 UTC))\\\\\\\"\\\\nI1122 07:10:53.496261 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1122 07:10:53.496336 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1122 07:10:53.497036 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1122 07:10:53.497137 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1122 07:10:53.498721 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI1122 07:10:53.500291 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF1122 07:10:53.501305 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:32Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c1ee716027cb8b663ee9faec3faed873df5079418b9b46574cf3b3a74048eef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13ca1fba1325e0ae8556def3ba0c98aa0aff6e3b9ed2aab66a5c950d2bace44e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13ca1fba1325e0ae8556def3ba0c98aa0aff6e3b9ed2aab66a5c950d2bace44e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:46Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:46 crc kubenswrapper[4853]: I1122 07:11:46.073336 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:46 crc kubenswrapper[4853]: I1122 07:11:46.073406 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:46 crc kubenswrapper[4853]: I1122 07:11:46.073429 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:46 crc kubenswrapper[4853]: I1122 07:11:46.073462 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:46 crc kubenswrapper[4853]: I1122 07:11:46.073487 4853 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:46Z","lastTransitionTime":"2025-11-22T07:11:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:46 crc kubenswrapper[4853]: I1122 07:11:46.082362 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mlpz8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c09200ba-013f-45e3-b581-8523557344b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3feee491b899c2b2f58330fe2e87ba5404deca4249c5d39a6d3aa08acd10ef29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvk87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mlpz8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:46Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:46 crc kubenswrapper[4853]: I1122 07:11:46.116083 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqtsz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"893f7e02-580a-4093-ab42-ea73ffffcfe6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://959a487fe36b2bf7615c5ed65bc8255b52332f6e2d973912a5e028ff35324f2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://979f6b48ff6c91deca2bb0784d1ec745c4c0d70abdfd6f519eb82058fec05c5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34d7101a46b49cedde91b3502f1bc962179253a1bfbe5f21599ce89a91ab905d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28ab876013e6c4eb3a4806b0184f1d6615a478a34b6d8ee790d29bccd77f33df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccf377912acc39f2d30013ef3bd58a8268b927fd89da2306fcf00cb89c30893e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a08eb68f9d8b071226ac64744097fe0097ab78f86242872c4e0aacdd213adbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bfe75c62e217cccff97aad20cda18675013af6a3b1b10ef60227be8ea4965fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0bfe75c62e217cccff97aad20cda18675013af6a3b1b10ef60227be8ea4965fc\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-22T07:11:38Z\\\",\\\"message\\\":\\\"r for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:38Z is after 2025-08-24T17:21:41Z]\\\\nI1122 07:11:38.093678 6733 services_controller.go:443] Built service openshift-etcd/etcd LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.5.253\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:2379, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}, services.lbConfig{vips:[]string{\\\\\\\"10.217.5.253\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:9979, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI1122 \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:37Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-pqtsz_openshift-ovn-kubernetes(893f7e02-580a-4093-ab42-ea73ffffcfe6)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://902e41148618e1099d651f2a4cbfa00ca1dd11533b086564f090dd498ecc1b95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7b9396399dd9787c1addcb4e2e7697d78fee817d56cafd408b08b2b7d0cc5f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7b9396399dd9787c1addcb4e2e7697d78fee817d56cafd408b08b2b7d0cc5f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pqtsz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:46Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:46 crc kubenswrapper[4853]: I1122 07:11:46.134806 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9nx9m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120cba0a-6e0b-40b3-8c15-46e7ff7c8641\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7b7b92862fc6cca812cb5c0aa9b320fbbd11f8b458d591b56cacab985a79edd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4p7q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9nx9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:46Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:46 crc kubenswrapper[4853]: I1122 07:11:46.177577 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:46 crc kubenswrapper[4853]: I1122 07:11:46.177630 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:46 crc kubenswrapper[4853]: I1122 07:11:46.177645 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:46 crc kubenswrapper[4853]: I1122 07:11:46.177679 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:46 crc kubenswrapper[4853]: I1122 07:11:46.177696 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:46Z","lastTransitionTime":"2025-11-22T07:11:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:46 crc kubenswrapper[4853]: I1122 07:11:46.281696 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:46 crc kubenswrapper[4853]: I1122 07:11:46.281843 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:46 crc kubenswrapper[4853]: I1122 07:11:46.281873 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:46 crc kubenswrapper[4853]: I1122 07:11:46.281909 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:46 crc kubenswrapper[4853]: I1122 07:11:46.281933 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:46Z","lastTransitionTime":"2025-11-22T07:11:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:46 crc kubenswrapper[4853]: I1122 07:11:46.385196 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:46 crc kubenswrapper[4853]: I1122 07:11:46.385269 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:46 crc kubenswrapper[4853]: I1122 07:11:46.385289 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:46 crc kubenswrapper[4853]: I1122 07:11:46.385316 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:46 crc kubenswrapper[4853]: I1122 07:11:46.385337 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:46Z","lastTransitionTime":"2025-11-22T07:11:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:46 crc kubenswrapper[4853]: I1122 07:11:46.488384 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:46 crc kubenswrapper[4853]: I1122 07:11:46.488421 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:46 crc kubenswrapper[4853]: I1122 07:11:46.488433 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:46 crc kubenswrapper[4853]: I1122 07:11:46.488453 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:46 crc kubenswrapper[4853]: I1122 07:11:46.488466 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:46Z","lastTransitionTime":"2025-11-22T07:11:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:46 crc kubenswrapper[4853]: I1122 07:11:46.591168 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:46 crc kubenswrapper[4853]: I1122 07:11:46.591226 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:46 crc kubenswrapper[4853]: I1122 07:11:46.591239 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:46 crc kubenswrapper[4853]: I1122 07:11:46.591257 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:46 crc kubenswrapper[4853]: I1122 07:11:46.591269 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:46Z","lastTransitionTime":"2025-11-22T07:11:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:46 crc kubenswrapper[4853]: I1122 07:11:46.694361 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:46 crc kubenswrapper[4853]: I1122 07:11:46.694419 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:46 crc kubenswrapper[4853]: I1122 07:11:46.694439 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:46 crc kubenswrapper[4853]: I1122 07:11:46.694465 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:46 crc kubenswrapper[4853]: I1122 07:11:46.694486 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:46Z","lastTransitionTime":"2025-11-22T07:11:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:46 crc kubenswrapper[4853]: I1122 07:11:46.747061 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 07:11:46 crc kubenswrapper[4853]: I1122 07:11:46.747061 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pd6gs" Nov 22 07:11:46 crc kubenswrapper[4853]: I1122 07:11:46.747199 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 07:11:46 crc kubenswrapper[4853]: I1122 07:11:46.747307 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:11:46 crc kubenswrapper[4853]: E1122 07:11:46.747525 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 07:11:46 crc kubenswrapper[4853]: E1122 07:11:46.747687 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pd6gs" podUID="9cc2bf97-eb39-4b0c-abda-99b49bb530fd" Nov 22 07:11:46 crc kubenswrapper[4853]: E1122 07:11:46.747919 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 07:11:46 crc kubenswrapper[4853]: E1122 07:11:46.748074 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 07:11:46 crc kubenswrapper[4853]: I1122 07:11:46.798125 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:46 crc kubenswrapper[4853]: I1122 07:11:46.798209 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:46 crc kubenswrapper[4853]: I1122 07:11:46.798229 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:46 crc kubenswrapper[4853]: I1122 07:11:46.798259 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:46 crc kubenswrapper[4853]: I1122 07:11:46.798285 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:46Z","lastTransitionTime":"2025-11-22T07:11:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:46 crc kubenswrapper[4853]: I1122 07:11:46.901668 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:46 crc kubenswrapper[4853]: I1122 07:11:46.901725 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:46 crc kubenswrapper[4853]: I1122 07:11:46.901745 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:46 crc kubenswrapper[4853]: I1122 07:11:46.901792 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:46 crc kubenswrapper[4853]: I1122 07:11:46.901811 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:46Z","lastTransitionTime":"2025-11-22T07:11:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:47 crc kubenswrapper[4853]: I1122 07:11:47.005150 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:47 crc kubenswrapper[4853]: I1122 07:11:47.005402 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:47 crc kubenswrapper[4853]: I1122 07:11:47.005420 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:47 crc kubenswrapper[4853]: I1122 07:11:47.005446 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:47 crc kubenswrapper[4853]: I1122 07:11:47.005466 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:47Z","lastTransitionTime":"2025-11-22T07:11:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:47 crc kubenswrapper[4853]: I1122 07:11:47.108839 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:47 crc kubenswrapper[4853]: I1122 07:11:47.108930 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:47 crc kubenswrapper[4853]: I1122 07:11:47.108950 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:47 crc kubenswrapper[4853]: I1122 07:11:47.108978 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:47 crc kubenswrapper[4853]: I1122 07:11:47.109001 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:47Z","lastTransitionTime":"2025-11-22T07:11:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:47 crc kubenswrapper[4853]: I1122 07:11:47.185867 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9cc2bf97-eb39-4b0c-abda-99b49bb530fd-metrics-certs\") pod \"network-metrics-daemon-pd6gs\" (UID: \"9cc2bf97-eb39-4b0c-abda-99b49bb530fd\") " pod="openshift-multus/network-metrics-daemon-pd6gs" Nov 22 07:11:47 crc kubenswrapper[4853]: E1122 07:11:47.186145 4853 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 22 07:11:47 crc kubenswrapper[4853]: E1122 07:11:47.186267 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9cc2bf97-eb39-4b0c-abda-99b49bb530fd-metrics-certs podName:9cc2bf97-eb39-4b0c-abda-99b49bb530fd nodeName:}" failed. No retries permitted until 2025-11-22 07:12:19.186237604 +0000 UTC m=+138.026860260 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9cc2bf97-eb39-4b0c-abda-99b49bb530fd-metrics-certs") pod "network-metrics-daemon-pd6gs" (UID: "9cc2bf97-eb39-4b0c-abda-99b49bb530fd") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 22 07:11:47 crc kubenswrapper[4853]: I1122 07:11:47.212809 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:47 crc kubenswrapper[4853]: I1122 07:11:47.212889 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:47 crc kubenswrapper[4853]: I1122 07:11:47.212910 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:47 crc kubenswrapper[4853]: I1122 07:11:47.212938 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:47 crc kubenswrapper[4853]: I1122 07:11:47.212956 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:47Z","lastTransitionTime":"2025-11-22T07:11:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:47 crc kubenswrapper[4853]: I1122 07:11:47.316035 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:47 crc kubenswrapper[4853]: I1122 07:11:47.316087 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:47 crc kubenswrapper[4853]: I1122 07:11:47.316098 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:47 crc kubenswrapper[4853]: I1122 07:11:47.316117 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:47 crc kubenswrapper[4853]: I1122 07:11:47.316130 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:47Z","lastTransitionTime":"2025-11-22T07:11:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:47 crc kubenswrapper[4853]: I1122 07:11:47.418938 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:47 crc kubenswrapper[4853]: I1122 07:11:47.418996 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:47 crc kubenswrapper[4853]: I1122 07:11:47.419013 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:47 crc kubenswrapper[4853]: I1122 07:11:47.419038 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:47 crc kubenswrapper[4853]: I1122 07:11:47.419056 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:47Z","lastTransitionTime":"2025-11-22T07:11:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:47 crc kubenswrapper[4853]: I1122 07:11:47.522176 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:47 crc kubenswrapper[4853]: I1122 07:11:47.522247 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:47 crc kubenswrapper[4853]: I1122 07:11:47.522266 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:47 crc kubenswrapper[4853]: I1122 07:11:47.522290 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:47 crc kubenswrapper[4853]: I1122 07:11:47.522309 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:47Z","lastTransitionTime":"2025-11-22T07:11:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:47 crc kubenswrapper[4853]: I1122 07:11:47.626484 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:47 crc kubenswrapper[4853]: I1122 07:11:47.626580 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:47 crc kubenswrapper[4853]: I1122 07:11:47.626612 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:47 crc kubenswrapper[4853]: I1122 07:11:47.626645 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:47 crc kubenswrapper[4853]: I1122 07:11:47.626670 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:47Z","lastTransitionTime":"2025-11-22T07:11:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:47 crc kubenswrapper[4853]: I1122 07:11:47.730170 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:47 crc kubenswrapper[4853]: I1122 07:11:47.730222 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:47 crc kubenswrapper[4853]: I1122 07:11:47.730236 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:47 crc kubenswrapper[4853]: I1122 07:11:47.730255 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:47 crc kubenswrapper[4853]: I1122 07:11:47.730271 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:47Z","lastTransitionTime":"2025-11-22T07:11:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:47 crc kubenswrapper[4853]: I1122 07:11:47.833291 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:47 crc kubenswrapper[4853]: I1122 07:11:47.833339 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:47 crc kubenswrapper[4853]: I1122 07:11:47.833350 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:47 crc kubenswrapper[4853]: I1122 07:11:47.833369 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:47 crc kubenswrapper[4853]: I1122 07:11:47.833379 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:47Z","lastTransitionTime":"2025-11-22T07:11:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:47 crc kubenswrapper[4853]: I1122 07:11:47.937378 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:47 crc kubenswrapper[4853]: I1122 07:11:47.937442 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:47 crc kubenswrapper[4853]: I1122 07:11:47.937459 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:47 crc kubenswrapper[4853]: I1122 07:11:47.937483 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:47 crc kubenswrapper[4853]: I1122 07:11:47.937502 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:47Z","lastTransitionTime":"2025-11-22T07:11:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:48 crc kubenswrapper[4853]: I1122 07:11:48.040690 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:48 crc kubenswrapper[4853]: I1122 07:11:48.040789 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:48 crc kubenswrapper[4853]: I1122 07:11:48.040806 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:48 crc kubenswrapper[4853]: I1122 07:11:48.040832 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:48 crc kubenswrapper[4853]: I1122 07:11:48.040852 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:48Z","lastTransitionTime":"2025-11-22T07:11:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:48 crc kubenswrapper[4853]: I1122 07:11:48.144954 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:48 crc kubenswrapper[4853]: I1122 07:11:48.145018 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:48 crc kubenswrapper[4853]: I1122 07:11:48.145035 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:48 crc kubenswrapper[4853]: I1122 07:11:48.145060 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:48 crc kubenswrapper[4853]: I1122 07:11:48.145080 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:48Z","lastTransitionTime":"2025-11-22T07:11:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:48 crc kubenswrapper[4853]: I1122 07:11:48.250832 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:48 crc kubenswrapper[4853]: I1122 07:11:48.250984 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:48 crc kubenswrapper[4853]: I1122 07:11:48.251032 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:48 crc kubenswrapper[4853]: I1122 07:11:48.251069 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:48 crc kubenswrapper[4853]: I1122 07:11:48.251095 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:48Z","lastTransitionTime":"2025-11-22T07:11:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:48 crc kubenswrapper[4853]: I1122 07:11:48.355892 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:48 crc kubenswrapper[4853]: I1122 07:11:48.355977 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:48 crc kubenswrapper[4853]: I1122 07:11:48.355996 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:48 crc kubenswrapper[4853]: I1122 07:11:48.356022 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:48 crc kubenswrapper[4853]: I1122 07:11:48.356041 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:48Z","lastTransitionTime":"2025-11-22T07:11:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:48 crc kubenswrapper[4853]: I1122 07:11:48.460054 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:48 crc kubenswrapper[4853]: I1122 07:11:48.460163 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:48 crc kubenswrapper[4853]: I1122 07:11:48.460181 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:48 crc kubenswrapper[4853]: I1122 07:11:48.460208 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:48 crc kubenswrapper[4853]: I1122 07:11:48.460229 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:48Z","lastTransitionTime":"2025-11-22T07:11:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:48 crc kubenswrapper[4853]: I1122 07:11:48.564911 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:48 crc kubenswrapper[4853]: I1122 07:11:48.565082 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:48 crc kubenswrapper[4853]: I1122 07:11:48.565113 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:48 crc kubenswrapper[4853]: I1122 07:11:48.565142 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:48 crc kubenswrapper[4853]: I1122 07:11:48.565161 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:48Z","lastTransitionTime":"2025-11-22T07:11:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:48 crc kubenswrapper[4853]: I1122 07:11:48.668380 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:48 crc kubenswrapper[4853]: I1122 07:11:48.668453 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:48 crc kubenswrapper[4853]: I1122 07:11:48.668471 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:48 crc kubenswrapper[4853]: I1122 07:11:48.668498 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:48 crc kubenswrapper[4853]: I1122 07:11:48.668519 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:48Z","lastTransitionTime":"2025-11-22T07:11:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:48 crc kubenswrapper[4853]: I1122 07:11:48.747893 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pd6gs" Nov 22 07:11:48 crc kubenswrapper[4853]: I1122 07:11:48.747958 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:11:48 crc kubenswrapper[4853]: I1122 07:11:48.747973 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 07:11:48 crc kubenswrapper[4853]: I1122 07:11:48.747979 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 07:11:48 crc kubenswrapper[4853]: E1122 07:11:48.748306 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pd6gs" podUID="9cc2bf97-eb39-4b0c-abda-99b49bb530fd" Nov 22 07:11:48 crc kubenswrapper[4853]: E1122 07:11:48.748471 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 07:11:48 crc kubenswrapper[4853]: E1122 07:11:48.748662 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 07:11:48 crc kubenswrapper[4853]: E1122 07:11:48.748965 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 07:11:48 crc kubenswrapper[4853]: I1122 07:11:48.772328 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:48 crc kubenswrapper[4853]: I1122 07:11:48.772381 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:48 crc kubenswrapper[4853]: I1122 07:11:48.772398 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:48 crc kubenswrapper[4853]: I1122 07:11:48.772421 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:48 crc kubenswrapper[4853]: I1122 07:11:48.772439 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:48Z","lastTransitionTime":"2025-11-22T07:11:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:48 crc kubenswrapper[4853]: I1122 07:11:48.875994 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:48 crc kubenswrapper[4853]: I1122 07:11:48.876065 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:48 crc kubenswrapper[4853]: I1122 07:11:48.876081 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:48 crc kubenswrapper[4853]: I1122 07:11:48.876100 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:48 crc kubenswrapper[4853]: I1122 07:11:48.876114 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:48Z","lastTransitionTime":"2025-11-22T07:11:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:48 crc kubenswrapper[4853]: I1122 07:11:48.978662 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:48 crc kubenswrapper[4853]: I1122 07:11:48.978714 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:48 crc kubenswrapper[4853]: I1122 07:11:48.978726 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:48 crc kubenswrapper[4853]: I1122 07:11:48.978804 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:48 crc kubenswrapper[4853]: I1122 07:11:48.978821 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:48Z","lastTransitionTime":"2025-11-22T07:11:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:49 crc kubenswrapper[4853]: I1122 07:11:49.082278 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:49 crc kubenswrapper[4853]: I1122 07:11:49.082335 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:49 crc kubenswrapper[4853]: I1122 07:11:49.082348 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:49 crc kubenswrapper[4853]: I1122 07:11:49.082369 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:49 crc kubenswrapper[4853]: I1122 07:11:49.082385 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:49Z","lastTransitionTime":"2025-11-22T07:11:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:49 crc kubenswrapper[4853]: I1122 07:11:49.185325 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:49 crc kubenswrapper[4853]: I1122 07:11:49.185366 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:49 crc kubenswrapper[4853]: I1122 07:11:49.185378 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:49 crc kubenswrapper[4853]: I1122 07:11:49.185394 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:49 crc kubenswrapper[4853]: I1122 07:11:49.185406 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:49Z","lastTransitionTime":"2025-11-22T07:11:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:49 crc kubenswrapper[4853]: I1122 07:11:49.289032 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:49 crc kubenswrapper[4853]: I1122 07:11:49.289104 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:49 crc kubenswrapper[4853]: I1122 07:11:49.289133 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:49 crc kubenswrapper[4853]: I1122 07:11:49.289167 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:49 crc kubenswrapper[4853]: I1122 07:11:49.289188 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:49Z","lastTransitionTime":"2025-11-22T07:11:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:49 crc kubenswrapper[4853]: I1122 07:11:49.392165 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:49 crc kubenswrapper[4853]: I1122 07:11:49.392221 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:49 crc kubenswrapper[4853]: I1122 07:11:49.392233 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:49 crc kubenswrapper[4853]: I1122 07:11:49.392252 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:49 crc kubenswrapper[4853]: I1122 07:11:49.392267 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:49Z","lastTransitionTime":"2025-11-22T07:11:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:49 crc kubenswrapper[4853]: I1122 07:11:49.495510 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:49 crc kubenswrapper[4853]: I1122 07:11:49.495579 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:49 crc kubenswrapper[4853]: I1122 07:11:49.495599 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:49 crc kubenswrapper[4853]: I1122 07:11:49.495628 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:49 crc kubenswrapper[4853]: I1122 07:11:49.495648 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:49Z","lastTransitionTime":"2025-11-22T07:11:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:49 crc kubenswrapper[4853]: I1122 07:11:49.598730 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:49 crc kubenswrapper[4853]: I1122 07:11:49.598806 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:49 crc kubenswrapper[4853]: I1122 07:11:49.598824 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:49 crc kubenswrapper[4853]: I1122 07:11:49.598844 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:49 crc kubenswrapper[4853]: I1122 07:11:49.598858 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:49Z","lastTransitionTime":"2025-11-22T07:11:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:49 crc kubenswrapper[4853]: I1122 07:11:49.701853 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:49 crc kubenswrapper[4853]: I1122 07:11:49.701902 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:49 crc kubenswrapper[4853]: I1122 07:11:49.701912 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:49 crc kubenswrapper[4853]: I1122 07:11:49.701930 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:49 crc kubenswrapper[4853]: I1122 07:11:49.701941 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:49Z","lastTransitionTime":"2025-11-22T07:11:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:49 crc kubenswrapper[4853]: I1122 07:11:49.806012 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:49 crc kubenswrapper[4853]: I1122 07:11:49.806067 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:49 crc kubenswrapper[4853]: I1122 07:11:49.806077 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:49 crc kubenswrapper[4853]: I1122 07:11:49.806099 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:49 crc kubenswrapper[4853]: I1122 07:11:49.806111 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:49Z","lastTransitionTime":"2025-11-22T07:11:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:49 crc kubenswrapper[4853]: I1122 07:11:49.909297 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:49 crc kubenswrapper[4853]: I1122 07:11:49.909398 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:49 crc kubenswrapper[4853]: I1122 07:11:49.909417 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:49 crc kubenswrapper[4853]: I1122 07:11:49.909445 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:49 crc kubenswrapper[4853]: I1122 07:11:49.909464 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:49Z","lastTransitionTime":"2025-11-22T07:11:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:50 crc kubenswrapper[4853]: I1122 07:11:50.012088 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:50 crc kubenswrapper[4853]: I1122 07:11:50.012122 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:50 crc kubenswrapper[4853]: I1122 07:11:50.012132 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:50 crc kubenswrapper[4853]: I1122 07:11:50.012148 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:50 crc kubenswrapper[4853]: I1122 07:11:50.012161 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:50Z","lastTransitionTime":"2025-11-22T07:11:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:50 crc kubenswrapper[4853]: I1122 07:11:50.114343 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:50 crc kubenswrapper[4853]: I1122 07:11:50.114440 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:50 crc kubenswrapper[4853]: I1122 07:11:50.114453 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:50 crc kubenswrapper[4853]: I1122 07:11:50.114471 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:50 crc kubenswrapper[4853]: I1122 07:11:50.114483 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:50Z","lastTransitionTime":"2025-11-22T07:11:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:50 crc kubenswrapper[4853]: I1122 07:11:50.216862 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:50 crc kubenswrapper[4853]: I1122 07:11:50.216907 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:50 crc kubenswrapper[4853]: I1122 07:11:50.216918 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:50 crc kubenswrapper[4853]: I1122 07:11:50.216940 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:50 crc kubenswrapper[4853]: I1122 07:11:50.216953 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:50Z","lastTransitionTime":"2025-11-22T07:11:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:50 crc kubenswrapper[4853]: I1122 07:11:50.320101 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:50 crc kubenswrapper[4853]: I1122 07:11:50.320166 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:50 crc kubenswrapper[4853]: I1122 07:11:50.320177 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:50 crc kubenswrapper[4853]: I1122 07:11:50.320197 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:50 crc kubenswrapper[4853]: I1122 07:11:50.320208 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:50Z","lastTransitionTime":"2025-11-22T07:11:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:50 crc kubenswrapper[4853]: I1122 07:11:50.423345 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:50 crc kubenswrapper[4853]: I1122 07:11:50.423413 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:50 crc kubenswrapper[4853]: I1122 07:11:50.423429 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:50 crc kubenswrapper[4853]: I1122 07:11:50.423451 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:50 crc kubenswrapper[4853]: I1122 07:11:50.423471 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:50Z","lastTransitionTime":"2025-11-22T07:11:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:50 crc kubenswrapper[4853]: I1122 07:11:50.527140 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:50 crc kubenswrapper[4853]: I1122 07:11:50.527230 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:50 crc kubenswrapper[4853]: I1122 07:11:50.527251 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:50 crc kubenswrapper[4853]: I1122 07:11:50.527282 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:50 crc kubenswrapper[4853]: I1122 07:11:50.527301 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:50Z","lastTransitionTime":"2025-11-22T07:11:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:50 crc kubenswrapper[4853]: I1122 07:11:50.630828 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:50 crc kubenswrapper[4853]: I1122 07:11:50.631044 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:50 crc kubenswrapper[4853]: I1122 07:11:50.631085 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:50 crc kubenswrapper[4853]: I1122 07:11:50.631117 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:50 crc kubenswrapper[4853]: I1122 07:11:50.631141 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:50Z","lastTransitionTime":"2025-11-22T07:11:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:50 crc kubenswrapper[4853]: I1122 07:11:50.734082 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:50 crc kubenswrapper[4853]: I1122 07:11:50.734147 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:50 crc kubenswrapper[4853]: I1122 07:11:50.734163 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:50 crc kubenswrapper[4853]: I1122 07:11:50.734186 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:50 crc kubenswrapper[4853]: I1122 07:11:50.734209 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:50Z","lastTransitionTime":"2025-11-22T07:11:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:50 crc kubenswrapper[4853]: I1122 07:11:50.747697 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 07:11:50 crc kubenswrapper[4853]: I1122 07:11:50.747730 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pd6gs" Nov 22 07:11:50 crc kubenswrapper[4853]: I1122 07:11:50.747735 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 07:11:50 crc kubenswrapper[4853]: I1122 07:11:50.747745 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:11:50 crc kubenswrapper[4853]: E1122 07:11:50.747916 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 07:11:50 crc kubenswrapper[4853]: E1122 07:11:50.748067 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 07:11:50 crc kubenswrapper[4853]: E1122 07:11:50.748229 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pd6gs" podUID="9cc2bf97-eb39-4b0c-abda-99b49bb530fd" Nov 22 07:11:50 crc kubenswrapper[4853]: E1122 07:11:50.748317 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 07:11:50 crc kubenswrapper[4853]: I1122 07:11:50.839520 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:50 crc kubenswrapper[4853]: I1122 07:11:50.839600 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:50 crc kubenswrapper[4853]: I1122 07:11:50.839621 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:50 crc kubenswrapper[4853]: I1122 07:11:50.839649 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:50 crc kubenswrapper[4853]: I1122 07:11:50.839671 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:50Z","lastTransitionTime":"2025-11-22T07:11:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:50 crc kubenswrapper[4853]: I1122 07:11:50.942859 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:50 crc kubenswrapper[4853]: I1122 07:11:50.942923 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:50 crc kubenswrapper[4853]: I1122 07:11:50.942935 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:50 crc kubenswrapper[4853]: I1122 07:11:50.942955 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:50 crc kubenswrapper[4853]: I1122 07:11:50.942968 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:50Z","lastTransitionTime":"2025-11-22T07:11:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:51 crc kubenswrapper[4853]: I1122 07:11:51.046381 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:51 crc kubenswrapper[4853]: I1122 07:11:51.046459 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:51 crc kubenswrapper[4853]: I1122 07:11:51.046474 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:51 crc kubenswrapper[4853]: I1122 07:11:51.046490 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:51 crc kubenswrapper[4853]: I1122 07:11:51.046503 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:51Z","lastTransitionTime":"2025-11-22T07:11:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:51 crc kubenswrapper[4853]: I1122 07:11:51.149255 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:51 crc kubenswrapper[4853]: I1122 07:11:51.149331 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:51 crc kubenswrapper[4853]: I1122 07:11:51.149342 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:51 crc kubenswrapper[4853]: I1122 07:11:51.149359 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:51 crc kubenswrapper[4853]: I1122 07:11:51.149370 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:51Z","lastTransitionTime":"2025-11-22T07:11:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:51 crc kubenswrapper[4853]: I1122 07:11:51.252675 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:51 crc kubenswrapper[4853]: I1122 07:11:51.252799 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:51 crc kubenswrapper[4853]: I1122 07:11:51.252823 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:51 crc kubenswrapper[4853]: I1122 07:11:51.252852 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:51 crc kubenswrapper[4853]: I1122 07:11:51.252880 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:51Z","lastTransitionTime":"2025-11-22T07:11:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:51 crc kubenswrapper[4853]: I1122 07:11:51.357744 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:51 crc kubenswrapper[4853]: I1122 07:11:51.357855 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:51 crc kubenswrapper[4853]: I1122 07:11:51.357874 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:51 crc kubenswrapper[4853]: I1122 07:11:51.357904 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:51 crc kubenswrapper[4853]: I1122 07:11:51.357925 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:51Z","lastTransitionTime":"2025-11-22T07:11:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:51 crc kubenswrapper[4853]: I1122 07:11:51.462048 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:51 crc kubenswrapper[4853]: I1122 07:11:51.462597 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:51 crc kubenswrapper[4853]: I1122 07:11:51.462728 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:51 crc kubenswrapper[4853]: I1122 07:11:51.462828 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:51 crc kubenswrapper[4853]: I1122 07:11:51.462856 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:51Z","lastTransitionTime":"2025-11-22T07:11:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:51 crc kubenswrapper[4853]: I1122 07:11:51.566725 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:51 crc kubenswrapper[4853]: I1122 07:11:51.566841 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:51 crc kubenswrapper[4853]: I1122 07:11:51.566862 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:51 crc kubenswrapper[4853]: I1122 07:11:51.566890 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:51 crc kubenswrapper[4853]: I1122 07:11:51.566909 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:51Z","lastTransitionTime":"2025-11-22T07:11:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:51 crc kubenswrapper[4853]: I1122 07:11:51.671025 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:51 crc kubenswrapper[4853]: I1122 07:11:51.671104 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:51 crc kubenswrapper[4853]: I1122 07:11:51.671127 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:51 crc kubenswrapper[4853]: I1122 07:11:51.671152 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:51 crc kubenswrapper[4853]: I1122 07:11:51.671171 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:51Z","lastTransitionTime":"2025-11-22T07:11:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:51 crc kubenswrapper[4853]: I1122 07:11:51.687355 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:51 crc kubenswrapper[4853]: I1122 07:11:51.687416 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:51 crc kubenswrapper[4853]: I1122 07:11:51.687452 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:51 crc kubenswrapper[4853]: I1122 07:11:51.687474 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:51 crc kubenswrapper[4853]: I1122 07:11:51.687487 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:51Z","lastTransitionTime":"2025-11-22T07:11:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:51 crc kubenswrapper[4853]: E1122 07:11:51.708441 4853 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:11:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:11:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:11:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:11:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d74141ce-7696-4d74-b510-3a9c2c375ecd\\\",\\\"systemUUID\\\":\\\"362c9708-b683-4c02-a83b-39323a200ef4\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:51Z is after 
2025-08-24T17:21:41Z" Nov 22 07:11:51 crc kubenswrapper[4853]: I1122 07:11:51.715067 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:51 crc kubenswrapper[4853]: I1122 07:11:51.715120 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:51 crc kubenswrapper[4853]: I1122 07:11:51.715131 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:51 crc kubenswrapper[4853]: I1122 07:11:51.715151 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:51 crc kubenswrapper[4853]: I1122 07:11:51.715173 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:51Z","lastTransitionTime":"2025-11-22T07:11:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:51 crc kubenswrapper[4853]: E1122 07:11:51.734090 4853 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:11:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:11:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:11:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:11:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d74141ce-7696-4d74-b510-3a9c2c375ecd\\\",\\\"systemUUID\\\":\\\"362c9708-b683-4c02-a83b-39323a200ef4\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:51Z is after 
2025-08-24T17:21:41Z" Nov 22 07:11:51 crc kubenswrapper[4853]: I1122 07:11:51.740029 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:51 crc kubenswrapper[4853]: I1122 07:11:51.740076 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:51 crc kubenswrapper[4853]: I1122 07:11:51.740089 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:51 crc kubenswrapper[4853]: I1122 07:11:51.740107 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:51 crc kubenswrapper[4853]: I1122 07:11:51.740122 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:51Z","lastTransitionTime":"2025-11-22T07:11:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:51 crc kubenswrapper[4853]: E1122 07:11:51.758779 4853 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:11:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:11:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:11:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:11:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d74141ce-7696-4d74-b510-3a9c2c375ecd\\\",\\\"systemUUID\\\":\\\"362c9708-b683-4c02-a83b-39323a200ef4\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:51Z is after 
2025-08-24T17:21:41Z" Nov 22 07:11:51 crc kubenswrapper[4853]: I1122 07:11:51.764913 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:51 crc kubenswrapper[4853]: I1122 07:11:51.764965 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:51 crc kubenswrapper[4853]: I1122 07:11:51.764980 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:51 crc kubenswrapper[4853]: I1122 07:11:51.764999 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:51 crc kubenswrapper[4853]: I1122 07:11:51.765016 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:51Z","lastTransitionTime":"2025-11-22T07:11:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:51 crc kubenswrapper[4853]: E1122 07:11:51.781708 4853 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:11:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:11:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:11:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:11:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d74141ce-7696-4d74-b510-3a9c2c375ecd\\\",\\\"systemUUID\\\":\\\"362c9708-b683-4c02-a83b-39323a200ef4\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:51Z is after 
2025-08-24T17:21:41Z" Nov 22 07:11:51 crc kubenswrapper[4853]: I1122 07:11:51.787289 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:51 crc kubenswrapper[4853]: I1122 07:11:51.787357 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:51 crc kubenswrapper[4853]: I1122 07:11:51.787380 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:51 crc kubenswrapper[4853]: I1122 07:11:51.787413 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:51 crc kubenswrapper[4853]: I1122 07:11:51.787437 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:51Z","lastTransitionTime":"2025-11-22T07:11:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:51 crc kubenswrapper[4853]: E1122 07:11:51.803828 4853 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:11:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:11:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:11:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:11:51Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d74141ce-7696-4d74-b510-3a9c2c375ecd\\\",\\\"systemUUID\\\":\\\"362c9708-b683-4c02-a83b-39323a200ef4\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:51Z is after 
2025-08-24T17:21:41Z" Nov 22 07:11:51 crc kubenswrapper[4853]: E1122 07:11:51.804134 4853 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 22 07:11:51 crc kubenswrapper[4853]: I1122 07:11:51.807148 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:51 crc kubenswrapper[4853]: I1122 07:11:51.807199 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:51 crc kubenswrapper[4853]: I1122 07:11:51.807209 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:51 crc kubenswrapper[4853]: I1122 07:11:51.807227 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:51 crc kubenswrapper[4853]: I1122 07:11:51.807240 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:51Z","lastTransitionTime":"2025-11-22T07:11:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:51 crc kubenswrapper[4853]: I1122 07:11:51.910834 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:51 crc kubenswrapper[4853]: I1122 07:11:51.910892 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:51 crc kubenswrapper[4853]: I1122 07:11:51.910906 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:51 crc kubenswrapper[4853]: I1122 07:11:51.910924 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:51 crc kubenswrapper[4853]: I1122 07:11:51.910937 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:51Z","lastTransitionTime":"2025-11-22T07:11:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:52 crc kubenswrapper[4853]: I1122 07:11:52.014336 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:52 crc kubenswrapper[4853]: I1122 07:11:52.014408 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:52 crc kubenswrapper[4853]: I1122 07:11:52.014423 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:52 crc kubenswrapper[4853]: I1122 07:11:52.014480 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:52 crc kubenswrapper[4853]: I1122 07:11:52.014494 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:52Z","lastTransitionTime":"2025-11-22T07:11:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:52 crc kubenswrapper[4853]: I1122 07:11:52.118195 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:52 crc kubenswrapper[4853]: I1122 07:11:52.118271 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:52 crc kubenswrapper[4853]: I1122 07:11:52.118283 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:52 crc kubenswrapper[4853]: I1122 07:11:52.118304 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:52 crc kubenswrapper[4853]: I1122 07:11:52.118318 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:52Z","lastTransitionTime":"2025-11-22T07:11:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:52 crc kubenswrapper[4853]: I1122 07:11:52.221015 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:52 crc kubenswrapper[4853]: I1122 07:11:52.221082 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:52 crc kubenswrapper[4853]: I1122 07:11:52.221104 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:52 crc kubenswrapper[4853]: I1122 07:11:52.221132 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:52 crc kubenswrapper[4853]: I1122 07:11:52.221150 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:52Z","lastTransitionTime":"2025-11-22T07:11:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:52 crc kubenswrapper[4853]: I1122 07:11:52.324487 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:52 crc kubenswrapper[4853]: I1122 07:11:52.324539 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:52 crc kubenswrapper[4853]: I1122 07:11:52.324551 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:52 crc kubenswrapper[4853]: I1122 07:11:52.324570 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:52 crc kubenswrapper[4853]: I1122 07:11:52.324581 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:52Z","lastTransitionTime":"2025-11-22T07:11:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:52 crc kubenswrapper[4853]: I1122 07:11:52.427984 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:52 crc kubenswrapper[4853]: I1122 07:11:52.428068 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:52 crc kubenswrapper[4853]: I1122 07:11:52.428079 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:52 crc kubenswrapper[4853]: I1122 07:11:52.428099 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:52 crc kubenswrapper[4853]: I1122 07:11:52.428111 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:52Z","lastTransitionTime":"2025-11-22T07:11:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:52 crc kubenswrapper[4853]: I1122 07:11:52.531989 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:52 crc kubenswrapper[4853]: I1122 07:11:52.532054 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:52 crc kubenswrapper[4853]: I1122 07:11:52.532064 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:52 crc kubenswrapper[4853]: I1122 07:11:52.532084 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:52 crc kubenswrapper[4853]: I1122 07:11:52.532095 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:52Z","lastTransitionTime":"2025-11-22T07:11:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:52 crc kubenswrapper[4853]: I1122 07:11:52.635385 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:52 crc kubenswrapper[4853]: I1122 07:11:52.635458 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:52 crc kubenswrapper[4853]: I1122 07:11:52.635468 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:52 crc kubenswrapper[4853]: I1122 07:11:52.635486 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:52 crc kubenswrapper[4853]: I1122 07:11:52.635497 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:52Z","lastTransitionTime":"2025-11-22T07:11:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:52 crc kubenswrapper[4853]: I1122 07:11:52.738914 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:52 crc kubenswrapper[4853]: I1122 07:11:52.738968 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:52 crc kubenswrapper[4853]: I1122 07:11:52.738979 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:52 crc kubenswrapper[4853]: I1122 07:11:52.739000 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:52 crc kubenswrapper[4853]: I1122 07:11:52.739012 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:52Z","lastTransitionTime":"2025-11-22T07:11:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:52 crc kubenswrapper[4853]: I1122 07:11:52.747440 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 07:11:52 crc kubenswrapper[4853]: I1122 07:11:52.747499 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:11:52 crc kubenswrapper[4853]: E1122 07:11:52.747629 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 07:11:52 crc kubenswrapper[4853]: I1122 07:11:52.747451 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-pd6gs" Nov 22 07:11:52 crc kubenswrapper[4853]: E1122 07:11:52.747809 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 07:11:52 crc kubenswrapper[4853]: I1122 07:11:52.747860 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 07:11:52 crc kubenswrapper[4853]: E1122 07:11:52.747910 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pd6gs" podUID="9cc2bf97-eb39-4b0c-abda-99b49bb530fd" Nov 22 07:11:52 crc kubenswrapper[4853]: E1122 07:11:52.747959 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 07:11:52 crc kubenswrapper[4853]: I1122 07:11:52.841226 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:52 crc kubenswrapper[4853]: I1122 07:11:52.841281 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:52 crc kubenswrapper[4853]: I1122 07:11:52.841299 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:52 crc kubenswrapper[4853]: I1122 07:11:52.841318 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:52 crc kubenswrapper[4853]: I1122 07:11:52.841334 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:52Z","lastTransitionTime":"2025-11-22T07:11:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:52 crc kubenswrapper[4853]: I1122 07:11:52.945682 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:52 crc kubenswrapper[4853]: I1122 07:11:52.945800 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:52 crc kubenswrapper[4853]: I1122 07:11:52.945827 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:52 crc kubenswrapper[4853]: I1122 07:11:52.945866 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:52 crc kubenswrapper[4853]: I1122 07:11:52.945889 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:52Z","lastTransitionTime":"2025-11-22T07:11:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:53 crc kubenswrapper[4853]: I1122 07:11:53.049742 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:53 crc kubenswrapper[4853]: I1122 07:11:53.049866 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:53 crc kubenswrapper[4853]: I1122 07:11:53.049920 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:53 crc kubenswrapper[4853]: I1122 07:11:53.049946 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:53 crc kubenswrapper[4853]: I1122 07:11:53.049963 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:53Z","lastTransitionTime":"2025-11-22T07:11:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:53 crc kubenswrapper[4853]: I1122 07:11:53.153522 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:53 crc kubenswrapper[4853]: I1122 07:11:53.153562 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:53 crc kubenswrapper[4853]: I1122 07:11:53.153573 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:53 crc kubenswrapper[4853]: I1122 07:11:53.153590 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:53 crc kubenswrapper[4853]: I1122 07:11:53.153601 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:53Z","lastTransitionTime":"2025-11-22T07:11:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:53 crc kubenswrapper[4853]: I1122 07:11:53.257402 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:53 crc kubenswrapper[4853]: I1122 07:11:53.257458 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:53 crc kubenswrapper[4853]: I1122 07:11:53.257469 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:53 crc kubenswrapper[4853]: I1122 07:11:53.257490 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:53 crc kubenswrapper[4853]: I1122 07:11:53.257502 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:53Z","lastTransitionTime":"2025-11-22T07:11:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:53 crc kubenswrapper[4853]: I1122 07:11:53.360784 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:53 crc kubenswrapper[4853]: I1122 07:11:53.360840 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:53 crc kubenswrapper[4853]: I1122 07:11:53.360853 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:53 crc kubenswrapper[4853]: I1122 07:11:53.360871 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:53 crc kubenswrapper[4853]: I1122 07:11:53.360884 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:53Z","lastTransitionTime":"2025-11-22T07:11:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:53 crc kubenswrapper[4853]: I1122 07:11:53.464395 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:53 crc kubenswrapper[4853]: I1122 07:11:53.464473 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:53 crc kubenswrapper[4853]: I1122 07:11:53.464497 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:53 crc kubenswrapper[4853]: I1122 07:11:53.464530 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:53 crc kubenswrapper[4853]: I1122 07:11:53.464553 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:53Z","lastTransitionTime":"2025-11-22T07:11:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:53 crc kubenswrapper[4853]: I1122 07:11:53.568036 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:53 crc kubenswrapper[4853]: I1122 07:11:53.568088 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:53 crc kubenswrapper[4853]: I1122 07:11:53.568100 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:53 crc kubenswrapper[4853]: I1122 07:11:53.568117 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:53 crc kubenswrapper[4853]: I1122 07:11:53.568131 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:53Z","lastTransitionTime":"2025-11-22T07:11:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:53 crc kubenswrapper[4853]: I1122 07:11:53.672352 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:53 crc kubenswrapper[4853]: I1122 07:11:53.672437 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:53 crc kubenswrapper[4853]: I1122 07:11:53.672462 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:53 crc kubenswrapper[4853]: I1122 07:11:53.672492 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:53 crc kubenswrapper[4853]: I1122 07:11:53.672521 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:53Z","lastTransitionTime":"2025-11-22T07:11:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:53 crc kubenswrapper[4853]: I1122 07:11:53.748488 4853 scope.go:117] "RemoveContainer" containerID="0bfe75c62e217cccff97aad20cda18675013af6a3b1b10ef60227be8ea4965fc" Nov 22 07:11:53 crc kubenswrapper[4853]: E1122 07:11:53.749061 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-pqtsz_openshift-ovn-kubernetes(893f7e02-580a-4093-ab42-ea73ffffcfe6)\"" pod="openshift-ovn-kubernetes/ovnkube-node-pqtsz" podUID="893f7e02-580a-4093-ab42-ea73ffffcfe6" Nov 22 07:11:53 crc kubenswrapper[4853]: I1122 07:11:53.776461 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:53 crc kubenswrapper[4853]: I1122 07:11:53.776526 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:53 crc kubenswrapper[4853]: I1122 07:11:53.776542 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:53 crc kubenswrapper[4853]: I1122 07:11:53.776567 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:53 crc kubenswrapper[4853]: I1122 07:11:53.776583 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:53Z","lastTransitionTime":"2025-11-22T07:11:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:53 crc kubenswrapper[4853]: I1122 07:11:53.880304 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:53 crc kubenswrapper[4853]: I1122 07:11:53.880369 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:53 crc kubenswrapper[4853]: I1122 07:11:53.880394 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:53 crc kubenswrapper[4853]: I1122 07:11:53.880422 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:53 crc kubenswrapper[4853]: I1122 07:11:53.880446 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:53Z","lastTransitionTime":"2025-11-22T07:11:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:53 crc kubenswrapper[4853]: I1122 07:11:53.984014 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:53 crc kubenswrapper[4853]: I1122 07:11:53.984058 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:53 crc kubenswrapper[4853]: I1122 07:11:53.984067 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:53 crc kubenswrapper[4853]: I1122 07:11:53.984086 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:53 crc kubenswrapper[4853]: I1122 07:11:53.984098 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:53Z","lastTransitionTime":"2025-11-22T07:11:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:54 crc kubenswrapper[4853]: I1122 07:11:54.087895 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:54 crc kubenswrapper[4853]: I1122 07:11:54.087997 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:54 crc kubenswrapper[4853]: I1122 07:11:54.088016 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:54 crc kubenswrapper[4853]: I1122 07:11:54.088044 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:54 crc kubenswrapper[4853]: I1122 07:11:54.088065 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:54Z","lastTransitionTime":"2025-11-22T07:11:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:54 crc kubenswrapper[4853]: I1122 07:11:54.190735 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:54 crc kubenswrapper[4853]: I1122 07:11:54.190805 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:54 crc kubenswrapper[4853]: I1122 07:11:54.190821 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:54 crc kubenswrapper[4853]: I1122 07:11:54.190845 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:54 crc kubenswrapper[4853]: I1122 07:11:54.190858 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:54Z","lastTransitionTime":"2025-11-22T07:11:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:54 crc kubenswrapper[4853]: I1122 07:11:54.294056 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:54 crc kubenswrapper[4853]: I1122 07:11:54.294137 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:54 crc kubenswrapper[4853]: I1122 07:11:54.294155 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:54 crc kubenswrapper[4853]: I1122 07:11:54.294181 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:54 crc kubenswrapper[4853]: I1122 07:11:54.294230 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:54Z","lastTransitionTime":"2025-11-22T07:11:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:54 crc kubenswrapper[4853]: I1122 07:11:54.396924 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:54 crc kubenswrapper[4853]: I1122 07:11:54.396977 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:54 crc kubenswrapper[4853]: I1122 07:11:54.396993 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:54 crc kubenswrapper[4853]: I1122 07:11:54.397014 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:54 crc kubenswrapper[4853]: I1122 07:11:54.397028 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:54Z","lastTransitionTime":"2025-11-22T07:11:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:54 crc kubenswrapper[4853]: I1122 07:11:54.500221 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:54 crc kubenswrapper[4853]: I1122 07:11:54.500291 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:54 crc kubenswrapper[4853]: I1122 07:11:54.500307 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:54 crc kubenswrapper[4853]: I1122 07:11:54.500328 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:54 crc kubenswrapper[4853]: I1122 07:11:54.500345 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:54Z","lastTransitionTime":"2025-11-22T07:11:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:54 crc kubenswrapper[4853]: I1122 07:11:54.603852 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:54 crc kubenswrapper[4853]: I1122 07:11:54.603939 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:54 crc kubenswrapper[4853]: I1122 07:11:54.603965 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:54 crc kubenswrapper[4853]: I1122 07:11:54.603997 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:54 crc kubenswrapper[4853]: I1122 07:11:54.604036 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:54Z","lastTransitionTime":"2025-11-22T07:11:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:54 crc kubenswrapper[4853]: I1122 07:11:54.707324 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:54 crc kubenswrapper[4853]: I1122 07:11:54.707412 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:54 crc kubenswrapper[4853]: I1122 07:11:54.707426 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:54 crc kubenswrapper[4853]: I1122 07:11:54.707451 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:54 crc kubenswrapper[4853]: I1122 07:11:54.707469 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:54Z","lastTransitionTime":"2025-11-22T07:11:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:54 crc kubenswrapper[4853]: I1122 07:11:54.747332 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 07:11:54 crc kubenswrapper[4853]: I1122 07:11:54.747466 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:11:54 crc kubenswrapper[4853]: I1122 07:11:54.747466 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pd6gs" Nov 22 07:11:54 crc kubenswrapper[4853]: E1122 07:11:54.747551 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 07:11:54 crc kubenswrapper[4853]: I1122 07:11:54.747599 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 07:11:54 crc kubenswrapper[4853]: E1122 07:11:54.747833 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 07:11:54 crc kubenswrapper[4853]: E1122 07:11:54.747955 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pd6gs" podUID="9cc2bf97-eb39-4b0c-abda-99b49bb530fd" Nov 22 07:11:54 crc kubenswrapper[4853]: E1122 07:11:54.748053 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 07:11:54 crc kubenswrapper[4853]: I1122 07:11:54.810349 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:54 crc kubenswrapper[4853]: I1122 07:11:54.810424 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:54 crc kubenswrapper[4853]: I1122 07:11:54.810444 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:54 crc kubenswrapper[4853]: I1122 07:11:54.810474 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:54 crc kubenswrapper[4853]: I1122 07:11:54.810495 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:54Z","lastTransitionTime":"2025-11-22T07:11:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:54 crc kubenswrapper[4853]: I1122 07:11:54.914371 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:54 crc kubenswrapper[4853]: I1122 07:11:54.914435 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:54 crc kubenswrapper[4853]: I1122 07:11:54.914444 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:54 crc kubenswrapper[4853]: I1122 07:11:54.914459 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:54 crc kubenswrapper[4853]: I1122 07:11:54.914470 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:54Z","lastTransitionTime":"2025-11-22T07:11:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:55 crc kubenswrapper[4853]: I1122 07:11:55.017094 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:55 crc kubenswrapper[4853]: I1122 07:11:55.017162 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:55 crc kubenswrapper[4853]: I1122 07:11:55.017181 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:55 crc kubenswrapper[4853]: I1122 07:11:55.017207 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:55 crc kubenswrapper[4853]: I1122 07:11:55.017227 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:55Z","lastTransitionTime":"2025-11-22T07:11:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:55 crc kubenswrapper[4853]: I1122 07:11:55.120255 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:55 crc kubenswrapper[4853]: I1122 07:11:55.120300 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:55 crc kubenswrapper[4853]: I1122 07:11:55.120311 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:55 crc kubenswrapper[4853]: I1122 07:11:55.120328 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:55 crc kubenswrapper[4853]: I1122 07:11:55.120343 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:55Z","lastTransitionTime":"2025-11-22T07:11:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:55 crc kubenswrapper[4853]: I1122 07:11:55.223424 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:55 crc kubenswrapper[4853]: I1122 07:11:55.223534 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:55 crc kubenswrapper[4853]: I1122 07:11:55.223559 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:55 crc kubenswrapper[4853]: I1122 07:11:55.223592 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:55 crc kubenswrapper[4853]: I1122 07:11:55.223615 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:55Z","lastTransitionTime":"2025-11-22T07:11:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:55 crc kubenswrapper[4853]: I1122 07:11:55.326344 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:55 crc kubenswrapper[4853]: I1122 07:11:55.326406 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:55 crc kubenswrapper[4853]: I1122 07:11:55.326418 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:55 crc kubenswrapper[4853]: I1122 07:11:55.326435 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:55 crc kubenswrapper[4853]: I1122 07:11:55.326477 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:55Z","lastTransitionTime":"2025-11-22T07:11:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:55 crc kubenswrapper[4853]: I1122 07:11:55.428622 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:55 crc kubenswrapper[4853]: I1122 07:11:55.428704 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:55 crc kubenswrapper[4853]: I1122 07:11:55.428718 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:55 crc kubenswrapper[4853]: I1122 07:11:55.428742 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:55 crc kubenswrapper[4853]: I1122 07:11:55.428775 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:55Z","lastTransitionTime":"2025-11-22T07:11:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:55 crc kubenswrapper[4853]: I1122 07:11:55.531658 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:55 crc kubenswrapper[4853]: I1122 07:11:55.531714 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:55 crc kubenswrapper[4853]: I1122 07:11:55.531728 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:55 crc kubenswrapper[4853]: I1122 07:11:55.531767 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:55 crc kubenswrapper[4853]: I1122 07:11:55.531782 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:55Z","lastTransitionTime":"2025-11-22T07:11:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:55 crc kubenswrapper[4853]: I1122 07:11:55.635206 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:55 crc kubenswrapper[4853]: I1122 07:11:55.635273 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:55 crc kubenswrapper[4853]: I1122 07:11:55.635284 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:55 crc kubenswrapper[4853]: I1122 07:11:55.635303 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:55 crc kubenswrapper[4853]: I1122 07:11:55.635317 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:55Z","lastTransitionTime":"2025-11-22T07:11:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:55 crc kubenswrapper[4853]: I1122 07:11:55.739510 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:55 crc kubenswrapper[4853]: I1122 07:11:55.739577 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:55 crc kubenswrapper[4853]: I1122 07:11:55.739594 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:55 crc kubenswrapper[4853]: I1122 07:11:55.739618 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:55 crc kubenswrapper[4853]: I1122 07:11:55.739636 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:55Z","lastTransitionTime":"2025-11-22T07:11:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:55 crc kubenswrapper[4853]: I1122 07:11:55.766678 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:55Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:55 crc kubenswrapper[4853]: I1122 07:11:55.788519 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9003355-0dc6-42ad-950d-a7884e333f4c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f0019c0473b7ccc974dc1659a7ae0565b39d69af17094c420625387ecc9d7f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94704fd95e75549ea29613f00da78d4517cc64896cf1274c9c6f784a61f4ad4a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T07:10:53Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI1122 07:10:26.514729 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI1122 07:10:26.525241 1 observer_polling.go:159] Starting file observer\\\\nI1122 07:10:26.883412 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI1122 07:10:26.897202 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI1122 07:10:50.420853 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nI1122 07:10:51.525898 1 observer_polling.go:162] Shutting down file observer\\\\nF1122 07:10:53.463148 1 cmd.go:179] failed checking apiserver connectivity: client rate limiter Wait returned an error: context canceled\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:21Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9736c38776f22909b8e8a832683d1b881f020257f7183397286e0dd23c973a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30f542bc05dc23bc8dff5029894d08a6fea8333de31a66432814b7bbc412c4fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5d8d77818159ca98019aac809db4b1b2460723bc56b9b6728eec11faeefd026\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager
-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:55Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:55 crc kubenswrapper[4853]: I1122 07:11:55.803666 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4cc3002-cfaf-47cf-b539-3815924af5c3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://666c3902a5cb3b755fe3b5861568b744fb3ffbd28f72d7fd22d18b387486de03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6143a86b720cb8a29f4ac3d68bf2693f92fb50f528cdd7269e022115795ef14b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":
{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6143a86b720cb8a29f4ac3d68bf2693f92fb50f528cdd7269e022115795ef14b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:55Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:55 crc kubenswrapper[4853]: I1122 07:11:55.820388 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:55Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:55 crc kubenswrapper[4853]: I1122 07:11:55.837257 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:55Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:55 crc kubenswrapper[4853]: I1122 07:11:55.842595 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:55 crc kubenswrapper[4853]: I1122 07:11:55.842644 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:55 crc kubenswrapper[4853]: I1122 07:11:55.842659 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:55 crc kubenswrapper[4853]: I1122 07:11:55.842680 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:55 crc kubenswrapper[4853]: I1122 07:11:55.842695 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:55Z","lastTransitionTime":"2025-11-22T07:11:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:55 crc kubenswrapper[4853]: I1122 07:11:55.858240 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rvgxj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbbe3472-17cc-48dd-8e46-393b00149429\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5acc4b664aaf1d5b0471beb25334e65152bc3b9907f911f90e4e7899aaa1ce8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ct6dm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rvgxj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:55Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:55 crc kubenswrapper[4853]: I1122 07:11:55.877549 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ckn94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f60b37f-d6f5-4145-a3e7-cfe92fca6d77\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbf630e17f662af8880d1f34d5073a4f64e723987205f7bbb473a73808c7e935\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3adc38c2534c8026e260a63060426102130ca1bcbcb4e3741aaf4e87170a4c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3adc38c2534c8026e260a63060426102130ca1bcbcb4e3741aaf4e87170a4c7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cn
ibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db0b80c2471d5179ede5c4055fcebaabe08892f1534218952c2a19ab599be022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db0b80c2471d5179ede5c4055fcebaabe08892f1534218952c2a19ab599be022\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://157ea229b1effe5885acafb7b48db5695864ff001b1293c053aefb494f544f7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://157ea229b1effe5885acafb7b48db5695864ff001b1293c053aefb494f544f7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://203cba48e280628c33b464c336957085420ca2d345b1acfb5c8b8968ed2317e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://203cba48e280628c33b464c336957085420ca2d345b1acfb5c8b8968ed2317e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a10cb7826668319845471d465df1784a2cfba12ec3892e986bcf316c0c5e3c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a10cb7826668319845471d465df1784a2cfba12ec3892e986bcf316c0c5e3c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc93eb03f5f0cada6bf132a9de4446bc1b8ecafee3769bf18feeecea9d71478f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc93eb03f5f0cada6bf132a9de4446bc1b8ecafee3769bf18feeecea9d71478f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ckn94\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:55Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:55 crc kubenswrapper[4853]: I1122 07:11:55.891694 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-pd6gs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9cc2bf97-eb39-4b0c-abda-99b49bb530fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bzn4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bzn4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:15Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-pd6gs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:55Z is after 
2025-08-24T17:21:41Z" Nov 22 07:11:55 crc kubenswrapper[4853]: I1122 07:11:55.918404 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8dcea4b-e887-437a-972d-e6e10d41f725\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://753693005f8490e9928780fd4f230a1c8aba453e1ccbca71e70299799bfa46ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64d65e4d54cc8971b6f6349f7d32243b7faa5bb5974b22ed904a36fa34bc69fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0ba60480f038947ca4a2b9f963d38cb004714735d411218eaf6231614c4e21f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/
etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c83df4b798499177cfbd1372a939649b6cebb71b50b695af5ba7815cf450669\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed9e89cbcdfc1ac8430f617ee31a367c03be0a407da8af967930fa821e6a12a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://742d02f24a7096b76f82453ab15e531d4a378b4bbf0bb36eb1a1cb4ae1873165\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://742d02f24a7096b76f82453ab15e531d4a378b4bbf0bb36eb1a1cb4ae1873165\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2253bab71d535b61f7489679566abc1acf210e0d5f69773fedbb02befab17c2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2253bab71d535b61f7489679566abc1acf210e0d5f69773fedbb02befab17c2d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025
-11-22T07:10:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:21Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8fa71a988b649937f137ffb7533456570bf4fcd199134424fc7bff2b62335f1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fa71a988b649937f137ffb7533456570bf4fcd199134424fc7bff2b62335f1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:16Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:55Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:55 crc kubenswrapper[4853]: I1122 07:11:55.937054 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9c07602dc6c3b45809dd76a7079002e54e0bae24f7e3bc3470b463389303e74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:55Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:55 crc kubenswrapper[4853]: I1122 07:11:55.946035 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:55 crc kubenswrapper[4853]: I1122 07:11:55.946122 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:55 crc kubenswrapper[4853]: I1122 07:11:55.946146 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:55 crc kubenswrapper[4853]: I1122 07:11:55.946187 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:55 crc kubenswrapper[4853]: I1122 07:11:55.946211 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:55Z","lastTransitionTime":"2025-11-22T07:11:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:55 crc kubenswrapper[4853]: I1122 07:11:55.962694 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqtsz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"893f7e02-580a-4093-ab42-ea73ffffcfe6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://959a487fe36b2bf7615c5ed65bc8255b52332f6e2d973912a5e028ff35324f2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://979f6b48ff6c91deca2bb0784d1ec745c4c0d70abdfd6f519eb82058fec05c5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://34d7101a46b49cedde91b3502f1bc962179253a1bfbe5f21599ce89a91ab905d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28ab876013e6c4eb3a4806b0184f1d6615a478a34b6d8ee790d29bccd77f33df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccf377912acc39f2d30013ef3bd58a8268b927fd89da2306fcf00cb89c30893e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a08eb68f9d8b071226ac64744097fe0097ab78f86242872c4e0aacdd213adbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bfe75c62e217cccff97aad20cda18675013af6a3b1b10ef60227be8ea4965fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0bfe75c62e217cccff97aad20cda18675013af6a3b1b10ef60227be8ea4965fc\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-22T07:11:38Z\\\",\\\"message\\\":\\\"r for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:38Z is after 2025-08-24T17:21:41Z]\\\\nI1122 07:11:38.093678 6733 services_controller.go:443] Built service openshift-etcd/etcd LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.5.253\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:2379, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}, services.lbConfig{vips:[]string{\\\\\\\"10.217.5.253\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:9979, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI1122 \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:37Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed 
container=ovnkube-controller pod=ovnkube-node-pqtsz_openshift-ovn-kubernetes(893f7e02-580a-4093-ab42-ea73ffffcfe6)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://902e41148618e1099d651f2a4cbfa00ca1dd11533b086564f090dd498ecc1b95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7b9396399dd9787c1addcb4e2e7697d78fee817d56cafd408b08b2b7d0cc5f5\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7b9396399dd9787c1addcb4e2e7697d78fee817d56cafd408b08b2b7d0cc5f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pqtsz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:55Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:55 crc kubenswrapper[4853]: I1122 07:11:55.977703 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9nx9m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120cba0a-6e0b-40b3-8c15-46e7ff7c8641\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7b7b92862fc6cca812cb5c0aa9b320fbbd11f8b458d591b56cacab985a79edd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4p7q\\\",\\\"readOnly\\\":true,
\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9nx9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:55Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:55 crc kubenswrapper[4853]: I1122 07:11:55.998515 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"19e6b5d3-a090-47b9-bddc-dd2aaccd4ff4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96b61660fcd78f2f81eab2c1a0cb32018a5b32d49581167e33e7dca8ba5ddf2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a0d61ce2e58ce8d9abdd1fb9688722d8e25f8354553aef76b8cd4b3897135a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21be0b3346d0dd20c21a867b8fbd30e6291d09bd1745febbc2face7610f6d831\\\",\\\"image\\\":\\\"quay.io/crcon
t/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e4d5ab3d780b4ec1360dd8a07fc7de5d4daec3a5f8334010949d4fd60b5516b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6dbbb1c22bd116e5913c6bc774b4705f0e4342861ceae30dc0f53c30d6fa39f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T07:10:53Z\\\",\\\"message\\\":\\\" envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1122 07:10:53.494379 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1122 07:10:53.495005 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1122 07:10:53.495036 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1122 07:10:53.495110 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1122 07:10:53.495131 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1122 07:10:53.496216 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1763795444\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1763795444\\\\\\\\\\\\\\\" (2025-11-22 06:10:44 +0000 UTC to 2026-11-22 06:10:44 +0000 UTC (now=2025-11-22 07:10:53.49617265 +0000 UTC))\\\\\\\"\\\\nI1122 07:10:53.496261 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1122 07:10:53.496336 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1122 07:10:53.497036 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1122 07:10:53.497137 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1122 07:10:53.498721 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI1122 07:10:53.500291 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF1122 07:10:53.501305 1 
cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:32Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c1ee716027cb8b663ee9faec3faed873df5079418b9b46574cf3b3a74048eef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13ca1fba1325e0ae8556def3ba0c98aa0aff6e3b9ed2aab66a5c950d2bace44e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13ca1fba1325e0ae8556def3ba0c98aa0aff6e3b9ed2aab66a5c950d2bace44e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:55Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:56 crc kubenswrapper[4853]: I1122 07:11:56.016953 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mlpz8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c09200ba-013f-45e3-b581-8523557344b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3feee491b899c2b2f58330fe2e87ba5404deca4249c5d39a6d3aa08acd10ef29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvk87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mlpz8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:56Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:56 crc kubenswrapper[4853]: I1122 07:11:56.034971 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://347fc1ee4909acd862f23a9f25b4ac7ab7271ff03f0f726479d9b711820970c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:56Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:56 crc kubenswrapper[4853]: I1122 07:11:56.049941 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:56 crc kubenswrapper[4853]: I1122 07:11:56.050015 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:56 crc kubenswrapper[4853]: I1122 07:11:56.050035 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:56 crc kubenswrapper[4853]: I1122 07:11:56.050055 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:56 crc kubenswrapper[4853]: I1122 07:11:56.050069 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:56Z","lastTransitionTime":"2025-11-22T07:11:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:56 crc kubenswrapper[4853]: I1122 07:11:56.058037 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"476c875a-2b87-419a-8042-0ba059620fd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://babd614ecd140db7467335933695fcc122380ef2c6a3d4ae6e89b0ed29e87d69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zkhrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28cf28a4f0e05df5ad55eff2ab13e375fb1d5725f1d0e3b85c7c9cd785cf4453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zkhrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fflvd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:56Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:56 crc kubenswrapper[4853]: I1122 07:11:56.075064 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nhlw4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81cf3334-f910-4d46-be00-b3cd66ba8ed4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbfa536576b192a75c2a3eef9dfde8c7f7de0a8e6edae3870f45f1c44cc48163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ckff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86bcf649be20aa1cc1bce89663ea6b7b920ac2548fbf8a22c87636bdafc4b329\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ckff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:
13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nhlw4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:56Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:56 crc kubenswrapper[4853]: I1122 07:11:56.094717 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63551235-20f7-4ccc-a132-9f5302e01da5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0368eb3431a7b8e0225b3728f72d601ce3f9639f06175b96818fd798b33e4143\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fc661089ab1f7286ead68fd510a0848c9df8dd41a8ed349d03b43b72e1a369a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdc0676190f39e3a925b593e663ec1af1fa253b8177b7cb81fd49515e0c55c3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-sche
duler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a97a0d92e5fdd8906fa4f150c4b7d95fdd1f3b92ac45a46e0a6846fd4a15bb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a97a0d92e5fdd8906fa4f150c4b7d95fdd1f3b92ac45a46e0a6846fd4a15bb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:19Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:16Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:56Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:56 crc kubenswrapper[4853]: I1122 07:11:56.112682 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b27716f9e6c912f968c4a340199cdc2dde0262d86b4832f84cae8daefa831909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a2fb699e2e8a016fa383c1d4ed2d4f0d69c26b79fb2f0dd735a84de2fd5a23d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:56Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:56 crc kubenswrapper[4853]: I1122 07:11:56.152922 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:56 crc kubenswrapper[4853]: I1122 07:11:56.152983 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:56 crc kubenswrapper[4853]: I1122 07:11:56.152999 4853 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Nov 22 07:11:56 crc kubenswrapper[4853]: I1122 07:11:56.153028 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:56 crc kubenswrapper[4853]: I1122 07:11:56.153046 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:56Z","lastTransitionTime":"2025-11-22T07:11:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:56 crc kubenswrapper[4853]: I1122 07:11:56.256641 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:56 crc kubenswrapper[4853]: I1122 07:11:56.256677 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:56 crc kubenswrapper[4853]: I1122 07:11:56.256686 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:56 crc kubenswrapper[4853]: I1122 07:11:56.256704 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:56 crc kubenswrapper[4853]: I1122 07:11:56.256716 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:56Z","lastTransitionTime":"2025-11-22T07:11:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:56 crc kubenswrapper[4853]: I1122 07:11:56.359944 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:56 crc kubenswrapper[4853]: I1122 07:11:56.359980 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:56 crc kubenswrapper[4853]: I1122 07:11:56.359989 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:56 crc kubenswrapper[4853]: I1122 07:11:56.360009 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:56 crc kubenswrapper[4853]: I1122 07:11:56.360023 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:56Z","lastTransitionTime":"2025-11-22T07:11:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:56 crc kubenswrapper[4853]: I1122 07:11:56.462360 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:56 crc kubenswrapper[4853]: I1122 07:11:56.462399 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:56 crc kubenswrapper[4853]: I1122 07:11:56.462408 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:56 crc kubenswrapper[4853]: I1122 07:11:56.462424 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:56 crc kubenswrapper[4853]: I1122 07:11:56.462434 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:56Z","lastTransitionTime":"2025-11-22T07:11:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:56 crc kubenswrapper[4853]: I1122 07:11:56.565823 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:56 crc kubenswrapper[4853]: I1122 07:11:56.565881 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:56 crc kubenswrapper[4853]: I1122 07:11:56.565896 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:56 crc kubenswrapper[4853]: I1122 07:11:56.565913 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:56 crc kubenswrapper[4853]: I1122 07:11:56.565926 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:56Z","lastTransitionTime":"2025-11-22T07:11:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:56 crc kubenswrapper[4853]: I1122 07:11:56.669255 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:56 crc kubenswrapper[4853]: I1122 07:11:56.669303 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:56 crc kubenswrapper[4853]: I1122 07:11:56.669315 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:56 crc kubenswrapper[4853]: I1122 07:11:56.669331 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:56 crc kubenswrapper[4853]: I1122 07:11:56.669343 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:56Z","lastTransitionTime":"2025-11-22T07:11:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:56 crc kubenswrapper[4853]: I1122 07:11:56.747806 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 07:11:56 crc kubenswrapper[4853]: I1122 07:11:56.747905 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pd6gs" Nov 22 07:11:56 crc kubenswrapper[4853]: I1122 07:11:56.747806 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:11:56 crc kubenswrapper[4853]: E1122 07:11:56.748060 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 07:11:56 crc kubenswrapper[4853]: I1122 07:11:56.748271 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 07:11:56 crc kubenswrapper[4853]: E1122 07:11:56.748422 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pd6gs" podUID="9cc2bf97-eb39-4b0c-abda-99b49bb530fd" Nov 22 07:11:56 crc kubenswrapper[4853]: E1122 07:11:56.748684 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 07:11:56 crc kubenswrapper[4853]: E1122 07:11:56.748797 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 07:11:56 crc kubenswrapper[4853]: I1122 07:11:56.772920 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:56 crc kubenswrapper[4853]: I1122 07:11:56.772970 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:56 crc kubenswrapper[4853]: I1122 07:11:56.772980 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:56 crc kubenswrapper[4853]: I1122 07:11:56.772997 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:56 crc kubenswrapper[4853]: I1122 07:11:56.773009 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:56Z","lastTransitionTime":"2025-11-22T07:11:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:56 crc kubenswrapper[4853]: I1122 07:11:56.875555 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:56 crc kubenswrapper[4853]: I1122 07:11:56.875596 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:56 crc kubenswrapper[4853]: I1122 07:11:56.875605 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:56 crc kubenswrapper[4853]: I1122 07:11:56.875623 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:56 crc kubenswrapper[4853]: I1122 07:11:56.875636 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:56Z","lastTransitionTime":"2025-11-22T07:11:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:56 crc kubenswrapper[4853]: I1122 07:11:56.979208 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:56 crc kubenswrapper[4853]: I1122 07:11:56.979251 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:56 crc kubenswrapper[4853]: I1122 07:11:56.979261 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:56 crc kubenswrapper[4853]: I1122 07:11:56.979284 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:56 crc kubenswrapper[4853]: I1122 07:11:56.979310 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:56Z","lastTransitionTime":"2025-11-22T07:11:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:57 crc kubenswrapper[4853]: I1122 07:11:57.082729 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:57 crc kubenswrapper[4853]: I1122 07:11:57.082852 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:57 crc kubenswrapper[4853]: I1122 07:11:57.082874 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:57 crc kubenswrapper[4853]: I1122 07:11:57.082902 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:57 crc kubenswrapper[4853]: I1122 07:11:57.082925 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:57Z","lastTransitionTime":"2025-11-22T07:11:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:57 crc kubenswrapper[4853]: I1122 07:11:57.185982 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:57 crc kubenswrapper[4853]: I1122 07:11:57.186048 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:57 crc kubenswrapper[4853]: I1122 07:11:57.186062 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:57 crc kubenswrapper[4853]: I1122 07:11:57.186083 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:57 crc kubenswrapper[4853]: I1122 07:11:57.186098 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:57Z","lastTransitionTime":"2025-11-22T07:11:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:57 crc kubenswrapper[4853]: I1122 07:11:57.288948 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:57 crc kubenswrapper[4853]: I1122 07:11:57.289024 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:57 crc kubenswrapper[4853]: I1122 07:11:57.289039 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:57 crc kubenswrapper[4853]: I1122 07:11:57.289059 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:57 crc kubenswrapper[4853]: I1122 07:11:57.289072 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:57Z","lastTransitionTime":"2025-11-22T07:11:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:57 crc kubenswrapper[4853]: I1122 07:11:57.391515 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:57 crc kubenswrapper[4853]: I1122 07:11:57.391559 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:57 crc kubenswrapper[4853]: I1122 07:11:57.391569 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:57 crc kubenswrapper[4853]: I1122 07:11:57.391588 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:57 crc kubenswrapper[4853]: I1122 07:11:57.391600 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:57Z","lastTransitionTime":"2025-11-22T07:11:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:57 crc kubenswrapper[4853]: I1122 07:11:57.494652 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:57 crc kubenswrapper[4853]: I1122 07:11:57.494718 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:57 crc kubenswrapper[4853]: I1122 07:11:57.494733 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:57 crc kubenswrapper[4853]: I1122 07:11:57.494781 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:57 crc kubenswrapper[4853]: I1122 07:11:57.494798 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:57Z","lastTransitionTime":"2025-11-22T07:11:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:57 crc kubenswrapper[4853]: I1122 07:11:57.597529 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:57 crc kubenswrapper[4853]: I1122 07:11:57.597570 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:57 crc kubenswrapper[4853]: I1122 07:11:57.597586 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:57 crc kubenswrapper[4853]: I1122 07:11:57.597605 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:57 crc kubenswrapper[4853]: I1122 07:11:57.597616 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:57Z","lastTransitionTime":"2025-11-22T07:11:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:57 crc kubenswrapper[4853]: I1122 07:11:57.700383 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:57 crc kubenswrapper[4853]: I1122 07:11:57.700426 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:57 crc kubenswrapper[4853]: I1122 07:11:57.700440 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:57 crc kubenswrapper[4853]: I1122 07:11:57.700459 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:57 crc kubenswrapper[4853]: I1122 07:11:57.700471 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:57Z","lastTransitionTime":"2025-11-22T07:11:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:57 crc kubenswrapper[4853]: I1122 07:11:57.804171 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:57 crc kubenswrapper[4853]: I1122 07:11:57.804234 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:57 crc kubenswrapper[4853]: I1122 07:11:57.804253 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:57 crc kubenswrapper[4853]: I1122 07:11:57.804275 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:57 crc kubenswrapper[4853]: I1122 07:11:57.804289 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:57Z","lastTransitionTime":"2025-11-22T07:11:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:57 crc kubenswrapper[4853]: I1122 07:11:57.837159 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-rvgxj_dbbe3472-17cc-48dd-8e46-393b00149429/kube-multus/0.log" Nov 22 07:11:57 crc kubenswrapper[4853]: I1122 07:11:57.837243 4853 generic.go:334] "Generic (PLEG): container finished" podID="dbbe3472-17cc-48dd-8e46-393b00149429" containerID="5acc4b664aaf1d5b0471beb25334e65152bc3b9907f911f90e4e7899aaa1ce8d" exitCode=1 Nov 22 07:11:57 crc kubenswrapper[4853]: I1122 07:11:57.837315 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-rvgxj" event={"ID":"dbbe3472-17cc-48dd-8e46-393b00149429","Type":"ContainerDied","Data":"5acc4b664aaf1d5b0471beb25334e65152bc3b9907f911f90e4e7899aaa1ce8d"} Nov 22 07:11:57 crc kubenswrapper[4853]: I1122 07:11:57.838055 4853 scope.go:117] "RemoveContainer" containerID="5acc4b664aaf1d5b0471beb25334e65152bc3b9907f911f90e4e7899aaa1ce8d" Nov 22 07:11:57 crc kubenswrapper[4853]: I1122 07:11:57.861044 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63551235-20f7-4ccc-a132-9f5302e01da5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0368eb3431a7b8e0225b3728f72d601ce3f9639f06175b96818fd798b33e4143\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fc661089ab1f7286ead68fd510a0848c9df8dd41a8ed349d03b43b72e1a369a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-
pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdc0676190f39e3a925b593e663ec1af1fa253b8177b7cb81fd49515e0c55c3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a97a0d92e5fdd8906fa4f150c4b7d95fdd1f3b92ac45a46e0a6846fd4a15bb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a97a0d92e5fdd8906fa4f150c4b7d95fdd1f3b92ac45a46e0a6846fd4a15bb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:19Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:16Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:57Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:57 crc kubenswrapper[4853]: I1122 07:11:57.876296 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b27716f9e6c912f968c4a340199cdc2dde0262d86b4832f84cae8daefa831909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a2fb699e2e8a016fa383c1d4ed2d4f0d69c26b79fb2f0dd735a84de2fd5a23d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:57Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:57 crc kubenswrapper[4853]: I1122 07:11:57.892819 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://347fc1ee4909acd862f23a9f25b4ac7ab7271ff03f0f726479d9b711820970c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:57Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:57 crc kubenswrapper[4853]: I1122 07:11:57.907636 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:57 crc kubenswrapper[4853]: I1122 07:11:57.907692 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:57 crc kubenswrapper[4853]: I1122 07:11:57.907704 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:57 crc kubenswrapper[4853]: I1122 07:11:57.907722 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:57 crc kubenswrapper[4853]: I1122 07:11:57.907735 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:57Z","lastTransitionTime":"2025-11-22T07:11:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
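
The recurring "Node became not ready" records in this stretch all carry the same condition: NetworkReady=false with reason NetworkPluginNotReady, because no CNI configuration file exists yet in /etc/kubernetes/cni/net.d/. A minimal Go sketch of the gate that message implies (illustrative only, not kubelet's actual implementation; the directory comes from the message itself and the accepted extension set is an assumption):

// cni_ready_check.go — illustrative sketch, not kubelet's actual code.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// Directory taken verbatim from the NetworkPluginNotReady message above.
const confDir = "/etc/kubernetes/cni/net.d"

func main() {
	entries, err := os.ReadDir(confDir)
	if err != nil {
		fmt.Println("NetworkReady=false:", err)
		os.Exit(1)
	}
	// CNI config loaders accept .conf, .conflist and .json files (assumed
	// extension set); an empty directory leaves the runtime network not ready.
	for _, e := range entries {
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			fmt.Println("NetworkReady=true: found", e.Name())
			return
		}
	}
	fmt.Println("NetworkReady=false: no CNI configuration file in", confDir)
	os.Exit(1)
}

Once ovn-kubernetes writes its config into that watched directory the same scan succeeds, which is exactly the transition the node is waiting on here.
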
Has your network provider started?"} Nov 22 07:11:57 crc kubenswrapper[4853]: I1122 07:11:57.909793 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"476c875a-2b87-419a-8042-0ba059620fd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://babd614ecd140db7467335933695fcc122380ef2c6a3d4ae6e89b0ed29e87d69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zkhrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28cf28a4f0e05df5ad55eff2ab13e375fb1d5725f1d0e3b85c7c9cd785cf4453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zkhrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fflvd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:57Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:57 crc kubenswrapper[4853]: I1122 07:11:57.923643 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nhlw4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81cf3334-f910-4d46-be00-b3cd66ba8ed4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbfa536576b192a75c2a3eef9dfde8c7f7de0a8e6edae3870f45f1c44cc48163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ckff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86bcf649be20aa1cc1bce89663ea6b7b920ac2548fbf8a22c87636bdafc4b329\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ckff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:
13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nhlw4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:57Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:57 crc kubenswrapper[4853]: I1122 07:11:57.937632 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9003355-0dc6-42ad-950d-a7884e333f4c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f0019c0473b7ccc974dc1659a7ae0565b39d69af17094c420625387ecc9d7f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94704fd95e75549ea29613f00da78d4517cc64896cf1274c9c6f784a61f4ad4a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T07:10:53Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI1122 07:10:26.514729 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
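
The cluster-policy-controller termination message above states its leader-election budget outright: 4 retries, 30s of clock skew, a 78s kube-apiserver downtime tolerance, a 2m43s worst non-graceful acquisition, and the {26s} graceful figure that follows. A sketch reproducing those numbers, assuming library-go-style inputs of LeaseDuration=137s, RenewDeadline=107s, RetryPeriod=26s (both the inputs and the formulas are inferred from the logged figures, not taken from the source):

// leader_election_budget.go — derives the figures in the message above.
package main

import (
	"fmt"
	"time"
)

func main() {
	// Assumed inputs, inferred from the logged sentence:
	lease := 137 * time.Second // LeaseDuration
	renew := 107 * time.Second // RenewDeadline
	retry := 26 * time.Second  // RetryPeriod

	retries := int(renew / retry)                 // 4 retries
	skew := lease - renew                         // 30s of allowed clock skew
	tolerance := time.Duration(retries-1) * retry // 78s apiserver downtime tolerance
	nonGraceful := lease + retry                  // 2m43s worst non-graceful acquisition
	graceful := retry                             // 26s worst graceful acquisition

	fmt.Printf("retries=%d skew=%v tolerance=%v nonGraceful=%v graceful=%v\n",
		retries, skew, tolerance, nonGraceful, graceful)
}
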
Worst graceful lease acquisition is {26s}.\\\\nI1122 07:10:26.525241 1 observer_polling.go:159] Starting file observer\\\\nI1122 07:10:26.883412 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI1122 07:10:26.897202 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI1122 07:10:50.420853 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nI1122 07:10:51.525898 1 observer_polling.go:162] Shutting down file observer\\\\nF1122 07:10:53.463148 1 cmd.go:179] failed checking apiserver connectivity: client rate limiter Wait returned an error: context canceled\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:21Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9736c38776f22909b8e8a832683d1b881f020257f7183397286e0dd23c973a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30f542bc05dc23bc8dff5029894d08a6fea8333de31a66432814b7bbc412c4fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5d8d77818159ca98019aac809db4b1b2460723bc56b9b6728eec11faeefd026\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager
-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:57Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:57 crc kubenswrapper[4853]: I1122 07:11:57.948738 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4cc3002-cfaf-47cf-b539-3815924af5c3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://666c3902a5cb3b755fe3b5861568b744fb3ffbd28f72d7fd22d18b387486de03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6143a86b720cb8a29f4ac3d68bf2693f92fb50f528cdd7269e022115795ef14b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":
{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6143a86b720cb8a29f4ac3d68bf2693f92fb50f528cdd7269e022115795ef14b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:57Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:57 crc kubenswrapper[4853]: I1122 07:11:57.964022 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
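
The check-endpoints entry above shows the shape these patches take for a container lost across a kubelet restart: current state waiting/ContainerCreating, lastState terminated with exitCode 137 and reason ContainerStatusUnknown (the message completes just below). A small sketch of decoding that one-of state/lastState layout, using an abbreviated copy of the entry (the struct is a trimmed illustration of v1.ContainerStatus, not the full API type):

// container_status_decode.go — trimmed illustration of reading these patches.
package main

import (
	"encoding/json"
	"fmt"
)

// Cut-down view of v1.ContainerStatus: exactly one key of State is set
// (waiting, running or terminated), and LastState holds the previous one.
type containerStatus struct {
	Name         string                     `json:"name"`
	Ready        bool                       `json:"ready"`
	RestartCount int32                      `json:"restartCount"`
	State        map[string]json.RawMessage `json:"state"`
	LastState    map[string]json.RawMessage `json:"lastState"`
}

func main() {
	// Abbreviated from the check-endpoints entry in the patch above.
	raw := `{"name":"check-endpoints","ready":false,"restartCount":3,
	         "state":{"waiting":{"reason":"ContainerCreating"}},
	         "lastState":{"terminated":{"exitCode":137,"reason":"ContainerStatusUnknown"}}}`

	var cs containerStatus
	if err := json.Unmarshal([]byte(raw), &cs); err != nil {
		panic(err)
	}
	for phase := range cs.State {
		fmt.Printf("%s is %s (ready=%v, restarts=%d)\n", cs.Name, phase, cs.Ready, cs.RestartCount)
	}
	for phase, detail := range cs.LastState {
		fmt.Printf("previously %s: %s\n", phase, detail)
	}
}
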
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:57Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:57 crc kubenswrapper[4853]: I1122 07:11:57.980179 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ckn94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f60b37f-d6f5-4145-a3e7-cfe92fca6d77\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbf630e17f662af8880d1f34d5073a4f64e723987205f7bbb473a73808c7e935\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3adc38c2534c8026e260a63060426102130ca1bcbcb4e3741aaf4e87170a4c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"start
ed\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3adc38c2534c8026e260a63060426102130ca1bcbcb4e3741aaf4e87170a4c7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db0b80c2471d5179ede5c4055fcebaabe08892f1534218952c2a19ab599be022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db0b80c2471d5179ede5c4055fcebaabe08892f1534218952c2a19ab599be022\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://157ea229b1effe5885acafb7b48db5695864ff001b1293c053aefb494f544f7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://157ea229b1effe5885acafb7b48db5695864ff001b1293c053aefb494f544f7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"containerID\\\":\\\"cri-o://203cba48e280628c33b464c336957085420ca2d345b1acfb5c8b8968ed2317e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://203cba48e280628c33b464c336957085420ca2d345b1acfb5c8b8968ed2317e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a10cb7826668319845471d465df1784a2cfba12ec3892e986bcf316c0c5e3c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a10cb7826668319845471d465df1784a2cfba12ec3892e986bcf316c0c5e3c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc93eb03f5f0cada6bf132a9de4446bc1b8ecafee3769bf18feeecea9d71478f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc93eb03f5f0cada6bf132a9de4446bc1b8ecafee3769bf18feeecea9d71478f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"
system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ckn94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:57Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:57 crc kubenswrapper[4853]: I1122 07:11:57.991276 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-pd6gs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9cc2bf97-eb39-4b0c-abda-99b49bb530fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bzn4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bzn4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:15Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-pd6gs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:57Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:58 crc kubenswrapper[4853]: I1122 07:11:58.009094 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8dcea4b-e887-437a-972d-e6e10d41f725\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://753693005f8490e9928780fd4f230a1c8aba453e1ccbca71e70299799bfa46ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64d65e4d54cc8971b6f6349f7d32243b7faa5bb5974b22ed904a36fa34bc69fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0ba60480f038947ca4a2b9f963d38cb004714735d411218eaf6231614c4e21f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c83df4b798499177cfbd1372a939649b6cebb7
1b50b695af5ba7815cf450669\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed9e89cbcdfc1ac8430f617ee31a367c03be0a407da8af967930fa821e6a12a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://742d02f24a7096b76f82453ab15e531d4a378b4bbf0bb36eb1a1cb4ae1873165\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://742d02f24a7096b76f82453ab15e531d4a378b4bbf0bb36eb1a1cb4ae1873165\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2253bab71d535b61f7489679566abc1acf210e0d5f69773fedbb02befab17c2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2253bab71d535b61f7489679566abc1acf210e0d5f69773fedbb02befab17c2d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:21Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8fa71a988b649937f137ffb7533456570bf4fcd199134424fc7bff2b62335f1e\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fa71a988b649937f137ffb7533456570bf4fcd199134424fc7bff2b62335f1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:16Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:58Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:58 crc kubenswrapper[4853]: I1122 07:11:58.010795 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:58 crc kubenswrapper[4853]: I1122 07:11:58.010850 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:58 crc kubenswrapper[4853]: I1122 07:11:58.010865 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:58 crc kubenswrapper[4853]: I1122 07:11:58.010886 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:58 crc kubenswrapper[4853]: I1122 07:11:58.010901 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:58Z","lastTransitionTime":"2025-11-22T07:11:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
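
Every status patch in this stretch fails on the same webhook error: the certificate served at https://127.0.0.1:9743 expired on 2025-08-24T17:21:41Z, months before the node's current clock. A sketch for confirming that from the host, dialing with verification disabled purely so the handshake completes and the expired certificate can be read rather than rejected (endpoint taken from the log; run on the node itself):

// webhook_cert_probe.go — reads the validity window of the certificate
// served on the failing webhook endpoint.
package main

import (
	"crypto/tls"
	"fmt"
	"time"
)

func main() {
	// InsecureSkipVerify here is deliberate: the point is to inspect the
	// certificate that normal verification is (correctly) rejecting above.
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{InsecureSkipVerify: true})
	if err != nil {
		fmt.Println("dial:", err)
		return
	}
	defer conn.Close()

	certs := conn.ConnectionState().PeerCertificates
	if len(certs) == 0 {
		fmt.Println("no certificate presented")
		return
	}
	now := time.Now()
	fmt.Printf("NotBefore=%s NotAfter=%s now=%s expired=%v\n",
		certs[0].NotBefore.Format(time.RFC3339),
		certs[0].NotAfter.Format(time.RFC3339),
		now.Format(time.RFC3339),
		now.After(certs[0].NotAfter))
}

An openssl s_client -connect 127.0.0.1:9743 probe from the host should show the same NotAfter of 2025-08-24T17:21:41Z that the kubelet keeps reporting.
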
Has your network provider started?"} Nov 22 07:11:58 crc kubenswrapper[4853]: I1122 07:11:58.026296 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9c07602dc6c3b45809dd76a7079002e54e0bae24f7e3bc3470b463389303e74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:58Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:58 crc kubenswrapper[4853]: I1122 07:11:58.043688 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:58Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:58 crc kubenswrapper[4853]: I1122 07:11:58.056568 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
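
The patches on either side of this point all open with a $setElementOrder/conditions directive ahead of the changed conditions themselves; that is the strategic-merge-patch form the kubelet's status path builds with the apimachinery helper. A sketch of generating such a patch (the two-condition pod is a made-up minimal example; whether $setElementOrder appears depends on the merge-list diff, as it does in the logged five-condition patches):

// status_patch_shape.go — sketch of the strategic merge patch shape above.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/strategicpatch"
)

func main() {
	old := corev1.Pod{Status: corev1.PodStatus{Conditions: []corev1.PodCondition{
		{Type: corev1.PodReady, Status: corev1.ConditionTrue},
		{Type: corev1.ContainersReady, Status: corev1.ConditionTrue},
	}}}
	mod := old
	mod.Status.Conditions = []corev1.PodCondition{
		{Type: corev1.PodReady, Status: corev1.ConditionFalse,
			Reason: "ContainersNotReady", LastTransitionTime: metav1.Now()},
		{Type: corev1.ContainersReady, Status: corev1.ConditionFalse,
			Reason: "ContainersNotReady", LastTransitionTime: metav1.Now()},
	}

	oldJSON, _ := json.Marshal(old)
	modJSON, _ := json.Marshal(mod)
	// The conditions list merges on its "type" key, so the generated patch
	// can carry a $setElementOrder/conditions directive like the ones above.
	patch, err := strategicpatch.CreateTwoWayMergePatch(oldJSON, modJSON, corev1.Pod{})
	if err != nil {
		panic(err)
	}
	fmt.Println(string(patch))
}
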
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:58Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:58 crc kubenswrapper[4853]: I1122 07:11:58.072163 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rvgxj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbbe3472-17cc-48dd-8e46-393b00149429\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5acc4b664aaf1d5b0471beb25334e65152bc3b9907f911f90e4e7899aaa1ce8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5acc4b664aaf1d5b0471beb25334e65152bc3b9907f911f90e4e7899aaa1ce8d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-22T07:11:57Z\\\",\\\"message\\\":\\\"2025-11-22T07:11:10+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_b0993be5-55b1-4411-9f2c-d55ea5267c12\\\\n2025-11-22T07:11:10+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_b0993be5-55b1-4411-9f2c-d55ea5267c12 to /host/opt/cni/bin/\\\\n2025-11-22T07:11:12Z [verbose] multus-daemon 
started\\\\n2025-11-22T07:11:12Z [verbose] Readiness Indicator file check\\\\n2025-11-22T07:11:57Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ct6dm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rvgxj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:58Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:58 crc kubenswrapper[4853]: I1122 07:11:58.086369 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"19e6b5d3-a090-47b9-bddc-dd2aaccd4ff4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96b61660fcd78f2f81eab2c1a0cb32018a5b32d49581167e33e7dca8ba5ddf2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a0d61ce2e58ce8d9abdd1fb9688722d8e25f8354553aef76b8cd4b3897135a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21be0b3346d0dd20c21a867b8fbd30e6291d09bd1745febbc2face7610f6d831\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e4d5ab3d780b4ec1360dd8a07fc7de5d4daec3a5f8334010949d4fd60b5516b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6dbbb1c22bd116e5913c6bc774b4705f0e4342861ceae30dc0f53c30d6fa39f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T07:10:53Z\\\",\\\"message\\\":\\\" envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1122 07:10:53.494379 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1122 07:10:53.495005 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1122 07:10:53.495036 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1122 07:10:53.495110 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1122 07:10:53.495131 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1122 07:10:53.496216 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1763795444\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1763795444\\\\\\\\\\\\\\\" (2025-11-22 06:10:44 +0000 UTC to 2026-11-22 06:10:44 +0000 UTC (now=2025-11-22 07:10:53.49617265 +0000 UTC))\\\\\\\"\\\\nI1122 07:10:53.496261 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1122 07:10:53.496336 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1122 07:10:53.497036 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1122 07:10:53.497137 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1122 07:10:53.498721 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI1122 07:10:53.500291 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF1122 07:10:53.501305 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:32Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c1ee716027cb8b663ee9faec3faed873df5079418b9b46574cf3b3a74048eef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13ca1fba1325e0ae8556def3ba0c98aa0aff6e3b9ed2aab66a5c950d2bace44e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13ca1fba1325e0ae8556def3ba0c98aa0aff6e3b9ed2aab66a5c950d2bace44e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:58Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:58 crc kubenswrapper[4853]: I1122 07:11:58.098484 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mlpz8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c09200ba-013f-45e3-b581-8523557344b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3feee491b899c2b2f58330fe2e87ba5404deca4249c5d39a6d3aa08acd10ef29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvk87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mlpz8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:58Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:58 crc kubenswrapper[4853]: I1122 07:11:58.113955 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:58 crc kubenswrapper[4853]: I1122 07:11:58.114027 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:58 crc kubenswrapper[4853]: I1122 07:11:58.114041 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:58 crc kubenswrapper[4853]: I1122 07:11:58.114065 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:58 crc kubenswrapper[4853]: I1122 07:11:58.114079 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:58Z","lastTransitionTime":"2025-11-22T07:11:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:58 crc kubenswrapper[4853]: I1122 07:11:58.121908 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqtsz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"893f7e02-580a-4093-ab42-ea73ffffcfe6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://959a487fe36b2bf7615c5ed65bc8255b52332f6e2d973912a5e028ff35324f2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://979f6b48ff6c91deca2bb0784d1ec745c4c0d70abdfd6f519eb82058fec05c5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34d7101a46b49cedde91b3502f1bc962179253a1bfbe5f21599ce89a91ab905d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28ab876013e6c4eb3a4806b0184f1d6615a478a34b6d8ee790d29bccd77f33df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccf377912acc39f2d30013ef3bd58a8268b927fd89da2306fcf00cb89c30893e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a08eb68f9d8b071226ac64744097fe0097ab78f86242872c4e0aacdd213adbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d7732574532
65a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bfe75c62e217cccff97aad20cda18675013af6a3b1b10ef60227be8ea4965fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0bfe75c62e217cccff97aad20cda18675013af6a3b1b10ef60227be8ea4965fc\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-22T07:11:38Z\\\",\\\"message\\\":\\\"r for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:38Z is after 2025-08-24T17:21:41Z]\\\\nI1122 07:11:38.093678 6733 services_controller.go:443] Built service openshift-etcd/etcd LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.5.253\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:2379, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}, services.lbConfig{vips:[]string{\\\\\\\"10.217.5.253\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:9979, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI1122 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:37Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-pqtsz_openshift-ovn-kubernetes(893f7e02-580a-4093-ab42-ea73ffffcfe6)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://902e41148618e1099d651f2a4cbfa00ca1dd11533b086564f090dd498ecc1b95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7b9396399dd9787c1addcb4e2e7697d78fee817d56cafd408b08b2b7d0cc5f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7b9396399dd9787c1addcb4e2e7697d78fee817d56cafd408b08b2b7d0cc5f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pqtsz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:58Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:58 crc kubenswrapper[4853]: I1122 07:11:58.134805 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9nx9m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"120cba0a-6e0b-40b3-8c15-46e7ff7c8641\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7b7b92862fc6cca812cb5c0aa9b320fbbd11f8b458d591b56cacab985a79edd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4p7q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9nx9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:58Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:58 crc kubenswrapper[4853]: I1122 07:11:58.217198 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:58 crc kubenswrapper[4853]: I1122 07:11:58.217259 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:58 crc kubenswrapper[4853]: I1122 07:11:58.217270 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:58 crc kubenswrapper[4853]: I1122 07:11:58.217287 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:58 crc kubenswrapper[4853]: I1122 07:11:58.217298 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:58Z","lastTransitionTime":"2025-11-22T07:11:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:58 crc kubenswrapper[4853]: I1122 07:11:58.319741 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:58 crc kubenswrapper[4853]: I1122 07:11:58.319803 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:58 crc kubenswrapper[4853]: I1122 07:11:58.319817 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:58 crc kubenswrapper[4853]: I1122 07:11:58.319833 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:58 crc kubenswrapper[4853]: I1122 07:11:58.319842 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:58Z","lastTransitionTime":"2025-11-22T07:11:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:58 crc kubenswrapper[4853]: I1122 07:11:58.422294 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:58 crc kubenswrapper[4853]: I1122 07:11:58.422351 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:58 crc kubenswrapper[4853]: I1122 07:11:58.422369 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:58 crc kubenswrapper[4853]: I1122 07:11:58.422396 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:58 crc kubenswrapper[4853]: I1122 07:11:58.422410 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:58Z","lastTransitionTime":"2025-11-22T07:11:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:58 crc kubenswrapper[4853]: I1122 07:11:58.525096 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:58 crc kubenswrapper[4853]: I1122 07:11:58.525200 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:58 crc kubenswrapper[4853]: I1122 07:11:58.525220 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:58 crc kubenswrapper[4853]: I1122 07:11:58.525254 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:58 crc kubenswrapper[4853]: I1122 07:11:58.525272 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:58Z","lastTransitionTime":"2025-11-22T07:11:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:58 crc kubenswrapper[4853]: I1122 07:11:58.628692 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:58 crc kubenswrapper[4853]: I1122 07:11:58.628731 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:58 crc kubenswrapper[4853]: I1122 07:11:58.628743 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:58 crc kubenswrapper[4853]: I1122 07:11:58.628782 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:58 crc kubenswrapper[4853]: I1122 07:11:58.628793 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:58Z","lastTransitionTime":"2025-11-22T07:11:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:58 crc kubenswrapper[4853]: I1122 07:11:58.768991 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 07:11:58 crc kubenswrapper[4853]: I1122 07:11:58.769087 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:11:58 crc kubenswrapper[4853]: I1122 07:11:58.769019 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 07:11:58 crc kubenswrapper[4853]: I1122 07:11:58.769016 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pd6gs" Nov 22 07:11:58 crc kubenswrapper[4853]: E1122 07:11:58.769179 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 07:11:58 crc kubenswrapper[4853]: E1122 07:11:58.769324 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 07:11:58 crc kubenswrapper[4853]: E1122 07:11:58.769495 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 07:11:58 crc kubenswrapper[4853]: E1122 07:11:58.769591 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pd6gs" podUID="9cc2bf97-eb39-4b0c-abda-99b49bb530fd" Nov 22 07:11:58 crc kubenswrapper[4853]: I1122 07:11:58.770942 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:58 crc kubenswrapper[4853]: I1122 07:11:58.770986 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:58 crc kubenswrapper[4853]: I1122 07:11:58.770999 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:58 crc kubenswrapper[4853]: I1122 07:11:58.771095 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:58 crc kubenswrapper[4853]: I1122 07:11:58.771107 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:58Z","lastTransitionTime":"2025-11-22T07:11:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:58 crc kubenswrapper[4853]: I1122 07:11:58.829221 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:11:58 crc kubenswrapper[4853]: E1122 07:11:58.829515 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:02.829492854 +0000 UTC m=+181.670115480 (durationBeforeRetry 1m4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:11:58 crc kubenswrapper[4853]: I1122 07:11:58.843948 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-rvgxj_dbbe3472-17cc-48dd-8e46-393b00149429/kube-multus/0.log" Nov 22 07:11:58 crc kubenswrapper[4853]: I1122 07:11:58.844049 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-rvgxj" event={"ID":"dbbe3472-17cc-48dd-8e46-393b00149429","Type":"ContainerStarted","Data":"338e7cc28de696b2bd165b4b7d21bb9029ee9f270cf1d43c65ea3934262f0d7d"} Nov 22 07:11:58 crc kubenswrapper[4853]: I1122 07:11:58.860289 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9003355-0dc6-42ad-950d-a7884e333f4c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f0019c0473b7ccc974dc1659a7ae0565b39d69af17094c420625387ecc9d7f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94704fd95e75549ea29613f00da78d4517cc64896cf1274c9c6f784a61f4ad4a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T07:10:53Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI1122 07:10:26.514729 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI1122 07:10:26.525241 1 observer_polling.go:159] Starting file observer\\\\nI1122 07:10:26.883412 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI1122 07:10:26.897202 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI1122 07:10:50.420853 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nI1122 07:10:51.525898 1 observer_polling.go:162] Shutting down file observer\\\\nF1122 07:10:53.463148 1 cmd.go:179] failed checking apiserver connectivity: client rate limiter Wait returned an error: context canceled\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:21Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9736c38776f22909b8e8a832683d1b881f020257f7183397286e0dd23c973a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30f542bc05dc23bc8dff5029894d08a6fea8333de31a66432814b7bbc412c4fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5d8d77818159ca98019aac809db4b1b2460723bc56b9b6728eec11faeefd026\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager
-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:58Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:58 crc kubenswrapper[4853]: I1122 07:11:58.872313 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4cc3002-cfaf-47cf-b539-3815924af5c3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://666c3902a5cb3b755fe3b5861568b744fb3ffbd28f72d7fd22d18b387486de03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6143a86b720cb8a29f4ac3d68bf2693f92fb50f528cdd7269e022115795ef14b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":
{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6143a86b720cb8a29f4ac3d68bf2693f92fb50f528cdd7269e022115795ef14b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:58Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:58 crc kubenswrapper[4853]: I1122 07:11:58.874272 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:58 crc kubenswrapper[4853]: I1122 07:11:58.874315 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:58 crc kubenswrapper[4853]: I1122 07:11:58.874330 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:58 crc kubenswrapper[4853]: I1122 07:11:58.874353 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:58 crc kubenswrapper[4853]: I1122 07:11:58.874365 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:58Z","lastTransitionTime":"2025-11-22T07:11:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:58 crc kubenswrapper[4853]: I1122 07:11:58.888103 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:58Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:58 crc kubenswrapper[4853]: I1122 07:11:58.914639 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8dcea4b-e887-437a-972d-e6e10d41f725\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://753693005f8490e9928780fd4f230a1c8aba453e1ccbca71e70299799bfa46ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64d65e4d54cc8971b6f6349f7d32243b7faa5bb5974b22ed904a36fa34bc69fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0ba60480f038947ca4a2b9f963d38cb004714735d411218eaf6231614c4e21f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c83df4b798499177cfbd1372a939649b6cebb7
1b50b695af5ba7815cf450669\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed9e89cbcdfc1ac8430f617ee31a367c03be0a407da8af967930fa821e6a12a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://742d02f24a7096b76f82453ab15e531d4a378b4bbf0bb36eb1a1cb4ae1873165\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://742d02f24a7096b76f82453ab15e531d4a378b4bbf0bb36eb1a1cb4ae1873165\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2253bab71d535b61f7489679566abc1acf210e0d5f69773fedbb02befab17c2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2253bab71d535b61f7489679566abc1acf210e0d5f69773fedbb02befab17c2d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:21Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8fa71a988b649937f137ffb7533456570bf4fcd199134424fc7bff2b62335f1e\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fa71a988b649937f137ffb7533456570bf4fcd199134424fc7bff2b62335f1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:16Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:58Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:58 crc kubenswrapper[4853]: I1122 07:11:58.930644 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 07:11:58 crc kubenswrapper[4853]: I1122 07:11:58.930715 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:11:58 crc kubenswrapper[4853]: I1122 07:11:58.930769 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:11:58 crc kubenswrapper[4853]: I1122 07:11:58.930804 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 07:11:58 crc kubenswrapper[4853]: E1122 07:11:58.930958 4853 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 22 07:11:58 crc kubenswrapper[4853]: E1122 
07:11:58.931022 4853 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 22 07:11:58 crc kubenswrapper[4853]: E1122 07:11:58.931037 4853 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 22 07:11:58 crc kubenswrapper[4853]: E1122 07:11:58.931071 4853 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 22 07:11:58 crc kubenswrapper[4853]: E1122 07:11:58.931114 4853 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 22 07:11:58 crc kubenswrapper[4853]: E1122 07:11:58.931128 4853 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 22 07:11:58 crc kubenswrapper[4853]: E1122 07:11:58.931161 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-22 07:13:02.931130179 +0000 UTC m=+181.771752825 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 22 07:11:58 crc kubenswrapper[4853]: E1122 07:11:58.931051 4853 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 22 07:11:58 crc kubenswrapper[4853]: E1122 07:11:58.931197 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-22 07:13:02.9311712 +0000 UTC m=+181.771793826 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 22 07:11:58 crc kubenswrapper[4853]: E1122 07:11:58.930974 4853 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 22 07:11:58 crc kubenswrapper[4853]: E1122 07:11:58.931240 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-22 07:13:02.931220901 +0000 UTC m=+181.771843527 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 22 07:11:58 crc kubenswrapper[4853]: E1122 07:11:58.931263 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-22 07:13:02.931253432 +0000 UTC m=+181.771876298 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 22 07:11:58 crc kubenswrapper[4853]: I1122 07:11:58.933155 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9c07602dc6c3b45809dd76a7079002e54e0bae24f7e3bc3470b463389303e74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:58Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:58 crc kubenswrapper[4853]: I1122 07:11:58.948570 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:58Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:58 crc kubenswrapper[4853]: I1122 07:11:58.965178 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:58Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:58 crc kubenswrapper[4853]: I1122 07:11:58.977438 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:58 crc kubenswrapper[4853]: I1122 07:11:58.977489 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:58 crc kubenswrapper[4853]: I1122 07:11:58.977502 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:58 crc kubenswrapper[4853]: I1122 07:11:58.977521 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:58 crc kubenswrapper[4853]: I1122 07:11:58.977533 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:58Z","lastTransitionTime":"2025-11-22T07:11:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:58 crc kubenswrapper[4853]: I1122 07:11:58.987008 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rvgxj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbbe3472-17cc-48dd-8e46-393b00149429\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://338e7cc28de696b2bd165b4b7d21bb9029ee9f270cf1d43c65ea3934262f0d7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5acc4b664aaf1d5b0471beb25334e65152bc3b9907f911f90e4e7899aaa1ce8d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-22T07:11:57Z\\\",\\\"message\\\":\\\"2025-11-22T07:11:10+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_b0993be5-55b1-4411-9f2c-d55ea5267c12\\\\n2025-11-22T07:11:10+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_b0993be5-55b1-4411-9f2c-d55ea5267c12 to /host/opt/cni/bin/\\\\n2025-11-22T07:11:12Z [verbose] multus-daemon started\\\\n2025-11-22T07:11:12Z [verbose] Readiness Indicator file check\\\\n2025-11-22T07:11:57Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:01Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ct6dm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rvgxj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:58Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:59 crc kubenswrapper[4853]: I1122 07:11:59.007073 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ckn94" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f60b37f-d6f5-4145-a3e7-cfe92fca6d77\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbf630e17f662af8880d1f34d5073a4f64e723987205f7bbb473a73808c7e935\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3adc38c2534c8026e260a63060426102130ca1bcbcb4e3741aaf4e87170a4c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3adc38c2534c8026e260a63060426102130ca1bcbcb4e3741aaf4e87170a4c7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db0b80c2471d5179ede5c4055fcebaabe08892f1534218952c2a19ab599be022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db0b80c2471d5179ede5c4055fcebaabe08892f1534218952c2a19ab599be022\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://157ea229b1effe5885acafb7b48db5695864ff001b1293c053aefb494f544f7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://157ea229b1effe5885acafb7b48db5695864ff001b1293c053aefb494f544f7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://203cba48e280628c33b464c336957085420ca2d345b1acfb5c8b8968ed2317e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://203cba48e280628c33b464c336957085420ca2d345b1acfb5c8b8968ed2317e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a10cb7826668319845471d465df1784a2cfba12ec3892e986bcf316c0c5e3c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a10cb7826668319845471d465df1784a2cfba12ec3892e986bcf316c0c5e3c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc93eb03f5f0cada6bf132a9de4446bc1b8ecafee3769bf18feeecea9d71478f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc93eb03f5f0cada6bf132a9de4446bc1b8ecafee3769bf18feeecea9d71478f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ckn94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:59Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:59 crc kubenswrapper[4853]: I1122 07:11:59.021777 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-pd6gs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9cc2bf97-eb39-4b0c-abda-99b49bb530fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bzn4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bzn4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:15Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-pd6gs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:59Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:59 crc kubenswrapper[4853]: I1122 07:11:59.039126 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"19e6b5d3-a090-47b9-bddc-dd2aaccd4ff4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96b61660fcd78f2f81eab2c1a0cb32018a5b32d49581167e33e7dca8ba5ddf2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a0d61ce2e58ce8d9abdd1fb9688722d8e25f8354553aef76b8cd4b3897135a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21be0b3346d0dd20c21a867b8fbd30e6291d09bd1745febbc2face7610f6d831\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e4d5ab3d780b4ec1360dd8a07fc7de5d4daec3a5f8334010949d4fd60b5516b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6dbbb1c22bd116e5913c6bc774b4705f0e4342861ceae30dc0f53c30d6fa39f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T07:10:53Z\\\",\\\"message\\\":\\\" envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1122 07:10:53.494379 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1122 07:10:53.495005 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1122 07:10:53.495036 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1122 07:10:53.495110 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1122 07:10:53.495131 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1122 07:10:53.496216 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1763795444\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1763795444\\\\\\\\\\\\\\\" (2025-11-22 06:10:44 +0000 UTC to 2026-11-22 06:10:44 +0000 UTC (now=2025-11-22 07:10:53.49617265 +0000 UTC))\\\\\\\"\\\\nI1122 07:10:53.496261 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1122 07:10:53.496336 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1122 07:10:53.497036 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1122 07:10:53.497137 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1122 07:10:53.498721 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI1122 07:10:53.500291 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF1122 07:10:53.501305 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:32Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c1ee716027cb8b663ee9faec3faed873df5079418b9b46574cf3b3a74048eef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13ca1fba1325e0ae8556def3ba0c98aa0aff6e3b9ed2aab66a5c950d2bace44e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13ca1fba1325e0ae8556def3ba0c98aa0aff6e3b9ed2aab66a5c950d2bace44e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:59Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:59 crc kubenswrapper[4853]: I1122 07:11:59.051895 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mlpz8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c09200ba-013f-45e3-b581-8523557344b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3feee491b899c2b2f58330fe2e87ba5404deca4249c5d39a6d3aa08acd10ef29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvk87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mlpz8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:59Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:59 crc kubenswrapper[4853]: I1122 07:11:59.076050 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqtsz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"893f7e02-580a-4093-ab42-ea73ffffcfe6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://959a487fe36b2bf7615c5ed65bc8255b52332f6e2d973912a5e028ff35324f2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://979f6b48ff6c91deca2bb0784d1ec745c4c0d70abdfd6f519eb82058fec05c5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34d7101a46b49cedde91b3502f1bc962179253a1bfbe5f21599ce89a91ab905d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28ab876013e6c4eb3a4806b0184f1d6615a478a34b6d8ee790d29bccd77f33df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccf377912acc39f2d30013ef3bd58a8268b927fd89da2306fcf00cb89c30893e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a08eb68f9d8b071226ac64744097fe0097ab78f86242872c4e0aacdd213adbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bfe75c62e217cccff97aad20cda18675013af6a3b1b10ef60227be8ea4965fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0bfe75c62e217cccff97aad20cda18675013af6a3b1b10ef60227be8ea4965fc\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-22T07:11:38Z\\\",\\\"message\\\":\\\"r for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:38Z is after 2025-08-24T17:21:41Z]\\\\nI1122 07:11:38.093678 6733 services_controller.go:443] Built service openshift-etcd/etcd LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.5.253\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:2379, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}, services.lbConfig{vips:[]string{\\\\\\\"10.217.5.253\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:9979, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI1122 \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:37Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-pqtsz_openshift-ovn-kubernetes(893f7e02-580a-4093-ab42-ea73ffffcfe6)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://902e41148618e1099d651f2a4cbfa00ca1dd11533b086564f090dd498ecc1b95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7b9396399dd9787c1addcb4e2e7697d78fee817d56cafd408b08b2b7d0cc5f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7b9396399dd9787c1addcb4e2e7697d78fee817d56cafd408b08b2b7d0cc5f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pqtsz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:59Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:59 crc kubenswrapper[4853]: I1122 07:11:59.081292 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:59 crc kubenswrapper[4853]: I1122 07:11:59.081348 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:59 crc kubenswrapper[4853]: I1122 07:11:59.081358 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:59 crc kubenswrapper[4853]: I1122 07:11:59.081378 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:59 crc kubenswrapper[4853]: I1122 07:11:59.081393 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:59Z","lastTransitionTime":"2025-11-22T07:11:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:59 crc kubenswrapper[4853]: I1122 07:11:59.089315 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9nx9m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120cba0a-6e0b-40b3-8c15-46e7ff7c8641\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7b7b92862fc6cca812cb5c0aa9b320fbbd11f8b458d591b56cacab985a79edd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4p7q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9nx9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:59Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:59 crc kubenswrapper[4853]: I1122 07:11:59.101807 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63551235-20f7-4ccc-a132-9f5302e01da5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0368eb3431a7b8e0225b3728f72d601ce3f9639f06175b96818fd798b33e4143\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fc661089ab1f7286ead68fd510a0848c9df8dd41a8ed349d03b43b72e1a369a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdc0676190f39e3a925b593e663ec1af1fa253b8177b7cb81fd49515e0c55c3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a97a0d92e5fdd8906fa4f150c4b7d95fdd1f3b92ac45a46e0a6846fd4a15bb0\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a97a0d92e5fdd8906fa4f150c4b7d95fdd1f3b92ac45a46e0a6846fd4a15bb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:19Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:16Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:59Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:59 crc kubenswrapper[4853]: I1122 07:11:59.113698 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b27716f9e6c912f968c4a340199cdc2dde0262d86b4832f84cae8daefa831909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a2fb699e2e8a016fa383c1d4ed2d4f0d69c26b79fb2f0dd735a84de2fd5a23d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:59Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:59 crc kubenswrapper[4853]: I1122 07:11:59.125156 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://347fc1ee4909acd862f23a9f25b4ac7ab7271ff03f0f726479d9b711820970c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:59Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:59 crc kubenswrapper[4853]: I1122 07:11:59.136821 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"476c875a-2b87-419a-8042-0ba059620fd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://babd614ecd140db7467335933695fcc122380ef2c6a3d4ae6e89b0ed29e87d69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zkhrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28cf28a4f0e05df5ad55eff2ab13e375fb1d5725f1d0e3b85c7c9cd785cf4453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zkhrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fflvd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:59Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:59 crc kubenswrapper[4853]: I1122 07:11:59.148189 4853 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nhlw4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81cf3334-f910-4d46-be00-b3cd66ba8ed4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbfa536576b192a75c2a3eef9dfde8c7f7de0a8e6edae3870f45f1c44cc48163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ckff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86bcf649be20aa1cc1bce89663ea6b7b920ac2548fbf8a22c87636bdafc4b329\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ckff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nhlw4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:59Z is after 2025-08-24T17:21:41Z" Nov 22 07:11:59 crc kubenswrapper[4853]: I1122 07:11:59.183448 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:59 crc kubenswrapper[4853]: I1122 07:11:59.183501 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:59 crc kubenswrapper[4853]: I1122 07:11:59.183515 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:59 crc kubenswrapper[4853]: I1122 07:11:59.183535 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:59 crc kubenswrapper[4853]: I1122 07:11:59.183551 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:59Z","lastTransitionTime":"2025-11-22T07:11:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:59 crc kubenswrapper[4853]: I1122 07:11:59.287656 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:59 crc kubenswrapper[4853]: I1122 07:11:59.287727 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:59 crc kubenswrapper[4853]: I1122 07:11:59.287762 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:59 crc kubenswrapper[4853]: I1122 07:11:59.287789 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:59 crc kubenswrapper[4853]: I1122 07:11:59.287802 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:59Z","lastTransitionTime":"2025-11-22T07:11:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:59 crc kubenswrapper[4853]: I1122 07:11:59.390941 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:59 crc kubenswrapper[4853]: I1122 07:11:59.391022 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:59 crc kubenswrapper[4853]: I1122 07:11:59.391042 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:59 crc kubenswrapper[4853]: I1122 07:11:59.391067 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:59 crc kubenswrapper[4853]: I1122 07:11:59.391087 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:59Z","lastTransitionTime":"2025-11-22T07:11:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:59 crc kubenswrapper[4853]: I1122 07:11:59.493807 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:59 crc kubenswrapper[4853]: I1122 07:11:59.493889 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:59 crc kubenswrapper[4853]: I1122 07:11:59.493915 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:59 crc kubenswrapper[4853]: I1122 07:11:59.493947 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:59 crc kubenswrapper[4853]: I1122 07:11:59.493972 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:59Z","lastTransitionTime":"2025-11-22T07:11:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:59 crc kubenswrapper[4853]: I1122 07:11:59.596685 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:59 crc kubenswrapper[4853]: I1122 07:11:59.596732 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:59 crc kubenswrapper[4853]: I1122 07:11:59.596742 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:59 crc kubenswrapper[4853]: I1122 07:11:59.596778 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:59 crc kubenswrapper[4853]: I1122 07:11:59.596789 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:59Z","lastTransitionTime":"2025-11-22T07:11:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:11:59 crc kubenswrapper[4853]: I1122 07:11:59.699451 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:59 crc kubenswrapper[4853]: I1122 07:11:59.699491 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:59 crc kubenswrapper[4853]: I1122 07:11:59.699500 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:59 crc kubenswrapper[4853]: I1122 07:11:59.699520 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:59 crc kubenswrapper[4853]: I1122 07:11:59.699530 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:59Z","lastTransitionTime":"2025-11-22T07:11:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:59 crc kubenswrapper[4853]: I1122 07:11:59.802713 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:59 crc kubenswrapper[4853]: I1122 07:11:59.802801 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:59 crc kubenswrapper[4853]: I1122 07:11:59.802814 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:59 crc kubenswrapper[4853]: I1122 07:11:59.802836 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:59 crc kubenswrapper[4853]: I1122 07:11:59.802851 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:59Z","lastTransitionTime":"2025-11-22T07:11:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:11:59 crc kubenswrapper[4853]: I1122 07:11:59.905496 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:11:59 crc kubenswrapper[4853]: I1122 07:11:59.905550 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:11:59 crc kubenswrapper[4853]: I1122 07:11:59.905562 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:11:59 crc kubenswrapper[4853]: I1122 07:11:59.905583 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:11:59 crc kubenswrapper[4853]: I1122 07:11:59.905600 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:11:59Z","lastTransitionTime":"2025-11-22T07:11:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:12:00 crc kubenswrapper[4853]: I1122 07:12:00.009048 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:00 crc kubenswrapper[4853]: I1122 07:12:00.009114 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:00 crc kubenswrapper[4853]: I1122 07:12:00.009132 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:00 crc kubenswrapper[4853]: I1122 07:12:00.009158 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:00 crc kubenswrapper[4853]: I1122 07:12:00.009177 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:00Z","lastTransitionTime":"2025-11-22T07:12:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:00 crc kubenswrapper[4853]: I1122 07:12:00.111464 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:00 crc kubenswrapper[4853]: I1122 07:12:00.111507 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:00 crc kubenswrapper[4853]: I1122 07:12:00.111516 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:00 crc kubenswrapper[4853]: I1122 07:12:00.111532 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:00 crc kubenswrapper[4853]: I1122 07:12:00.111542 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:00Z","lastTransitionTime":"2025-11-22T07:12:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:00 crc kubenswrapper[4853]: I1122 07:12:00.214484 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:00 crc kubenswrapper[4853]: I1122 07:12:00.214529 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:00 crc kubenswrapper[4853]: I1122 07:12:00.214540 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:00 crc kubenswrapper[4853]: I1122 07:12:00.214561 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:00 crc kubenswrapper[4853]: I1122 07:12:00.214574 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:00Z","lastTransitionTime":"2025-11-22T07:12:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:12:00 crc kubenswrapper[4853]: I1122 07:12:00.318389 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:00 crc kubenswrapper[4853]: I1122 07:12:00.318481 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:00 crc kubenswrapper[4853]: I1122 07:12:00.318510 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:00 crc kubenswrapper[4853]: I1122 07:12:00.318543 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:00 crc kubenswrapper[4853]: I1122 07:12:00.318566 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:00Z","lastTransitionTime":"2025-11-22T07:12:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:00 crc kubenswrapper[4853]: I1122 07:12:00.422190 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:00 crc kubenswrapper[4853]: I1122 07:12:00.422262 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:00 crc kubenswrapper[4853]: I1122 07:12:00.422278 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:00 crc kubenswrapper[4853]: I1122 07:12:00.422306 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:00 crc kubenswrapper[4853]: I1122 07:12:00.422332 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:00Z","lastTransitionTime":"2025-11-22T07:12:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:00 crc kubenswrapper[4853]: I1122 07:12:00.525341 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:00 crc kubenswrapper[4853]: I1122 07:12:00.525387 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:00 crc kubenswrapper[4853]: I1122 07:12:00.525399 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:00 crc kubenswrapper[4853]: I1122 07:12:00.525414 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:00 crc kubenswrapper[4853]: I1122 07:12:00.525424 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:00Z","lastTransitionTime":"2025-11-22T07:12:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:12:00 crc kubenswrapper[4853]: I1122 07:12:00.628202 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:00 crc kubenswrapper[4853]: I1122 07:12:00.628264 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:00 crc kubenswrapper[4853]: I1122 07:12:00.628274 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:00 crc kubenswrapper[4853]: I1122 07:12:00.628295 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:00 crc kubenswrapper[4853]: I1122 07:12:00.628308 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:00Z","lastTransitionTime":"2025-11-22T07:12:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:00 crc kubenswrapper[4853]: I1122 07:12:00.732224 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:00 crc kubenswrapper[4853]: I1122 07:12:00.732272 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:00 crc kubenswrapper[4853]: I1122 07:12:00.732287 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:00 crc kubenswrapper[4853]: I1122 07:12:00.732308 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:00 crc kubenswrapper[4853]: I1122 07:12:00.732321 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:00Z","lastTransitionTime":"2025-11-22T07:12:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:00 crc kubenswrapper[4853]: I1122 07:12:00.746933 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pd6gs" Nov 22 07:12:00 crc kubenswrapper[4853]: I1122 07:12:00.746982 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 07:12:00 crc kubenswrapper[4853]: I1122 07:12:00.746982 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:12:00 crc kubenswrapper[4853]: I1122 07:12:00.747003 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 07:12:00 crc kubenswrapper[4853]: E1122 07:12:00.747198 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pd6gs" podUID="9cc2bf97-eb39-4b0c-abda-99b49bb530fd" Nov 22 07:12:00 crc kubenswrapper[4853]: E1122 07:12:00.747305 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 07:12:00 crc kubenswrapper[4853]: E1122 07:12:00.747407 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 07:12:00 crc kubenswrapper[4853]: E1122 07:12:00.747471 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 07:12:00 crc kubenswrapper[4853]: I1122 07:12:00.835163 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:00 crc kubenswrapper[4853]: I1122 07:12:00.835210 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:00 crc kubenswrapper[4853]: I1122 07:12:00.835221 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:00 crc kubenswrapper[4853]: I1122 07:12:00.835238 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:00 crc kubenswrapper[4853]: I1122 07:12:00.835250 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:00Z","lastTransitionTime":"2025-11-22T07:12:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:12:00 crc kubenswrapper[4853]: I1122 07:12:00.939007 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:00 crc kubenswrapper[4853]: I1122 07:12:00.939097 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:00 crc kubenswrapper[4853]: I1122 07:12:00.939119 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:00 crc kubenswrapper[4853]: I1122 07:12:00.939150 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:00 crc kubenswrapper[4853]: I1122 07:12:00.939173 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:00Z","lastTransitionTime":"2025-11-22T07:12:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:01 crc kubenswrapper[4853]: I1122 07:12:01.042944 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:01 crc kubenswrapper[4853]: I1122 07:12:01.043016 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:01 crc kubenswrapper[4853]: I1122 07:12:01.043037 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:01 crc kubenswrapper[4853]: I1122 07:12:01.043068 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:01 crc kubenswrapper[4853]: I1122 07:12:01.043089 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:01Z","lastTransitionTime":"2025-11-22T07:12:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:01 crc kubenswrapper[4853]: I1122 07:12:01.145982 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:01 crc kubenswrapper[4853]: I1122 07:12:01.146048 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:01 crc kubenswrapper[4853]: I1122 07:12:01.146058 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:01 crc kubenswrapper[4853]: I1122 07:12:01.146084 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:01 crc kubenswrapper[4853]: I1122 07:12:01.146098 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:01Z","lastTransitionTime":"2025-11-22T07:12:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:12:01 crc kubenswrapper[4853]: I1122 07:12:01.252794 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:01 crc kubenswrapper[4853]: I1122 07:12:01.253353 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:01 crc kubenswrapper[4853]: I1122 07:12:01.253386 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:01 crc kubenswrapper[4853]: I1122 07:12:01.253414 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:01 crc kubenswrapper[4853]: I1122 07:12:01.253433 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:01Z","lastTransitionTime":"2025-11-22T07:12:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:01 crc kubenswrapper[4853]: I1122 07:12:01.356741 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:01 crc kubenswrapper[4853]: I1122 07:12:01.356831 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:01 crc kubenswrapper[4853]: I1122 07:12:01.356846 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:01 crc kubenswrapper[4853]: I1122 07:12:01.357233 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:01 crc kubenswrapper[4853]: I1122 07:12:01.357278 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:01Z","lastTransitionTime":"2025-11-22T07:12:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:01 crc kubenswrapper[4853]: I1122 07:12:01.460591 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:01 crc kubenswrapper[4853]: I1122 07:12:01.460673 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:01 crc kubenswrapper[4853]: I1122 07:12:01.460744 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:01 crc kubenswrapper[4853]: I1122 07:12:01.460806 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:01 crc kubenswrapper[4853]: I1122 07:12:01.460824 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:01Z","lastTransitionTime":"2025-11-22T07:12:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:12:01 crc kubenswrapper[4853]: I1122 07:12:01.563484 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:01 crc kubenswrapper[4853]: I1122 07:12:01.563546 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:01 crc kubenswrapper[4853]: I1122 07:12:01.563558 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:01 crc kubenswrapper[4853]: I1122 07:12:01.563581 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:01 crc kubenswrapper[4853]: I1122 07:12:01.563595 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:01Z","lastTransitionTime":"2025-11-22T07:12:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:01 crc kubenswrapper[4853]: I1122 07:12:01.666670 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:01 crc kubenswrapper[4853]: I1122 07:12:01.666727 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:01 crc kubenswrapper[4853]: I1122 07:12:01.666737 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:01 crc kubenswrapper[4853]: I1122 07:12:01.666778 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:01 crc kubenswrapper[4853]: I1122 07:12:01.666792 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:01Z","lastTransitionTime":"2025-11-22T07:12:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:01 crc kubenswrapper[4853]: I1122 07:12:01.770247 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:01 crc kubenswrapper[4853]: I1122 07:12:01.770290 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:01 crc kubenswrapper[4853]: I1122 07:12:01.770300 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:01 crc kubenswrapper[4853]: I1122 07:12:01.770316 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:01 crc kubenswrapper[4853]: I1122 07:12:01.770326 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:01Z","lastTransitionTime":"2025-11-22T07:12:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:12:01 crc kubenswrapper[4853]: I1122 07:12:01.873354 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:01 crc kubenswrapper[4853]: I1122 07:12:01.873414 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:01 crc kubenswrapper[4853]: I1122 07:12:01.873425 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:01 crc kubenswrapper[4853]: I1122 07:12:01.873445 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:01 crc kubenswrapper[4853]: I1122 07:12:01.873454 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:01Z","lastTransitionTime":"2025-11-22T07:12:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:01 crc kubenswrapper[4853]: I1122 07:12:01.933464 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:01 crc kubenswrapper[4853]: I1122 07:12:01.933513 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:01 crc kubenswrapper[4853]: I1122 07:12:01.933522 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:01 crc kubenswrapper[4853]: I1122 07:12:01.933541 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:01 crc kubenswrapper[4853]: I1122 07:12:01.933554 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:01Z","lastTransitionTime":"2025-11-22T07:12:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:12:01 crc kubenswrapper[4853]: E1122 07:12:01.946647 4853 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:12:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:12:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:12:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:12:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:12:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:12:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:12:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:12:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d74141ce-7696-4d74-b510-3a9c2c375ecd\\\",\\\"systemUUID\\\":\\\"362c9708-b683-4c02-a83b-39323a200ef4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:01Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:01 crc kubenswrapper[4853]: I1122 07:12:01.950982 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:01 crc kubenswrapper[4853]: I1122 07:12:01.951038 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
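
This error, rather than the CNI message, is the deeper failure: the status patch itself is well-formed, but the API server cannot admit it because the validating webhook node.network-node-identity.openshift.io at https://127.0.0.1:9743 presents a serving certificate that expired at 2025-08-24T17:21:41Z while the node clock reads 2025-11-22T07:12:01Z. That is consistent with a CRC image being started long after it was built, before certificate rotation has caught up. A Go sketch to confirm what the webhook is actually serving (address copied from the log; InsecureSkipVerify is deliberate here because the whole point is to inspect a certificate that cannot be verified):

// certwindow.go: dial a TLS endpoint and print its leaf certificate's
// validity window, to confirm the x509 expiry error above.
package main

import (
	"crypto/tls"
	"fmt"
	"os"
	"time"
)

func main() {
	addr := "127.0.0.1:9743" // the webhook endpoint named in the log
	conn, err := tls.Dial("tcp", addr, &tls.Config{InsecureSkipVerify: true})
	if err != nil {
		fmt.Fprintln(os.Stderr, "dial:", err)
		os.Exit(1)
	}
	defer conn.Close()
	leaf := conn.ConnectionState().PeerCertificates[0]
	fmt.Printf("subject:   %s\n", leaf.Subject)
	fmt.Printf("notBefore: %s\n", leaf.NotBefore.UTC().Format(time.RFC3339))
	fmt.Printf("notAfter:  %s\n", leaf.NotAfter.UTC().Format(time.RFC3339))
	if time.Now().After(leaf.NotAfter) {
		// Matches the log: current time is after the certificate's notAfter.
		fmt.Println("leaf certificate has expired")
	}
}

The same inspection can be done with openssl s_client against 127.0.0.1:9743; the sketch just keeps everything in one language.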
event="NodeHasNoDiskPressure" Nov 22 07:12:01 crc kubenswrapper[4853]: I1122 07:12:01.951050 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:01 crc kubenswrapper[4853]: I1122 07:12:01.951078 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:01 crc kubenswrapper[4853]: I1122 07:12:01.951094 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:01Z","lastTransitionTime":"2025-11-22T07:12:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:01 crc kubenswrapper[4853]: E1122 07:12:01.964766 4853 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:12:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:12:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:12:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:12:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:12:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:12:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:12:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:12:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d74141ce-7696-4d74-b510-3a9c2c375ecd\\\",\\\"systemUUID\\\":\\\"362c9708-b683-4c02-a83b-39323a200ef4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:01Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:01 crc kubenswrapper[4853]: I1122 07:12:01.969344 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:01 crc kubenswrapper[4853]: I1122 07:12:01.969410 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 22 07:12:01 crc kubenswrapper[4853]: I1122 07:12:01.969423 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:01 crc kubenswrapper[4853]: I1122 07:12:01.969444 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:01 crc kubenswrapper[4853]: I1122 07:12:01.969456 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:01Z","lastTransitionTime":"2025-11-22T07:12:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:01 crc kubenswrapper[4853]: E1122 07:12:01.982526 4853 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:12:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:12:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:12:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:12:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:12:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:12:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:12:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:12:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d74141ce-7696-4d74-b510-3a9c2c375ecd\\\",\\\"systemUUID\\\":\\\"362c9708-b683-4c02-a83b-39323a200ef4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:01Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:01 crc kubenswrapper[4853]: I1122 07:12:01.988695 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:01 crc kubenswrapper[4853]: I1122 07:12:01.988792 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
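
The kubelet then retries the identical patch, and every attempt (the two above at 07:12:01.964766 and 07:12:01.982526, and the next one below at 07:12:02.005235) fails with the same webhook error, which is why the full patch body, image list and all, is logged repeatedly; upstream kubelet makes a fixed small number of such attempts per sync loop (nodeStatusUpdateRetry, five upstream) before giving up until the next loop, so this block keeps recurring while the certificate stays invalid. For scale, a two-line Go computation of how stale the certificate was, using only the two timestamps printed in the error:

// expiry.go: how far past its notAfter was the webhook certificate at the
// time of these retries? Both timestamps are taken verbatim from the log.
package main

import (
	"fmt"
	"time"
)

func main() {
	now, _ := time.Parse(time.RFC3339, "2025-11-22T07:12:01Z")      // "current time" from the error
	notAfter, _ := time.Parse(time.RFC3339, "2025-08-24T17:21:41Z") // certificate notAfter from the error
	d := now.Sub(notAfter)
	fmt.Printf("certificate had been expired for %s (about %d days)\n", d, int(d.Hours()/24))
}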
event="NodeHasNoDiskPressure" Nov 22 07:12:01 crc kubenswrapper[4853]: I1122 07:12:01.988806 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:01 crc kubenswrapper[4853]: I1122 07:12:01.988833 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:01 crc kubenswrapper[4853]: I1122 07:12:01.988845 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:01Z","lastTransitionTime":"2025-11-22T07:12:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:02 crc kubenswrapper[4853]: E1122 07:12:02.005235 4853 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:12:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:12:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:12:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:12:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:12:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:12:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:12:01Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:12:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d74141ce-7696-4d74-b510-3a9c2c375ecd\\\",\\\"systemUUID\\\":\\\"362c9708-b683-4c02-a83b-39323a200ef4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:02Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:02 crc kubenswrapper[4853]: I1122 07:12:02.010307 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:02 crc kubenswrapper[4853]: I1122 07:12:02.010373 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 22 07:12:02 crc kubenswrapper[4853]: I1122 07:12:02.010381 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:02 crc kubenswrapper[4853]: I1122 07:12:02.010405 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:02 crc kubenswrapper[4853]: I1122 07:12:02.010416 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:02Z","lastTransitionTime":"2025-11-22T07:12:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:02 crc kubenswrapper[4853]: E1122 07:12:02.023715 4853 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:12:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:12:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:12:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:12:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:12:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:12:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:12:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:12:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d74141ce-7696-4d74-b510-3a9c2c375ecd\\\",\\\"systemUUID\\\":\\\"362c9708-b683-4c02-a83b-39323a200ef4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:02Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:02 crc kubenswrapper[4853]: E1122 07:12:02.023870 4853 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 22 07:12:02 crc kubenswrapper[4853]: I1122 07:12:02.025958 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
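
The failed patch above ends with the actual root cause: the "node.network-node-identity.openshift.io" webhook at https://127.0.0.1:9743 is serving a certificate that expired on 2025-08-24T17:21:41Z while the node clock reads 2025-11-22, so every node-status update is rejected before it reaches the API server, and the final retry ("exceeds retry count") fails with the byte-identical payload shown once in full above. A minimal sketch of the same validity check the kubelet's TLS handshake performs, assuming the webhook is still listening on 127.0.0.1:9743 and that the third-party cryptography package (version 42 or newer) is available:

```python
# Hypothetical check: fetch the webhook's serving certificate and compare its
# validity window against the current time, mirroring the x509 error above.
import datetime
import socket
import ssl

from cryptography import x509  # third-party; not part of the stdlib

ctx = ssl.create_default_context()
ctx.check_hostname = False       # we only want to inspect the certificate
ctx.verify_mode = ssl.CERT_NONE  # so the handshake succeeds even when expired

with socket.create_connection(("127.0.0.1", 9743), timeout=5) as sock:
    with ctx.wrap_socket(sock, server_hostname="127.0.0.1") as tls:
        der = tls.getpeercert(binary_form=True)

cert = x509.load_der_x509_certificate(der)
now = datetime.datetime.now(datetime.timezone.utc)
print("notBefore:", cert.not_valid_before_utc)
print("notAfter: ", cert.not_valid_after_utc)   # 2025-08-24T17:21:41Z per the log
print("expired:  ", now > cert.not_valid_after_utc)
```

Nov 22 07:12:02 crc kubenswrapper[4853]: I1122 07:12:02.025958 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc"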
event="NodeHasSufficientMemory" Nov 22 07:12:02 crc kubenswrapper[4853]: I1122 07:12:02.026037 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:02 crc kubenswrapper[4853]: I1122 07:12:02.026053 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:02 crc kubenswrapper[4853]: I1122 07:12:02.026077 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:02 crc kubenswrapper[4853]: I1122 07:12:02.026093 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:02Z","lastTransitionTime":"2025-11-22T07:12:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:02 crc kubenswrapper[4853]: I1122 07:12:02.129221 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:02 crc kubenswrapper[4853]: I1122 07:12:02.129290 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:02 crc kubenswrapper[4853]: I1122 07:12:02.129302 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:02 crc kubenswrapper[4853]: I1122 07:12:02.129323 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:02 crc kubenswrapper[4853]: I1122 07:12:02.129335 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:02Z","lastTransitionTime":"2025-11-22T07:12:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:02 crc kubenswrapper[4853]: I1122 07:12:02.232793 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:02 crc kubenswrapper[4853]: I1122 07:12:02.232866 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:02 crc kubenswrapper[4853]: I1122 07:12:02.232879 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:02 crc kubenswrapper[4853]: I1122 07:12:02.232900 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:02 crc kubenswrapper[4853]: I1122 07:12:02.232916 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:02Z","lastTransitionTime":"2025-11-22T07:12:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:12:02 crc kubenswrapper[4853]: I1122 07:12:02.335798 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:02 crc kubenswrapper[4853]: I1122 07:12:02.335869 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:02 crc kubenswrapper[4853]: I1122 07:12:02.335883 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:02 crc kubenswrapper[4853]: I1122 07:12:02.335934 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:02 crc kubenswrapper[4853]: I1122 07:12:02.335950 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:02Z","lastTransitionTime":"2025-11-22T07:12:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:02 crc kubenswrapper[4853]: I1122 07:12:02.439183 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:02 crc kubenswrapper[4853]: I1122 07:12:02.439240 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:02 crc kubenswrapper[4853]: I1122 07:12:02.439258 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:02 crc kubenswrapper[4853]: I1122 07:12:02.439279 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:02 crc kubenswrapper[4853]: I1122 07:12:02.439293 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:02Z","lastTransitionTime":"2025-11-22T07:12:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:02 crc kubenswrapper[4853]: I1122 07:12:02.542654 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:02 crc kubenswrapper[4853]: I1122 07:12:02.542723 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:02 crc kubenswrapper[4853]: I1122 07:12:02.542736 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:02 crc kubenswrapper[4853]: I1122 07:12:02.542796 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:02 crc kubenswrapper[4853]: I1122 07:12:02.542809 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:02Z","lastTransitionTime":"2025-11-22T07:12:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:12:02 crc kubenswrapper[4853]: I1122 07:12:02.747289 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pd6gs" Nov 22 07:12:02 crc kubenswrapper[4853]: I1122 07:12:02.747358 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 07:12:02 crc kubenswrapper[4853]: I1122 07:12:02.747304 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 07:12:02 crc kubenswrapper[4853]: E1122 07:12:02.747503 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pd6gs" podUID="9cc2bf97-eb39-4b0c-abda-99b49bb530fd" Nov 22 07:12:02 crc kubenswrapper[4853]: I1122 07:12:02.747522 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:12:02 crc kubenswrapper[4853]: E1122 07:12:02.747680 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 07:12:02 crc kubenswrapper[4853]: E1122 07:12:02.747791 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 07:12:02 crc kubenswrapper[4853]: E1122 07:12:02.747996 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 07:12:02 crc kubenswrapper[4853]: I1122 07:12:02.817812 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:02 crc kubenswrapper[4853]: I1122 07:12:02.817900 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:02 crc kubenswrapper[4853]: I1122 07:12:02.817917 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:02 crc kubenswrapper[4853]: I1122 07:12:02.817939 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:02 crc kubenswrapper[4853]: I1122 07:12:02.817955 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:02Z","lastTransitionTime":"2025-11-22T07:12:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:02 crc kubenswrapper[4853]: I1122 07:12:02.920813 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:02 crc kubenswrapper[4853]: I1122 07:12:02.920872 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:02 crc kubenswrapper[4853]: I1122 07:12:02.920884 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:02 crc kubenswrapper[4853]: I1122 07:12:02.920903 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:02 crc kubenswrapper[4853]: I1122 07:12:02.920916 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:02Z","lastTransitionTime":"2025-11-22T07:12:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:12:03 crc kubenswrapper[4853]: I1122 07:12:03.024120 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:03 crc kubenswrapper[4853]: I1122 07:12:03.024175 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:03 crc kubenswrapper[4853]: I1122 07:12:03.024189 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:03 crc kubenswrapper[4853]: I1122 07:12:03.024211 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:03 crc kubenswrapper[4853]: I1122 07:12:03.024225 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:03Z","lastTransitionTime":"2025-11-22T07:12:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:03 crc kubenswrapper[4853]: I1122 07:12:03.127320 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:03 crc kubenswrapper[4853]: I1122 07:12:03.127370 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:03 crc kubenswrapper[4853]: I1122 07:12:03.127382 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:03 crc kubenswrapper[4853]: I1122 07:12:03.127404 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:03 crc kubenswrapper[4853]: I1122 07:12:03.127418 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:03Z","lastTransitionTime":"2025-11-22T07:12:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:03 crc kubenswrapper[4853]: I1122 07:12:03.229712 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:03 crc kubenswrapper[4853]: I1122 07:12:03.229834 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:03 crc kubenswrapper[4853]: I1122 07:12:03.229847 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:03 crc kubenswrapper[4853]: I1122 07:12:03.229866 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:03 crc kubenswrapper[4853]: I1122 07:12:03.229879 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:03Z","lastTransitionTime":"2025-11-22T07:12:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:12:03 crc kubenswrapper[4853]: I1122 07:12:03.333026 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:03 crc kubenswrapper[4853]: I1122 07:12:03.333088 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:03 crc kubenswrapper[4853]: I1122 07:12:03.333099 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:03 crc kubenswrapper[4853]: I1122 07:12:03.333123 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:03 crc kubenswrapper[4853]: I1122 07:12:03.333136 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:03Z","lastTransitionTime":"2025-11-22T07:12:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:03 crc kubenswrapper[4853]: I1122 07:12:03.435408 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:03 crc kubenswrapper[4853]: I1122 07:12:03.435496 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:03 crc kubenswrapper[4853]: I1122 07:12:03.435587 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:03 crc kubenswrapper[4853]: I1122 07:12:03.435620 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:03 crc kubenswrapper[4853]: I1122 07:12:03.435643 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:03Z","lastTransitionTime":"2025-11-22T07:12:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:03 crc kubenswrapper[4853]: I1122 07:12:03.542738 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:03 crc kubenswrapper[4853]: I1122 07:12:03.542834 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:03 crc kubenswrapper[4853]: I1122 07:12:03.542850 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:03 crc kubenswrapper[4853]: I1122 07:12:03.542874 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:03 crc kubenswrapper[4853]: I1122 07:12:03.542893 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:03Z","lastTransitionTime":"2025-11-22T07:12:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:12:03 crc kubenswrapper[4853]: I1122 07:12:03.646227 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:03 crc kubenswrapper[4853]: I1122 07:12:03.646286 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:03 crc kubenswrapper[4853]: I1122 07:12:03.646298 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:03 crc kubenswrapper[4853]: I1122 07:12:03.646318 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:03 crc kubenswrapper[4853]: I1122 07:12:03.646330 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:03Z","lastTransitionTime":"2025-11-22T07:12:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:03 crc kubenswrapper[4853]: I1122 07:12:03.749435 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:03 crc kubenswrapper[4853]: I1122 07:12:03.749491 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:03 crc kubenswrapper[4853]: I1122 07:12:03.749502 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:03 crc kubenswrapper[4853]: I1122 07:12:03.749520 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:03 crc kubenswrapper[4853]: I1122 07:12:03.749531 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:03Z","lastTransitionTime":"2025-11-22T07:12:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:03 crc kubenswrapper[4853]: I1122 07:12:03.853112 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:03 crc kubenswrapper[4853]: I1122 07:12:03.853180 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:03 crc kubenswrapper[4853]: I1122 07:12:03.853192 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:03 crc kubenswrapper[4853]: I1122 07:12:03.853215 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:03 crc kubenswrapper[4853]: I1122 07:12:03.853230 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:03Z","lastTransitionTime":"2025-11-22T07:12:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:12:03 crc kubenswrapper[4853]: I1122 07:12:03.955896 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:03 crc kubenswrapper[4853]: I1122 07:12:03.955952 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:03 crc kubenswrapper[4853]: I1122 07:12:03.955961 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:03 crc kubenswrapper[4853]: I1122 07:12:03.955983 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:03 crc kubenswrapper[4853]: I1122 07:12:03.955994 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:03Z","lastTransitionTime":"2025-11-22T07:12:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:04 crc kubenswrapper[4853]: I1122 07:12:04.059543 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:04 crc kubenswrapper[4853]: I1122 07:12:04.059590 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:04 crc kubenswrapper[4853]: I1122 07:12:04.059602 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:04 crc kubenswrapper[4853]: I1122 07:12:04.059623 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:04 crc kubenswrapper[4853]: I1122 07:12:04.059636 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:04Z","lastTransitionTime":"2025-11-22T07:12:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:04 crc kubenswrapper[4853]: I1122 07:12:04.162572 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:04 crc kubenswrapper[4853]: I1122 07:12:04.162626 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:04 crc kubenswrapper[4853]: I1122 07:12:04.162637 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:04 crc kubenswrapper[4853]: I1122 07:12:04.162659 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:04 crc kubenswrapper[4853]: I1122 07:12:04.162673 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:04Z","lastTransitionTime":"2025-11-22T07:12:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:12:04 crc kubenswrapper[4853]: I1122 07:12:04.266019 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:04 crc kubenswrapper[4853]: I1122 07:12:04.266089 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:04 crc kubenswrapper[4853]: I1122 07:12:04.266105 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:04 crc kubenswrapper[4853]: I1122 07:12:04.266128 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:04 crc kubenswrapper[4853]: I1122 07:12:04.266142 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:04Z","lastTransitionTime":"2025-11-22T07:12:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:04 crc kubenswrapper[4853]: I1122 07:12:04.369048 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:04 crc kubenswrapper[4853]: I1122 07:12:04.369096 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:04 crc kubenswrapper[4853]: I1122 07:12:04.369109 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:04 crc kubenswrapper[4853]: I1122 07:12:04.369127 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:04 crc kubenswrapper[4853]: I1122 07:12:04.369138 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:04Z","lastTransitionTime":"2025-11-22T07:12:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:04 crc kubenswrapper[4853]: I1122 07:12:04.472494 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:04 crc kubenswrapper[4853]: I1122 07:12:04.472545 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:04 crc kubenswrapper[4853]: I1122 07:12:04.472555 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:04 crc kubenswrapper[4853]: I1122 07:12:04.472576 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:04 crc kubenswrapper[4853]: I1122 07:12:04.472590 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:04Z","lastTransitionTime":"2025-11-22T07:12:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:12:04 crc kubenswrapper[4853]: I1122 07:12:04.575391 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:04 crc kubenswrapper[4853]: I1122 07:12:04.575456 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:04 crc kubenswrapper[4853]: I1122 07:12:04.575473 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:04 crc kubenswrapper[4853]: I1122 07:12:04.575520 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:04 crc kubenswrapper[4853]: I1122 07:12:04.575547 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:04Z","lastTransitionTime":"2025-11-22T07:12:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:04 crc kubenswrapper[4853]: I1122 07:12:04.679236 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:04 crc kubenswrapper[4853]: I1122 07:12:04.679286 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:04 crc kubenswrapper[4853]: I1122 07:12:04.679298 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:04 crc kubenswrapper[4853]: I1122 07:12:04.679318 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:04 crc kubenswrapper[4853]: I1122 07:12:04.679330 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:04Z","lastTransitionTime":"2025-11-22T07:12:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:04 crc kubenswrapper[4853]: I1122 07:12:04.746882 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 07:12:04 crc kubenswrapper[4853]: I1122 07:12:04.746934 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pd6gs" Nov 22 07:12:04 crc kubenswrapper[4853]: I1122 07:12:04.746892 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:12:04 crc kubenswrapper[4853]: I1122 07:12:04.746899 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 07:12:04 crc kubenswrapper[4853]: E1122 07:12:04.747071 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 07:12:04 crc kubenswrapper[4853]: E1122 07:12:04.747191 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 07:12:04 crc kubenswrapper[4853]: E1122 07:12:04.747485 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pd6gs" podUID="9cc2bf97-eb39-4b0c-abda-99b49bb530fd" Nov 22 07:12:04 crc kubenswrapper[4853]: E1122 07:12:04.747566 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 07:12:04 crc kubenswrapper[4853]: I1122 07:12:04.782010 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:04 crc kubenswrapper[4853]: I1122 07:12:04.782055 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:04 crc kubenswrapper[4853]: I1122 07:12:04.782066 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:04 crc kubenswrapper[4853]: I1122 07:12:04.782084 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:04 crc kubenswrapper[4853]: I1122 07:12:04.782096 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:04Z","lastTransitionTime":"2025-11-22T07:12:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:12:04 crc kubenswrapper[4853]: I1122 07:12:04.885015 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:04 crc kubenswrapper[4853]: I1122 07:12:04.885066 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:04 crc kubenswrapper[4853]: I1122 07:12:04.885074 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:04 crc kubenswrapper[4853]: I1122 07:12:04.885090 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:04 crc kubenswrapper[4853]: I1122 07:12:04.885104 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:04Z","lastTransitionTime":"2025-11-22T07:12:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:04 crc kubenswrapper[4853]: I1122 07:12:04.988720 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:04 crc kubenswrapper[4853]: I1122 07:12:04.988811 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:04 crc kubenswrapper[4853]: I1122 07:12:04.988826 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:04 crc kubenswrapper[4853]: I1122 07:12:04.988850 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:04 crc kubenswrapper[4853]: I1122 07:12:04.988867 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:04Z","lastTransitionTime":"2025-11-22T07:12:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:05 crc kubenswrapper[4853]: I1122 07:12:05.092031 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:05 crc kubenswrapper[4853]: I1122 07:12:05.092082 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:05 crc kubenswrapper[4853]: I1122 07:12:05.092096 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:05 crc kubenswrapper[4853]: I1122 07:12:05.092116 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:05 crc kubenswrapper[4853]: I1122 07:12:05.092134 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:05Z","lastTransitionTime":"2025-11-22T07:12:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:12:05 crc kubenswrapper[4853]: I1122 07:12:05.195481 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:05 crc kubenswrapper[4853]: I1122 07:12:05.195583 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:05 crc kubenswrapper[4853]: I1122 07:12:05.195594 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:05 crc kubenswrapper[4853]: I1122 07:12:05.195613 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:05 crc kubenswrapper[4853]: I1122 07:12:05.195633 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:05Z","lastTransitionTime":"2025-11-22T07:12:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:05 crc kubenswrapper[4853]: I1122 07:12:05.298997 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:05 crc kubenswrapper[4853]: I1122 07:12:05.299049 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:05 crc kubenswrapper[4853]: I1122 07:12:05.299058 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:05 crc kubenswrapper[4853]: I1122 07:12:05.299080 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:05 crc kubenswrapper[4853]: I1122 07:12:05.299095 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:05Z","lastTransitionTime":"2025-11-22T07:12:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:05 crc kubenswrapper[4853]: I1122 07:12:05.402709 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:05 crc kubenswrapper[4853]: I1122 07:12:05.402799 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:05 crc kubenswrapper[4853]: I1122 07:12:05.402810 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:05 crc kubenswrapper[4853]: I1122 07:12:05.402827 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:05 crc kubenswrapper[4853]: I1122 07:12:05.402843 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:05Z","lastTransitionTime":"2025-11-22T07:12:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:12:05 crc kubenswrapper[4853]: I1122 07:12:05.505029 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:05 crc kubenswrapper[4853]: I1122 07:12:05.505091 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:05 crc kubenswrapper[4853]: I1122 07:12:05.505104 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:05 crc kubenswrapper[4853]: I1122 07:12:05.505141 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:05 crc kubenswrapper[4853]: I1122 07:12:05.505156 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:05Z","lastTransitionTime":"2025-11-22T07:12:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:05 crc kubenswrapper[4853]: I1122 07:12:05.607896 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:05 crc kubenswrapper[4853]: I1122 07:12:05.608026 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:05 crc kubenswrapper[4853]: I1122 07:12:05.608037 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:05 crc kubenswrapper[4853]: I1122 07:12:05.608057 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:05 crc kubenswrapper[4853]: I1122 07:12:05.608066 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:05Z","lastTransitionTime":"2025-11-22T07:12:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:05 crc kubenswrapper[4853]: I1122 07:12:05.710490 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:05 crc kubenswrapper[4853]: I1122 07:12:05.710556 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:05 crc kubenswrapper[4853]: I1122 07:12:05.710574 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:05 crc kubenswrapper[4853]: I1122 07:12:05.710610 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:05 crc kubenswrapper[4853]: I1122 07:12:05.710624 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:05Z","lastTransitionTime":"2025-11-22T07:12:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:12:05 crc kubenswrapper[4853]: I1122 07:12:05.764973 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"476c875a-2b87-419a-8042-0ba059620fd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://babd614ecd140db7467335933695fcc122380ef2c6a3d4ae6e89b0ed29e87d69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zkhrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28cf28a4f0e05df5ad55eff2ab13e375fb1d5725f1d0e3b85c7c9cd785cf4453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zkhrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fflvd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:05Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:05 crc kubenswrapper[4853]: I1122 07:12:05.779835 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nhlw4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81cf3334-f910-4d46-be00-b3cd66ba8ed4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbfa536576b192a75c2a3eef9dfde8c7f7de0a8e6edae3870f45f1c44cc48163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ckff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86bcf649be20aa1cc1bce89663ea6b7b920ac2548fbf8a22c87636bdafc4b329\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ckff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:
13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nhlw4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:05Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:05 crc kubenswrapper[4853]: I1122 07:12:05.801027 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63551235-20f7-4ccc-a132-9f5302e01da5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0368eb3431a7b8e0225b3728f72d601ce3f9639f06175b96818fd798b33e4143\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fc661089ab1f7286ead68fd510a0848c9df8dd41a8ed349d03b43b72e1a369a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdc0676190f39e3a925b593e663ec1af1fa253b8177b7cb81fd49515e0c55c3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-sche
duler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a97a0d92e5fdd8906fa4f150c4b7d95fdd1f3b92ac45a46e0a6846fd4a15bb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a97a0d92e5fdd8906fa4f150c4b7d95fdd1f3b92ac45a46e0a6846fd4a15bb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:19Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:16Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:05Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:05 crc kubenswrapper[4853]: I1122 07:12:05.813547 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:05 crc kubenswrapper[4853]: I1122 07:12:05.813596 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:05 crc kubenswrapper[4853]: I1122 07:12:05.813608 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:05 crc kubenswrapper[4853]: I1122 07:12:05.813630 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:05 crc kubenswrapper[4853]: I1122 07:12:05.813642 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:05Z","lastTransitionTime":"2025-11-22T07:12:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:12:05 crc kubenswrapper[4853]: I1122 07:12:05.822550 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b27716f9e6c912f968c4a340199cdc2dde0262d86b4832f84cae8daefa831909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a2fb699e2e8a016fa383c1d4ed2d4f0d69c26b79fb2f0dd735a84de2fd5a23d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:05Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:05 crc kubenswrapper[4853]: I1122 07:12:05.843007 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://347fc1ee4909acd862f23a9f25b4ac7ab7271ff03f0f726479d9b711820970c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:05Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:05 crc kubenswrapper[4853]: I1122 07:12:05.860947 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9003355-0dc6-42ad-950d-a7884e333f4c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f0019c0473b7ccc974dc1659a7ae0565b39d69af17094c420625387ecc9d7f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94704fd95e75549ea29613f00da78d4517cc64896cf1274c9c6f784a61f4ad4a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T07:10:53Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI1122 07:10:26.514729 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI1122 07:10:26.525241 1 observer_polling.go:159] Starting file observer\\\\nI1122 07:10:26.883412 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI1122 07:10:26.897202 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI1122 07:10:50.420853 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nI1122 07:10:51.525898 1 observer_polling.go:162] Shutting down file observer\\\\nF1122 07:10:53.463148 1 cmd.go:179] failed checking apiserver connectivity: client rate limiter Wait returned an error: context canceled\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:21Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9736c38776f22909b8e8a832683d1b881f020257f7183397286e0dd23c973a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30f542bc05dc23bc8dff5029894d08a6fea8333de31a66432814b7bbc412c4fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5d8d77818159ca98019aac809db4b1b2460723bc56b9b6728eec11faeefd026\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager
-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:05Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:05 crc kubenswrapper[4853]: I1122 07:12:05.877212 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4cc3002-cfaf-47cf-b539-3815924af5c3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://666c3902a5cb3b755fe3b5861568b744fb3ffbd28f72d7fd22d18b387486de03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6143a86b720cb8a29f4ac3d68bf2693f92fb50f528cdd7269e022115795ef14b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":
{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6143a86b720cb8a29f4ac3d68bf2693f92fb50f528cdd7269e022115795ef14b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:05Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:05 crc kubenswrapper[4853]: I1122 07:12:05.894942 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:05Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:05 crc kubenswrapper[4853]: I1122 07:12:05.911529 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:05Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:05 crc kubenswrapper[4853]: I1122 07:12:05.916614 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:05 crc kubenswrapper[4853]: I1122 07:12:05.916674 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:05 crc kubenswrapper[4853]: I1122 07:12:05.916686 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:05 crc kubenswrapper[4853]: I1122 07:12:05.916705 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:05 crc kubenswrapper[4853]: I1122 07:12:05.916716 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:05Z","lastTransitionTime":"2025-11-22T07:12:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:12:05 crc kubenswrapper[4853]: I1122 07:12:05.927936 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rvgxj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbbe3472-17cc-48dd-8e46-393b00149429\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://338e7cc28de696b2bd165b4b7d21bb9029ee9f270cf1d43c65ea3934262f0d7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5acc4b664aaf1d5b0471beb25334e65152bc3b9907f911f90e4e7899aaa1ce8d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-22T07:11:57Z\\\",\\\"message\\\":\\\"2025-11-22T07:11:10+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_b0993be5-55b1-4411-9f2c-d55ea5267c12\\\\n2025-11-22T07:11:10+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_b0993be5-55b1-4411-9f2c-d55ea5267c12 to /host/opt/cni/bin/\\\\n2025-11-22T07:11:12Z [verbose] multus-daemon started\\\\n2025-11-22T07:11:12Z [verbose] Readiness Indicator file check\\\\n2025-11-22T07:11:57Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:01Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ct6dm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rvgxj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:05Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:05 crc kubenswrapper[4853]: I1122 07:12:05.944800 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ckn94" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f60b37f-d6f5-4145-a3e7-cfe92fca6d77\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbf630e17f662af8880d1f34d5073a4f64e723987205f7bbb473a73808c7e935\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3adc38c2534c8026e260a63060426102130ca1bcbcb4e3741aaf4e87170a4c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3adc38c2534c8026e260a63060426102130ca1bcbcb4e3741aaf4e87170a4c7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db0b80c2471d5179ede5c4055fcebaabe08892f1534218952c2a19ab599be022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db0b80c2471d5179ede5c4055fcebaabe08892f1534218952c2a19ab599be022\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://157ea229b1effe5885acafb7b48db5695864ff001b1293c053aefb494f544f7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://157ea229b1effe5885acafb7b48db5695864ff001b1293c053aefb494f544f7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://203cba48e280628c33b464c336957085420ca2d345b1acfb5c8b8968ed2317e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://203cba48e280628c33b464c336957085420ca2d345b1acfb5c8b8968ed2317e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a10cb7826668319845471d465df1784a2cfba12ec3892e986bcf316c0c5e3c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a10cb7826668319845471d465df1784a2cfba12ec3892e986bcf316c0c5e3c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc93eb03f5f0cada6bf132a9de4446bc1b8ecafee3769bf18feeecea9d71478f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc93eb03f5f0cada6bf132a9de4446bc1b8ecafee3769bf18feeecea9d71478f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ckn94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:05Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:05 crc kubenswrapper[4853]: I1122 07:12:05.959697 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-pd6gs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9cc2bf97-eb39-4b0c-abda-99b49bb530fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bzn4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bzn4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:15Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-pd6gs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:05Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:05 crc kubenswrapper[4853]: I1122 07:12:05.982011 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8dcea4b-e887-437a-972d-e6e10d41f725\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://753693005f8490e9928780fd4f230a1c8aba453e1ccbca71e70299799bfa46ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64d65e4d54cc8971b6f6349f7d32243b7faa5bb5974b22ed904a36fa34bc69fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0ba60480f038947ca4a2b9f963d38cb004714735d411218eaf6231614c4e21f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c83df4b798499177cfbd1372a939649b6cebb7
1b50b695af5ba7815cf450669\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed9e89cbcdfc1ac8430f617ee31a367c03be0a407da8af967930fa821e6a12a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://742d02f24a7096b76f82453ab15e531d4a378b4bbf0bb36eb1a1cb4ae1873165\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://742d02f24a7096b76f82453ab15e531d4a378b4bbf0bb36eb1a1cb4ae1873165\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2253bab71d535b61f7489679566abc1acf210e0d5f69773fedbb02befab17c2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2253bab71d535b61f7489679566abc1acf210e0d5f69773fedbb02befab17c2d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:21Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8fa71a988b649937f137ffb7533456570bf4fcd199134424fc7bff2b62335f1e\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fa71a988b649937f137ffb7533456570bf4fcd199134424fc7bff2b62335f1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:16Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:05Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:05 crc kubenswrapper[4853]: I1122 07:12:05.997937 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9c07602dc6c3b45809dd76a7079002e54e0bae24f7e3bc3470b463389303e74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:05Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:06 crc kubenswrapper[4853]: I1122 07:12:06.013436 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:06Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:06 crc kubenswrapper[4853]: I1122 07:12:06.019399 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:06 crc kubenswrapper[4853]: I1122 07:12:06.019453 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:06 crc kubenswrapper[4853]: I1122 07:12:06.019468 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:06 crc kubenswrapper[4853]: I1122 07:12:06.019489 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:06 crc kubenswrapper[4853]: I1122 07:12:06.019503 4853 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:06Z","lastTransitionTime":"2025-11-22T07:12:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:06 crc kubenswrapper[4853]: I1122 07:12:06.027233 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9nx9m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120cba0a-6e0b-40b3-8c15-46e7ff7c8641\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7b7b92862fc6cca812cb5c0aa9b320fbbd11f8b458d591b56cacab985a79edd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4p7q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9nx9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:06Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:06 crc kubenswrapper[4853]: I1122 07:12:06.043276 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"19e6b5d3-a090-47b9-bddc-dd2aaccd4ff4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96b61660fcd78f2f81eab2c1a0cb32018a5b32d49581167e33e7dca8ba5ddf2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a0d61ce2e58ce8d9abdd1fb9688722d8e25f8354553aef76b8cd4b3897135a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21be0b3346d0dd20c21a867b8fbd30e6291d09bd1745febbc2face7610f6d831\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e4d5ab3d780b4ec1360dd8a07fc7de5d4daec3a5f8334010949d4fd60b5516b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6dbbb1c22bd116e5913c6bc774b4705f0e4342861ceae30dc0f53c30d6fa39f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T07:10:53Z\\\",\\\"message\\\":\\\" envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1122 07:10:53.494379 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1122 07:10:53.495005 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1122 07:10:53.495036 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1122 07:10:53.495110 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1122 07:10:53.495131 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1122 07:10:53.496216 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1763795444\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1763795444\\\\\\\\\\\\\\\" (2025-11-22 06:10:44 +0000 UTC to 2026-11-22 06:10:44 +0000 UTC (now=2025-11-22 07:10:53.49617265 +0000 UTC))\\\\\\\"\\\\nI1122 07:10:53.496261 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1122 07:10:53.496336 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1122 07:10:53.497036 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1122 07:10:53.497137 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1122 07:10:53.498721 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI1122 07:10:53.500291 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF1122 07:10:53.501305 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:32Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c1ee716027cb8b663ee9faec3faed873df5079418b9b46574cf3b3a74048eef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13ca1fba1325e0ae8556def3ba0c98aa0aff6e3b9ed2aab66a5c950d2bace44e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13ca1fba1325e0ae8556def3ba0c98aa0aff6e3b9ed2aab66a5c950d2bace44e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:06Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:06 crc kubenswrapper[4853]: I1122 07:12:06.057449 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mlpz8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c09200ba-013f-45e3-b581-8523557344b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3feee491b899c2b2f58330fe2e87ba5404deca4249c5d39a6d3aa08acd10ef29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvk87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mlpz8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:06Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:06 crc kubenswrapper[4853]: I1122 07:12:06.081293 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqtsz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"893f7e02-580a-4093-ab42-ea73ffffcfe6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://959a487fe36b2bf7615c5ed65bc8255b52332f6e2d973912a5e028ff35324f2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://979f6b48ff6c91deca2bb0784d1ec745c4c0d70abdfd6f519eb82058fec05c5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34d7101a46b49cedde91b3502f1bc962179253a1bfbe5f21599ce89a91ab905d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28ab876013e6c4eb3a4806b0184f1d6615a478a34b6d8ee790d29bccd77f33df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccf377912acc39f2d30013ef3bd58a8268b927fd89da2306fcf00cb89c30893e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a08eb68f9d8b071226ac64744097fe0097ab78f86242872c4e0aacdd213adbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bfe75c62e217cccff97aad20cda18675013af6a3b1b10ef60227be8ea4965fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0bfe75c62e217cccff97aad20cda18675013af6a3b1b10ef60227be8ea4965fc\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-22T07:11:38Z\\\",\\\"message\\\":\\\"r for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:38Z is after 2025-08-24T17:21:41Z]\\\\nI1122 07:11:38.093678 6733 services_controller.go:443] Built service openshift-etcd/etcd LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.5.253\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:2379, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}, services.lbConfig{vips:[]string{\\\\\\\"10.217.5.253\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:9979, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI1122 \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:37Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-pqtsz_openshift-ovn-kubernetes(893f7e02-580a-4093-ab42-ea73ffffcfe6)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://902e41148618e1099d651f2a4cbfa00ca1dd11533b086564f090dd498ecc1b95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7b9396399dd9787c1addcb4e2e7697d78fee817d56cafd408b08b2b7d0cc5f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7b9396399dd9787c1addcb4e2e7697d78fee817d56cafd408b08b2b7d0cc5f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pqtsz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:06Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:06 crc kubenswrapper[4853]: I1122 07:12:06.122627 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:06 crc kubenswrapper[4853]: I1122 07:12:06.122680 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:06 crc kubenswrapper[4853]: I1122 07:12:06.122691 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:06 crc kubenswrapper[4853]: I1122 07:12:06.122711 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:06 crc kubenswrapper[4853]: I1122 07:12:06.122724 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:06Z","lastTransitionTime":"2025-11-22T07:12:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:12:06 crc kubenswrapper[4853]: I1122 07:12:06.226473 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:06 crc kubenswrapper[4853]: I1122 07:12:06.226520 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:06 crc kubenswrapper[4853]: I1122 07:12:06.226530 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:06 crc kubenswrapper[4853]: I1122 07:12:06.226547 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:06 crc kubenswrapper[4853]: I1122 07:12:06.226560 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:06Z","lastTransitionTime":"2025-11-22T07:12:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:06 crc kubenswrapper[4853]: I1122 07:12:06.329555 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:06 crc kubenswrapper[4853]: I1122 07:12:06.329603 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:06 crc kubenswrapper[4853]: I1122 07:12:06.329612 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:06 crc kubenswrapper[4853]: I1122 07:12:06.329632 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:06 crc kubenswrapper[4853]: I1122 07:12:06.329648 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:06Z","lastTransitionTime":"2025-11-22T07:12:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:06 crc kubenswrapper[4853]: I1122 07:12:06.432004 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:06 crc kubenswrapper[4853]: I1122 07:12:06.432061 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:06 crc kubenswrapper[4853]: I1122 07:12:06.432071 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:06 crc kubenswrapper[4853]: I1122 07:12:06.432090 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:06 crc kubenswrapper[4853]: I1122 07:12:06.432102 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:06Z","lastTransitionTime":"2025-11-22T07:12:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:12:06 crc kubenswrapper[4853]: I1122 07:12:06.534649 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:06 crc kubenswrapper[4853]: I1122 07:12:06.534711 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:06 crc kubenswrapper[4853]: I1122 07:12:06.534723 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:06 crc kubenswrapper[4853]: I1122 07:12:06.534774 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:06 crc kubenswrapper[4853]: I1122 07:12:06.534789 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:06Z","lastTransitionTime":"2025-11-22T07:12:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:06 crc kubenswrapper[4853]: I1122 07:12:06.638446 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:06 crc kubenswrapper[4853]: I1122 07:12:06.638495 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:06 crc kubenswrapper[4853]: I1122 07:12:06.638507 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:06 crc kubenswrapper[4853]: I1122 07:12:06.638527 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:06 crc kubenswrapper[4853]: I1122 07:12:06.638540 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:06Z","lastTransitionTime":"2025-11-22T07:12:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:06 crc kubenswrapper[4853]: I1122 07:12:06.741851 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:06 crc kubenswrapper[4853]: I1122 07:12:06.741915 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:06 crc kubenswrapper[4853]: I1122 07:12:06.741929 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:06 crc kubenswrapper[4853]: I1122 07:12:06.741948 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:06 crc kubenswrapper[4853]: I1122 07:12:06.741962 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:06Z","lastTransitionTime":"2025-11-22T07:12:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:12:06 crc kubenswrapper[4853]: I1122 07:12:06.747044 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 07:12:06 crc kubenswrapper[4853]: I1122 07:12:06.747178 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pd6gs" Nov 22 07:12:06 crc kubenswrapper[4853]: E1122 07:12:06.747309 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 07:12:06 crc kubenswrapper[4853]: I1122 07:12:06.747385 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:12:06 crc kubenswrapper[4853]: I1122 07:12:06.747417 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 07:12:06 crc kubenswrapper[4853]: E1122 07:12:06.747518 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 07:12:06 crc kubenswrapper[4853]: E1122 07:12:06.747828 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pd6gs" podUID="9cc2bf97-eb39-4b0c-abda-99b49bb530fd" Nov 22 07:12:06 crc kubenswrapper[4853]: E1122 07:12:06.748027 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 07:12:06 crc kubenswrapper[4853]: I1122 07:12:06.844346 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:06 crc kubenswrapper[4853]: I1122 07:12:06.844378 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:06 crc kubenswrapper[4853]: I1122 07:12:06.844389 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:06 crc kubenswrapper[4853]: I1122 07:12:06.844403 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:06 crc kubenswrapper[4853]: I1122 07:12:06.844413 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:06Z","lastTransitionTime":"2025-11-22T07:12:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:06 crc kubenswrapper[4853]: I1122 07:12:06.947219 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:06 crc kubenswrapper[4853]: I1122 07:12:06.947258 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:06 crc kubenswrapper[4853]: I1122 07:12:06.947266 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:06 crc kubenswrapper[4853]: I1122 07:12:06.947285 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:06 crc kubenswrapper[4853]: I1122 07:12:06.947296 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:06Z","lastTransitionTime":"2025-11-22T07:12:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:12:07 crc kubenswrapper[4853]: I1122 07:12:07.051304 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:07 crc kubenswrapper[4853]: I1122 07:12:07.051385 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:07 crc kubenswrapper[4853]: I1122 07:12:07.051407 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:07 crc kubenswrapper[4853]: I1122 07:12:07.051437 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:07 crc kubenswrapper[4853]: I1122 07:12:07.051459 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:07Z","lastTransitionTime":"2025-11-22T07:12:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:07 crc kubenswrapper[4853]: I1122 07:12:07.155170 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:07 crc kubenswrapper[4853]: I1122 07:12:07.155248 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:07 crc kubenswrapper[4853]: I1122 07:12:07.155286 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:07 crc kubenswrapper[4853]: I1122 07:12:07.155315 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:07 crc kubenswrapper[4853]: I1122 07:12:07.155332 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:07Z","lastTransitionTime":"2025-11-22T07:12:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:07 crc kubenswrapper[4853]: I1122 07:12:07.258617 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:07 crc kubenswrapper[4853]: I1122 07:12:07.258664 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:07 crc kubenswrapper[4853]: I1122 07:12:07.258679 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:07 crc kubenswrapper[4853]: I1122 07:12:07.258702 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:07 crc kubenswrapper[4853]: I1122 07:12:07.258715 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:07Z","lastTransitionTime":"2025-11-22T07:12:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:12:07 crc kubenswrapper[4853]: I1122 07:12:07.362362 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:07 crc kubenswrapper[4853]: I1122 07:12:07.362413 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:07 crc kubenswrapper[4853]: I1122 07:12:07.362425 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:07 crc kubenswrapper[4853]: I1122 07:12:07.362443 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:07 crc kubenswrapper[4853]: I1122 07:12:07.362453 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:07Z","lastTransitionTime":"2025-11-22T07:12:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:07 crc kubenswrapper[4853]: I1122 07:12:07.464595 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:07 crc kubenswrapper[4853]: I1122 07:12:07.464661 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:07 crc kubenswrapper[4853]: I1122 07:12:07.464669 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:07 crc kubenswrapper[4853]: I1122 07:12:07.464710 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:07 crc kubenswrapper[4853]: I1122 07:12:07.464721 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:07Z","lastTransitionTime":"2025-11-22T07:12:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:07 crc kubenswrapper[4853]: I1122 07:12:07.567907 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:07 crc kubenswrapper[4853]: I1122 07:12:07.567963 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:07 crc kubenswrapper[4853]: I1122 07:12:07.567973 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:07 crc kubenswrapper[4853]: I1122 07:12:07.567995 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:07 crc kubenswrapper[4853]: I1122 07:12:07.568005 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:07Z","lastTransitionTime":"2025-11-22T07:12:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:12:07 crc kubenswrapper[4853]: I1122 07:12:07.671118 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:07 crc kubenswrapper[4853]: I1122 07:12:07.671171 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:07 crc kubenswrapper[4853]: I1122 07:12:07.671184 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:07 crc kubenswrapper[4853]: I1122 07:12:07.671207 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:07 crc kubenswrapper[4853]: I1122 07:12:07.671222 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:07Z","lastTransitionTime":"2025-11-22T07:12:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:07 crc kubenswrapper[4853]: I1122 07:12:07.748596 4853 scope.go:117] "RemoveContainer" containerID="0bfe75c62e217cccff97aad20cda18675013af6a3b1b10ef60227be8ea4965fc" Nov 22 07:12:07 crc kubenswrapper[4853]: I1122 07:12:07.773561 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:07 crc kubenswrapper[4853]: I1122 07:12:07.773599 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:07 crc kubenswrapper[4853]: I1122 07:12:07.773611 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:07 crc kubenswrapper[4853]: I1122 07:12:07.773629 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:07 crc kubenswrapper[4853]: I1122 07:12:07.773640 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:07Z","lastTransitionTime":"2025-11-22T07:12:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:12:07 crc kubenswrapper[4853]: I1122 07:12:07.877247 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:07 crc kubenswrapper[4853]: I1122 07:12:07.877309 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:07 crc kubenswrapper[4853]: I1122 07:12:07.877320 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:07 crc kubenswrapper[4853]: I1122 07:12:07.877359 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:07 crc kubenswrapper[4853]: I1122 07:12:07.877373 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:07Z","lastTransitionTime":"2025-11-22T07:12:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:07 crc kubenswrapper[4853]: I1122 07:12:07.979479 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:07 crc kubenswrapper[4853]: I1122 07:12:07.979520 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:07 crc kubenswrapper[4853]: I1122 07:12:07.979530 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:07 crc kubenswrapper[4853]: I1122 07:12:07.979551 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:07 crc kubenswrapper[4853]: I1122 07:12:07.979562 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:07Z","lastTransitionTime":"2025-11-22T07:12:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:08 crc kubenswrapper[4853]: I1122 07:12:08.082045 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:08 crc kubenswrapper[4853]: I1122 07:12:08.082121 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:08 crc kubenswrapper[4853]: I1122 07:12:08.082133 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:08 crc kubenswrapper[4853]: I1122 07:12:08.082157 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:08 crc kubenswrapper[4853]: I1122 07:12:08.082168 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:08Z","lastTransitionTime":"2025-11-22T07:12:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:12:08 crc kubenswrapper[4853]: I1122 07:12:08.185696 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:08 crc kubenswrapper[4853]: I1122 07:12:08.185784 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:08 crc kubenswrapper[4853]: I1122 07:12:08.185797 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:08 crc kubenswrapper[4853]: I1122 07:12:08.185817 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:08 crc kubenswrapper[4853]: I1122 07:12:08.185830 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:08Z","lastTransitionTime":"2025-11-22T07:12:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:08 crc kubenswrapper[4853]: I1122 07:12:08.289148 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:08 crc kubenswrapper[4853]: I1122 07:12:08.289217 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:08 crc kubenswrapper[4853]: I1122 07:12:08.289226 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:08 crc kubenswrapper[4853]: I1122 07:12:08.289245 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:08 crc kubenswrapper[4853]: I1122 07:12:08.289256 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:08Z","lastTransitionTime":"2025-11-22T07:12:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:08 crc kubenswrapper[4853]: I1122 07:12:08.393240 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:08 crc kubenswrapper[4853]: I1122 07:12:08.393361 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:08 crc kubenswrapper[4853]: I1122 07:12:08.393382 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:08 crc kubenswrapper[4853]: I1122 07:12:08.393439 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:08 crc kubenswrapper[4853]: I1122 07:12:08.393450 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:08Z","lastTransitionTime":"2025-11-22T07:12:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:12:08 crc kubenswrapper[4853]: I1122 07:12:08.496864 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:08 crc kubenswrapper[4853]: I1122 07:12:08.496981 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:08 crc kubenswrapper[4853]: I1122 07:12:08.497008 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:08 crc kubenswrapper[4853]: I1122 07:12:08.497042 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:08 crc kubenswrapper[4853]: I1122 07:12:08.497068 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:08Z","lastTransitionTime":"2025-11-22T07:12:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:08 crc kubenswrapper[4853]: I1122 07:12:08.600100 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:08 crc kubenswrapper[4853]: I1122 07:12:08.600171 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:08 crc kubenswrapper[4853]: I1122 07:12:08.600188 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:08 crc kubenswrapper[4853]: I1122 07:12:08.600211 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:08 crc kubenswrapper[4853]: I1122 07:12:08.600225 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:08Z","lastTransitionTime":"2025-11-22T07:12:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:08 crc kubenswrapper[4853]: I1122 07:12:08.703790 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:08 crc kubenswrapper[4853]: I1122 07:12:08.703854 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:08 crc kubenswrapper[4853]: I1122 07:12:08.703867 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:08 crc kubenswrapper[4853]: I1122 07:12:08.703894 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:08 crc kubenswrapper[4853]: I1122 07:12:08.703907 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:08Z","lastTransitionTime":"2025-11-22T07:12:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:12:08 crc kubenswrapper[4853]: I1122 07:12:08.747168 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pd6gs" Nov 22 07:12:08 crc kubenswrapper[4853]: I1122 07:12:08.747202 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 07:12:08 crc kubenswrapper[4853]: I1122 07:12:08.747252 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:12:08 crc kubenswrapper[4853]: E1122 07:12:08.747334 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pd6gs" podUID="9cc2bf97-eb39-4b0c-abda-99b49bb530fd" Nov 22 07:12:08 crc kubenswrapper[4853]: I1122 07:12:08.747345 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 07:12:08 crc kubenswrapper[4853]: E1122 07:12:08.747550 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 07:12:08 crc kubenswrapper[4853]: E1122 07:12:08.747936 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 07:12:08 crc kubenswrapper[4853]: E1122 07:12:08.748029 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 07:12:08 crc kubenswrapper[4853]: I1122 07:12:08.807585 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:08 crc kubenswrapper[4853]: I1122 07:12:08.807673 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:08 crc kubenswrapper[4853]: I1122 07:12:08.807687 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:08 crc kubenswrapper[4853]: I1122 07:12:08.807709 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:08 crc kubenswrapper[4853]: I1122 07:12:08.807725 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:08Z","lastTransitionTime":"2025-11-22T07:12:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:08 crc kubenswrapper[4853]: I1122 07:12:08.888097 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pqtsz_893f7e02-580a-4093-ab42-ea73ffffcfe6/ovnkube-controller/2.log" Nov 22 07:12:08 crc kubenswrapper[4853]: I1122 07:12:08.890877 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqtsz" event={"ID":"893f7e02-580a-4093-ab42-ea73ffffcfe6","Type":"ContainerStarted","Data":"6bc7f34ec100b4e47e45d01a9176361b33c988b6033ef25f9b662050421df6ef"} Nov 22 07:12:08 crc kubenswrapper[4853]: I1122 07:12:08.891676 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-pqtsz" Nov 22 07:12:08 crc kubenswrapper[4853]: I1122 07:12:08.910404 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:08 crc kubenswrapper[4853]: I1122 07:12:08.910430 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:08 crc kubenswrapper[4853]: I1122 07:12:08.910439 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:08 crc kubenswrapper[4853]: I1122 07:12:08.910458 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:08 crc kubenswrapper[4853]: I1122 07:12:08.910469 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:08Z","lastTransitionTime":"2025-11-22T07:12:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:12:08 crc kubenswrapper[4853]: I1122 07:12:08.917960 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqtsz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"893f7e02-580a-4093-ab42-ea73ffffcfe6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://959a487fe36b2bf7615c5ed65bc8255b52332f6e2d973912a5e028ff35324f2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://979f6b48ff6c91deca2bb0784d1ec745c4c0d70abdfd6f519eb82058fec05c5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://34d7101a46b49cedde91b3502f1bc962179253a1bfbe5f21599ce89a91ab905d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28ab876013e6c4eb3a4806b0184f1d6615a478a34b6d8ee790d29bccd77f33df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccf377912acc39f2d30013ef3bd58a8268b927fd89da2306fcf00cb89c30893e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a08eb68f9d8b071226ac64744097fe0097ab78f86242872c4e0aacdd213adbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6bc7f34ec100b4e47e45d01a9176361b33c988b6033ef25f9b662050421df6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0bfe75c62e217cccff97aad20cda18675013af6a3b1b10ef60227be8ea4965fc\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-22T07:11:38Z\\\",\\\"message\\\":\\\"r for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:38Z is after 2025-08-24T17:21:41Z]\\\\nI1122 07:11:38.093678 6733 services_controller.go:443] Built service openshift-etcd/etcd LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.5.253\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:2379, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}, services.lbConfig{vips:[]string{\\\\\\\"10.217.5.253\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:9979, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI1122 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:37Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:12:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://902e41148618e1099d651f2a4cbfa00ca1dd11533b086564f090dd498ecc1b95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"conta
inerID\\\":\\\"cri-o://a7b9396399dd9787c1addcb4e2e7697d78fee817d56cafd408b08b2b7d0cc5f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7b9396399dd9787c1addcb4e2e7697d78fee817d56cafd408b08b2b7d0cc5f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pqtsz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:08Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:08 crc kubenswrapper[4853]: I1122 07:12:08.930398 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9nx9m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"120cba0a-6e0b-40b3-8c15-46e7ff7c8641\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7b7b92862fc6cca812cb5c0aa9b320fbbd11f8b458d591b56cacab985a79edd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4p7q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9nx9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:08Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:08 crc kubenswrapper[4853]: I1122 07:12:08.947612 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"19e6b5d3-a090-47b9-bddc-dd2aaccd4ff4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96b61660fcd78f2f81eab2c1a0cb32018a5b32d49581167e33e7dca8ba5ddf2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a0d61ce2e58ce8d9abdd1fb9688722d8e25f8354553aef76b8cd4b3897135a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21be0b3346d0dd20c21a867b8fbd30e6291d09bd1745febbc2face7610f6d831\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e4d5ab3d780b4ec1360dd8a07fc7de5d4daec3a5f8334010949d4fd60b5516b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6dbbb1c22bd116e5913c6bc774b4705f0e4342861ceae30dc0f53c30d6fa39f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T07:10:53Z\\\",\\\"message\\\":\\\" envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1122 07:10:53.494379 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1122 07:10:53.495005 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1122 07:10:53.495036 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1122 07:10:53.495110 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1122 07:10:53.495131 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1122 07:10:53.496216 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1763795444\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1763795444\\\\\\\\\\\\\\\" (2025-11-22 06:10:44 +0000 UTC to 2026-11-22 06:10:44 +0000 UTC (now=2025-11-22 07:10:53.49617265 +0000 UTC))\\\\\\\"\\\\nI1122 07:10:53.496261 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1122 07:10:53.496336 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1122 07:10:53.497036 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1122 07:10:53.497137 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1122 07:10:53.498721 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI1122 07:10:53.500291 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF1122 07:10:53.501305 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:32Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c1ee716027cb8b663ee9faec3faed873df5079418b9b46574cf3b3a74048eef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13ca1fba1325e0ae8556def3ba0c98aa0aff6e3b9ed2aab66a5c950d2bace44e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13ca1fba1325e0ae8556def3ba0c98aa0aff6e3b9ed2aab66a5c950d2bace44e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:08Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:08 crc kubenswrapper[4853]: I1122 07:12:08.963027 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mlpz8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c09200ba-013f-45e3-b581-8523557344b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3feee491b899c2b2f58330fe2e87ba5404deca4249c5d39a6d3aa08acd10ef29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvk87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mlpz8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:08Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:08 crc kubenswrapper[4853]: I1122 07:12:08.978677 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://347fc1ee4909acd862f23a9f25b4ac7ab7271ff03f0f726479d9b711820970c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:08Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:08 crc kubenswrapper[4853]: I1122 07:12:08.998243 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"476c875a-2b87-419a-8042-0ba059620fd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://babd614ecd140db7467335933695fcc122380ef2c6a3d4ae6e89b0ed29e87d69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zkhrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28cf28a4f0e05df5ad55eff2ab13e375fb1d5725f1d0e3b85c7c9cd785cf4453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zkhrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fflvd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:08Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:09 crc kubenswrapper[4853]: I1122 07:12:09.013720 4853 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:09 crc kubenswrapper[4853]: I1122 07:12:09.013782 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:09 crc kubenswrapper[4853]: I1122 07:12:09.013795 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:09 crc kubenswrapper[4853]: I1122 07:12:09.013814 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:09 crc kubenswrapper[4853]: I1122 07:12:09.013825 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:09Z","lastTransitionTime":"2025-11-22T07:12:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:09 crc kubenswrapper[4853]: I1122 07:12:09.015822 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nhlw4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81cf3334-f910-4d46-be00-b3cd66ba8ed4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbfa536576b192a75c2a3eef9dfde8c7f7de0a8e6edae3870f45f1c44cc48163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ckff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86bcf649be20aa1cc1bce89663ea6b7b920ac2548fbf8a22c87636bdafc4b329\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ckff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nhlw4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:09Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:09 crc kubenswrapper[4853]: I1122 07:12:09.029952 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63551235-20f7-4ccc-a132-9f5302e01da5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0368eb3431a7b8e0225b3728f72d601ce3f9639f06175b96818fd798b33e4143\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fc661089ab1f7286ead68fd510a0848c9df8dd41a8ed349d03b43b72e1a369a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380
066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdc0676190f39e3a925b593e663ec1af1fa253b8177b7cb81fd49515e0c55c3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a97a0d92e5fdd8906fa4f150c4b7d95fdd1f3b92ac45a46e0a6846fd4a15bb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a97a0d92e5fdd8906fa4f150c4b7d95fdd1f3b92ac45a46e0a6846fd4a15bb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:19Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:16Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:09Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:09 crc kubenswrapper[4853]: I1122 07:12:09.044850 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b27716f9e6c912f968c4a340199cdc2dde0262d86b4832f84cae8daefa831909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a2fb699e2e8a016fa383c1d4ed2d4f0d69c26b79fb2f0dd735a84de2fd5a23d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:09Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:09 crc kubenswrapper[4853]: I1122 07:12:09.059375 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:09Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:09 crc kubenswrapper[4853]: I1122 07:12:09.076197 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9003355-0dc6-42ad-950d-a7884e333f4c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f0019c0473b7ccc974dc1659a7ae0565b39d69af17094c420625387ecc9d7f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94704fd95e75549ea29613f00da78d4517cc64896cf1274c9c6f784a61f4ad4a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T07:10:53Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI1122 07:10:26.514729 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI1122 07:10:26.525241 1 observer_polling.go:159] Starting file observer\\\\nI1122 07:10:26.883412 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI1122 07:10:26.897202 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI1122 07:10:50.420853 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nI1122 07:10:51.525898 1 observer_polling.go:162] Shutting down file observer\\\\nF1122 07:10:53.463148 1 cmd.go:179] failed checking apiserver connectivity: client rate limiter Wait returned an error: context canceled\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:21Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9736c38776f22909b8e8a832683d1b881f020257f7183397286e0dd23c973a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30f542bc05dc23bc8dff5029894d08a6fea8333de31a66432814b7bbc412c4fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5d8d77818159ca98019aac809db4b1b2460723bc56b9b6728eec11faeefd026\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager
-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:09Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:09 crc kubenswrapper[4853]: I1122 07:12:09.090368 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4cc3002-cfaf-47cf-b539-3815924af5c3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://666c3902a5cb3b755fe3b5861568b744fb3ffbd28f72d7fd22d18b387486de03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6143a86b720cb8a29f4ac3d68bf2693f92fb50f528cdd7269e022115795ef14b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":
{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6143a86b720cb8a29f4ac3d68bf2693f92fb50f528cdd7269e022115795ef14b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:09Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:09 crc kubenswrapper[4853]: I1122 07:12:09.109346 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:09Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:09 crc kubenswrapper[4853]: I1122 07:12:09.116521 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:09 crc kubenswrapper[4853]: I1122 07:12:09.116587 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:09 crc kubenswrapper[4853]: I1122 07:12:09.116604 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:09 crc kubenswrapper[4853]: I1122 07:12:09.116631 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:09 crc kubenswrapper[4853]: I1122 07:12:09.116650 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:09Z","lastTransitionTime":"2025-11-22T07:12:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:12:09 crc kubenswrapper[4853]: I1122 07:12:09.132153 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:09Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:09 crc kubenswrapper[4853]: I1122 07:12:09.148253 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rvgxj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbbe3472-17cc-48dd-8e46-393b00149429\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://338e7cc28de696b2bd165b4b7d21bb9029ee9f270cf1d43c65ea3934262f0d7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5acc4b664aaf1d5b0471beb25334e65152bc3b9907f911f90e4e7899aaa1ce8d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-22T07:11:57Z\\\",\\\"message\\\":\\\"2025-11-22T07:11:10+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_b0993be5-55b1-4411-9f2c-d55ea5267c12\\\\n2025-11-22T07:11:10+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_b0993be5-55b1-4411-9f2c-d55ea5267c12 to /host/opt/cni/bin/\\\\n2025-11-22T07:11:12Z [verbose] multus-daemon started\\\\n2025-11-22T07:11:12Z [verbose] Readiness Indicator file check\\\\n2025-11-22T07:11:57Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:01Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ct6dm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rvgxj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:09Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:09 crc kubenswrapper[4853]: I1122 07:12:09.165101 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ckn94" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f60b37f-d6f5-4145-a3e7-cfe92fca6d77\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbf630e17f662af8880d1f34d5073a4f64e723987205f7bbb473a73808c7e935\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3adc38c2534c8026e260a63060426102130ca1bcbcb4e3741aaf4e87170a4c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3adc38c2534c8026e260a63060426102130ca1bcbcb4e3741aaf4e87170a4c7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db0b80c2471d5179ede5c4055fcebaabe08892f1534218952c2a19ab599be022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db0b80c2471d5179ede5c4055fcebaabe08892f1534218952c2a19ab599be022\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://157ea229b1effe5885acafb7b48db5695864ff001b1293c053aefb494f544f7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://157ea229b1effe5885acafb7b48db5695864ff001b1293c053aefb494f544f7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://203cba48e280628c33b464c336957085420ca2d345b1acfb5c8b8968ed2317e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://203cba48e280628c33b464c336957085420ca2d345b1acfb5c8b8968ed2317e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a10cb7826668319845471d465df1784a2cfba12ec3892e986bcf316c0c5e3c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a10cb7826668319845471d465df1784a2cfba12ec3892e986bcf316c0c5e3c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc93eb03f5f0cada6bf132a9de4446bc1b8ecafee3769bf18feeecea9d71478f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc93eb03f5f0cada6bf132a9de4446bc1b8ecafee3769bf18feeecea9d71478f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ckn94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:09Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:09 crc kubenswrapper[4853]: I1122 07:12:09.177720 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-pd6gs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9cc2bf97-eb39-4b0c-abda-99b49bb530fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bzn4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bzn4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:15Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-pd6gs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:09Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:09 crc kubenswrapper[4853]: I1122 07:12:09.196660 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8dcea4b-e887-437a-972d-e6e10d41f725\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://753693005f8490e9928780fd4f230a1c8aba453e1ccbca71e70299799bfa46ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64d65e4d54cc8971b6f6349f7d32243b7faa5bb5974b22ed904a36fa34bc69fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0ba60480f038947ca4a2b9f963d38cb004714735d411218eaf6231614c4e21f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c83df4b798499177cfbd1372a939649b6cebb7
1b50b695af5ba7815cf450669\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed9e89cbcdfc1ac8430f617ee31a367c03be0a407da8af967930fa821e6a12a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://742d02f24a7096b76f82453ab15e531d4a378b4bbf0bb36eb1a1cb4ae1873165\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://742d02f24a7096b76f82453ab15e531d4a378b4bbf0bb36eb1a1cb4ae1873165\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2253bab71d535b61f7489679566abc1acf210e0d5f69773fedbb02befab17c2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2253bab71d535b61f7489679566abc1acf210e0d5f69773fedbb02befab17c2d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:21Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8fa71a988b649937f137ffb7533456570bf4fcd199134424fc7bff2b62335f1e\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fa71a988b649937f137ffb7533456570bf4fcd199134424fc7bff2b62335f1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:16Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:09Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:09 crc kubenswrapper[4853]: I1122 07:12:09.212813 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9c07602dc6c3b45809dd76a7079002e54e0bae24f7e3bc3470b463389303e74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:09Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:09 crc kubenswrapper[4853]: I1122 07:12:09.219768 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:09 crc kubenswrapper[4853]: I1122 07:12:09.219828 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:09 crc kubenswrapper[4853]: I1122 07:12:09.219843 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:09 crc kubenswrapper[4853]: I1122 07:12:09.219866 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:09 crc kubenswrapper[4853]: I1122 07:12:09.219881 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:09Z","lastTransitionTime":"2025-11-22T07:12:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:09 crc kubenswrapper[4853]: I1122 07:12:09.322937 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:09 crc kubenswrapper[4853]: I1122 07:12:09.322994 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:09 crc kubenswrapper[4853]: I1122 07:12:09.323004 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:09 crc kubenswrapper[4853]: I1122 07:12:09.323026 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:09 crc kubenswrapper[4853]: I1122 07:12:09.323037 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:09Z","lastTransitionTime":"2025-11-22T07:12:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:12:09 crc kubenswrapper[4853]: I1122 07:12:09.426352 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:09 crc kubenswrapper[4853]: I1122 07:12:09.426426 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:09 crc kubenswrapper[4853]: I1122 07:12:09.426448 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:09 crc kubenswrapper[4853]: I1122 07:12:09.426474 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:09 crc kubenswrapper[4853]: I1122 07:12:09.426492 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:09Z","lastTransitionTime":"2025-11-22T07:12:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:09 crc kubenswrapper[4853]: I1122 07:12:09.528990 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:09 crc kubenswrapper[4853]: I1122 07:12:09.529040 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:09 crc kubenswrapper[4853]: I1122 07:12:09.529050 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:09 crc kubenswrapper[4853]: I1122 07:12:09.529070 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:09 crc kubenswrapper[4853]: I1122 07:12:09.529080 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:09Z","lastTransitionTime":"2025-11-22T07:12:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:09 crc kubenswrapper[4853]: I1122 07:12:09.631973 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:09 crc kubenswrapper[4853]: I1122 07:12:09.632053 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:09 crc kubenswrapper[4853]: I1122 07:12:09.632088 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:09 crc kubenswrapper[4853]: I1122 07:12:09.632123 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:09 crc kubenswrapper[4853]: I1122 07:12:09.632149 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:09Z","lastTransitionTime":"2025-11-22T07:12:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:12:09 crc kubenswrapper[4853]: I1122 07:12:09.734405 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:09 crc kubenswrapper[4853]: I1122 07:12:09.734553 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:09 crc kubenswrapper[4853]: I1122 07:12:09.734568 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:09 crc kubenswrapper[4853]: I1122 07:12:09.734588 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:09 crc kubenswrapper[4853]: I1122 07:12:09.734660 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:09Z","lastTransitionTime":"2025-11-22T07:12:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:09 crc kubenswrapper[4853]: I1122 07:12:09.837675 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:09 crc kubenswrapper[4853]: I1122 07:12:09.837723 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:09 crc kubenswrapper[4853]: I1122 07:12:09.837738 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:09 crc kubenswrapper[4853]: I1122 07:12:09.837781 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:09 crc kubenswrapper[4853]: I1122 07:12:09.837795 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:09Z","lastTransitionTime":"2025-11-22T07:12:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:12:09 crc kubenswrapper[4853]: I1122 07:12:09.897387 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pqtsz_893f7e02-580a-4093-ab42-ea73ffffcfe6/ovnkube-controller/3.log" Nov 22 07:12:09 crc kubenswrapper[4853]: I1122 07:12:09.897972 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pqtsz_893f7e02-580a-4093-ab42-ea73ffffcfe6/ovnkube-controller/2.log" Nov 22 07:12:09 crc kubenswrapper[4853]: I1122 07:12:09.901339 4853 generic.go:334] "Generic (PLEG): container finished" podID="893f7e02-580a-4093-ab42-ea73ffffcfe6" containerID="6bc7f34ec100b4e47e45d01a9176361b33c988b6033ef25f9b662050421df6ef" exitCode=1 Nov 22 07:12:09 crc kubenswrapper[4853]: I1122 07:12:09.901383 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqtsz" event={"ID":"893f7e02-580a-4093-ab42-ea73ffffcfe6","Type":"ContainerDied","Data":"6bc7f34ec100b4e47e45d01a9176361b33c988b6033ef25f9b662050421df6ef"} Nov 22 07:12:09 crc kubenswrapper[4853]: I1122 07:12:09.901431 4853 scope.go:117] "RemoveContainer" containerID="0bfe75c62e217cccff97aad20cda18675013af6a3b1b10ef60227be8ea4965fc" Nov 22 07:12:09 crc kubenswrapper[4853]: I1122 07:12:09.902368 4853 scope.go:117] "RemoveContainer" containerID="6bc7f34ec100b4e47e45d01a9176361b33c988b6033ef25f9b662050421df6ef" Nov 22 07:12:09 crc kubenswrapper[4853]: E1122 07:12:09.902701 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-pqtsz_openshift-ovn-kubernetes(893f7e02-580a-4093-ab42-ea73ffffcfe6)\"" pod="openshift-ovn-kubernetes/ovnkube-node-pqtsz" podUID="893f7e02-580a-4093-ab42-ea73ffffcfe6" Nov 22 07:12:09 crc kubenswrapper[4853]: I1122 07:12:09.918175 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"19e6b5d3-a090-47b9-bddc-dd2aaccd4ff4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96b61660fcd78f2f81eab2c1a0cb32018a5b32d49581167e33e7dca8ba5ddf2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a0d61ce2e58ce8d9abdd1fb9688722d8e25f8354553aef76b8cd4b3897135a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21be0b3346d0dd20c21a867b8fbd30e6291d09bd1745febbc2face7610f6d831\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e4d5ab3d780b4ec1360dd8a07fc7de5d4daec3a5f8334010949d4fd60b5516b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6dbbb1c22bd116e5913c6bc774b4705f0e4342861ceae30dc0f53c30d6fa39f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T07:10:53Z\\\",\\\"message\\\":\\\" envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1122 07:10:53.494379 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1122 07:10:53.495005 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1122 07:10:53.495036 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1122 07:10:53.495110 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1122 07:10:53.495131 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1122 07:10:53.496216 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1763795444\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1763795444\\\\\\\\\\\\\\\" (2025-11-22 06:10:44 +0000 UTC to 2026-11-22 06:10:44 +0000 UTC (now=2025-11-22 07:10:53.49617265 +0000 UTC))\\\\\\\"\\\\nI1122 07:10:53.496261 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1122 07:10:53.496336 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1122 07:10:53.497036 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1122 07:10:53.497137 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1122 07:10:53.498721 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI1122 07:10:53.500291 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF1122 07:10:53.501305 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:32Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c1ee716027cb8b663ee9faec3faed873df5079418b9b46574cf3b3a74048eef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13ca1fba1325e0ae8556def3ba0c98aa0aff6e3b9ed2aab66a5c950d2bace44e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13ca1fba1325e0ae8556def3ba0c98aa0aff6e3b9ed2aab66a5c950d2bace44e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:09Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:09 crc kubenswrapper[4853]: I1122 07:12:09.933594 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mlpz8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c09200ba-013f-45e3-b581-8523557344b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3feee491b899c2b2f58330fe2e87ba5404deca4249c5d39a6d3aa08acd10ef29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvk87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mlpz8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:09Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:09 crc kubenswrapper[4853]: I1122 07:12:09.940066 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:09 crc kubenswrapper[4853]: I1122 07:12:09.940125 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:09 crc kubenswrapper[4853]: I1122 07:12:09.940142 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:09 crc kubenswrapper[4853]: I1122 07:12:09.940169 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:09 crc kubenswrapper[4853]: I1122 07:12:09.940187 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:09Z","lastTransitionTime":"2025-11-22T07:12:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:09 crc kubenswrapper[4853]: I1122 07:12:09.957148 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqtsz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"893f7e02-580a-4093-ab42-ea73ffffcfe6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://959a487fe36b2bf7615c5ed65bc8255b52332f6e2d973912a5e028ff35324f2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://979f6b48ff6c91deca2bb0784d1ec745c4c0d70abdfd6f519eb82058fec05c5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34d7101a46b49cedde91b3502f1bc962179253a1bfbe5f21599ce89a91ab905d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28ab876013e6c4eb3a4806b0184f1d6615a478a34b6d8ee790d29bccd77f33df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccf377912acc39f2d30013ef3bd58a8268b927fd89da2306fcf00cb89c30893e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a08eb68f9d8b071226ac64744097fe0097ab78f86242872c4e0aacdd213adbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d7732574532
65a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6bc7f34ec100b4e47e45d01a9176361b33c988b6033ef25f9b662050421df6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0bfe75c62e217cccff97aad20cda18675013af6a3b1b10ef60227be8ea4965fc\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-22T07:11:38Z\\\",\\\"message\\\":\\\"r for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:11:38Z is after 2025-08-24T17:21:41Z]\\\\nI1122 07:11:38.093678 6733 services_controller.go:443] Built service openshift-etcd/etcd LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.5.253\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:2379, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}, services.lbConfig{vips:[]string{\\\\\\\"10.217.5.253\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:9979, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI1122 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:37Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6bc7f34ec100b4e47e45d01a9176361b33c988b6033ef25f9b662050421df6ef\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-22T07:12:09Z\\\",\\\"message\\\":\\\"rvices.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.183\\\\\\\", Port:9001, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI1122 07:12:09.361353 7075 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-network-console/networking-console-plugin\\\\\\\"}\\\\nI1122 07:12:09.361379 7075 services_controller.go:360] Finished syncing service networking-console-plugin on namespace openshift-network-console for network=default : 5.654351ms\\\\nF1122 07:12:09.361393 7075 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: 
ce\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:12:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://902e41148618e1099d651f2a4cbfa00ca1dd11533b086564f090dd498ecc1b95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7b9396399dd9787c1addcb4e2e7697d78fee817d56cafd408b08b2b7d0cc5f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20
99482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7b9396399dd9787c1addcb4e2e7697d78fee817d56cafd408b08b2b7d0cc5f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pqtsz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:09Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:09 crc kubenswrapper[4853]: I1122 07:12:09.968083 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9nx9m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120cba0a-6e0b-40b3-8c15-46e7ff7c8641\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7b7b92862fc6cca812cb5c0aa9b320fbbd11f8b458d591b56cacab985a79edd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4p7q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.12
6.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9nx9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:09Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:09 crc kubenswrapper[4853]: I1122 07:12:09.979397 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63551235-20f7-4ccc-a132-9f5302e01da5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0368eb3431a7b8e0225b3728f72d601ce3f9639f06175b96818fd798b33e4143\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fc661089ab1f7286ead68fd510a0848c9df8dd41a8ed349d03b43b72e1a369a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdc0676190f39e3a925b593e663ec1af1fa253b8177b7cb81fd49515e0c55c3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf
86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a97a0d92e5fdd8906fa4f150c4b7d95fdd1f3b92ac45a46e0a6846fd4a15bb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a97a0d92e5fdd8906fa4f150c4b7d95fdd1f3b92ac45a46e0a6846fd4a15bb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:19Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:16Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:09Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:09 crc kubenswrapper[4853]: I1122 07:12:09.992316 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b27716f9e6c912f968c4a340199cdc2dde0262d86b4832f84cae8daefa831909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a2fb699e2e8a016fa383c1d4ed2d4f0d69c26b79fb2f0dd735a84de2fd5a23d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:09Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:10 crc kubenswrapper[4853]: I1122 07:12:10.005691 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://347fc1ee4909acd862f23a9f25b4ac7ab7271ff03f0f726479d9b711820970c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:10Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:10 crc kubenswrapper[4853]: I1122 07:12:10.018586 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"476c875a-2b87-419a-8042-0ba059620fd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://babd614ecd140db7467335933695fcc122380ef2c6a3d4ae6e89b0ed29e87d69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zkhrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28cf28a4f0e05df5ad55eff2ab13e375fb1d5725f1d0e3b85c7c9cd785cf4453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zkhrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fflvd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:10Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:10 crc kubenswrapper[4853]: I1122 07:12:10.030273 4853 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nhlw4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81cf3334-f910-4d46-be00-b3cd66ba8ed4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbfa536576b192a75c2a3eef9dfde8c7f7de0a8e6edae3870f45f1c44cc48163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ckff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86bcf649be20aa1cc1bce89663ea6b7b920ac2548fbf8a22c87636bdafc4b329\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ckff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nhlw4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:10Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:10 crc kubenswrapper[4853]: I1122 07:12:10.042537 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:10 crc kubenswrapper[4853]: I1122 07:12:10.042582 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:10 crc kubenswrapper[4853]: I1122 07:12:10.042593 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:10 crc kubenswrapper[4853]: I1122 07:12:10.042612 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:10 crc kubenswrapper[4853]: I1122 07:12:10.042626 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:10Z","lastTransitionTime":"2025-11-22T07:12:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:10 crc kubenswrapper[4853]: I1122 07:12:10.044257 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9003355-0dc6-42ad-950d-a7884e333f4c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f0019c0473b7ccc974dc1659a7ae0565b39d69af17094c420625387ecc9d7f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94704fd95e75549ea29613f00da78d4517cc64896cf1274c9c6f784a61f4ad4a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T07:10:53Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI1122 07:10:26.514729 1 
leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI1122 07:10:26.525241 1 observer_polling.go:159] Starting file observer\\\\nI1122 07:10:26.883412 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI1122 07:10:26.897202 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI1122 07:10:50.420853 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nI1122 07:10:51.525898 1 observer_polling.go:162] Shutting down file observer\\\\nF1122 07:10:53.463148 1 cmd.go:179] failed checking apiserver connectivity: client rate limiter Wait returned an error: context canceled\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:21Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9736c38776f22909b8e8a832683d1b881f020257f7183397286e0dd23c973a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30f542bc05dc23bc8dff5029894d08a6fea8333de31a66432814b7bbc412c4fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5d8d77818159ca98019aac809db4b1b2460723bc56b9b6728eec11faeefd026\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/opensh
ift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:10Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:10 crc kubenswrapper[4853]: I1122 07:12:10.055659 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4cc3002-cfaf-47cf-b539-3815924af5c3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://666c3902a5cb3b755fe3b5861568b744fb3ffbd28f72d7fd22d18b387486de03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6143a86b720cb8a29f4ac3d68bf2693f92fb50f528cdd7269e022115795ef14b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d664
38c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6143a86b720cb8a29f4ac3d68bf2693f92fb50f528cdd7269e022115795ef14b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:10Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:10 crc kubenswrapper[4853]: I1122 07:12:10.068840 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:10Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:10 crc kubenswrapper[4853]: I1122 07:12:10.083233 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ckn94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f60b37f-d6f5-4145-a3e7-cfe92fca6d77\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbf630e17f662af8880d1f34d5073a4f64e723987205f7bbb473a73808c7e935\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3adc38c2534c8026e260a63060426102130ca1bcbcb4e3741aaf4e87170a4c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"start
ed\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3adc38c2534c8026e260a63060426102130ca1bcbcb4e3741aaf4e87170a4c7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db0b80c2471d5179ede5c4055fcebaabe08892f1534218952c2a19ab599be022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db0b80c2471d5179ede5c4055fcebaabe08892f1534218952c2a19ab599be022\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://157ea229b1effe5885acafb7b48db5695864ff001b1293c053aefb494f544f7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://157ea229b1effe5885acafb7b48db5695864ff001b1293c053aefb494f544f7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"containerID\\\":\\\"cri-o://203cba48e280628c33b464c336957085420ca2d345b1acfb5c8b8968ed2317e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://203cba48e280628c33b464c336957085420ca2d345b1acfb5c8b8968ed2317e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a10cb7826668319845471d465df1784a2cfba12ec3892e986bcf316c0c5e3c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a10cb7826668319845471d465df1784a2cfba12ec3892e986bcf316c0c5e3c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc93eb03f5f0cada6bf132a9de4446bc1b8ecafee3769bf18feeecea9d71478f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc93eb03f5f0cada6bf132a9de4446bc1b8ecafee3769bf18feeecea9d71478f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"
system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ckn94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:10Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:10 crc kubenswrapper[4853]: I1122 07:12:10.094691 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-pd6gs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9cc2bf97-eb39-4b0c-abda-99b49bb530fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bzn4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bzn4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:15Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-pd6gs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:10Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:10 crc kubenswrapper[4853]: I1122 07:12:10.113702 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8dcea4b-e887-437a-972d-e6e10d41f725\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://753693005f8490e9928780fd4f230a1c8aba453e1ccbca71e70299799bfa46ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64d65e4d54cc8971b6f6349f7d32243b7faa5bb5974b22ed904a36fa34bc69fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0ba60480f038947ca4a2b9f963d38cb004714735d411218eaf6231614c4e21f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c83df4b798499177cfbd1372a939649b6cebb7
1b50b695af5ba7815cf450669\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed9e89cbcdfc1ac8430f617ee31a367c03be0a407da8af967930fa821e6a12a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://742d02f24a7096b76f82453ab15e531d4a378b4bbf0bb36eb1a1cb4ae1873165\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://742d02f24a7096b76f82453ab15e531d4a378b4bbf0bb36eb1a1cb4ae1873165\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2253bab71d535b61f7489679566abc1acf210e0d5f69773fedbb02befab17c2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2253bab71d535b61f7489679566abc1acf210e0d5f69773fedbb02befab17c2d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:21Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8fa71a988b649937f137ffb7533456570bf4fcd199134424fc7bff2b62335f1e\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fa71a988b649937f137ffb7533456570bf4fcd199134424fc7bff2b62335f1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:16Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:10Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:10 crc kubenswrapper[4853]: I1122 07:12:10.130111 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9c07602dc6c3b45809dd76a7079002e54e0bae24f7e3bc3470b463389303e74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:10Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:10 crc kubenswrapper[4853]: I1122 07:12:10.141995 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:10Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:10 crc kubenswrapper[4853]: I1122 07:12:10.145068 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:10 crc kubenswrapper[4853]: I1122 07:12:10.145103 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:10 crc kubenswrapper[4853]: I1122 07:12:10.145113 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:10 crc kubenswrapper[4853]: I1122 07:12:10.145134 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:10 crc kubenswrapper[4853]: I1122 07:12:10.145143 4853 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:10Z","lastTransitionTime":"2025-11-22T07:12:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:10 crc kubenswrapper[4853]: I1122 07:12:10.156652 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:10Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:10 crc kubenswrapper[4853]: I1122 07:12:10.170315 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rvgxj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbbe3472-17cc-48dd-8e46-393b00149429\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://338e7cc28de696b2bd165b4b7d21bb9029ee9f270cf1d43c65ea3934262f0d7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5acc4b664aaf1d5b0471beb25334e65152bc3b9907f911f90e4e7899aaa1ce8d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-22T07:11:57Z\\\",\\\"message\\\":\\\"2025-11-22T07:11:10+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_b0993be5-55b1-4411-9f2c-d55ea5267c12\\\\n2025-11-22T07:11:10+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_b0993be5-55b1-4411-9f2c-d55ea5267c12 to /host/opt/cni/bin/\\\\n2025-11-22T07:11:12Z [verbose] multus-daemon started\\\\n2025-11-22T07:11:12Z [verbose] Readiness Indicator file check\\\\n2025-11-22T07:11:57Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:01Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ct6dm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rvgxj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:10Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:10 crc kubenswrapper[4853]: I1122 07:12:10.248121 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:10 crc kubenswrapper[4853]: I1122 07:12:10.248184 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:10 crc kubenswrapper[4853]: I1122 07:12:10.248192 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:10 crc kubenswrapper[4853]: I1122 07:12:10.248213 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:10 crc kubenswrapper[4853]: I1122 07:12:10.248223 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:10Z","lastTransitionTime":"2025-11-22T07:12:10Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:10 crc kubenswrapper[4853]: I1122 07:12:10.352035 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:10 crc kubenswrapper[4853]: I1122 07:12:10.352119 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:10 crc kubenswrapper[4853]: I1122 07:12:10.352132 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:10 crc kubenswrapper[4853]: I1122 07:12:10.352151 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:10 crc kubenswrapper[4853]: I1122 07:12:10.352163 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:10Z","lastTransitionTime":"2025-11-22T07:12:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:10 crc kubenswrapper[4853]: I1122 07:12:10.455181 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:10 crc kubenswrapper[4853]: I1122 07:12:10.455239 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:10 crc kubenswrapper[4853]: I1122 07:12:10.455255 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:10 crc kubenswrapper[4853]: I1122 07:12:10.455276 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:10 crc kubenswrapper[4853]: I1122 07:12:10.455290 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:10Z","lastTransitionTime":"2025-11-22T07:12:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:12:10 crc kubenswrapper[4853]: I1122 07:12:10.558453 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:10 crc kubenswrapper[4853]: I1122 07:12:10.558512 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:10 crc kubenswrapper[4853]: I1122 07:12:10.558528 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:10 crc kubenswrapper[4853]: I1122 07:12:10.558549 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:10 crc kubenswrapper[4853]: I1122 07:12:10.558562 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:10Z","lastTransitionTime":"2025-11-22T07:12:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:10 crc kubenswrapper[4853]: I1122 07:12:10.661486 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:10 crc kubenswrapper[4853]: I1122 07:12:10.661531 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:10 crc kubenswrapper[4853]: I1122 07:12:10.661549 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:10 crc kubenswrapper[4853]: I1122 07:12:10.661573 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:10 crc kubenswrapper[4853]: I1122 07:12:10.661591 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:10Z","lastTransitionTime":"2025-11-22T07:12:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:10 crc kubenswrapper[4853]: I1122 07:12:10.747713 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 07:12:10 crc kubenswrapper[4853]: I1122 07:12:10.747825 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:12:10 crc kubenswrapper[4853]: I1122 07:12:10.747836 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pd6gs" Nov 22 07:12:10 crc kubenswrapper[4853]: E1122 07:12:10.747940 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 07:12:10 crc kubenswrapper[4853]: I1122 07:12:10.747993 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 07:12:10 crc kubenswrapper[4853]: E1122 07:12:10.748154 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 07:12:10 crc kubenswrapper[4853]: E1122 07:12:10.748351 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pd6gs" podUID="9cc2bf97-eb39-4b0c-abda-99b49bb530fd" Nov 22 07:12:10 crc kubenswrapper[4853]: E1122 07:12:10.748445 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 07:12:10 crc kubenswrapper[4853]: I1122 07:12:10.764124 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:10 crc kubenswrapper[4853]: I1122 07:12:10.764228 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:10 crc kubenswrapper[4853]: I1122 07:12:10.764250 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:10 crc kubenswrapper[4853]: I1122 07:12:10.764282 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:10 crc kubenswrapper[4853]: I1122 07:12:10.764306 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:10Z","lastTransitionTime":"2025-11-22T07:12:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:12:10 crc kubenswrapper[4853]: I1122 07:12:10.866569 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:10 crc kubenswrapper[4853]: I1122 07:12:10.866618 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:10 crc kubenswrapper[4853]: I1122 07:12:10.866627 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:10 crc kubenswrapper[4853]: I1122 07:12:10.866643 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:10 crc kubenswrapper[4853]: I1122 07:12:10.866659 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:10Z","lastTransitionTime":"2025-11-22T07:12:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:10 crc kubenswrapper[4853]: I1122 07:12:10.905919 4853 scope.go:117] "RemoveContainer" containerID="6bc7f34ec100b4e47e45d01a9176361b33c988b6033ef25f9b662050421df6ef" Nov 22 07:12:10 crc kubenswrapper[4853]: E1122 07:12:10.906122 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-pqtsz_openshift-ovn-kubernetes(893f7e02-580a-4093-ab42-ea73ffffcfe6)\"" pod="openshift-ovn-kubernetes/ovnkube-node-pqtsz" podUID="893f7e02-580a-4093-ab42-ea73ffffcfe6" Nov 22 07:12:10 crc kubenswrapper[4853]: I1122 07:12:10.928192 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:10Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:10 crc kubenswrapper[4853]: I1122 07:12:10.946957 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9003355-0dc6-42ad-950d-a7884e333f4c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f0019c0473b7ccc974dc1659a7ae0565b39d69af17094c420625387ecc9d7f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94704fd95e75549ea29613f00da78d4517cc64896cf1274c9c6f784a61f4ad4a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T07:10:53Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI1122 07:10:26.514729 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI1122 07:10:26.525241 1 observer_polling.go:159] Starting file observer\\\\nI1122 07:10:26.883412 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI1122 07:10:26.897202 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI1122 07:10:50.420853 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nI1122 07:10:51.525898 1 observer_polling.go:162] Shutting down file observer\\\\nF1122 07:10:53.463148 1 cmd.go:179] failed checking apiserver connectivity: client rate limiter Wait returned an error: context canceled\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:21Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9736c38776f22909b8e8a832683d1b881f020257f7183397286e0dd23c973a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30f542bc05dc23bc8dff5029894d08a6fea8333de31a66432814b7bbc412c4fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5d8d77818159ca98019aac809db4b1b2460723bc56b9b6728eec11faeefd026\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager
-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:10Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:10 crc kubenswrapper[4853]: I1122 07:12:10.961088 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4cc3002-cfaf-47cf-b539-3815924af5c3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://666c3902a5cb3b755fe3b5861568b744fb3ffbd28f72d7fd22d18b387486de03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6143a86b720cb8a29f4ac3d68bf2693f92fb50f528cdd7269e022115795ef14b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":
{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6143a86b720cb8a29f4ac3d68bf2693f92fb50f528cdd7269e022115795ef14b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:10Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:10 crc kubenswrapper[4853]: I1122 07:12:10.969789 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:10 crc kubenswrapper[4853]: I1122 07:12:10.969829 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:10 crc kubenswrapper[4853]: I1122 07:12:10.969839 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:10 crc kubenswrapper[4853]: I1122 07:12:10.969856 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:10 crc kubenswrapper[4853]: I1122 07:12:10.969866 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:10Z","lastTransitionTime":"2025-11-22T07:12:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:12:10 crc kubenswrapper[4853]: I1122 07:12:10.977081 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:10Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:10 crc kubenswrapper[4853]: I1122 07:12:10.995033 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:10Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:11 crc kubenswrapper[4853]: I1122 07:12:11.009445 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rvgxj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbbe3472-17cc-48dd-8e46-393b00149429\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://338e7cc28de696b2bd165b4b7d21bb9029ee9f270cf1d43c65ea3934262f0d7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5acc4b664aaf1d5b0471beb25334e65152bc3b9907f911f90e4e7899aaa1ce8d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-22T07:11:57Z\\\",\\\"message\\\":\\\"2025-11-22T07:11:10+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_b0993be5-55b1-4411-9f2c-d55ea5267c12\\\\n2025-11-22T07:11:10+00:00 [cnibincopy] 
Successfully moved files in /host/opt/cni/bin/upgrade_b0993be5-55b1-4411-9f2c-d55ea5267c12 to /host/opt/cni/bin/\\\\n2025-11-22T07:11:12Z [verbose] multus-daemon started\\\\n2025-11-22T07:11:12Z [verbose] Readiness Indicator file check\\\\n2025-11-22T07:11:57Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:01Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ct6dm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rvgxj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:11Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:11 crc kubenswrapper[4853]: I1122 07:12:11.025175 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ckn94" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f60b37f-d6f5-4145-a3e7-cfe92fca6d77\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbf630e17f662af8880d1f34d5073a4f64e723987205f7bbb473a73808c7e935\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3adc38c2534c8026e260a63060426102130ca1bcbcb4e3741aaf4e87170a4c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3adc38c2534c8026e260a63060426102130ca1bcbcb4e3741aaf4e87170a4c7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db0b80c2471d5179ede5c4055fcebaabe08892f1534218952c2a19ab599be022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db0b80c2471d5179ede5c4055fcebaabe08892f1534218952c2a19ab599be022\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://157ea229b1effe5885acafb7b48db5695864ff001b1293c053aefb494f544f7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://157ea229b1effe5885acafb7b48db5695864ff001b1293c053aefb494f544f7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://203cba48e280628c33b464c336957085420ca2d345b1acfb5c8b8968ed2317e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://203cba48e280628c33b464c336957085420ca2d345b1acfb5c8b8968ed2317e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a10cb7826668319845471d465df1784a2cfba12ec3892e986bcf316c0c5e3c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a10cb7826668319845471d465df1784a2cfba12ec3892e986bcf316c0c5e3c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc93eb03f5f0cada6bf132a9de4446bc1b8ecafee3769bf18feeecea9d71478f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc93eb03f5f0cada6bf132a9de4446bc1b8ecafee3769bf18feeecea9d71478f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ckn94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:11Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:11 crc kubenswrapper[4853]: I1122 07:12:11.037618 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-pd6gs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9cc2bf97-eb39-4b0c-abda-99b49bb530fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bzn4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bzn4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:15Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-pd6gs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:11Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:11 crc kubenswrapper[4853]: I1122 07:12:11.069053 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8dcea4b-e887-437a-972d-e6e10d41f725\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://753693005f8490e9928780fd4f230a1c8aba453e1ccbca71e70299799bfa46ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64d65e4d54cc8971b6f6349f7d32243b7faa5bb5974b22ed904a36fa34bc69fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0ba60480f038947ca4a2b9f963d38cb004714735d411218eaf6231614c4e21f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c83df4b798499177cfbd1372a939649b6cebb7
1b50b695af5ba7815cf450669\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed9e89cbcdfc1ac8430f617ee31a367c03be0a407da8af967930fa821e6a12a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://742d02f24a7096b76f82453ab15e531d4a378b4bbf0bb36eb1a1cb4ae1873165\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://742d02f24a7096b76f82453ab15e531d4a378b4bbf0bb36eb1a1cb4ae1873165\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2253bab71d535b61f7489679566abc1acf210e0d5f69773fedbb02befab17c2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2253bab71d535b61f7489679566abc1acf210e0d5f69773fedbb02befab17c2d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:21Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8fa71a988b649937f137ffb7533456570bf4fcd199134424fc7bff2b62335f1e\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fa71a988b649937f137ffb7533456570bf4fcd199134424fc7bff2b62335f1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:16Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:11Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:11 crc kubenswrapper[4853]: I1122 07:12:11.073306 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:11 crc kubenswrapper[4853]: I1122 07:12:11.073383 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:11 crc kubenswrapper[4853]: I1122 07:12:11.073402 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:11 crc kubenswrapper[4853]: I1122 07:12:11.073430 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:11 crc kubenswrapper[4853]: I1122 07:12:11.073449 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:11Z","lastTransitionTime":"2025-11-22T07:12:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:12:11 crc kubenswrapper[4853]: I1122 07:12:11.093124 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9c07602dc6c3b45809dd76a7079002e54e0bae24f7e3bc3470b463389303e74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:11Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:11 crc kubenswrapper[4853]: I1122 07:12:11.123379 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqtsz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"893f7e02-580a-4093-ab42-ea73ffffcfe6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://959a487fe36b2bf7615c5ed65bc8255b52332f6e2d973912a5e028ff35324f2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://979f6b48ff6c91deca2bb0784d1ec745c4c0d70abdfd6f519eb82058fec05c5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34d7101a46b49cedde91b3502f1bc962179253a1bfbe5f21599ce89a91ab905d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28ab876013e6c4eb3a4806b0184f1d6615a478a34b6d8ee790d29bccd77f33df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccf377912acc39f2d30013ef3bd58a8268b927fd89da2306fcf00cb89c30893e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a08eb68f9d8b071226ac64744097fe0097ab78f86242872c4e0aacdd213adbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6bc7f34ec100b4e47e45d01a9176361b33c988b6033ef25f9b662050421df6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6bc7f34ec100b4e47e45d01a9176361b33c988b6033ef25f9b662050421df6ef\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-22T07:12:09Z\\\",\\\"message\\\":\\\"rvices.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.183\\\\\\\", Port:9001, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI1122 07:12:09.361353 7075 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-network-console/networking-console-plugin\\\\\\\"}\\\\nI1122 07:12:09.361379 7075 services_controller.go:360] Finished syncing service networking-console-plugin on namespace openshift-network-console for network=default : 5.654351ms\\\\nF1122 07:12:09.361393 7075 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: ce\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:12:08Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-pqtsz_openshift-ovn-kubernetes(893f7e02-580a-4093-ab42-ea73ffffcfe6)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://902e41148618e1099d651f2a4cbfa00ca1dd11533b086564f090dd498ecc1b95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7b9396399dd9787c1addcb4e2e7697d78fee817d56cafd408b08b2b7d0cc5f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7b9396399dd9787c1addcb4e2e7697d78fee817d56cafd408b08b2b7d0cc5f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pqtsz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:11Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:11 crc kubenswrapper[4853]: I1122 07:12:11.139940 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9nx9m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120cba0a-6e0b-40b3-8c15-46e7ff7c8641\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7b7b92862fc6cca812cb5c0aa9b320fbbd11f8b458d591b56cacab985a79edd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4p7q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9nx9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:11Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:11 crc kubenswrapper[4853]: I1122 07:12:11.155236 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"19e6b5d3-a090-47b9-bddc-dd2aaccd4ff4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96b61660fcd78f2f81eab2c1a0cb32018a5b32d49581167e33e7dca8ba5ddf2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a0d61ce2e58ce8d9abdd1fb9688722d8e25f8354553aef76b8cd4b3897135a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21be0b3346d0dd20c21a867b8fbd30e6291d09bd1745febbc2face7610f6d831\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-
apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e4d5ab3d780b4ec1360dd8a07fc7de5d4daec3a5f8334010949d4fd60b5516b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6dbbb1c22bd116e5913c6bc774b4705f0e4342861ceae30dc0f53c30d6fa39f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T07:10:53Z\\\",\\\"message\\\":\\\" envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1122 07:10:53.494379 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1122 07:10:53.495005 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1122 07:10:53.495036 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1122 07:10:53.495110 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1122 07:10:53.495131 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1122 07:10:53.496216 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1763795444\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1763795444\\\\\\\\\\\\\\\" (2025-11-22 06:10:44 +0000 UTC to 2026-11-22 06:10:44 +0000 UTC (now=2025-11-22 07:10:53.49617265 +0000 UTC))\\\\\\\"\\\\nI1122 07:10:53.496261 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1122 07:10:53.496336 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1122 07:10:53.497036 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1122 07:10:53.497137 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1122 07:10:53.498721 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI1122 07:10:53.500291 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF1122 07:10:53.501305 1 cmd.go:182] pods 
\\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:32Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c1ee716027cb8b663ee9faec3faed873df5079418b9b46574cf3b3a74048eef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13ca1fba1325e0ae8556def3ba0c98aa0aff6e3b9ed2aab66a5c950d2bace44e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13ca1fba1325e0ae8556def3ba0c98aa0aff6e3b9ed2aab66a5c950d2bace44e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:11Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:11 crc kubenswrapper[4853]: I1122 07:12:11.170494 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mlpz8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c09200ba-013f-45e3-b581-8523557344b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3feee491b899c2b2f58330fe2e87ba5404deca4249c5d39a6d3aa08acd10ef29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvk87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mlpz8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:11Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:11 crc kubenswrapper[4853]: I1122 07:12:11.176364 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:11 crc kubenswrapper[4853]: I1122 07:12:11.176435 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:11 crc kubenswrapper[4853]: I1122 07:12:11.176447 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:11 crc kubenswrapper[4853]: I1122 07:12:11.176504 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:11 crc kubenswrapper[4853]: I1122 07:12:11.176520 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:11Z","lastTransitionTime":"2025-11-22T07:12:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:11 crc kubenswrapper[4853]: I1122 07:12:11.188180 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://347fc1ee4909acd862f23a9f25b4ac7ab7271ff03f0f726479d9b711820970c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:11Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:11 crc kubenswrapper[4853]: I1122 07:12:11.205021 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"476c875a-2b87-419a-8042-0ba059620fd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://babd614ecd140db7467335933695fcc122380ef2c6a3d4ae6e89b0ed29e87d69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zkhrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28cf28a4f0e05df5ad55eff2ab13e375fb1d5725f1d0e3b85c7c9cd785cf4453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zkhrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fflvd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:11Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:11 crc kubenswrapper[4853]: I1122 07:12:11.223025 4853 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nhlw4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81cf3334-f910-4d46-be00-b3cd66ba8ed4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbfa536576b192a75c2a3eef9dfde8c7f7de0a8e6edae3870f45f1c44cc48163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ckff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86bcf649be20aa1cc1bce89663ea6b7b920ac2548fbf8a22c87636bdafc4b329\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ckff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nhlw4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:11Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:11 crc kubenswrapper[4853]: I1122 07:12:11.236490 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63551235-20f7-4ccc-a132-9f5302e01da5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0368eb3431a7b8e0225b3728f72d601ce3f9639f06175b96818fd798b33e4143\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fc661089ab1f7286ead68fd510a0848c9df8dd41a8ed349d03b43b72e1a369a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdc0676190f39e3a925b593e663ec1af1fa253b8177b7cb81fd49515e0c55c3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"
},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a97a0d92e5fdd8906fa4f150c4b7d95fdd1f3b92ac45a46e0a6846fd4a15bb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a97a0d92e5fdd8906fa4f150c4b7d95fdd1f3b92ac45a46e0a6846fd4a15bb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:19Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:16Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:11Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:11 crc kubenswrapper[4853]: I1122 07:12:11.250063 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b27716f9e6c912f968c4a340199cdc2dde0262d86b4832f84cae8daefa831909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a2fb699e2e8a016fa383c1d4ed2d4f0d69c26b79fb2f0dd735a84de2fd5a23d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:11Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:11 crc kubenswrapper[4853]: I1122 07:12:11.324135 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:11 crc kubenswrapper[4853]: I1122 07:12:11.324202 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:11 crc kubenswrapper[4853]: I1122 07:12:11.324218 4853 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Nov 22 07:12:11 crc kubenswrapper[4853]: I1122 07:12:11.324240 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:11 crc kubenswrapper[4853]: I1122 07:12:11.324252 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:11Z","lastTransitionTime":"2025-11-22T07:12:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:11 crc kubenswrapper[4853]: I1122 07:12:11.427706 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:11 crc kubenswrapper[4853]: I1122 07:12:11.428298 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:11 crc kubenswrapper[4853]: I1122 07:12:11.428312 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:11 crc kubenswrapper[4853]: I1122 07:12:11.428333 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:11 crc kubenswrapper[4853]: I1122 07:12:11.428348 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:11Z","lastTransitionTime":"2025-11-22T07:12:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:11 crc kubenswrapper[4853]: I1122 07:12:11.530769 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:11 crc kubenswrapper[4853]: I1122 07:12:11.530806 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:11 crc kubenswrapper[4853]: I1122 07:12:11.530815 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:11 crc kubenswrapper[4853]: I1122 07:12:11.530832 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:11 crc kubenswrapper[4853]: I1122 07:12:11.530843 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:11Z","lastTransitionTime":"2025-11-22T07:12:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:12:11 crc kubenswrapper[4853]: I1122 07:12:11.633334 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:11 crc kubenswrapper[4853]: I1122 07:12:11.633375 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:11 crc kubenswrapper[4853]: I1122 07:12:11.633385 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:11 crc kubenswrapper[4853]: I1122 07:12:11.633403 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:11 crc kubenswrapper[4853]: I1122 07:12:11.633413 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:11Z","lastTransitionTime":"2025-11-22T07:12:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:11 crc kubenswrapper[4853]: I1122 07:12:11.736302 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:11 crc kubenswrapper[4853]: I1122 07:12:11.736346 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:11 crc kubenswrapper[4853]: I1122 07:12:11.736354 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:11 crc kubenswrapper[4853]: I1122 07:12:11.736373 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:11 crc kubenswrapper[4853]: I1122 07:12:11.736383 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:11Z","lastTransitionTime":"2025-11-22T07:12:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:11 crc kubenswrapper[4853]: I1122 07:12:11.843715 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:11 crc kubenswrapper[4853]: I1122 07:12:11.843787 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:11 crc kubenswrapper[4853]: I1122 07:12:11.843799 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:11 crc kubenswrapper[4853]: I1122 07:12:11.843818 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:11 crc kubenswrapper[4853]: I1122 07:12:11.843854 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:11Z","lastTransitionTime":"2025-11-22T07:12:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:12:11 crc kubenswrapper[4853]: I1122 07:12:11.910614 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pqtsz_893f7e02-580a-4093-ab42-ea73ffffcfe6/ovnkube-controller/3.log" Nov 22 07:12:11 crc kubenswrapper[4853]: I1122 07:12:11.947728 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:11 crc kubenswrapper[4853]: I1122 07:12:11.947788 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:11 crc kubenswrapper[4853]: I1122 07:12:11.947828 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:11 crc kubenswrapper[4853]: I1122 07:12:11.947845 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:11 crc kubenswrapper[4853]: I1122 07:12:11.947861 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:11Z","lastTransitionTime":"2025-11-22T07:12:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:12 crc kubenswrapper[4853]: I1122 07:12:12.033670 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:12 crc kubenswrapper[4853]: I1122 07:12:12.033719 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:12 crc kubenswrapper[4853]: I1122 07:12:12.033735 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:12 crc kubenswrapper[4853]: I1122 07:12:12.033798 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:12 crc kubenswrapper[4853]: I1122 07:12:12.033813 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:12Z","lastTransitionTime":"2025-11-22T07:12:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:12:12 crc kubenswrapper[4853]: E1122 07:12:12.048886 4853 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:12:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:12:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:12:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:12:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:12:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:12:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:12:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:12:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d74141ce-7696-4d74-b510-3a9c2c375ecd\\\",\\\"systemUUID\\\":\\\"362c9708-b683-4c02-a83b-39323a200ef4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:12Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:12 crc kubenswrapper[4853]: I1122 07:12:12.052832 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:12 crc kubenswrapper[4853]: I1122 07:12:12.052878 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 22 07:12:12 crc kubenswrapper[4853]: I1122 07:12:12.052891 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:12 crc kubenswrapper[4853]: I1122 07:12:12.052907 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:12 crc kubenswrapper[4853]: I1122 07:12:12.052919 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:12Z","lastTransitionTime":"2025-11-22T07:12:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:12 crc kubenswrapper[4853]: E1122 07:12:12.066391 4853 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:12:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:12:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:12:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:12:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:12:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:12:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:12:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:12:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d74141ce-7696-4d74-b510-3a9c2c375ecd\\\",\\\"systemUUID\\\":\\\"362c9708-b683-4c02-a83b-39323a200ef4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:12Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:12 crc kubenswrapper[4853]: I1122 07:12:12.071629 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:12 crc kubenswrapper[4853]: I1122 07:12:12.071678 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
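Every failed patch above aborts at the same point: the kubelet's Post to https://127.0.0.1:9743, the node.network-node-identity.openshift.io admission webhook, fails TLS verification because the serving certificate's NotAfter (2025-08-24T17:21:41Z) is almost three months behind the node's clock (2025-11-22T07:12:12Z), the typical symptom of a CRC VM started long after its bundled certificates expired. A minimal Go sketch for confirming this from the node (not part of the log; only the address 127.0.0.1:9743 comes from the entries above, everything else is illustrative):

```go
// certcheck.go, illustrative sketch: fetch the serving certificate of the
// webhook endpoint the kubelet is failing to reach and print its validity
// window.
package main

import (
	"crypto/tls"
	"fmt"
	"log"
	"time"
)

func main() {
	// InsecureSkipVerify: we want to inspect the certificate even though
	// normal verification would (correctly) reject it as expired.
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{InsecureSkipVerify: true})
	if err != nil {
		log.Fatalf("dial: %v", err)
	}
	defer conn.Close()

	cert := conn.ConnectionState().PeerCertificates[0]
	now := time.Now().UTC()
	fmt.Printf("subject:   %s\n", cert.Subject)
	fmt.Printf("notBefore: %s\n", cert.NotBefore.UTC().Format(time.RFC3339))
	fmt.Printf("notAfter:  %s\n", cert.NotAfter.UTC().Format(time.RFC3339))
	// Same comparison crypto/x509 reports as "current time ... is after ..."
	// in the kubelet entries above.
	if now.After(cert.NotAfter) {
		fmt.Printf("certificate expired %s ago\n", now.Sub(cert.NotAfter).Round(time.Second))
	}
}
```

CRC is designed to renew expired certificates shortly after startup, so retry loops like the one recorded here typically clear once rotation completes; the check above just makes the root cause unambiguous while waiting.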
event="NodeHasNoDiskPressure" Nov 22 07:12:12 crc kubenswrapper[4853]: I1122 07:12:12.071691 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:12 crc kubenswrapper[4853]: I1122 07:12:12.071711 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:12 crc kubenswrapper[4853]: I1122 07:12:12.071725 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:12Z","lastTransitionTime":"2025-11-22T07:12:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:12 crc kubenswrapper[4853]: E1122 07:12:12.083455 4853 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:12:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:12:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:12:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:12:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:12:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:12:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:12:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:12:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d74141ce-7696-4d74-b510-3a9c2c375ecd\\\",\\\"systemUUID\\\":\\\"362c9708-b683-4c02-a83b-39323a200ef4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:12Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:12 crc kubenswrapper[4853]: I1122 07:12:12.086959 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:12 crc kubenswrapper[4853]: I1122 07:12:12.086998 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
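Interleaved with the webhook failures, the kubelet keeps re-recording the same node conditions and holding Ready at False for a related reason: the container runtime reports NetworkReady=false because no network plugin has written a config into /etc/kubernetes/cni/net.d/ yet. On this node the CNI config would come from OVN-Kubernetes (the ovnkube-node pod seen earlier in the log), which itself appears blocked behind the expired certificates. A small illustrative triage check that performs the equivalent directory test (the path comes from the log message; this is not kubelet code, and the extensions are the ones libcni conventionally loads):

```go
// cnicheck.go, illustrative sketch: report the same condition as
// "no CNI configuration file in /etc/kubernetes/cni/net.d/".
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	confDir := "/etc/kubernetes/cni/net.d" // path taken from the log message
	entries, err := os.ReadDir(confDir)
	if err != nil {
		fmt.Printf("NetworkReady=false: cannot read %s: %v\n", confDir, err)
		return
	}
	var configs []string
	for _, e := range entries {
		if e.IsDir() {
			continue
		}
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json": // config types CNI loaders accept
			configs = append(configs, e.Name())
		}
	}
	if len(configs) == 0 {
		// Mirrors the kubelet message repeated throughout this log.
		fmt.Printf("NetworkReady=false: no CNI configuration file in %s. Has your network provider started?\n", confDir)
		return
	}
	fmt.Printf("NetworkReady=true: found %v\n", configs)
}
```

Once the network provider starts and drops its config file into that directory, the NodeNotReady spam stops and the Ready condition flips back to True on the next sync.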
event="NodeHasNoDiskPressure" Nov 22 07:12:12 crc kubenswrapper[4853]: I1122 07:12:12.087010 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:12 crc kubenswrapper[4853]: I1122 07:12:12.087023 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:12 crc kubenswrapper[4853]: I1122 07:12:12.087031 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:12Z","lastTransitionTime":"2025-11-22T07:12:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:12 crc kubenswrapper[4853]: E1122 07:12:12.098256 4853 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:12:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:12:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:12:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:12:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:12:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:12:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:12:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:12:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d74141ce-7696-4d74-b510-3a9c2c375ecd\\\",\\\"systemUUID\\\":\\\"362c9708-b683-4c02-a83b-39323a200ef4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:12Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:12 crc kubenswrapper[4853]: I1122 07:12:12.101834 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:12 crc kubenswrapper[4853]: I1122 07:12:12.101919 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 22 07:12:12 crc kubenswrapper[4853]: I1122 07:12:12.101940 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:12 crc kubenswrapper[4853]: I1122 07:12:12.101969 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:12 crc kubenswrapper[4853]: I1122 07:12:12.101991 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:12Z","lastTransitionTime":"2025-11-22T07:12:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:12 crc kubenswrapper[4853]: E1122 07:12:12.117840 4853 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:12:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:12:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:12:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:12:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:12:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:12:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:12:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:12:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d74141ce-7696-4d74-b510-3a9c2c375ecd\\\",\\\"systemUUID\\\":\\\"362c9708-b683-4c02-a83b-39323a200ef4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:12Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:12 crc kubenswrapper[4853]: E1122 07:12:12.118093 4853 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 22 07:12:12 crc kubenswrapper[4853]: I1122 07:12:12.120454 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 22 07:12:12 crc kubenswrapper[4853]: I1122 07:12:12.120506 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:12 crc kubenswrapper[4853]: I1122 07:12:12.120519 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:12 crc kubenswrapper[4853]: I1122 07:12:12.120538 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:12 crc kubenswrapper[4853]: I1122 07:12:12.120558 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:12Z","lastTransitionTime":"2025-11-22T07:12:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:12 crc kubenswrapper[4853]: I1122 07:12:12.225481 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:12 crc kubenswrapper[4853]: I1122 07:12:12.225646 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:12 crc kubenswrapper[4853]: I1122 07:12:12.225666 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:12 crc kubenswrapper[4853]: I1122 07:12:12.225686 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:12 crc kubenswrapper[4853]: I1122 07:12:12.225699 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:12Z","lastTransitionTime":"2025-11-22T07:12:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:12 crc kubenswrapper[4853]: I1122 07:12:12.329312 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:12 crc kubenswrapper[4853]: I1122 07:12:12.329356 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:12 crc kubenswrapper[4853]: I1122 07:12:12.329367 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:12 crc kubenswrapper[4853]: I1122 07:12:12.329386 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:12 crc kubenswrapper[4853]: I1122 07:12:12.329397 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:12Z","lastTransitionTime":"2025-11-22T07:12:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Nov 22 07:12:12 crc kubenswrapper[4853]: I1122 07:12:12.329397 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:12Z","lastTransitionTime":"2025-11-22T07:12:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:12 crc kubenswrapper[4853]: I1122 07:12:12.431827 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:12 crc kubenswrapper[4853]: I1122 07:12:12.431885 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:12 crc kubenswrapper[4853]: I1122 07:12:12.431900 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:12 crc kubenswrapper[4853]: I1122 07:12:12.431920 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:12 crc kubenswrapper[4853]: I1122 07:12:12.431937 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:12Z","lastTransitionTime":"2025-11-22T07:12:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:12 crc kubenswrapper[4853]: I1122 07:12:12.535117 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:12 crc kubenswrapper[4853]: I1122 07:12:12.535163 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:12 crc kubenswrapper[4853]: I1122 07:12:12.535174 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:12 crc kubenswrapper[4853]: I1122 07:12:12.535193 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:12 crc kubenswrapper[4853]: I1122 07:12:12.535205 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:12Z","lastTransitionTime":"2025-11-22T07:12:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:12 crc kubenswrapper[4853]: I1122 07:12:12.638327 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:12 crc kubenswrapper[4853]: I1122 07:12:12.638366 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:12 crc kubenswrapper[4853]: I1122 07:12:12.638376 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:12 crc kubenswrapper[4853]: I1122 07:12:12.638396 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:12 crc kubenswrapper[4853]: I1122 07:12:12.638406 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:12Z","lastTransitionTime":"2025-11-22T07:12:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Nov 22 07:12:12 crc kubenswrapper[4853]: I1122 07:12:12.741108 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:12 crc kubenswrapper[4853]: I1122 07:12:12.741161 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:12 crc kubenswrapper[4853]: I1122 07:12:12.741171 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:12 crc kubenswrapper[4853]: I1122 07:12:12.741188 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:12 crc kubenswrapper[4853]: I1122 07:12:12.741198 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:12Z","lastTransitionTime":"2025-11-22T07:12:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:12 crc kubenswrapper[4853]: I1122 07:12:12.747521 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 07:12:12 crc kubenswrapper[4853]: I1122 07:12:12.747590 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:12:12 crc kubenswrapper[4853]: I1122 07:12:12.747638 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pd6gs" Nov 22 07:12:12 crc kubenswrapper[4853]: I1122 07:12:12.747858 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 07:12:12 crc kubenswrapper[4853]: E1122 07:12:12.748088 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 07:12:12 crc kubenswrapper[4853]: E1122 07:12:12.748204 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 07:12:12 crc kubenswrapper[4853]: E1122 07:12:12.748292 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 07:12:12 crc kubenswrapper[4853]: E1122 07:12:12.748429 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pd6gs" podUID="9cc2bf97-eb39-4b0c-abda-99b49bb530fd" Nov 22 07:12:12 crc kubenswrapper[4853]: I1122 07:12:12.844246 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:12 crc kubenswrapper[4853]: I1122 07:12:12.844307 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:12 crc kubenswrapper[4853]: I1122 07:12:12.844322 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:12 crc kubenswrapper[4853]: I1122 07:12:12.844345 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:12 crc kubenswrapper[4853]: I1122 07:12:12.844361 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:12Z","lastTransitionTime":"2025-11-22T07:12:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:12 crc kubenswrapper[4853]: I1122 07:12:12.947673 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:12 crc kubenswrapper[4853]: I1122 07:12:12.947732 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:12 crc kubenswrapper[4853]: I1122 07:12:12.947785 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:12 crc kubenswrapper[4853]: I1122 07:12:12.947808 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:12 crc kubenswrapper[4853]: I1122 07:12:12.947821 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:12Z","lastTransitionTime":"2025-11-22T07:12:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Nov 22 07:12:12 crc kubenswrapper[4853]: I1122 07:12:12.947821 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:12Z","lastTransitionTime":"2025-11-22T07:12:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:13 crc kubenswrapper[4853]: I1122 07:12:13.050548 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:13 crc kubenswrapper[4853]: I1122 07:12:13.050604 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:13 crc kubenswrapper[4853]: I1122 07:12:13.050618 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:13 crc kubenswrapper[4853]: I1122 07:12:13.050641 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:13 crc kubenswrapper[4853]: I1122 07:12:13.050651 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:13Z","lastTransitionTime":"2025-11-22T07:12:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:13 crc kubenswrapper[4853]: I1122 07:12:13.152917 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:13 crc kubenswrapper[4853]: I1122 07:12:13.152949 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:13 crc kubenswrapper[4853]: I1122 07:12:13.152957 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:13 crc kubenswrapper[4853]: I1122 07:12:13.152970 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:13 crc kubenswrapper[4853]: I1122 07:12:13.152980 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:13Z","lastTransitionTime":"2025-11-22T07:12:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:13 crc kubenswrapper[4853]: I1122 07:12:13.256070 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:13 crc kubenswrapper[4853]: I1122 07:12:13.256127 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:13 crc kubenswrapper[4853]: I1122 07:12:13.256137 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:13 crc kubenswrapper[4853]: I1122 07:12:13.256156 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:13 crc kubenswrapper[4853]: I1122 07:12:13.256171 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:13Z","lastTransitionTime":"2025-11-22T07:12:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Nov 22 07:12:13 crc kubenswrapper[4853]: I1122 07:12:13.358517 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:13 crc kubenswrapper[4853]: I1122 07:12:13.358562 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:13 crc kubenswrapper[4853]: I1122 07:12:13.358572 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:13 crc kubenswrapper[4853]: I1122 07:12:13.358591 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:13 crc kubenswrapper[4853]: I1122 07:12:13.358600 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:13Z","lastTransitionTime":"2025-11-22T07:12:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:13 crc kubenswrapper[4853]: I1122 07:12:13.460793 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:13 crc kubenswrapper[4853]: I1122 07:12:13.460837 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:13 crc kubenswrapper[4853]: I1122 07:12:13.460850 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:13 crc kubenswrapper[4853]: I1122 07:12:13.460868 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:13 crc kubenswrapper[4853]: I1122 07:12:13.460880 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:13Z","lastTransitionTime":"2025-11-22T07:12:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:13 crc kubenswrapper[4853]: I1122 07:12:13.563779 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:13 crc kubenswrapper[4853]: I1122 07:12:13.563830 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:13 crc kubenswrapper[4853]: I1122 07:12:13.563855 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:13 crc kubenswrapper[4853]: I1122 07:12:13.563880 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:13 crc kubenswrapper[4853]: I1122 07:12:13.563896 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:13Z","lastTransitionTime":"2025-11-22T07:12:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:12:13 crc kubenswrapper[4853]: I1122 07:12:13.667709 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:13 crc kubenswrapper[4853]: I1122 07:12:13.667808 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:13 crc kubenswrapper[4853]: I1122 07:12:13.667832 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:13 crc kubenswrapper[4853]: I1122 07:12:13.667856 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:13 crc kubenswrapper[4853]: I1122 07:12:13.667870 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:13Z","lastTransitionTime":"2025-11-22T07:12:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:13 crc kubenswrapper[4853]: I1122 07:12:13.770586 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:13 crc kubenswrapper[4853]: I1122 07:12:13.770630 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:13 crc kubenswrapper[4853]: I1122 07:12:13.770642 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:13 crc kubenswrapper[4853]: I1122 07:12:13.770662 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:13 crc kubenswrapper[4853]: I1122 07:12:13.770674 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:13Z","lastTransitionTime":"2025-11-22T07:12:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:13 crc kubenswrapper[4853]: I1122 07:12:13.873702 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:13 crc kubenswrapper[4853]: I1122 07:12:13.873756 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:13 crc kubenswrapper[4853]: I1122 07:12:13.873766 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:13 crc kubenswrapper[4853]: I1122 07:12:13.873784 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:13 crc kubenswrapper[4853]: I1122 07:12:13.873794 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:13Z","lastTransitionTime":"2025-11-22T07:12:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:12:13 crc kubenswrapper[4853]: I1122 07:12:13.976995 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:13 crc kubenswrapper[4853]: I1122 07:12:13.977058 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:13 crc kubenswrapper[4853]: I1122 07:12:13.977069 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:13 crc kubenswrapper[4853]: I1122 07:12:13.977094 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:13 crc kubenswrapper[4853]: I1122 07:12:13.977110 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:13Z","lastTransitionTime":"2025-11-22T07:12:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:14 crc kubenswrapper[4853]: I1122 07:12:14.080012 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:14 crc kubenswrapper[4853]: I1122 07:12:14.080055 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:14 crc kubenswrapper[4853]: I1122 07:12:14.080066 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:14 crc kubenswrapper[4853]: I1122 07:12:14.080085 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:14 crc kubenswrapper[4853]: I1122 07:12:14.080099 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:14Z","lastTransitionTime":"2025-11-22T07:12:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:14 crc kubenswrapper[4853]: I1122 07:12:14.182806 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:14 crc kubenswrapper[4853]: I1122 07:12:14.182875 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:14 crc kubenswrapper[4853]: I1122 07:12:14.182885 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:14 crc kubenswrapper[4853]: I1122 07:12:14.182903 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:14 crc kubenswrapper[4853]: I1122 07:12:14.182916 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:14Z","lastTransitionTime":"2025-11-22T07:12:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:12:14 crc kubenswrapper[4853]: I1122 07:12:14.286060 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:14 crc kubenswrapper[4853]: I1122 07:12:14.286160 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:14 crc kubenswrapper[4853]: I1122 07:12:14.286181 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:14 crc kubenswrapper[4853]: I1122 07:12:14.286211 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:14 crc kubenswrapper[4853]: I1122 07:12:14.286231 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:14Z","lastTransitionTime":"2025-11-22T07:12:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:14 crc kubenswrapper[4853]: I1122 07:12:14.389683 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:14 crc kubenswrapper[4853]: I1122 07:12:14.389826 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:14 crc kubenswrapper[4853]: I1122 07:12:14.389842 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:14 crc kubenswrapper[4853]: I1122 07:12:14.389863 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:14 crc kubenswrapper[4853]: I1122 07:12:14.389878 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:14Z","lastTransitionTime":"2025-11-22T07:12:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:14 crc kubenswrapper[4853]: I1122 07:12:14.493439 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:14 crc kubenswrapper[4853]: I1122 07:12:14.493474 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:14 crc kubenswrapper[4853]: I1122 07:12:14.493483 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:14 crc kubenswrapper[4853]: I1122 07:12:14.493499 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:14 crc kubenswrapper[4853]: I1122 07:12:14.493509 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:14Z","lastTransitionTime":"2025-11-22T07:12:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:12:14 crc kubenswrapper[4853]: I1122 07:12:14.596653 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:14 crc kubenswrapper[4853]: I1122 07:12:14.596713 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:14 crc kubenswrapper[4853]: I1122 07:12:14.596726 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:14 crc kubenswrapper[4853]: I1122 07:12:14.596764 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:14 crc kubenswrapper[4853]: I1122 07:12:14.596779 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:14Z","lastTransitionTime":"2025-11-22T07:12:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:14 crc kubenswrapper[4853]: I1122 07:12:14.700944 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:14 crc kubenswrapper[4853]: I1122 07:12:14.701005 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:14 crc kubenswrapper[4853]: I1122 07:12:14.701017 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:14 crc kubenswrapper[4853]: I1122 07:12:14.701043 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:14 crc kubenswrapper[4853]: I1122 07:12:14.701057 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:14Z","lastTransitionTime":"2025-11-22T07:12:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:14 crc kubenswrapper[4853]: I1122 07:12:14.747278 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pd6gs" Nov 22 07:12:14 crc kubenswrapper[4853]: I1122 07:12:14.747371 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 07:12:14 crc kubenswrapper[4853]: I1122 07:12:14.747371 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 07:12:14 crc kubenswrapper[4853]: E1122 07:12:14.747461 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-pd6gs" podUID="9cc2bf97-eb39-4b0c-abda-99b49bb530fd" Nov 22 07:12:14 crc kubenswrapper[4853]: E1122 07:12:14.747539 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 07:12:14 crc kubenswrapper[4853]: I1122 07:12:14.747592 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:12:14 crc kubenswrapper[4853]: E1122 07:12:14.747685 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 07:12:14 crc kubenswrapper[4853]: E1122 07:12:14.747888 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 07:12:14 crc kubenswrapper[4853]: I1122 07:12:14.804546 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:14 crc kubenswrapper[4853]: I1122 07:12:14.804598 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:14 crc kubenswrapper[4853]: I1122 07:12:14.804613 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:14 crc kubenswrapper[4853]: I1122 07:12:14.804634 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:14 crc kubenswrapper[4853]: I1122 07:12:14.804647 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:14Z","lastTransitionTime":"2025-11-22T07:12:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:12:14 crc kubenswrapper[4853]: I1122 07:12:14.907860 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:14 crc kubenswrapper[4853]: I1122 07:12:14.907920 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:14 crc kubenswrapper[4853]: I1122 07:12:14.907937 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:14 crc kubenswrapper[4853]: I1122 07:12:14.907960 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:14 crc kubenswrapper[4853]: I1122 07:12:14.907973 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:14Z","lastTransitionTime":"2025-11-22T07:12:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:15 crc kubenswrapper[4853]: I1122 07:12:15.011080 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:15 crc kubenswrapper[4853]: I1122 07:12:15.011164 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:15 crc kubenswrapper[4853]: I1122 07:12:15.011182 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:15 crc kubenswrapper[4853]: I1122 07:12:15.011213 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:15 crc kubenswrapper[4853]: I1122 07:12:15.011235 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:15Z","lastTransitionTime":"2025-11-22T07:12:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:15 crc kubenswrapper[4853]: I1122 07:12:15.114091 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:15 crc kubenswrapper[4853]: I1122 07:12:15.114151 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:15 crc kubenswrapper[4853]: I1122 07:12:15.114170 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:15 crc kubenswrapper[4853]: I1122 07:12:15.114195 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:15 crc kubenswrapper[4853]: I1122 07:12:15.114213 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:15Z","lastTransitionTime":"2025-11-22T07:12:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:12:15 crc kubenswrapper[4853]: I1122 07:12:15.218488 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:15 crc kubenswrapper[4853]: I1122 07:12:15.218534 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:15 crc kubenswrapper[4853]: I1122 07:12:15.218544 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:15 crc kubenswrapper[4853]: I1122 07:12:15.218562 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:15 crc kubenswrapper[4853]: I1122 07:12:15.218573 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:15Z","lastTransitionTime":"2025-11-22T07:12:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:15 crc kubenswrapper[4853]: I1122 07:12:15.322130 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:15 crc kubenswrapper[4853]: I1122 07:12:15.322230 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:15 crc kubenswrapper[4853]: I1122 07:12:15.322247 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:15 crc kubenswrapper[4853]: I1122 07:12:15.322271 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:15 crc kubenswrapper[4853]: I1122 07:12:15.322285 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:15Z","lastTransitionTime":"2025-11-22T07:12:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:15 crc kubenswrapper[4853]: I1122 07:12:15.425966 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:15 crc kubenswrapper[4853]: I1122 07:12:15.426015 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:15 crc kubenswrapper[4853]: I1122 07:12:15.426026 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:15 crc kubenswrapper[4853]: I1122 07:12:15.426046 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:15 crc kubenswrapper[4853]: I1122 07:12:15.426059 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:15Z","lastTransitionTime":"2025-11-22T07:12:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:12:15 crc kubenswrapper[4853]: I1122 07:12:15.529347 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:15 crc kubenswrapper[4853]: I1122 07:12:15.529406 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:15 crc kubenswrapper[4853]: I1122 07:12:15.529417 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:15 crc kubenswrapper[4853]: I1122 07:12:15.529439 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:15 crc kubenswrapper[4853]: I1122 07:12:15.529454 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:15Z","lastTransitionTime":"2025-11-22T07:12:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:15 crc kubenswrapper[4853]: I1122 07:12:15.632333 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:15 crc kubenswrapper[4853]: I1122 07:12:15.632385 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:15 crc kubenswrapper[4853]: I1122 07:12:15.632399 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:15 crc kubenswrapper[4853]: I1122 07:12:15.632422 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:15 crc kubenswrapper[4853]: I1122 07:12:15.632436 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:15Z","lastTransitionTime":"2025-11-22T07:12:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:12:15 crc kubenswrapper[4853]: E1122 07:12:15.732999 4853 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Nov 22 07:12:15 crc kubenswrapper[4853]: I1122 07:12:15.765437 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rvgxj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dbbe3472-17cc-48dd-8e46-393b00149429\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://338e7cc28de696b2bd165b4b7d21bb9029ee9f270cf1d43c65ea3934262f0d7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5acc4b664aaf1d5b0471beb25334e65152bc3b9907f911f90e4e7899aaa1ce8d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-22T07:11:57Z\\\",\\\"message\\\":\\\"2025-11-22T07:11:10+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_b0993be5-55b1-4411-9f2c-d55ea5267c12\\\\n2025-11-22T07:11:10+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_b0993be5-55b1-4411-9f2c-d55ea5267c12 to /host/opt/cni/bin/\\\\n2025-11-22T07:11:12Z [verbose] multus-daemon started\\\\n2025-11-22T07:11:12Z [verbose] Readiness Indicator file check\\\\n2025-11-22T07:11:57Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:01Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ct6dm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rvgxj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:15Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:15 crc kubenswrapper[4853]: I1122 07:12:15.784548 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ckn94" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f60b37f-d6f5-4145-a3e7-cfe92fca6d77\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bbf630e17f662af8880d1f34d5073a4f64e723987205f7bbb473a73808c7e935\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3adc38c2534c8026e260a63060426102130ca1bcbcb4e3741aaf4e87170a4c7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3adc38c2534c8026e260a63060426102130ca1bcbcb4e3741aaf4e87170a4c7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db0b80c2471d5179ede5c4055fcebaabe08892f1534218952c2a19ab599be022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db0b80c2471d5179ede5c4055fcebaabe08892f1534218952c2a19ab599be022\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://157ea229b1effe5885acafb7b48db5695864ff001b1293c053aefb494f544f7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://157ea229b1effe5885acafb7b48db5695864ff001b1293c053aefb494f544f7a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://203cba48e280628c33b464c336957085420ca2d345b1acfb5c8b8968ed2317e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://203cba48e280628c33b464c336957085420ca2d345b1acfb5c8b8968ed2317e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a10cb7826668319845471d465df1784a2cfba12ec3892e986bcf316c0c5e3c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a10cb7826668319845471d465df1784a2cfba12ec3892e986bcf316c0c5e3c1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc93eb03f5f0cada6bf132a9de4446bc1b8ecafee3769bf18feeecea9d71478f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc93eb03f5f0cada6bf132a9de4446bc1b8ecafee3769bf18feeecea9d71478f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fcjm2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ckn94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:15Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:15 crc kubenswrapper[4853]: I1122 07:12:15.798937 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-pd6gs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9cc2bf97-eb39-4b0c-abda-99b49bb530fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bzn4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bzn4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:15Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-pd6gs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:15Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:15 crc kubenswrapper[4853]: I1122 07:12:15.825915 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8dcea4b-e887-437a-972d-e6e10d41f725\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://753693005f8490e9928780fd4f230a1c8aba453e1ccbca71e70299799bfa46ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64d65e4d54cc8971b6f6349f7d32243b7faa5bb5974b22ed904a36fa34bc69fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0ba60480f038947ca4a2b9f963d38cb004714735d411218eaf6231614c4e21f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c83df4b798499177cfbd1372a939649b6cebb7
1b50b695af5ba7815cf450669\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed9e89cbcdfc1ac8430f617ee31a367c03be0a407da8af967930fa821e6a12a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://742d02f24a7096b76f82453ab15e531d4a378b4bbf0bb36eb1a1cb4ae1873165\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://742d02f24a7096b76f82453ab15e531d4a378b4bbf0bb36eb1a1cb4ae1873165\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2253bab71d535b61f7489679566abc1acf210e0d5f69773fedbb02befab17c2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2253bab71d535b61f7489679566abc1acf210e0d5f69773fedbb02befab17c2d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:21Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8fa71a988b649937f137ffb7533456570bf4fcd199134424fc7bff2b62335f1e\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fa71a988b649937f137ffb7533456570bf4fcd199134424fc7bff2b62335f1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:16Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:15Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:15 crc kubenswrapper[4853]: I1122 07:12:15.843150 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9c07602dc6c3b45809dd76a7079002e54e0bae24f7e3bc3470b463389303e74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:15Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:15 crc kubenswrapper[4853]: I1122 07:12:15.857174 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:15Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:15 crc kubenswrapper[4853]: I1122 07:12:15.874447 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:15Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:15 crc kubenswrapper[4853]: I1122 07:12:15.888810 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"19e6b5d3-a090-47b9-bddc-dd2aaccd4ff4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://96b61660fcd78f2f81eab2c1a0cb32018a5b32d49581167e33e7dca8ba5ddf2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a0d61ce2e58ce8d9abdd1fb9688722d8e25f8354553aef76b8cd4b3897135a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21be0b3346d0dd20c21a867b8fbd30e6291d09bd1745febbc2face7610f6d831\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e4d5ab3d780b4ec1360dd8a07fc7de5d4daec3a5f8334010949d4fd60b5516b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6dbbb1c22bd116e5913c6bc774b4705f0e4342861ceae30dc0f53c30d6fa39f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T07:10:53Z\\\",\\\"message\\\":\\\" envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1122 07:10:53.494379 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI1122 07:10:53.495005 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1122 07:10:53.495036 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1122 07:10:53.495110 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1122 07:10:53.495131 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1122 07:10:53.496216 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1763795444\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1763795444\\\\\\\\\\\\\\\" (2025-11-22 06:10:44 +0000 UTC to 2026-11-22 06:10:44 +0000 UTC (now=2025-11-22 07:10:53.49617265 +0000 UTC))\\\\\\\"\\\\nI1122 07:10:53.496261 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1122 07:10:53.496336 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1122 07:10:53.497036 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1122 07:10:53.497137 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1122 07:10:53.498721 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nI1122 07:10:53.500291 1 reflector.go:368] Caches populated for *v1.ConfigMap from k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243\\\\nF1122 07:10:53.501305 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:32Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c1ee716027cb8b663ee9faec3faed873df5079418b9b46574cf3b3a74048eef\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:27Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13ca1fba1325e0ae8556def3ba0c98aa0aff6e3b9ed2aab66a5c950d2bace44e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13ca1fba1325e0ae8556def3ba0c98aa0aff6e3b9ed2aab66a5c950d2bace44e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:15Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:15 crc kubenswrapper[4853]: I1122 07:12:15.901478 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mlpz8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c09200ba-013f-45e3-b581-8523557344b8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3feee491b899c2b2f58330fe2e87ba5404deca4249c5d39a6d3aa08acd10ef29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvk87\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mlpz8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:15Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:15 crc kubenswrapper[4853]: I1122 07:12:15.924490 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqtsz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"893f7e02-580a-4093-ab42-ea73ffffcfe6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://959a487fe36b2bf7615c5ed65bc8255b52332f6e2d973912a5e028ff35324f2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://979f6b48ff6c91deca2bb0784d1ec745c4c0d70abdfd6f519eb82058fec05c5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34d7101a46b49cedde91b3502f1bc962179253a1bfbe5f21599ce89a91ab905d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28ab876013e6c4eb3a4806b0184f1d6615a478a34b6d8ee790d29bccd77f33df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccf377912acc39f2d30013ef3bd58a8268b927fd89da2306fcf00cb89c30893e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a08eb68f9d8b071226ac64744097fe0097ab78f86242872c4e0aacdd213adbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6bc7f34ec100b4e47e45d01a9176361b33c988b6033ef25f9b662050421df6ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6bc7f34ec100b4e47e45d01a9176361b33c988b6033ef25f9b662050421df6ef\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-22T07:12:09Z\\\",\\\"message\\\":\\\"rvices.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.183\\\\\\\", Port:9001, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI1122 07:12:09.361353 7075 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-network-console/networking-console-plugin\\\\\\\"}\\\\nI1122 07:12:09.361379 7075 services_controller.go:360] Finished syncing service networking-console-plugin on namespace openshift-network-console for network=default : 5.654351ms\\\\nF1122 07:12:09.361393 7075 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: ce\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:12:08Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-pqtsz_openshift-ovn-kubernetes(893f7e02-580a-4093-ab42-ea73ffffcfe6)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://902e41148618e1099d651f2a4cbfa00ca1dd11533b086564f090dd498ecc1b95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7b9396399dd9787c1addcb4e2e7697d78fee817d56cafd408b08b2b7d0cc5f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7b9396399dd9787c1addcb4e2e7697d78fee817d56cafd408b08b2b7d0cc5f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:11:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-89zdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pqtsz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:15Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:15 crc kubenswrapper[4853]: I1122 07:12:15.938236 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-9nx9m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"120cba0a-6e0b-40b3-8c15-46e7ff7c8641\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c7b7b92862fc6cca812cb5c0aa9b320fbbd11f8b458d591b56cacab985a79edd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4p7q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-9nx9m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:15Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:15 crc kubenswrapper[4853]: I1122 07:12:15.955872 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nhlw4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81cf3334-f910-4d46-be00-b3cd66ba8ed4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbfa536576b192a75c2a3eef9dfde8c7f7de0a8e6edae3870f45f1c44cc48163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ckff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86bcf649be20aa1cc1bce89663ea6b7b920ac2548fbf8a22c87636bdafc4b329\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\
\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6ckff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:13Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nhlw4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:15Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:15 crc kubenswrapper[4853]: I1122 07:12:15.971979 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63551235-20f7-4ccc-a132-9f5302e01da5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0368eb3431a7b8e0225b3728f72d601ce3f9639f06175b96818fd798b33e4143\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fc661089ab1f7286ead68fd510a0848c9df8dd41a8ed349d03b43b72e1a369a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name
\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdc0676190f39e3a925b593e663ec1af1fa253b8177b7cb81fd49515e0c55c3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a97a0d92e5fdd8906fa4f150c4b7d95fdd1f3b92ac45a46e0a6846fd4a15bb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a97a0d92e5fdd8906fa4f150c4b7d95fdd1f3b92ac45a46e0a6846fd4a15bb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:19Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:16Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:15Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:16 crc kubenswrapper[4853]: I1122 07:12:16.006239 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:59Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:59Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b27716f9e6c912f968c4a340199cdc2dde0262d86b4832f84cae8daefa831909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a2fb699e2e8a016fa383c1d4ed2d4f0d69c26b79fb2f0dd735a84de2fd5a23d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:16Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:16 crc kubenswrapper[4853]: I1122 07:12:16.033216 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://347fc1ee4909acd862f23a9f25b4ac7ab7271ff03f0f726479d9b711820970c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:16Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:16 crc kubenswrapper[4853]: I1122 07:12:16.047303 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"476c875a-2b87-419a-8042-0ba059620fd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://babd614ecd140db7467335933695fcc122380ef2c6a3d4ae6e89b0ed29e87d69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zkhrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://28cf28a4f0e05df5ad55eff2ab13e375fb1d5725f1d0e3b85c7c9cd785cf4453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:11:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zkhrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:11:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fflvd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:16Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:16 crc kubenswrapper[4853]: I1122 07:12:16.061041 4853 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9003355-0dc6-42ad-950d-a7884e333f4c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:11:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f0019c0473b7ccc974dc1659a7ae0565b39d69af17094c420625387ecc9d7f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://94704fd95e75549ea29613f00da78d4517cc64896cf1274c9c6f784a61f4ad4a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T07:10:53Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI1122 07:10:26.514729 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI1122 07:10:26.525241 1 observer_polling.go:159] Starting file observer\\\\nI1122 07:10:26.883412 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI1122 07:10:26.897202 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI1122 07:10:50.420853 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nI1122 07:10:51.525898 1 observer_polling.go:162] Shutting down file observer\\\\nF1122 07:10:53.463148 1 cmd.go:179] failed checking apiserver connectivity: client rate limiter Wait returned an error: context canceled\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:21Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9736c38776f22909b8e8a832683d1b881f020257f7183397286e0dd23c973a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30f542bc05dc23bc8dff5029894d08a6fea8333de31a66432814b7bbc412c4fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5d8d77818159ca98019aac809db4b1b2460723bc56b9b6728eec11faeefd026\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager
-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:16Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:16 crc kubenswrapper[4853]: I1122 07:12:16.072317 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4cc3002-cfaf-47cf-b539-3815924af5c3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://666c3902a5cb3b755fe3b5861568b744fb3ffbd28f72d7fd22d18b387486de03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:10:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6143a86b720cb8a29f4ac3d68bf2693f92fb50f528cdd7269e022115795ef14b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":
{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6143a86b720cb8a29f4ac3d68bf2693f92fb50f528cdd7269e022115795ef14b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:10:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:10:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:16Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:16 crc kubenswrapper[4853]: I1122 07:12:16.084318 4853 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:10:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:12:16Z is after 2025-08-24T17:21:41Z" Nov 22 07:12:16 crc kubenswrapper[4853]: E1122 07:12:16.136858 4853 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 22 07:12:16 crc kubenswrapper[4853]: I1122 07:12:16.747059 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:12:16 crc kubenswrapper[4853]: I1122 07:12:16.747076 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pd6gs" Nov 22 07:12:16 crc kubenswrapper[4853]: I1122 07:12:16.747223 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 07:12:16 crc kubenswrapper[4853]: E1122 07:12:16.747239 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 07:12:16 crc kubenswrapper[4853]: E1122 07:12:16.747380 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pd6gs" podUID="9cc2bf97-eb39-4b0c-abda-99b49bb530fd" Nov 22 07:12:16 crc kubenswrapper[4853]: I1122 07:12:16.747390 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 07:12:16 crc kubenswrapper[4853]: E1122 07:12:16.747529 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
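[annotation] Every "Failed to update status for pod" record above fails for the same root cause: the kubelet's status patch is intercepted by the pod.network-node-identity.openshift.io webhook at https://127.0.0.1:9743, and Go's TLS client rejects the webhook's serving certificate because the node's clock (2025-11-22) is past the certificate's NotAfter (2025-08-24T17:21:41Z). Below is a minimal, hypothetical sketch of that validity-window check using only the Go standard library; the file path argument is illustrative (per the volume mounts logged later, the cert is mounted at /etc/webhook-cert/ inside the network-node-identity pod), and this is not the cluster's own tooling.

// certcheck.go - illustrative sketch of the x509 validity-window test that
// produces "certificate has expired or is not yet valid: current time ...
// is after ..." in the log above. Not part of OpenShift; standard library only.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	if len(os.Args) != 2 {
		fmt.Fprintln(os.Stderr, "usage: certcheck <cert.pem>")
		os.Exit(2)
	}
	data, err := os.ReadFile(os.Args[1])
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM block found")
		os.Exit(1)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	now := time.Now()
	switch {
	case now.After(cert.NotAfter):
		// The branch the kubelet keeps hitting: wall clock 2025-11-22T07:12Z
		// is after NotAfter 2025-08-24T17:21:41Z.
		fmt.Printf("certificate expired: current time %s is after %s\n",
			now.UTC().Format(time.RFC3339), cert.NotAfter.UTC().Format(time.RFC3339))
	case now.Before(cert.NotBefore):
		fmt.Printf("certificate not yet valid: current time %s is before %s\n",
			now.UTC().Format(time.RFC3339), cert.NotBefore.UTC().Format(time.RFC3339))
	default:
		fmt.Printf("certificate valid until %s\n", cert.NotAfter.UTC().Format(time.RFC3339))
	}
}

Running it against the mounted tls.crt (e.g. "go run certcheck.go tls.crt") would reproduce the same expired/valid verdict the TLS handshake reports.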
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 07:12:16 crc kubenswrapper[4853]: E1122 07:12:16.747655 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 07:12:18 crc kubenswrapper[4853]: I1122 07:12:18.747692 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:12:18 crc kubenswrapper[4853]: I1122 07:12:18.747768 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 07:12:18 crc kubenswrapper[4853]: I1122 07:12:18.747780 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 07:12:18 crc kubenswrapper[4853]: I1122 07:12:18.747726 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pd6gs" Nov 22 07:12:18 crc kubenswrapper[4853]: E1122 07:12:18.747894 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 07:12:18 crc kubenswrapper[4853]: E1122 07:12:18.748006 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pd6gs" podUID="9cc2bf97-eb39-4b0c-abda-99b49bb530fd" Nov 22 07:12:18 crc kubenswrapper[4853]: E1122 07:12:18.748110 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 07:12:18 crc kubenswrapper[4853]: E1122 07:12:18.748194 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 07:12:19 crc kubenswrapper[4853]: I1122 07:12:19.195963 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9cc2bf97-eb39-4b0c-abda-99b49bb530fd-metrics-certs\") pod \"network-metrics-daemon-pd6gs\" (UID: \"9cc2bf97-eb39-4b0c-abda-99b49bb530fd\") " pod="openshift-multus/network-metrics-daemon-pd6gs" Nov 22 07:12:19 crc kubenswrapper[4853]: E1122 07:12:19.196107 4853 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 22 07:12:19 crc kubenswrapper[4853]: E1122 07:12:19.196175 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9cc2bf97-eb39-4b0c-abda-99b49bb530fd-metrics-certs podName:9cc2bf97-eb39-4b0c-abda-99b49bb530fd nodeName:}" failed. No retries permitted until 2025-11-22 07:13:23.196152937 +0000 UTC m=+202.036775563 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9cc2bf97-eb39-4b0c-abda-99b49bb530fd-metrics-certs") pod "network-metrics-daemon-pd6gs" (UID: "9cc2bf97-eb39-4b0c-abda-99b49bb530fd") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 22 07:12:20 crc kubenswrapper[4853]: I1122 07:12:20.747053 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 07:12:20 crc kubenswrapper[4853]: I1122 07:12:20.747148 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pd6gs" Nov 22 07:12:20 crc kubenswrapper[4853]: E1122 07:12:20.747627 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 07:12:20 crc kubenswrapper[4853]: I1122 07:12:20.747302 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:12:20 crc kubenswrapper[4853]: I1122 07:12:20.747254 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 07:12:20 crc kubenswrapper[4853]: E1122 07:12:20.747802 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pd6gs" podUID="9cc2bf97-eb39-4b0c-abda-99b49bb530fd" Nov 22 07:12:20 crc kubenswrapper[4853]: E1122 07:12:20.747866 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 07:12:20 crc kubenswrapper[4853]: E1122 07:12:20.747934 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 07:12:21 crc kubenswrapper[4853]: E1122 07:12:21.138863 4853 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 22 07:12:21 crc kubenswrapper[4853]: I1122 07:12:21.747574 4853 scope.go:117] "RemoveContainer" containerID="6bc7f34ec100b4e47e45d01a9176361b33c988b6033ef25f9b662050421df6ef" Nov 22 07:12:21 crc kubenswrapper[4853]: E1122 07:12:21.747717 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-pqtsz_openshift-ovn-kubernetes(893f7e02-580a-4093-ab42-ea73ffffcfe6)\"" pod="openshift-ovn-kubernetes/ovnkube-node-pqtsz" podUID="893f7e02-580a-4093-ab42-ea73ffffcfe6" Nov 22 07:12:22 crc kubenswrapper[4853]: I1122 07:12:22.305468 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:12:22 crc kubenswrapper[4853]: I1122 07:12:22.305527 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:12:22 crc kubenswrapper[4853]: I1122 07:12:22.305539 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:12:22 crc kubenswrapper[4853]: I1122 07:12:22.305556 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:12:22 crc kubenswrapper[4853]: I1122 07:12:22.305567 4853 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:12:22Z","lastTransitionTime":"2025-11-22T07:12:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:12:22 crc kubenswrapper[4853]: I1122 07:12:22.405122 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-z2bt4"] Nov 22 07:12:22 crc kubenswrapper[4853]: I1122 07:12:22.405607 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-z2bt4" Nov 22 07:12:22 crc kubenswrapper[4853]: I1122 07:12:22.408009 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Nov 22 07:12:22 crc kubenswrapper[4853]: I1122 07:12:22.408378 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Nov 22 07:12:22 crc kubenswrapper[4853]: I1122 07:12:22.408575 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Nov 22 07:12:22 crc kubenswrapper[4853]: I1122 07:12:22.408639 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Nov 22 07:12:22 crc kubenswrapper[4853]: I1122 07:12:22.423580 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/20e29bf9-7fa0-4bb6-9d41-a618094c46cd-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-z2bt4\" (UID: \"20e29bf9-7fa0-4bb6-9d41-a618094c46cd\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-z2bt4" Nov 22 07:12:22 crc kubenswrapper[4853]: I1122 07:12:22.423630 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/20e29bf9-7fa0-4bb6-9d41-a618094c46cd-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-z2bt4\" (UID: \"20e29bf9-7fa0-4bb6-9d41-a618094c46cd\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-z2bt4" Nov 22 07:12:22 crc kubenswrapper[4853]: I1122 07:12:22.423690 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/20e29bf9-7fa0-4bb6-9d41-a618094c46cd-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-z2bt4\" (UID: \"20e29bf9-7fa0-4bb6-9d41-a618094c46cd\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-z2bt4" Nov 22 07:12:22 crc kubenswrapper[4853]: I1122 07:12:22.423780 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/20e29bf9-7fa0-4bb6-9d41-a618094c46cd-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-z2bt4\" (UID: \"20e29bf9-7fa0-4bb6-9d41-a618094c46cd\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-z2bt4" Nov 22 07:12:22 crc kubenswrapper[4853]: I1122 07:12:22.423841 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/20e29bf9-7fa0-4bb6-9d41-a618094c46cd-service-ca\") pod \"cluster-version-operator-5c965bbfc6-z2bt4\" (UID: \"20e29bf9-7fa0-4bb6-9d41-a618094c46cd\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-z2bt4" Nov 22 07:12:22 crc kubenswrapper[4853]: I1122 07:12:22.492456 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=86.492431198 podStartE2EDuration="1m26.492431198s" podCreationTimestamp="2025-11-22 07:10:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:12:22.472089106 +0000 UTC 
m=+141.312711752" watchObservedRunningTime="2025-11-22 07:12:22.492431198 +0000 UTC m=+141.333053824" Nov 22 07:12:22 crc kubenswrapper[4853]: I1122 07:12:22.524513 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/20e29bf9-7fa0-4bb6-9d41-a618094c46cd-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-z2bt4\" (UID: \"20e29bf9-7fa0-4bb6-9d41-a618094c46cd\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-z2bt4" Nov 22 07:12:22 crc kubenswrapper[4853]: I1122 07:12:22.524567 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/20e29bf9-7fa0-4bb6-9d41-a618094c46cd-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-z2bt4\" (UID: \"20e29bf9-7fa0-4bb6-9d41-a618094c46cd\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-z2bt4" Nov 22 07:12:22 crc kubenswrapper[4853]: I1122 07:12:22.524603 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/20e29bf9-7fa0-4bb6-9d41-a618094c46cd-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-z2bt4\" (UID: \"20e29bf9-7fa0-4bb6-9d41-a618094c46cd\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-z2bt4" Nov 22 07:12:22 crc kubenswrapper[4853]: I1122 07:12:22.524659 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/20e29bf9-7fa0-4bb6-9d41-a618094c46cd-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-z2bt4\" (UID: \"20e29bf9-7fa0-4bb6-9d41-a618094c46cd\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-z2bt4" Nov 22 07:12:22 crc kubenswrapper[4853]: I1122 07:12:22.524699 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/20e29bf9-7fa0-4bb6-9d41-a618094c46cd-service-ca\") pod \"cluster-version-operator-5c965bbfc6-z2bt4\" (UID: \"20e29bf9-7fa0-4bb6-9d41-a618094c46cd\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-z2bt4" Nov 22 07:12:22 crc kubenswrapper[4853]: I1122 07:12:22.524770 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/20e29bf9-7fa0-4bb6-9d41-a618094c46cd-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-z2bt4\" (UID: \"20e29bf9-7fa0-4bb6-9d41-a618094c46cd\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-z2bt4" Nov 22 07:12:22 crc kubenswrapper[4853]: I1122 07:12:22.524742 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/20e29bf9-7fa0-4bb6-9d41-a618094c46cd-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-z2bt4\" (UID: \"20e29bf9-7fa0-4bb6-9d41-a618094c46cd\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-z2bt4" Nov 22 07:12:22 crc kubenswrapper[4853]: I1122 07:12:22.525778 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/20e29bf9-7fa0-4bb6-9d41-a618094c46cd-service-ca\") pod \"cluster-version-operator-5c965bbfc6-z2bt4\" (UID: \"20e29bf9-7fa0-4bb6-9d41-a618094c46cd\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-z2bt4" Nov 22 07:12:22 crc 
kubenswrapper[4853]: I1122 07:12:22.532889 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/20e29bf9-7fa0-4bb6-9d41-a618094c46cd-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-z2bt4\" (UID: \"20e29bf9-7fa0-4bb6-9d41-a618094c46cd\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-z2bt4" Nov 22 07:12:22 crc kubenswrapper[4853]: I1122 07:12:22.544193 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/20e29bf9-7fa0-4bb6-9d41-a618094c46cd-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-z2bt4\" (UID: \"20e29bf9-7fa0-4bb6-9d41-a618094c46cd\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-z2bt4" Nov 22 07:12:22 crc kubenswrapper[4853]: I1122 07:12:22.570498 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-rvgxj" podStartSLOduration=82.570468511 podStartE2EDuration="1m22.570468511s" podCreationTimestamp="2025-11-22 07:11:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:12:22.549495871 +0000 UTC m=+141.390118497" watchObservedRunningTime="2025-11-22 07:12:22.570468511 +0000 UTC m=+141.411091137" Nov 22 07:12:22 crc kubenswrapper[4853]: I1122 07:12:22.571266 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-ckn94" podStartSLOduration=82.571259172 podStartE2EDuration="1m22.571259172s" podCreationTimestamp="2025-11-22 07:11:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:12:22.569127115 +0000 UTC m=+141.409749741" watchObservedRunningTime="2025-11-22 07:12:22.571259172 +0000 UTC m=+141.411881798" Nov 22 07:12:22 crc kubenswrapper[4853]: I1122 07:12:22.587455 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=83.587431873 podStartE2EDuration="1m23.587431873s" podCreationTimestamp="2025-11-22 07:10:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:12:22.586250961 +0000 UTC m=+141.426873597" watchObservedRunningTime="2025-11-22 07:12:22.587431873 +0000 UTC m=+141.428054499" Nov 22 07:12:22 crc kubenswrapper[4853]: I1122 07:12:22.605727 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-mlpz8" podStartSLOduration=83.60570431 podStartE2EDuration="1m23.60570431s" podCreationTimestamp="2025-11-22 07:10:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:12:22.605407053 +0000 UTC m=+141.446029689" watchObservedRunningTime="2025-11-22 07:12:22.60570431 +0000 UTC m=+141.446326936" Nov 22 07:12:22 crc kubenswrapper[4853]: I1122 07:12:22.660702 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=75.660682037 podStartE2EDuration="1m15.660682037s" podCreationTimestamp="2025-11-22 07:11:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-11-22 07:12:22.659209798 +0000 UTC m=+141.499832434" watchObservedRunningTime="2025-11-22 07:12:22.660682037 +0000 UTC m=+141.501304683" Nov 22 07:12:22 crc kubenswrapper[4853]: I1122 07:12:22.662415 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-9nx9m" podStartSLOduration=83.662404594 podStartE2EDuration="1m23.662404594s" podCreationTimestamp="2025-11-22 07:10:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:12:22.645970605 +0000 UTC m=+141.486593241" watchObservedRunningTime="2025-11-22 07:12:22.662404594 +0000 UTC m=+141.503027220" Nov 22 07:12:22 crc kubenswrapper[4853]: I1122 07:12:22.704664 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podStartSLOduration=82.704642071 podStartE2EDuration="1m22.704642071s" podCreationTimestamp="2025-11-22 07:11:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:12:22.704422725 +0000 UTC m=+141.545045351" watchObservedRunningTime="2025-11-22 07:12:22.704642071 +0000 UTC m=+141.545264697" Nov 22 07:12:22 crc kubenswrapper[4853]: I1122 07:12:22.720453 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-z2bt4" Nov 22 07:12:22 crc kubenswrapper[4853]: I1122 07:12:22.721049 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nhlw4" podStartSLOduration=82.721036118 podStartE2EDuration="1m22.721036118s" podCreationTimestamp="2025-11-22 07:11:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:12:22.720081643 +0000 UTC m=+141.560704269" watchObservedRunningTime="2025-11-22 07:12:22.721036118 +0000 UTC m=+141.561658744" Nov 22 07:12:22 crc kubenswrapper[4853]: W1122 07:12:22.737369 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod20e29bf9_7fa0_4bb6_9d41_a618094c46cd.slice/crio-37c26196543b1edb5984c22e5eaf91f1dd149ecdba316583fc626d6fb05e2561 WatchSource:0}: Error finding container 37c26196543b1edb5984c22e5eaf91f1dd149ecdba316583fc626d6fb05e2561: Status 404 returned error can't find the container with id 37c26196543b1edb5984c22e5eaf91f1dd149ecdba316583fc626d6fb05e2561 Nov 22 07:12:22 crc kubenswrapper[4853]: I1122 07:12:22.746307 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=87.746289442 podStartE2EDuration="1m27.746289442s" podCreationTimestamp="2025-11-22 07:10:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:12:22.74587299 +0000 UTC m=+141.586495616" watchObservedRunningTime="2025-11-22 07:12:22.746289442 +0000 UTC m=+141.586912068" Nov 22 07:12:22 crc kubenswrapper[4853]: I1122 07:12:22.746715 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 07:12:22 crc kubenswrapper[4853]: E1122 07:12:22.746912 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 07:12:22 crc kubenswrapper[4853]: I1122 07:12:22.747024 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 07:12:22 crc kubenswrapper[4853]: I1122 07:12:22.747090 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pd6gs" Nov 22 07:12:22 crc kubenswrapper[4853]: I1122 07:12:22.747155 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:12:22 crc kubenswrapper[4853]: E1122 07:12:22.747231 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pd6gs" podUID="9cc2bf97-eb39-4b0c-abda-99b49bb530fd" Nov 22 07:12:22 crc kubenswrapper[4853]: E1122 07:12:22.747104 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 07:12:22 crc kubenswrapper[4853]: E1122 07:12:22.747309 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 07:12:22 crc kubenswrapper[4853]: I1122 07:12:22.758159 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=39.758139638 podStartE2EDuration="39.758139638s" podCreationTimestamp="2025-11-22 07:11:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:12:22.757324756 +0000 UTC m=+141.597947382" watchObservedRunningTime="2025-11-22 07:12:22.758139638 +0000 UTC m=+141.598762264" Nov 22 07:12:22 crc kubenswrapper[4853]: I1122 07:12:22.957178 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-z2bt4" event={"ID":"20e29bf9-7fa0-4bb6-9d41-a618094c46cd","Type":"ContainerStarted","Data":"804e9b86c108f5e318b9caf94f5fd1b4f4f890cf41d32464a840d2139aed9891"} Nov 22 07:12:22 crc kubenswrapper[4853]: I1122 07:12:22.957256 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-z2bt4" event={"ID":"20e29bf9-7fa0-4bb6-9d41-a618094c46cd","Type":"ContainerStarted","Data":"37c26196543b1edb5984c22e5eaf91f1dd149ecdba316583fc626d6fb05e2561"} Nov 22 07:12:22 crc kubenswrapper[4853]: I1122 07:12:22.974398 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-z2bt4" podStartSLOduration=82.974373997 podStartE2EDuration="1m22.974373997s" podCreationTimestamp="2025-11-22 07:11:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:12:22.973520005 +0000 UTC m=+141.814142661" watchObservedRunningTime="2025-11-22 07:12:22.974373997 +0000 UTC m=+141.814996623" Nov 22 07:12:24 crc kubenswrapper[4853]: I1122 07:12:24.747137 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 07:12:24 crc kubenswrapper[4853]: I1122 07:12:24.747225 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:12:24 crc kubenswrapper[4853]: I1122 07:12:24.747131 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 07:12:24 crc kubenswrapper[4853]: I1122 07:12:24.747236 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pd6gs" Nov 22 07:12:24 crc kubenswrapper[4853]: E1122 07:12:24.747322 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 07:12:24 crc kubenswrapper[4853]: E1122 07:12:24.747365 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 07:12:24 crc kubenswrapper[4853]: E1122 07:12:24.747432 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 07:12:24 crc kubenswrapper[4853]: E1122 07:12:24.747485 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pd6gs" podUID="9cc2bf97-eb39-4b0c-abda-99b49bb530fd" Nov 22 07:12:26 crc kubenswrapper[4853]: E1122 07:12:26.140029 4853 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 22 07:12:26 crc kubenswrapper[4853]: I1122 07:12:26.746794 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pd6gs" Nov 22 07:12:26 crc kubenswrapper[4853]: I1122 07:12:26.746794 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:12:26 crc kubenswrapper[4853]: E1122 07:12:26.747012 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pd6gs" podUID="9cc2bf97-eb39-4b0c-abda-99b49bb530fd" Nov 22 07:12:26 crc kubenswrapper[4853]: I1122 07:12:26.746806 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 07:12:26 crc kubenswrapper[4853]: E1122 07:12:26.747143 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 07:12:26 crc kubenswrapper[4853]: E1122 07:12:26.747240 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 07:12:26 crc kubenswrapper[4853]: I1122 07:12:26.747771 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 07:12:26 crc kubenswrapper[4853]: E1122 07:12:26.747855 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 07:12:28 crc kubenswrapper[4853]: I1122 07:12:28.747686 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:12:28 crc kubenswrapper[4853]: I1122 07:12:28.747780 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 07:12:28 crc kubenswrapper[4853]: E1122 07:12:28.748347 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 07:12:28 crc kubenswrapper[4853]: I1122 07:12:28.747779 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 07:12:28 crc kubenswrapper[4853]: I1122 07:12:28.747810 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pd6gs" Nov 22 07:12:28 crc kubenswrapper[4853]: E1122 07:12:28.748965 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 07:12:28 crc kubenswrapper[4853]: E1122 07:12:28.749277 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-pd6gs" podUID="9cc2bf97-eb39-4b0c-abda-99b49bb530fd" Nov 22 07:12:28 crc kubenswrapper[4853]: E1122 07:12:28.749997 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 07:12:30 crc kubenswrapper[4853]: I1122 07:12:30.746918 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:12:30 crc kubenswrapper[4853]: I1122 07:12:30.746982 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 07:12:30 crc kubenswrapper[4853]: I1122 07:12:30.747023 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 07:12:30 crc kubenswrapper[4853]: E1122 07:12:30.747078 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 07:12:30 crc kubenswrapper[4853]: E1122 07:12:30.747199 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 07:12:30 crc kubenswrapper[4853]: I1122 07:12:30.747299 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pd6gs" Nov 22 07:12:30 crc kubenswrapper[4853]: E1122 07:12:30.747506 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 07:12:30 crc kubenswrapper[4853]: E1122 07:12:30.747722 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-pd6gs" podUID="9cc2bf97-eb39-4b0c-abda-99b49bb530fd" Nov 22 07:12:31 crc kubenswrapper[4853]: E1122 07:12:31.141505 4853 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 22 07:12:32 crc kubenswrapper[4853]: I1122 07:12:32.747444 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:12:32 crc kubenswrapper[4853]: I1122 07:12:32.747519 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pd6gs" Nov 22 07:12:32 crc kubenswrapper[4853]: I1122 07:12:32.747696 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 07:12:32 crc kubenswrapper[4853]: I1122 07:12:32.747776 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 07:12:32 crc kubenswrapper[4853]: E1122 07:12:32.747852 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 07:12:32 crc kubenswrapper[4853]: E1122 07:12:32.747992 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 07:12:32 crc kubenswrapper[4853]: E1122 07:12:32.748133 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 07:12:32 crc kubenswrapper[4853]: E1122 07:12:32.748206 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-pd6gs" podUID="9cc2bf97-eb39-4b0c-abda-99b49bb530fd" Nov 22 07:12:32 crc kubenswrapper[4853]: I1122 07:12:32.748217 4853 scope.go:117] "RemoveContainer" containerID="6bc7f34ec100b4e47e45d01a9176361b33c988b6033ef25f9b662050421df6ef" Nov 22 07:12:32 crc kubenswrapper[4853]: E1122 07:12:32.748577 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-pqtsz_openshift-ovn-kubernetes(893f7e02-580a-4093-ab42-ea73ffffcfe6)\"" pod="openshift-ovn-kubernetes/ovnkube-node-pqtsz" podUID="893f7e02-580a-4093-ab42-ea73ffffcfe6" Nov 22 07:12:34 crc kubenswrapper[4853]: I1122 07:12:34.747725 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 07:12:34 crc kubenswrapper[4853]: I1122 07:12:34.747871 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pd6gs" Nov 22 07:12:34 crc kubenswrapper[4853]: I1122 07:12:34.747899 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 07:12:34 crc kubenswrapper[4853]: E1122 07:12:34.748001 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 07:12:34 crc kubenswrapper[4853]: I1122 07:12:34.748026 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:12:34 crc kubenswrapper[4853]: E1122 07:12:34.748276 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 07:12:34 crc kubenswrapper[4853]: E1122 07:12:34.748408 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 07:12:34 crc kubenswrapper[4853]: E1122 07:12:34.748614 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-pd6gs" podUID="9cc2bf97-eb39-4b0c-abda-99b49bb530fd" Nov 22 07:12:36 crc kubenswrapper[4853]: E1122 07:12:36.142159 4853 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 22 07:12:36 crc kubenswrapper[4853]: I1122 07:12:36.746960 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 07:12:36 crc kubenswrapper[4853]: I1122 07:12:36.747071 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:12:36 crc kubenswrapper[4853]: I1122 07:12:36.747123 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pd6gs" Nov 22 07:12:36 crc kubenswrapper[4853]: E1122 07:12:36.747135 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 07:12:36 crc kubenswrapper[4853]: I1122 07:12:36.747162 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 07:12:36 crc kubenswrapper[4853]: E1122 07:12:36.747303 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 07:12:36 crc kubenswrapper[4853]: E1122 07:12:36.747357 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pd6gs" podUID="9cc2bf97-eb39-4b0c-abda-99b49bb530fd" Nov 22 07:12:36 crc kubenswrapper[4853]: E1122 07:12:36.747419 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 07:12:38 crc kubenswrapper[4853]: I1122 07:12:38.747419 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 07:12:38 crc kubenswrapper[4853]: I1122 07:12:38.747523 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:12:38 crc kubenswrapper[4853]: I1122 07:12:38.747552 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 07:12:38 crc kubenswrapper[4853]: I1122 07:12:38.747464 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pd6gs" Nov 22 07:12:38 crc kubenswrapper[4853]: E1122 07:12:38.747707 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 07:12:38 crc kubenswrapper[4853]: E1122 07:12:38.748108 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pd6gs" podUID="9cc2bf97-eb39-4b0c-abda-99b49bb530fd" Nov 22 07:12:38 crc kubenswrapper[4853]: E1122 07:12:38.748267 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 07:12:38 crc kubenswrapper[4853]: E1122 07:12:38.748420 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 07:12:40 crc kubenswrapper[4853]: I1122 07:12:40.746976 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 07:12:40 crc kubenswrapper[4853]: I1122 07:12:40.747042 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pd6gs" Nov 22 07:12:40 crc kubenswrapper[4853]: I1122 07:12:40.747066 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:12:40 crc kubenswrapper[4853]: E1122 07:12:40.747223 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 07:12:40 crc kubenswrapper[4853]: I1122 07:12:40.747298 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 07:12:40 crc kubenswrapper[4853]: E1122 07:12:40.747369 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pd6gs" podUID="9cc2bf97-eb39-4b0c-abda-99b49bb530fd" Nov 22 07:12:40 crc kubenswrapper[4853]: E1122 07:12:40.747424 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 07:12:40 crc kubenswrapper[4853]: E1122 07:12:40.747495 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 07:12:41 crc kubenswrapper[4853]: E1122 07:12:41.143069 4853 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 22 07:12:42 crc kubenswrapper[4853]: I1122 07:12:42.747540 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 07:12:42 crc kubenswrapper[4853]: I1122 07:12:42.747578 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pd6gs" Nov 22 07:12:42 crc kubenswrapper[4853]: I1122 07:12:42.747588 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:12:42 crc kubenswrapper[4853]: I1122 07:12:42.747655 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 07:12:42 crc kubenswrapper[4853]: E1122 07:12:42.747708 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 07:12:42 crc kubenswrapper[4853]: E1122 07:12:42.747902 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 07:12:42 crc kubenswrapper[4853]: E1122 07:12:42.748415 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 07:12:42 crc kubenswrapper[4853]: E1122 07:12:42.748536 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pd6gs" podUID="9cc2bf97-eb39-4b0c-abda-99b49bb530fd" Nov 22 07:12:44 crc kubenswrapper[4853]: I1122 07:12:44.032493 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-rvgxj_dbbe3472-17cc-48dd-8e46-393b00149429/kube-multus/1.log" Nov 22 07:12:44 crc kubenswrapper[4853]: I1122 07:12:44.033734 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-rvgxj_dbbe3472-17cc-48dd-8e46-393b00149429/kube-multus/0.log" Nov 22 07:12:44 crc kubenswrapper[4853]: I1122 07:12:44.033794 4853 generic.go:334] "Generic (PLEG): container finished" podID="dbbe3472-17cc-48dd-8e46-393b00149429" containerID="338e7cc28de696b2bd165b4b7d21bb9029ee9f270cf1d43c65ea3934262f0d7d" exitCode=1 Nov 22 07:12:44 crc kubenswrapper[4853]: I1122 07:12:44.033843 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-rvgxj" event={"ID":"dbbe3472-17cc-48dd-8e46-393b00149429","Type":"ContainerDied","Data":"338e7cc28de696b2bd165b4b7d21bb9029ee9f270cf1d43c65ea3934262f0d7d"} Nov 22 07:12:44 crc kubenswrapper[4853]: I1122 07:12:44.033923 4853 scope.go:117] "RemoveContainer" containerID="5acc4b664aaf1d5b0471beb25334e65152bc3b9907f911f90e4e7899aaa1ce8d" Nov 22 07:12:44 crc kubenswrapper[4853]: I1122 07:12:44.034396 4853 scope.go:117] "RemoveContainer" containerID="338e7cc28de696b2bd165b4b7d21bb9029ee9f270cf1d43c65ea3934262f0d7d" Nov 22 07:12:44 crc kubenswrapper[4853]: E1122 07:12:44.034605 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-rvgxj_openshift-multus(dbbe3472-17cc-48dd-8e46-393b00149429)\"" pod="openshift-multus/multus-rvgxj" podUID="dbbe3472-17cc-48dd-8e46-393b00149429" Nov 22 07:12:44 crc kubenswrapper[4853]: I1122 07:12:44.747544 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 07:12:44 crc kubenswrapper[4853]: I1122 07:12:44.747591 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pd6gs" Nov 22 07:12:44 crc kubenswrapper[4853]: I1122 07:12:44.747591 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 07:12:44 crc kubenswrapper[4853]: I1122 07:12:44.747658 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:12:44 crc kubenswrapper[4853]: E1122 07:12:44.748591 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 07:12:44 crc kubenswrapper[4853]: E1122 07:12:44.748690 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 07:12:44 crc kubenswrapper[4853]: E1122 07:12:44.748736 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pd6gs" podUID="9cc2bf97-eb39-4b0c-abda-99b49bb530fd" Nov 22 07:12:44 crc kubenswrapper[4853]: E1122 07:12:44.748918 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 07:12:45 crc kubenswrapper[4853]: I1122 07:12:45.039224 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-rvgxj_dbbe3472-17cc-48dd-8e46-393b00149429/kube-multus/1.log" Nov 22 07:12:46 crc kubenswrapper[4853]: E1122 07:12:46.143784 4853 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 22 07:12:46 crc kubenswrapper[4853]: I1122 07:12:46.747161 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pd6gs" Nov 22 07:12:46 crc kubenswrapper[4853]: I1122 07:12:46.747177 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:12:46 crc kubenswrapper[4853]: I1122 07:12:46.747479 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 07:12:46 crc kubenswrapper[4853]: E1122 07:12:46.747543 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pd6gs" podUID="9cc2bf97-eb39-4b0c-abda-99b49bb530fd" Nov 22 07:12:46 crc kubenswrapper[4853]: E1122 07:12:46.747633 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 07:12:46 crc kubenswrapper[4853]: I1122 07:12:46.747640 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 07:12:46 crc kubenswrapper[4853]: E1122 07:12:46.747889 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 07:12:46 crc kubenswrapper[4853]: E1122 07:12:46.747961 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 07:12:47 crc kubenswrapper[4853]: I1122 07:12:47.748328 4853 scope.go:117] "RemoveContainer" containerID="6bc7f34ec100b4e47e45d01a9176361b33c988b6033ef25f9b662050421df6ef" Nov 22 07:12:47 crc kubenswrapper[4853]: E1122 07:12:47.748532 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-pqtsz_openshift-ovn-kubernetes(893f7e02-580a-4093-ab42-ea73ffffcfe6)\"" pod="openshift-ovn-kubernetes/ovnkube-node-pqtsz" podUID="893f7e02-580a-4093-ab42-ea73ffffcfe6" Nov 22 07:12:48 crc kubenswrapper[4853]: I1122 07:12:48.747779 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 07:12:48 crc kubenswrapper[4853]: I1122 07:12:48.747834 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pd6gs" Nov 22 07:12:48 crc kubenswrapper[4853]: I1122 07:12:48.747882 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:12:48 crc kubenswrapper[4853]: I1122 07:12:48.747819 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 07:12:48 crc kubenswrapper[4853]: E1122 07:12:48.747955 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 07:12:48 crc kubenswrapper[4853]: E1122 07:12:48.748025 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pd6gs" podUID="9cc2bf97-eb39-4b0c-abda-99b49bb530fd" Nov 22 07:12:48 crc kubenswrapper[4853]: E1122 07:12:48.748151 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 07:12:48 crc kubenswrapper[4853]: E1122 07:12:48.748465 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 07:12:50 crc kubenswrapper[4853]: I1122 07:12:50.747011 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 07:12:50 crc kubenswrapper[4853]: I1122 07:12:50.747050 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:12:50 crc kubenswrapper[4853]: E1122 07:12:50.747166 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 07:12:50 crc kubenswrapper[4853]: I1122 07:12:50.747191 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 07:12:50 crc kubenswrapper[4853]: I1122 07:12:50.747011 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-pd6gs" Nov 22 07:12:50 crc kubenswrapper[4853]: E1122 07:12:50.747255 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 07:12:50 crc kubenswrapper[4853]: E1122 07:12:50.747549 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pd6gs" podUID="9cc2bf97-eb39-4b0c-abda-99b49bb530fd" Nov 22 07:12:50 crc kubenswrapper[4853]: E1122 07:12:50.747886 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 07:12:51 crc kubenswrapper[4853]: E1122 07:12:51.145694 4853 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 22 07:12:52 crc kubenswrapper[4853]: I1122 07:12:52.747192 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pd6gs" Nov 22 07:12:52 crc kubenswrapper[4853]: I1122 07:12:52.747192 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:12:52 crc kubenswrapper[4853]: E1122 07:12:52.747701 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pd6gs" podUID="9cc2bf97-eb39-4b0c-abda-99b49bb530fd" Nov 22 07:12:52 crc kubenswrapper[4853]: I1122 07:12:52.747219 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 07:12:52 crc kubenswrapper[4853]: I1122 07:12:52.747192 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 07:12:52 crc kubenswrapper[4853]: E1122 07:12:52.747918 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 07:12:52 crc kubenswrapper[4853]: E1122 07:12:52.748098 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 07:12:52 crc kubenswrapper[4853]: E1122 07:12:52.748153 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 07:12:54 crc kubenswrapper[4853]: I1122 07:12:54.747441 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:12:54 crc kubenswrapper[4853]: I1122 07:12:54.747539 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pd6gs" Nov 22 07:12:54 crc kubenswrapper[4853]: I1122 07:12:54.747453 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 07:12:54 crc kubenswrapper[4853]: E1122 07:12:54.747622 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 07:12:54 crc kubenswrapper[4853]: I1122 07:12:54.747456 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 07:12:54 crc kubenswrapper[4853]: E1122 07:12:54.747696 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pd6gs" podUID="9cc2bf97-eb39-4b0c-abda-99b49bb530fd" Nov 22 07:12:54 crc kubenswrapper[4853]: E1122 07:12:54.747951 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 07:12:54 crc kubenswrapper[4853]: E1122 07:12:54.748023 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 07:12:55 crc kubenswrapper[4853]: I1122 07:12:55.749043 4853 scope.go:117] "RemoveContainer" containerID="338e7cc28de696b2bd165b4b7d21bb9029ee9f270cf1d43c65ea3934262f0d7d" Nov 22 07:12:56 crc kubenswrapper[4853]: I1122 07:12:56.075075 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-rvgxj_dbbe3472-17cc-48dd-8e46-393b00149429/kube-multus/1.log" Nov 22 07:12:56 crc kubenswrapper[4853]: I1122 07:12:56.075139 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-rvgxj" event={"ID":"dbbe3472-17cc-48dd-8e46-393b00149429","Type":"ContainerStarted","Data":"fc6f64218dd1813a9ea5797839ef5c7d90de0212464d216ff37e24c2c36128fe"} Nov 22 07:12:56 crc kubenswrapper[4853]: E1122 07:12:56.146706 4853 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 22 07:12:56 crc kubenswrapper[4853]: I1122 07:12:56.747141 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 07:12:56 crc kubenswrapper[4853]: I1122 07:12:56.747177 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:12:56 crc kubenswrapper[4853]: I1122 07:12:56.747156 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 07:12:56 crc kubenswrapper[4853]: I1122 07:12:56.747156 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pd6gs" Nov 22 07:12:56 crc kubenswrapper[4853]: E1122 07:12:56.747316 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 07:12:56 crc kubenswrapper[4853]: E1122 07:12:56.747373 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-pd6gs" podUID="9cc2bf97-eb39-4b0c-abda-99b49bb530fd" Nov 22 07:12:56 crc kubenswrapper[4853]: E1122 07:12:56.747446 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 07:12:56 crc kubenswrapper[4853]: E1122 07:12:56.747793 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 07:12:58 crc kubenswrapper[4853]: I1122 07:12:58.747479 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:12:58 crc kubenswrapper[4853]: I1122 07:12:58.747827 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pd6gs" Nov 22 07:12:58 crc kubenswrapper[4853]: I1122 07:12:58.747875 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 07:12:58 crc kubenswrapper[4853]: I1122 07:12:58.747898 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 07:12:58 crc kubenswrapper[4853]: E1122 07:12:58.748131 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 07:12:58 crc kubenswrapper[4853]: I1122 07:12:58.748229 4853 scope.go:117] "RemoveContainer" containerID="6bc7f34ec100b4e47e45d01a9176361b33c988b6033ef25f9b662050421df6ef" Nov 22 07:12:58 crc kubenswrapper[4853]: E1122 07:12:58.748307 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pd6gs" podUID="9cc2bf97-eb39-4b0c-abda-99b49bb530fd" Nov 22 07:12:58 crc kubenswrapper[4853]: E1122 07:12:58.748471 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 07:12:58 crc kubenswrapper[4853]: E1122 07:12:58.748543 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 07:12:59 crc kubenswrapper[4853]: I1122 07:12:59.086994 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pqtsz_893f7e02-580a-4093-ab42-ea73ffffcfe6/ovnkube-controller/3.log" Nov 22 07:12:59 crc kubenswrapper[4853]: I1122 07:12:59.091651 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqtsz" event={"ID":"893f7e02-580a-4093-ab42-ea73ffffcfe6","Type":"ContainerStarted","Data":"d52086eb6365bd264b8e88b5080611781a72c88a17061a9a7d7db1ce43507d3f"} Nov 22 07:12:59 crc kubenswrapper[4853]: I1122 07:12:59.092038 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-pqtsz" Nov 22 07:12:59 crc kubenswrapper[4853]: I1122 07:12:59.845945 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-pqtsz" podStartSLOduration=119.845921484 podStartE2EDuration="1m59.845921484s" podCreationTimestamp="2025-11-22 07:11:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:12:59.11795797 +0000 UTC m=+177.958580606" watchObservedRunningTime="2025-11-22 07:12:59.845921484 +0000 UTC m=+178.686544110" Nov 22 07:12:59 crc kubenswrapper[4853]: I1122 07:12:59.847332 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-pd6gs"] Nov 22 07:12:59 crc kubenswrapper[4853]: I1122 07:12:59.847433 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pd6gs" Nov 22 07:12:59 crc kubenswrapper[4853]: E1122 07:12:59.847545 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pd6gs" podUID="9cc2bf97-eb39-4b0c-abda-99b49bb530fd" Nov 22 07:13:00 crc kubenswrapper[4853]: I1122 07:13:00.746927 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 07:13:00 crc kubenswrapper[4853]: I1122 07:13:00.746998 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 07:13:00 crc kubenswrapper[4853]: I1122 07:13:00.747012 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:13:00 crc kubenswrapper[4853]: E1122 07:13:00.747137 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 07:13:00 crc kubenswrapper[4853]: E1122 07:13:00.747212 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 07:13:00 crc kubenswrapper[4853]: E1122 07:13:00.747288 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 07:13:01 crc kubenswrapper[4853]: E1122 07:13:01.148034 4853 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 22 07:13:01 crc kubenswrapper[4853]: I1122 07:13:01.748015 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pd6gs" Nov 22 07:13:01 crc kubenswrapper[4853]: E1122 07:13:01.748188 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pd6gs" podUID="9cc2bf97-eb39-4b0c-abda-99b49bb530fd" Nov 22 07:13:02 crc kubenswrapper[4853]: I1122 07:13:02.747170 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 07:13:02 crc kubenswrapper[4853]: I1122 07:13:02.747234 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:13:02 crc kubenswrapper[4853]: I1122 07:13:02.747330 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 07:13:02 crc kubenswrapper[4853]: E1122 07:13:02.747441 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 07:13:02 crc kubenswrapper[4853]: E1122 07:13:02.747559 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 07:13:02 crc kubenswrapper[4853]: E1122 07:13:02.747661 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 07:13:02 crc kubenswrapper[4853]: I1122 07:13:02.890336 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:02 crc kubenswrapper[4853]: E1122 07:13:02.890676 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:15:04.890649713 +0000 UTC m=+303.731272339 (durationBeforeRetry 2m2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:02 crc kubenswrapper[4853]: I1122 07:13:02.991263 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 07:13:02 crc kubenswrapper[4853]: I1122 07:13:02.991327 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:13:02 crc kubenswrapper[4853]: I1122 07:13:02.991366 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:13:02 crc kubenswrapper[4853]: I1122 07:13:02.991395 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 07:13:02 crc kubenswrapper[4853]: E1122 07:13:02.991509 4853 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 22 07:13:02 crc kubenswrapper[4853]: E1122 07:13:02.991552 4853 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 22 07:13:02 crc kubenswrapper[4853]: E1122 07:13:02.991571 4853 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 22 07:13:02 crc kubenswrapper[4853]: E1122 07:13:02.991568 4853 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 22 07:13:02 crc kubenswrapper[4853]: E1122 07:13:02.991578 4853 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 22 07:13:02 crc 
kubenswrapper[4853]: E1122 07:13:02.991646 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-22 07:15:04.991621707 +0000 UTC m=+303.832244333 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 22 07:13:02 crc kubenswrapper[4853]: E1122 07:13:02.991578 4853 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 22 07:13:02 crc kubenswrapper[4853]: E1122 07:13:02.991667 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-22 07:15:04.991658888 +0000 UTC m=+303.832281514 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 22 07:13:02 crc kubenswrapper[4853]: E1122 07:13:02.991714 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-22 07:15:04.991692229 +0000 UTC m=+303.832314885 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 22 07:13:02 crc kubenswrapper[4853]: E1122 07:13:02.991598 4853 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 22 07:13:02 crc kubenswrapper[4853]: E1122 07:13:02.991779 4853 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 22 07:13:02 crc kubenswrapper[4853]: E1122 07:13:02.991845 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-22 07:15:04.991827062 +0000 UTC m=+303.832449728 (durationBeforeRetry 2m2s). 
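
These MountVolume.SetUp failures (the "Error:" detail for kube-api-access-s2dwl continues immediately below) show the dependency that resolves at 07:13:06 further down, when the reflector logs "Caches populated" for the kube-root-ca.crt and openshift-service-ca.crt ConfigMaps: a kube-api-access projected volume bundles the ServiceAccount token with those two ConfigMaps, and SetUp keeps failing with "object ... not registered" until the kubelet's watch-based cache has seen both objects. A minimal sketch of that gating, using a toy map-based cache; the type and function names are illustrative, not kubelet's real API:

package main

import "fmt"

// Toy stand-in for the kubelet's per-pod object cache. Keys are
// "namespace/name"; values are the object payloads.
type objectCache map[string][]byte

// buildKubeAPIAccess assembles the ConfigMap portion of a projected
// kube-api-access volume, failing with "not registered" errors while
// either source ConfigMap is still missing from the cache.
func buildKubeAPIAccess(cache objectCache, ns string) (map[string][]byte, error) {
	var missing []string
	payload := map[string][]byte{}
	for _, cm := range []string{"kube-root-ca.crt", "openshift-service-ca.crt"} {
		data, ok := cache[ns+"/"+cm]
		if !ok {
			missing = append(missing, fmt.Sprintf("object %q/%q not registered", ns, cm))
			continue
		}
		payload[cm] = data // simplified: real projection maps each source to its own path
	}
	if len(missing) > 0 {
		return nil, fmt.Errorf("error preparing projected volume: %v", missing)
	}
	return payload, nil
}

func main() {
	cache := objectCache{}
	ns := "openshift-network-diagnostics"

	// Before the reflector syncs: the same failure mode as the log above.
	if _, err := buildKubeAPIAccess(cache, ns); err != nil {
		fmt.Println("SetUp failed:", err)
	}

	// After "Caches populated for *v1.ConfigMap ..." the mount can proceed.
	cache[ns+"/kube-root-ca.crt"] = []byte("...")
	cache[ns+"/openshift-service-ca.crt"] = []byte("...")
	if _, err := buildKubeAPIAccess(cache, ns); err == nil {
		fmt.Println("SetUp ok")
	}
}
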
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 22 07:13:03 crc kubenswrapper[4853]: I1122 07:13:03.747344 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pd6gs" Nov 22 07:13:03 crc kubenswrapper[4853]: E1122 07:13:03.747532 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pd6gs" podUID="9cc2bf97-eb39-4b0c-abda-99b49bb530fd" Nov 22 07:13:04 crc kubenswrapper[4853]: I1122 07:13:04.747740 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 07:13:04 crc kubenswrapper[4853]: I1122 07:13:04.747795 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:13:04 crc kubenswrapper[4853]: E1122 07:13:04.748589 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 07:13:04 crc kubenswrapper[4853]: I1122 07:13:04.747847 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 07:13:04 crc kubenswrapper[4853]: E1122 07:13:04.748762 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 07:13:04 crc kubenswrapper[4853]: E1122 07:13:04.748997 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 07:13:05 crc kubenswrapper[4853]: I1122 07:13:05.746899 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-pd6gs" Nov 22 07:13:05 crc kubenswrapper[4853]: E1122 07:13:05.748312 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pd6gs" podUID="9cc2bf97-eb39-4b0c-abda-99b49bb530fd" Nov 22 07:13:06 crc kubenswrapper[4853]: I1122 07:13:06.747488 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 07:13:06 crc kubenswrapper[4853]: I1122 07:13:06.747614 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:13:06 crc kubenswrapper[4853]: I1122 07:13:06.747719 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 07:13:06 crc kubenswrapper[4853]: I1122 07:13:06.749648 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Nov 22 07:13:06 crc kubenswrapper[4853]: I1122 07:13:06.749905 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Nov 22 07:13:06 crc kubenswrapper[4853]: I1122 07:13:06.750818 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Nov 22 07:13:06 crc kubenswrapper[4853]: I1122 07:13:06.751014 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Nov 22 07:13:07 crc kubenswrapper[4853]: I1122 07:13:07.748088 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pd6gs" Nov 22 07:13:07 crc kubenswrapper[4853]: I1122 07:13:07.751639 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Nov 22 07:13:07 crc kubenswrapper[4853]: I1122 07:13:07.752975 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.189024 4853 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.228895 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-xmnqz"] Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.229653 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-xmnqz" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.230132 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-9qkgc"] Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.230436 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-9qkgc" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.231080 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-dvcg6"] Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.231968 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dvcg6" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.232771 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kc8zd"] Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.233209 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kc8zd" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.237898 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.237954 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.238080 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.238205 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.237949 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.238268 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.238322 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.238395 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.238425 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.238483 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.238735 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.238911 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.238611 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.239679 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-rh6fb"] Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.240417 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-rh6fb" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.241096 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.241634 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.242314 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.245527 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-p5l64"] Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.246362 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-p5l64" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.246707 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-pbzlk"] Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.247432 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-pbzlk" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.252176 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-dbd5p"] Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.252737 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-hpb7j"] Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.253065 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-bqk2r"] Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.253523 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-bqk2r" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.253686 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-hpb7j" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.253908 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-dbd5p" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.257070 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wzbj5"] Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.257290 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.257607 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wzbj5" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.258768 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.263675 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-cll6l"] Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.282996 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-psplq"] Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.283698 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-psplq" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.288591 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-5pn5x"] Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.290722 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-5pn5x" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.293325 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-cll6l" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.310323 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-5nds5"] Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.311075 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-5nds5" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.311468 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-9qfvq"] Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.312098 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-9qfvq" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.313414 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.313544 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.313632 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.313813 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.313864 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.313898 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.314000 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.314048 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.314166 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.314183 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.314345 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.314379 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-jfgsj"] Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.314513 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.314653 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.313815 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.315065 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.315118 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.315886 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.316089 4853 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-console-operator"/"openshift-service-ca.crt" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.316263 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.316554 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.316675 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.317265 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.317306 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.317521 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.317552 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.317645 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.317685 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.317781 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.317841 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.317882 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.317898 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.317983 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.317991 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.318041 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.318078 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.318141 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.317790 4853 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx"
Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.318163 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.317983 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.318145 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.318344 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.318390 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.318485 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.318489 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.323202 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.323442 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.324121 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx"
Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.324268 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.326303 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.326352 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.326405 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.326568 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z"
Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.328828 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.329565 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.330238 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.331471 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-h486l"]
Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.331927 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7"
Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.332322 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-jfgsj"
Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.332497 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/05b7fb71-56a6-4875-a680-995a1a2194d6-image-import-ca\") pod \"apiserver-76f77b778f-xmnqz\" (UID: \"05b7fb71-56a6-4875-a680-995a1a2194d6\") " pod="openshift-apiserver/apiserver-76f77b778f-xmnqz"
Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.332563 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6d3c61d5-518d-443e-beb3-a0bf27a07be4-oauth-serving-cert\") pod \"console-f9d7485db-5nds5\" (UID: \"6d3c61d5-518d-443e-beb3-a0bf27a07be4\") " pod="openshift-console/console-f9d7485db-5nds5"
Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.332596 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0d10e537-edf1-40b9-a8a7-038237e48834-audit-policies\") pod \"apiserver-7bbb656c7d-dvcg6\" (UID: \"0d10e537-edf1-40b9-a8a7-038237e48834\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dvcg6"
Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.332621 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b512d3e0-dee2-48d3-87e3-b05fb2cf8ed7-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-9qfvq\" (UID: \"b512d3e0-dee2-48d3-87e3-b05fb2cf8ed7\") " pod="openshift-authentication/oauth-openshift-558db77b4-9qfvq"
Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.332648 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b512d3e0-dee2-48d3-87e3-b05fb2cf8ed7-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-9qfvq\" (UID: \"b512d3e0-dee2-48d3-87e3-b05fb2cf8ed7\") " pod="openshift-authentication/oauth-openshift-558db77b4-9qfvq"
Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.332672 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b512d3e0-dee2-48d3-87e3-b05fb2cf8ed7-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-9qfvq\" (UID: \"b512d3e0-dee2-48d3-87e3-b05fb2cf8ed7\") " pod="openshift-authentication/oauth-openshift-558db77b4-9qfvq"
Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.332702 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/065d5bdc-7e13-4a03-aa2d-5b7dd3b3938c-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-rh6fb\" (UID: \"065d5bdc-7e13-4a03-aa2d-5b7dd3b3938c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-rh6fb"
Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.332733 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/90eeaa0a-6939-40a5-821c-82579c812f3b-serving-cert\") pod \"authentication-operator-69f744f599-dbd5p\" (UID: \"90eeaa0a-6939-40a5-821c-82579c812f3b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-dbd5p"
Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.332775 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6d3c61d5-518d-443e-beb3-a0bf27a07be4-console-oauth-config\") pod \"console-f9d7485db-5nds5\" (UID: \"6d3c61d5-518d-443e-beb3-a0bf27a07be4\") " pod="openshift-console/console-f9d7485db-5nds5"
Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.332798 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6d3c61d5-518d-443e-beb3-a0bf27a07be4-service-ca\") pod \"console-f9d7485db-5nds5\" (UID: \"6d3c61d5-518d-443e-beb3-a0bf27a07be4\") " pod="openshift-console/console-f9d7485db-5nds5"
Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.332829 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4df1a0b5-a039-4098-a88e-96015dcf1406-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-kc8zd\" (UID: \"4df1a0b5-a039-4098-a88e-96015dcf1406\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kc8zd"
Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.332853 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4df1a0b5-a039-4098-a88e-96015dcf1406-config\") pod \"openshift-apiserver-operator-796bbdcf4f-kc8zd\" (UID: \"4df1a0b5-a039-4098-a88e-96015dcf1406\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kc8zd"
Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.332877 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6d3c61d5-518d-443e-beb3-a0bf27a07be4-trusted-ca-bundle\") pod \"console-f9d7485db-5nds5\" (UID: \"6d3c61d5-518d-443e-beb3-a0bf27a07be4\") " pod="openshift-console/console-f9d7485db-5nds5"
Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.332900 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-glccb\" (UniqueName: \"kubernetes.io/projected/341b4f0c-09ee-4297-99c4-b8e6334de4ed-kube-api-access-glccb\") pod \"machine-approver-56656f9798-pbzlk\" (UID: \"341b4f0c-09ee-4297-99c4-b8e6334de4ed\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-pbzlk"
Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.332929 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/90eeaa0a-6939-40a5-821c-82579c812f3b-config\") pod \"authentication-operator-69f744f599-dbd5p\" (UID: \"90eeaa0a-6939-40a5-821c-82579c812f3b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-dbd5p"
Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.332952 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0d10e537-edf1-40b9-a8a7-038237e48834-etcd-client\") pod \"apiserver-7bbb656c7d-dvcg6\" (UID: \"0d10e537-edf1-40b9-a8a7-038237e48834\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dvcg6"
Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.333026 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/05b7fb71-56a6-4875-a680-995a1a2194d6-trusted-ca-bundle\") pod \"apiserver-76f77b778f-xmnqz\" (UID: \"05b7fb71-56a6-4875-a680-995a1a2194d6\") " pod="openshift-apiserver/apiserver-76f77b778f-xmnqz"
Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.333092 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-zxl69"]
Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.333275 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-h486l"
Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.333091 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/90eeaa0a-6939-40a5-821c-82579c812f3b-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-dbd5p\" (UID: \"90eeaa0a-6939-40a5-821c-82579c812f3b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-dbd5p"
Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.333565 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-zxl69"
Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.333583 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/90b00b61-4e40-4e08-b164-643608e91dd0-serving-cert\") pod \"controller-manager-879f6c89f-9qkgc\" (UID: \"90b00b61-4e40-4e08-b164-643608e91dd0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9qkgc"
Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.333612 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kjsjt\" (UniqueName: \"kubernetes.io/projected/bcd72804-cd09-4ec3-ae4a-f539958eb90c-kube-api-access-kjsjt\") pod \"downloads-7954f5f757-hpb7j\" (UID: \"bcd72804-cd09-4ec3-ae4a-f539958eb90c\") " pod="openshift-console/downloads-7954f5f757-hpb7j"
Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.333641 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/341b4f0c-09ee-4297-99c4-b8e6334de4ed-config\") pod \"machine-approver-56656f9798-pbzlk\" (UID: \"341b4f0c-09ee-4297-99c4-b8e6334de4ed\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-pbzlk"
Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.333666 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/05b7fb71-56a6-4875-a680-995a1a2194d6-etcd-serving-ca\") pod \"apiserver-76f77b778f-xmnqz\" (UID: \"05b7fb71-56a6-4875-a680-995a1a2194d6\") " pod="openshift-apiserver/apiserver-76f77b778f-xmnqz"
Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.333687 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0d10e537-edf1-40b9-a8a7-038237e48834-audit-dir\") pod \"apiserver-7bbb656c7d-dvcg6\" (UID: \"0d10e537-edf1-40b9-a8a7-038237e48834\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dvcg6"
Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.333707 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kq2vl\" (UniqueName: \"kubernetes.io/projected/065d5bdc-7e13-4a03-aa2d-5b7dd3b3938c-kube-api-access-kq2vl\") pod \"machine-api-operator-5694c8668f-rh6fb\" (UID: \"065d5bdc-7e13-4a03-aa2d-5b7dd3b3938c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-rh6fb"
Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.333731 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0fdfc9f2-e63f-48f4-89ad-94ef8b642d04-config\") pod \"route-controller-manager-6576b87f9c-p5l64\" (UID: \"0fdfc9f2-e63f-48f4-89ad-94ef8b642d04\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-p5l64"
Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.333774 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e118bf40-4574-410f-bb2f-b5eb601974e5-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-5pn5x\" (UID: \"e118bf40-4574-410f-bb2f-b5eb601974e5\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-5pn5x"
Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.333802 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2715796f-e4b0-4400-a02c-a485171a9858-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-wzbj5\" (UID: \"2715796f-e4b0-4400-a02c-a485171a9858\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wzbj5"
Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.333830 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/2715796f-e4b0-4400-a02c-a485171a9858-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-wzbj5\" (UID: \"2715796f-e4b0-4400-a02c-a485171a9858\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wzbj5"
Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.333854 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/065d5bdc-7e13-4a03-aa2d-5b7dd3b3938c-images\") pod \"machine-api-operator-5694c8668f-rh6fb\" (UID: \"065d5bdc-7e13-4a03-aa2d-5b7dd3b3938c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-rh6fb"
Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.334122 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/05b7fb71-56a6-4875-a680-995a1a2194d6-etcd-client\") pod \"apiserver-76f77b778f-xmnqz\" (UID: \"05b7fb71-56a6-4875-a680-995a1a2194d6\") " pod="openshift-apiserver/apiserver-76f77b778f-xmnqz"
Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.334169 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/b512d3e0-dee2-48d3-87e3-b05fb2cf8ed7-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-9qfvq\" (UID: \"b512d3e0-dee2-48d3-87e3-b05fb2cf8ed7\") " pod="openshift-authentication/oauth-openshift-558db77b4-9qfvq"
Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.334226 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b512d3e0-dee2-48d3-87e3-b05fb2cf8ed7-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-9qfvq\" (UID: \"b512d3e0-dee2-48d3-87e3-b05fb2cf8ed7\") " pod="openshift-authentication/oauth-openshift-558db77b4-9qfvq"
Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.334335 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/90b00b61-4e40-4e08-b164-643608e91dd0-config\") pod \"controller-manager-879f6c89f-9qkgc\" (UID: \"90b00b61-4e40-4e08-b164-643608e91dd0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9qkgc"
Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.334364 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9fd9n\" (UniqueName: \"kubernetes.io/projected/0d10e537-edf1-40b9-a8a7-038237e48834-kube-api-access-9fd9n\") pod \"apiserver-7bbb656c7d-dvcg6\" (UID: \"0d10e537-edf1-40b9-a8a7-038237e48834\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dvcg6"
Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.334393 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f5444051-a1d3-4854-8b30-367e3fd2c123-trusted-ca\") pod \"console-operator-58897d9998-psplq\" (UID: \"f5444051-a1d3-4854-8b30-367e3fd2c123\") " pod="openshift-console-operator/console-operator-58897d9998-psplq"
Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.334420 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/05b7fb71-56a6-4875-a680-995a1a2194d6-node-pullsecrets\") pod \"apiserver-76f77b778f-xmnqz\" (UID: \"05b7fb71-56a6-4875-a680-995a1a2194d6\") " pod="openshift-apiserver/apiserver-76f77b778f-xmnqz"
Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.334445 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6d3c61d5-518d-443e-beb3-a0bf27a07be4-console-config\") pod \"console-f9d7485db-5nds5\" (UID: \"6d3c61d5-518d-443e-beb3-a0bf27a07be4\") " pod="openshift-console/console-f9d7485db-5nds5"
Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.334514 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/05b7fb71-56a6-4875-a680-995a1a2194d6-audit\") pod \"apiserver-76f77b778f-xmnqz\" (UID: \"05b7fb71-56a6-4875-a680-995a1a2194d6\") " pod="openshift-apiserver/apiserver-76f77b778f-xmnqz"
Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.334555 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/90eeaa0a-6939-40a5-821c-82579c812f3b-service-ca-bundle\") pod \"authentication-operator-69f744f599-dbd5p\" (UID: \"90eeaa0a-6939-40a5-821c-82579c812f3b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-dbd5p"
Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.334657 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhvpp\" (UniqueName: \"kubernetes.io/projected/2454431f-55ed-4abb-b70f-9382007e9026-kube-api-access-nhvpp\") pod \"openshift-config-operator-7777fb866f-bqk2r\" (UID: \"2454431f-55ed-4abb-b70f-9382007e9026\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-bqk2r"
Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.334679 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b512d3e0-dee2-48d3-87e3-b05fb2cf8ed7-audit-policies\") pod \"oauth-openshift-558db77b4-9qfvq\" (UID: \"b512d3e0-dee2-48d3-87e3-b05fb2cf8ed7\") " pod="openshift-authentication/oauth-openshift-558db77b4-9qfvq"
Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.334702 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b512d3e0-dee2-48d3-87e3-b05fb2cf8ed7-audit-dir\") pod \"oauth-openshift-558db77b4-9qfvq\" (UID: \"b512d3e0-dee2-48d3-87e3-b05fb2cf8ed7\") " pod="openshift-authentication/oauth-openshift-558db77b4-9qfvq"
Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.334741 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b512d3e0-dee2-48d3-87e3-b05fb2cf8ed7-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-9qfvq\" (UID: \"b512d3e0-dee2-48d3-87e3-b05fb2cf8ed7\") " pod="openshift-authentication/oauth-openshift-558db77b4-9qfvq"
Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.334783 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4hhbn\" (UniqueName: \"kubernetes.io/projected/e118bf40-4574-410f-bb2f-b5eb601974e5-kube-api-access-4hhbn\") pod \"openshift-controller-manager-operator-756b6f6bc6-5pn5x\" (UID: \"e118bf40-4574-410f-bb2f-b5eb601974e5\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-5pn5x"
Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.334833 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/0d10e537-edf1-40b9-a8a7-038237e48834-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-dvcg6\" (UID: \"0d10e537-edf1-40b9-a8a7-038237e48834\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dvcg6"
Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.334855 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f5444051-a1d3-4854-8b30-367e3fd2c123-serving-cert\") pod \"console-operator-58897d9998-psplq\" (UID: \"f5444051-a1d3-4854-8b30-367e3fd2c123\") " pod="openshift-console-operator/console-operator-58897d9998-psplq"
Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.334896 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6d3c61d5-518d-443e-beb3-a0bf27a07be4-console-serving-cert\") pod \"console-f9d7485db-5nds5\" (UID: \"6d3c61d5-518d-443e-beb3-a0bf27a07be4\") " pod="openshift-console/console-f9d7485db-5nds5"
Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.334917 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b512d3e0-dee2-48d3-87e3-b05fb2cf8ed7-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-9qfvq\" (UID: \"b512d3e0-dee2-48d3-87e3-b05fb2cf8ed7\") " pod="openshift-authentication/oauth-openshift-558db77b4-9qfvq"
Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.334935 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/341b4f0c-09ee-4297-99c4-b8e6334de4ed-auth-proxy-config\") pod \"machine-approver-56656f9798-pbzlk\" (UID: \"341b4f0c-09ee-4297-99c4-b8e6334de4ed\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-pbzlk"
Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.334956 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/341b4f0c-09ee-4297-99c4-b8e6334de4ed-machine-approver-tls\") pod \"machine-approver-56656f9798-pbzlk\" (UID: \"341b4f0c-09ee-4297-99c4-b8e6334de4ed\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-pbzlk"
Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.334977 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2715796f-e4b0-4400-a02c-a485171a9858-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-wzbj5\" (UID: \"2715796f-e4b0-4400-a02c-a485171a9858\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wzbj5"
Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.334998 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/05b7fb71-56a6-4875-a680-995a1a2194d6-serving-cert\") pod \"apiserver-76f77b778f-xmnqz\" (UID: \"05b7fb71-56a6-4875-a680-995a1a2194d6\") " pod="openshift-apiserver/apiserver-76f77b778f-xmnqz"
Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.335020 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0fdfc9f2-e63f-48f4-89ad-94ef8b642d04-serving-cert\") pod \"route-controller-manager-6576b87f9c-p5l64\" (UID: \"0fdfc9f2-e63f-48f4-89ad-94ef8b642d04\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-p5l64"
Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.335049 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/065d5bdc-7e13-4a03-aa2d-5b7dd3b3938c-config\") pod \"machine-api-operator-5694c8668f-rh6fb\" (UID: \"065d5bdc-7e13-4a03-aa2d-5b7dd3b3938c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-rh6fb"
Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.335094 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/0d10e537-edf1-40b9-a8a7-038237e48834-encryption-config\") pod \"apiserver-7bbb656c7d-dvcg6\" (UID: \"0d10e537-edf1-40b9-a8a7-038237e48834\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dvcg6"
Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.335115 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0fdfc9f2-e63f-48f4-89ad-94ef8b642d04-client-ca\") pod \"route-controller-manager-6576b87f9c-p5l64\" (UID: \"0fdfc9f2-e63f-48f4-89ad-94ef8b642d04\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-p5l64"
Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.335173 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sq7n2\" (UniqueName: \"kubernetes.io/projected/0fdfc9f2-e63f-48f4-89ad-94ef8b642d04-kube-api-access-sq7n2\") pod \"route-controller-manager-6576b87f9c-p5l64\" (UID: \"0fdfc9f2-e63f-48f4-89ad-94ef8b642d04\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-p5l64"
Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.335233 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b512d3e0-dee2-48d3-87e3-b05fb2cf8ed7-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-9qfvq\" (UID: \"b512d3e0-dee2-48d3-87e3-b05fb2cf8ed7\") " pod="openshift-authentication/oauth-openshift-558db77b4-9qfvq"
Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.335313 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/05b7fb71-56a6-4875-a680-995a1a2194d6-audit-dir\") pod \"apiserver-76f77b778f-xmnqz\" (UID: \"05b7fb71-56a6-4875-a680-995a1a2194d6\") " pod="openshift-apiserver/apiserver-76f77b778f-xmnqz"
Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.335376 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhxxn\" (UniqueName: \"kubernetes.io/projected/6d3c61d5-518d-443e-beb3-a0bf27a07be4-kube-api-access-qhxxn\") pod \"console-f9d7485db-5nds5\" (UID: \"6d3c61d5-518d-443e-beb3-a0bf27a07be4\") " pod="openshift-console/console-f9d7485db-5nds5"
Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.335399 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lslkp\" (UniqueName: \"kubernetes.io/projected/f5444051-a1d3-4854-8b30-367e3fd2c123-kube-api-access-lslkp\") pod \"console-operator-58897d9998-psplq\" (UID: \"f5444051-a1d3-4854-8b30-367e3fd2c123\") " pod="openshift-console-operator/console-operator-58897d9998-psplq"
Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.335414 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/05b7fb71-56a6-4875-a680-995a1a2194d6-encryption-config\") pod \"apiserver-76f77b778f-xmnqz\" (UID: \"05b7fb71-56a6-4875-a680-995a1a2194d6\") " pod="openshift-apiserver/apiserver-76f77b778f-xmnqz"
Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.335461 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/90b00b61-4e40-4e08-b164-643608e91dd0-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-9qkgc\" (UID: \"90b00b61-4e40-4e08-b164-643608e91dd0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9qkgc"
Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.335488 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ccw22\" (UniqueName: \"kubernetes.io/projected/90b00b61-4e40-4e08-b164-643608e91dd0-kube-api-access-ccw22\") pod \"controller-manager-879f6c89f-9qkgc\" (UID: \"90b00b61-4e40-4e08-b164-643608e91dd0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9qkgc"
Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.335506 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05b7fb71-56a6-4875-a680-995a1a2194d6-config\") pod \"apiserver-76f77b778f-xmnqz\" (UID: \"05b7fb71-56a6-4875-a680-995a1a2194d6\") " pod="openshift-apiserver/apiserver-76f77b778f-xmnqz"
Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.335527 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/90b00b61-4e40-4e08-b164-643608e91dd0-client-ca\") pod \"controller-manager-879f6c89f-9qkgc\" (UID: \"90b00b61-4e40-4e08-b164-643608e91dd0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9qkgc"
Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.335548 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2454431f-55ed-4abb-b70f-9382007e9026-serving-cert\") pod \"openshift-config-operator-7777fb866f-bqk2r\" (UID: \"2454431f-55ed-4abb-b70f-9382007e9026\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-bqk2r"
Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.335624 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b512d3e0-dee2-48d3-87e3-b05fb2cf8ed7-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-9qfvq\" (UID: \"b512d3e0-dee2-48d3-87e3-b05fb2cf8ed7\") " pod="openshift-authentication/oauth-openshift-558db77b4-9qfvq"
Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.335658 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f5444051-a1d3-4854-8b30-367e3fd2c123-config\") pod \"console-operator-58897d9998-psplq\" (UID: \"f5444051-a1d3-4854-8b30-367e3fd2c123\") " pod="openshift-console-operator/console-operator-58897d9998-psplq"
Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.335678 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f9hdd\" (UniqueName: \"kubernetes.io/projected/05b7fb71-56a6-4875-a680-995a1a2194d6-kube-api-access-f9hdd\") pod \"apiserver-76f77b778f-xmnqz\" (UID: \"05b7fb71-56a6-4875-a680-995a1a2194d6\") " pod="openshift-apiserver/apiserver-76f77b778f-xmnqz"
Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.335704 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/ad5235a6-36eb-42fc-8a56-d8464014b881-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-cll6l\" (UID: \"ad5235a6-36eb-42fc-8a56-d8464014b881\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-cll6l"
Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.335772 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rtt47\" (UniqueName: \"kubernetes.io/projected/4df1a0b5-a039-4098-a88e-96015dcf1406-kube-api-access-rtt47\") pod \"openshift-apiserver-operator-796bbdcf4f-kc8zd\" (UID: \"4df1a0b5-a039-4098-a88e-96015dcf1406\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kc8zd"
Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.335803 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e118bf40-4574-410f-bb2f-b5eb601974e5-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-5pn5x\" (UID: \"e118bf40-4574-410f-bb2f-b5eb601974e5\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-5pn5x"
Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.335824 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b512d3e0-dee2-48d3-87e3-b05fb2cf8ed7-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-9qfvq\" (UID: \"b512d3e0-dee2-48d3-87e3-b05fb2cf8ed7\") " pod="openshift-authentication/oauth-openshift-558db77b4-9qfvq"
Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.335848 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b512d3e0-dee2-48d3-87e3-b05fb2cf8ed7-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-9qfvq\" (UID: \"b512d3e0-dee2-48d3-87e3-b05fb2cf8ed7\") " pod="openshift-authentication/oauth-openshift-558db77b4-9qfvq"
Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.335870 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qws8t\" (UniqueName: \"kubernetes.io/projected/b512d3e0-dee2-48d3-87e3-b05fb2cf8ed7-kube-api-access-qws8t\") pod \"oauth-openshift-558db77b4-9qfvq\" (UID: \"b512d3e0-dee2-48d3-87e3-b05fb2cf8ed7\") " pod="openshift-authentication/oauth-openshift-558db77b4-9qfvq"
Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.335903 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/2454431f-55ed-4abb-b70f-9382007e9026-available-featuregates\") pod \"openshift-config-operator-7777fb866f-bqk2r\" (UID: \"2454431f-55ed-4abb-b70f-9382007e9026\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-bqk2r"
Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.336042 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-52r87\" (UniqueName: \"kubernetes.io/projected/90eeaa0a-6939-40a5-821c-82579c812f3b-kube-api-access-52r87\") pod \"authentication-operator-69f744f599-dbd5p\" (UID: \"90eeaa0a-6939-40a5-821c-82579c812f3b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-dbd5p"
Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.337659 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2stwm"]
Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.336067 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c7ctw\" (UniqueName: \"kubernetes.io/projected/ad5235a6-36eb-42fc-8a56-d8464014b881-kube-api-access-c7ctw\") pod \"cluster-samples-operator-665b6dd947-cll6l\" (UID: \"ad5235a6-36eb-42fc-8a56-d8464014b881\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-cll6l"
Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.338155 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0d10e537-edf1-40b9-a8a7-038237e48834-serving-cert\") pod \"apiserver-7bbb656c7d-dvcg6\" (UID: \"0d10e537-edf1-40b9-a8a7-038237e48834\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dvcg6"
Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.338183 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0d10e537-edf1-40b9-a8a7-038237e48834-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-dvcg6\" (UID: \"0d10e537-edf1-40b9-a8a7-038237e48834\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dvcg6"
Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.338224 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2vfnr\" (UniqueName: \"kubernetes.io/projected/2715796f-e4b0-4400-a02c-a485171a9858-kube-api-access-2vfnr\") pod \"cluster-image-registry-operator-dc59b4c8b-wzbj5\" (UID: \"2715796f-e4b0-4400-a02c-a485171a9858\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wzbj5"
Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.338729 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-gwwg5"]
Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.339107 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w"
Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.339204 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-hktm5"]
Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.339433 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2stwm"
Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.339806 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-hktm5"
Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.339857 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-gwwg5"
Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.341229 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.355017 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-696ts"]
Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.355778 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-cfkqt"]
Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.355858 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.356069 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-n6vz6"]
Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.356483 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-n6vz6"
Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.356998 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-696ts"
Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.357152 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-cfkqt"
Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.357182 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.357369 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.357631 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert"
Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.365919 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bzv6w"]
Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.367074 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7jnds"]
Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.367862 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7jnds"
Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.368354 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bzv6w"
Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.394116 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-cjq85"]
Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.394968 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-cjq85"
Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.395023 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data"
Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.395214 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.395373 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.395481 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.395614 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.395857 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.395979 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt"
Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.396102 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.396206 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.396679 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.396804 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.396955 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc"
Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.397571 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.397792 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config"
Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.400119 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.400319 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca"
Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.403565 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.403714 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.404053 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw"
Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.404074 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.405000 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-22nw6"]
Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.405699 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-zqjqq"]
Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.406267 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-zqjqq"
Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.406546 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-22nw6"
Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.407555 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.407611 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw"
Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.407779 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca"
Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.409455 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-t2flg"]
Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.410362 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-9qkgc"]
Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.410473 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-t2flg"
Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.409481 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle"
Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.412683 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg"
Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.414303 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-9kg95"]
Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.414894 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.415200 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-9kg95"
Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.416208 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kc8zd"]
Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.417288 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rv68m"]
Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.417861 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rv68m"
Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.418501 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396580-xlrxm"]
Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.419451 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396580-xlrxm"
Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.420699 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-djgfn"]
Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.421609 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-djgfn"
Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.423840 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-h2jnh"]
Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.424472 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-h2jnh"
Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.424862 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-flwkb"]
Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.425318 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-flwkb"
Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.426467 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-2p6qj"]
Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.427112 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-2p6qj"
Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.430246 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-rh6fb"]
Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.431850 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-dvcg6"]
Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.433086 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.434027 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-xmnqz"]
Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.436639 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-bnk7x"]
Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.440271 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f5444051-a1d3-4854-8b30-367e3fd2c123-serving-cert\") pod \"console-operator-58897d9998-psplq\" (UID: \"f5444051-a1d3-4854-8b30-367e3fd2c123\") " pod="openshift-console-operator/console-operator-58897d9998-psplq"
Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.440306 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-dbd5p"]
Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.440342 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/42e0f31e-1622-4388-9852-f22966d156f4-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-cfkqt\" (UID: \"42e0f31e-1622-4388-9852-f22966d156f4\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-cfkqt"
Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.440370 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/83b3203a-f1e8-4d8e-8c42-4932026537ee-proxy-tls\") pod \"machine-config-controller-84d6567774-696ts\" (UID: \"83b3203a-f1e8-4d8e-8c42-4932026537ee\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-696ts"
Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.440407 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9l4t8\" (UniqueName: \"kubernetes.io/projected/e2ccfb3a-48b6-4367-abae-d5ac6d053f77-kube-api-access-9l4t8\") pod \"migrator-59844c95c7-jfgsj\" (UID: \"e2ccfb3a-48b6-4367-abae-d5ac6d053f77\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-jfgsj"
Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.440434 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/0d10e537-edf1-40b9-a8a7-038237e48834-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-dvcg6\" (UID: \"0d10e537-edf1-40b9-a8a7-038237e48834\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dvcg6"
Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.440453 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6d3c61d5-518d-443e-beb3-a0bf27a07be4-console-serving-cert\") pod \"console-f9d7485db-5nds5\" (UID: \"6d3c61d5-518d-443e-beb3-a0bf27a07be4\") " pod="openshift-console/console-f9d7485db-5nds5"
Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.440461 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-bnk7x"
Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.440493 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b512d3e0-dee2-48d3-87e3-b05fb2cf8ed7-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-9qfvq\" (UID: \"b512d3e0-dee2-48d3-87e3-b05fb2cf8ed7\") " pod="openshift-authentication/oauth-openshift-558db77b4-9qfvq"
Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.440512 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4be3f473-ecf9-464d-b363-f28c82456652-config\") pod \"kube-controller-manager-operator-78b949d7b-zxl69\" (UID: \"4be3f473-ecf9-464d-b363-f28c82456652\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-zxl69"
Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.440571 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2715796f-e4b0-4400-a02c-a485171a9858-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-wzbj5\" (UID: \"2715796f-e4b0-4400-a02c-a485171a9858\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wzbj5"
Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.440591 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/05b7fb71-56a6-4875-a680-995a1a2194d6-serving-cert\") pod \"apiserver-76f77b778f-xmnqz\" (UID: \"05b7fb71-56a6-4875-a680-995a1a2194d6\") " pod="openshift-apiserver/apiserver-76f77b778f-xmnqz"
Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.440608 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0fdfc9f2-e63f-48f4-89ad-94ef8b642d04-serving-cert\") pod \"route-controller-manager-6576b87f9c-p5l64\" (UID: \"0fdfc9f2-e63f-48f4-89ad-94ef8b642d04\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-p5l64"
Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.440646 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/341b4f0c-09ee-4297-99c4-b8e6334de4ed-auth-proxy-config\") pod \"machine-approver-56656f9798-pbzlk\" (UID: \"341b4f0c-09ee-4297-99c4-b8e6334de4ed\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-pbzlk"
Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.440676 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/341b4f0c-09ee-4297-99c4-b8e6334de4ed-machine-approver-tls\") pod \"machine-approver-56656f9798-pbzlk\" (UID: \"341b4f0c-09ee-4297-99c4-b8e6334de4ed\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-pbzlk"
Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.440696 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/065d5bdc-7e13-4a03-aa2d-5b7dd3b3938c-config\") pod \"machine-api-operator-5694c8668f-rh6fb\" (UID: \"065d5bdc-7e13-4a03-aa2d-5b7dd3b3938c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-rh6fb"
Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.440731 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/0d10e537-edf1-40b9-a8a7-038237e48834-encryption-config\") pod \"apiserver-7bbb656c7d-dvcg6\" (UID: \"0d10e537-edf1-40b9-a8a7-038237e48834\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dvcg6"
Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.440776 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vzh8s\" (UniqueName: \"kubernetes.io/projected/b8aed1ad-ec7d-4e5d-b60a-b8d7bee2a0cc-kube-api-access-vzh8s\") pod \"marketplace-operator-79b997595-gwwg5\" (UID: \"b8aed1ad-ec7d-4e5d-b60a-b8d7bee2a0cc\") " pod="openshift-marketplace/marketplace-operator-79b997595-gwwg5"
Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.440812 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/05b7fb71-56a6-4875-a680-995a1a2194d6-audit-dir\") pod \"apiserver-76f77b778f-xmnqz\" (UID: \"05b7fb71-56a6-4875-a680-995a1a2194d6\") " pod="openshift-apiserver/apiserver-76f77b778f-xmnqz"
Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.440855 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0fdfc9f2-e63f-48f4-89ad-94ef8b642d04-client-ca\") pod \"route-controller-manager-6576b87f9c-p5l64\" (UID: \"0fdfc9f2-e63f-48f4-89ad-94ef8b642d04\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-p5l64"
Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.440877 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sq7n2\" (UniqueName: \"kubernetes.io/projected/0fdfc9f2-e63f-48f4-89ad-94ef8b642d04-kube-api-access-sq7n2\") pod \"route-controller-manager-6576b87f9c-p5l64\" (UID: \"0fdfc9f2-e63f-48f4-89ad-94ef8b642d04\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-p5l64"
Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.440900 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b512d3e0-dee2-48d3-87e3-b05fb2cf8ed7-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-9qfvq\" (UID: \"b512d3e0-dee2-48d3-87e3-b05fb2cf8ed7\") " pod="openshift-authentication/oauth-openshift-558db77b4-9qfvq"
Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.440941 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qhxxn\" (UniqueName: \"kubernetes.io/projected/6d3c61d5-518d-443e-beb3-a0bf27a07be4-kube-api-access-qhxxn\") pod \"console-f9d7485db-5nds5\" (UID: \"6d3c61d5-518d-443e-beb3-a0bf27a07be4\") " pod="openshift-console/console-f9d7485db-5nds5"
Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.440962 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lslkp\" (UniqueName: \"kubernetes.io/projected/f5444051-a1d3-4854-8b30-367e3fd2c123-kube-api-access-lslkp\") pod \"console-operator-58897d9998-psplq\" (UID: \"f5444051-a1d3-4854-8b30-367e3fd2c123\") " pod="openshift-console-operator/console-operator-58897d9998-psplq"
Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.440982 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/606305dc-db05-45e6-8409-fdb1ca8ca988-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-rv68m\" (UID: \"606305dc-db05-45e6-8409-fdb1ca8ca988\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rv68m"
Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.441033 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/05b7fb71-56a6-4875-a680-995a1a2194d6-encryption-config\") pod \"apiserver-76f77b778f-xmnqz\" (UID: \"05b7fb71-56a6-4875-a680-995a1a2194d6\") " pod="openshift-apiserver/apiserver-76f77b778f-xmnqz"
Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.441055 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/90b00b61-4e40-4e08-b164-643608e91dd0-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-9qkgc\" (UID: \"90b00b61-4e40-4e08-b164-643608e91dd0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9qkgc"
Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.441094 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ccw22\" (UniqueName: \"kubernetes.io/projected/90b00b61-4e40-4e08-b164-643608e91dd0-kube-api-access-ccw22\") pod \"controller-manager-879f6c89f-9qkgc\" (UID: \"90b00b61-4e40-4e08-b164-643608e91dd0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9qkgc"
Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.441116 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/52bdc241-d70a-4a84-adc2-618dc90b8886-profile-collector-cert\") pod \"olm-operator-6b444d44fb-bzv6w\" (UID: \"52bdc241-d70a-4a84-adc2-618dc90b8886\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bzv6w"
Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.441136 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05b7fb71-56a6-4875-a680-995a1a2194d6-config\") pod \"apiserver-76f77b778f-xmnqz\" (UID: \"05b7fb71-56a6-4875-a680-995a1a2194d6\") " pod="openshift-apiserver/apiserver-76f77b778f-xmnqz"
Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.441176 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/90b00b61-4e40-4e08-b164-643608e91dd0-client-ca\") pod \"controller-manager-879f6c89f-9qkgc\" (UID: \"90b00b61-4e40-4e08-b164-643608e91dd0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9qkgc"
Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.441197 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2454431f-55ed-4abb-b70f-9382007e9026-serving-cert\") pod \"openshift-config-operator-7777fb866f-bqk2r\" (UID:
\"2454431f-55ed-4abb-b70f-9382007e9026\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-bqk2r" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.441217 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/5aa7d496-b98f-4b8f-8974-1bd30f617280-profile-collector-cert\") pod \"catalog-operator-68c6474976-flwkb\" (UID: \"5aa7d496-b98f-4b8f-8974-1bd30f617280\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-flwkb" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.441257 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b512d3e0-dee2-48d3-87e3-b05fb2cf8ed7-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-9qfvq\" (UID: \"b512d3e0-dee2-48d3-87e3-b05fb2cf8ed7\") " pod="openshift-authentication/oauth-openshift-558db77b4-9qfvq" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.441277 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f5444051-a1d3-4854-8b30-367e3fd2c123-config\") pod \"console-operator-58897d9998-psplq\" (UID: \"f5444051-a1d3-4854-8b30-367e3fd2c123\") " pod="openshift-console-operator/console-operator-58897d9998-psplq" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.441298 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f9hdd\" (UniqueName: \"kubernetes.io/projected/05b7fb71-56a6-4875-a680-995a1a2194d6-kube-api-access-f9hdd\") pod \"apiserver-76f77b778f-xmnqz\" (UID: \"05b7fb71-56a6-4875-a680-995a1a2194d6\") " pod="openshift-apiserver/apiserver-76f77b778f-xmnqz" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.441339 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/ad5235a6-36eb-42fc-8a56-d8464014b881-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-cll6l\" (UID: \"ad5235a6-36eb-42fc-8a56-d8464014b881\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-cll6l" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.441361 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b8aed1ad-ec7d-4e5d-b60a-b8d7bee2a0cc-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-gwwg5\" (UID: \"b8aed1ad-ec7d-4e5d-b60a-b8d7bee2a0cc\") " pod="openshift-marketplace/marketplace-operator-79b997595-gwwg5" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.441383 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3960cd0a-8f4a-44de-a022-3858e1176a99-trusted-ca\") pod \"ingress-operator-5b745b69d9-djgfn\" (UID: \"3960cd0a-8f4a-44de-a022-3858e1176a99\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-djgfn" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.441426 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rtt47\" (UniqueName: \"kubernetes.io/projected/4df1a0b5-a039-4098-a88e-96015dcf1406-kube-api-access-rtt47\") pod \"openshift-apiserver-operator-796bbdcf4f-kc8zd\" (UID: \"4df1a0b5-a039-4098-a88e-96015dcf1406\") " 
pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kc8zd" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.441446 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e118bf40-4574-410f-bb2f-b5eb601974e5-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-5pn5x\" (UID: \"e118bf40-4574-410f-bb2f-b5eb601974e5\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-5pn5x" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.441465 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5mxk\" (UniqueName: \"kubernetes.io/projected/83b3203a-f1e8-4d8e-8c42-4932026537ee-kube-api-access-h5mxk\") pod \"machine-config-controller-84d6567774-696ts\" (UID: \"83b3203a-f1e8-4d8e-8c42-4932026537ee\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-696ts" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.441511 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/2454431f-55ed-4abb-b70f-9382007e9026-available-featuregates\") pod \"openshift-config-operator-7777fb866f-bqk2r\" (UID: \"2454431f-55ed-4abb-b70f-9382007e9026\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-bqk2r" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.441529 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b512d3e0-dee2-48d3-87e3-b05fb2cf8ed7-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-9qfvq\" (UID: \"b512d3e0-dee2-48d3-87e3-b05fb2cf8ed7\") " pod="openshift-authentication/oauth-openshift-558db77b4-9qfvq" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.441568 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b512d3e0-dee2-48d3-87e3-b05fb2cf8ed7-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-9qfvq\" (UID: \"b512d3e0-dee2-48d3-87e3-b05fb2cf8ed7\") " pod="openshift-authentication/oauth-openshift-558db77b4-9qfvq" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.441588 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qws8t\" (UniqueName: \"kubernetes.io/projected/b512d3e0-dee2-48d3-87e3-b05fb2cf8ed7-kube-api-access-qws8t\") pod \"oauth-openshift-558db77b4-9qfvq\" (UID: \"b512d3e0-dee2-48d3-87e3-b05fb2cf8ed7\") " pod="openshift-authentication/oauth-openshift-558db77b4-9qfvq" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.441605 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f75d360f-0e31-40e3-8b5d-d51934525efb-metrics-tls\") pod \"dns-operator-744455d44c-zqjqq\" (UID: \"f75d360f-0e31-40e3-8b5d-d51934525efb\") " pod="openshift-dns-operator/dns-operator-744455d44c-zqjqq" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.441646 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-52r87\" (UniqueName: \"kubernetes.io/projected/90eeaa0a-6939-40a5-821c-82579c812f3b-kube-api-access-52r87\") pod \"authentication-operator-69f744f599-dbd5p\" (UID: 
\"90eeaa0a-6939-40a5-821c-82579c812f3b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-dbd5p" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.441667 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c7ctw\" (UniqueName: \"kubernetes.io/projected/ad5235a6-36eb-42fc-8a56-d8464014b881-kube-api-access-c7ctw\") pod \"cluster-samples-operator-665b6dd947-cll6l\" (UID: \"ad5235a6-36eb-42fc-8a56-d8464014b881\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-cll6l" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.441722 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0d10e537-edf1-40b9-a8a7-038237e48834-serving-cert\") pod \"apiserver-7bbb656c7d-dvcg6\" (UID: \"0d10e537-edf1-40b9-a8a7-038237e48834\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dvcg6" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.441770 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0d10e537-edf1-40b9-a8a7-038237e48834-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-dvcg6\" (UID: \"0d10e537-edf1-40b9-a8a7-038237e48834\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dvcg6" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.441804 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2vfnr\" (UniqueName: \"kubernetes.io/projected/2715796f-e4b0-4400-a02c-a485171a9858-kube-api-access-2vfnr\") pod \"cluster-image-registry-operator-dc59b4c8b-wzbj5\" (UID: \"2715796f-e4b0-4400-a02c-a485171a9858\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wzbj5" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.441832 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zmchs\" (UniqueName: \"kubernetes.io/projected/63e8cbe0-5a31-49f6-bd66-f04a2eb641ec-kube-api-access-zmchs\") pod \"package-server-manager-789f6589d5-2stwm\" (UID: \"63e8cbe0-5a31-49f6-bd66-f04a2eb641ec\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2stwm" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.441851 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/05b7fb71-56a6-4875-a680-995a1a2194d6-image-import-ca\") pod \"apiserver-76f77b778f-xmnqz\" (UID: \"05b7fb71-56a6-4875-a680-995a1a2194d6\") " pod="openshift-apiserver/apiserver-76f77b778f-xmnqz" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.441867 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6d3c61d5-518d-443e-beb3-a0bf27a07be4-oauth-serving-cert\") pod \"console-f9d7485db-5nds5\" (UID: \"6d3c61d5-518d-443e-beb3-a0bf27a07be4\") " pod="openshift-console/console-f9d7485db-5nds5" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.441886 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/606305dc-db05-45e6-8409-fdb1ca8ca988-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-rv68m\" (UID: \"606305dc-db05-45e6-8409-fdb1ca8ca988\") " 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rv68m" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.441903 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0d10e537-edf1-40b9-a8a7-038237e48834-audit-policies\") pod \"apiserver-7bbb656c7d-dvcg6\" (UID: \"0d10e537-edf1-40b9-a8a7-038237e48834\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dvcg6" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.441920 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/90eeaa0a-6939-40a5-821c-82579c812f3b-serving-cert\") pod \"authentication-operator-69f744f599-dbd5p\" (UID: \"90eeaa0a-6939-40a5-821c-82579c812f3b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-dbd5p" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.441937 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6d3c61d5-518d-443e-beb3-a0bf27a07be4-console-oauth-config\") pod \"console-f9d7485db-5nds5\" (UID: \"6d3c61d5-518d-443e-beb3-a0bf27a07be4\") " pod="openshift-console/console-f9d7485db-5nds5" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.441953 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6d3c61d5-518d-443e-beb3-a0bf27a07be4-service-ca\") pod \"console-f9d7485db-5nds5\" (UID: \"6d3c61d5-518d-443e-beb3-a0bf27a07be4\") " pod="openshift-console/console-f9d7485db-5nds5" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.441972 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b512d3e0-dee2-48d3-87e3-b05fb2cf8ed7-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-9qfvq\" (UID: \"b512d3e0-dee2-48d3-87e3-b05fb2cf8ed7\") " pod="openshift-authentication/oauth-openshift-558db77b4-9qfvq" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.441988 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b512d3e0-dee2-48d3-87e3-b05fb2cf8ed7-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-9qfvq\" (UID: \"b512d3e0-dee2-48d3-87e3-b05fb2cf8ed7\") " pod="openshift-authentication/oauth-openshift-558db77b4-9qfvq" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.442004 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b512d3e0-dee2-48d3-87e3-b05fb2cf8ed7-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-9qfvq\" (UID: \"b512d3e0-dee2-48d3-87e3-b05fb2cf8ed7\") " pod="openshift-authentication/oauth-openshift-558db77b4-9qfvq" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.442021 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/065d5bdc-7e13-4a03-aa2d-5b7dd3b3938c-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-rh6fb\" (UID: \"065d5bdc-7e13-4a03-aa2d-5b7dd3b3938c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-rh6fb" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.442039 4853 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4df1a0b5-a039-4098-a88e-96015dcf1406-config\") pod \"openshift-apiserver-operator-796bbdcf4f-kc8zd\" (UID: \"4df1a0b5-a039-4098-a88e-96015dcf1406\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kc8zd" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.442055 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6d3c61d5-518d-443e-beb3-a0bf27a07be4-trusted-ca-bundle\") pod \"console-f9d7485db-5nds5\" (UID: \"6d3c61d5-518d-443e-beb3-a0bf27a07be4\") " pod="openshift-console/console-f9d7485db-5nds5" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.442071 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-glccb\" (UniqueName: \"kubernetes.io/projected/341b4f0c-09ee-4297-99c4-b8e6334de4ed-kube-api-access-glccb\") pod \"machine-approver-56656f9798-pbzlk\" (UID: \"341b4f0c-09ee-4297-99c4-b8e6334de4ed\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-pbzlk" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.442088 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rdqq4\" (UniqueName: \"kubernetes.io/projected/f75d360f-0e31-40e3-8b5d-d51934525efb-kube-api-access-rdqq4\") pod \"dns-operator-744455d44c-zqjqq\" (UID: \"f75d360f-0e31-40e3-8b5d-d51934525efb\") " pod="openshift-dns-operator/dns-operator-744455d44c-zqjqq" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.442110 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4df1a0b5-a039-4098-a88e-96015dcf1406-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-kc8zd\" (UID: \"4df1a0b5-a039-4098-a88e-96015dcf1406\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kc8zd" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.442131 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/90eeaa0a-6939-40a5-821c-82579c812f3b-config\") pod \"authentication-operator-69f744f599-dbd5p\" (UID: \"90eeaa0a-6939-40a5-821c-82579c812f3b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-dbd5p" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.442148 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0d10e537-edf1-40b9-a8a7-038237e48834-etcd-client\") pod \"apiserver-7bbb656c7d-dvcg6\" (UID: \"0d10e537-edf1-40b9-a8a7-038237e48834\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dvcg6" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.442168 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gfc8r\" (UniqueName: \"kubernetes.io/projected/5aa7d496-b98f-4b8f-8974-1bd30f617280-kube-api-access-gfc8r\") pod \"catalog-operator-68c6474976-flwkb\" (UID: \"5aa7d496-b98f-4b8f-8974-1bd30f617280\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-flwkb" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.442189 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kjsjt\" (UniqueName: 
\"kubernetes.io/projected/bcd72804-cd09-4ec3-ae4a-f539958eb90c-kube-api-access-kjsjt\") pod \"downloads-7954f5f757-hpb7j\" (UID: \"bcd72804-cd09-4ec3-ae4a-f539958eb90c\") " pod="openshift-console/downloads-7954f5f757-hpb7j" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.442208 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/05b7fb71-56a6-4875-a680-995a1a2194d6-trusted-ca-bundle\") pod \"apiserver-76f77b778f-xmnqz\" (UID: \"05b7fb71-56a6-4875-a680-995a1a2194d6\") " pod="openshift-apiserver/apiserver-76f77b778f-xmnqz" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.442230 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/90eeaa0a-6939-40a5-821c-82579c812f3b-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-dbd5p\" (UID: \"90eeaa0a-6939-40a5-821c-82579c812f3b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-dbd5p" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.442250 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/90b00b61-4e40-4e08-b164-643608e91dd0-serving-cert\") pod \"controller-manager-879f6c89f-9qkgc\" (UID: \"90b00b61-4e40-4e08-b164-643608e91dd0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9qkgc" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.442269 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/341b4f0c-09ee-4297-99c4-b8e6334de4ed-config\") pod \"machine-approver-56656f9798-pbzlk\" (UID: \"341b4f0c-09ee-4297-99c4-b8e6334de4ed\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-pbzlk" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.442288 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42e0f31e-1622-4388-9852-f22966d156f4-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-cfkqt\" (UID: \"42e0f31e-1622-4388-9852-f22966d156f4\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-cfkqt" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.442308 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/05b7fb71-56a6-4875-a680-995a1a2194d6-etcd-serving-ca\") pod \"apiserver-76f77b778f-xmnqz\" (UID: \"05b7fb71-56a6-4875-a680-995a1a2194d6\") " pod="openshift-apiserver/apiserver-76f77b778f-xmnqz" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.442325 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0d10e537-edf1-40b9-a8a7-038237e48834-audit-dir\") pod \"apiserver-7bbb656c7d-dvcg6\" (UID: \"0d10e537-edf1-40b9-a8a7-038237e48834\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dvcg6" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.442341 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5sttx\" (UniqueName: \"kubernetes.io/projected/42e0f31e-1622-4388-9852-f22966d156f4-kube-api-access-5sttx\") pod \"kube-storage-version-migrator-operator-b67b599dd-cfkqt\" (UID: 
\"42e0f31e-1622-4388-9852-f22966d156f4\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-cfkqt" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.442370 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bmxpx\" (UniqueName: \"kubernetes.io/projected/3960cd0a-8f4a-44de-a022-3858e1176a99-kube-api-access-bmxpx\") pod \"ingress-operator-5b745b69d9-djgfn\" (UID: \"3960cd0a-8f4a-44de-a022-3858e1176a99\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-djgfn" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.442390 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2715796f-e4b0-4400-a02c-a485171a9858-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-wzbj5\" (UID: \"2715796f-e4b0-4400-a02c-a485171a9858\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wzbj5" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.442408 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kq2vl\" (UniqueName: \"kubernetes.io/projected/065d5bdc-7e13-4a03-aa2d-5b7dd3b3938c-kube-api-access-kq2vl\") pod \"machine-api-operator-5694c8668f-rh6fb\" (UID: \"065d5bdc-7e13-4a03-aa2d-5b7dd3b3938c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-rh6fb" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.442423 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0fdfc9f2-e63f-48f4-89ad-94ef8b642d04-config\") pod \"route-controller-manager-6576b87f9c-p5l64\" (UID: \"0fdfc9f2-e63f-48f4-89ad-94ef8b642d04\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-p5l64" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.442439 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e118bf40-4574-410f-bb2f-b5eb601974e5-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-5pn5x\" (UID: \"e118bf40-4574-410f-bb2f-b5eb601974e5\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-5pn5x" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.442458 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f79mr\" (UniqueName: \"kubernetes.io/projected/52bdc241-d70a-4a84-adc2-618dc90b8886-kube-api-access-f79mr\") pod \"olm-operator-6b444d44fb-bzv6w\" (UID: \"52bdc241-d70a-4a84-adc2-618dc90b8886\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bzv6w" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.442474 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/065d5bdc-7e13-4a03-aa2d-5b7dd3b3938c-images\") pod \"machine-api-operator-5694c8668f-rh6fb\" (UID: \"065d5bdc-7e13-4a03-aa2d-5b7dd3b3938c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-rh6fb" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.442490 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/05b7fb71-56a6-4875-a680-995a1a2194d6-etcd-client\") pod \"apiserver-76f77b778f-xmnqz\" (UID: \"05b7fb71-56a6-4875-a680-995a1a2194d6\") " 
pod="openshift-apiserver/apiserver-76f77b778f-xmnqz" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.442506 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/b512d3e0-dee2-48d3-87e3-b05fb2cf8ed7-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-9qfvq\" (UID: \"b512d3e0-dee2-48d3-87e3-b05fb2cf8ed7\") " pod="openshift-authentication/oauth-openshift-558db77b4-9qfvq" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.442529 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4be3f473-ecf9-464d-b363-f28c82456652-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-zxl69\" (UID: \"4be3f473-ecf9-464d-b363-f28c82456652\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-zxl69" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.442567 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/2715796f-e4b0-4400-a02c-a485171a9858-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-wzbj5\" (UID: \"2715796f-e4b0-4400-a02c-a485171a9858\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wzbj5" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.442592 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b512d3e0-dee2-48d3-87e3-b05fb2cf8ed7-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-9qfvq\" (UID: \"b512d3e0-dee2-48d3-87e3-b05fb2cf8ed7\") " pod="openshift-authentication/oauth-openshift-558db77b4-9qfvq" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.442615 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/5aa7d496-b98f-4b8f-8974-1bd30f617280-srv-cert\") pod \"catalog-operator-68c6474976-flwkb\" (UID: \"5aa7d496-b98f-4b8f-8974-1bd30f617280\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-flwkb" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.442638 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4be3f473-ecf9-464d-b363-f28c82456652-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-zxl69\" (UID: \"4be3f473-ecf9-464d-b363-f28c82456652\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-zxl69" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.442654 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2715796f-e4b0-4400-a02c-a485171a9858-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-wzbj5\" (UID: \"2715796f-e4b0-4400-a02c-a485171a9858\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wzbj5" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.442670 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/05b7fb71-56a6-4875-a680-995a1a2194d6-node-pullsecrets\") pod \"apiserver-76f77b778f-xmnqz\" (UID: \"05b7fb71-56a6-4875-a680-995a1a2194d6\") " 
pod="openshift-apiserver/apiserver-76f77b778f-xmnqz" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.442692 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6d3c61d5-518d-443e-beb3-a0bf27a07be4-console-config\") pod \"console-f9d7485db-5nds5\" (UID: \"6d3c61d5-518d-443e-beb3-a0bf27a07be4\") " pod="openshift-console/console-f9d7485db-5nds5" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.442713 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/90b00b61-4e40-4e08-b164-643608e91dd0-config\") pod \"controller-manager-879f6c89f-9qkgc\" (UID: \"90b00b61-4e40-4e08-b164-643608e91dd0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9qkgc" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.442732 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9fd9n\" (UniqueName: \"kubernetes.io/projected/0d10e537-edf1-40b9-a8a7-038237e48834-kube-api-access-9fd9n\") pod \"apiserver-7bbb656c7d-dvcg6\" (UID: \"0d10e537-edf1-40b9-a8a7-038237e48834\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dvcg6" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.442778 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f5444051-a1d3-4854-8b30-367e3fd2c123-trusted-ca\") pod \"console-operator-58897d9998-psplq\" (UID: \"f5444051-a1d3-4854-8b30-367e3fd2c123\") " pod="openshift-console-operator/console-operator-58897d9998-psplq" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.442798 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/52bdc241-d70a-4a84-adc2-618dc90b8886-srv-cert\") pod \"olm-operator-6b444d44fb-bzv6w\" (UID: \"52bdc241-d70a-4a84-adc2-618dc90b8886\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bzv6w" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.442815 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/83b3203a-f1e8-4d8e-8c42-4932026537ee-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-696ts\" (UID: \"83b3203a-f1e8-4d8e-8c42-4932026537ee\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-696ts" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.442834 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/63e8cbe0-5a31-49f6-bd66-f04a2eb641ec-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-2stwm\" (UID: \"63e8cbe0-5a31-49f6-bd66-f04a2eb641ec\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2stwm" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.442866 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/05b7fb71-56a6-4875-a680-995a1a2194d6-audit\") pod \"apiserver-76f77b778f-xmnqz\" (UID: \"05b7fb71-56a6-4875-a680-995a1a2194d6\") " pod="openshift-apiserver/apiserver-76f77b778f-xmnqz" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.442886 4853 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/90eeaa0a-6939-40a5-821c-82579c812f3b-service-ca-bundle\") pod \"authentication-operator-69f744f599-dbd5p\" (UID: \"90eeaa0a-6939-40a5-821c-82579c812f3b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-dbd5p" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.442905 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/3960cd0a-8f4a-44de-a022-3858e1176a99-bound-sa-token\") pod \"ingress-operator-5b745b69d9-djgfn\" (UID: \"3960cd0a-8f4a-44de-a022-3858e1176a99\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-djgfn" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.442895 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-cjq85"] Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.442930 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nhvpp\" (UniqueName: \"kubernetes.io/projected/2454431f-55ed-4abb-b70f-9382007e9026-kube-api-access-nhvpp\") pod \"openshift-config-operator-7777fb866f-bqk2r\" (UID: \"2454431f-55ed-4abb-b70f-9382007e9026\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-bqk2r" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.442950 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b512d3e0-dee2-48d3-87e3-b05fb2cf8ed7-audit-policies\") pod \"oauth-openshift-558db77b4-9qfvq\" (UID: \"b512d3e0-dee2-48d3-87e3-b05fb2cf8ed7\") " pod="openshift-authentication/oauth-openshift-558db77b4-9qfvq" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.442972 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b512d3e0-dee2-48d3-87e3-b05fb2cf8ed7-audit-dir\") pod \"oauth-openshift-558db77b4-9qfvq\" (UID: \"b512d3e0-dee2-48d3-87e3-b05fb2cf8ed7\") " pod="openshift-authentication/oauth-openshift-558db77b4-9qfvq" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.442993 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b512d3e0-dee2-48d3-87e3-b05fb2cf8ed7-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-9qfvq\" (UID: \"b512d3e0-dee2-48d3-87e3-b05fb2cf8ed7\") " pod="openshift-authentication/oauth-openshift-558db77b4-9qfvq" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.443013 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4hhbn\" (UniqueName: \"kubernetes.io/projected/e118bf40-4574-410f-bb2f-b5eb601974e5-kube-api-access-4hhbn\") pod \"openshift-controller-manager-operator-756b6f6bc6-5pn5x\" (UID: \"e118bf40-4574-410f-bb2f-b5eb601974e5\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-5pn5x" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.443036 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/606305dc-db05-45e6-8409-fdb1ca8ca988-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-rv68m\" (UID: \"606305dc-db05-45e6-8409-fdb1ca8ca988\") " 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rv68m" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.443055 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b8aed1ad-ec7d-4e5d-b60a-b8d7bee2a0cc-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-gwwg5\" (UID: \"b8aed1ad-ec7d-4e5d-b60a-b8d7bee2a0cc\") " pod="openshift-marketplace/marketplace-operator-79b997595-gwwg5" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.443073 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/3960cd0a-8f4a-44de-a022-3858e1176a99-metrics-tls\") pod \"ingress-operator-5b745b69d9-djgfn\" (UID: \"3960cd0a-8f4a-44de-a022-3858e1176a99\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-djgfn" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.443922 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-zxl69"] Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.444406 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b512d3e0-dee2-48d3-87e3-b05fb2cf8ed7-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-9qfvq\" (UID: \"b512d3e0-dee2-48d3-87e3-b05fb2cf8ed7\") " pod="openshift-authentication/oauth-openshift-558db77b4-9qfvq" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.444420 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/0d10e537-edf1-40b9-a8a7-038237e48834-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-dvcg6\" (UID: \"0d10e537-edf1-40b9-a8a7-038237e48834\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dvcg6" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.446050 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-n6vz6"] Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.446368 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-t9fmp"] Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.446397 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0d10e537-edf1-40b9-a8a7-038237e48834-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-dvcg6\" (UID: \"0d10e537-edf1-40b9-a8a7-038237e48834\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dvcg6" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.446457 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/2454431f-55ed-4abb-b70f-9382007e9026-available-featuregates\") pod \"openshift-config-operator-7777fb866f-bqk2r\" (UID: \"2454431f-55ed-4abb-b70f-9382007e9026\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-bqk2r" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.448649 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-t9fmp" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.478521 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/05b7fb71-56a6-4875-a680-995a1a2194d6-serving-cert\") pod \"apiserver-76f77b778f-xmnqz\" (UID: \"05b7fb71-56a6-4875-a680-995a1a2194d6\") " pod="openshift-apiserver/apiserver-76f77b778f-xmnqz" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.478659 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0fdfc9f2-e63f-48f4-89ad-94ef8b642d04-serving-cert\") pod \"route-controller-manager-6576b87f9c-p5l64\" (UID: \"0fdfc9f2-e63f-48f4-89ad-94ef8b642d04\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-p5l64" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.478923 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/05b7fb71-56a6-4875-a680-995a1a2194d6-encryption-config\") pod \"apiserver-76f77b778f-xmnqz\" (UID: \"05b7fb71-56a6-4875-a680-995a1a2194d6\") " pod="openshift-apiserver/apiserver-76f77b778f-xmnqz" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.479445 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e118bf40-4574-410f-bb2f-b5eb601974e5-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-5pn5x\" (UID: \"e118bf40-4574-410f-bb2f-b5eb601974e5\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-5pn5x" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.479991 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/05b7fb71-56a6-4875-a680-995a1a2194d6-image-import-ca\") pod \"apiserver-76f77b778f-xmnqz\" (UID: \"05b7fb71-56a6-4875-a680-995a1a2194d6\") " pod="openshift-apiserver/apiserver-76f77b778f-xmnqz" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.481317 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/065d5bdc-7e13-4a03-aa2d-5b7dd3b3938c-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-rh6fb\" (UID: \"065d5bdc-7e13-4a03-aa2d-5b7dd3b3938c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-rh6fb" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.482057 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-h486l"] Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.482879 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b512d3e0-dee2-48d3-87e3-b05fb2cf8ed7-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-9qfvq\" (UID: \"b512d3e0-dee2-48d3-87e3-b05fb2cf8ed7\") " pod="openshift-authentication/oauth-openshift-558db77b4-9qfvq" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.483646 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/ad5235a6-36eb-42fc-8a56-d8464014b881-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-cll6l\" (UID: \"ad5235a6-36eb-42fc-8a56-d8464014b881\") " 
pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-cll6l" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.483928 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.484079 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0d10e537-edf1-40b9-a8a7-038237e48834-serving-cert\") pod \"apiserver-7bbb656c7d-dvcg6\" (UID: \"0d10e537-edf1-40b9-a8a7-038237e48834\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dvcg6" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.484681 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.485837 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05b7fb71-56a6-4875-a680-995a1a2194d6-config\") pod \"apiserver-76f77b778f-xmnqz\" (UID: \"05b7fb71-56a6-4875-a680-995a1a2194d6\") " pod="openshift-apiserver/apiserver-76f77b778f-xmnqz" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.486875 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/90b00b61-4e40-4e08-b164-643608e91dd0-client-ca\") pod \"controller-manager-879f6c89f-9qkgc\" (UID: \"90b00b61-4e40-4e08-b164-643608e91dd0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9qkgc" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.487292 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/90b00b61-4e40-4e08-b164-643608e91dd0-serving-cert\") pod \"controller-manager-879f6c89f-9qkgc\" (UID: \"90b00b61-4e40-4e08-b164-643608e91dd0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9qkgc" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.489145 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2454431f-55ed-4abb-b70f-9382007e9026-serving-cert\") pod \"openshift-config-operator-7777fb866f-bqk2r\" (UID: \"2454431f-55ed-4abb-b70f-9382007e9026\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-bqk2r" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.489873 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f5444051-a1d3-4854-8b30-367e3fd2c123-config\") pod \"console-operator-58897d9998-psplq\" (UID: \"f5444051-a1d3-4854-8b30-367e3fd2c123\") " pod="openshift-console-operator/console-operator-58897d9998-psplq" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.490147 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/05b7fb71-56a6-4875-a680-995a1a2194d6-etcd-client\") pod \"apiserver-76f77b778f-xmnqz\" (UID: \"05b7fb71-56a6-4875-a680-995a1a2194d6\") " pod="openshift-apiserver/apiserver-76f77b778f-xmnqz" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.490954 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f5444051-a1d3-4854-8b30-367e3fd2c123-serving-cert\") pod \"console-operator-58897d9998-psplq\" (UID: \"f5444051-a1d3-4854-8b30-367e3fd2c123\") " 
pod="openshift-console-operator/console-operator-58897d9998-psplq" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.491898 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6d3c61d5-518d-443e-beb3-a0bf27a07be4-console-serving-cert\") pod \"console-f9d7485db-5nds5\" (UID: \"6d3c61d5-518d-443e-beb3-a0bf27a07be4\") " pod="openshift-console/console-f9d7485db-5nds5" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.492400 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6d3c61d5-518d-443e-beb3-a0bf27a07be4-oauth-serving-cert\") pod \"console-f9d7485db-5nds5\" (UID: \"6d3c61d5-518d-443e-beb3-a0bf27a07be4\") " pod="openshift-console/console-f9d7485db-5nds5" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.493800 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/90eeaa0a-6939-40a5-821c-82579c812f3b-serving-cert\") pod \"authentication-operator-69f744f599-dbd5p\" (UID: \"90eeaa0a-6939-40a5-821c-82579c812f3b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-dbd5p" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.496691 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b512d3e0-dee2-48d3-87e3-b05fb2cf8ed7-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-9qfvq\" (UID: \"b512d3e0-dee2-48d3-87e3-b05fb2cf8ed7\") " pod="openshift-authentication/oauth-openshift-558db77b4-9qfvq" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.502078 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b512d3e0-dee2-48d3-87e3-b05fb2cf8ed7-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-9qfvq\" (UID: \"b512d3e0-dee2-48d3-87e3-b05fb2cf8ed7\") " pod="openshift-authentication/oauth-openshift-558db77b4-9qfvq" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.502864 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b512d3e0-dee2-48d3-87e3-b05fb2cf8ed7-audit-dir\") pod \"oauth-openshift-558db77b4-9qfvq\" (UID: \"b512d3e0-dee2-48d3-87e3-b05fb2cf8ed7\") " pod="openshift-authentication/oauth-openshift-558db77b4-9qfvq" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.503291 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/05b7fb71-56a6-4875-a680-995a1a2194d6-audit-dir\") pod \"apiserver-76f77b778f-xmnqz\" (UID: \"05b7fb71-56a6-4875-a680-995a1a2194d6\") " pod="openshift-apiserver/apiserver-76f77b778f-xmnqz" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.505001 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/05b7fb71-56a6-4875-a680-995a1a2194d6-node-pullsecrets\") pod \"apiserver-76f77b778f-xmnqz\" (UID: \"05b7fb71-56a6-4875-a680-995a1a2194d6\") " pod="openshift-apiserver/apiserver-76f77b778f-xmnqz" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.506315 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/90b00b61-4e40-4e08-b164-643608e91dd0-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-9qkgc\" (UID: \"90b00b61-4e40-4e08-b164-643608e91dd0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9qkgc" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.507055 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6d3c61d5-518d-443e-beb3-a0bf27a07be4-trusted-ca-bundle\") pod \"console-f9d7485db-5nds5\" (UID: \"6d3c61d5-518d-443e-beb3-a0bf27a07be4\") " pod="openshift-console/console-f9d7485db-5nds5" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.508022 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/90eeaa0a-6939-40a5-821c-82579c812f3b-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-dbd5p\" (UID: \"90eeaa0a-6939-40a5-821c-82579c812f3b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-dbd5p" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.510089 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0d10e537-edf1-40b9-a8a7-038237e48834-etcd-client\") pod \"apiserver-7bbb656c7d-dvcg6\" (UID: \"0d10e537-edf1-40b9-a8a7-038237e48834\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dvcg6" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.513123 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.513237 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b512d3e0-dee2-48d3-87e3-b05fb2cf8ed7-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-9qfvq\" (UID: \"b512d3e0-dee2-48d3-87e3-b05fb2cf8ed7\") " pod="openshift-authentication/oauth-openshift-558db77b4-9qfvq" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.513653 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/341b4f0c-09ee-4297-99c4-b8e6334de4ed-config\") pod \"machine-approver-56656f9798-pbzlk\" (UID: \"341b4f0c-09ee-4297-99c4-b8e6334de4ed\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-pbzlk" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.516176 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.517398 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b512d3e0-dee2-48d3-87e3-b05fb2cf8ed7-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-9qfvq\" (UID: \"b512d3e0-dee2-48d3-87e3-b05fb2cf8ed7\") " pod="openshift-authentication/oauth-openshift-558db77b4-9qfvq" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.518614 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/90b00b61-4e40-4e08-b164-643608e91dd0-config\") pod \"controller-manager-879f6c89f-9qkgc\" (UID: \"90b00b61-4e40-4e08-b164-643608e91dd0\") " 
pod="openshift-controller-manager/controller-manager-879f6c89f-9qkgc" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.521036 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/05b7fb71-56a6-4875-a680-995a1a2194d6-audit\") pod \"apiserver-76f77b778f-xmnqz\" (UID: \"05b7fb71-56a6-4875-a680-995a1a2194d6\") " pod="openshift-apiserver/apiserver-76f77b778f-xmnqz" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.521456 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/90eeaa0a-6939-40a5-821c-82579c812f3b-config\") pod \"authentication-operator-69f744f599-dbd5p\" (UID: \"90eeaa0a-6939-40a5-821c-82579c812f3b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-dbd5p" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.521729 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b512d3e0-dee2-48d3-87e3-b05fb2cf8ed7-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-9qfvq\" (UID: \"b512d3e0-dee2-48d3-87e3-b05fb2cf8ed7\") " pod="openshift-authentication/oauth-openshift-558db77b4-9qfvq" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.523008 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0d10e537-edf1-40b9-a8a7-038237e48834-audit-dir\") pod \"apiserver-7bbb656c7d-dvcg6\" (UID: \"0d10e537-edf1-40b9-a8a7-038237e48834\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dvcg6" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.523503 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4df1a0b5-a039-4098-a88e-96015dcf1406-config\") pod \"openshift-apiserver-operator-796bbdcf4f-kc8zd\" (UID: \"4df1a0b5-a039-4098-a88e-96015dcf1406\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kc8zd" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.524050 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/05b7fb71-56a6-4875-a680-995a1a2194d6-trusted-ca-bundle\") pod \"apiserver-76f77b778f-xmnqz\" (UID: \"05b7fb71-56a6-4875-a680-995a1a2194d6\") " pod="openshift-apiserver/apiserver-76f77b778f-xmnqz" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.524207 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-p5l64"] Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.524280 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-bqk2r"] Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.524849 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6d3c61d5-518d-443e-beb3-a0bf27a07be4-service-ca\") pod \"console-f9d7485db-5nds5\" (UID: \"6d3c61d5-518d-443e-beb3-a0bf27a07be4\") " pod="openshift-console/console-f9d7485db-5nds5" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.525115 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0d10e537-edf1-40b9-a8a7-038237e48834-audit-policies\") pod \"apiserver-7bbb656c7d-dvcg6\" (UID: 
\"0d10e537-edf1-40b9-a8a7-038237e48834\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dvcg6" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.525542 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6d3c61d5-518d-443e-beb3-a0bf27a07be4-console-oauth-config\") pod \"console-f9d7485db-5nds5\" (UID: \"6d3c61d5-518d-443e-beb3-a0bf27a07be4\") " pod="openshift-console/console-f9d7485db-5nds5" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.526542 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6d3c61d5-518d-443e-beb3-a0bf27a07be4-console-config\") pod \"console-f9d7485db-5nds5\" (UID: \"6d3c61d5-518d-443e-beb3-a0bf27a07be4\") " pod="openshift-console/console-f9d7485db-5nds5" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.527109 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/05b7fb71-56a6-4875-a680-995a1a2194d6-etcd-serving-ca\") pod \"apiserver-76f77b778f-xmnqz\" (UID: \"05b7fb71-56a6-4875-a680-995a1a2194d6\") " pod="openshift-apiserver/apiserver-76f77b778f-xmnqz" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.527349 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/90eeaa0a-6939-40a5-821c-82579c812f3b-service-ca-bundle\") pod \"authentication-operator-69f744f599-dbd5p\" (UID: \"90eeaa0a-6939-40a5-821c-82579c812f3b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-dbd5p" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.527648 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b512d3e0-dee2-48d3-87e3-b05fb2cf8ed7-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-9qfvq\" (UID: \"b512d3e0-dee2-48d3-87e3-b05fb2cf8ed7\") " pod="openshift-authentication/oauth-openshift-558db77b4-9qfvq" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.528216 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b512d3e0-dee2-48d3-87e3-b05fb2cf8ed7-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-9qfvq\" (UID: \"b512d3e0-dee2-48d3-87e3-b05fb2cf8ed7\") " pod="openshift-authentication/oauth-openshift-558db77b4-9qfvq" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.528724 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0fdfc9f2-e63f-48f4-89ad-94ef8b642d04-client-ca\") pod \"route-controller-manager-6576b87f9c-p5l64\" (UID: \"0fdfc9f2-e63f-48f4-89ad-94ef8b642d04\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-p5l64" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.528979 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0fdfc9f2-e63f-48f4-89ad-94ef8b642d04-config\") pod \"route-controller-manager-6576b87f9c-p5l64\" (UID: \"0fdfc9f2-e63f-48f4-89ad-94ef8b642d04\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-p5l64" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.529349 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/e118bf40-4574-410f-bb2f-b5eb601974e5-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-5pn5x\" (UID: \"e118bf40-4574-410f-bb2f-b5eb601974e5\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-5pn5x" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.529613 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/065d5bdc-7e13-4a03-aa2d-5b7dd3b3938c-config\") pod \"machine-api-operator-5694c8668f-rh6fb\" (UID: \"065d5bdc-7e13-4a03-aa2d-5b7dd3b3938c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-rh6fb" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.529644 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/065d5bdc-7e13-4a03-aa2d-5b7dd3b3938c-images\") pod \"machine-api-operator-5694c8668f-rh6fb\" (UID: \"065d5bdc-7e13-4a03-aa2d-5b7dd3b3938c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-rh6fb" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.530365 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b512d3e0-dee2-48d3-87e3-b05fb2cf8ed7-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-9qfvq\" (UID: \"b512d3e0-dee2-48d3-87e3-b05fb2cf8ed7\") " pod="openshift-authentication/oauth-openshift-558db77b4-9qfvq" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.530570 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b512d3e0-dee2-48d3-87e3-b05fb2cf8ed7-audit-policies\") pod \"oauth-openshift-558db77b4-9qfvq\" (UID: \"b512d3e0-dee2-48d3-87e3-b05fb2cf8ed7\") " pod="openshift-authentication/oauth-openshift-558db77b4-9qfvq" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.530631 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/341b4f0c-09ee-4297-99c4-b8e6334de4ed-auth-proxy-config\") pod \"machine-approver-56656f9798-pbzlk\" (UID: \"341b4f0c-09ee-4297-99c4-b8e6334de4ed\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-pbzlk" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.531226 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/0d10e537-edf1-40b9-a8a7-038237e48834-encryption-config\") pod \"apiserver-7bbb656c7d-dvcg6\" (UID: \"0d10e537-edf1-40b9-a8a7-038237e48834\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dvcg6" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.531311 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-psplq"] Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.531587 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/341b4f0c-09ee-4297-99c4-b8e6334de4ed-machine-approver-tls\") pod \"machine-approver-56656f9798-pbzlk\" (UID: \"341b4f0c-09ee-4297-99c4-b8e6334de4ed\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-pbzlk" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.532955 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/f5444051-a1d3-4854-8b30-367e3fd2c123-trusted-ca\") pod \"console-operator-58897d9998-psplq\" (UID: \"f5444051-a1d3-4854-8b30-367e3fd2c123\") " pod="openshift-console-operator/console-operator-58897d9998-psplq" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.533073 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.533989 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4df1a0b5-a039-4098-a88e-96015dcf1406-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-kc8zd\" (UID: \"4df1a0b5-a039-4098-a88e-96015dcf1406\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kc8zd" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.534314 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/2715796f-e4b0-4400-a02c-a485171a9858-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-wzbj5\" (UID: \"2715796f-e4b0-4400-a02c-a485171a9858\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wzbj5" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.534735 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/b512d3e0-dee2-48d3-87e3-b05fb2cf8ed7-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-9qfvq\" (UID: \"b512d3e0-dee2-48d3-87e3-b05fb2cf8ed7\") " pod="openshift-authentication/oauth-openshift-558db77b4-9qfvq" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.534795 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-696ts"] Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.536682 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-5nds5"] Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.538820 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-hktm5"] Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.540336 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wzbj5"] Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.541330 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-jfgsj"] Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.544738 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-cll6l"] Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.545152 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdqq4\" (UniqueName: \"kubernetes.io/projected/f75d360f-0e31-40e3-8b5d-d51934525efb-kube-api-access-rdqq4\") pod \"dns-operator-744455d44c-zqjqq\" (UID: \"f75d360f-0e31-40e3-8b5d-d51934525efb\") " pod="openshift-dns-operator/dns-operator-744455d44c-zqjqq" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.545203 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gfc8r\" (UniqueName: 
\"kubernetes.io/projected/5aa7d496-b98f-4b8f-8974-1bd30f617280-kube-api-access-gfc8r\") pod \"catalog-operator-68c6474976-flwkb\" (UID: \"5aa7d496-b98f-4b8f-8974-1bd30f617280\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-flwkb" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.545241 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42e0f31e-1622-4388-9852-f22966d156f4-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-cfkqt\" (UID: \"42e0f31e-1622-4388-9852-f22966d156f4\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-cfkqt" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.545280 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5sttx\" (UniqueName: \"kubernetes.io/projected/42e0f31e-1622-4388-9852-f22966d156f4-kube-api-access-5sttx\") pod \"kube-storage-version-migrator-operator-b67b599dd-cfkqt\" (UID: \"42e0f31e-1622-4388-9852-f22966d156f4\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-cfkqt" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.545308 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bmxpx\" (UniqueName: \"kubernetes.io/projected/3960cd0a-8f4a-44de-a022-3858e1176a99-kube-api-access-bmxpx\") pod \"ingress-operator-5b745b69d9-djgfn\" (UID: \"3960cd0a-8f4a-44de-a022-3858e1176a99\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-djgfn" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.545674 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f79mr\" (UniqueName: \"kubernetes.io/projected/52bdc241-d70a-4a84-adc2-618dc90b8886-kube-api-access-f79mr\") pod \"olm-operator-6b444d44fb-bzv6w\" (UID: \"52bdc241-d70a-4a84-adc2-618dc90b8886\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bzv6w" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.546593 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4be3f473-ecf9-464d-b363-f28c82456652-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-zxl69\" (UID: \"4be3f473-ecf9-464d-b363-f28c82456652\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-zxl69" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.546946 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/5aa7d496-b98f-4b8f-8974-1bd30f617280-srv-cert\") pod \"catalog-operator-68c6474976-flwkb\" (UID: \"5aa7d496-b98f-4b8f-8974-1bd30f617280\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-flwkb" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.547117 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-hpb7j"] Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.547334 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4be3f473-ecf9-464d-b363-f28c82456652-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-zxl69\" (UID: \"4be3f473-ecf9-464d-b363-f28c82456652\") " 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-zxl69" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.547461 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/63e8cbe0-5a31-49f6-bd66-f04a2eb641ec-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-2stwm\" (UID: \"63e8cbe0-5a31-49f6-bd66-f04a2eb641ec\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2stwm" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.547488 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/52bdc241-d70a-4a84-adc2-618dc90b8886-srv-cert\") pod \"olm-operator-6b444d44fb-bzv6w\" (UID: \"52bdc241-d70a-4a84-adc2-618dc90b8886\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bzv6w" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.547514 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/83b3203a-f1e8-4d8e-8c42-4932026537ee-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-696ts\" (UID: \"83b3203a-f1e8-4d8e-8c42-4932026537ee\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-696ts" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.547544 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/3960cd0a-8f4a-44de-a022-3858e1176a99-bound-sa-token\") pod \"ingress-operator-5b745b69d9-djgfn\" (UID: \"3960cd0a-8f4a-44de-a022-3858e1176a99\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-djgfn" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.547579 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/606305dc-db05-45e6-8409-fdb1ca8ca988-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-rv68m\" (UID: \"606305dc-db05-45e6-8409-fdb1ca8ca988\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rv68m" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.547609 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b8aed1ad-ec7d-4e5d-b60a-b8d7bee2a0cc-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-gwwg5\" (UID: \"b8aed1ad-ec7d-4e5d-b60a-b8d7bee2a0cc\") " pod="openshift-marketplace/marketplace-operator-79b997595-gwwg5" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.547627 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/3960cd0a-8f4a-44de-a022-3858e1176a99-metrics-tls\") pod \"ingress-operator-5b745b69d9-djgfn\" (UID: \"3960cd0a-8f4a-44de-a022-3858e1176a99\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-djgfn" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.547646 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/42e0f31e-1622-4388-9852-f22966d156f4-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-cfkqt\" (UID: \"42e0f31e-1622-4388-9852-f22966d156f4\") " 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-cfkqt" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.547670 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/83b3203a-f1e8-4d8e-8c42-4932026537ee-proxy-tls\") pod \"machine-config-controller-84d6567774-696ts\" (UID: \"83b3203a-f1e8-4d8e-8c42-4932026537ee\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-696ts" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.547689 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9l4t8\" (UniqueName: \"kubernetes.io/projected/e2ccfb3a-48b6-4367-abae-d5ac6d053f77-kube-api-access-9l4t8\") pod \"migrator-59844c95c7-jfgsj\" (UID: \"e2ccfb3a-48b6-4367-abae-d5ac6d053f77\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-jfgsj" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.547710 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4be3f473-ecf9-464d-b363-f28c82456652-config\") pod \"kube-controller-manager-operator-78b949d7b-zxl69\" (UID: \"4be3f473-ecf9-464d-b363-f28c82456652\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-zxl69" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.547739 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vzh8s\" (UniqueName: \"kubernetes.io/projected/b8aed1ad-ec7d-4e5d-b60a-b8d7bee2a0cc-kube-api-access-vzh8s\") pod \"marketplace-operator-79b997595-gwwg5\" (UID: \"b8aed1ad-ec7d-4e5d-b60a-b8d7bee2a0cc\") " pod="openshift-marketplace/marketplace-operator-79b997595-gwwg5" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.547822 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/606305dc-db05-45e6-8409-fdb1ca8ca988-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-rv68m\" (UID: \"606305dc-db05-45e6-8409-fdb1ca8ca988\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rv68m" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.547860 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/52bdc241-d70a-4a84-adc2-618dc90b8886-profile-collector-cert\") pod \"olm-operator-6b444d44fb-bzv6w\" (UID: \"52bdc241-d70a-4a84-adc2-618dc90b8886\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bzv6w" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.547882 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/5aa7d496-b98f-4b8f-8974-1bd30f617280-profile-collector-cert\") pod \"catalog-operator-68c6474976-flwkb\" (UID: \"5aa7d496-b98f-4b8f-8974-1bd30f617280\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-flwkb" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.547910 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b8aed1ad-ec7d-4e5d-b60a-b8d7bee2a0cc-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-gwwg5\" (UID: \"b8aed1ad-ec7d-4e5d-b60a-b8d7bee2a0cc\") " 
pod="openshift-marketplace/marketplace-operator-79b997595-gwwg5" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.547928 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3960cd0a-8f4a-44de-a022-3858e1176a99-trusted-ca\") pod \"ingress-operator-5b745b69d9-djgfn\" (UID: \"3960cd0a-8f4a-44de-a022-3858e1176a99\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-djgfn" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.547950 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h5mxk\" (UniqueName: \"kubernetes.io/projected/83b3203a-f1e8-4d8e-8c42-4932026537ee-kube-api-access-h5mxk\") pod \"machine-config-controller-84d6567774-696ts\" (UID: \"83b3203a-f1e8-4d8e-8c42-4932026537ee\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-696ts" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.547983 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f75d360f-0e31-40e3-8b5d-d51934525efb-metrics-tls\") pod \"dns-operator-744455d44c-zqjqq\" (UID: \"f75d360f-0e31-40e3-8b5d-d51934525efb\") " pod="openshift-dns-operator/dns-operator-744455d44c-zqjqq" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.548022 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zmchs\" (UniqueName: \"kubernetes.io/projected/63e8cbe0-5a31-49f6-bd66-f04a2eb641ec-kube-api-access-zmchs\") pod \"package-server-manager-789f6589d5-2stwm\" (UID: \"63e8cbe0-5a31-49f6-bd66-f04a2eb641ec\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2stwm" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.548044 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/606305dc-db05-45e6-8409-fdb1ca8ca988-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-rv68m\" (UID: \"606305dc-db05-45e6-8409-fdb1ca8ca988\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rv68m" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.549255 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4be3f473-ecf9-464d-b363-f28c82456652-config\") pod \"kube-controller-manager-operator-78b949d7b-zxl69\" (UID: \"4be3f473-ecf9-464d-b363-f28c82456652\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-zxl69" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.549316 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/83b3203a-f1e8-4d8e-8c42-4932026537ee-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-696ts\" (UID: \"83b3203a-f1e8-4d8e-8c42-4932026537ee\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-696ts" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.549316 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-gwwg5"] Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.552491 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-99wl6"] Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.552790 4853 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.553397 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-99wl6" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.554524 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-w6jpc"] Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.555999 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4be3f473-ecf9-464d-b363-f28c82456652-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-zxl69\" (UID: \"4be3f473-ecf9-464d-b363-f28c82456652\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-zxl69" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.556544 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-t9fmp"] Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.556727 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-w6jpc" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.557831 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7jnds"] Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.560183 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-djgfn"] Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.561615 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-zqjqq"] Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.563016 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-9kg95"] Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.564225 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-99wl6"] Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.566668 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-22nw6"] Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.568917 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bzv6w"] Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.569038 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-t2flg"] Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.569919 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-5pn5x"] Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.570456 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2stwm"] Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.572213 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.572595 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rv68m"] Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.572640 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-w6jpc"] Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.573682 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-9qfvq"] Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.575076 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-cfkqt"] Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.576163 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-flwkb"] Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.577616 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396580-xlrxm"] Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.578617 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-2p6qj"] Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.593104 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.602528 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/63e8cbe0-5a31-49f6-bd66-f04a2eb641ec-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-2stwm\" (UID: \"63e8cbe0-5a31-49f6-bd66-f04a2eb641ec\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2stwm" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.612535 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.632945 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.652672 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.672909 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.692354 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.712732 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.733538 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.758534 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b8aed1ad-ec7d-4e5d-b60a-b8d7bee2a0cc-marketplace-operator-metrics\") pod 
\"marketplace-operator-79b997595-gwwg5\" (UID: \"b8aed1ad-ec7d-4e5d-b60a-b8d7bee2a0cc\") " pod="openshift-marketplace/marketplace-operator-79b997595-gwwg5" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.762334 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.780884 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.790141 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b8aed1ad-ec7d-4e5d-b60a-b8d7bee2a0cc-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-gwwg5\" (UID: \"b8aed1ad-ec7d-4e5d-b60a-b8d7bee2a0cc\") " pod="openshift-marketplace/marketplace-operator-79b997595-gwwg5" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.791987 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.812626 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.832294 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.841940 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/83b3203a-f1e8-4d8e-8c42-4932026537ee-proxy-tls\") pod \"machine-config-controller-84d6567774-696ts\" (UID: \"83b3203a-f1e8-4d8e-8c42-4932026537ee\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-696ts" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.853628 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.873088 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.893135 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.913150 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.916499 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42e0f31e-1622-4388-9852-f22966d156f4-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-cfkqt\" (UID: \"42e0f31e-1622-4388-9852-f22966d156f4\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-cfkqt" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.933359 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.943080 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/42e0f31e-1622-4388-9852-f22966d156f4-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-cfkqt\" (UID: \"42e0f31e-1622-4388-9852-f22966d156f4\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-cfkqt" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.953723 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.973458 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Nov 22 07:13:13 crc kubenswrapper[4853]: I1122 07:13:13.993407 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Nov 22 07:13:14 crc kubenswrapper[4853]: I1122 07:13:14.013181 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Nov 22 07:13:14 crc kubenswrapper[4853]: I1122 07:13:14.033245 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Nov 22 07:13:14 crc kubenswrapper[4853]: I1122 07:13:14.052876 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Nov 22 07:13:14 crc kubenswrapper[4853]: I1122 07:13:14.061302 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/52bdc241-d70a-4a84-adc2-618dc90b8886-profile-collector-cert\") pod \"olm-operator-6b444d44fb-bzv6w\" (UID: \"52bdc241-d70a-4a84-adc2-618dc90b8886\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bzv6w" Nov 22 07:13:14 crc kubenswrapper[4853]: I1122 07:13:14.062164 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/5aa7d496-b98f-4b8f-8974-1bd30f617280-profile-collector-cert\") pod \"catalog-operator-68c6474976-flwkb\" (UID: \"5aa7d496-b98f-4b8f-8974-1bd30f617280\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-flwkb" Nov 22 07:13:14 crc kubenswrapper[4853]: I1122 07:13:14.073804 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Nov 22 07:13:14 crc kubenswrapper[4853]: I1122 07:13:14.081484 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/52bdc241-d70a-4a84-adc2-618dc90b8886-srv-cert\") pod \"olm-operator-6b444d44fb-bzv6w\" (UID: \"52bdc241-d70a-4a84-adc2-618dc90b8886\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bzv6w" Nov 22 07:13:14 crc kubenswrapper[4853]: I1122 07:13:14.093494 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Nov 22 07:13:14 crc kubenswrapper[4853]: I1122 07:13:14.116317 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Nov 22 07:13:14 crc kubenswrapper[4853]: I1122 07:13:14.132725 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Nov 22 07:13:14 crc kubenswrapper[4853]: I1122 07:13:14.153385 4853 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Nov 22 07:13:14 crc kubenswrapper[4853]: I1122 07:13:14.172670 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Nov 22 07:13:14 crc kubenswrapper[4853]: I1122 07:13:14.193080 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Nov 22 07:13:14 crc kubenswrapper[4853]: I1122 07:13:14.212793 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Nov 22 07:13:14 crc kubenswrapper[4853]: I1122 07:13:14.232814 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Nov 22 07:13:14 crc kubenswrapper[4853]: I1122 07:13:14.246680 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f75d360f-0e31-40e3-8b5d-d51934525efb-metrics-tls\") pod \"dns-operator-744455d44c-zqjqq\" (UID: \"f75d360f-0e31-40e3-8b5d-d51934525efb\") " pod="openshift-dns-operator/dns-operator-744455d44c-zqjqq" Nov 22 07:13:14 crc kubenswrapper[4853]: I1122 07:13:14.253141 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Nov 22 07:13:14 crc kubenswrapper[4853]: I1122 07:13:14.273569 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Nov 22 07:13:14 crc kubenswrapper[4853]: I1122 07:13:14.293541 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Nov 22 07:13:14 crc kubenswrapper[4853]: I1122 07:13:14.312853 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Nov 22 07:13:14 crc kubenswrapper[4853]: I1122 07:13:14.352691 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Nov 22 07:13:14 crc kubenswrapper[4853]: I1122 07:13:14.372513 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Nov 22 07:13:14 crc kubenswrapper[4853]: I1122 07:13:14.392912 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Nov 22 07:13:14 crc kubenswrapper[4853]: I1122 07:13:14.412720 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Nov 22 07:13:14 crc kubenswrapper[4853]: I1122 07:13:14.431200 4853 request.go:700] Waited for 1.015680408s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-etcd-operator/secrets?fieldSelector=metadata.name%3Detcd-operator-dockercfg-r9srn&limit=500&resourceVersion=0 Nov 22 07:13:14 crc kubenswrapper[4853]: I1122 07:13:14.433831 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Nov 22 07:13:14 crc kubenswrapper[4853]: I1122 07:13:14.452636 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Nov 22 07:13:14 crc kubenswrapper[4853]: I1122 07:13:14.472394 
4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Nov 22 07:13:14 crc kubenswrapper[4853]: I1122 07:13:14.493697 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Nov 22 07:13:14 crc kubenswrapper[4853]: I1122 07:13:14.512917 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Nov 22 07:13:14 crc kubenswrapper[4853]: I1122 07:13:14.533165 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Nov 22 07:13:14 crc kubenswrapper[4853]: E1122 07:13:14.548222 4853 secret.go:188] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Nov 22 07:13:14 crc kubenswrapper[4853]: E1122 07:13:14.548255 4853 secret.go:188] Couldn't get secret openshift-ingress-operator/metrics-tls: failed to sync secret cache: timed out waiting for the condition Nov 22 07:13:14 crc kubenswrapper[4853]: E1122 07:13:14.548296 4853 configmap.go:193] Couldn't get configMap openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-config: failed to sync configmap cache: timed out waiting for the condition Nov 22 07:13:14 crc kubenswrapper[4853]: E1122 07:13:14.548339 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5aa7d496-b98f-4b8f-8974-1bd30f617280-srv-cert podName:5aa7d496-b98f-4b8f-8974-1bd30f617280 nodeName:}" failed. No retries permitted until 2025-11-22 07:13:15.048312041 +0000 UTC m=+193.888934667 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/5aa7d496-b98f-4b8f-8974-1bd30f617280-srv-cert") pod "catalog-operator-68c6474976-flwkb" (UID: "5aa7d496-b98f-4b8f-8974-1bd30f617280") : failed to sync secret cache: timed out waiting for the condition Nov 22 07:13:14 crc kubenswrapper[4853]: E1122 07:13:14.548359 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/606305dc-db05-45e6-8409-fdb1ca8ca988-config podName:606305dc-db05-45e6-8409-fdb1ca8ca988 nodeName:}" failed. No retries permitted until 2025-11-22 07:13:15.048351342 +0000 UTC m=+193.888973968 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/606305dc-db05-45e6-8409-fdb1ca8ca988-config") pod "openshift-kube-scheduler-operator-5fdd9b5758-rv68m" (UID: "606305dc-db05-45e6-8409-fdb1ca8ca988") : failed to sync configmap cache: timed out waiting for the condition Nov 22 07:13:14 crc kubenswrapper[4853]: E1122 07:13:14.548373 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3960cd0a-8f4a-44de-a022-3858e1176a99-metrics-tls podName:3960cd0a-8f4a-44de-a022-3858e1176a99 nodeName:}" failed. No retries permitted until 2025-11-22 07:13:15.048366552 +0000 UTC m=+193.888989168 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/3960cd0a-8f4a-44de-a022-3858e1176a99-metrics-tls") pod "ingress-operator-5b745b69d9-djgfn" (UID: "3960cd0a-8f4a-44de-a022-3858e1176a99") : failed to sync secret cache: timed out waiting for the condition Nov 22 07:13:14 crc kubenswrapper[4853]: E1122 07:13:14.548458 4853 configmap.go:193] Couldn't get configMap openshift-ingress-operator/trusted-ca: failed to sync configmap cache: timed out waiting for the condition Nov 22 07:13:14 crc kubenswrapper[4853]: E1122 07:13:14.548517 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/3960cd0a-8f4a-44de-a022-3858e1176a99-trusted-ca podName:3960cd0a-8f4a-44de-a022-3858e1176a99 nodeName:}" failed. No retries permitted until 2025-11-22 07:13:15.048488016 +0000 UTC m=+193.889110642 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/3960cd0a-8f4a-44de-a022-3858e1176a99-trusted-ca") pod "ingress-operator-5b745b69d9-djgfn" (UID: "3960cd0a-8f4a-44de-a022-3858e1176a99") : failed to sync configmap cache: timed out waiting for the condition Nov 22 07:13:14 crc kubenswrapper[4853]: E1122 07:13:14.548545 4853 secret.go:188] Couldn't get secret openshift-kube-scheduler-operator/kube-scheduler-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Nov 22 07:13:14 crc kubenswrapper[4853]: E1122 07:13:14.548577 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/606305dc-db05-45e6-8409-fdb1ca8ca988-serving-cert podName:606305dc-db05-45e6-8409-fdb1ca8ca988 nodeName:}" failed. No retries permitted until 2025-11-22 07:13:15.048569398 +0000 UTC m=+193.889192024 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/606305dc-db05-45e6-8409-fdb1ca8ca988-serving-cert") pod "openshift-kube-scheduler-operator-5fdd9b5758-rv68m" (UID: "606305dc-db05-45e6-8409-fdb1ca8ca988") : failed to sync secret cache: timed out waiting for the condition
Nov 22 07:13:14 crc kubenswrapper[4853]: I1122 07:13:14.553427 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Nov 22 07:13:14 crc kubenswrapper[4853]: I1122 07:13:14.571741 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Nov 22 07:13:14 crc kubenswrapper[4853]: I1122 07:13:14.592938 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Nov 22 07:13:14 crc kubenswrapper[4853]: I1122 07:13:14.613357 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r"
Nov 22 07:13:14 crc kubenswrapper[4853]: I1122 07:13:14.633174 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Nov 22 07:13:14 crc kubenswrapper[4853]: I1122 07:13:14.653002 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Nov 22 07:13:14 crc kubenswrapper[4853]: I1122 07:13:14.672198 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Nov 22 07:13:14 crc kubenswrapper[4853]: I1122 07:13:14.692939 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Nov 22 07:13:14 crc kubenswrapper[4853]: I1122 07:13:14.713677 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Nov 22 07:13:14 crc kubenswrapper[4853]: I1122 07:13:14.732591 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk"
Nov 22 07:13:14 crc kubenswrapper[4853]: I1122 07:13:14.753054 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Nov 22 07:13:14 crc kubenswrapper[4853]: I1122 07:13:14.779025 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Nov 22 07:13:14 crc kubenswrapper[4853]: I1122 07:13:14.792357 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Nov 22 07:13:14 crc kubenswrapper[4853]: I1122 07:13:14.813870 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Nov 22 07:13:14 crc kubenswrapper[4853]: I1122 07:13:14.832586 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Nov 22 07:13:14 crc kubenswrapper[4853]: I1122 07:13:14.860005 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Nov 22 07:13:14 crc kubenswrapper[4853]: I1122 07:13:14.872397 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Nov 22 07:13:14 crc kubenswrapper[4853]: I1122 07:13:14.892967 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Nov 22 07:13:14 crc kubenswrapper[4853]: I1122 07:13:14.912664 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Nov 22 07:13:14 crc kubenswrapper[4853]: I1122 07:13:14.932557 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86"
Nov 22 07:13:14 crc kubenswrapper[4853]: I1122 07:13:14.952375 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Nov 22 07:13:14 crc kubenswrapper[4853]: I1122 07:13:14.973054 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets"
Nov 22 07:13:14 crc kubenswrapper[4853]: I1122 07:13:14.993232 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd"
Nov 22 07:13:15 crc kubenswrapper[4853]: I1122 07:13:15.014135 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls"
Nov 22 07:13:15 crc kubenswrapper[4853]: I1122 07:13:15.053348 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Nov 22 07:13:15 crc kubenswrapper[4853]: I1122 07:13:15.072856 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd"
Nov 22 07:13:15 crc kubenswrapper[4853]: I1122 07:13:15.076448 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3960cd0a-8f4a-44de-a022-3858e1176a99-trusted-ca\") pod \"ingress-operator-5b745b69d9-djgfn\" (UID: \"3960cd0a-8f4a-44de-a022-3858e1176a99\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-djgfn"
Nov 22 07:13:15 crc kubenswrapper[4853]: I1122 07:13:15.076532 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/606305dc-db05-45e6-8409-fdb1ca8ca988-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-rv68m\" (UID: \"606305dc-db05-45e6-8409-fdb1ca8ca988\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rv68m"
Nov 22 07:13:15 crc kubenswrapper[4853]: I1122 07:13:15.076622 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/5aa7d496-b98f-4b8f-8974-1bd30f617280-srv-cert\") pod \"catalog-operator-68c6474976-flwkb\" (UID: \"5aa7d496-b98f-4b8f-8974-1bd30f617280\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-flwkb"
Nov 22 07:13:15 crc kubenswrapper[4853]: I1122 07:13:15.076672 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/606305dc-db05-45e6-8409-fdb1ca8ca988-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-rv68m\" (UID: \"606305dc-db05-45e6-8409-fdb1ca8ca988\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rv68m"
Nov 22 07:13:15 crc kubenswrapper[4853]: I1122 07:13:15.076696 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/3960cd0a-8f4a-44de-a022-3858e1176a99-metrics-tls\") pod \"ingress-operator-5b745b69d9-djgfn\" (UID: \"3960cd0a-8f4a-44de-a022-3858e1176a99\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-djgfn"
Nov 22 07:13:15 crc kubenswrapper[4853]: I1122 07:13:15.078677 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/606305dc-db05-45e6-8409-fdb1ca8ca988-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-rv68m\" (UID: \"606305dc-db05-45e6-8409-fdb1ca8ca988\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rv68m"
Nov 22 07:13:15 crc kubenswrapper[4853]: I1122 07:13:15.078817 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3960cd0a-8f4a-44de-a022-3858e1176a99-trusted-ca\") pod \"ingress-operator-5b745b69d9-djgfn\" (UID: \"3960cd0a-8f4a-44de-a022-3858e1176a99\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-djgfn"
Nov 22 07:13:15 crc kubenswrapper[4853]: I1122 07:13:15.081212 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/5aa7d496-b98f-4b8f-8974-1bd30f617280-srv-cert\") pod \"catalog-operator-68c6474976-flwkb\" (UID: \"5aa7d496-b98f-4b8f-8974-1bd30f617280\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-flwkb"
Nov 22 07:13:15 crc kubenswrapper[4853]: I1122 07:13:15.081260 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/3960cd0a-8f4a-44de-a022-3858e1176a99-metrics-tls\") pod \"ingress-operator-5b745b69d9-djgfn\" (UID: \"3960cd0a-8f4a-44de-a022-3858e1176a99\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-djgfn"
Nov 22 07:13:15 crc kubenswrapper[4853]: I1122 07:13:15.081334 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/606305dc-db05-45e6-8409-fdb1ca8ca988-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-rv68m\" (UID: \"606305dc-db05-45e6-8409-fdb1ca8ca988\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rv68m"
Nov 22 07:13:15 crc kubenswrapper[4853]: I1122 07:13:15.092701 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Nov 22 07:13:15 crc kubenswrapper[4853]: I1122 07:13:15.130453 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2vfnr\" (UniqueName: \"kubernetes.io/projected/2715796f-e4b0-4400-a02c-a485171a9858-kube-api-access-2vfnr\") pod \"cluster-image-registry-operator-dc59b4c8b-wzbj5\" (UID: \"2715796f-e4b0-4400-a02c-a485171a9858\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wzbj5"
Nov 22 07:13:15 crc kubenswrapper[4853]: I1122 07:13:15.151083 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rtt47\" (UniqueName: \"kubernetes.io/projected/4df1a0b5-a039-4098-a88e-96015dcf1406-kube-api-access-rtt47\") pod \"openshift-apiserver-operator-796bbdcf4f-kc8zd\" (UID: \"4df1a0b5-a039-4098-a88e-96015dcf1406\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kc8zd"
Nov 22 07:13:15 crc kubenswrapper[4853]: I1122 07:13:15.168501 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lslkp\" (UniqueName: \"kubernetes.io/projected/f5444051-a1d3-4854-8b30-367e3fd2c123-kube-api-access-lslkp\") pod \"console-operator-58897d9998-psplq\" (UID: \"f5444051-a1d3-4854-8b30-367e3fd2c123\") " pod="openshift-console-operator/console-operator-58897d9998-psplq"
Nov 22 07:13:15 crc kubenswrapper[4853]: I1122 07:13:15.173301 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Nov 22 07:13:15 crc kubenswrapper[4853]: I1122 07:13:15.192439 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Nov 22 07:13:15 crc kubenswrapper[4853]: I1122 07:13:15.213428 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Nov 22 07:13:15 crc kubenswrapper[4853]: I1122 07:13:15.232626 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx"
Nov 22 07:13:15 crc kubenswrapper[4853]: I1122 07:13:15.268507 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ccw22\" (UniqueName: \"kubernetes.io/projected/90b00b61-4e40-4e08-b164-643608e91dd0-kube-api-access-ccw22\") pod \"controller-manager-879f6c89f-9qkgc\" (UID: \"90b00b61-4e40-4e08-b164-643608e91dd0\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9qkgc"
Nov 22 07:13:15 crc kubenswrapper[4853]: I1122 07:13:15.288503 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f9hdd\" (UniqueName: \"kubernetes.io/projected/05b7fb71-56a6-4875-a680-995a1a2194d6-kube-api-access-f9hdd\") pod \"apiserver-76f77b778f-xmnqz\" (UID: \"05b7fb71-56a6-4875-a680-995a1a2194d6\") " pod="openshift-apiserver/apiserver-76f77b778f-xmnqz"
Nov 22 07:13:15 crc kubenswrapper[4853]: I1122 07:13:15.309159 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-glccb\" (UniqueName: \"kubernetes.io/projected/341b4f0c-09ee-4297-99c4-b8e6334de4ed-kube-api-access-glccb\") pod \"machine-approver-56656f9798-pbzlk\" (UID: \"341b4f0c-09ee-4297-99c4-b8e6334de4ed\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-pbzlk"
Nov 22 07:13:15 crc kubenswrapper[4853]: I1122 07:13:15.329218 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kjsjt\" (UniqueName: \"kubernetes.io/projected/bcd72804-cd09-4ec3-ae4a-f539958eb90c-kube-api-access-kjsjt\") pod \"downloads-7954f5f757-hpb7j\" (UID: \"bcd72804-cd09-4ec3-ae4a-f539958eb90c\") " pod="openshift-console/downloads-7954f5f757-hpb7j"
Nov 22 07:13:15 crc kubenswrapper[4853]: I1122 07:13:15.334309 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-psplq"
Nov 22 07:13:15 crc kubenswrapper[4853]: I1122 07:13:15.348155 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-xmnqz"
Nov 22 07:13:15 crc kubenswrapper[4853]: I1122 07:13:15.349903 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qhxxn\" (UniqueName: \"kubernetes.io/projected/6d3c61d5-518d-443e-beb3-a0bf27a07be4-kube-api-access-qhxxn\") pod \"console-f9d7485db-5nds5\" (UID: \"6d3c61d5-518d-443e-beb3-a0bf27a07be4\") " pod="openshift-console/console-f9d7485db-5nds5"
Nov 22 07:13:15 crc kubenswrapper[4853]: I1122 07:13:15.364133 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-5nds5"
Nov 22 07:13:15 crc kubenswrapper[4853]: I1122 07:13:15.368237 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4hhbn\" (UniqueName: \"kubernetes.io/projected/e118bf40-4574-410f-bb2f-b5eb601974e5-kube-api-access-4hhbn\") pod \"openshift-controller-manager-operator-756b6f6bc6-5pn5x\" (UID: \"e118bf40-4574-410f-bb2f-b5eb601974e5\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-5pn5x"
Nov 22 07:13:15 crc kubenswrapper[4853]: I1122 07:13:15.381609 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-9qkgc"
Nov 22 07:13:15 crc kubenswrapper[4853]: I1122 07:13:15.392332 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-52r87\" (UniqueName: \"kubernetes.io/projected/90eeaa0a-6939-40a5-821c-82579c812f3b-kube-api-access-52r87\") pod \"authentication-operator-69f744f599-dbd5p\" (UID: \"90eeaa0a-6939-40a5-821c-82579c812f3b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-dbd5p"
Nov 22 07:13:15 crc kubenswrapper[4853]: I1122 07:13:15.406888 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9fd9n\" (UniqueName: \"kubernetes.io/projected/0d10e537-edf1-40b9-a8a7-038237e48834-kube-api-access-9fd9n\") pod \"apiserver-7bbb656c7d-dvcg6\" (UID: \"0d10e537-edf1-40b9-a8a7-038237e48834\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dvcg6"
Nov 22 07:13:15 crc kubenswrapper[4853]: I1122 07:13:15.418350 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dvcg6"
Nov 22 07:13:15 crc kubenswrapper[4853]: I1122 07:13:15.434802 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kc8zd"
Nov 22 07:13:15 crc kubenswrapper[4853]: I1122 07:13:15.435910 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qws8t\" (UniqueName: \"kubernetes.io/projected/b512d3e0-dee2-48d3-87e3-b05fb2cf8ed7-kube-api-access-qws8t\") pod \"oauth-openshift-558db77b4-9qfvq\" (UID: \"b512d3e0-dee2-48d3-87e3-b05fb2cf8ed7\") " pod="openshift-authentication/oauth-openshift-558db77b4-9qfvq"
Nov 22 07:13:15 crc kubenswrapper[4853]: I1122 07:13:15.450387 4853 request.go:700] Waited for 1.919653161s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-samples-operator/serviceaccounts/cluster-samples-operator/token
Nov 22 07:13:15 crc kubenswrapper[4853]: I1122 07:13:15.451236 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2715796f-e4b0-4400-a02c-a485171a9858-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-wzbj5\" (UID: \"2715796f-e4b0-4400-a02c-a485171a9858\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wzbj5"
Nov 22 07:13:15 crc kubenswrapper[4853]: I1122 07:13:15.477006 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c7ctw\" (UniqueName: \"kubernetes.io/projected/ad5235a6-36eb-42fc-8a56-d8464014b881-kube-api-access-c7ctw\") pod \"cluster-samples-operator-665b6dd947-cll6l\" (UID: \"ad5235a6-36eb-42fc-8a56-d8464014b881\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-cll6l"
Nov 22 07:13:15 crc kubenswrapper[4853]: I1122 07:13:15.506573 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kq2vl\" (UniqueName: \"kubernetes.io/projected/065d5bdc-7e13-4a03-aa2d-5b7dd3b3938c-kube-api-access-kq2vl\") pod \"machine-api-operator-5694c8668f-rh6fb\" (UID: \"065d5bdc-7e13-4a03-aa2d-5b7dd3b3938c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-rh6fb"
Nov 22 07:13:15 crc kubenswrapper[4853]: I1122 07:13:15.524035 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-pbzlk"
Nov 22 07:13:15 crc kubenswrapper[4853]: I1122 07:13:15.524515 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sq7n2\" (UniqueName: \"kubernetes.io/projected/0fdfc9f2-e63f-48f4-89ad-94ef8b642d04-kube-api-access-sq7n2\") pod \"route-controller-manager-6576b87f9c-p5l64\" (UID: \"0fdfc9f2-e63f-48f4-89ad-94ef8b642d04\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-p5l64"
Nov 22 07:13:15 crc kubenswrapper[4853]: I1122 07:13:15.539409 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nhvpp\" (UniqueName: \"kubernetes.io/projected/2454431f-55ed-4abb-b70f-9382007e9026-kube-api-access-nhvpp\") pod \"openshift-config-operator-7777fb866f-bqk2r\" (UID: \"2454431f-55ed-4abb-b70f-9382007e9026\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-bqk2r"
Nov 22 07:13:15 crc kubenswrapper[4853]: I1122 07:13:15.545719 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-hpb7j"
Nov 22 07:13:15 crc kubenswrapper[4853]: I1122 07:13:15.551142 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdqq4\" (UniqueName: \"kubernetes.io/projected/f75d360f-0e31-40e3-8b5d-d51934525efb-kube-api-access-rdqq4\") pod \"dns-operator-744455d44c-zqjqq\" (UID: \"f75d360f-0e31-40e3-8b5d-d51934525efb\") " pod="openshift-dns-operator/dns-operator-744455d44c-zqjqq"
Nov 22 07:13:15 crc kubenswrapper[4853]: I1122 07:13:15.579551 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gfc8r\" (UniqueName: \"kubernetes.io/projected/5aa7d496-b98f-4b8f-8974-1bd30f617280-kube-api-access-gfc8r\") pod \"catalog-operator-68c6474976-flwkb\" (UID: \"5aa7d496-b98f-4b8f-8974-1bd30f617280\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-flwkb"
Nov 22 07:13:15 crc kubenswrapper[4853]: I1122 07:13:15.594933 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bmxpx\" (UniqueName: \"kubernetes.io/projected/3960cd0a-8f4a-44de-a022-3858e1176a99-kube-api-access-bmxpx\") pod \"ingress-operator-5b745b69d9-djgfn\" (UID: \"3960cd0a-8f4a-44de-a022-3858e1176a99\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-djgfn"
Nov 22 07:13:15 crc kubenswrapper[4853]: I1122 07:13:15.613361 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5sttx\" (UniqueName: \"kubernetes.io/projected/42e0f31e-1622-4388-9852-f22966d156f4-kube-api-access-5sttx\") pod \"kube-storage-version-migrator-operator-b67b599dd-cfkqt\" (UID: \"42e0f31e-1622-4388-9852-f22966d156f4\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-cfkqt"
Nov 22 07:13:15 crc kubenswrapper[4853]: I1122 07:13:15.613590 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-psplq"]
Nov 22 07:13:15 crc kubenswrapper[4853]: I1122 07:13:15.621525 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-dbd5p"
Nov 22 07:13:15 crc kubenswrapper[4853]: W1122 07:13:15.625783 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf5444051_a1d3_4854_8b30_367e3fd2c123.slice/crio-2cb80d999e3d6aed561f0fedcb8da45c0f4b8d545c7863887d320732a4105e97 WatchSource:0}: Error finding container 2cb80d999e3d6aed561f0fedcb8da45c0f4b8d545c7863887d320732a4105e97: Status 404 returned error can't find the container with id 2cb80d999e3d6aed561f0fedcb8da45c0f4b8d545c7863887d320732a4105e97
Nov 22 07:13:15 crc kubenswrapper[4853]: I1122 07:13:15.629666 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wzbj5"
Nov 22 07:13:15 crc kubenswrapper[4853]: I1122 07:13:15.632202 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f79mr\" (UniqueName: \"kubernetes.io/projected/52bdc241-d70a-4a84-adc2-618dc90b8886-kube-api-access-f79mr\") pod \"olm-operator-6b444d44fb-bzv6w\" (UID: \"52bdc241-d70a-4a84-adc2-618dc90b8886\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bzv6w"
Nov 22 07:13:15 crc kubenswrapper[4853]: I1122 07:13:15.644785 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-5pn5x"
Nov 22 07:13:15 crc kubenswrapper[4853]: I1122 07:13:15.652059 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4be3f473-ecf9-464d-b363-f28c82456652-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-zxl69\" (UID: \"4be3f473-ecf9-464d-b363-f28c82456652\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-zxl69"
Nov 22 07:13:15 crc kubenswrapper[4853]: I1122 07:13:15.653775 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-cll6l"
Nov 22 07:13:15 crc kubenswrapper[4853]: I1122 07:13:15.676088 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/3960cd0a-8f4a-44de-a022-3858e1176a99-bound-sa-token\") pod \"ingress-operator-5b745b69d9-djgfn\" (UID: \"3960cd0a-8f4a-44de-a022-3858e1176a99\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-djgfn"
Nov 22 07:13:15 crc kubenswrapper[4853]: I1122 07:13:15.757431 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zmchs\" (UniqueName: \"kubernetes.io/projected/63e8cbe0-5a31-49f6-bd66-f04a2eb641ec-kube-api-access-zmchs\") pod \"package-server-manager-789f6589d5-2stwm\" (UID: \"63e8cbe0-5a31-49f6-bd66-f04a2eb641ec\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2stwm"
Nov 22 07:13:15 crc kubenswrapper[4853]: I1122 07:13:15.772930 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default"
Nov 22 07:13:15 crc kubenswrapper[4853]: I1122 07:13:15.777605 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kc8zd"]
Nov 22 07:13:15 crc kubenswrapper[4853]: I1122 07:13:15.779282 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h5mxk\" (UniqueName: \"kubernetes.io/projected/83b3203a-f1e8-4d8e-8c42-4932026537ee-kube-api-access-h5mxk\") pod \"machine-config-controller-84d6567774-696ts\" (UID: \"83b3203a-f1e8-4d8e-8c42-4932026537ee\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-696ts"
Nov 22 07:13:15 crc kubenswrapper[4853]: I1122 07:13:15.793937 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh"
Nov 22 07:13:15 crc kubenswrapper[4853]: I1122 07:13:15.813154 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Nov 22 07:13:15 crc kubenswrapper[4853]: I1122 07:13:15.832110 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt"
Nov 22 07:13:15 crc kubenswrapper[4853]: I1122 07:13:15.853160 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt"
Nov 22 07:13:15 crc kubenswrapper[4853]: I1122 07:13:15.867873 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-xmnqz"]
Nov 22 07:13:15 crc kubenswrapper[4853]: I1122 07:13:15.874315 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-9qkgc"]
Nov 22 07:13:15 crc kubenswrapper[4853]: I1122 07:13:15.875238 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-hpb7j"]
Nov 22 07:13:15 crc kubenswrapper[4853]: I1122 07:13:15.877200 4853 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k"
Nov 22 07:13:15 crc kubenswrapper[4853]: I1122 07:13:15.877289 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-5nds5"]
Nov 22 07:13:15 crc kubenswrapper[4853]: W1122 07:13:15.893264 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbcd72804_cd09_4ec3_ae4a_f539958eb90c.slice/crio-8e6bc96771059e2c8f799b9c53e0dfa76ea9448710ffb5d921e1b8ba117e3e49 WatchSource:0}: Error finding container 8e6bc96771059e2c8f799b9c53e0dfa76ea9448710ffb5d921e1b8ba117e3e49: Status 404 returned error can't find the container with id 8e6bc96771059e2c8f799b9c53e0dfa76ea9448710ffb5d921e1b8ba117e3e49
Nov 22 07:13:15 crc kubenswrapper[4853]: W1122 07:13:15.896600 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod05b7fb71_56a6_4875_a680_995a1a2194d6.slice/crio-7416011503e97e89b69c81ff96b6f4c8af7512676a3abc1509227b2194c8ff3f WatchSource:0}: Error finding container 7416011503e97e89b69c81ff96b6f4c8af7512676a3abc1509227b2194c8ff3f: Status 404 returned error can't find the container with id 7416011503e97e89b69c81ff96b6f4c8af7512676a3abc1509227b2194c8ff3f
Nov 22 07:13:15 crc kubenswrapper[4853]: W1122 07:13:15.897788 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod90b00b61_4e40_4e08_b164_643608e91dd0.slice/crio-4246803e0ef4ed600ec0927d6385f1e8e217847eef221a71eb58cb0a20a28737 WatchSource:0}: Error finding container 4246803e0ef4ed600ec0927d6385f1e8e217847eef221a71eb58cb0a20a28737: Status 404 returned error can't find the container with id 4246803e0ef4ed600ec0927d6385f1e8e217847eef221a71eb58cb0a20a28737
Nov 22 07:13:15 crc kubenswrapper[4853]: W1122 07:13:15.898215 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6d3c61d5_518d_443e_beb3_a0bf27a07be4.slice/crio-b17a7802bc0213ba96a5bca0eb6b8a0c92a507db5071f8650d60e7b03c987d3a WatchSource:0}: Error finding container b17a7802bc0213ba96a5bca0eb6b8a0c92a507db5071f8650d60e7b03c987d3a: Status 404 returned error can't find the container with id b17a7802bc0213ba96a5bca0eb6b8a0c92a507db5071f8650d60e7b03c987d3a
Nov 22 07:13:15 crc kubenswrapper[4853]: I1122 07:13:15.933108 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc"
Nov 22 07:13:15 crc kubenswrapper[4853]: I1122 07:13:15.933191 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-9qfvq"
Nov 22 07:13:15 crc kubenswrapper[4853]: I1122 07:13:15.952544 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Nov 22 07:13:15 crc kubenswrapper[4853]: I1122 07:13:15.961655 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-dvcg6"]
Nov 22 07:13:15 crc kubenswrapper[4853]: I1122 07:13:15.973507 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw"
Nov 22 07:13:15 crc kubenswrapper[4853]: I1122 07:13:15.983526 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-zxl69"
Nov 22 07:13:15 crc kubenswrapper[4853]: W1122 07:13:15.984154 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0d10e537_edf1_40b9_a8a7_038237e48834.slice/crio-5ce3f5d4bde270d920e3cddbc57fd997c4571efd4f80eea42ebd6e80185bb58b WatchSource:0}: Error finding container 5ce3f5d4bde270d920e3cddbc57fd997c4571efd4f80eea42ebd6e80185bb58b: Status 404 returned error can't find the container with id 5ce3f5d4bde270d920e3cddbc57fd997c4571efd4f80eea42ebd6e80185bb58b
Nov 22 07:13:15 crc kubenswrapper[4853]: I1122 07:13:15.991573 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Nov 22 07:13:15 crc kubenswrapper[4853]: I1122 07:13:15.992395 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-dbd5p"]
Nov 22 07:13:15 crc kubenswrapper[4853]: I1122 07:13:15.996216 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bj5nl\" (UniqueName: \"kubernetes.io/projected/f5873a23-2127-4288-8d68-6d12756368b5-kube-api-access-bj5nl\") pod \"multus-admission-controller-857f4d67dd-h486l\" (UID: \"f5873a23-2127-4288-8d68-6d12756368b5\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-h486l"
Nov 22 07:13:15 crc kubenswrapper[4853]: I1122 07:13:15.996241 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/29a2c0ca-4d4d-4a05-a4bb-96b05720f59f-serving-cert\") pod \"etcd-operator-b45778765-9kg95\" (UID: \"29a2c0ca-4d4d-4a05-a4bb-96b05720f59f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-9kg95"
Nov 22 07:13:15 crc kubenswrapper[4853]: I1122 07:13:15.996265 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6c313448-9287-4014-b36e-ae4e14b9ee4e-metrics-certs\") pod \"router-default-5444994796-h2jnh\" (UID: \"6c313448-9287-4014-b36e-ae4e14b9ee4e\") " pod="openshift-ingress/router-default-5444994796-h2jnh"
Nov 22 07:13:15 crc kubenswrapper[4853]: I1122 07:13:15.996307 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/472b3cc8-386e-4828-a725-263057fb299b-webhook-cert\") pod \"packageserver-d55dfcdfc-7jnds\" (UID: \"472b3cc8-386e-4828-a725-263057fb299b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7jnds"
Nov 22 07:13:15 crc kubenswrapper[4853]: I1122 07:13:15.996330 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ctzjf\" (UniqueName: \"kubernetes.io/projected/7e73b9e6-c1a8-411b-9360-32dc388d76f1-kube-api-access-ctzjf\") pod \"service-ca-9c57cc56f-n6vz6\" (UID: \"7e73b9e6-c1a8-411b-9360-32dc388d76f1\") " pod="openshift-service-ca/service-ca-9c57cc56f-n6vz6"
Nov 22 07:13:15 crc kubenswrapper[4853]: I1122 07:13:15.996367 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/adaf4de5-0b3c-4b48-a232-45157864a0f7-config-volume\") pod \"collect-profiles-29396580-xlrxm\" (UID: \"adaf4de5-0b3c-4b48-a232-45157864a0f7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396580-xlrxm"
Nov 22 07:13:15 crc kubenswrapper[4853]: I1122 07:13:15.996426 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fl227\" (UniqueName: \"kubernetes.io/projected/23e292bd-15e6-4fc0-835e-2871bc0e9e8e-kube-api-access-fl227\") pod \"machine-config-operator-74547568cd-22nw6\" (UID: \"23e292bd-15e6-4fc0-835e-2871bc0e9e8e\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-22nw6"
Nov 22 07:13:15 crc kubenswrapper[4853]: I1122 07:13:15.996444 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/541af556-5dce-45ed-bf9e-f6faf6b146ca-trusted-ca\") pod \"image-registry-697d97f7c8-2p6qj\" (UID: \"541af556-5dce-45ed-bf9e-f6faf6b146ca\") " pod="openshift-image-registry/image-registry-697d97f7c8-2p6qj"
Nov 22 07:13:15 crc kubenswrapper[4853]: I1122 07:13:15.996494 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9a046e1a-9a2f-472a-909e-12fdaa9db2f1-serving-cert\") pod \"service-ca-operator-777779d784-cjq85\" (UID: \"9a046e1a-9a2f-472a-909e-12fdaa9db2f1\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-cjq85"
Nov 22 07:13:15 crc kubenswrapper[4853]: I1122 07:13:15.996514 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6k86b\" (UniqueName: \"kubernetes.io/projected/29a2c0ca-4d4d-4a05-a4bb-96b05720f59f-kube-api-access-6k86b\") pod \"etcd-operator-b45778765-9kg95\" (UID: \"29a2c0ca-4d4d-4a05-a4bb-96b05720f59f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-9kg95"
Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:15.996556 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/2eb41230-c219-4968-a240-36db37f3d772-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-hktm5\" (UID: \"2eb41230-c219-4968-a240-36db37f3d772\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-hktm5"
Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:15.996596 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/23e292bd-15e6-4fc0-835e-2871bc0e9e8e-auth-proxy-config\") pod \"machine-config-operator-74547568cd-22nw6\" (UID: \"23e292bd-15e6-4fc0-835e-2871bc0e9e8e\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-22nw6"
Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:15.996612 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gfmt7\" (UniqueName: \"kubernetes.io/projected/6c313448-9287-4014-b36e-ae4e14b9ee4e-kube-api-access-gfmt7\") pod \"router-default-5444994796-h2jnh\" (UID: \"6c313448-9287-4014-b36e-ae4e14b9ee4e\") " pod="openshift-ingress/router-default-5444994796-h2jnh"
Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:15.996628 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/472b3cc8-386e-4828-a725-263057fb299b-tmpfs\") pod \"packageserver-d55dfcdfc-7jnds\" (UID: \"472b3cc8-386e-4828-a725-263057fb299b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7jnds"
Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:15.996644 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/29a2c0ca-4d4d-4a05-a4bb-96b05720f59f-etcd-ca\") pod \"etcd-operator-b45778765-9kg95\" (UID: \"29a2c0ca-4d4d-4a05-a4bb-96b05720f59f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-9kg95"
Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:15.996664 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/541af556-5dce-45ed-bf9e-f6faf6b146ca-installation-pull-secrets\") pod \"image-registry-697d97f7c8-2p6qj\" (UID: \"541af556-5dce-45ed-bf9e-f6faf6b146ca\") " pod="openshift-image-registry/image-registry-697d97f7c8-2p6qj"
Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:15.997528 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/541af556-5dce-45ed-bf9e-f6faf6b146ca-bound-sa-token\") pod \"image-registry-697d97f7c8-2p6qj\" (UID: \"541af556-5dce-45ed-bf9e-f6faf6b146ca\") " pod="openshift-image-registry/image-registry-697d97f7c8-2p6qj"
Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:15.997867 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w2rbq\" (UniqueName: \"kubernetes.io/projected/541af556-5dce-45ed-bf9e-f6faf6b146ca-kube-api-access-w2rbq\") pod \"image-registry-697d97f7c8-2p6qj\" (UID: \"541af556-5dce-45ed-bf9e-f6faf6b146ca\") " pod="openshift-image-registry/image-registry-697d97f7c8-2p6qj"
Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:15.997984 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/6c313448-9287-4014-b36e-ae4e14b9ee4e-stats-auth\") pod \"router-default-5444994796-h2jnh\" (UID: \"6c313448-9287-4014-b36e-ae4e14b9ee4e\") " pod="openshift-ingress/router-default-5444994796-h2jnh"
Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:15.998014 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhvjc\" (UniqueName: \"kubernetes.io/projected/472b3cc8-386e-4828-a725-263057fb299b-kube-api-access-xhvjc\") pod \"packageserver-d55dfcdfc-7jnds\" (UID: \"472b3cc8-386e-4828-a725-263057fb299b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7jnds"
Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:15.998580 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vg24r\" (UniqueName: \"kubernetes.io/projected/adaf4de5-0b3c-4b48-a232-45157864a0f7-kube-api-access-vg24r\") pod \"collect-profiles-29396580-xlrxm\" (UID: \"adaf4de5-0b3c-4b48-a232-45157864a0f7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396580-xlrxm"
Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:15.999414 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/541af556-5dce-45ed-bf9e-f6faf6b146ca-registry-certificates\") pod \"image-registry-697d97f7c8-2p6qj\" (UID: \"541af556-5dce-45ed-bf9e-f6faf6b146ca\") " pod="openshift-image-registry/image-registry-697d97f7c8-2p6qj"
Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:15.999659 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/541af556-5dce-45ed-bf9e-f6faf6b146ca-registry-tls\") pod \"image-registry-697d97f7c8-2p6qj\" (UID: \"541af556-5dce-45ed-bf9e-f6faf6b146ca\") " pod="openshift-image-registry/image-registry-697d97f7c8-2p6qj"
Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:15.999692 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/23e292bd-15e6-4fc0-835e-2871bc0e9e8e-images\") pod \"machine-config-operator-74547568cd-22nw6\" (UID: \"23e292bd-15e6-4fc0-835e-2871bc0e9e8e\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-22nw6"
Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:15.999994 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/7e73b9e6-c1a8-411b-9360-32dc388d76f1-signing-key\") pod \"service-ca-9c57cc56f-n6vz6\" (UID: \"7e73b9e6-c1a8-411b-9360-32dc388d76f1\") " pod="openshift-service-ca/service-ca-9c57cc56f-n6vz6"
Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.000020 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/6c313448-9287-4014-b36e-ae4e14b9ee4e-default-certificate\") pod \"router-default-5444994796-h2jnh\" (UID: \"6c313448-9287-4014-b36e-ae4e14b9ee4e\") " pod="openshift-ingress/router-default-5444994796-h2jnh"
Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.000123 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/29a2c0ca-4d4d-4a05-a4bb-96b05720f59f-etcd-client\") pod \"etcd-operator-b45778765-9kg95\" (UID: \"29a2c0ca-4d4d-4a05-a4bb-96b05720f59f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-9kg95"
Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.000163 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/f5873a23-2127-4288-8d68-6d12756368b5-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-h486l\" (UID: \"f5873a23-2127-4288-8d68-6d12756368b5\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-h486l"
Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.000192 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/913eeba3-a280-4ffa-a61a-febff59fcc2e-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-t2flg\" (UID: \"913eeba3-a280-4ffa-a61a-febff59fcc2e\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-t2flg"
Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.000228 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dznvk\" (UniqueName: \"kubernetes.io/projected/9a046e1a-9a2f-472a-909e-12fdaa9db2f1-kube-api-access-dznvk\") pod \"service-ca-operator-777779d784-cjq85\" (UID: \"9a046e1a-9a2f-472a-909e-12fdaa9db2f1\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-cjq85"
Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.000255 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/adaf4de5-0b3c-4b48-a232-45157864a0f7-secret-volume\") pod \"collect-profiles-29396580-xlrxm\" (UID: \"adaf4de5-0b3c-4b48-a232-45157864a0f7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396580-xlrxm"
Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.000329 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2p6qj\" (UID: \"541af556-5dce-45ed-bf9e-f6faf6b146ca\") " pod="openshift-image-registry/image-registry-697d97f7c8-2p6qj"
Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.000427 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2smdw\" (UniqueName: \"kubernetes.io/projected/2eb41230-c219-4968-a240-36db37f3d772-kube-api-access-2smdw\") pod \"control-plane-machine-set-operator-78cbb6b69f-hktm5\" (UID: \"2eb41230-c219-4968-a240-36db37f3d772\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-hktm5"
Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.000552 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/7e73b9e6-c1a8-411b-9360-32dc388d76f1-signing-cabundle\") pod \"service-ca-9c57cc56f-n6vz6\" (UID: \"7e73b9e6-c1a8-411b-9360-32dc388d76f1\") " pod="openshift-service-ca/service-ca-9c57cc56f-n6vz6"
Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.000661 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/23e292bd-15e6-4fc0-835e-2871bc0e9e8e-proxy-tls\") pod \"machine-config-operator-74547568cd-22nw6\" (UID: \"23e292bd-15e6-4fc0-835e-2871bc0e9e8e\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-22nw6"
Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.000740 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/472b3cc8-386e-4828-a725-263057fb299b-apiservice-cert\") pod \"packageserver-d55dfcdfc-7jnds\" (UID: \"472b3cc8-386e-4828-a725-263057fb299b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7jnds"
Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.000787 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/913eeba3-a280-4ffa-a61a-febff59fcc2e-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-t2flg\" (UID: \"913eeba3-a280-4ffa-a61a-febff59fcc2e\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-t2flg"
Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.000872 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29a2c0ca-4d4d-4a05-a4bb-96b05720f59f-config\") pod \"etcd-operator-b45778765-9kg95\" (UID: \"29a2c0ca-4d4d-4a05-a4bb-96b05720f59f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-9kg95"
Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.000910 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a046e1a-9a2f-472a-909e-12fdaa9db2f1-config\") pod \"service-ca-operator-777779d784-cjq85\" (UID: \"9a046e1a-9a2f-472a-909e-12fdaa9db2f1\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-cjq85"
Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.000949 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/913eeba3-a280-4ffa-a61a-febff59fcc2e-config\") pod \"kube-apiserver-operator-766d6c64bb-t2flg\" (UID: \"913eeba3-a280-4ffa-a61a-febff59fcc2e\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-t2flg"
Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.001039 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/29a2c0ca-4d4d-4a05-a4bb-96b05720f59f-etcd-service-ca\") pod \"etcd-operator-b45778765-9kg95\" (UID: \"29a2c0ca-4d4d-4a05-a4bb-96b05720f59f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-9kg95"
Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.001085 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/541af556-5dce-45ed-bf9e-f6faf6b146ca-ca-trust-extracted\") pod \"image-registry-697d97f7c8-2p6qj\" (UID: \"541af556-5dce-45ed-bf9e-f6faf6b146ca\") " pod="openshift-image-registry/image-registry-697d97f7c8-2p6qj"
Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.001108 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6c313448-9287-4014-b36e-ae4e14b9ee4e-service-ca-bundle\") pod \"router-default-5444994796-h2jnh\" (UID: \"6c313448-9287-4014-b36e-ae4e14b9ee4e\") " pod="openshift-ingress/router-default-5444994796-h2jnh"
Nov 22 07:13:16 crc kubenswrapper[4853]: E1122 07:13:16.009454 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:16.509426043 +0000 UTC m=+195.350048669 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2p6qj" (UID: "541af556-5dce-45ed-bf9e-f6faf6b146ca") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.014305 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/606305dc-db05-45e6-8409-fdb1ca8ca988-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-rv68m\" (UID: \"606305dc-db05-45e6-8409-fdb1ca8ca988\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rv68m"
Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.016467 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.018959 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wzbj5"]
Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.034134 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7"
Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.042676 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-rh6fb"
Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.052609 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk"
Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.053902 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bzv6w"
Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.059409 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-flwkb"
Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.061137 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2stwm"
Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.073124 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5"
Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.076284 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-zqjqq"
Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.093324 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.102264 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-p5l64"
Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.102875 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.103093 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29a2c0ca-4d4d-4a05-a4bb-96b05720f59f-config\") pod \"etcd-operator-b45778765-9kg95\" (UID: \"29a2c0ca-4d4d-4a05-a4bb-96b05720f59f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-9kg95"
Nov 22 07:13:16 crc kubenswrapper[4853]: E1122 07:13:16.103134 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:16.603109496 +0000 UTC m=+195.443732122 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.103200 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a046e1a-9a2f-472a-909e-12fdaa9db2f1-config\") pod \"service-ca-operator-777779d784-cjq85\" (UID: \"9a046e1a-9a2f-472a-909e-12fdaa9db2f1\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-cjq85"
Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.103228 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/913eeba3-a280-4ffa-a61a-febff59fcc2e-config\") pod \"kube-apiserver-operator-766d6c64bb-t2flg\" (UID: \"913eeba3-a280-4ffa-a61a-febff59fcc2e\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-t2flg"
Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.103277 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/8c51e400-95dc-4b1b-ab28-e3f2e5780758-socket-dir\") pod \"csi-hostpathplugin-w6jpc\" (UID: \"8c51e400-95dc-4b1b-ab28-e3f2e5780758\") " pod="hostpath-provisioner/csi-hostpathplugin-w6jpc"
Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.103295 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/29a2c0ca-4d4d-4a05-a4bb-96b05720f59f-etcd-service-ca\") pod \"etcd-operator-b45778765-9kg95\" (UID: \"29a2c0ca-4d4d-4a05-a4bb-96b05720f59f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-9kg95"
Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.103329 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/541af556-5dce-45ed-bf9e-f6faf6b146ca-ca-trust-extracted\") pod \"image-registry-697d97f7c8-2p6qj\" (UID: \"541af556-5dce-45ed-bf9e-f6faf6b146ca\") " pod="openshift-image-registry/image-registry-697d97f7c8-2p6qj"
Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.103348 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6c313448-9287-4014-b36e-ae4e14b9ee4e-service-ca-bundle\") pod \"router-default-5444994796-h2jnh\" (UID: \"6c313448-9287-4014-b36e-ae4e14b9ee4e\") " pod="openshift-ingress/router-default-5444994796-h2jnh"
Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.103369 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bj5nl\" (UniqueName: \"kubernetes.io/projected/f5873a23-2127-4288-8d68-6d12756368b5-kube-api-access-bj5nl\") pod \"multus-admission-controller-857f4d67dd-h486l\" (UID: \"f5873a23-2127-4288-8d68-6d12756368b5\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-h486l"
Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.103388 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/29a2c0ca-4d4d-4a05-a4bb-96b05720f59f-serving-cert\") pod \"etcd-operator-b45778765-9kg95\" (UID: \"29a2c0ca-4d4d-4a05-a4bb-96b05720f59f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-9kg95"
Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.103409 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6c313448-9287-4014-b36e-ae4e14b9ee4e-metrics-certs\") pod \"router-default-5444994796-h2jnh\" (UID: \"6c313448-9287-4014-b36e-ae4e14b9ee4e\") " pod="openshift-ingress/router-default-5444994796-h2jnh"
Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.103436 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/472b3cc8-386e-4828-a725-263057fb299b-webhook-cert\") pod \"packageserver-d55dfcdfc-7jnds\" (UID: \"472b3cc8-386e-4828-a725-263057fb299b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7jnds"
Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.103462 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/9546ad13-1c91-495a-865b-b3396a94e17e-certs\") pod \"machine-config-server-bnk7x\" (UID: \"9546ad13-1c91-495a-865b-b3396a94e17e\") " pod="openshift-machine-config-operator/machine-config-server-bnk7x"
Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.103490 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6wsbn\" (UniqueName: \"kubernetes.io/projected/9546ad13-1c91-495a-865b-b3396a94e17e-kube-api-access-6wsbn\") pod \"machine-config-server-bnk7x\" (UID: \"9546ad13-1c91-495a-865b-b3396a94e17e\") " pod="openshift-machine-config-operator/machine-config-server-bnk7x"
Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.103535 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ctzjf\" (UniqueName: \"kubernetes.io/projected/7e73b9e6-c1a8-411b-9360-32dc388d76f1-kube-api-access-ctzjf\") pod \"service-ca-9c57cc56f-n6vz6\" (UID: \"7e73b9e6-c1a8-411b-9360-32dc388d76f1\") " pod="openshift-service-ca/service-ca-9c57cc56f-n6vz6"
Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.103577 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/adaf4de5-0b3c-4b48-a232-45157864a0f7-config-volume\") pod \"collect-profiles-29396580-xlrxm\" (UID: \"adaf4de5-0b3c-4b48-a232-45157864a0f7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396580-xlrxm"
Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.103666 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fl227\" (UniqueName: \"kubernetes.io/projected/23e292bd-15e6-4fc0-835e-2871bc0e9e8e-kube-api-access-fl227\") pod \"machine-config-operator-74547568cd-22nw6\" (UID: \"23e292bd-15e6-4fc0-835e-2871bc0e9e8e\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-22nw6"
Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.103714 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/541af556-5dce-45ed-bf9e-f6faf6b146ca-trusted-ca\") pod \"image-registry-697d97f7c8-2p6qj\" (UID: \"541af556-5dce-45ed-bf9e-f6faf6b146ca\") " pod="openshift-image-registry/image-registry-697d97f7c8-2p6qj"
Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.103945 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/541af556-5dce-45ed-bf9e-f6faf6b146ca-ca-trust-extracted\") pod \"image-registry-697d97f7c8-2p6qj\" (UID: \"541af556-5dce-45ed-bf9e-f6faf6b146ca\") " pod="openshift-image-registry/image-registry-697d97f7c8-2p6qj"
Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.104080 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/8c51e400-95dc-4b1b-ab28-e3f2e5780758-registration-dir\") pod \"csi-hostpathplugin-w6jpc\" (UID: \"8c51e400-95dc-4b1b-ab28-e3f2e5780758\") " pod="hostpath-provisioner/csi-hostpathplugin-w6jpc"
Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.104126 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a1a696f2-274f-4b1c-9212-fc280920f69f-config-volume\") pod \"dns-default-99wl6\" (UID: \"a1a696f2-274f-4b1c-9212-fc280920f69f\") " pod="openshift-dns/dns-default-99wl6"
Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.104280 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9a046e1a-9a2f-472a-909e-12fdaa9db2f1-serving-cert\") pod \"service-ca-operator-777779d784-cjq85\" (UID: \"9a046e1a-9a2f-472a-909e-12fdaa9db2f1\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-cjq85"
Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.104306 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6k86b\" (UniqueName: \"kubernetes.io/projected/29a2c0ca-4d4d-4a05-a4bb-96b05720f59f-kube-api-access-6k86b\") pod \"etcd-operator-b45778765-9kg95\" (UID: \"29a2c0ca-4d4d-4a05-a4bb-96b05720f59f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-9kg95"
Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.104529 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/2eb41230-c219-4968-a240-36db37f3d772-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-hktm5\" (UID: \"2eb41230-c219-4968-a240-36db37f3d772\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-hktm5"
Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.104555 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/23e292bd-15e6-4fc0-835e-2871bc0e9e8e-auth-proxy-config\") pod \"machine-config-operator-74547568cd-22nw6\" (UID: \"23e292bd-15e6-4fc0-835e-2871bc0e9e8e\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-22nw6"
Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.104619 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gfmt7\" (UniqueName: \"kubernetes.io/projected/6c313448-9287-4014-b36e-ae4e14b9ee4e-kube-api-access-gfmt7\") pod \"router-default-5444994796-h2jnh\" (UID: \"6c313448-9287-4014-b36e-ae4e14b9ee4e\") " pod="openshift-ingress/router-default-5444994796-h2jnh"
Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.104640 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/472b3cc8-386e-4828-a725-263057fb299b-tmpfs\") pod \"packageserver-d55dfcdfc-7jnds\" (UID: \"472b3cc8-386e-4828-a725-263057fb299b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7jnds"
Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.104658 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/29a2c0ca-4d4d-4a05-a4bb-96b05720f59f-etcd-ca\") pod \"etcd-operator-b45778765-9kg95\" (UID: \"29a2c0ca-4d4d-4a05-a4bb-96b05720f59f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-9kg95"
Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.104677 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/541af556-5dce-45ed-bf9e-f6faf6b146ca-installation-pull-secrets\") pod \"image-registry-697d97f7c8-2p6qj\" (UID: \"541af556-5dce-45ed-bf9e-f6faf6b146ca\") " pod="openshift-image-registry/image-registry-697d97f7c8-2p6qj"
Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.104693 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/541af556-5dce-45ed-bf9e-f6faf6b146ca-bound-sa-token\") pod \"image-registry-697d97f7c8-2p6qj\" (UID: \"541af556-5dce-45ed-bf9e-f6faf6b146ca\") " pod="openshift-image-registry/image-registry-697d97f7c8-2p6qj"
Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.104709 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w2rbq\" (UniqueName: \"kubernetes.io/projected/541af556-5dce-45ed-bf9e-f6faf6b146ca-kube-api-access-w2rbq\") pod \"image-registry-697d97f7c8-2p6qj\" (UID: \"541af556-5dce-45ed-bf9e-f6faf6b146ca\") " pod="openshift-image-registry/image-registry-697d97f7c8-2p6qj"
Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.104958 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/6c313448-9287-4014-b36e-ae4e14b9ee4e-stats-auth\") pod \"router-default-5444994796-h2jnh\" (UID:
\"6c313448-9287-4014-b36e-ae4e14b9ee4e\") " pod="openshift-ingress/router-default-5444994796-h2jnh" Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.104993 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xhvjc\" (UniqueName: \"kubernetes.io/projected/472b3cc8-386e-4828-a725-263057fb299b-kube-api-access-xhvjc\") pod \"packageserver-d55dfcdfc-7jnds\" (UID: \"472b3cc8-386e-4828-a725-263057fb299b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7jnds" Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.105373 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/23e292bd-15e6-4fc0-835e-2871bc0e9e8e-auth-proxy-config\") pod \"machine-config-operator-74547568cd-22nw6\" (UID: \"23e292bd-15e6-4fc0-835e-2871bc0e9e8e\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-22nw6" Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.105429 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/87d08723-fac8-48ca-9255-848a0e659721-cert\") pod \"ingress-canary-t9fmp\" (UID: \"87d08723-fac8-48ca-9255-848a0e659721\") " pod="openshift-ingress-canary/ingress-canary-t9fmp" Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.105435 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/541af556-5dce-45ed-bf9e-f6faf6b146ca-trusted-ca\") pod \"image-registry-697d97f7c8-2p6qj\" (UID: \"541af556-5dce-45ed-bf9e-f6faf6b146ca\") " pod="openshift-image-registry/image-registry-697d97f7c8-2p6qj" Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.105565 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-86bjn\" (UniqueName: \"kubernetes.io/projected/87d08723-fac8-48ca-9255-848a0e659721-kube-api-access-86bjn\") pod \"ingress-canary-t9fmp\" (UID: \"87d08723-fac8-48ca-9255-848a0e659721\") " pod="openshift-ingress-canary/ingress-canary-t9fmp" Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.105637 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vg24r\" (UniqueName: \"kubernetes.io/projected/adaf4de5-0b3c-4b48-a232-45157864a0f7-kube-api-access-vg24r\") pod \"collect-profiles-29396580-xlrxm\" (UID: \"adaf4de5-0b3c-4b48-a232-45157864a0f7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396580-xlrxm" Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.105642 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/472b3cc8-386e-4828-a725-263057fb299b-tmpfs\") pod \"packageserver-d55dfcdfc-7jnds\" (UID: \"472b3cc8-386e-4828-a725-263057fb299b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7jnds" Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.105775 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/541af556-5dce-45ed-bf9e-f6faf6b146ca-registry-certificates\") pod \"image-registry-697d97f7c8-2p6qj\" (UID: \"541af556-5dce-45ed-bf9e-f6faf6b146ca\") " pod="openshift-image-registry/image-registry-697d97f7c8-2p6qj" Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.105874 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/541af556-5dce-45ed-bf9e-f6faf6b146ca-registry-tls\") pod \"image-registry-697d97f7c8-2p6qj\" (UID: \"541af556-5dce-45ed-bf9e-f6faf6b146ca\") " pod="openshift-image-registry/image-registry-697d97f7c8-2p6qj" Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.105903 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/23e292bd-15e6-4fc0-835e-2871bc0e9e8e-images\") pod \"machine-config-operator-74547568cd-22nw6\" (UID: \"23e292bd-15e6-4fc0-835e-2871bc0e9e8e\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-22nw6" Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.106360 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/7e73b9e6-c1a8-411b-9360-32dc388d76f1-signing-key\") pod \"service-ca-9c57cc56f-n6vz6\" (UID: \"7e73b9e6-c1a8-411b-9360-32dc388d76f1\") " pod="openshift-service-ca/service-ca-9c57cc56f-n6vz6" Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.106392 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/6c313448-9287-4014-b36e-ae4e14b9ee4e-default-certificate\") pod \"router-default-5444994796-h2jnh\" (UID: \"6c313448-9287-4014-b36e-ae4e14b9ee4e\") " pod="openshift-ingress/router-default-5444994796-h2jnh" Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.106542 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/29a2c0ca-4d4d-4a05-a4bb-96b05720f59f-etcd-client\") pod \"etcd-operator-b45778765-9kg95\" (UID: \"29a2c0ca-4d4d-4a05-a4bb-96b05720f59f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-9kg95" Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.106906 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/f5873a23-2127-4288-8d68-6d12756368b5-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-h486l\" (UID: \"f5873a23-2127-4288-8d68-6d12756368b5\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-h486l" Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.107195 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/913eeba3-a280-4ffa-a61a-febff59fcc2e-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-t2flg\" (UID: \"913eeba3-a280-4ffa-a61a-febff59fcc2e\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-t2flg" Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.107214 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/541af556-5dce-45ed-bf9e-f6faf6b146ca-registry-certificates\") pod \"image-registry-697d97f7c8-2p6qj\" (UID: \"541af556-5dce-45ed-bf9e-f6faf6b146ca\") " pod="openshift-image-registry/image-registry-697d97f7c8-2p6qj" Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.107432 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dznvk\" (UniqueName: \"kubernetes.io/projected/9a046e1a-9a2f-472a-909e-12fdaa9db2f1-kube-api-access-dznvk\") pod \"service-ca-operator-777779d784-cjq85\" (UID: \"9a046e1a-9a2f-472a-909e-12fdaa9db2f1\") " 
pod="openshift-service-ca-operator/service-ca-operator-777779d784-cjq85" Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.107470 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/adaf4de5-0b3c-4b48-a232-45157864a0f7-secret-volume\") pod \"collect-profiles-29396580-xlrxm\" (UID: \"adaf4de5-0b3c-4b48-a232-45157864a0f7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396580-xlrxm" Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.107518 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s7klm\" (UniqueName: \"kubernetes.io/projected/8c51e400-95dc-4b1b-ab28-e3f2e5780758-kube-api-access-s7klm\") pod \"csi-hostpathplugin-w6jpc\" (UID: \"8c51e400-95dc-4b1b-ab28-e3f2e5780758\") " pod="hostpath-provisioner/csi-hostpathplugin-w6jpc" Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.107547 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-klj4x\" (UniqueName: \"kubernetes.io/projected/a1a696f2-274f-4b1c-9212-fc280920f69f-kube-api-access-klj4x\") pod \"dns-default-99wl6\" (UID: \"a1a696f2-274f-4b1c-9212-fc280920f69f\") " pod="openshift-dns/dns-default-99wl6" Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.107617 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2p6qj\" (UID: \"541af556-5dce-45ed-bf9e-f6faf6b146ca\") " pod="openshift-image-registry/image-registry-697d97f7c8-2p6qj" Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.107662 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/9546ad13-1c91-495a-865b-b3396a94e17e-node-bootstrap-token\") pod \"machine-config-server-bnk7x\" (UID: \"9546ad13-1c91-495a-865b-b3396a94e17e\") " pod="openshift-machine-config-operator/machine-config-server-bnk7x" Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.107729 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2smdw\" (UniqueName: \"kubernetes.io/projected/2eb41230-c219-4968-a240-36db37f3d772-kube-api-access-2smdw\") pod \"control-plane-machine-set-operator-78cbb6b69f-hktm5\" (UID: \"2eb41230-c219-4968-a240-36db37f3d772\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-hktm5" Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.107802 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/8c51e400-95dc-4b1b-ab28-e3f2e5780758-mountpoint-dir\") pod \"csi-hostpathplugin-w6jpc\" (UID: \"8c51e400-95dc-4b1b-ab28-e3f2e5780758\") " pod="hostpath-provisioner/csi-hostpathplugin-w6jpc" Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.107830 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/8c51e400-95dc-4b1b-ab28-e3f2e5780758-plugins-dir\") pod \"csi-hostpathplugin-w6jpc\" (UID: \"8c51e400-95dc-4b1b-ab28-e3f2e5780758\") " pod="hostpath-provisioner/csi-hostpathplugin-w6jpc" Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.107903 4853 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/7e73b9e6-c1a8-411b-9360-32dc388d76f1-signing-cabundle\") pod \"service-ca-9c57cc56f-n6vz6\" (UID: \"7e73b9e6-c1a8-411b-9360-32dc388d76f1\") " pod="openshift-service-ca/service-ca-9c57cc56f-n6vz6" Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.108156 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/23e292bd-15e6-4fc0-835e-2871bc0e9e8e-proxy-tls\") pod \"machine-config-operator-74547568cd-22nw6\" (UID: \"23e292bd-15e6-4fc0-835e-2871bc0e9e8e\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-22nw6" Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.108235 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/8c51e400-95dc-4b1b-ab28-e3f2e5780758-csi-data-dir\") pod \"csi-hostpathplugin-w6jpc\" (UID: \"8c51e400-95dc-4b1b-ab28-e3f2e5780758\") " pod="hostpath-provisioner/csi-hostpathplugin-w6jpc" Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.108304 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/472b3cc8-386e-4828-a725-263057fb299b-apiservice-cert\") pod \"packageserver-d55dfcdfc-7jnds\" (UID: \"472b3cc8-386e-4828-a725-263057fb299b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7jnds" Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.108333 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/913eeba3-a280-4ffa-a61a-febff59fcc2e-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-t2flg\" (UID: \"913eeba3-a280-4ffa-a61a-febff59fcc2e\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-t2flg" Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.108401 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a1a696f2-274f-4b1c-9212-fc280920f69f-metrics-tls\") pod \"dns-default-99wl6\" (UID: \"a1a696f2-274f-4b1c-9212-fc280920f69f\") " pod="openshift-dns/dns-default-99wl6" Nov 22 07:13:16 crc kubenswrapper[4853]: E1122 07:13:16.108595 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:16.608580185 +0000 UTC m=+195.449202811 (durationBeforeRetry 500ms). 
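[Annotation] The burst of reconciler_common.go records above is the kubelet volume manager's reconcile pass: for each volume a newly admitted pod requires, it first confirms the volume is attached (VerifyControllerAttachedVolume), then starts MountVolume, and logs MountVolume.SetUp succeeded once the mount is in place. The Error: detail on the next line belongs to the nestedpendingoperations record just above it, which parks the failed PVC operation for 500ms instead of retrying immediately. A minimal sketch of that desired-state versus actual-state loop, with illustrative types rather than kubelet's own:

```go
// Hypothetical sketch of the reconcile pass reported above: for every volume
// a scheduled pod needs (desired state) that is not yet mounted (actual
// state), kick off a mount; names are illustrative, not kubelet's types.
package main

import "fmt"

type volumeKey struct{ podUID, volName string }

func reconcile(desired, actual map[volumeKey]bool, mount func(volumeKey) error) {
	for k := range desired {
		if actual[k] {
			continue // already mounted; nothing to do
		}
		// Mirrors "operationExecutor.MountVolume started for volume ..."
		fmt.Printf("MountVolume started for volume %q pod %q\n", k.volName, k.podUID)
		if err := mount(k); err != nil {
			fmt.Printf("mount failed, parked for backoff: %v\n", err)
			continue
		}
		actual[k] = true // mirrors "MountVolume.SetUp succeeded"
	}
}

func main() {
	desired := map[volumeKey]bool{
		{"541af556", "trusted-ca"}:         true,
		{"541af556", "ca-trust-extracted"}: true,
	}
	actual := map[volumeKey]bool{}
	reconcile(desired, actual, func(volumeKey) error { return nil })
}
```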
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2p6qj" (UID: "541af556-5dce-45ed-bf9e-f6faf6b146ca") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.113533 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.114465 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/adaf4de5-0b3c-4b48-a232-45157864a0f7-secret-volume\") pod \"collect-profiles-29396580-xlrxm\" (UID: \"adaf4de5-0b3c-4b48-a232-45157864a0f7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396580-xlrxm" Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.116879 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-cfkqt" Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.133780 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.145146 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-bqk2r" Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.152788 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.160991 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-djgfn" Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.175492 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-cll6l"] Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.176876 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-5pn5x"] Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.200052 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.205048 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-9qkgc" event={"ID":"90b00b61-4e40-4e08-b164-643608e91dd0","Type":"ContainerStarted","Data":"f066e8531d48ef7201244030a6fd47cd8dc984d8c01d05e471ed6a0c4bfa0740"} Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.205150 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-9qkgc" event={"ID":"90b00b61-4e40-4e08-b164-643608e91dd0","Type":"ContainerStarted","Data":"4246803e0ef4ed600ec0927d6385f1e8e217847eef221a71eb58cb0a20a28737"} Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.208551 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-9qkgc" Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.208675 4853 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-9qkgc container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body= Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.208712 4853 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-9qkgc" podUID="90b00b61-4e40-4e08-b164-643608e91dd0" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.209723 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vzh8s\" (UniqueName: \"kubernetes.io/projected/b8aed1ad-ec7d-4e5d-b60a-b8d7bee2a0cc-kube-api-access-vzh8s\") pod \"marketplace-operator-79b997595-gwwg5\" (UID: \"b8aed1ad-ec7d-4e5d-b60a-b8d7bee2a0cc\") " pod="openshift-marketplace/marketplace-operator-79b997595-gwwg5" Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.209882 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:16 crc kubenswrapper[4853]: E1122 07:13:16.210018 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:16.709993968 +0000 UTC m=+195.550616594 (durationBeforeRetry 500ms). 
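[Annotation] Both failure messages in this stretch, MountVolume.MountDevice for the image-registry PVC and UnmountVolume.TearDown for the departed pod 8f668bae-612b-4b75-9490-919e737c6a3b, come down to the same condition: kubevirt.io.hostpath-provisioner is not yet in the kubelet's table of registered CSI drivers, so no CSI client can be constructed in either direction. A sketch of that lookup, assuming a simple name-to-socket map (the real structure lives in the kubelet's CSI plugin code):

```go
package main

import (
	"fmt"
	"sync"
)

// csiDrivers stands in for the kubelet's in-memory table of node-registered
// CSI plugins; a driver only appears here after its node plugin has handed
// the kubelet a socket via the plugin-registration mechanism.
type csiDrivers struct {
	mu     sync.RWMutex
	byName map[string]string // driver name -> unix socket path (illustrative)
}

func (r *csiDrivers) client(name string) (string, error) {
	r.mu.RLock()
	defer r.mu.RUnlock()
	sock, ok := r.byName[name]
	if !ok {
		// Same failure mode as the log: the driver simply isn't registered yet.
		return "", fmt.Errorf("driver name %s not found in the list of registered CSI drivers", name)
	}
	return sock, nil
}

func main() {
	reg := &csiDrivers{byName: map[string]string{}}
	if _, err := reg.client("kubevirt.io.hostpath-provisioner"); err != nil {
		fmt.Println("MountDevice would fail:", err) // retried later with backoff
	}
}
```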
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.210232 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s7klm\" (UniqueName: \"kubernetes.io/projected/8c51e400-95dc-4b1b-ab28-e3f2e5780758-kube-api-access-s7klm\") pod \"csi-hostpathplugin-w6jpc\" (UID: \"8c51e400-95dc-4b1b-ab28-e3f2e5780758\") " pod="hostpath-provisioner/csi-hostpathplugin-w6jpc" Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.210263 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-klj4x\" (UniqueName: \"kubernetes.io/projected/a1a696f2-274f-4b1c-9212-fc280920f69f-kube-api-access-klj4x\") pod \"dns-default-99wl6\" (UID: \"a1a696f2-274f-4b1c-9212-fc280920f69f\") " pod="openshift-dns/dns-default-99wl6" Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.210283 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/9546ad13-1c91-495a-865b-b3396a94e17e-node-bootstrap-token\") pod \"machine-config-server-bnk7x\" (UID: \"9546ad13-1c91-495a-865b-b3396a94e17e\") " pod="openshift-machine-config-operator/machine-config-server-bnk7x" Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.210307 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2p6qj\" (UID: \"541af556-5dce-45ed-bf9e-f6faf6b146ca\") " pod="openshift-image-registry/image-registry-697d97f7c8-2p6qj" Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.210349 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/8c51e400-95dc-4b1b-ab28-e3f2e5780758-mountpoint-dir\") pod \"csi-hostpathplugin-w6jpc\" (UID: \"8c51e400-95dc-4b1b-ab28-e3f2e5780758\") " pod="hostpath-provisioner/csi-hostpathplugin-w6jpc" Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.210398 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/8c51e400-95dc-4b1b-ab28-e3f2e5780758-plugins-dir\") pod \"csi-hostpathplugin-w6jpc\" (UID: \"8c51e400-95dc-4b1b-ab28-e3f2e5780758\") " pod="hostpath-provisioner/csi-hostpathplugin-w6jpc" Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.210443 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/8c51e400-95dc-4b1b-ab28-e3f2e5780758-csi-data-dir\") pod \"csi-hostpathplugin-w6jpc\" (UID: \"8c51e400-95dc-4b1b-ab28-e3f2e5780758\") " pod="hostpath-provisioner/csi-hostpathplugin-w6jpc" Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.210466 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a1a696f2-274f-4b1c-9212-fc280920f69f-metrics-tls\") pod \"dns-default-99wl6\" 
(UID: \"a1a696f2-274f-4b1c-9212-fc280920f69f\") " pod="openshift-dns/dns-default-99wl6" Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.210526 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/8c51e400-95dc-4b1b-ab28-e3f2e5780758-socket-dir\") pod \"csi-hostpathplugin-w6jpc\" (UID: \"8c51e400-95dc-4b1b-ab28-e3f2e5780758\") " pod="hostpath-provisioner/csi-hostpathplugin-w6jpc" Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.210572 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/9546ad13-1c91-495a-865b-b3396a94e17e-certs\") pod \"machine-config-server-bnk7x\" (UID: \"9546ad13-1c91-495a-865b-b3396a94e17e\") " pod="openshift-machine-config-operator/machine-config-server-bnk7x" Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.210594 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6wsbn\" (UniqueName: \"kubernetes.io/projected/9546ad13-1c91-495a-865b-b3396a94e17e-kube-api-access-6wsbn\") pod \"machine-config-server-bnk7x\" (UID: \"9546ad13-1c91-495a-865b-b3396a94e17e\") " pod="openshift-machine-config-operator/machine-config-server-bnk7x" Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.210653 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/8c51e400-95dc-4b1b-ab28-e3f2e5780758-registration-dir\") pod \"csi-hostpathplugin-w6jpc\" (UID: \"8c51e400-95dc-4b1b-ab28-e3f2e5780758\") " pod="hostpath-provisioner/csi-hostpathplugin-w6jpc" Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.210687 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a1a696f2-274f-4b1c-9212-fc280920f69f-config-volume\") pod \"dns-default-99wl6\" (UID: \"a1a696f2-274f-4b1c-9212-fc280920f69f\") " pod="openshift-dns/dns-default-99wl6" Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.210814 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/87d08723-fac8-48ca-9255-848a0e659721-cert\") pod \"ingress-canary-t9fmp\" (UID: \"87d08723-fac8-48ca-9255-848a0e659721\") " pod="openshift-ingress-canary/ingress-canary-t9fmp" Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.210835 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-86bjn\" (UniqueName: \"kubernetes.io/projected/87d08723-fac8-48ca-9255-848a0e659721-kube-api-access-86bjn\") pod \"ingress-canary-t9fmp\" (UID: \"87d08723-fac8-48ca-9255-848a0e659721\") " pod="openshift-ingress-canary/ingress-canary-t9fmp" Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.210854 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/8c51e400-95dc-4b1b-ab28-e3f2e5780758-plugins-dir\") pod \"csi-hostpathplugin-w6jpc\" (UID: \"8c51e400-95dc-4b1b-ab28-e3f2e5780758\") " pod="hostpath-provisioner/csi-hostpathplugin-w6jpc" Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.210654 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/8c51e400-95dc-4b1b-ab28-e3f2e5780758-mountpoint-dir\") pod \"csi-hostpathplugin-w6jpc\" (UID: \"8c51e400-95dc-4b1b-ab28-e3f2e5780758\") " 
pod="hostpath-provisioner/csi-hostpathplugin-w6jpc" Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.210973 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/8c51e400-95dc-4b1b-ab28-e3f2e5780758-socket-dir\") pod \"csi-hostpathplugin-w6jpc\" (UID: \"8c51e400-95dc-4b1b-ab28-e3f2e5780758\") " pod="hostpath-provisioner/csi-hostpathplugin-w6jpc" Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.211166 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/8c51e400-95dc-4b1b-ab28-e3f2e5780758-csi-data-dir\") pod \"csi-hostpathplugin-w6jpc\" (UID: \"8c51e400-95dc-4b1b-ab28-e3f2e5780758\") " pod="hostpath-provisioner/csi-hostpathplugin-w6jpc" Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.211415 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/8c51e400-95dc-4b1b-ab28-e3f2e5780758-registration-dir\") pod \"csi-hostpathplugin-w6jpc\" (UID: \"8c51e400-95dc-4b1b-ab28-e3f2e5780758\") " pod="hostpath-provisioner/csi-hostpathplugin-w6jpc" Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.212507 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a1a696f2-274f-4b1c-9212-fc280920f69f-config-volume\") pod \"dns-default-99wl6\" (UID: \"a1a696f2-274f-4b1c-9212-fc280920f69f\") " pod="openshift-dns/dns-default-99wl6" Nov 22 07:13:16 crc kubenswrapper[4853]: E1122 07:13:16.214268 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:16.714182652 +0000 UTC m=+195.554805488 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2p6qj" (UID: "541af556-5dce-45ed-bf9e-f6faf6b146ca") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.215527 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.215570 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a1a696f2-274f-4b1c-9212-fc280920f69f-metrics-tls\") pod \"dns-default-99wl6\" (UID: \"a1a696f2-274f-4b1c-9212-fc280920f69f\") " pod="openshift-dns/dns-default-99wl6" Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.217575 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dvcg6" event={"ID":"0d10e537-edf1-40b9-a8a7-038237e48834","Type":"ContainerStarted","Data":"5ce3f5d4bde270d920e3cddbc57fd997c4571efd4f80eea42ebd6e80185bb58b"} Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.219719 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wzbj5" event={"ID":"2715796f-e4b0-4400-a02c-a485171a9858","Type":"ContainerStarted","Data":"ae9531a5443453613e03699e5b9dba29ce95a533726645ee1b9e2b5c8275b0f3"} Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.224607 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9l4t8\" (UniqueName: \"kubernetes.io/projected/e2ccfb3a-48b6-4367-abae-d5ac6d053f77-kube-api-access-9l4t8\") pod \"migrator-59844c95c7-jfgsj\" (UID: \"e2ccfb3a-48b6-4367-abae-d5ac6d053f77\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-jfgsj" Nov 22 07:13:16 crc kubenswrapper[4853]: W1122 07:13:16.228459 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode118bf40_4574_410f_bb2f_b5eb601974e5.slice/crio-603c58d815c2a6177fc439c9063f4b94953ce053c6716892f3e90c89793d01f2 WatchSource:0}: Error finding container 603c58d815c2a6177fc439c9063f4b94953ce053c6716892f3e90c89793d01f2: Status 404 returned error can't find the container with id 603c58d815c2a6177fc439c9063f4b94953ce053c6716892f3e90c89793d01f2 Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.232430 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.235284 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-696ts" Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.250207 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-dbd5p" event={"ID":"90eeaa0a-6939-40a5-821c-82579c812f3b","Type":"ContainerStarted","Data":"4a30aa7422650b12e1713505728966aa69dfee51b909229268d633de9a94f674"} Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.252480 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.253229 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-hpb7j" event={"ID":"bcd72804-cd09-4ec3-ae4a-f539958eb90c","Type":"ContainerStarted","Data":"8e6bc96771059e2c8f799b9c53e0dfa76ea9448710ffb5d921e1b8ba117e3e49"} Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.253935 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29a2c0ca-4d4d-4a05-a4bb-96b05720f59f-config\") pod \"etcd-operator-b45778765-9kg95\" (UID: \"29a2c0ca-4d4d-4a05-a4bb-96b05720f59f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-9kg95" Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.278256 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kc8zd" event={"ID":"4df1a0b5-a039-4098-a88e-96015dcf1406","Type":"ContainerStarted","Data":"5b081c08e8c14b429c145e55b77f9420e79f1f51b62a8e6b8aa2a3b9d6b8c229"} Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.278617 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kc8zd" event={"ID":"4df1a0b5-a039-4098-a88e-96015dcf1406","Type":"ContainerStarted","Data":"90fe3724b60c0a870d312db7512bb36ce49acfdfee87d0d50881802039b0da9b"} Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.285539 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-pbzlk" event={"ID":"341b4f0c-09ee-4297-99c4-b8e6334de4ed","Type":"ContainerStarted","Data":"8ecdd67f9d0e2d1603d9b968b08f8ceeb83f858e1abf66fbdd0c6f3940017d0b"} Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.285606 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-pbzlk" event={"ID":"341b4f0c-09ee-4297-99c4-b8e6334de4ed","Type":"ContainerStarted","Data":"09c3d9cc032b56f5c0e0f1e89df933f41b0930a66aba736123a5a5cc915ef396"} Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.288288 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-xmnqz" event={"ID":"05b7fb71-56a6-4875-a680-995a1a2194d6","Type":"ContainerStarted","Data":"7416011503e97e89b69c81ff96b6f4c8af7512676a3abc1509227b2194c8ff3f"} Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.299850 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.308387 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-psplq" 
event={"ID":"f5444051-a1d3-4854-8b30-367e3fd2c123","Type":"ContainerStarted","Data":"74557f6729f254b5a50ca88b1ab459fffebe63e141c15a931e2c17c70970eab7"} Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.308436 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-psplq" event={"ID":"f5444051-a1d3-4854-8b30-367e3fd2c123","Type":"ContainerStarted","Data":"2cb80d999e3d6aed561f0fedcb8da45c0f4b8d545c7863887d320732a4105e97"} Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.309661 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-psplq" Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.311542 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/472b3cc8-386e-4828-a725-263057fb299b-webhook-cert\") pod \"packageserver-d55dfcdfc-7jnds\" (UID: \"472b3cc8-386e-4828-a725-263057fb299b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7jnds" Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.311871 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:16 crc kubenswrapper[4853]: E1122 07:13:16.312032 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:16.812009228 +0000 UTC m=+195.652631854 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.312667 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2p6qj\" (UID: \"541af556-5dce-45ed-bf9e-f6faf6b146ca\") " pod="openshift-image-registry/image-registry-697d97f7c8-2p6qj" Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.314251 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.315309 4853 patch_prober.go:28] interesting pod/console-operator-58897d9998-psplq container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.13:8443/readyz\": dial tcp 10.217.0.13:8443: connect: connection refused" start-of-body= Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.315365 4853 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-psplq" podUID="f5444051-a1d3-4854-8b30-367e3fd2c123" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.13:8443/readyz\": dial tcp 10.217.0.13:8443: connect: connection refused" Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.315386 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/472b3cc8-386e-4828-a725-263057fb299b-apiservice-cert\") pod \"packageserver-d55dfcdfc-7jnds\" (UID: \"472b3cc8-386e-4828-a725-263057fb299b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7jnds" Nov 22 07:13:16 crc kubenswrapper[4853]: E1122 07:13:16.317652 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:16.817630031 +0000 UTC m=+195.658252657 (durationBeforeRetry 500ms). 
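[Annotation] The readiness failures for controller-manager and console-operator are expected this early in startup: the kubelet begins probing as soon as the container starts, and "connection refused" just means the process has not bound port 8443 yet. A probe is essentially an HTTP GET with a short timeout where any transport error or a status outside 200-399 counts as failure; a minimal stand-in (kubelet's HTTPS probes skip verification of the pod's serving certificate):

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// Minimal stand-in for the HTTPS readiness probe the prober lines report.
func probeReady(url string) error {
	client := &http.Client{
		Timeout: 1 * time.Second,
		// Scheme=HTTPS probes do not verify the pod's self-issued cert.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(url)
	if err != nil {
		return fmt.Errorf("probe failed: %v", err) // e.g. connect: connection refused
	}
	defer resp.Body.Close()
	if resp.StatusCode < 200 || resp.StatusCode >= 400 {
		return fmt.Errorf("probe failed: status %d", resp.StatusCode)
	}
	return nil
}

func main() {
	// Same shape as the log's failure output for the console-operator pod.
	if err := probeReady("https://10.217.0.13:8443/readyz"); err != nil {
		fmt.Println(err)
	}
}
```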
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2p6qj" (UID: "541af556-5dce-45ed-bf9e-f6faf6b146ca") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.320279 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6c313448-9287-4014-b36e-ae4e14b9ee4e-metrics-certs\") pod \"router-default-5444994796-h2jnh\" (UID: \"6c313448-9287-4014-b36e-ae4e14b9ee4e\") " pod="openshift-ingress/router-default-5444994796-h2jnh" Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.325982 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-5nds5" event={"ID":"6d3c61d5-518d-443e-beb3-a0bf27a07be4","Type":"ContainerStarted","Data":"b17a7802bc0213ba96a5bca0eb6b8a0c92a507db5071f8650d60e7b03c987d3a"} Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.353256 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.375090 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.376194 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/29a2c0ca-4d4d-4a05-a4bb-96b05720f59f-etcd-service-ca\") pod \"etcd-operator-b45778765-9kg95\" (UID: \"29a2c0ca-4d4d-4a05-a4bb-96b05720f59f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-9kg95" Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.376197 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-9qfvq"] Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.398199 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/29a2c0ca-4d4d-4a05-a4bb-96b05720f59f-serving-cert\") pod \"etcd-operator-b45778765-9kg95\" (UID: \"29a2c0ca-4d4d-4a05-a4bb-96b05720f59f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-9kg95" Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.400895 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.405671 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/adaf4de5-0b3c-4b48-a232-45157864a0f7-config-volume\") pod \"collect-profiles-29396580-xlrxm\" (UID: \"adaf4de5-0b3c-4b48-a232-45157864a0f7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396580-xlrxm" Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.432354 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:16 crc kubenswrapper[4853]: 
I1122 07:13:16.442818 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Nov 22 07:13:16 crc kubenswrapper[4853]: E1122 07:13:16.445250 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:16.945203794 +0000 UTC m=+195.785826420 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.445542 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bj5nl\" (UniqueName: \"kubernetes.io/projected/f5873a23-2127-4288-8d68-6d12756368b5-kube-api-access-bj5nl\") pod \"multus-admission-controller-857f4d67dd-h486l\" (UID: \"f5873a23-2127-4288-8d68-6d12756368b5\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-h486l" Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.446261 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a046e1a-9a2f-472a-909e-12fdaa9db2f1-config\") pod \"service-ca-operator-777779d784-cjq85\" (UID: \"9a046e1a-9a2f-472a-909e-12fdaa9db2f1\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-cjq85" Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.452163 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.457051 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/913eeba3-a280-4ffa-a61a-febff59fcc2e-config\") pod \"kube-apiserver-operator-766d6c64bb-t2flg\" (UID: \"913eeba3-a280-4ffa-a61a-febff59fcc2e\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-t2flg" Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.459301 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.470878 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/541af556-5dce-45ed-bf9e-f6faf6b146ca-installation-pull-secrets\") pod \"image-registry-697d97f7c8-2p6qj\" (UID: \"541af556-5dce-45ed-bf9e-f6faf6b146ca\") " pod="openshift-image-registry/image-registry-697d97f7c8-2p6qj" Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.484198 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.486568 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6c313448-9287-4014-b36e-ae4e14b9ee4e-service-ca-bundle\") pod \"router-default-5444994796-h2jnh\" (UID: 
\"6c313448-9287-4014-b36e-ae4e14b9ee4e\") " pod="openshift-ingress/router-default-5444994796-h2jnh" Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.490116 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-zxl69"] Nov 22 07:13:16 crc kubenswrapper[4853]: W1122 07:13:16.512764 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4be3f473_ecf9_464d_b363_f28c82456652.slice/crio-bd09b87ccaf43f6ae7b5f6cf04150967e25b47c4a8a7676da84bef455ac37aec WatchSource:0}: Error finding container bd09b87ccaf43f6ae7b5f6cf04150967e25b47c4a8a7676da84bef455ac37aec: Status 404 returned error can't find the container with id bd09b87ccaf43f6ae7b5f6cf04150967e25b47c4a8a7676da84bef455ac37aec Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.515697 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.538457 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9a046e1a-9a2f-472a-909e-12fdaa9db2f1-serving-cert\") pod \"service-ca-operator-777779d784-cjq85\" (UID: \"9a046e1a-9a2f-472a-909e-12fdaa9db2f1\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-cjq85" Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.540597 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2p6qj\" (UID: \"541af556-5dce-45ed-bf9e-f6faf6b146ca\") " pod="openshift-image-registry/image-registry-697d97f7c8-2p6qj" Nov 22 07:13:16 crc kubenswrapper[4853]: E1122 07:13:16.541686 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:17.041631373 +0000 UTC m=+195.882254009 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2p6qj" (UID: "541af556-5dce-45ed-bf9e-f6faf6b146ca") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.557582 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fl227\" (UniqueName: \"kubernetes.io/projected/23e292bd-15e6-4fc0-835e-2871bc0e9e8e-kube-api-access-fl227\") pod \"machine-config-operator-74547568cd-22nw6\" (UID: \"23e292bd-15e6-4fc0-835e-2871bc0e9e8e\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-22nw6" Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.560291 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-rh6fb"] Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.562482 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.574737 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/2eb41230-c219-4968-a240-36db37f3d772-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-hktm5\" (UID: \"2eb41230-c219-4968-a240-36db37f3d772\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-hktm5" Nov 22 07:13:16 crc kubenswrapper[4853]: W1122 07:13:16.586332 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod065d5bdc_7e13_4a03_aa2d_5b7dd3b3938c.slice/crio-c39d03a9a6a6fbd4da0633c707bc60ff605ff5d6437678af159b81b861f8dbc3 WatchSource:0}: Error finding container c39d03a9a6a6fbd4da0633c707bc60ff605ff5d6437678af159b81b861f8dbc3: Status 404 returned error can't find the container with id c39d03a9a6a6fbd4da0633c707bc60ff605ff5d6437678af159b81b861f8dbc3 Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.621441 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w2rbq\" (UniqueName: \"kubernetes.io/projected/541af556-5dce-45ed-bf9e-f6faf6b146ca-kube-api-access-w2rbq\") pod \"image-registry-697d97f7c8-2p6qj\" (UID: \"541af556-5dce-45ed-bf9e-f6faf6b146ca\") " pod="openshift-image-registry/image-registry-697d97f7c8-2p6qj" Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.632212 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/541af556-5dce-45ed-bf9e-f6faf6b146ca-bound-sa-token\") pod \"image-registry-697d97f7c8-2p6qj\" (UID: \"541af556-5dce-45ed-bf9e-f6faf6b146ca\") " pod="openshift-image-registry/image-registry-697d97f7c8-2p6qj" Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.643199 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:16 crc 
kubenswrapper[4853]: E1122 07:13:16.643432 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:17.143354965 +0000 UTC m=+195.983977591 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.644040 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2p6qj\" (UID: \"541af556-5dce-45ed-bf9e-f6faf6b146ca\") " pod="openshift-image-registry/image-registry-697d97f7c8-2p6qj" Nov 22 07:13:16 crc kubenswrapper[4853]: E1122 07:13:16.644451 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:17.144435954 +0000 UTC m=+195.985058580 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2p6qj" (UID: "541af556-5dce-45ed-bf9e-f6faf6b146ca") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.654372 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.654875 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xhvjc\" (UniqueName: \"kubernetes.io/projected/472b3cc8-386e-4828-a725-263057fb299b-kube-api-access-xhvjc\") pod \"packageserver-d55dfcdfc-7jnds\" (UID: \"472b3cc8-386e-4828-a725-263057fb299b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7jnds" Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.669599 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/6c313448-9287-4014-b36e-ae4e14b9ee4e-stats-auth\") pod \"router-default-5444994796-h2jnh\" (UID: \"6c313448-9287-4014-b36e-ae4e14b9ee4e\") " pod="openshift-ingress/router-default-5444994796-h2jnh" Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.672141 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.677046 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/29a2c0ca-4d4d-4a05-a4bb-96b05720f59f-etcd-ca\") pod \"etcd-operator-b45778765-9kg95\" (UID: 
\"29a2c0ca-4d4d-4a05-a4bb-96b05720f59f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-9kg95" Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.678418 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-flwkb"] Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.711803 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vg24r\" (UniqueName: \"kubernetes.io/projected/adaf4de5-0b3c-4b48-a232-45157864a0f7-kube-api-access-vg24r\") pod \"collect-profiles-29396580-xlrxm\" (UID: \"adaf4de5-0b3c-4b48-a232-45157864a0f7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396580-xlrxm" Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.712999 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.717403 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/23e292bd-15e6-4fc0-835e-2871bc0e9e8e-images\") pod \"machine-config-operator-74547568cd-22nw6\" (UID: \"23e292bd-15e6-4fc0-835e-2871bc0e9e8e\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-22nw6" Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.734181 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.743572 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/541af556-5dce-45ed-bf9e-f6faf6b146ca-registry-tls\") pod \"image-registry-697d97f7c8-2p6qj\" (UID: \"541af556-5dce-45ed-bf9e-f6faf6b146ca\") " pod="openshift-image-registry/image-registry-697d97f7c8-2p6qj" Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.745588 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:16 crc kubenswrapper[4853]: E1122 07:13:16.746271 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:17.246240218 +0000 UTC m=+196.086862844 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.753041 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.763290 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/7e73b9e6-c1a8-411b-9360-32dc388d76f1-signing-key\") pod \"service-ca-9c57cc56f-n6vz6\" (UID: \"7e73b9e6-c1a8-411b-9360-32dc388d76f1\") " pod="openshift-service-ca/service-ca-9c57cc56f-n6vz6" Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.774810 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.783592 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/29a2c0ca-4d4d-4a05-a4bb-96b05720f59f-etcd-client\") pod \"etcd-operator-b45778765-9kg95\" (UID: \"29a2c0ca-4d4d-4a05-a4bb-96b05720f59f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-9kg95" Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.796029 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.802224 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/6c313448-9287-4014-b36e-ae4e14b9ee4e-default-certificate\") pod \"router-default-5444994796-h2jnh\" (UID: \"6c313448-9287-4014-b36e-ae4e14b9ee4e\") " pod="openshift-ingress/router-default-5444994796-h2jnh" Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.812506 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.827900 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/f5873a23-2127-4288-8d68-6d12756368b5-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-h486l\" (UID: \"f5873a23-2127-4288-8d68-6d12756368b5\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-h486l" Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.850311 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2p6qj\" (UID: \"541af556-5dce-45ed-bf9e-f6faf6b146ca\") " pod="openshift-image-registry/image-registry-697d97f7c8-2p6qj" Nov 22 07:13:16 crc kubenswrapper[4853]: E1122 07:13:16.856557 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:17.356523702 +0000 UTC m=+196.197146339 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2p6qj" (UID: "541af556-5dce-45ed-bf9e-f6faf6b146ca") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.890562 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2smdw\" (UniqueName: \"kubernetes.io/projected/2eb41230-c219-4968-a240-36db37f3d772-kube-api-access-2smdw\") pod \"control-plane-machine-set-operator-78cbb6b69f-hktm5\" (UID: \"2eb41230-c219-4968-a240-36db37f3d772\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-hktm5" Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.894850 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.900591 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/7e73b9e6-c1a8-411b-9360-32dc388d76f1-signing-cabundle\") pod \"service-ca-9c57cc56f-n6vz6\" (UID: \"7e73b9e6-c1a8-411b-9360-32dc388d76f1\") " pod="openshift-service-ca/service-ca-9c57cc56f-n6vz6" Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.916616 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.924844 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/23e292bd-15e6-4fc0-835e-2871bc0e9e8e-proxy-tls\") pod \"machine-config-operator-74547568cd-22nw6\" (UID: \"23e292bd-15e6-4fc0-835e-2871bc0e9e8e\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-22nw6" Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.934776 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.942490 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/913eeba3-a280-4ffa-a61a-febff59fcc2e-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-t2flg\" (UID: \"913eeba3-a280-4ffa-a61a-febff59fcc2e\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-t2flg" Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.951009 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7jnds" Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.951183 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:16 crc kubenswrapper[4853]: E1122 07:13:16.951463 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:17.451425049 +0000 UTC m=+196.292047675 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.951910 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2p6qj\" (UID: \"541af556-5dce-45ed-bf9e-f6faf6b146ca\") " pod="openshift-image-registry/image-registry-697d97f7c8-2p6qj" Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.952419 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Nov 22 07:13:16 crc kubenswrapper[4853]: E1122 07:13:16.952541 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:17.452530799 +0000 UTC m=+196.293153425 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2p6qj" (UID: "541af556-5dce-45ed-bf9e-f6faf6b146ca") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:16 crc kubenswrapper[4853]: I1122 07:13:16.952643 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rv68m" Nov 22 07:13:16 crc kubenswrapper[4853]: E1122 07:13:16.979018 4853 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0d10e537_edf1_40b9_a8a7_038237e48834.slice/crio-conmon-e92423b75fb33d22c9cf18a7ba0ae4ab76860d76efef6ab1ad11f4cd18ff353c.scope\": RecentStats: unable to find data in memory cache]" Nov 22 07:13:17 crc kubenswrapper[4853]: I1122 07:13:17.018993 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Nov 22 07:13:17 crc kubenswrapper[4853]: I1122 07:13:17.022599 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-klj4x\" (UniqueName: \"kubernetes.io/projected/a1a696f2-274f-4b1c-9212-fc280920f69f-kube-api-access-klj4x\") pod \"dns-default-99wl6\" (UID: \"a1a696f2-274f-4b1c-9212-fc280920f69f\") " pod="openshift-dns/dns-default-99wl6" Nov 22 07:13:17 crc kubenswrapper[4853]: I1122 07:13:17.025156 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-cfkqt"] Nov 22 07:13:17 crc kubenswrapper[4853]: W1122 07:13:17.027952 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod42e0f31e_1622_4388_9852_f22966d156f4.slice/crio-51fe1426fc641bdfe831ba9ec5a48c0880e7fc654d8d47f04c2f399b59f33b22 WatchSource:0}: Error finding container 51fe1426fc641bdfe831ba9ec5a48c0880e7fc654d8d47f04c2f399b59f33b22: Status 404 returned error can't find the container with id 51fe1426fc641bdfe831ba9ec5a48c0880e7fc654d8d47f04c2f399b59f33b22 Nov 22 07:13:17 crc kubenswrapper[4853]: I1122 07:13:17.030316 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/9546ad13-1c91-495a-865b-b3396a94e17e-node-bootstrap-token\") pod \"machine-config-server-bnk7x\" (UID: \"9546ad13-1c91-495a-865b-b3396a94e17e\") " pod="openshift-machine-config-operator/machine-config-server-bnk7x" Nov 22 07:13:17 crc kubenswrapper[4853]: I1122 07:13:17.038280 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2stwm"] Nov 22 07:13:17 crc kubenswrapper[4853]: I1122 07:13:17.044490 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-p5l64"] Nov 22 07:13:17 crc kubenswrapper[4853]: I1122 07:13:17.049339 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-zqjqq"] Nov 22 07:13:17 crc kubenswrapper[4853]: I1122 07:13:17.053684 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:17 crc kubenswrapper[4853]: I1122 07:13:17.054353 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Nov 22 07:13:17 crc kubenswrapper[4853]: E1122 07:13:17.056603 4853 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:17.556572274 +0000 UTC m=+196.397194920 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:17 crc kubenswrapper[4853]: I1122 07:13:17.062393 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/9546ad13-1c91-495a-865b-b3396a94e17e-certs\") pod \"machine-config-server-bnk7x\" (UID: \"9546ad13-1c91-495a-865b-b3396a94e17e\") " pod="openshift-machine-config-operator/machine-config-server-bnk7x" Nov 22 07:13:17 crc kubenswrapper[4853]: I1122 07:13:17.067607 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s7klm\" (UniqueName: \"kubernetes.io/projected/8c51e400-95dc-4b1b-ab28-e3f2e5780758-kube-api-access-s7klm\") pod \"csi-hostpathplugin-w6jpc\" (UID: \"8c51e400-95dc-4b1b-ab28-e3f2e5780758\") " pod="hostpath-provisioner/csi-hostpathplugin-w6jpc" Nov 22 07:13:17 crc kubenswrapper[4853]: W1122 07:13:17.077002 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod63e8cbe0_5a31_49f6_bd66_f04a2eb641ec.slice/crio-b748201fe9b9916cfeba73f53b5cd6cc38595d136562f50fb376c16a85f649d5 WatchSource:0}: Error finding container b748201fe9b9916cfeba73f53b5cd6cc38595d136562f50fb376c16a85f649d5: Status 404 returned error can't find the container with id b748201fe9b9916cfeba73f53b5cd6cc38595d136562f50fb376c16a85f649d5 Nov 22 07:13:17 crc kubenswrapper[4853]: I1122 07:13:17.100767 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6wsbn\" (UniqueName: \"kubernetes.io/projected/9546ad13-1c91-495a-865b-b3396a94e17e-kube-api-access-6wsbn\") pod \"machine-config-server-bnk7x\" (UID: \"9546ad13-1c91-495a-865b-b3396a94e17e\") " pod="openshift-machine-config-operator/machine-config-server-bnk7x" Nov 22 07:13:17 crc kubenswrapper[4853]: I1122 07:13:17.102670 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-99wl6" Nov 22 07:13:17 crc kubenswrapper[4853]: I1122 07:13:17.113351 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Nov 22 07:13:17 crc kubenswrapper[4853]: I1122 07:13:17.124992 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-w6jpc" Nov 22 07:13:17 crc kubenswrapper[4853]: I1122 07:13:17.138791 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Nov 22 07:13:17 crc kubenswrapper[4853]: I1122 07:13:17.140905 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-jfgsj" Nov 22 07:13:17 crc kubenswrapper[4853]: I1122 07:13:17.151360 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/87d08723-fac8-48ca-9255-848a0e659721-cert\") pod \"ingress-canary-t9fmp\" (UID: \"87d08723-fac8-48ca-9255-848a0e659721\") " pod="openshift-ingress-canary/ingress-canary-t9fmp" Nov 22 07:13:17 crc kubenswrapper[4853]: I1122 07:13:17.152437 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Nov 22 07:13:17 crc kubenswrapper[4853]: E1122 07:13:17.159935 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:17.659920159 +0000 UTC m=+196.500542785 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2p6qj" (UID: "541af556-5dce-45ed-bf9e-f6faf6b146ca") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:17 crc kubenswrapper[4853]: I1122 07:13:17.159574 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2p6qj\" (UID: \"541af556-5dce-45ed-bf9e-f6faf6b146ca\") " pod="openshift-image-registry/image-registry-697d97f7c8-2p6qj" Nov 22 07:13:17 crc kubenswrapper[4853]: I1122 07:13:17.167676 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bzv6w"] Nov 22 07:13:17 crc kubenswrapper[4853]: I1122 07:13:17.185724 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-djgfn"] Nov 22 07:13:17 crc kubenswrapper[4853]: I1122 07:13:17.191851 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Nov 22 07:13:17 crc kubenswrapper[4853]: I1122 07:13:17.195717 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-696ts"] Nov 22 07:13:17 crc kubenswrapper[4853]: I1122 07:13:17.196396 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Nov 22 07:13:17 crc kubenswrapper[4853]: I1122 07:13:17.197380 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-bqk2r"] Nov 22 07:13:17 crc kubenswrapper[4853]: I1122 07:13:17.198276 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-gwwg5" Nov 22 07:13:17 crc kubenswrapper[4853]: I1122 07:13:17.212998 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Nov 22 07:13:17 crc kubenswrapper[4853]: W1122 07:13:17.229648 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod83b3203a_f1e8_4d8e_8c42_4932026537ee.slice/crio-2e2adc05e6e61cac3d8b0f3da6b4d67f3df5e425260d18da73867447899f8df9 WatchSource:0}: Error finding container 2e2adc05e6e61cac3d8b0f3da6b4d67f3df5e425260d18da73867447899f8df9: Status 404 returned error can't find the container with id 2e2adc05e6e61cac3d8b0f3da6b4d67f3df5e425260d18da73867447899f8df9 Nov 22 07:13:17 crc kubenswrapper[4853]: I1122 07:13:17.234379 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 22 07:13:17 crc kubenswrapper[4853]: I1122 07:13:17.240030 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396580-xlrxm" Nov 22 07:13:17 crc kubenswrapper[4853]: I1122 07:13:17.250424 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rv68m"] Nov 22 07:13:17 crc kubenswrapper[4853]: I1122 07:13:17.252273 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Nov 22 07:13:17 crc kubenswrapper[4853]: I1122 07:13:17.261985 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/913eeba3-a280-4ffa-a61a-febff59fcc2e-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-t2flg\" (UID: \"913eeba3-a280-4ffa-a61a-febff59fcc2e\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-t2flg" Nov 22 07:13:17 crc kubenswrapper[4853]: I1122 07:13:17.274078 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Nov 22 07:13:17 crc kubenswrapper[4853]: I1122 07:13:17.282051 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:17 crc kubenswrapper[4853]: E1122 07:13:17.282497 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:17.782474958 +0000 UTC m=+196.623097584 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:17 crc kubenswrapper[4853]: I1122 07:13:17.298654 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Nov 22 07:13:17 crc kubenswrapper[4853]: I1122 07:13:17.300657 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7jnds"] Nov 22 07:13:17 crc kubenswrapper[4853]: I1122 07:13:17.300939 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-h486l" Nov 22 07:13:17 crc kubenswrapper[4853]: I1122 07:13:17.312725 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Nov 22 07:13:17 crc kubenswrapper[4853]: W1122 07:13:17.313933 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod606305dc_db05_45e6_8409_fdb1ca8ca988.slice/crio-2c8a607544e712d537ee4a301826ae557b9646623c1f15ead2e27347ae3ed63b WatchSource:0}: Error finding container 2c8a607544e712d537ee4a301826ae557b9646623c1f15ead2e27347ae3ed63b: Status 404 returned error can't find the container with id 2c8a607544e712d537ee4a301826ae557b9646623c1f15ead2e27347ae3ed63b Nov 22 07:13:17 crc kubenswrapper[4853]: I1122 07:13:17.317200 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-hktm5" Nov 22 07:13:17 crc kubenswrapper[4853]: I1122 07:13:17.353210 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Nov 22 07:13:17 crc kubenswrapper[4853]: I1122 07:13:17.353601 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2stwm" event={"ID":"63e8cbe0-5a31-49f6-bd66-f04a2eb641ec","Type":"ContainerStarted","Data":"b748201fe9b9916cfeba73f53b5cd6cc38595d136562f50fb376c16a85f649d5"} Nov 22 07:13:17 crc kubenswrapper[4853]: I1122 07:13:17.357289 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-22nw6" Nov 22 07:13:17 crc kubenswrapper[4853]: I1122 07:13:17.366051 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-flwkb" event={"ID":"5aa7d496-b98f-4b8f-8974-1bd30f617280","Type":"ContainerStarted","Data":"e04efb44aaf04a5355267da42910e189180785d2ad5d002ff023fa11096df91f"} Nov 22 07:13:17 crc kubenswrapper[4853]: I1122 07:13:17.366101 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-flwkb" event={"ID":"5aa7d496-b98f-4b8f-8974-1bd30f617280","Type":"ContainerStarted","Data":"ab7e9f4bae50b0c548676075ffbb3693d8d3ed267f32debad7c0ed7769a4d828"} Nov 22 07:13:17 crc kubenswrapper[4853]: I1122 07:13:17.366831 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-flwkb" Nov 22 07:13:17 crc kubenswrapper[4853]: I1122 07:13:17.373583 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Nov 22 07:13:17 crc kubenswrapper[4853]: I1122 07:13:17.374659 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-p5l64" event={"ID":"0fdfc9f2-e63f-48f4-89ad-94ef8b642d04","Type":"ContainerStarted","Data":"d1a570d7d0fcbd082657ae94277267c71c703e315462d5203da50e815875681f"} Nov 22 07:13:17 crc kubenswrapper[4853]: I1122 07:13:17.387527 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2p6qj\" (UID: \"541af556-5dce-45ed-bf9e-f6faf6b146ca\") " pod="openshift-image-registry/image-registry-697d97f7c8-2p6qj" Nov 22 07:13:17 crc kubenswrapper[4853]: E1122 07:13:17.388255 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:17.888207999 +0000 UTC m=+196.728830615 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2p6qj" (UID: "541af556-5dce-45ed-bf9e-f6faf6b146ca") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:17 crc kubenswrapper[4853]: I1122 07:13:17.387999 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dvcg6" event={"ID":"0d10e537-edf1-40b9-a8a7-038237e48834","Type":"ContainerDied","Data":"e92423b75fb33d22c9cf18a7ba0ae4ab76860d76efef6ab1ad11f4cd18ff353c"} Nov 22 07:13:17 crc kubenswrapper[4853]: I1122 07:13:17.387971 4853 generic.go:334] "Generic (PLEG): container finished" podID="0d10e537-edf1-40b9-a8a7-038237e48834" containerID="e92423b75fb33d22c9cf18a7ba0ae4ab76860d76efef6ab1ad11f4cd18ff353c" exitCode=0 Nov 22 07:13:17 crc kubenswrapper[4853]: I1122 07:13:17.391056 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-99wl6"] Nov 22 07:13:17 crc kubenswrapper[4853]: I1122 07:13:17.393846 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Nov 22 07:13:17 crc kubenswrapper[4853]: I1122 07:13:17.404142 4853 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-flwkb container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.42:8443/healthz\": dial tcp 10.217.0.42:8443: connect: connection refused" start-of-body= Nov 22 07:13:17 crc kubenswrapper[4853]: I1122 07:13:17.404222 4853 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-flwkb" podUID="5aa7d496-b98f-4b8f-8974-1bd30f617280" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.42:8443/healthz\": dial tcp 10.217.0.42:8443: connect: connection refused" Nov 22 07:13:17 crc kubenswrapper[4853]: I1122 07:13:17.408883 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wzbj5" event={"ID":"2715796f-e4b0-4400-a02c-a485171a9858","Type":"ContainerStarted","Data":"b1fcca6c228eaff3d4ad42fc3831374946c163be27f3ba8848a7ca59cc848ec5"} Nov 22 07:13:17 crc kubenswrapper[4853]: I1122 07:13:17.413198 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ctzjf\" (UniqueName: \"kubernetes.io/projected/7e73b9e6-c1a8-411b-9360-32dc388d76f1-kube-api-access-ctzjf\") pod \"service-ca-9c57cc56f-n6vz6\" (UID: \"7e73b9e6-c1a8-411b-9360-32dc388d76f1\") " pod="openshift-service-ca/service-ca-9c57cc56f-n6vz6" Nov 22 07:13:17 crc kubenswrapper[4853]: I1122 07:13:17.414650 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Nov 22 07:13:17 crc kubenswrapper[4853]: I1122 07:13:17.421878 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6k86b\" (UniqueName: \"kubernetes.io/projected/29a2c0ca-4d4d-4a05-a4bb-96b05720f59f-kube-api-access-6k86b\") pod \"etcd-operator-b45778765-9kg95\" (UID: \"29a2c0ca-4d4d-4a05-a4bb-96b05720f59f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-9kg95" Nov 22 07:13:17 crc kubenswrapper[4853]: I1122 
07:13:17.426401 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-rh6fb" event={"ID":"065d5bdc-7e13-4a03-aa2d-5b7dd3b3938c","Type":"ContainerStarted","Data":"5bc58236b1ede6d73c67192be04fe3c145c99d6bdd183782a145c32364a3ad35"} Nov 22 07:13:17 crc kubenswrapper[4853]: I1122 07:13:17.426444 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-rh6fb" event={"ID":"065d5bdc-7e13-4a03-aa2d-5b7dd3b3938c","Type":"ContainerStarted","Data":"c39d03a9a6a6fbd4da0633c707bc60ff605ff5d6437678af159b81b861f8dbc3"} Nov 22 07:13:17 crc kubenswrapper[4853]: I1122 07:13:17.431921 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-hpb7j" event={"ID":"bcd72804-cd09-4ec3-ae4a-f539958eb90c","Type":"ContainerStarted","Data":"ff27af9724977c26dd4189ff352ef6774985b9d6fcbf927e3fc282d59dafe23b"} Nov 22 07:13:17 crc kubenswrapper[4853]: I1122 07:13:17.433115 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-hpb7j" Nov 22 07:13:17 crc kubenswrapper[4853]: I1122 07:13:17.433372 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Nov 22 07:13:17 crc kubenswrapper[4853]: I1122 07:13:17.442623 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rv68m" event={"ID":"606305dc-db05-45e6-8409-fdb1ca8ca988","Type":"ContainerStarted","Data":"2c8a607544e712d537ee4a301826ae557b9646623c1f15ead2e27347ae3ed63b"} Nov 22 07:13:17 crc kubenswrapper[4853]: I1122 07:13:17.445904 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gfmt7\" (UniqueName: \"kubernetes.io/projected/6c313448-9287-4014-b36e-ae4e14b9ee4e-kube-api-access-gfmt7\") pod \"router-default-5444994796-h2jnh\" (UID: \"6c313448-9287-4014-b36e-ae4e14b9ee4e\") " pod="openshift-ingress/router-default-5444994796-h2jnh" Nov 22 07:13:17 crc kubenswrapper[4853]: I1122 07:13:17.451548 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-zxl69" event={"ID":"4be3f473-ecf9-464d-b363-f28c82456652","Type":"ContainerStarted","Data":"bd431a1c328486e05b3c8473f7d35a07de3d1c633f10367996337e33fd72dd87"} Nov 22 07:13:17 crc kubenswrapper[4853]: I1122 07:13:17.451607 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-zxl69" event={"ID":"4be3f473-ecf9-464d-b363-f28c82456652","Type":"ContainerStarted","Data":"bd09b87ccaf43f6ae7b5f6cf04150967e25b47c4a8a7676da84bef455ac37aec"} Nov 22 07:13:17 crc kubenswrapper[4853]: I1122 07:13:17.453592 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Nov 22 07:13:17 crc kubenswrapper[4853]: I1122 07:13:17.456494 4853 patch_prober.go:28] interesting pod/downloads-7954f5f757-hpb7j container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.24:8080/\": dial tcp 10.217.0.24:8080: connect: connection refused" start-of-body= Nov 22 07:13:17 crc kubenswrapper[4853]: I1122 07:13:17.456588 4853 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-hpb7j" podUID="bcd72804-cd09-4ec3-ae4a-f539958eb90c" 
containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.24:8080/\": dial tcp 10.217.0.24:8080: connect: connection refused" Nov 22 07:13:17 crc kubenswrapper[4853]: I1122 07:13:17.456825 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-5pn5x" event={"ID":"e118bf40-4574-410f-bb2f-b5eb601974e5","Type":"ContainerStarted","Data":"6faa10402e08a67a8eab847ba520b1d813335e1be9307d6dfb3d8ee1f96fc97a"} Nov 22 07:13:17 crc kubenswrapper[4853]: I1122 07:13:17.456869 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-5pn5x" event={"ID":"e118bf40-4574-410f-bb2f-b5eb601974e5","Type":"ContainerStarted","Data":"603c58d815c2a6177fc439c9063f4b94953ce053c6716892f3e90c89793d01f2"} Nov 22 07:13:17 crc kubenswrapper[4853]: I1122 07:13:17.465438 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-5nds5" event={"ID":"6d3c61d5-518d-443e-beb3-a0bf27a07be4","Type":"ContainerStarted","Data":"91182026ad1b4e92eaba8dd93f41008201a720fdd63ef719be0d56786dbe22d7"} Nov 22 07:13:17 crc kubenswrapper[4853]: I1122 07:13:17.465664 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dznvk\" (UniqueName: \"kubernetes.io/projected/9a046e1a-9a2f-472a-909e-12fdaa9db2f1-kube-api-access-dznvk\") pod \"service-ca-operator-777779d784-cjq85\" (UID: \"9a046e1a-9a2f-472a-909e-12fdaa9db2f1\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-cjq85" Nov 22 07:13:17 crc kubenswrapper[4853]: I1122 07:13:17.476089 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Nov 22 07:13:17 crc kubenswrapper[4853]: I1122 07:13:17.486187 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-t2flg" Nov 22 07:13:17 crc kubenswrapper[4853]: I1122 07:13:17.490823 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-9qfvq" event={"ID":"b512d3e0-dee2-48d3-87e3-b05fb2cf8ed7","Type":"ContainerStarted","Data":"2b15c550b629a8fcf44d4c427eb867445e535e0b1a7931aab022d30e1cc55994"} Nov 22 07:13:17 crc kubenswrapper[4853]: I1122 07:13:17.490888 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-9qfvq" event={"ID":"b512d3e0-dee2-48d3-87e3-b05fb2cf8ed7","Type":"ContainerStarted","Data":"0904dec92d1299cc3aef2e71988c4159b63191d58c04c0a4c4636177c6081a86"} Nov 22 07:13:17 crc kubenswrapper[4853]: I1122 07:13:17.492234 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-9qfvq" Nov 22 07:13:17 crc kubenswrapper[4853]: I1122 07:13:17.493801 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:17 crc kubenswrapper[4853]: E1122 07:13:17.495534 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:17.995512312 +0000 UTC m=+196.836134938 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:17 crc kubenswrapper[4853]: I1122 07:13:17.498632 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-696ts" event={"ID":"83b3203a-f1e8-4d8e-8c42-4932026537ee","Type":"ContainerStarted","Data":"2e2adc05e6e61cac3d8b0f3da6b4d67f3df5e425260d18da73867447899f8df9"} Nov 22 07:13:17 crc kubenswrapper[4853]: I1122 07:13:17.504988 4853 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-9qfvq container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.39:6443/healthz\": dial tcp 10.217.0.39:6443: connect: connection refused" start-of-body= Nov 22 07:13:17 crc kubenswrapper[4853]: I1122 07:13:17.505052 4853 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-9qfvq" podUID="b512d3e0-dee2-48d3-87e3-b05fb2cf8ed7" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.39:6443/healthz\": dial tcp 10.217.0.39:6443: connect: connection refused" Nov 22 07:13:17 crc kubenswrapper[4853]: I1122 07:13:17.514072 4853 generic.go:334] "Generic (PLEG): container finished" podID="05b7fb71-56a6-4875-a680-995a1a2194d6" containerID="383d5171fd5e31ea022d3c9e7bb7d3d1e63d4f6be8f587d18ac88129cbd56143" 
exitCode=0 Nov 22 07:13:17 crc kubenswrapper[4853]: I1122 07:13:17.514262 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-xmnqz" event={"ID":"05b7fb71-56a6-4875-a680-995a1a2194d6","Type":"ContainerDied","Data":"383d5171fd5e31ea022d3c9e7bb7d3d1e63d4f6be8f587d18ac88129cbd56143"} Nov 22 07:13:17 crc kubenswrapper[4853]: I1122 07:13:17.517791 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Nov 22 07:13:17 crc kubenswrapper[4853]: I1122 07:13:17.523798 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-bqk2r" event={"ID":"2454431f-55ed-4abb-b70f-9382007e9026","Type":"ContainerStarted","Data":"a06145cd36c7dba6839df9ffaaff00c0301adaaa9ee11ec7775492696c26f136"} Nov 22 07:13:17 crc kubenswrapper[4853]: W1122 07:13:17.524916 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda1a696f2_274f_4b1c_9212_fc280920f69f.slice/crio-6a85cb9be89d91d74f964297ce612038dc95f588b22b4972c1ca87647365f849 WatchSource:0}: Error finding container 6a85cb9be89d91d74f964297ce612038dc95f588b22b4972c1ca87647365f849: Status 404 returned error can't find the container with id 6a85cb9be89d91d74f964297ce612038dc95f588b22b4972c1ca87647365f849 Nov 22 07:13:17 crc kubenswrapper[4853]: I1122 07:13:17.536796 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Nov 22 07:13:17 crc kubenswrapper[4853]: I1122 07:13:17.537625 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-86bjn\" (UniqueName: \"kubernetes.io/projected/87d08723-fac8-48ca-9255-848a0e659721-kube-api-access-86bjn\") pod \"ingress-canary-t9fmp\" (UID: \"87d08723-fac8-48ca-9255-848a0e659721\") " pod="openshift-ingress-canary/ingress-canary-t9fmp" Nov 22 07:13:17 crc kubenswrapper[4853]: I1122 07:13:17.538424 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-pbzlk" event={"ID":"341b4f0c-09ee-4297-99c4-b8e6334de4ed","Type":"ContainerStarted","Data":"9bfa38c03fb659c0e4c62a0fee42e2f0216c3b4e7e0d7750f6a26be5367ecb71"} Nov 22 07:13:17 crc kubenswrapper[4853]: I1122 07:13:17.549316 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-bnk7x" Nov 22 07:13:17 crc kubenswrapper[4853]: I1122 07:13:17.552201 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-djgfn" event={"ID":"3960cd0a-8f4a-44de-a022-3858e1176a99","Type":"ContainerStarted","Data":"2265c5719f9f986b5ab5991fbbd17424af09df28a98655b7f80513d69127ac31"} Nov 22 07:13:17 crc kubenswrapper[4853]: I1122 07:13:17.575989 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Nov 22 07:13:17 crc kubenswrapper[4853]: I1122 07:13:17.576429 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-cll6l" event={"ID":"ad5235a6-36eb-42fc-8a56-d8464014b881","Type":"ContainerStarted","Data":"db9cec1cb62daa17f18bfd8fb546a3e3752f465bcc22aa2e2f8c9f92a4701383"} Nov 22 07:13:17 crc kubenswrapper[4853]: I1122 07:13:17.576474 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-cll6l" event={"ID":"ad5235a6-36eb-42fc-8a56-d8464014b881","Type":"ContainerStarted","Data":"a4fb7d045affe7c2942f3aadd705f000dfe86a88d3222ffc65a821ba995fcd96"} Nov 22 07:13:17 crc kubenswrapper[4853]: I1122 07:13:17.578440 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-w6jpc"] Nov 22 07:13:17 crc kubenswrapper[4853]: I1122 07:13:17.584359 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-n6vz6" Nov 22 07:13:17 crc kubenswrapper[4853]: I1122 07:13:17.605655 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2p6qj\" (UID: \"541af556-5dce-45ed-bf9e-f6faf6b146ca\") " pod="openshift-image-registry/image-registry-697d97f7c8-2p6qj" Nov 22 07:13:17 crc kubenswrapper[4853]: E1122 07:13:17.612606 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:18.112589731 +0000 UTC m=+196.953212357 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2p6qj" (UID: "541af556-5dce-45ed-bf9e-f6faf6b146ca") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:17 crc kubenswrapper[4853]: I1122 07:13:17.613474 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Nov 22 07:13:17 crc kubenswrapper[4853]: I1122 07:13:17.622020 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-cjq85" Nov 22 07:13:17 crc kubenswrapper[4853]: I1122 07:13:17.654870 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-jfgsj"] Nov 22 07:13:17 crc kubenswrapper[4853]: I1122 07:13:17.659618 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Nov 22 07:13:17 crc kubenswrapper[4853]: I1122 07:13:17.667003 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-9kg95" Nov 22 07:13:17 crc kubenswrapper[4853]: I1122 07:13:17.673431 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Nov 22 07:13:17 crc kubenswrapper[4853]: I1122 07:13:17.679665 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-cfkqt" event={"ID":"42e0f31e-1622-4388-9852-f22966d156f4","Type":"ContainerStarted","Data":"51fe1426fc641bdfe831ba9ec5a48c0880e7fc654d8d47f04c2f399b59f33b22"} Nov 22 07:13:17 crc kubenswrapper[4853]: I1122 07:13:17.680898 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-h2jnh" Nov 22 07:13:17 crc kubenswrapper[4853]: I1122 07:13:17.689450 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-zqjqq" event={"ID":"f75d360f-0e31-40e3-8b5d-d51934525efb","Type":"ContainerStarted","Data":"60f043fe1e18ed3b43cf9807f1840dfbbcab71bf4949a6c02094b425ac661786"} Nov 22 07:13:17 crc kubenswrapper[4853]: I1122 07:13:17.703403 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-dbd5p" event={"ID":"90eeaa0a-6939-40a5-821c-82579c812f3b","Type":"ContainerStarted","Data":"7c0aeffd6262219d5f14267be2e7a31a33d67e67ef534019a9ccd59a409653b1"} Nov 22 07:13:17 crc kubenswrapper[4853]: I1122 07:13:17.712510 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:17 crc kubenswrapper[4853]: E1122 07:13:17.713723 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:18.213550672 +0000 UTC m=+197.054173298 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 22 07:13:17 crc kubenswrapper[4853]: I1122 07:13:17.718693 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx"
Nov 22 07:13:17 crc kubenswrapper[4853]: I1122 07:13:17.724778 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-t9fmp"
Nov 22 07:13:17 crc kubenswrapper[4853]: I1122 07:13:17.746078 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bzv6w" event={"ID":"52bdc241-d70a-4a84-adc2-618dc90b8886","Type":"ContainerStarted","Data":"efcafd306c6408c8b155b85219eca28621f51227764a91dbb8becd886bb5fd67"}
Nov 22 07:13:17 crc kubenswrapper[4853]: I1122 07:13:17.763446 4853 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-9qkgc container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body=
Nov 22 07:13:17 crc kubenswrapper[4853]: I1122 07:13:17.763546 4853 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-9qkgc" podUID="90b00b61-4e40-4e08-b164-643608e91dd0" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused"
Nov 22 07:13:17 crc kubenswrapper[4853]: I1122 07:13:17.848486 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396580-xlrxm"]
Nov 22 07:13:17 crc kubenswrapper[4853]: I1122 07:13:17.852891 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2p6qj\" (UID: \"541af556-5dce-45ed-bf9e-f6faf6b146ca\") " pod="openshift-image-registry/image-registry-697d97f7c8-2p6qj"
Nov 22 07:13:17 crc kubenswrapper[4853]: E1122 07:13:17.856694 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:18.356671368 +0000 UTC m=+197.197293994 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2p6qj" (UID: "541af556-5dce-45ed-bf9e-f6faf6b146ca") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 22 07:13:17 crc kubenswrapper[4853]: I1122 07:13:17.959576 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-h486l"]
Nov 22 07:13:17 crc kubenswrapper[4853]: I1122 07:13:17.963272 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 22 07:13:17 crc kubenswrapper[4853]: E1122 07:13:17.963703 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:18.463683184 +0000 UTC m=+197.304305810 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 22 07:13:18 crc kubenswrapper[4853]: I1122 07:13:18.068729 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2p6qj\" (UID: \"541af556-5dce-45ed-bf9e-f6faf6b146ca\") " pod="openshift-image-registry/image-registry-697d97f7c8-2p6qj"
Nov 22 07:13:18 crc kubenswrapper[4853]: E1122 07:13:18.069092 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:18.569075725 +0000 UTC m=+197.409698351 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2p6qj" (UID: "541af556-5dce-45ed-bf9e-f6faf6b146ca") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 22 07:13:18 crc kubenswrapper[4853]: I1122 07:13:18.108558 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-hktm5"]
Nov 22 07:13:18 crc kubenswrapper[4853]: I1122 07:13:18.141357 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-t2flg"]
Nov 22 07:13:18 crc kubenswrapper[4853]: I1122 07:13:18.157543 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-9qkgc" podStartSLOduration=138.157520616 podStartE2EDuration="2m18.157520616s" podCreationTimestamp="2025-11-22 07:11:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:13:18.14809621 +0000 UTC m=+196.988718846" watchObservedRunningTime="2025-11-22 07:13:18.157520616 +0000 UTC m=+196.998143242"
Nov 22 07:13:18 crc kubenswrapper[4853]: I1122 07:13:18.182861 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 22 07:13:18 crc kubenswrapper[4853]: E1122 07:13:18.183287 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:18.683264265 +0000 UTC m=+197.523886891 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 22 07:13:18 crc kubenswrapper[4853]: W1122 07:13:18.215401 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2eb41230_c219_4968_a240_36db37f3d772.slice/crio-9c8d70f4e049ac2fd70b9337599a7751bb912316498dddbef43544b865d469ab WatchSource:0}: Error finding container 9c8d70f4e049ac2fd70b9337599a7751bb912316498dddbef43544b865d469ab: Status 404 returned error can't find the container with id 9c8d70f4e049ac2fd70b9337599a7751bb912316498dddbef43544b865d469ab
Nov 22 07:13:18 crc kubenswrapper[4853]: I1122 07:13:18.231469 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-dbd5p" podStartSLOduration=139.231439614 podStartE2EDuration="2m19.231439614s" podCreationTimestamp="2025-11-22 07:10:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:13:18.229339926 +0000 UTC m=+197.069962552" watchObservedRunningTime="2025-11-22 07:13:18.231439614 +0000 UTC m=+197.072062240"
Nov 22 07:13:18 crc kubenswrapper[4853]: I1122 07:13:18.240550 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-cjq85"]
Nov 22 07:13:18 crc kubenswrapper[4853]: I1122 07:13:18.287186 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2p6qj\" (UID: \"541af556-5dce-45ed-bf9e-f6faf6b146ca\") " pod="openshift-image-registry/image-registry-697d97f7c8-2p6qj"
Nov 22 07:13:18 crc kubenswrapper[4853]: E1122 07:13:18.288725 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:18.788706559 +0000 UTC m=+197.629329185 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2p6qj" (UID: "541af556-5dce-45ed-bf9e-f6faf6b146ca") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 22 07:13:18 crc kubenswrapper[4853]: I1122 07:13:18.298512 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-5pn5x" podStartSLOduration=138.298478254 podStartE2EDuration="2m18.298478254s" podCreationTimestamp="2025-11-22 07:11:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:13:18.284280538 +0000 UTC m=+197.124903184" watchObservedRunningTime="2025-11-22 07:13:18.298478254 +0000 UTC m=+197.139100900"
Nov 22 07:13:18 crc kubenswrapper[4853]: I1122 07:13:18.309657 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-pbzlk" podStartSLOduration=139.309627447 podStartE2EDuration="2m19.309627447s" podCreationTimestamp="2025-11-22 07:10:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:13:18.308548567 +0000 UTC m=+197.149171203" watchObservedRunningTime="2025-11-22 07:13:18.309627447 +0000 UTC m=+197.150250073"
Nov 22 07:13:18 crc kubenswrapper[4853]: I1122 07:13:18.382282 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wzbj5" podStartSLOduration=138.382259728 podStartE2EDuration="2m18.382259728s" podCreationTimestamp="2025-11-22 07:11:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:13:18.349424137 +0000 UTC m=+197.190046783" watchObservedRunningTime="2025-11-22 07:13:18.382259728 +0000 UTC m=+197.222882354"
Nov 22 07:13:18 crc kubenswrapper[4853]: I1122 07:13:18.389817 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 22 07:13:18 crc kubenswrapper[4853]: E1122 07:13:18.390415 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:18.890398989 +0000 UTC m=+197.731021615 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 22 07:13:18 crc kubenswrapper[4853]: I1122 07:13:18.398391 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-psplq"
Nov 22 07:13:18 crc kubenswrapper[4853]: I1122 07:13:18.482800 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-22nw6"]
Nov 22 07:13:18 crc kubenswrapper[4853]: I1122 07:13:18.492830 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2p6qj\" (UID: \"541af556-5dce-45ed-bf9e-f6faf6b146ca\") " pod="openshift-image-registry/image-registry-697d97f7c8-2p6qj"
Nov 22 07:13:18 crc kubenswrapper[4853]: E1122 07:13:18.498086 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:18.998052812 +0000 UTC m=+197.838675438 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2p6qj" (UID: "541af556-5dce-45ed-bf9e-f6faf6b146ca") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 22 07:13:18 crc kubenswrapper[4853]: I1122 07:13:18.519579 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kc8zd" podStartSLOduration=139.519554367 podStartE2EDuration="2m19.519554367s" podCreationTimestamp="2025-11-22 07:10:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:13:18.491216266 +0000 UTC m=+197.331838922" watchObservedRunningTime="2025-11-22 07:13:18.519554367 +0000 UTC m=+197.360177003"
Nov 22 07:13:18 crc kubenswrapper[4853]: I1122 07:13:18.536674 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-gwwg5"]
Nov 22 07:13:18 crc kubenswrapper[4853]: I1122 07:13:18.559317 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-cfkqt" podStartSLOduration=138.559296455 podStartE2EDuration="2m18.559296455s" podCreationTimestamp="2025-11-22 07:11:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:13:18.558273497 +0000 UTC m=+197.398896123" watchObservedRunningTime="2025-11-22 07:13:18.559296455 +0000 UTC m=+197.399919081"
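[Annotation, not part of the log: every MountVolume.MountDevice and UnmountVolume.TearDown retry in this window fails for the same root cause: the kubelet has no node-registered CSI plugin named kubevirt.io.hostpath-provisioner yet. The csi-hostpathplugin-w6jpc pod only reports ContainerStarted at 07:13:19 below, after which the driver can register. One way to check what a node has actually registered is to read the CSINode objects, which mirror the kubelet's plugin registrations. The sketch below is illustrative only; it uses client-go and assumes a working kubeconfig whose path is in the KUBECONFIG environment variable.]

```go
// Minimal sketch (assumed setup, not part of the log): list the CSI drivers
// each node has registered with its kubelet, via the CSINode API objects.
package main

import (
	"context"
	"fmt"
	"os"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumes KUBECONFIG points at a kubeconfig for the cluster in this log.
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// A driver missing from a node's CSINode spec is exactly the
	// "not found in the list of registered CSI drivers" condition above.
	csiNodes, err := clientset.StorageV1().CSINodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range csiNodes.Items {
		fmt.Printf("node %s:\n", n.Name)
		for _, d := range n.Spec.Drivers {
			fmt.Printf("  driver %s (node ID %s)\n", d.Name, d.NodeID)
		}
	}
}
```

[Until kubevirt.io.hostpath-provisioner appears in the crc node's driver list, every newCsiDriverClient lookup above is expected to fail and requeue.]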
Nov 22 07:13:18 crc kubenswrapper[4853]: I1122 07:13:18.596338 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 22 07:13:18 crc kubenswrapper[4853]: E1122 07:13:18.596821 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:19.096796374 +0000 UTC m=+197.937419000 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 22 07:13:18 crc kubenswrapper[4853]: I1122 07:13:18.697112 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-cll6l" podStartSLOduration=138.697089106 podStartE2EDuration="2m18.697089106s" podCreationTimestamp="2025-11-22 07:11:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:13:18.687690812 +0000 UTC m=+197.528313448" watchObservedRunningTime="2025-11-22 07:13:18.697089106 +0000 UTC m=+197.537711732"
Nov 22 07:13:18 crc kubenswrapper[4853]: I1122 07:13:18.698666 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2p6qj\" (UID: \"541af556-5dce-45ed-bf9e-f6faf6b146ca\") " pod="openshift-image-registry/image-registry-697d97f7c8-2p6qj"
Nov 22 07:13:18 crc kubenswrapper[4853]: E1122 07:13:18.699156 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:19.199136492 +0000 UTC m=+198.039759118 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2p6qj" (UID: "541af556-5dce-45ed-bf9e-f6faf6b146ca") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 22 07:13:18 crc kubenswrapper[4853]: I1122 07:13:18.794589 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-flwkb" podStartSLOduration=138.794566633 podStartE2EDuration="2m18.794566633s" podCreationTimestamp="2025-11-22 07:11:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:13:18.79333633 +0000 UTC m=+197.633958976" watchObservedRunningTime="2025-11-22 07:13:18.794566633 +0000 UTC m=+197.635189259"
Nov 22 07:13:18 crc kubenswrapper[4853]: I1122 07:13:18.807511 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 22 07:13:18 crc kubenswrapper[4853]: E1122 07:13:18.807948 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:19.307926205 +0000 UTC m=+198.148548831 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 22 07:13:18 crc kubenswrapper[4853]: I1122 07:13:18.818571 4853 generic.go:334] "Generic (PLEG): container finished" podID="2454431f-55ed-4abb-b70f-9382007e9026" containerID="eae26266594890a52f3fa9c873d0633c9ca6ac5dec2f1e60a7056ae59fd858e1" exitCode=0
Nov 22 07:13:18 crc kubenswrapper[4853]: I1122 07:13:18.818719 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-bqk2r" event={"ID":"2454431f-55ed-4abb-b70f-9382007e9026","Type":"ContainerDied","Data":"eae26266594890a52f3fa9c873d0633c9ca6ac5dec2f1e60a7056ae59fd858e1"}
Nov 22 07:13:18 crc kubenswrapper[4853]: I1122 07:13:18.822477 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-h486l" event={"ID":"f5873a23-2127-4288-8d68-6d12756368b5","Type":"ContainerStarted","Data":"e02f986b0f1ba579dfedc3222b5b51bbc4c6372b2a130ea02f5300825ffded6b"}
Nov 22 07:13:18 crc kubenswrapper[4853]: I1122 07:13:18.849518 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-djgfn" event={"ID":"3960cd0a-8f4a-44de-a022-3858e1176a99","Type":"ContainerStarted","Data":"a124f5c88a70e0708695b35747096a98feceb16692e796f9e6ebbd1ebf08441e"}
Nov 22 07:13:18 crc kubenswrapper[4853]: I1122 07:13:18.850367 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-psplq" podStartSLOduration=138.850337368 podStartE2EDuration="2m18.850337368s" podCreationTimestamp="2025-11-22 07:11:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:13:18.85006214 +0000 UTC m=+197.690684766" watchObservedRunningTime="2025-11-22 07:13:18.850337368 +0000 UTC m=+197.690959994"
Nov 22 07:13:18 crc kubenswrapper[4853]: I1122 07:13:18.881443 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-9kg95"]
Nov 22 07:13:18 crc kubenswrapper[4853]: I1122 07:13:18.901311 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-jfgsj" event={"ID":"e2ccfb3a-48b6-4367-abae-d5ac6d053f77","Type":"ContainerStarted","Data":"5174af98b56ea41c2bc4ad40318b63cd9f9ca38b966469f0068f1b362da0663f"}
Nov 22 07:13:18 crc kubenswrapper[4853]: I1122 07:13:18.903131 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-9qfvq" podStartSLOduration=139.903097649 podStartE2EDuration="2m19.903097649s" podCreationTimestamp="2025-11-22 07:10:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:13:18.902596006 +0000 UTC m=+197.743218652" watchObservedRunningTime="2025-11-22 07:13:18.903097649 +0000 UTC m=+197.743720275"
Nov 22 07:13:18 crc kubenswrapper[4853]: I1122 07:13:18.908928 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2p6qj\" (UID: \"541af556-5dce-45ed-bf9e-f6faf6b146ca\") " pod="openshift-image-registry/image-registry-697d97f7c8-2p6qj"
Nov 22 07:13:18 crc kubenswrapper[4853]: E1122 07:13:18.909441 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:19.409420142 +0000 UTC m=+198.250042768 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2p6qj" (UID: "541af556-5dce-45ed-bf9e-f6faf6b146ca") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 22 07:13:18 crc kubenswrapper[4853]: I1122 07:13:18.922216 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-99wl6" event={"ID":"a1a696f2-274f-4b1c-9212-fc280920f69f","Type":"ContainerStarted","Data":"6a85cb9be89d91d74f964297ce612038dc95f588b22b4972c1ca87647365f849"}
Nov 22 07:13:19 crc kubenswrapper[4853]: I1122 07:13:19.005611 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-cfkqt" event={"ID":"42e0f31e-1622-4388-9852-f22966d156f4","Type":"ContainerStarted","Data":"1ac03ed228038feeb6599a34c6eca0d3f26723d22976968c9137981bd43c6369"}
Nov 22 07:13:19 crc kubenswrapper[4853]: I1122 07:13:19.032920 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 22 07:13:19 crc kubenswrapper[4853]: E1122 07:13:19.033920 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:19.533897671 +0000 UTC m=+198.374520297 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 22 07:13:19 crc kubenswrapper[4853]: I1122 07:13:19.068635 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-cjq85" event={"ID":"9a046e1a-9a2f-472a-909e-12fdaa9db2f1","Type":"ContainerStarted","Data":"08e22264b5a3c03755d92ceaa44ec779d2cd1c4d6682a8566eb3bbbb55a6286b"}
Nov 22 07:13:19 crc kubenswrapper[4853]: I1122 07:13:19.093501 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7jnds" event={"ID":"472b3cc8-386e-4828-a725-263057fb299b","Type":"ContainerStarted","Data":"7c9ab2bf1c03c982f7253317074e909c32e837ba3c157255cb73c00738ff2efd"}
Nov 22 07:13:19 crc kubenswrapper[4853]: I1122 07:13:19.093556 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7jnds" event={"ID":"472b3cc8-386e-4828-a725-263057fb299b","Type":"ContainerStarted","Data":"2dbc0c9121552ec2a7e7096873586dc618f3706661ec4378ec0fe5288c79c5a7"}
Nov 22 07:13:19 crc kubenswrapper[4853]: I1122 07:13:19.095468 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7jnds"
Nov 22 07:13:19 crc kubenswrapper[4853]: I1122 07:13:19.117385 4853 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-7jnds container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.34:5443/healthz\": dial tcp 10.217.0.34:5443: connect: connection refused" start-of-body=
Nov 22 07:13:19 crc kubenswrapper[4853]: I1122 07:13:19.117473 4853 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7jnds" podUID="472b3cc8-386e-4828-a725-263057fb299b" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.34:5443/healthz\": dial tcp 10.217.0.34:5443: connect: connection refused"
Nov 22 07:13:19 crc kubenswrapper[4853]: I1122 07:13:19.125490 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-n6vz6"]
Nov 22 07:13:19 crc kubenswrapper[4853]: I1122 07:13:19.134866 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2p6qj\" (UID: \"541af556-5dce-45ed-bf9e-f6faf6b146ca\") " pod="openshift-image-registry/image-registry-697d97f7c8-2p6qj"
Nov 22 07:13:19 crc kubenswrapper[4853]: E1122 07:13:19.135449 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:19.635426638 +0000 UTC m=+198.476049264 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2p6qj" (UID: "541af556-5dce-45ed-bf9e-f6faf6b146ca") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 22 07:13:19 crc kubenswrapper[4853]: I1122 07:13:19.138435 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2stwm" event={"ID":"63e8cbe0-5a31-49f6-bd66-f04a2eb641ec","Type":"ContainerStarted","Data":"c16c07047449d0472ed9787146309b449ab016b91eb4b6d0cf3b66ebf5a988cb"}
Nov 22 07:13:19 crc kubenswrapper[4853]: I1122 07:13:19.138520 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2stwm" event={"ID":"63e8cbe0-5a31-49f6-bd66-f04a2eb641ec","Type":"ContainerStarted","Data":"bee310091da46b346fde0e17b10007da91026c6445def3671791a3bdff045257"}
Nov 22 07:13:19 crc kubenswrapper[4853]: I1122 07:13:19.139775 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2stwm"
Nov 22 07:13:19 crc kubenswrapper[4853]: W1122 07:13:19.207680 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod29a2c0ca_4d4d_4a05_a4bb_96b05720f59f.slice/crio-78757a05a952ce39037468a3f750c3b7bfbeb26a836223921f60066b0d935f00 WatchSource:0}: Error finding container 78757a05a952ce39037468a3f750c3b7bfbeb26a836223921f60066b0d935f00: Status 404 returned error can't find the container with id 78757a05a952ce39037468a3f750c3b7bfbeb26a836223921f60066b0d935f00
Nov 22 07:13:19 crc kubenswrapper[4853]: I1122 07:13:19.235607 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 22 07:13:19 crc kubenswrapper[4853]: E1122 07:13:19.236443 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:19.73641038 +0000 UTC m=+198.577033006 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 22 07:13:19 crc kubenswrapper[4853]: I1122 07:13:19.245347 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-cll6l" event={"ID":"ad5235a6-36eb-42fc-8a56-d8464014b881","Type":"ContainerStarted","Data":"f7eea2526329848739d271b0def9ed51bcec7ea8745eeae5bfad5a490d690b9d"}
Nov 22 07:13:19 crc kubenswrapper[4853]: I1122 07:13:19.263577 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-h2jnh" event={"ID":"6c313448-9287-4014-b36e-ae4e14b9ee4e","Type":"ContainerStarted","Data":"3579c0d7eab4e5e3b502f22b6d921eadf6ac3979cd66e021eedec5c94bc19dc9"}
Nov 22 07:13:19 crc kubenswrapper[4853]: I1122 07:13:19.288999 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-zqjqq" event={"ID":"f75d360f-0e31-40e3-8b5d-d51934525efb","Type":"ContainerStarted","Data":"6c97df635e0913d204dd18a264a94db135bc941a49938cb2c94bd70c46dee596"}
Nov 22 07:13:19 crc kubenswrapper[4853]: I1122 07:13:19.313013 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-5nds5" podStartSLOduration=139.312977849 podStartE2EDuration="2m19.312977849s" podCreationTimestamp="2025-11-22 07:11:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:13:19.273056915 +0000 UTC m=+198.113679541" watchObservedRunningTime="2025-11-22 07:13:19.312977849 +0000 UTC m=+198.153600475"
Nov 22 07:13:19 crc kubenswrapper[4853]: I1122 07:13:19.327437 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-696ts" event={"ID":"83b3203a-f1e8-4d8e-8c42-4932026537ee","Type":"ContainerStarted","Data":"aba325f407b57ac5d042e2ffcc583e8b044cc2f69a00bbb2ff33118b5ceabfbd"}
Nov 22 07:13:19 crc kubenswrapper[4853]: I1122 07:13:19.337175 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2p6qj\" (UID: \"541af556-5dce-45ed-bf9e-f6faf6b146ca\") " pod="openshift-image-registry/image-registry-697d97f7c8-2p6qj"
Nov 22 07:13:19 crc kubenswrapper[4853]: E1122 07:13:19.353411 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:19.853367725 +0000 UTC m=+198.693990351 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2p6qj" (UID: "541af556-5dce-45ed-bf9e-f6faf6b146ca") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 22 07:13:19 crc kubenswrapper[4853]: I1122 07:13:19.368361 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-t9fmp"]
Nov 22 07:13:19 crc kubenswrapper[4853]: I1122 07:13:19.380310 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-rh6fb" event={"ID":"065d5bdc-7e13-4a03-aa2d-5b7dd3b3938c","Type":"ContainerStarted","Data":"91179825170a3b05f3fa55cb706be58a3bae748a930d8fd0f715a91cf0bc0410"}
Nov 22 07:13:19 crc kubenswrapper[4853]: I1122 07:13:19.387185 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29396580-xlrxm" event={"ID":"adaf4de5-0b3c-4b48-a232-45157864a0f7","Type":"ContainerStarted","Data":"089d9f12a515b48b6322b0f2126e0bd80c9b9351704fa04e4630ac77b518d905"}
Nov 22 07:13:19 crc kubenswrapper[4853]: I1122 07:13:19.403646 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-t2flg" event={"ID":"913eeba3-a280-4ffa-a61a-febff59fcc2e","Type":"ContainerStarted","Data":"6a5aa50ab68f0a63199c08c7c1fc7c7d1a15b9d4f513e88a3b07595350f488c2"}
Nov 22 07:13:19 crc kubenswrapper[4853]: I1122 07:13:19.446548 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-w6jpc" event={"ID":"8c51e400-95dc-4b1b-ab28-e3f2e5780758","Type":"ContainerStarted","Data":"2941b7f2a03097c5bc187eb2093160bd7e2f8865cb820266c7657b80fc09c37e"}
Nov 22 07:13:19 crc kubenswrapper[4853]: I1122 07:13:19.462343 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 22 07:13:19 crc kubenswrapper[4853]: E1122 07:13:19.463837 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:19.963805394 +0000 UTC m=+198.804428060 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 22 07:13:19 crc kubenswrapper[4853]: I1122 07:13:19.464144 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-bnk7x" event={"ID":"9546ad13-1c91-495a-865b-b3396a94e17e","Type":"ContainerStarted","Data":"5940406ae496fac7ee1db3d02ec87fa0ff6fa3a9212d3b4100455c1c636e7f8a"}
Nov 22 07:13:19 crc kubenswrapper[4853]: I1122 07:13:19.559577 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rv68m" event={"ID":"606305dc-db05-45e6-8409-fdb1ca8ca988","Type":"ContainerStarted","Data":"b4cea9c36217941b2df2530044729b0bb99eed2e08450ed992380da4265c1432"}
Nov 22 07:13:19 crc kubenswrapper[4853]: I1122 07:13:19.578263 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2p6qj\" (UID: \"541af556-5dce-45ed-bf9e-f6faf6b146ca\") " pod="openshift-image-registry/image-registry-697d97f7c8-2p6qj"
Nov 22 07:13:19 crc kubenswrapper[4853]: E1122 07:13:19.579025 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:20.079012522 +0000 UTC m=+198.919635148 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2p6qj" (UID: "541af556-5dce-45ed-bf9e-f6faf6b146ca") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 22 07:13:19 crc kubenswrapper[4853]: I1122 07:13:19.580664 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-hktm5" event={"ID":"2eb41230-c219-4968-a240-36db37f3d772","Type":"ContainerStarted","Data":"9c8d70f4e049ac2fd70b9337599a7751bb912316498dddbef43544b865d469ab"}
Nov 22 07:13:19 crc kubenswrapper[4853]: I1122 07:13:19.620183 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-p5l64" event={"ID":"0fdfc9f2-e63f-48f4-89ad-94ef8b642d04","Type":"ContainerStarted","Data":"5f7a813bb766decb026fd11f3854aebab8f531556a25eb4638b2831bee31cc40"}
Nov 22 07:13:19 crc kubenswrapper[4853]: I1122 07:13:19.621310 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-p5l64"
Nov 22 07:13:19 crc kubenswrapper[4853]: I1122 07:13:19.627619 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bzv6w" event={"ID":"52bdc241-d70a-4a84-adc2-618dc90b8886","Type":"ContainerStarted","Data":"e8ec06dc665d3880a36890dfaaf68c03bff5adaa88f66cd8601db812a45166fc"}
Nov 22 07:13:19 crc kubenswrapper[4853]: I1122 07:13:19.634914 4853 patch_prober.go:28] interesting pod/downloads-7954f5f757-hpb7j container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.24:8080/\": dial tcp 10.217.0.24:8080: connect: connection refused" start-of-body=
Nov 22 07:13:19 crc kubenswrapper[4853]: I1122 07:13:19.634975 4853 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-hpb7j" podUID="bcd72804-cd09-4ec3-ae4a-f539958eb90c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.24:8080/\": dial tcp 10.217.0.24:8080: connect: connection refused"
Nov 22 07:13:19 crc kubenswrapper[4853]: I1122 07:13:19.642616 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-9qkgc"
Nov 22 07:13:19 crc kubenswrapper[4853]: I1122 07:13:19.649847 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-flwkb"
Nov 22 07:13:19 crc kubenswrapper[4853]: I1122 07:13:19.682829 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 22 07:13:19 crc kubenswrapper[4853]: E1122 07:13:19.683762 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:20.183705435 +0000 UTC m=+199.024328061 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 22 07:13:19 crc kubenswrapper[4853]: I1122 07:13:19.790481 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2p6qj\" (UID: \"541af556-5dce-45ed-bf9e-f6faf6b146ca\") " pod="openshift-image-registry/image-registry-697d97f7c8-2p6qj"
Nov 22 07:13:19 crc kubenswrapper[4853]: E1122 07:13:19.791103 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:20.291088141 +0000 UTC m=+199.131710767 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2p6qj" (UID: "541af556-5dce-45ed-bf9e-f6faf6b146ca") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 22 07:13:19 crc kubenswrapper[4853]: I1122 07:13:19.833675 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-zxl69" podStartSLOduration=139.833646646 podStartE2EDuration="2m19.833646646s" podCreationTimestamp="2025-11-22 07:11:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:13:19.822973715 +0000 UTC m=+198.663596341" watchObservedRunningTime="2025-11-22 07:13:19.833646646 +0000 UTC m=+198.674269272"
Nov 22 07:13:19 crc kubenswrapper[4853]: I1122 07:13:19.900258 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 22 07:13:19 crc kubenswrapper[4853]: E1122 07:13:19.900785 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:20.400723927 +0000 UTC m=+199.241346553 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 22 07:13:19 crc kubenswrapper[4853]: I1122 07:13:19.948151 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-p5l64"
Nov 22 07:13:19 crc kubenswrapper[4853]: I1122 07:13:19.964112 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-sk8bz"]
Nov 22 07:13:19 crc kubenswrapper[4853]: I1122 07:13:19.965104 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sk8bz"
Nov 22 07:13:20 crc kubenswrapper[4853]: I1122 07:13:20.015389 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Nov 22 07:13:20 crc kubenswrapper[4853]: I1122 07:13:20.019585 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2p6qj\" (UID: \"541af556-5dce-45ed-bf9e-f6faf6b146ca\") " pod="openshift-image-registry/image-registry-697d97f7c8-2p6qj"
Nov 22 07:13:20 crc kubenswrapper[4853]: E1122 07:13:20.020143 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:20.520116948 +0000 UTC m=+199.360739574 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2p6qj" (UID: "541af556-5dce-45ed-bf9e-f6faf6b146ca") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 22 07:13:20 crc kubenswrapper[4853]: I1122 07:13:20.112536 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-hpb7j" podStartSLOduration=140.112504667 podStartE2EDuration="2m20.112504667s" podCreationTimestamp="2025-11-22 07:11:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:13:20.006996703 +0000 UTC m=+198.847619329" watchObservedRunningTime="2025-11-22 07:13:20.112504667 +0000 UTC m=+198.953127293"
Nov 22 07:13:20 crc kubenswrapper[4853]: I1122 07:13:20.147966 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-sk8bz"]
Nov 22 07:13:20 crc kubenswrapper[4853]: I1122 07:13:20.154417 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 22 07:13:20 crc kubenswrapper[4853]: I1122 07:13:20.154733 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a81b49b7-c4a0-4397-8524-ffaa67583496-catalog-content\") pod \"community-operators-sk8bz\" (UID: \"a81b49b7-c4a0-4397-8524-ffaa67583496\") " pod="openshift-marketplace/community-operators-sk8bz"
Nov 22 07:13:20 crc kubenswrapper[4853]: I1122 07:13:20.154812 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hctnl\" (UniqueName: \"kubernetes.io/projected/a81b49b7-c4a0-4397-8524-ffaa67583496-kube-api-access-hctnl\") pod \"community-operators-sk8bz\" (UID: \"a81b49b7-c4a0-4397-8524-ffaa67583496\") " pod="openshift-marketplace/community-operators-sk8bz"
Nov 22 07:13:20 crc kubenswrapper[4853]: I1122 07:13:20.154854 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a81b49b7-c4a0-4397-8524-ffaa67583496-utilities\") pod \"community-operators-sk8bz\" (UID: \"a81b49b7-c4a0-4397-8524-ffaa67583496\") " pod="openshift-marketplace/community-operators-sk8bz"
Nov 22 07:13:20 crc kubenswrapper[4853]: E1122 07:13:20.154976 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:20.65495916 +0000 UTC m=+199.495581776 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 22 07:13:20 crc kubenswrapper[4853]: I1122 07:13:20.191710 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-b4zvh"]
Nov 22 07:13:20 crc kubenswrapper[4853]: I1122 07:13:20.193400 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-b4zvh"
Nov 22 07:13:20 crc kubenswrapper[4853]: I1122 07:13:20.210944 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Nov 22 07:13:20 crc kubenswrapper[4853]: I1122 07:13:20.221615 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rv68m" podStartSLOduration=140.221587979 podStartE2EDuration="2m20.221587979s" podCreationTimestamp="2025-11-22 07:11:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:13:20.21132528 +0000 UTC m=+199.051947906" watchObservedRunningTime="2025-11-22 07:13:20.221587979 +0000 UTC m=+199.062210625"
Nov 22 07:13:20 crc kubenswrapper[4853]: I1122 07:13:20.246663 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-b4zvh"]
Nov 22 07:13:20 crc kubenswrapper[4853]: I1122 07:13:20.262102 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/30996d2a-faed-48ba-80d6-d86b88fd5282-utilities\") pod \"certified-operators-b4zvh\" (UID: \"30996d2a-faed-48ba-80d6-d86b88fd5282\") " pod="openshift-marketplace/certified-operators-b4zvh"
Nov 22 07:13:20 crc kubenswrapper[4853]: I1122 07:13:20.262160 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a81b49b7-c4a0-4397-8524-ffaa67583496-catalog-content\") pod \"community-operators-sk8bz\" (UID: \"a81b49b7-c4a0-4397-8524-ffaa67583496\") " pod="openshift-marketplace/community-operators-sk8bz"
Nov 22 07:13:20 crc kubenswrapper[4853]: I1122 07:13:20.262223 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hctnl\" (UniqueName: \"kubernetes.io/projected/a81b49b7-c4a0-4397-8524-ffaa67583496-kube-api-access-hctnl\") pod \"community-operators-sk8bz\" (UID: \"a81b49b7-c4a0-4397-8524-ffaa67583496\") " pod="openshift-marketplace/community-operators-sk8bz"
Nov 22 07:13:20 crc kubenswrapper[4853]: I1122 07:13:20.262255 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2p6qj\" (UID: \"541af556-5dce-45ed-bf9e-f6faf6b146ca\") " pod="openshift-image-registry/image-registry-697d97f7c8-2p6qj"
Nov 22 07:13:20 crc kubenswrapper[4853]: I1122 07:13:20.262310 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a81b49b7-c4a0-4397-8524-ffaa67583496-utilities\") pod \"community-operators-sk8bz\" (UID: \"a81b49b7-c4a0-4397-8524-ffaa67583496\") " pod="openshift-marketplace/community-operators-sk8bz"
Nov 22 07:13:20 crc kubenswrapper[4853]: I1122 07:13:20.262335 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4lg2h\" (UniqueName: \"kubernetes.io/projected/30996d2a-faed-48ba-80d6-d86b88fd5282-kube-api-access-4lg2h\") pod \"certified-operators-b4zvh\" (UID: \"30996d2a-faed-48ba-80d6-d86b88fd5282\") " pod="openshift-marketplace/certified-operators-b4zvh"
Nov 22 07:13:20 crc kubenswrapper[4853]: I1122 07:13:20.262377 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/30996d2a-faed-48ba-80d6-d86b88fd5282-catalog-content\") pod \"certified-operators-b4zvh\" (UID: \"30996d2a-faed-48ba-80d6-d86b88fd5282\") " pod="openshift-marketplace/certified-operators-b4zvh"
Nov 22 07:13:20 crc kubenswrapper[4853]: E1122 07:13:20.262891 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:20.76287206 +0000 UTC m=+199.603494676 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2p6qj" (UID: "541af556-5dce-45ed-bf9e-f6faf6b146ca") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 22 07:13:20 crc kubenswrapper[4853]: I1122 07:13:20.263032 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a81b49b7-c4a0-4397-8524-ffaa67583496-catalog-content\") pod \"community-operators-sk8bz\" (UID: \"a81b49b7-c4a0-4397-8524-ffaa67583496\") " pod="openshift-marketplace/community-operators-sk8bz"
Nov 22 07:13:20 crc kubenswrapper[4853]: I1122 07:13:20.263317 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a81b49b7-c4a0-4397-8524-ffaa67583496-utilities\") pod \"community-operators-sk8bz\" (UID: \"a81b49b7-c4a0-4397-8524-ffaa67583496\") " pod="openshift-marketplace/community-operators-sk8bz"
Nov 22 07:13:20 crc kubenswrapper[4853]: I1122 07:13:20.314677 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7jnds" podStartSLOduration=140.314644866 podStartE2EDuration="2m20.314644866s" podCreationTimestamp="2025-11-22 07:11:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:13:20.304833329 +0000 UTC m=+199.145455975" watchObservedRunningTime="2025-11-22 07:13:20.314644866 +0000 UTC m=+199.155267492"
Nov 22 07:13:20 crc kubenswrapper[4853]: I1122 07:13:20.362845 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hctnl\" (UniqueName: \"kubernetes.io/projected/a81b49b7-c4a0-4397-8524-ffaa67583496-kube-api-access-hctnl\") pod \"community-operators-sk8bz\" (UID: \"a81b49b7-c4a0-4397-8524-ffaa67583496\") " pod="openshift-marketplace/community-operators-sk8bz"
Nov 22 07:13:20 crc kubenswrapper[4853]: E1122 07:13:20.363200 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:20.863178013 +0000 UTC m=+199.703800629 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 22 07:13:20 crc kubenswrapper[4853]: I1122 07:13:20.363119 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 22 07:13:20 crc kubenswrapper[4853]: I1122 07:13:20.363708 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/30996d2a-faed-48ba-80d6-d86b88fd5282-utilities\") pod \"certified-operators-b4zvh\" (UID: \"30996d2a-faed-48ba-80d6-d86b88fd5282\") " pod="openshift-marketplace/certified-operators-b4zvh"
Nov 22 07:13:20 crc kubenswrapper[4853]: I1122 07:13:20.363905 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2p6qj\" (UID: \"541af556-5dce-45ed-bf9e-f6faf6b146ca\") " pod="openshift-image-registry/image-registry-697d97f7c8-2p6qj"
Nov 22 07:13:20 crc kubenswrapper[4853]: I1122 07:13:20.367130 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4lg2h\" (UniqueName: \"kubernetes.io/projected/30996d2a-faed-48ba-80d6-d86b88fd5282-kube-api-access-4lg2h\") pod \"certified-operators-b4zvh\" (UID: \"30996d2a-faed-48ba-80d6-d86b88fd5282\") " pod="openshift-marketplace/certified-operators-b4zvh"
Nov 22 07:13:20 crc kubenswrapper[4853]: I1122 07:13:20.367206 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/30996d2a-faed-48ba-80d6-d86b88fd5282-catalog-content\") pod \"certified-operators-b4zvh\" (UID: \"30996d2a-faed-48ba-80d6-d86b88fd5282\") " pod="openshift-marketplace/certified-operators-b4zvh"
Nov 22 07:13:20 crc kubenswrapper[4853]: I1122 07:13:20.367601 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/30996d2a-faed-48ba-80d6-d86b88fd5282-utilities\") pod \"certified-operators-b4zvh\" (UID: \"30996d2a-faed-48ba-80d6-d86b88fd5282\") " pod="openshift-marketplace/certified-operators-b4zvh"
Nov 22 07:13:20 crc kubenswrapper[4853]: I1122 07:13:20.367627 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/30996d2a-faed-48ba-80d6-d86b88fd5282-catalog-content\") pod \"certified-operators-b4zvh\" (UID: \"30996d2a-faed-48ba-80d6-d86b88fd5282\") " pod="openshift-marketplace/certified-operators-b4zvh"
Nov 22 07:13:20 crc kubenswrapper[4853]: E1122 07:13:20.368403 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:20.868390835 +0000 UTC m=+199.709013461 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2p6qj" (UID: "541af556-5dce-45ed-bf9e-f6faf6b146ca") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 22 07:13:20 crc kubenswrapper[4853]: I1122 07:13:20.375007 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-9gwch"]
Nov 22 07:13:20 crc kubenswrapper[4853]: I1122 07:13:20.376939 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9gwch"
Nov 22 07:13:20 crc kubenswrapper[4853]: I1122 07:13:20.406429 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9gwch"]
Nov 22 07:13:20 crc kubenswrapper[4853]: I1122 07:13:20.414839 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4lg2h\" (UniqueName: \"kubernetes.io/projected/30996d2a-faed-48ba-80d6-d86b88fd5282-kube-api-access-4lg2h\") pod \"certified-operators-b4zvh\" (UID: \"30996d2a-faed-48ba-80d6-d86b88fd5282\") " pod="openshift-marketplace/certified-operators-b4zvh"
Nov 22 07:13:20 crc kubenswrapper[4853]: I1122 07:13:20.474154 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 22 07:13:20 crc kubenswrapper[4853]: I1122 07:13:20.474402 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3daf1927-a46c-4be1-ace4-f62d448fb994-catalog-content\") pod \"community-operators-9gwch\" (UID: \"3daf1927-a46c-4be1-ace4-f62d448fb994\") " pod="openshift-marketplace/community-operators-9gwch"
Nov 22 07:13:20 crc kubenswrapper[4853]: I1122 07:13:20.474424 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-svrzp\" (UniqueName: \"kubernetes.io/projected/3daf1927-a46c-4be1-ace4-f62d448fb994-kube-api-access-svrzp\") pod \"community-operators-9gwch\" (UID: \"3daf1927-a46c-4be1-ace4-f62d448fb994\") " pod="openshift-marketplace/community-operators-9gwch"
Nov 22 07:13:20 crc kubenswrapper[4853]: I1122 07:13:20.474526 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3daf1927-a46c-4be1-ace4-f62d448fb994-utilities\") pod \"community-operators-9gwch\" (UID: \"3daf1927-a46c-4be1-ace4-f62d448fb994\") " pod="openshift-marketplace/community-operators-9gwch"
Nov 22 07:13:20 crc kubenswrapper[4853]: E1122 07:13:20.474680 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:20.97466399 +0000 UTC m=+199.815286616 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 22 07:13:20 crc kubenswrapper[4853]: I1122 07:13:20.489344 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sk8bz"
Nov 22 07:13:20 crc kubenswrapper[4853]: I1122 07:13:20.532646 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-rh6fb" podStartSLOduration=140.532624794 podStartE2EDuration="2m20.532624794s" podCreationTimestamp="2025-11-22 07:11:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:13:20.517831533 +0000 UTC m=+199.358454179" watchObservedRunningTime="2025-11-22 07:13:20.532624794 +0000 UTC m=+199.373247420"
Nov 22 07:13:20 crc kubenswrapper[4853]: I1122 07:13:20.576024 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3daf1927-a46c-4be1-ace4-f62d448fb994-catalog-content\") pod \"community-operators-9gwch\" (UID: \"3daf1927-a46c-4be1-ace4-f62d448fb994\") " pod="openshift-marketplace/community-operators-9gwch"
Nov 22 07:13:20 crc kubenswrapper[4853]: I1122 07:13:20.576072 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-svrzp\" (UniqueName: \"kubernetes.io/projected/3daf1927-a46c-4be1-ace4-f62d448fb994-kube-api-access-svrzp\") pod \"community-operators-9gwch\" (UID: \"3daf1927-a46c-4be1-ace4-f62d448fb994\") " pod="openshift-marketplace/community-operators-9gwch"
Nov 22 07:13:20 crc kubenswrapper[4853]: I1122 07:13:20.576115 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2p6qj\" (UID: \"541af556-5dce-45ed-bf9e-f6faf6b146ca\") " pod="openshift-image-registry/image-registry-697d97f7c8-2p6qj"
Nov 22 07:13:20 crc kubenswrapper[4853]: I1122 07:13:20.576177 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3daf1927-a46c-4be1-ace4-f62d448fb994-utilities\") pod \"community-operators-9gwch\" (UID: \"3daf1927-a46c-4be1-ace4-f62d448fb994\") " pod="openshift-marketplace/community-operators-9gwch"
Nov 22 07:13:20 crc kubenswrapper[4853]: I1122 07:13:20.576642
4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3daf1927-a46c-4be1-ace4-f62d448fb994-utilities\") pod \"community-operators-9gwch\" (UID: \"3daf1927-a46c-4be1-ace4-f62d448fb994\") " pod="openshift-marketplace/community-operators-9gwch" Nov 22 07:13:20 crc kubenswrapper[4853]: I1122 07:13:20.576964 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3daf1927-a46c-4be1-ace4-f62d448fb994-catalog-content\") pod \"community-operators-9gwch\" (UID: \"3daf1927-a46c-4be1-ace4-f62d448fb994\") " pod="openshift-marketplace/community-operators-9gwch" Nov 22 07:13:20 crc kubenswrapper[4853]: E1122 07:13:20.577601 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:21.077580154 +0000 UTC m=+199.918202780 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2p6qj" (UID: "541af556-5dce-45ed-bf9e-f6faf6b146ca") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:20 crc kubenswrapper[4853]: I1122 07:13:20.600680 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-k7t4t"] Nov 22 07:13:20 crc kubenswrapper[4853]: I1122 07:13:20.602081 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-k7t4t" Nov 22 07:13:20 crc kubenswrapper[4853]: I1122 07:13:20.612268 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-k7t4t"] Nov 22 07:13:20 crc kubenswrapper[4853]: I1122 07:13:20.613801 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-p5l64" podStartSLOduration=140.613761017 podStartE2EDuration="2m20.613761017s" podCreationTimestamp="2025-11-22 07:11:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:13:20.600401864 +0000 UTC m=+199.441024480" watchObservedRunningTime="2025-11-22 07:13:20.613761017 +0000 UTC m=+199.454383643" Nov 22 07:13:20 crc kubenswrapper[4853]: I1122 07:13:20.614164 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-svrzp\" (UniqueName: \"kubernetes.io/projected/3daf1927-a46c-4be1-ace4-f62d448fb994-kube-api-access-svrzp\") pod \"community-operators-9gwch\" (UID: \"3daf1927-a46c-4be1-ace4-f62d448fb994\") " pod="openshift-marketplace/community-operators-9gwch" Nov 22 07:13:20 crc kubenswrapper[4853]: I1122 07:13:20.625491 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-b4zvh" Nov 22 07:13:20 crc kubenswrapper[4853]: I1122 07:13:20.636182 4853 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-9qfvq container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.39:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Nov 22 07:13:20 crc kubenswrapper[4853]: I1122 07:13:20.636618 4853 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-9qfvq" podUID="b512d3e0-dee2-48d3-87e3-b05fb2cf8ed7" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.39:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Nov 22 07:13:20 crc kubenswrapper[4853]: I1122 07:13:20.677176 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:20 crc kubenswrapper[4853]: I1122 07:13:20.677460 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/30cbccc7-41e5-46d2-b805-bbb03b8bb67c-catalog-content\") pod \"certified-operators-k7t4t\" (UID: \"30cbccc7-41e5-46d2-b805-bbb03b8bb67c\") " pod="openshift-marketplace/certified-operators-k7t4t" Nov 22 07:13:20 crc kubenswrapper[4853]: I1122 07:13:20.677551 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/30cbccc7-41e5-46d2-b805-bbb03b8bb67c-utilities\") pod \"certified-operators-k7t4t\" (UID: \"30cbccc7-41e5-46d2-b805-bbb03b8bb67c\") " pod="openshift-marketplace/certified-operators-k7t4t" Nov 22 07:13:20 crc kubenswrapper[4853]: I1122 07:13:20.677619 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67rjz\" (UniqueName: \"kubernetes.io/projected/30cbccc7-41e5-46d2-b805-bbb03b8bb67c-kube-api-access-67rjz\") pod \"certified-operators-k7t4t\" (UID: \"30cbccc7-41e5-46d2-b805-bbb03b8bb67c\") " pod="openshift-marketplace/certified-operators-k7t4t" Nov 22 07:13:20 crc kubenswrapper[4853]: E1122 07:13:20.677685 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:21.177657342 +0000 UTC m=+200.018279968 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:20 crc kubenswrapper[4853]: I1122 07:13:20.679671 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-22nw6" event={"ID":"23e292bd-15e6-4fc0-835e-2871bc0e9e8e","Type":"ContainerStarted","Data":"3db3ed90e5f032c41c18cbb945d85fb69b405ba650fd560317c70b4dbf1ea9d1"} Nov 22 07:13:20 crc kubenswrapper[4853]: I1122 07:13:20.679733 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-22nw6" event={"ID":"23e292bd-15e6-4fc0-835e-2871bc0e9e8e","Type":"ContainerStarted","Data":"86c0d366dca9c4edd53c89297d718d624b9363f8f09fafecd85b2bfe2cd884ba"} Nov 22 07:13:20 crc kubenswrapper[4853]: I1122 07:13:20.686287 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2stwm" podStartSLOduration=140.686255045 podStartE2EDuration="2m20.686255045s" podCreationTimestamp="2025-11-22 07:11:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:13:20.675924345 +0000 UTC m=+199.516546961" watchObservedRunningTime="2025-11-22 07:13:20.686255045 +0000 UTC m=+199.526877671" Nov 22 07:13:20 crc kubenswrapper[4853]: I1122 07:13:20.710592 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dvcg6" event={"ID":"0d10e537-edf1-40b9-a8a7-038237e48834","Type":"ContainerStarted","Data":"292db30fc33755a19f1cb740b56478244feff4f58ede5afe4ff6828ed693b4e9"} Nov 22 07:13:20 crc kubenswrapper[4853]: I1122 07:13:20.743278 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9gwch" Nov 22 07:13:20 crc kubenswrapper[4853]: I1122 07:13:20.782433 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/30cbccc7-41e5-46d2-b805-bbb03b8bb67c-catalog-content\") pod \"certified-operators-k7t4t\" (UID: \"30cbccc7-41e5-46d2-b805-bbb03b8bb67c\") " pod="openshift-marketplace/certified-operators-k7t4t" Nov 22 07:13:20 crc kubenswrapper[4853]: I1122 07:13:20.782573 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2p6qj\" (UID: \"541af556-5dce-45ed-bf9e-f6faf6b146ca\") " pod="openshift-image-registry/image-registry-697d97f7c8-2p6qj" Nov 22 07:13:20 crc kubenswrapper[4853]: I1122 07:13:20.782647 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/30cbccc7-41e5-46d2-b805-bbb03b8bb67c-utilities\") pod \"certified-operators-k7t4t\" (UID: \"30cbccc7-41e5-46d2-b805-bbb03b8bb67c\") " pod="openshift-marketplace/certified-operators-k7t4t" Nov 22 07:13:20 crc kubenswrapper[4853]: I1122 07:13:20.782717 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-67rjz\" (UniqueName: \"kubernetes.io/projected/30cbccc7-41e5-46d2-b805-bbb03b8bb67c-kube-api-access-67rjz\") pod \"certified-operators-k7t4t\" (UID: \"30cbccc7-41e5-46d2-b805-bbb03b8bb67c\") " pod="openshift-marketplace/certified-operators-k7t4t" Nov 22 07:13:20 crc kubenswrapper[4853]: I1122 07:13:20.783894 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/30cbccc7-41e5-46d2-b805-bbb03b8bb67c-catalog-content\") pod \"certified-operators-k7t4t\" (UID: \"30cbccc7-41e5-46d2-b805-bbb03b8bb67c\") " pod="openshift-marketplace/certified-operators-k7t4t" Nov 22 07:13:20 crc kubenswrapper[4853]: E1122 07:13:20.783902 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:21.283884456 +0000 UTC m=+200.124507292 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2p6qj" (UID: "541af556-5dce-45ed-bf9e-f6faf6b146ca") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:20 crc kubenswrapper[4853]: I1122 07:13:20.784720 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/30cbccc7-41e5-46d2-b805-bbb03b8bb67c-utilities\") pod \"certified-operators-k7t4t\" (UID: \"30cbccc7-41e5-46d2-b805-bbb03b8bb67c\") " pod="openshift-marketplace/certified-operators-k7t4t" Nov 22 07:13:20 crc kubenswrapper[4853]: I1122 07:13:20.812496 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bzv6w" podStartSLOduration=140.812458832 podStartE2EDuration="2m20.812458832s" podCreationTimestamp="2025-11-22 07:11:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:13:20.798871602 +0000 UTC m=+199.639494248" watchObservedRunningTime="2025-11-22 07:13:20.812458832 +0000 UTC m=+199.653081458" Nov 22 07:13:20 crc kubenswrapper[4853]: I1122 07:13:20.816055 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29396580-xlrxm" event={"ID":"adaf4de5-0b3c-4b48-a232-45157864a0f7","Type":"ContainerStarted","Data":"79907e986f7668a7d975a32ab11e2d321162948bb31ac8f00d8f8d88bb7dfb42"} Nov 22 07:13:20 crc kubenswrapper[4853]: I1122 07:13:20.831335 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-67rjz\" (UniqueName: \"kubernetes.io/projected/30cbccc7-41e5-46d2-b805-bbb03b8bb67c-kube-api-access-67rjz\") pod \"certified-operators-k7t4t\" (UID: \"30cbccc7-41e5-46d2-b805-bbb03b8bb67c\") " pod="openshift-marketplace/certified-operators-k7t4t" Nov 22 07:13:20 crc kubenswrapper[4853]: I1122 07:13:20.840091 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-bnk7x" event={"ID":"9546ad13-1c91-495a-865b-b3396a94e17e","Type":"ContainerStarted","Data":"b409001d2b7aa0315ac17910577f93fcfe454b0762b0f15c7fe9f0b907ab1f04"} Nov 22 07:13:20 crc kubenswrapper[4853]: I1122 07:13:20.913120 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:20 crc kubenswrapper[4853]: E1122 07:13:20.914200 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:21.414175324 +0000 UTC m=+200.254797950 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:20 crc kubenswrapper[4853]: I1122 07:13:20.926492 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-djgfn" event={"ID":"3960cd0a-8f4a-44de-a022-3858e1176a99","Type":"ContainerStarted","Data":"6a0dff664fca2ecacef015dccdc0693b459b8552820959716df6af1d887bd126"} Nov 22 07:13:20 crc kubenswrapper[4853]: I1122 07:13:20.959825 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-n6vz6" event={"ID":"7e73b9e6-c1a8-411b-9360-32dc388d76f1","Type":"ContainerStarted","Data":"72ba6870ef0c62351b804b7354bcb387252ff435e473bd3034f8b254c49b7254"} Nov 22 07:13:20 crc kubenswrapper[4853]: I1122 07:13:20.961703 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-cjq85" event={"ID":"9a046e1a-9a2f-472a-909e-12fdaa9db2f1","Type":"ContainerStarted","Data":"d2c975e8403fccb1b1792bc5d51828c5410faece8e1859a10123e95f530735f2"} Nov 22 07:13:21 crc kubenswrapper[4853]: I1122 07:13:21.024787 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2p6qj\" (UID: \"541af556-5dce-45ed-bf9e-f6faf6b146ca\") " pod="openshift-image-registry/image-registry-697d97f7c8-2p6qj" Nov 22 07:13:21 crc kubenswrapper[4853]: I1122 07:13:21.025946 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-k7t4t" Nov 22 07:13:21 crc kubenswrapper[4853]: E1122 07:13:21.027290 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:21.527251184 +0000 UTC m=+200.367873810 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2p6qj" (UID: "541af556-5dce-45ed-bf9e-f6faf6b146ca") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:21 crc kubenswrapper[4853]: I1122 07:13:21.037490 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-jfgsj" event={"ID":"e2ccfb3a-48b6-4367-abae-d5ac6d053f77","Type":"ContainerStarted","Data":"a4939117893c5f547987f5e01f04120670fb037db64dbfea329dcabc175efe8b"} Nov 22 07:13:21 crc kubenswrapper[4853]: I1122 07:13:21.067643 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-zqjqq" event={"ID":"f75d360f-0e31-40e3-8b5d-d51934525efb","Type":"ContainerStarted","Data":"9898db82abb8d4911833f6d56bdcf02f3f972ace6c2a302d08e64736a6f71e79"} Nov 22 07:13:21 crc kubenswrapper[4853]: I1122 07:13:21.132419 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:21 crc kubenswrapper[4853]: E1122 07:13:21.133094 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:21.633043806 +0000 UTC m=+200.473666432 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:21 crc kubenswrapper[4853]: I1122 07:13:21.133317 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2p6qj\" (UID: \"541af556-5dce-45ed-bf9e-f6faf6b146ca\") " pod="openshift-image-registry/image-registry-697d97f7c8-2p6qj" Nov 22 07:13:21 crc kubenswrapper[4853]: E1122 07:13:21.136054 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:21.636033737 +0000 UTC m=+200.476656363 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2p6qj" (UID: "541af556-5dce-45ed-bf9e-f6faf6b146ca") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:21 crc kubenswrapper[4853]: I1122 07:13:21.145336 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-hktm5" event={"ID":"2eb41230-c219-4968-a240-36db37f3d772","Type":"ContainerStarted","Data":"17ef642a7d394c24f937f1090f1af031250e22ea5d7bf8a2543e2de2ce653e67"} Nov 22 07:13:21 crc kubenswrapper[4853]: I1122 07:13:21.197694 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-xmnqz" event={"ID":"05b7fb71-56a6-4875-a680-995a1a2194d6","Type":"ContainerStarted","Data":"244fba524578322ecbf34044c28931735af734f1c6de7a3efaec85f861731d43"} Nov 22 07:13:21 crc kubenswrapper[4853]: I1122 07:13:21.218849 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-gwwg5" event={"ID":"b8aed1ad-ec7d-4e5d-b60a-b8d7bee2a0cc","Type":"ContainerStarted","Data":"3f6925ea92175ec909d297d881bfa02c835a129c59b7784f5e83c1bffd0c6b12"} Nov 22 07:13:21 crc kubenswrapper[4853]: I1122 07:13:21.230862 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-jfgsj" podStartSLOduration=141.230838442 podStartE2EDuration="2m21.230838442s" podCreationTimestamp="2025-11-22 07:11:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:13:21.2286053 +0000 UTC m=+200.069227936" watchObservedRunningTime="2025-11-22 07:13:21.230838442 +0000 UTC m=+200.071461088" Nov 22 07:13:21 crc kubenswrapper[4853]: I1122 07:13:21.232480 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-t9fmp" event={"ID":"87d08723-fac8-48ca-9255-848a0e659721","Type":"ContainerStarted","Data":"1f993078f5bb23494a14f7a02e07afac509237428ecdcdb8a2761500699b11c9"} Nov 22 07:13:21 crc kubenswrapper[4853]: I1122 07:13:21.244440 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:21 crc kubenswrapper[4853]: E1122 07:13:21.247093 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:21.747062672 +0000 UTC m=+200.587685298 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:21 crc kubenswrapper[4853]: I1122 07:13:21.267088 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-h486l" event={"ID":"f5873a23-2127-4288-8d68-6d12756368b5","Type":"ContainerStarted","Data":"30765b94ba5fd7c308429a0e00fb33420e33e30596d0bc4f064c0dad73661c9f"} Nov 22 07:13:21 crc kubenswrapper[4853]: I1122 07:13:21.277846 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-9kg95" event={"ID":"29a2c0ca-4d4d-4a05-a4bb-96b05720f59f","Type":"ContainerStarted","Data":"78757a05a952ce39037468a3f750c3b7bfbeb26a836223921f60066b0d935f00"} Nov 22 07:13:21 crc kubenswrapper[4853]: I1122 07:13:21.277900 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bzv6w" Nov 22 07:13:21 crc kubenswrapper[4853]: I1122 07:13:21.282121 4853 patch_prober.go:28] interesting pod/downloads-7954f5f757-hpb7j container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.24:8080/\": dial tcp 10.217.0.24:8080: connect: connection refused" start-of-body= Nov 22 07:13:21 crc kubenswrapper[4853]: I1122 07:13:21.282482 4853 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-hpb7j" podUID="bcd72804-cd09-4ec3-ae4a-f539958eb90c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.24:8080/\": dial tcp 10.217.0.24:8080: connect: connection refused" Nov 22 07:13:21 crc kubenswrapper[4853]: I1122 07:13:21.295225 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bzv6w" Nov 22 07:13:21 crc kubenswrapper[4853]: I1122 07:13:21.316361 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29396580-xlrxm" podStartSLOduration=142.316336473 podStartE2EDuration="2m22.316336473s" podCreationTimestamp="2025-11-22 07:10:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:13:21.281596099 +0000 UTC m=+200.122218715" watchObservedRunningTime="2025-11-22 07:13:21.316336473 +0000 UTC m=+200.156959099" Nov 22 07:13:21 crc kubenswrapper[4853]: I1122 07:13:21.322244 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-djgfn" podStartSLOduration=141.322222142 podStartE2EDuration="2m21.322222142s" podCreationTimestamp="2025-11-22 07:11:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:13:21.318903813 +0000 UTC m=+200.159526429" watchObservedRunningTime="2025-11-22 07:13:21.322222142 +0000 UTC m=+200.162844788" Nov 22 07:13:21 crc kubenswrapper[4853]: I1122 07:13:21.353517 4853 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2p6qj\" (UID: \"541af556-5dce-45ed-bf9e-f6faf6b146ca\") " pod="openshift-image-registry/image-registry-697d97f7c8-2p6qj" Nov 22 07:13:21 crc kubenswrapper[4853]: E1122 07:13:21.365717 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:21.865699753 +0000 UTC m=+200.706322379 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2p6qj" (UID: "541af556-5dce-45ed-bf9e-f6faf6b146ca") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:21 crc kubenswrapper[4853]: I1122 07:13:21.367610 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dvcg6" podStartSLOduration=141.367580455 podStartE2EDuration="2m21.367580455s" podCreationTimestamp="2025-11-22 07:11:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:13:21.359424783 +0000 UTC m=+200.200047409" watchObservedRunningTime="2025-11-22 07:13:21.367580455 +0000 UTC m=+200.208203091" Nov 22 07:13:21 crc kubenswrapper[4853]: I1122 07:13:21.391517 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-9qfvq" Nov 22 07:13:21 crc kubenswrapper[4853]: I1122 07:13:21.403072 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-cjq85" podStartSLOduration=141.403039147 podStartE2EDuration="2m21.403039147s" podCreationTimestamp="2025-11-22 07:11:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:13:21.394637479 +0000 UTC m=+200.235260115" watchObservedRunningTime="2025-11-22 07:13:21.403039147 +0000 UTC m=+200.243661773" Nov 22 07:13:21 crc kubenswrapper[4853]: I1122 07:13:21.437798 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-bnk7x" podStartSLOduration=8.43774343 podStartE2EDuration="8.43774343s" podCreationTimestamp="2025-11-22 07:13:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:13:21.43629668 +0000 UTC m=+200.276919306" watchObservedRunningTime="2025-11-22 07:13:21.43774343 +0000 UTC m=+200.278366056" Nov 22 07:13:21 crc kubenswrapper[4853]: I1122 07:13:21.456142 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:21 crc 
kubenswrapper[4853]: E1122 07:13:21.456701 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:21.956679193 +0000 UTC m=+200.797301819 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:21 crc kubenswrapper[4853]: I1122 07:13:21.499218 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-zqjqq" podStartSLOduration=141.499197388 podStartE2EDuration="2m21.499197388s" podCreationTimestamp="2025-11-22 07:11:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:13:21.495899228 +0000 UTC m=+200.336521854" watchObservedRunningTime="2025-11-22 07:13:21.499197388 +0000 UTC m=+200.339820014" Nov 22 07:13:21 crc kubenswrapper[4853]: I1122 07:13:21.566980 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2p6qj\" (UID: \"541af556-5dce-45ed-bf9e-f6faf6b146ca\") " pod="openshift-image-registry/image-registry-697d97f7c8-2p6qj" Nov 22 07:13:21 crc kubenswrapper[4853]: E1122 07:13:21.568162 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:22.06814618 +0000 UTC m=+200.908768796 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2p6qj" (UID: "541af556-5dce-45ed-bf9e-f6faf6b146ca") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:21 crc kubenswrapper[4853]: I1122 07:13:21.623978 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-sk8bz"] Nov 22 07:13:21 crc kubenswrapper[4853]: I1122 07:13:21.668493 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:21 crc kubenswrapper[4853]: E1122 07:13:21.669204 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-11-22 07:13:22.169183073 +0000 UTC m=+201.009805699 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:21 crc kubenswrapper[4853]: I1122 07:13:21.711240 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-hktm5" podStartSLOduration=141.711222435 podStartE2EDuration="2m21.711222435s" podCreationTimestamp="2025-11-22 07:11:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:13:21.709315443 +0000 UTC m=+200.549938069" watchObservedRunningTime="2025-11-22 07:13:21.711222435 +0000 UTC m=+200.551845061" Nov 22 07:13:21 crc kubenswrapper[4853]: I1122 07:13:21.777248 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2p6qj\" (UID: \"541af556-5dce-45ed-bf9e-f6faf6b146ca\") " pod="openshift-image-registry/image-registry-697d97f7c8-2p6qj" Nov 22 07:13:21 crc kubenswrapper[4853]: E1122 07:13:21.777777 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:22.277736171 +0000 UTC m=+201.118358797 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2p6qj" (UID: "541af556-5dce-45ed-bf9e-f6faf6b146ca") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:21 crc kubenswrapper[4853]: I1122 07:13:21.897112 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:21 crc kubenswrapper[4853]: E1122 07:13:21.897398 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:22.397372609 +0000 UTC m=+201.237995235 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:21 crc kubenswrapper[4853]: I1122 07:13:21.898148 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2p6qj\" (UID: \"541af556-5dce-45ed-bf9e-f6faf6b146ca\") " pod="openshift-image-registry/image-registry-697d97f7c8-2p6qj" Nov 22 07:13:21 crc kubenswrapper[4853]: E1122 07:13:21.898629 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:22.398607042 +0000 UTC m=+201.239229668 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2p6qj" (UID: "541af556-5dce-45ed-bf9e-f6faf6b146ca") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:21 crc kubenswrapper[4853]: I1122 07:13:21.915667 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9gwch"] Nov 22 07:13:21 crc kubenswrapper[4853]: I1122 07:13:21.951444 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-4z6bc"] Nov 22 07:13:21 crc kubenswrapper[4853]: I1122 07:13:21.952588 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4z6bc" Nov 22 07:13:21 crc kubenswrapper[4853]: I1122 07:13:21.960882 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Nov 22 07:13:21 crc kubenswrapper[4853]: I1122 07:13:21.969578 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4z6bc"] Nov 22 07:13:22 crc kubenswrapper[4853]: I1122 07:13:22.004881 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:22 crc kubenswrapper[4853]: E1122 07:13:22.005544 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:22.505521425 +0000 UTC m=+201.346144051 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:22 crc kubenswrapper[4853]: I1122 07:13:22.042579 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Nov 22 07:13:22 crc kubenswrapper[4853]: I1122 07:13:22.043333 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 22 07:13:22 crc kubenswrapper[4853]: I1122 07:13:22.048404 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Nov 22 07:13:22 crc kubenswrapper[4853]: I1122 07:13:22.048871 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Nov 22 07:13:22 crc kubenswrapper[4853]: I1122 07:13:22.051642 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Nov 22 07:13:22 crc kubenswrapper[4853]: I1122 07:13:22.103944 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-b4zvh"] Nov 22 07:13:22 crc kubenswrapper[4853]: I1122 07:13:22.105952 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-k7t4t"] Nov 22 07:13:22 crc kubenswrapper[4853]: I1122 07:13:22.106072 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6240b5f2-c1bb-4478-8935-b2579e37e8af-catalog-content\") pod \"redhat-marketplace-4z6bc\" (UID: \"6240b5f2-c1bb-4478-8935-b2579e37e8af\") " pod="openshift-marketplace/redhat-marketplace-4z6bc" Nov 22 07:13:22 crc kubenswrapper[4853]: I1122 07:13:22.106135 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jfvfv\" (UniqueName: \"kubernetes.io/projected/6240b5f2-c1bb-4478-8935-b2579e37e8af-kube-api-access-jfvfv\") pod \"redhat-marketplace-4z6bc\" (UID: \"6240b5f2-c1bb-4478-8935-b2579e37e8af\") " pod="openshift-marketplace/redhat-marketplace-4z6bc" Nov 22 07:13:22 crc kubenswrapper[4853]: I1122 07:13:22.106176 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2p6qj\" (UID: \"541af556-5dce-45ed-bf9e-f6faf6b146ca\") " pod="openshift-image-registry/image-registry-697d97f7c8-2p6qj" Nov 22 07:13:22 crc kubenswrapper[4853]: I1122 07:13:22.106218 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6240b5f2-c1bb-4478-8935-b2579e37e8af-utilities\") pod \"redhat-marketplace-4z6bc\" (UID: \"6240b5f2-c1bb-4478-8935-b2579e37e8af\") " pod="openshift-marketplace/redhat-marketplace-4z6bc" Nov 22 07:13:22 crc kubenswrapper[4853]: E1122 07:13:22.106560 4853 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:22.606547208 +0000 UTC m=+201.447169834 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2p6qj" (UID: "541af556-5dce-45ed-bf9e-f6faf6b146ca") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:22 crc kubenswrapper[4853]: W1122 07:13:22.115974 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod30cbccc7_41e5_46d2_b805_bbb03b8bb67c.slice/crio-b98febe14372212dfa466687e7db671005c266bac32bbe15b78e216f0b785da1 WatchSource:0}: Error finding container b98febe14372212dfa466687e7db671005c266bac32bbe15b78e216f0b785da1: Status 404 returned error can't find the container with id b98febe14372212dfa466687e7db671005c266bac32bbe15b78e216f0b785da1 Nov 22 07:13:22 crc kubenswrapper[4853]: I1122 07:13:22.208130 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:22 crc kubenswrapper[4853]: I1122 07:13:22.208572 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jfvfv\" (UniqueName: \"kubernetes.io/projected/6240b5f2-c1bb-4478-8935-b2579e37e8af-kube-api-access-jfvfv\") pod \"redhat-marketplace-4z6bc\" (UID: \"6240b5f2-c1bb-4478-8935-b2579e37e8af\") " pod="openshift-marketplace/redhat-marketplace-4z6bc" Nov 22 07:13:22 crc kubenswrapper[4853]: I1122 07:13:22.208656 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ef1b69bb-e10e-4e77-83ef-2467f1ffdcaa-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"ef1b69bb-e10e-4e77-83ef-2467f1ffdcaa\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 22 07:13:22 crc kubenswrapper[4853]: I1122 07:13:22.208726 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6240b5f2-c1bb-4478-8935-b2579e37e8af-utilities\") pod \"redhat-marketplace-4z6bc\" (UID: \"6240b5f2-c1bb-4478-8935-b2579e37e8af\") " pod="openshift-marketplace/redhat-marketplace-4z6bc" Nov 22 07:13:22 crc kubenswrapper[4853]: I1122 07:13:22.208886 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ef1b69bb-e10e-4e77-83ef-2467f1ffdcaa-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"ef1b69bb-e10e-4e77-83ef-2467f1ffdcaa\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 22 07:13:22 crc kubenswrapper[4853]: I1122 07:13:22.208947 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6240b5f2-c1bb-4478-8935-b2579e37e8af-catalog-content\") pod \"redhat-marketplace-4z6bc\" 
(UID: \"6240b5f2-c1bb-4478-8935-b2579e37e8af\") " pod="openshift-marketplace/redhat-marketplace-4z6bc" Nov 22 07:13:22 crc kubenswrapper[4853]: I1122 07:13:22.209539 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6240b5f2-c1bb-4478-8935-b2579e37e8af-catalog-content\") pod \"redhat-marketplace-4z6bc\" (UID: \"6240b5f2-c1bb-4478-8935-b2579e37e8af\") " pod="openshift-marketplace/redhat-marketplace-4z6bc" Nov 22 07:13:22 crc kubenswrapper[4853]: E1122 07:13:22.209654 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:22.709635227 +0000 UTC m=+201.550257853 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:22 crc kubenswrapper[4853]: I1122 07:13:22.210300 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6240b5f2-c1bb-4478-8935-b2579e37e8af-utilities\") pod \"redhat-marketplace-4z6bc\" (UID: \"6240b5f2-c1bb-4478-8935-b2579e37e8af\") " pod="openshift-marketplace/redhat-marketplace-4z6bc" Nov 22 07:13:22 crc kubenswrapper[4853]: I1122 07:13:22.259071 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jfvfv\" (UniqueName: \"kubernetes.io/projected/6240b5f2-c1bb-4478-8935-b2579e37e8af-kube-api-access-jfvfv\") pod \"redhat-marketplace-4z6bc\" (UID: \"6240b5f2-c1bb-4478-8935-b2579e37e8af\") " pod="openshift-marketplace/redhat-marketplace-4z6bc" Nov 22 07:13:22 crc kubenswrapper[4853]: I1122 07:13:22.278661 4853 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-7jnds container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.34:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Nov 22 07:13:22 crc kubenswrapper[4853]: I1122 07:13:22.278761 4853 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7jnds" podUID="472b3cc8-386e-4828-a725-263057fb299b" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.34:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Nov 22 07:13:22 crc kubenswrapper[4853]: I1122 07:13:22.292858 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-xmnqz" event={"ID":"05b7fb71-56a6-4875-a680-995a1a2194d6","Type":"ContainerStarted","Data":"d0e03c48da592738138667b53d8fbfe2397be39727bbab50ba09ea1eb495b991"} Nov 22 07:13:22 crc kubenswrapper[4853]: I1122 07:13:22.296464 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9gwch" 
event={"ID":"3daf1927-a46c-4be1-ace4-f62d448fb994","Type":"ContainerStarted","Data":"abb8bb4f9b66ac5cf2d6f377b4c8cf4c27f07c6c8d836d1db603dedf014358d9"} Nov 22 07:13:22 crc kubenswrapper[4853]: I1122 07:13:22.302259 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-99wl6" event={"ID":"a1a696f2-274f-4b1c-9212-fc280920f69f","Type":"ContainerStarted","Data":"5ca82fecc2597b97feec3affd0d49b8b29200dac9384de6e34be2a5efac76761"} Nov 22 07:13:22 crc kubenswrapper[4853]: I1122 07:13:22.304269 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4z6bc" Nov 22 07:13:22 crc kubenswrapper[4853]: I1122 07:13:22.304493 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-n6vz6" event={"ID":"7e73b9e6-c1a8-411b-9360-32dc388d76f1","Type":"ContainerStarted","Data":"1eb7ca64538db19093c4609e8c1eaa6eee688358495539efae025a14e36ba1c9"} Nov 22 07:13:22 crc kubenswrapper[4853]: I1122 07:13:22.305856 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sk8bz" event={"ID":"a81b49b7-c4a0-4397-8524-ffaa67583496","Type":"ContainerStarted","Data":"7e44e6ed2b4f392aa935319d3fba61fff428961451a3d7acea225d90811a6372"} Nov 22 07:13:22 crc kubenswrapper[4853]: I1122 07:13:22.311043 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2p6qj\" (UID: \"541af556-5dce-45ed-bf9e-f6faf6b146ca\") " pod="openshift-image-registry/image-registry-697d97f7c8-2p6qj" Nov 22 07:13:22 crc kubenswrapper[4853]: I1122 07:13:22.311258 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ef1b69bb-e10e-4e77-83ef-2467f1ffdcaa-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"ef1b69bb-e10e-4e77-83ef-2467f1ffdcaa\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 22 07:13:22 crc kubenswrapper[4853]: E1122 07:13:22.311645 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:22.811608536 +0000 UTC m=+201.652231162 (durationBeforeRetry 500ms). 
Nov 22 07:13:22 crc kubenswrapper[4853]: I1122 07:13:22.311950 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ef1b69bb-e10e-4e77-83ef-2467f1ffdcaa-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"ef1b69bb-e10e-4e77-83ef-2467f1ffdcaa\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Nov 22 07:13:22 crc kubenswrapper[4853]: I1122 07:13:22.312026 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ef1b69bb-e10e-4e77-83ef-2467f1ffdcaa-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"ef1b69bb-e10e-4e77-83ef-2467f1ffdcaa\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Nov 22 07:13:22 crc kubenswrapper[4853]: I1122 07:13:22.315630 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-bqk2r" event={"ID":"2454431f-55ed-4abb-b70f-9382007e9026","Type":"ContainerStarted","Data":"a61325768cdbeddc4029469d52897da31322a25f99a9117627bf20c48950f7ab"}
Nov 22 07:13:22 crc kubenswrapper[4853]: I1122 07:13:22.316107 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-bqk2r"
Nov 22 07:13:22 crc kubenswrapper[4853]: I1122 07:13:22.322867 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-w6jpc" event={"ID":"8c51e400-95dc-4b1b-ab28-e3f2e5780758","Type":"ContainerStarted","Data":"303ff6071cb105c1363d90d877e74d9f52c5fec70c35ae12627616b15f4312c1"}
Nov 22 07:13:22 crc kubenswrapper[4853]: I1122 07:13:22.325430 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-h2jnh" event={"ID":"6c313448-9287-4014-b36e-ae4e14b9ee4e","Type":"ContainerStarted","Data":"47ff46a2ce2892c215cc10d51830b9f6dd57db850dc95f8f57188f971b5de323"}
Nov 22 07:13:22 crc kubenswrapper[4853]: I1122 07:13:22.333265 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-n6vz6" podStartSLOduration=142.333244833 podStartE2EDuration="2m22.333244833s" podCreationTimestamp="2025-11-22 07:11:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:13:22.330969502 +0000 UTC m=+201.171592148" watchObservedRunningTime="2025-11-22 07:13:22.333244833 +0000 UTC m=+201.173867459"
Nov 22 07:13:22 crc kubenswrapper[4853]: I1122 07:13:22.336249 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-jfgsj" event={"ID":"e2ccfb3a-48b6-4367-abae-d5ac6d053f77","Type":"ContainerStarted","Data":"a2344f8b30f67373f837dbdde96ca0837b84a2772b5747cdec03a49fe4699a11"}
Nov 22 07:13:22 crc kubenswrapper[4853]: I1122 07:13:22.338422 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-gwwg5" event={"ID":"b8aed1ad-ec7d-4e5d-b60a-b8d7bee2a0cc","Type":"ContainerStarted","Data":"fc14df829aaa8de5d98e277ca0b0264dd4fab417c2ddd11c50ac00d38543b964"}
Nov 22 07:13:22 crc kubenswrapper[4853]: I1122 07:13:22.338962 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-gwwg5"
Nov 22 07:13:22 crc kubenswrapper[4853]: I1122 07:13:22.340694 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ef1b69bb-e10e-4e77-83ef-2467f1ffdcaa-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"ef1b69bb-e10e-4e77-83ef-2467f1ffdcaa\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Nov 22 07:13:22 crc kubenswrapper[4853]: I1122 07:13:22.341592 4853 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-gwwg5 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.25:8080/healthz\": dial tcp 10.217.0.25:8080: connect: connection refused" start-of-body=
Nov 22 07:13:22 crc kubenswrapper[4853]: I1122 07:13:22.341733 4853 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-gwwg5" podUID="b8aed1ad-ec7d-4e5d-b60a-b8d7bee2a0cc" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.25:8080/healthz\": dial tcp 10.217.0.25:8080: connect: connection refused"
Nov 22 07:13:22 crc kubenswrapper[4853]: I1122 07:13:22.345824 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-t9fmp" event={"ID":"87d08723-fac8-48ca-9255-848a0e659721","Type":"ContainerStarted","Data":"5ce1ce1783cd43134c6837187a437a95ee193173dfcc7274582b09a2459aae3b"}
Nov 22 07:13:22 crc kubenswrapper[4853]: I1122 07:13:22.362814 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-wfjvm"]
Nov 22 07:13:22 crc kubenswrapper[4853]: I1122 07:13:22.363826 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b4zvh" event={"ID":"30996d2a-faed-48ba-80d6-d86b88fd5282","Type":"ContainerStarted","Data":"3f1dd7437e6a5f83eb4e7bc95ce38a31028e16cb0f72f9f3ceab5e9be0d91f94"}
Nov 22 07:13:22 crc kubenswrapper[4853]: I1122 07:13:22.363928 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wfjvm"
Nov 22 07:13:22 crc kubenswrapper[4853]: I1122 07:13:22.376471 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Nov 22 07:13:22 crc kubenswrapper[4853]: I1122 07:13:22.377411 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-t2flg" event={"ID":"913eeba3-a280-4ffa-a61a-febff59fcc2e","Type":"ContainerStarted","Data":"5fa238ff83b8c97f18025300b1a987980178960bdcb5fcc984f67f8878a96ec2"}
Nov 22 07:13:22 crc kubenswrapper[4853]: I1122 07:13:22.390179 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-bqk2r" podStartSLOduration=142.390154069 podStartE2EDuration="2m22.390154069s" podCreationTimestamp="2025-11-22 07:11:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:13:22.38249135 +0000 UTC m=+201.223113976" watchObservedRunningTime="2025-11-22 07:13:22.390154069 +0000 UTC m=+201.230776695"
Nov 22 07:13:22 crc kubenswrapper[4853]: I1122 07:13:22.401796 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k7t4t" event={"ID":"30cbccc7-41e5-46d2-b805-bbb03b8bb67c","Type":"ContainerStarted","Data":"b98febe14372212dfa466687e7db671005c266bac32bbe15b78e216f0b785da1"}
Nov 22 07:13:22 crc kubenswrapper[4853]: I1122 07:13:22.413280 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 22 07:13:22 crc kubenswrapper[4853]: I1122 07:13:22.413628 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-696ts" event={"ID":"83b3203a-f1e8-4d8e-8c42-4932026537ee","Type":"ContainerStarted","Data":"42300350ff01503858b00d6e11761cee955a989ec62e9e3907cc3c3736d4d5eb"}
Nov 22 07:13:22 crc kubenswrapper[4853]: E1122 07:13:22.414998 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:22.914969683 +0000 UTC m=+201.755592309 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 22 07:13:22 crc kubenswrapper[4853]: I1122 07:13:22.424571 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wfjvm"]
Nov 22 07:13:22 crc kubenswrapper[4853]: I1122 07:13:22.433636 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-h486l" event={"ID":"f5873a23-2127-4288-8d68-6d12756368b5","Type":"ContainerStarted","Data":"457e2d445355c21d266e5f4525e88c13c80ef3b819d5e9bbbb60132e71601f3e"}
Nov 22 07:13:22 crc kubenswrapper[4853]: I1122 07:13:22.447804 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-h2jnh" podStartSLOduration=142.447786043 podStartE2EDuration="2m22.447786043s" podCreationTimestamp="2025-11-22 07:11:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:13:22.44323192 +0000 UTC m=+201.283854556" watchObservedRunningTime="2025-11-22 07:13:22.447786043 +0000 UTC m=+201.288408669"
Nov 22 07:13:22 crc kubenswrapper[4853]: I1122 07:13:22.453995 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-9kg95" event={"ID":"29a2c0ca-4d4d-4a05-a4bb-96b05720f59f","Type":"ContainerStarted","Data":"57380db586ab3bce07de263d92724da4e538f2db2d400d9e85a0b86e2f562c1b"}
Nov 22 07:13:22 crc kubenswrapper[4853]: I1122 07:13:22.458656 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-22nw6" event={"ID":"23e292bd-15e6-4fc0-835e-2871bc0e9e8e","Type":"ContainerStarted","Data":"f6de50650b00e66b6116f03905dace08888c9c6e5205f0e238e5b977d771124c"}
Nov 22 07:13:22 crc kubenswrapper[4853]: I1122 07:13:22.517193 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2p6qj\" (UID: \"541af556-5dce-45ed-bf9e-f6faf6b146ca\") " pod="openshift-image-registry/image-registry-697d97f7c8-2p6qj"
Nov 22 07:13:22 crc kubenswrapper[4853]: I1122 07:13:22.517271 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34bd417d-67dc-4eb8-be82-c0e268ae3cd6-catalog-content\") pod \"redhat-marketplace-wfjvm\" (UID: \"34bd417d-67dc-4eb8-be82-c0e268ae3cd6\") " pod="openshift-marketplace/redhat-marketplace-wfjvm"
Nov 22 07:13:22 crc kubenswrapper[4853]: I1122 07:13:22.517370 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghkhc\" (UniqueName: \"kubernetes.io/projected/34bd417d-67dc-4eb8-be82-c0e268ae3cd6-kube-api-access-ghkhc\") pod \"redhat-marketplace-wfjvm\" (UID: \"34bd417d-67dc-4eb8-be82-c0e268ae3cd6\") " pod="openshift-marketplace/redhat-marketplace-wfjvm"
Nov 22 07:13:22 crc kubenswrapper[4853]: I1122 07:13:22.517391 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/34bd417d-67dc-4eb8-be82-c0e268ae3cd6-utilities\") pod \"redhat-marketplace-wfjvm\" (UID: \"34bd417d-67dc-4eb8-be82-c0e268ae3cd6\") " pod="openshift-marketplace/redhat-marketplace-wfjvm"
Nov 22 07:13:22 crc kubenswrapper[4853]: E1122 07:13:22.519932 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:23.019916642 +0000 UTC m=+201.860539268 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2p6qj" (UID: "541af556-5dce-45ed-bf9e-f6faf6b146ca") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 22 07:13:22 crc kubenswrapper[4853]: I1122 07:13:22.530145 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-h486l" podStartSLOduration=142.530122549 podStartE2EDuration="2m22.530122549s" podCreationTimestamp="2025-11-22 07:11:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:13:22.519867161 +0000 UTC m=+201.360489787" watchObservedRunningTime="2025-11-22 07:13:22.530122549 +0000 UTC m=+201.370745175"
Nov 22 07:13:22 crc kubenswrapper[4853]: I1122 07:13:22.618577 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 22 07:13:22 crc kubenswrapper[4853]: I1122 07:13:22.618954 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34bd417d-67dc-4eb8-be82-c0e268ae3cd6-catalog-content\") pod \"redhat-marketplace-wfjvm\" (UID: \"34bd417d-67dc-4eb8-be82-c0e268ae3cd6\") " pod="openshift-marketplace/redhat-marketplace-wfjvm"
Nov 22 07:13:22 crc kubenswrapper[4853]: I1122 07:13:22.619187 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ghkhc\" (UniqueName: \"kubernetes.io/projected/34bd417d-67dc-4eb8-be82-c0e268ae3cd6-kube-api-access-ghkhc\") pod \"redhat-marketplace-wfjvm\" (UID: \"34bd417d-67dc-4eb8-be82-c0e268ae3cd6\") " pod="openshift-marketplace/redhat-marketplace-wfjvm"
Nov 22 07:13:22 crc kubenswrapper[4853]: I1122 07:13:22.619227 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/34bd417d-67dc-4eb8-be82-c0e268ae3cd6-utilities\") pod \"redhat-marketplace-wfjvm\" (UID: \"34bd417d-67dc-4eb8-be82-c0e268ae3cd6\") " pod="openshift-marketplace/redhat-marketplace-wfjvm"
Nov 22 07:13:22 crc kubenswrapper[4853]: E1122 07:13:22.620677 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:23.120659537 +0000 UTC m=+201.961282163 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 22 07:13:22 crc kubenswrapper[4853]: I1122 07:13:22.630314 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34bd417d-67dc-4eb8-be82-c0e268ae3cd6-catalog-content\") pod \"redhat-marketplace-wfjvm\" (UID: \"34bd417d-67dc-4eb8-be82-c0e268ae3cd6\") " pod="openshift-marketplace/redhat-marketplace-wfjvm"
Nov 22 07:13:22 crc kubenswrapper[4853]: I1122 07:13:22.634079 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/34bd417d-67dc-4eb8-be82-c0e268ae3cd6-utilities\") pod \"redhat-marketplace-wfjvm\" (UID: \"34bd417d-67dc-4eb8-be82-c0e268ae3cd6\") " pod="openshift-marketplace/redhat-marketplace-wfjvm"
Nov 22 07:13:22 crc kubenswrapper[4853]: I1122 07:13:22.659975 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-696ts" podStartSLOduration=142.659953824 podStartE2EDuration="2m22.659953824s" podCreationTimestamp="2025-11-22 07:11:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:13:22.658617888 +0000 UTC m=+201.499240524" watchObservedRunningTime="2025-11-22 07:13:22.659953824 +0000 UTC m=+201.500576450"
Nov 22 07:13:22 crc kubenswrapper[4853]: I1122 07:13:22.660972 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-t2flg" podStartSLOduration=142.660964141 podStartE2EDuration="2m22.660964141s" podCreationTimestamp="2025-11-22 07:11:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:13:22.561062989 +0000 UTC m=+201.401685615" watchObservedRunningTime="2025-11-22 07:13:22.660964141 +0000 UTC m=+201.501586767"
Nov 22 07:13:22 crc kubenswrapper[4853]: I1122 07:13:22.688643 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-h2jnh"
Nov 22 07:13:22 crc kubenswrapper[4853]: I1122 07:13:22.689274 4853 patch_prober.go:28] interesting pod/router-default-5444994796-h2jnh container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body=
Nov 22 07:13:22 crc kubenswrapper[4853]: I1122 07:13:22.689320 4853 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h2jnh" podUID="6c313448-9287-4014-b36e-ae4e14b9ee4e" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused"
\"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Nov 22 07:13:22 crc kubenswrapper[4853]: I1122 07:13:22.694001 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ghkhc\" (UniqueName: \"kubernetes.io/projected/34bd417d-67dc-4eb8-be82-c0e268ae3cd6-kube-api-access-ghkhc\") pod \"redhat-marketplace-wfjvm\" (UID: \"34bd417d-67dc-4eb8-be82-c0e268ae3cd6\") " pod="openshift-marketplace/redhat-marketplace-wfjvm" Nov 22 07:13:22 crc kubenswrapper[4853]: I1122 07:13:22.723228 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2p6qj\" (UID: \"541af556-5dce-45ed-bf9e-f6faf6b146ca\") " pod="openshift-image-registry/image-registry-697d97f7c8-2p6qj" Nov 22 07:13:22 crc kubenswrapper[4853]: E1122 07:13:22.723678 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:23.223659154 +0000 UTC m=+202.064281780 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2p6qj" (UID: "541af556-5dce-45ed-bf9e-f6faf6b146ca") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:22 crc kubenswrapper[4853]: I1122 07:13:22.755180 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-gwwg5" podStartSLOduration=142.755160239 podStartE2EDuration="2m22.755160239s" podCreationTimestamp="2025-11-22 07:11:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:13:22.697159844 +0000 UTC m=+201.537782470" watchObservedRunningTime="2025-11-22 07:13:22.755160239 +0000 UTC m=+201.595782865" Nov 22 07:13:22 crc kubenswrapper[4853]: I1122 07:13:22.807681 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-t9fmp" podStartSLOduration=9.807660224 podStartE2EDuration="9.807660224s" podCreationTimestamp="2025-11-22 07:13:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:13:22.805946128 +0000 UTC m=+201.646568764" watchObservedRunningTime="2025-11-22 07:13:22.807660224 +0000 UTC m=+201.648282850" Nov 22 07:13:22 crc kubenswrapper[4853]: I1122 07:13:22.824353 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:22 crc kubenswrapper[4853]: E1122 07:13:22.824979 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:23.324961745 +0000 UTC m=+202.165584371 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:22 crc kubenswrapper[4853]: I1122 07:13:22.834422 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-22nw6" podStartSLOduration=142.83440022 podStartE2EDuration="2m22.83440022s" podCreationTimestamp="2025-11-22 07:11:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:13:22.831855231 +0000 UTC m=+201.672477867" watchObservedRunningTime="2025-11-22 07:13:22.83440022 +0000 UTC m=+201.675022846" Nov 22 07:13:22 crc kubenswrapper[4853]: I1122 07:13:22.926493 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2p6qj\" (UID: \"541af556-5dce-45ed-bf9e-f6faf6b146ca\") " pod="openshift-image-registry/image-registry-697d97f7c8-2p6qj" Nov 22 07:13:22 crc kubenswrapper[4853]: E1122 07:13:22.927064 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:23.427047096 +0000 UTC m=+202.267669712 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2p6qj" (UID: "541af556-5dce-45ed-bf9e-f6faf6b146ca") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:22 crc kubenswrapper[4853]: I1122 07:13:22.977876 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-9kg95" podStartSLOduration=142.977855096 podStartE2EDuration="2m22.977855096s" podCreationTimestamp="2025-11-22 07:11:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:13:22.882192608 +0000 UTC m=+201.722815244" watchObservedRunningTime="2025-11-22 07:13:22.977855096 +0000 UTC m=+201.818477722" Nov 22 07:13:22 crc kubenswrapper[4853]: I1122 07:13:22.979169 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4z6bc"] Nov 22 07:13:22 crc kubenswrapper[4853]: I1122 07:13:22.985089 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wfjvm" Nov 22 07:13:23 crc kubenswrapper[4853]: I1122 07:13:23.027667 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:23 crc kubenswrapper[4853]: E1122 07:13:23.028123 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:23.52810701 +0000 UTC m=+202.368729636 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:23 crc kubenswrapper[4853]: I1122 07:13:23.130805 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2p6qj\" (UID: \"541af556-5dce-45ed-bf9e-f6faf6b146ca\") " pod="openshift-image-registry/image-registry-697d97f7c8-2p6qj" Nov 22 07:13:23 crc kubenswrapper[4853]: E1122 07:13:23.131383 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:23.631359153 +0000 UTC m=+202.471981949 (durationBeforeRetry 500ms). 
Nov 22 07:13:23 crc kubenswrapper[4853]: I1122 07:13:23.231923 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 22 07:13:23 crc kubenswrapper[4853]: I1122 07:13:23.232366 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9cc2bf97-eb39-4b0c-abda-99b49bb530fd-metrics-certs\") pod \"network-metrics-daemon-pd6gs\" (UID: \"9cc2bf97-eb39-4b0c-abda-99b49bb530fd\") " pod="openshift-multus/network-metrics-daemon-pd6gs"
Nov 22 07:13:23 crc kubenswrapper[4853]: E1122 07:13:23.232659 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:23.732640164 +0000 UTC m=+202.573262800 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 22 07:13:23 crc kubenswrapper[4853]: I1122 07:13:23.236148 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Nov 22 07:13:23 crc kubenswrapper[4853]: I1122 07:13:23.261710 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9cc2bf97-eb39-4b0c-abda-99b49bb530fd-metrics-certs\") pod \"network-metrics-daemon-pd6gs\" (UID: \"9cc2bf97-eb39-4b0c-abda-99b49bb530fd\") " pod="openshift-multus/network-metrics-daemon-pd6gs"
Nov 22 07:13:23 crc kubenswrapper[4853]: I1122 07:13:23.279634 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Nov 22 07:13:23 crc kubenswrapper[4853]: I1122 07:13:23.328940 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wfjvm"]
Nov 22 07:13:23 crc kubenswrapper[4853]: I1122 07:13:23.334490 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2p6qj\" (UID: \"541af556-5dce-45ed-bf9e-f6faf6b146ca\") " pod="openshift-image-registry/image-registry-697d97f7c8-2p6qj"
Nov 22 07:13:23 crc kubenswrapper[4853]: E1122 07:13:23.334959 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:23.834942601 +0000 UTC m=+202.675565227 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2p6qj" (UID: "541af556-5dce-45ed-bf9e-f6faf6b146ca") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 22 07:13:23 crc kubenswrapper[4853]: I1122 07:13:23.365423 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-klwzw"]
Nov 22 07:13:23 crc kubenswrapper[4853]: I1122 07:13:23.366669 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-klwzw"
Nov 22 07:13:23 crc kubenswrapper[4853]: I1122 07:13:23.367546 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c"
Nov 22 07:13:23 crc kubenswrapper[4853]: I1122 07:13:23.369496 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Nov 22 07:13:23 crc kubenswrapper[4853]: I1122 07:13:23.376916 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pd6gs"
Nov 22 07:13:23 crc kubenswrapper[4853]: I1122 07:13:23.384842 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-klwzw"]
Nov 22 07:13:23 crc kubenswrapper[4853]: I1122 07:13:23.438806 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 22 07:13:23 crc kubenswrapper[4853]: I1122 07:13:23.439042 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce89388a-728c-4afc-b155-2813e35a8413-catalog-content\") pod \"redhat-operators-klwzw\" (UID: \"ce89388a-728c-4afc-b155-2813e35a8413\") " pod="openshift-marketplace/redhat-operators-klwzw"
Nov 22 07:13:23 crc kubenswrapper[4853]: I1122 07:13:23.439191 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce89388a-728c-4afc-b155-2813e35a8413-utilities\") pod \"redhat-operators-klwzw\" (UID: \"ce89388a-728c-4afc-b155-2813e35a8413\") " pod="openshift-marketplace/redhat-operators-klwzw"
Nov 22 07:13:23 crc kubenswrapper[4853]: I1122 07:13:23.439215 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjggf\" (UniqueName: \"kubernetes.io/projected/ce89388a-728c-4afc-b155-2813e35a8413-kube-api-access-cjggf\") pod \"redhat-operators-klwzw\" (UID: \"ce89388a-728c-4afc-b155-2813e35a8413\") " pod="openshift-marketplace/redhat-operators-klwzw"
Nov 22 07:13:23 crc kubenswrapper[4853]: E1122 07:13:23.439365 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:23.939345825 +0000 UTC m=+202.779968451 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 22 07:13:23 crc kubenswrapper[4853]: I1122 07:13:23.464623 4853 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-7jnds container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.34:5443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Nov 22 07:13:23 crc kubenswrapper[4853]: I1122 07:13:23.464702 4853 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7jnds" podUID="472b3cc8-386e-4828-a725-263057fb299b" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.34:5443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Nov 22 07:13:23 crc kubenswrapper[4853]: I1122 07:13:23.477982 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"ef1b69bb-e10e-4e77-83ef-2467f1ffdcaa","Type":"ContainerStarted","Data":"61030eac1bee2d43ff7e66cdaf432b25b0c1650dba1f5bb5ff9949871ca4b724"}
Nov 22 07:13:23 crc kubenswrapper[4853]: I1122 07:13:23.488332 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4z6bc" event={"ID":"6240b5f2-c1bb-4478-8935-b2579e37e8af","Type":"ContainerStarted","Data":"36a0cef9a28378820b968eaf4f3de291f99d28bf7f1af1e70581fc0d4f092229"}
Nov 22 07:13:23 crc kubenswrapper[4853]: I1122 07:13:23.493865 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sk8bz" event={"ID":"a81b49b7-c4a0-4397-8524-ffaa67583496","Type":"ContainerStarted","Data":"90e104069b22913209c42e42a8803206e38551b680e778dbd63a83e6f2af5f4c"}
Nov 22 07:13:23 crc kubenswrapper[4853]: I1122 07:13:23.497124 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wfjvm" event={"ID":"34bd417d-67dc-4eb8-be82-c0e268ae3cd6","Type":"ContainerStarted","Data":"b3878bfbc583bc65a94e11d667aefca3994c3d5d7d02004bebd92ea20c7a101e"}
Nov 22 07:13:23 crc kubenswrapper[4853]: I1122 07:13:23.499825 4853 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-bqk2r container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.12:8443/healthz\": dial tcp 10.217.0.12:8443: connect: connection refused" start-of-body=
Nov 22 07:13:23 crc kubenswrapper[4853]: I1122 07:13:23.499891 4853 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-bqk2r" podUID="2454431f-55ed-4abb-b70f-9382007e9026" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.12:8443/healthz\": dial tcp 10.217.0.12:8443: connect: connection refused"
\"https://10.217.0.12:8443/healthz\": dial tcp 10.217.0.12:8443: connect: connection refused" Nov 22 07:13:23 crc kubenswrapper[4853]: I1122 07:13:23.500367 4853 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-gwwg5 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.25:8080/healthz\": dial tcp 10.217.0.25:8080: connect: connection refused" start-of-body= Nov 22 07:13:23 crc kubenswrapper[4853]: I1122 07:13:23.500402 4853 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-gwwg5" podUID="b8aed1ad-ec7d-4e5d-b60a-b8d7bee2a0cc" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.25:8080/healthz\": dial tcp 10.217.0.25:8080: connect: connection refused" Nov 22 07:13:23 crc kubenswrapper[4853]: I1122 07:13:23.541768 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2p6qj\" (UID: \"541af556-5dce-45ed-bf9e-f6faf6b146ca\") " pod="openshift-image-registry/image-registry-697d97f7c8-2p6qj" Nov 22 07:13:23 crc kubenswrapper[4853]: I1122 07:13:23.542393 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce89388a-728c-4afc-b155-2813e35a8413-utilities\") pod \"redhat-operators-klwzw\" (UID: \"ce89388a-728c-4afc-b155-2813e35a8413\") " pod="openshift-marketplace/redhat-operators-klwzw" Nov 22 07:13:23 crc kubenswrapper[4853]: I1122 07:13:23.542421 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cjggf\" (UniqueName: \"kubernetes.io/projected/ce89388a-728c-4afc-b155-2813e35a8413-kube-api-access-cjggf\") pod \"redhat-operators-klwzw\" (UID: \"ce89388a-728c-4afc-b155-2813e35a8413\") " pod="openshift-marketplace/redhat-operators-klwzw" Nov 22 07:13:23 crc kubenswrapper[4853]: I1122 07:13:23.542479 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce89388a-728c-4afc-b155-2813e35a8413-catalog-content\") pod \"redhat-operators-klwzw\" (UID: \"ce89388a-728c-4afc-b155-2813e35a8413\") " pod="openshift-marketplace/redhat-operators-klwzw" Nov 22 07:13:23 crc kubenswrapper[4853]: I1122 07:13:23.543542 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce89388a-728c-4afc-b155-2813e35a8413-catalog-content\") pod \"redhat-operators-klwzw\" (UID: \"ce89388a-728c-4afc-b155-2813e35a8413\") " pod="openshift-marketplace/redhat-operators-klwzw" Nov 22 07:13:23 crc kubenswrapper[4853]: E1122 07:13:23.544000 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:24.043983387 +0000 UTC m=+202.884606013 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2p6qj" (UID: "541af556-5dce-45ed-bf9e-f6faf6b146ca") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:23 crc kubenswrapper[4853]: I1122 07:13:23.544501 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce89388a-728c-4afc-b155-2813e35a8413-utilities\") pod \"redhat-operators-klwzw\" (UID: \"ce89388a-728c-4afc-b155-2813e35a8413\") " pod="openshift-marketplace/redhat-operators-klwzw" Nov 22 07:13:23 crc kubenswrapper[4853]: I1122 07:13:23.594296 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cjggf\" (UniqueName: \"kubernetes.io/projected/ce89388a-728c-4afc-b155-2813e35a8413-kube-api-access-cjggf\") pod \"redhat-operators-klwzw\" (UID: \"ce89388a-728c-4afc-b155-2813e35a8413\") " pod="openshift-marketplace/redhat-operators-klwzw" Nov 22 07:13:23 crc kubenswrapper[4853]: I1122 07:13:23.644373 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:23 crc kubenswrapper[4853]: E1122 07:13:23.646294 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:24.146269604 +0000 UTC m=+202.986892230 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:23 crc kubenswrapper[4853]: I1122 07:13:23.684048 4853 patch_prober.go:28] interesting pod/router-default-5444994796-h2jnh container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Nov 22 07:13:23 crc kubenswrapper[4853]: I1122 07:13:23.684112 4853 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h2jnh" podUID="6c313448-9287-4014-b36e-ae4e14b9ee4e" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Nov 22 07:13:23 crc kubenswrapper[4853]: I1122 07:13:23.687546 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-klwzw" Nov 22 07:13:23 crc kubenswrapper[4853]: I1122 07:13:23.746619 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2p6qj\" (UID: \"541af556-5dce-45ed-bf9e-f6faf6b146ca\") " pod="openshift-image-registry/image-registry-697d97f7c8-2p6qj" Nov 22 07:13:23 crc kubenswrapper[4853]: E1122 07:13:23.747474 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:24.247445731 +0000 UTC m=+203.088068357 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2p6qj" (UID: "541af556-5dce-45ed-bf9e-f6faf6b146ca") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:23 crc kubenswrapper[4853]: I1122 07:13:23.764787 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-xmnqz" podStartSLOduration=144.764735641 podStartE2EDuration="2m24.764735641s" podCreationTimestamp="2025-11-22 07:10:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:13:23.53749409 +0000 UTC m=+202.378116736" watchObservedRunningTime="2025-11-22 07:13:23.764735641 +0000 UTC m=+202.605358267" Nov 22 07:13:23 crc kubenswrapper[4853]: I1122 07:13:23.770201 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-fx9sl"] Nov 22 07:13:23 crc kubenswrapper[4853]: I1122 07:13:23.776327 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-fx9sl" Nov 22 07:13:23 crc kubenswrapper[4853]: I1122 07:13:23.788691 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-pd6gs"] Nov 22 07:13:23 crc kubenswrapper[4853]: I1122 07:13:23.790512 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fx9sl"] Nov 22 07:13:23 crc kubenswrapper[4853]: I1122 07:13:23.848304 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:23 crc kubenswrapper[4853]: I1122 07:13:23.848547 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a52e070-929c-4194-8197-d66d88780fdc-utilities\") pod \"redhat-operators-fx9sl\" (UID: \"5a52e070-929c-4194-8197-d66d88780fdc\") " pod="openshift-marketplace/redhat-operators-fx9sl" Nov 22 07:13:23 crc kubenswrapper[4853]: I1122 07:13:23.848613 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a52e070-929c-4194-8197-d66d88780fdc-catalog-content\") pod \"redhat-operators-fx9sl\" (UID: \"5a52e070-929c-4194-8197-d66d88780fdc\") " pod="openshift-marketplace/redhat-operators-fx9sl" Nov 22 07:13:23 crc kubenswrapper[4853]: I1122 07:13:23.848641 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4wfq\" (UniqueName: \"kubernetes.io/projected/5a52e070-929c-4194-8197-d66d88780fdc-kube-api-access-k4wfq\") pod \"redhat-operators-fx9sl\" (UID: \"5a52e070-929c-4194-8197-d66d88780fdc\") " pod="openshift-marketplace/redhat-operators-fx9sl" Nov 22 07:13:23 crc kubenswrapper[4853]: E1122 07:13:23.849021 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:24.348988468 +0000 UTC m=+203.189611094 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:23 crc kubenswrapper[4853]: I1122 07:13:23.952155 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k4wfq\" (UniqueName: \"kubernetes.io/projected/5a52e070-929c-4194-8197-d66d88780fdc-kube-api-access-k4wfq\") pod \"redhat-operators-fx9sl\" (UID: \"5a52e070-929c-4194-8197-d66d88780fdc\") " pod="openshift-marketplace/redhat-operators-fx9sl" Nov 22 07:13:23 crc kubenswrapper[4853]: I1122 07:13:23.952280 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2p6qj\" (UID: \"541af556-5dce-45ed-bf9e-f6faf6b146ca\") " pod="openshift-image-registry/image-registry-697d97f7c8-2p6qj" Nov 22 07:13:23 crc kubenswrapper[4853]: I1122 07:13:23.952326 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a52e070-929c-4194-8197-d66d88780fdc-utilities\") pod \"redhat-operators-fx9sl\" (UID: \"5a52e070-929c-4194-8197-d66d88780fdc\") " pod="openshift-marketplace/redhat-operators-fx9sl" Nov 22 07:13:23 crc kubenswrapper[4853]: I1122 07:13:23.952385 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a52e070-929c-4194-8197-d66d88780fdc-catalog-content\") pod \"redhat-operators-fx9sl\" (UID: \"5a52e070-929c-4194-8197-d66d88780fdc\") " pod="openshift-marketplace/redhat-operators-fx9sl" Nov 22 07:13:23 crc kubenswrapper[4853]: E1122 07:13:23.953159 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:24.453139456 +0000 UTC m=+203.293762082 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2p6qj" (UID: "541af556-5dce-45ed-bf9e-f6faf6b146ca") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:23 crc kubenswrapper[4853]: I1122 07:13:23.953994 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a52e070-929c-4194-8197-d66d88780fdc-catalog-content\") pod \"redhat-operators-fx9sl\" (UID: \"5a52e070-929c-4194-8197-d66d88780fdc\") " pod="openshift-marketplace/redhat-operators-fx9sl" Nov 22 07:13:23 crc kubenswrapper[4853]: I1122 07:13:23.954055 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a52e070-929c-4194-8197-d66d88780fdc-utilities\") pod \"redhat-operators-fx9sl\" (UID: \"5a52e070-929c-4194-8197-d66d88780fdc\") " pod="openshift-marketplace/redhat-operators-fx9sl" Nov 22 07:13:23 crc kubenswrapper[4853]: I1122 07:13:23.981408 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k4wfq\" (UniqueName: \"kubernetes.io/projected/5a52e070-929c-4194-8197-d66d88780fdc-kube-api-access-k4wfq\") pod \"redhat-operators-fx9sl\" (UID: \"5a52e070-929c-4194-8197-d66d88780fdc\") " pod="openshift-marketplace/redhat-operators-fx9sl" Nov 22 07:13:23 crc kubenswrapper[4853]: I1122 07:13:23.988616 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-klwzw"] Nov 22 07:13:24 crc kubenswrapper[4853]: I1122 07:13:24.054186 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:24 crc kubenswrapper[4853]: E1122 07:13:24.054390 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:24.554360355 +0000 UTC m=+203.394982981 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:24 crc kubenswrapper[4853]: I1122 07:13:24.054516 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2p6qj\" (UID: \"541af556-5dce-45ed-bf9e-f6faf6b146ca\") " pod="openshift-image-registry/image-registry-697d97f7c8-2p6qj" Nov 22 07:13:24 crc kubenswrapper[4853]: E1122 07:13:24.055137 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:24.555129455 +0000 UTC m=+203.395752081 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2p6qj" (UID: "541af556-5dce-45ed-bf9e-f6faf6b146ca") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:24 crc kubenswrapper[4853]: I1122 07:13:24.124965 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fx9sl" Nov 22 07:13:24 crc kubenswrapper[4853]: I1122 07:13:24.160680 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:24 crc kubenswrapper[4853]: E1122 07:13:24.161135 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:24.661103063 +0000 UTC m=+203.501725689 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:24 crc kubenswrapper[4853]: I1122 07:13:24.262658 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2p6qj\" (UID: \"541af556-5dce-45ed-bf9e-f6faf6b146ca\") " pod="openshift-image-registry/image-registry-697d97f7c8-2p6qj" Nov 22 07:13:24 crc kubenswrapper[4853]: E1122 07:13:24.263153 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:24.763135923 +0000 UTC m=+203.603758549 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2p6qj" (UID: "541af556-5dce-45ed-bf9e-f6faf6b146ca") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:24 crc kubenswrapper[4853]: I1122 07:13:24.343932 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fx9sl"] Nov 22 07:13:24 crc kubenswrapper[4853]: I1122 07:13:24.364880 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:24 crc kubenswrapper[4853]: E1122 07:13:24.365731 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:24.865708468 +0000 UTC m=+203.706331094 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
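Every failure above is immediately followed by nestedpendingoperations refusing further work on the volume until a deadline ("No retries permitted until ... durationBeforeRetry 500ms"), which is why the same pair of errors recurs on a roughly 500ms cadence rather than in a tight loop. The gate can be pictured as a per-volume record of the earliest permitted retry time; a simplified sketch under the assumption of a fixed 500ms delay (the kubelet's real backoff also grows on repeated failures up to a cap):

package main

import (
	"fmt"
	"time"
)

// opBackoff sketches the per-volume gate behind
// "No retries permitted until <t> (durationBeforeRetry 500ms)".
// Hypothetical structure, not the kubelet's nestedpendingoperations code.
type opBackoff struct {
	retryAfter time.Time
	delay      time.Duration
}

// tryStart rejects a new attempt that arrives before the deadline.
func (b *opBackoff) tryStart(now time.Time) error {
	if now.Before(b.retryAfter) {
		return fmt.Errorf("no retries permitted until %v (durationBeforeRetry %v)",
			b.retryAfter, b.delay)
	}
	return nil
}

// recordFailure arms the gate after a failed mount or unmount.
func (b *opBackoff) recordFailure(now time.Time) {
	if b.delay == 0 {
		b.delay = 500 * time.Millisecond // initial backoff seen in the log
	}
	b.retryAfter = now.Add(b.delay)
}

func main() {
	var b opBackoff
	now := time.Now()
	b.recordFailure(now)
	fmt.Println(b.tryStart(now.Add(100 * time.Millisecond))) // rejected: too early
	fmt.Println(b.tryStart(now.Add(600 * time.Millisecond))) // <nil>: retry allowed
}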
Nov 22 07:13:24 crc kubenswrapper[4853]: I1122 07:13:24.467866 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2p6qj\" (UID: \"541af556-5dce-45ed-bf9e-f6faf6b146ca\") " pod="openshift-image-registry/image-registry-697d97f7c8-2p6qj" Nov 22 07:13:24 crc kubenswrapper[4853]: E1122 07:13:24.468349 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:24.968326984 +0000 UTC m=+203.808949610 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2p6qj" (UID: "541af556-5dce-45ed-bf9e-f6faf6b146ca") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:24 crc kubenswrapper[4853]: I1122 07:13:24.512304 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-99wl6" event={"ID":"a1a696f2-274f-4b1c-9212-fc280920f69f","Type":"ContainerStarted","Data":"2d6fd518bc8b5fe332dfd76a8e84cd3446763dbaa2b0b967ed76be3404d4cdaa"} Nov 22 07:13:24 crc kubenswrapper[4853]: I1122 07:13:24.515525 4853 generic.go:334] "Generic (PLEG): container finished" podID="a81b49b7-c4a0-4397-8524-ffaa67583496" containerID="90e104069b22913209c42e42a8803206e38551b680e778dbd63a83e6f2af5f4c" exitCode=0 Nov 22 07:13:24 crc kubenswrapper[4853]: I1122 07:13:24.515662 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sk8bz" event={"ID":"a81b49b7-c4a0-4397-8524-ffaa67583496","Type":"ContainerDied","Data":"90e104069b22913209c42e42a8803206e38551b680e778dbd63a83e6f2af5f4c"} Nov 22 07:13:24 crc kubenswrapper[4853]: I1122 07:13:24.519770 4853 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 22 07:13:24 crc kubenswrapper[4853]: I1122 07:13:24.520017 4853 generic.go:334] "Generic (PLEG): container finished" podID="30996d2a-faed-48ba-80d6-d86b88fd5282" containerID="8d77d7e22e6011d7944d1b73d7189e71f1a9abca66043265f8500c11d25ae5ae" exitCode=0 Nov 22 07:13:24 crc kubenswrapper[4853]: I1122 07:13:24.520118 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b4zvh" event={"ID":"30996d2a-faed-48ba-80d6-d86b88fd5282","Type":"ContainerDied","Data":"8d77d7e22e6011d7944d1b73d7189e71f1a9abca66043265f8500c11d25ae5ae"} Nov 22 07:13:24 crc kubenswrapper[4853]: I1122 07:13:24.523221 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-klwzw"
event={"ID":"ce89388a-728c-4afc-b155-2813e35a8413","Type":"ContainerStarted","Data":"4c8562832d4c146788aa6775494a35f49c77b82513395c18a74bbdc09c23bc59"} Nov 22 07:13:24 crc kubenswrapper[4853]: I1122 07:13:24.526254 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-pd6gs" event={"ID":"9cc2bf97-eb39-4b0c-abda-99b49bb530fd","Type":"ContainerStarted","Data":"4c2e0ee57e6d771a9ba916eb5a697473972da9c0b273654bb79b9613f7cc68a1"} Nov 22 07:13:24 crc kubenswrapper[4853]: I1122 07:13:24.528145 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fx9sl" event={"ID":"5a52e070-929c-4194-8197-d66d88780fdc","Type":"ContainerStarted","Data":"3c1c5c2ba629e0e64d79fc45329296bd18baabec417372171a8475ed5d8c8ab1"} Nov 22 07:13:24 crc kubenswrapper[4853]: I1122 07:13:24.529911 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k7t4t" event={"ID":"30cbccc7-41e5-46d2-b805-bbb03b8bb67c","Type":"ContainerStarted","Data":"7a7151e0fbfe8c259f178a774826e282af20e9d72f0190989970ddc47ee9871f"} Nov 22 07:13:24 crc kubenswrapper[4853]: I1122 07:13:24.543533 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9gwch" event={"ID":"3daf1927-a46c-4be1-ace4-f62d448fb994","Type":"ContainerStarted","Data":"d526940204a98c51ae087c06d2b92d34256d7274fa7bef4bcb58d72b1ef1276a"} Nov 22 07:13:24 crc kubenswrapper[4853]: I1122 07:13:24.544603 4853 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-gwwg5 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.25:8080/healthz\": dial tcp 10.217.0.25:8080: connect: connection refused" start-of-body= Nov 22 07:13:24 crc kubenswrapper[4853]: I1122 07:13:24.544694 4853 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-gwwg5" podUID="b8aed1ad-ec7d-4e5d-b60a-b8d7bee2a0cc" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.25:8080/healthz\": dial tcp 10.217.0.25:8080: connect: connection refused" Nov 22 07:13:24 crc kubenswrapper[4853]: I1122 07:13:24.571571 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:24 crc kubenswrapper[4853]: E1122 07:13:24.572177 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:25.072155063 +0000 UTC m=+203.912777689 (durationBeforeRetry 500ms). 
Nov 22 07:13:24 crc kubenswrapper[4853]: I1122 07:13:24.674117 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2p6qj\" (UID: \"541af556-5dce-45ed-bf9e-f6faf6b146ca\") " pod="openshift-image-registry/image-registry-697d97f7c8-2p6qj" Nov 22 07:13:24 crc kubenswrapper[4853]: E1122 07:13:24.674663 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:25.174642906 +0000 UTC m=+204.015265532 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2p6qj" (UID: "541af556-5dce-45ed-bf9e-f6faf6b146ca") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:24 crc kubenswrapper[4853]: I1122 07:13:24.683321 4853 patch_prober.go:28] interesting pod/router-default-5444994796-h2jnh container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Nov 22 07:13:24 crc kubenswrapper[4853]: I1122 07:13:24.683473 4853 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h2jnh" podUID="6c313448-9287-4014-b36e-ae4e14b9ee4e" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Nov 22 07:13:24 crc kubenswrapper[4853]: I1122 07:13:24.776068 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:24 crc kubenswrapper[4853]: E1122 07:13:24.776285 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:25.276255825 +0000 UTC m=+204.116878451 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:24 crc kubenswrapper[4853]: I1122 07:13:24.776649 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2p6qj\" (UID: \"541af556-5dce-45ed-bf9e-f6faf6b146ca\") " pod="openshift-image-registry/image-registry-697d97f7c8-2p6qj" Nov 22 07:13:24 crc kubenswrapper[4853]: E1122 07:13:24.777090 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:25.277080867 +0000 UTC m=+204.117703493 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2p6qj" (UID: "541af556-5dce-45ed-bf9e-f6faf6b146ca") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:24 crc kubenswrapper[4853]: I1122 07:13:24.878218 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:24 crc kubenswrapper[4853]: E1122 07:13:24.878492 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:25.378429779 +0000 UTC m=+204.219052405 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:24 crc kubenswrapper[4853]: I1122 07:13:24.878840 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2p6qj\" (UID: \"541af556-5dce-45ed-bf9e-f6faf6b146ca\") " pod="openshift-image-registry/image-registry-697d97f7c8-2p6qj" Nov 22 07:13:24 crc kubenswrapper[4853]: E1122 07:13:24.879394 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:25.379379795 +0000 UTC m=+204.220002421 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2p6qj" (UID: "541af556-5dce-45ed-bf9e-f6faf6b146ca") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:24 crc kubenswrapper[4853]: I1122 07:13:24.980656 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:24 crc kubenswrapper[4853]: E1122 07:13:24.980933 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:25.480901031 +0000 UTC m=+204.321523657 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:24 crc kubenswrapper[4853]: I1122 07:13:24.981008 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2p6qj\" (UID: \"541af556-5dce-45ed-bf9e-f6faf6b146ca\") " pod="openshift-image-registry/image-registry-697d97f7c8-2p6qj" Nov 22 07:13:24 crc kubenswrapper[4853]: E1122 07:13:24.981514 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:25.481487867 +0000 UTC m=+204.322110493 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2p6qj" (UID: "541af556-5dce-45ed-bf9e-f6faf6b146ca") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:25 crc kubenswrapper[4853]: I1122 07:13:25.081968 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:25 crc kubenswrapper[4853]: E1122 07:13:25.082104 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:25.582079209 +0000 UTC m=+204.422701835 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:25 crc kubenswrapper[4853]: I1122 07:13:25.082430 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2p6qj\" (UID: \"541af556-5dce-45ed-bf9e-f6faf6b146ca\") " pod="openshift-image-registry/image-registry-697d97f7c8-2p6qj" Nov 22 07:13:25 crc kubenswrapper[4853]: E1122 07:13:25.082781 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:25.582772967 +0000 UTC m=+204.423395593 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2p6qj" (UID: "541af556-5dce-45ed-bf9e-f6faf6b146ca") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:25 crc kubenswrapper[4853]: I1122 07:13:25.147068 4853 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-bqk2r container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.12:8443/healthz\": dial tcp 10.217.0.12:8443: connect: connection refused" start-of-body= Nov 22 07:13:25 crc kubenswrapper[4853]: I1122 07:13:25.147157 4853 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-bqk2r" podUID="2454431f-55ed-4abb-b70f-9382007e9026" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.12:8443/healthz\": dial tcp 10.217.0.12:8443: connect: connection refused" Nov 22 07:13:25 crc kubenswrapper[4853]: I1122 07:13:25.147494 4853 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-bqk2r container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.12:8443/healthz\": dial tcp 10.217.0.12:8443: connect: connection refused" start-of-body= Nov 22 07:13:25 crc kubenswrapper[4853]: I1122 07:13:25.147591 4853 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-bqk2r" podUID="2454431f-55ed-4abb-b70f-9382007e9026" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.12:8443/healthz\": dial tcp 10.217.0.12:8443: connect: connection refused" Nov 22 07:13:25 crc kubenswrapper[4853]: I1122 07:13:25.184129 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:25 crc kubenswrapper[4853]: E1122 07:13:25.184391 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:25.684355556 +0000 UTC m=+204.524978202 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:25 crc kubenswrapper[4853]: I1122 07:13:25.184581 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2p6qj\" (UID: \"541af556-5dce-45ed-bf9e-f6faf6b146ca\") " pod="openshift-image-registry/image-registry-697d97f7c8-2p6qj" Nov 22 07:13:25 crc kubenswrapper[4853]: E1122 07:13:25.185083 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:25.685071555 +0000 UTC m=+204.525694261 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2p6qj" (UID: "541af556-5dce-45ed-bf9e-f6faf6b146ca") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:25 crc kubenswrapper[4853]: I1122 07:13:25.286317 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:25 crc kubenswrapper[4853]: E1122 07:13:25.286592 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:25.78654629 +0000 UTC m=+204.627168916 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:25 crc kubenswrapper[4853]: I1122 07:13:25.286686 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2p6qj\" (UID: \"541af556-5dce-45ed-bf9e-f6faf6b146ca\") " pod="openshift-image-registry/image-registry-697d97f7c8-2p6qj" Nov 22 07:13:25 crc kubenswrapper[4853]: E1122 07:13:25.287114 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:25.787094675 +0000 UTC m=+204.627717481 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2p6qj" (UID: "541af556-5dce-45ed-bf9e-f6faf6b146ca") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:25 crc kubenswrapper[4853]: I1122 07:13:25.348489 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-xmnqz" Nov 22 07:13:25 crc kubenswrapper[4853]: I1122 07:13:25.348970 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-xmnqz" Nov 22 07:13:25 crc kubenswrapper[4853]: I1122 07:13:25.350313 4853 patch_prober.go:28] interesting pod/apiserver-76f77b778f-xmnqz container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="Get \"https://10.217.0.5:8443/livez\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Nov 22 07:13:25 crc kubenswrapper[4853]: I1122 07:13:25.350410 4853 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-xmnqz" podUID="05b7fb71-56a6-4875-a680-995a1a2194d6" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.5:8443/livez\": dial tcp 10.217.0.5:8443: connect: connection refused" Nov 22 07:13:25 crc kubenswrapper[4853]: I1122 07:13:25.364687 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-5nds5" Nov 22 07:13:25 crc kubenswrapper[4853]: I1122 07:13:25.364774 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-5nds5" Nov 22 07:13:25 crc kubenswrapper[4853]: I1122 07:13:25.366894 4853 patch_prober.go:28] interesting pod/console-f9d7485db-5nds5 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.14:8443/health\": dial tcp 10.217.0.14:8443: connect: connection refused" start-of-body= Nov 22 07:13:25 crc kubenswrapper[4853]: I1122 
07:13:25.366959 4853 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-5nds5" podUID="6d3c61d5-518d-443e-beb3-a0bf27a07be4" containerName="console" probeResult="failure" output="Get \"https://10.217.0.14:8443/health\": dial tcp 10.217.0.14:8443: connect: connection refused" Nov 22 07:13:25 crc kubenswrapper[4853]: I1122 07:13:25.389109 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:25 crc kubenswrapper[4853]: E1122 07:13:25.389434 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:25.889400232 +0000 UTC m=+204.730022858 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:25 crc kubenswrapper[4853]: I1122 07:13:25.389617 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2p6qj\" (UID: \"541af556-5dce-45ed-bf9e-f6faf6b146ca\") " pod="openshift-image-registry/image-registry-697d97f7c8-2p6qj" Nov 22 07:13:25 crc kubenswrapper[4853]: E1122 07:13:25.390219 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:25.890193604 +0000 UTC m=+204.730816230 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2p6qj" (UID: "541af556-5dce-45ed-bf9e-f6faf6b146ca") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:25 crc kubenswrapper[4853]: I1122 07:13:25.418787 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dvcg6" Nov 22 07:13:25 crc kubenswrapper[4853]: I1122 07:13:25.419063 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dvcg6" Nov 22 07:13:25 crc kubenswrapper[4853]: I1122 07:13:25.426729 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dvcg6" Nov 22 07:13:25 crc kubenswrapper[4853]: I1122 07:13:25.491103 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:25 crc kubenswrapper[4853]: E1122 07:13:25.491462 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:25.991422703 +0000 UTC m=+204.832045329 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:25 crc kubenswrapper[4853]: I1122 07:13:25.491731 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2p6qj\" (UID: \"541af556-5dce-45ed-bf9e-f6faf6b146ca\") " pod="openshift-image-registry/image-registry-697d97f7c8-2p6qj" Nov 22 07:13:25 crc kubenswrapper[4853]: E1122 07:13:25.493230 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:25.993210831 +0000 UTC m=+204.833833657 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2p6qj" (UID: "541af556-5dce-45ed-bf9e-f6faf6b146ca") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:25 crc kubenswrapper[4853]: I1122 07:13:25.547311 4853 patch_prober.go:28] interesting pod/downloads-7954f5f757-hpb7j container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.24:8080/\": dial tcp 10.217.0.24:8080: connect: connection refused" start-of-body= Nov 22 07:13:25 crc kubenswrapper[4853]: I1122 07:13:25.547396 4853 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-hpb7j" podUID="bcd72804-cd09-4ec3-ae4a-f539958eb90c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.24:8080/\": dial tcp 10.217.0.24:8080: connect: connection refused" Nov 22 07:13:25 crc kubenswrapper[4853]: I1122 07:13:25.547411 4853 patch_prober.go:28] interesting pod/downloads-7954f5f757-hpb7j container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.24:8080/\": dial tcp 10.217.0.24:8080: connect: connection refused" start-of-body= Nov 22 07:13:25 crc kubenswrapper[4853]: I1122 07:13:25.547484 4853 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-hpb7j" podUID="bcd72804-cd09-4ec3-ae4a-f539958eb90c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.24:8080/\": dial tcp 10.217.0.24:8080: connect: connection refused" Nov 22 07:13:25 crc kubenswrapper[4853]: I1122 07:13:25.551412 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-klwzw" event={"ID":"ce89388a-728c-4afc-b155-2813e35a8413","Type":"ContainerStarted","Data":"de4f59ed20c8ccc82595c5ca26cc3987cdaeb6fcd8b9c31f8fc64319d3fafc6c"} Nov 22 07:13:25 crc kubenswrapper[4853]: I1122 07:13:25.557367 4853 generic.go:334] "Generic (PLEG): container finished" podID="34bd417d-67dc-4eb8-be82-c0e268ae3cd6" containerID="10f057f275d8e2255c9c3068a45432bfd235d56044aad3ea1d13cb9df7dd7f44" exitCode=0 Nov 22 07:13:25 crc kubenswrapper[4853]: I1122 07:13:25.557459 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wfjvm" event={"ID":"34bd417d-67dc-4eb8-be82-c0e268ae3cd6","Type":"ContainerDied","Data":"10f057f275d8e2255c9c3068a45432bfd235d56044aad3ea1d13cb9df7dd7f44"} Nov 22 07:13:25 crc kubenswrapper[4853]: I1122 07:13:25.559048 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"ef1b69bb-e10e-4e77-83ef-2467f1ffdcaa","Type":"ContainerStarted","Data":"de979758d76de3e160c5155d7bb94e779552bfce53b6088ee60dc21718f7b95b"} Nov 22 07:13:25 crc kubenswrapper[4853]: I1122 07:13:25.563295 4853 generic.go:334] "Generic (PLEG): container finished" podID="6240b5f2-c1bb-4478-8935-b2579e37e8af" containerID="023df8ad7d3b428ca68b4657b8f182d601d08dc192f24082658845e04bf5d75e" exitCode=0 Nov 22 07:13:25 crc kubenswrapper[4853]: I1122 07:13:25.563350 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4z6bc" 
event={"ID":"6240b5f2-c1bb-4478-8935-b2579e37e8af","Type":"ContainerDied","Data":"023df8ad7d3b428ca68b4657b8f182d601d08dc192f24082658845e04bf5d75e"} Nov 22 07:13:25 crc kubenswrapper[4853]: I1122 07:13:25.565828 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-pd6gs" event={"ID":"9cc2bf97-eb39-4b0c-abda-99b49bb530fd","Type":"ContainerStarted","Data":"6b632cd409db859a3218603fb18d0ec75524f94b22f5899b4c380f61ce982e43"} Nov 22 07:13:25 crc kubenswrapper[4853]: I1122 07:13:25.567691 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fx9sl" event={"ID":"5a52e070-929c-4194-8197-d66d88780fdc","Type":"ContainerStarted","Data":"86f478c6b96d883cc60f3dd4918f0e4c53aea142d09c77cea7199c187383fc87"} Nov 22 07:13:25 crc kubenswrapper[4853]: I1122 07:13:25.569385 4853 generic.go:334] "Generic (PLEG): container finished" podID="30cbccc7-41e5-46d2-b805-bbb03b8bb67c" containerID="7a7151e0fbfe8c259f178a774826e282af20e9d72f0190989970ddc47ee9871f" exitCode=0 Nov 22 07:13:25 crc kubenswrapper[4853]: I1122 07:13:25.569481 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k7t4t" event={"ID":"30cbccc7-41e5-46d2-b805-bbb03b8bb67c","Type":"ContainerDied","Data":"7a7151e0fbfe8c259f178a774826e282af20e9d72f0190989970ddc47ee9871f"} Nov 22 07:13:25 crc kubenswrapper[4853]: I1122 07:13:25.571389 4853 generic.go:334] "Generic (PLEG): container finished" podID="3daf1927-a46c-4be1-ace4-f62d448fb994" containerID="d526940204a98c51ae087c06d2b92d34256d7274fa7bef4bcb58d72b1ef1276a" exitCode=0 Nov 22 07:13:25 crc kubenswrapper[4853]: I1122 07:13:25.573126 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9gwch" event={"ID":"3daf1927-a46c-4be1-ace4-f62d448fb994","Type":"ContainerDied","Data":"d526940204a98c51ae087c06d2b92d34256d7274fa7bef4bcb58d72b1ef1276a"} Nov 22 07:13:25 crc kubenswrapper[4853]: I1122 07:13:25.573245 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-99wl6" Nov 22 07:13:25 crc kubenswrapper[4853]: I1122 07:13:25.583152 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dvcg6" Nov 22 07:13:25 crc kubenswrapper[4853]: I1122 07:13:25.593461 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:25 crc kubenswrapper[4853]: E1122 07:13:25.595287 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:26.095260182 +0000 UTC m=+204.935883238 (durationBeforeRetry 500ms). 
Nov 22 07:13:25 crc kubenswrapper[4853]: I1122 07:13:25.693422 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-99wl6" podStartSLOduration=12.693397136 podStartE2EDuration="12.693397136s" podCreationTimestamp="2025-11-22 07:13:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:13:25.692869082 +0000 UTC m=+204.533491698" watchObservedRunningTime="2025-11-22 07:13:25.693397136 +0000 UTC m=+204.534019762" Nov 22 07:13:25 crc kubenswrapper[4853]: I1122 07:13:25.703161 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2p6qj\" (UID: \"541af556-5dce-45ed-bf9e-f6faf6b146ca\") " pod="openshift-image-registry/image-registry-697d97f7c8-2p6qj" Nov 22 07:13:25 crc kubenswrapper[4853]: I1122 07:13:25.703580 4853 patch_prober.go:28] interesting pod/router-default-5444994796-h2jnh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 22 07:13:25 crc kubenswrapper[4853]: [-]has-synced failed: reason withheld Nov 22 07:13:25 crc kubenswrapper[4853]: [+]process-running ok Nov 22 07:13:25 crc kubenswrapper[4853]: healthz check failed Nov 22 07:13:25 crc kubenswrapper[4853]: I1122 07:13:25.703701 4853 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h2jnh" podUID="6c313448-9287-4014-b36e-ae4e14b9ee4e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 22 07:13:25 crc kubenswrapper[4853]: E1122 07:13:25.704789 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:26.204765515 +0000 UTC m=+205.045388131 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2p6qj" (UID: "541af556-5dce-45ed-bf9e-f6faf6b146ca") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
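The router's startup probe above now fails differently: the endpoint answers, but with status 500 and an aggregated body ("[-]backend-http failed: reason withheld", "[+]process-running ok", "healthz check failed"). That is the conventional aggregated-healthz format: run each named check, print one [+]/[-] line per check, and return 500 if any failed. A self-contained sketch of such a handler, with illustrative check names and no claim to match the router's actual code:

package main

import (
	"fmt"
	"io"
	"net/http"
	"net/http/httptest"
)

// healthz returns a handler that reproduces the aggregated check
// output seen in the router's startup probe: one [+]/[-] line per
// named check, HTTP 500 plus "healthz check failed" on any failure.
func healthz(checks map[string]func() error) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		failed := false
		body := ""
		for name, check := range checks { // map order varies; a real mux keeps a stable order
			if err := check(); err != nil {
				failed = true
				body += fmt.Sprintf("[-]%s failed: reason withheld\n", name)
			} else {
				body += fmt.Sprintf("[+]%s ok\n", name)
			}
		}
		if failed {
			w.WriteHeader(http.StatusInternalServerError)
			fmt.Fprint(w, body+"healthz check failed\n")
			return
		}
		fmt.Fprint(w, body+"ok\n")
	}
}

func main() {
	// Check names borrowed from the log output for illustration.
	h := healthz(map[string]func() error{
		"backend-http":    func() error { return fmt.Errorf("not ready") },
		"has-synced":      func() error { return fmt.Errorf("not ready") },
		"process-running": func() error { return nil },
	})
	srv := httptest.NewServer(h)
	defer srv.Close()
	resp, err := http.Get(srv.URL)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	out, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.StatusCode) // 500 while any check fails
	fmt.Print(string(out))
}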
Nov 22 07:13:25 crc kubenswrapper[4853]: I1122 07:13:25.804918 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:25 crc kubenswrapper[4853]: E1122 07:13:25.805406 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:26.305381087 +0000 UTC m=+205.146003713 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:25 crc kubenswrapper[4853]: I1122 07:13:25.907444 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2p6qj\" (UID: \"541af556-5dce-45ed-bf9e-f6faf6b146ca\") " pod="openshift-image-registry/image-registry-697d97f7c8-2p6qj" Nov 22 07:13:25 crc kubenswrapper[4853]: E1122 07:13:25.907875 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:26.407860829 +0000 UTC m=+205.248483455 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2p6qj" (UID: "541af556-5dce-45ed-bf9e-f6faf6b146ca") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:26 crc kubenswrapper[4853]: I1122 07:13:26.010274 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:26 crc kubenswrapper[4853]: E1122 07:13:26.010707 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:26.510680881 +0000 UTC m=+205.351303507 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:26 crc kubenswrapper[4853]: I1122 07:13:26.112134 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2p6qj\" (UID: \"541af556-5dce-45ed-bf9e-f6faf6b146ca\") " pod="openshift-image-registry/image-registry-697d97f7c8-2p6qj" Nov 22 07:13:26 crc kubenswrapper[4853]: E1122 07:13:26.112926 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:26.612896507 +0000 UTC m=+205.453519253 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2p6qj" (UID: "541af556-5dce-45ed-bf9e-f6faf6b146ca") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:26 crc kubenswrapper[4853]: I1122 07:13:26.213333 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:26 crc kubenswrapper[4853]: E1122 07:13:26.213536 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:26.713494498 +0000 UTC m=+205.554117134 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:26 crc kubenswrapper[4853]: I1122 07:13:26.214604 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2p6qj\" (UID: \"541af556-5dce-45ed-bf9e-f6faf6b146ca\") " pod="openshift-image-registry/image-registry-697d97f7c8-2p6qj" Nov 22 07:13:26 crc kubenswrapper[4853]: E1122 07:13:26.215181 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:26.715146493 +0000 UTC m=+205.555769299 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2p6qj" (UID: "541af556-5dce-45ed-bf9e-f6faf6b146ca") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:26 crc kubenswrapper[4853]: I1122 07:13:26.316911 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:26 crc kubenswrapper[4853]: E1122 07:13:26.317149 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:26.817108992 +0000 UTC m=+205.657731618 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:26 crc kubenswrapper[4853]: I1122 07:13:26.317620 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2p6qj\" (UID: \"541af556-5dce-45ed-bf9e-f6faf6b146ca\") " pod="openshift-image-registry/image-registry-697d97f7c8-2p6qj" Nov 22 07:13:26 crc kubenswrapper[4853]: E1122 07:13:26.318094 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:26.818085607 +0000 UTC m=+205.658708233 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2p6qj" (UID: "541af556-5dce-45ed-bf9e-f6faf6b146ca") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:26 crc kubenswrapper[4853]: I1122 07:13:26.418861 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:26 crc kubenswrapper[4853]: E1122 07:13:26.419136 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:26.91909285 +0000 UTC m=+205.759715486 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:26 crc kubenswrapper[4853]: I1122 07:13:26.419217 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2p6qj\" (UID: \"541af556-5dce-45ed-bf9e-f6faf6b146ca\") " pod="openshift-image-registry/image-registry-697d97f7c8-2p6qj" Nov 22 07:13:26 crc kubenswrapper[4853]: E1122 07:13:26.420181 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:26.920170539 +0000 UTC m=+205.760793165 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2p6qj" (UID: "541af556-5dce-45ed-bf9e-f6faf6b146ca") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:26 crc kubenswrapper[4853]: I1122 07:13:26.521215 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:26 crc kubenswrapper[4853]: E1122 07:13:26.521422 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:27.021385648 +0000 UTC m=+205.862008264 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:26 crc kubenswrapper[4853]: I1122 07:13:26.521665 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2p6qj\" (UID: \"541af556-5dce-45ed-bf9e-f6faf6b146ca\") " pod="openshift-image-registry/image-registry-697d97f7c8-2p6qj" Nov 22 07:13:26 crc kubenswrapper[4853]: E1122 07:13:26.522088 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:27.022079166 +0000 UTC m=+205.862701792 (durationBeforeRetry 500ms). 
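Note: each failure above ends with "No retries permitted until <timestamp> (durationBeforeRetry 500ms)". The volume operation executor records the failure and refuses to start the same operation again until that deadline passes, while the reconciler keeps re-queuing it on its own cadence (roughly every 100ms in this log, hence one UnmountVolume/MountVolume pair of lines per pass). A sketch of that time-gated retry; the real kubelet can grow the backoff, but this log shows a constant 500ms, so the sketch keeps the delay fixed, and opGate and its methods are made-up names:

package main

import (
	"errors"
	"fmt"
	"time"
)

// opGate refuses to re-run a named operation until its backoff deadline,
// mirroring the "No retries permitted until ..." lines above.
type opGate struct {
	notBefore map[string]time.Time
	delay     time.Duration
}

func newOpGate(delay time.Duration) *opGate {
	return &opGate{notBefore: map[string]time.Time{}, delay: delay}
}

func (g *opGate) run(name string, op func() error) error {
	if until, ok := g.notBefore[name]; ok && time.Now().Before(until) {
		return fmt.Errorf("no retries permitted until %s (durationBeforeRetry %s)",
			until.Format("15:04:05.000"), g.delay)
	}
	if err := op(); err != nil {
		g.notBefore[name] = time.Now().Add(g.delay) // push the deadline out
		return err
	}
	delete(g.notBefore, name) // success clears the gate
	return nil
}

func main() {
	gate := newOpGate(500 * time.Millisecond)
	mount := func() error {
		return errors.New("driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers")
	}
	// The reconciler re-queues the operation on every ~100ms pass; only the
	// passes that clear the 500ms gate actually invoke the mount.
	for i := 0; i < 8; i++ {
		fmt.Printf("pass %d: %v\n", i, gate.run("mount pvc-657094db", mount))
		time.Sleep(100 * time.Millisecond)
	}
}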
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2p6qj" (UID: "541af556-5dce-45ed-bf9e-f6faf6b146ca") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:26 crc kubenswrapper[4853]: I1122 07:13:26.579444 4853 generic.go:334] "Generic (PLEG): container finished" podID="ef1b69bb-e10e-4e77-83ef-2467f1ffdcaa" containerID="de979758d76de3e160c5155d7bb94e779552bfce53b6088ee60dc21718f7b95b" exitCode=0 Nov 22 07:13:26 crc kubenswrapper[4853]: I1122 07:13:26.579526 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"ef1b69bb-e10e-4e77-83ef-2467f1ffdcaa","Type":"ContainerDied","Data":"de979758d76de3e160c5155d7bb94e779552bfce53b6088ee60dc21718f7b95b"} Nov 22 07:13:26 crc kubenswrapper[4853]: I1122 07:13:26.581583 4853 generic.go:334] "Generic (PLEG): container finished" podID="adaf4de5-0b3c-4b48-a232-45157864a0f7" containerID="79907e986f7668a7d975a32ab11e2d321162948bb31ac8f00d8f8d88bb7dfb42" exitCode=0 Nov 22 07:13:26 crc kubenswrapper[4853]: I1122 07:13:26.581678 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29396580-xlrxm" event={"ID":"adaf4de5-0b3c-4b48-a232-45157864a0f7","Type":"ContainerDied","Data":"79907e986f7668a7d975a32ab11e2d321162948bb31ac8f00d8f8d88bb7dfb42"} Nov 22 07:13:26 crc kubenswrapper[4853]: I1122 07:13:26.583361 4853 generic.go:334] "Generic (PLEG): container finished" podID="5a52e070-929c-4194-8197-d66d88780fdc" containerID="86f478c6b96d883cc60f3dd4918f0e4c53aea142d09c77cea7199c187383fc87" exitCode=0 Nov 22 07:13:26 crc kubenswrapper[4853]: I1122 07:13:26.583446 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fx9sl" event={"ID":"5a52e070-929c-4194-8197-d66d88780fdc","Type":"ContainerDied","Data":"86f478c6b96d883cc60f3dd4918f0e4c53aea142d09c77cea7199c187383fc87"} Nov 22 07:13:26 crc kubenswrapper[4853]: I1122 07:13:26.585423 4853 generic.go:334] "Generic (PLEG): container finished" podID="ce89388a-728c-4afc-b155-2813e35a8413" containerID="de4f59ed20c8ccc82595c5ca26cc3987cdaeb6fcd8b9c31f8fc64319d3fafc6c" exitCode=0 Nov 22 07:13:26 crc kubenswrapper[4853]: I1122 07:13:26.585548 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-klwzw" event={"ID":"ce89388a-728c-4afc-b155-2813e35a8413","Type":"ContainerDied","Data":"de4f59ed20c8ccc82595c5ca26cc3987cdaeb6fcd8b9c31f8fc64319d3fafc6c"} Nov 22 07:13:26 crc kubenswrapper[4853]: I1122 07:13:26.623313 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:26 crc kubenswrapper[4853]: E1122 07:13:26.623834 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
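Note: the "Generic (PLEG): container finished" lines interleaved above come from the Pod Lifecycle Event Generator: the kubelet periodically relists container state from the runtime, diffs it against its previous snapshot, and hands each transition to SyncLoop as an event, here four ContainerDied events with exitCode=0 for the revision-pruner, collect-profiles, and two redhat-operators pods. A toy version of that diff, with invented types:

package main

import "fmt"

type state string

const (
	running state = "running"
	exited  state = "exited"
)

type event struct {
	podID, containerID, kind string
}

// diff emits one event per observed container state transition, the way a
// relist pass turns raw runtime state into pod lifecycle events.
func diff(prev, cur map[string]state, podID string) []event {
	var evs []event
	for id, s := range cur {
		if prev[id] == running && s == exited {
			evs = append(evs, event{podID, id, "ContainerDied"})
		}
	}
	return evs
}

func main() {
	prev := map[string]state{"de979758d76d": running}
	cur := map[string]state{"de979758d76d": exited}
	for _, e := range diff(prev, cur, "ef1b69bb-e10e-4e77-83ef-2467f1ffdcaa") {
		fmt.Printf("SyncLoop (PLEG): %s for pod %s, container %s\n", e.kind, e.podID, e.containerID)
	}
}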
No retries permitted until 2025-11-22 07:13:27.123807119 +0000 UTC m=+205.964429735 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:26 crc kubenswrapper[4853]: I1122 07:13:26.688363 4853 patch_prober.go:28] interesting pod/router-default-5444994796-h2jnh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 22 07:13:26 crc kubenswrapper[4853]: [-]has-synced failed: reason withheld Nov 22 07:13:26 crc kubenswrapper[4853]: [+]process-running ok Nov 22 07:13:26 crc kubenswrapper[4853]: healthz check failed Nov 22 07:13:26 crc kubenswrapper[4853]: I1122 07:13:26.688437 4853 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h2jnh" podUID="6c313448-9287-4014-b36e-ae4e14b9ee4e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 22 07:13:26 crc kubenswrapper[4853]: I1122 07:13:26.725020 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2p6qj\" (UID: \"541af556-5dce-45ed-bf9e-f6faf6b146ca\") " pod="openshift-image-registry/image-registry-697d97f7c8-2p6qj" Nov 22 07:13:26 crc kubenswrapper[4853]: E1122 07:13:26.727671 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:27.227653068 +0000 UTC m=+206.068275894 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2p6qj" (UID: "541af556-5dce-45ed-bf9e-f6faf6b146ca") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:26 crc kubenswrapper[4853]: I1122 07:13:26.826248 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:26 crc kubenswrapper[4853]: E1122 07:13:26.826420 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:27.326389299 +0000 UTC m=+206.167011925 (durationBeforeRetry 500ms). 
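Note: the router startup-probe output above uses the aggregated health-check format in which each registered check prints as "[+]name ok" or "[-]name failed", and any failing check (here backend-http and has-synced, while process-running passes) turns the endpoint into an HTTP 500, which prober.go then records as a Startup probe failure. A small self-contained sketch of an aggregator producing the same body shape; this is not the router's actual handler, just the pattern:

package main

import (
	"fmt"
	"net/http"
	"net/http/httptest"
	"strings"
)

type check struct {
	name string
	fn   func() error
}

// healthz aggregates named checks into one endpoint: every check prints a
// [+]/[-] line, and a single failure makes the whole response a 500.
func healthz(checks []check) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		var body strings.Builder
		failed := false
		for _, c := range checks {
			if err := c.fn(); err != nil {
				failed = true
				fmt.Fprintf(&body, "[-]%s failed: reason withheld\n", c.name)
			} else {
				fmt.Fprintf(&body, "[+]%s ok\n", c.name)
			}
		}
		if failed {
			body.WriteString("healthz check failed\n")
			w.WriteHeader(http.StatusInternalServerError)
		}
		fmt.Fprint(w, body.String())
	}
}

func main() {
	srv := httptest.NewServer(healthz([]check{
		{"backend-http", func() error { return fmt.Errorf("no backend endpoints") }},
		{"has-synced", func() error { return fmt.Errorf("routes not synced") }},
		{"process-running", func() error { return nil }},
	}))
	defer srv.Close()

	resp, err := http.Get(srv.URL)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println("probe sees status:", resp.StatusCode) // 500 until every check passes
}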
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:26 crc kubenswrapper[4853]: I1122 07:13:26.826519 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2p6qj\" (UID: \"541af556-5dce-45ed-bf9e-f6faf6b146ca\") " pod="openshift-image-registry/image-registry-697d97f7c8-2p6qj" Nov 22 07:13:26 crc kubenswrapper[4853]: E1122 07:13:26.826932 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:27.326924564 +0000 UTC m=+206.167547190 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2p6qj" (UID: "541af556-5dce-45ed-bf9e-f6faf6b146ca") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:26 crc kubenswrapper[4853]: I1122 07:13:26.928401 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:26 crc kubenswrapper[4853]: E1122 07:13:26.928578 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:27.428557793 +0000 UTC m=+206.269180429 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:26 crc kubenswrapper[4853]: I1122 07:13:26.928685 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2p6qj\" (UID: \"541af556-5dce-45ed-bf9e-f6faf6b146ca\") " pod="openshift-image-registry/image-registry-697d97f7c8-2p6qj" Nov 22 07:13:26 crc kubenswrapper[4853]: E1122 07:13:26.929104 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:27.429091328 +0000 UTC m=+206.269713954 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2p6qj" (UID: "541af556-5dce-45ed-bf9e-f6faf6b146ca") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:26 crc kubenswrapper[4853]: I1122 07:13:26.954996 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7jnds" Nov 22 07:13:27 crc kubenswrapper[4853]: I1122 07:13:27.031138 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:27 crc kubenswrapper[4853]: E1122 07:13:27.031844 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:27.531821877 +0000 UTC m=+206.372444503 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:27 crc kubenswrapper[4853]: I1122 07:13:27.134047 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2p6qj\" (UID: \"541af556-5dce-45ed-bf9e-f6faf6b146ca\") " pod="openshift-image-registry/image-registry-697d97f7c8-2p6qj" Nov 22 07:13:27 crc kubenswrapper[4853]: E1122 07:13:27.134952 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:27.634934257 +0000 UTC m=+206.475556883 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2p6qj" (UID: "541af556-5dce-45ed-bf9e-f6faf6b146ca") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:27 crc kubenswrapper[4853]: I1122 07:13:27.203493 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-gwwg5" Nov 22 07:13:27 crc kubenswrapper[4853]: I1122 07:13:27.235003 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:27 crc kubenswrapper[4853]: E1122 07:13:27.235731 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:27.735691412 +0000 UTC m=+206.576314038 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:27 crc kubenswrapper[4853]: I1122 07:13:27.236562 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2p6qj\" (UID: \"541af556-5dce-45ed-bf9e-f6faf6b146ca\") " pod="openshift-image-registry/image-registry-697d97f7c8-2p6qj" Nov 22 07:13:27 crc kubenswrapper[4853]: E1122 07:13:27.237066 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:27.737052169 +0000 UTC m=+206.577674785 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2p6qj" (UID: "541af556-5dce-45ed-bf9e-f6faf6b146ca") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:27 crc kubenswrapper[4853]: I1122 07:13:27.337554 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:27 crc kubenswrapper[4853]: E1122 07:13:27.339712 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:27.839690406 +0000 UTC m=+206.680313032 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:27 crc kubenswrapper[4853]: I1122 07:13:27.439999 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2p6qj\" (UID: \"541af556-5dce-45ed-bf9e-f6faf6b146ca\") " pod="openshift-image-registry/image-registry-697d97f7c8-2p6qj" Nov 22 07:13:27 crc kubenswrapper[4853]: E1122 07:13:27.440422 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:27.940409611 +0000 UTC m=+206.781032227 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2p6qj" (UID: "541af556-5dce-45ed-bf9e-f6faf6b146ca") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:27 crc kubenswrapper[4853]: I1122 07:13:27.542060 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:27 crc kubenswrapper[4853]: E1122 07:13:27.543430 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:28.043369616 +0000 UTC m=+206.883992242 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:27 crc kubenswrapper[4853]: I1122 07:13:27.598213 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-pd6gs" event={"ID":"9cc2bf97-eb39-4b0c-abda-99b49bb530fd","Type":"ContainerStarted","Data":"36c07047ae69b7343daec89b43bda80c28c3115d692ac8bf1cf8ba4446dc755f"} Nov 22 07:13:27 crc kubenswrapper[4853]: I1122 07:13:27.618598 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-pd6gs" podStartSLOduration=147.618495876 podStartE2EDuration="2m27.618495876s" podCreationTimestamp="2025-11-22 07:11:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:13:27.615497594 +0000 UTC m=+206.456120220" watchObservedRunningTime="2025-11-22 07:13:27.618495876 +0000 UTC m=+206.459118502" Nov 22 07:13:27 crc kubenswrapper[4853]: I1122 07:13:27.646300 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2p6qj\" (UID: \"541af556-5dce-45ed-bf9e-f6faf6b146ca\") " pod="openshift-image-registry/image-registry-697d97f7c8-2p6qj" Nov 22 07:13:27 crc kubenswrapper[4853]: E1122 07:13:27.646704 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:28.146692061 +0000 UTC m=+206.987314687 (durationBeforeRetry 500ms). 
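Note: the pod_startup_latency_tracker entry above is internally consistent: the pod was created at 07:11:00 and observed running at 07:13:27.618495876, so podStartSLOduration = podStartE2EDuration = 147.618495876s, i.e. 2m27.6s; the "0001-01-01" pulling timestamps are Go zero values, apparently because no image pull was recorded for this pod. Checking the arithmetic:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Timestamps taken verbatim from the log entry above.
	created, _ := time.Parse(time.RFC3339, "2025-11-22T07:11:00Z")
	running, _ := time.Parse(time.RFC3339Nano, "2025-11-22T07:13:27.618495876Z")
	// Prints 2m27.618495876s, matching podStartSLOduration=147.618495876.
	fmt.Println(running.Sub(created))
}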
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2p6qj" (UID: "541af556-5dce-45ed-bf9e-f6faf6b146ca") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:27 crc kubenswrapper[4853]: I1122 07:13:27.682285 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-h2jnh" Nov 22 07:13:27 crc kubenswrapper[4853]: I1122 07:13:27.685201 4853 patch_prober.go:28] interesting pod/router-default-5444994796-h2jnh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 22 07:13:27 crc kubenswrapper[4853]: [-]has-synced failed: reason withheld Nov 22 07:13:27 crc kubenswrapper[4853]: [+]process-running ok Nov 22 07:13:27 crc kubenswrapper[4853]: healthz check failed Nov 22 07:13:27 crc kubenswrapper[4853]: I1122 07:13:27.685278 4853 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h2jnh" podUID="6c313448-9287-4014-b36e-ae4e14b9ee4e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 22 07:13:27 crc kubenswrapper[4853]: I1122 07:13:27.747843 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:27 crc kubenswrapper[4853]: E1122 07:13:27.749289 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:28.249259116 +0000 UTC m=+207.089881742 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:27 crc kubenswrapper[4853]: I1122 07:13:27.849957 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2p6qj\" (UID: \"541af556-5dce-45ed-bf9e-f6faf6b146ca\") " pod="openshift-image-registry/image-registry-697d97f7c8-2p6qj" Nov 22 07:13:27 crc kubenswrapper[4853]: E1122 07:13:27.851716 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:28.351687758 +0000 UTC m=+207.192310554 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2p6qj" (UID: "541af556-5dce-45ed-bf9e-f6faf6b146ca") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:27 crc kubenswrapper[4853]: I1122 07:13:27.934791 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396580-xlrxm" Nov 22 07:13:27 crc kubenswrapper[4853]: I1122 07:13:27.948374 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 22 07:13:27 crc kubenswrapper[4853]: I1122 07:13:27.952991 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:27 crc kubenswrapper[4853]: E1122 07:13:27.953527 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:28.453511362 +0000 UTC m=+207.294133988 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:28 crc kubenswrapper[4853]: I1122 07:13:28.054587 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vg24r\" (UniqueName: \"kubernetes.io/projected/adaf4de5-0b3c-4b48-a232-45157864a0f7-kube-api-access-vg24r\") pod \"adaf4de5-0b3c-4b48-a232-45157864a0f7\" (UID: \"adaf4de5-0b3c-4b48-a232-45157864a0f7\") " Nov 22 07:13:28 crc kubenswrapper[4853]: I1122 07:13:28.055053 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ef1b69bb-e10e-4e77-83ef-2467f1ffdcaa-kubelet-dir\") pod \"ef1b69bb-e10e-4e77-83ef-2467f1ffdcaa\" (UID: \"ef1b69bb-e10e-4e77-83ef-2467f1ffdcaa\") " Nov 22 07:13:28 crc kubenswrapper[4853]: I1122 07:13:28.055120 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ef1b69bb-e10e-4e77-83ef-2467f1ffdcaa-kube-api-access\") pod \"ef1b69bb-e10e-4e77-83ef-2467f1ffdcaa\" (UID: \"ef1b69bb-e10e-4e77-83ef-2467f1ffdcaa\") " Nov 22 07:13:28 crc kubenswrapper[4853]: I1122 07:13:28.055175 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/adaf4de5-0b3c-4b48-a232-45157864a0f7-secret-volume\") pod \"adaf4de5-0b3c-4b48-a232-45157864a0f7\" (UID: \"adaf4de5-0b3c-4b48-a232-45157864a0f7\") " Nov 22 
07:13:28 crc kubenswrapper[4853]: I1122 07:13:28.055244 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/adaf4de5-0b3c-4b48-a232-45157864a0f7-config-volume\") pod \"adaf4de5-0b3c-4b48-a232-45157864a0f7\" (UID: \"adaf4de5-0b3c-4b48-a232-45157864a0f7\") " Nov 22 07:13:28 crc kubenswrapper[4853]: I1122 07:13:28.055249 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ef1b69bb-e10e-4e77-83ef-2467f1ffdcaa-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "ef1b69bb-e10e-4e77-83ef-2467f1ffdcaa" (UID: "ef1b69bb-e10e-4e77-83ef-2467f1ffdcaa"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 22 07:13:28 crc kubenswrapper[4853]: I1122 07:13:28.055594 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2p6qj\" (UID: \"541af556-5dce-45ed-bf9e-f6faf6b146ca\") " pod="openshift-image-registry/image-registry-697d97f7c8-2p6qj" Nov 22 07:13:28 crc kubenswrapper[4853]: I1122 07:13:28.055678 4853 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ef1b69bb-e10e-4e77-83ef-2467f1ffdcaa-kubelet-dir\") on node \"crc\" DevicePath \"\"" Nov 22 07:13:28 crc kubenswrapper[4853]: E1122 07:13:28.056255 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:28.556213011 +0000 UTC m=+207.396835637 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2p6qj" (UID: "541af556-5dce-45ed-bf9e-f6faf6b146ca") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:28 crc kubenswrapper[4853]: I1122 07:13:28.057099 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/adaf4de5-0b3c-4b48-a232-45157864a0f7-config-volume" (OuterVolumeSpecName: "config-volume") pod "adaf4de5-0b3c-4b48-a232-45157864a0f7" (UID: "adaf4de5-0b3c-4b48-a232-45157864a0f7"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:13:28 crc kubenswrapper[4853]: I1122 07:13:28.067434 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/adaf4de5-0b3c-4b48-a232-45157864a0f7-kube-api-access-vg24r" (OuterVolumeSpecName: "kube-api-access-vg24r") pod "adaf4de5-0b3c-4b48-a232-45157864a0f7" (UID: "adaf4de5-0b3c-4b48-a232-45157864a0f7"). InnerVolumeSpecName "kube-api-access-vg24r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:13:28 crc kubenswrapper[4853]: I1122 07:13:28.067846 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef1b69bb-e10e-4e77-83ef-2467f1ffdcaa-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "ef1b69bb-e10e-4e77-83ef-2467f1ffdcaa" (UID: "ef1b69bb-e10e-4e77-83ef-2467f1ffdcaa"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:13:28 crc kubenswrapper[4853]: I1122 07:13:28.072060 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/adaf4de5-0b3c-4b48-a232-45157864a0f7-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "adaf4de5-0b3c-4b48-a232-45157864a0f7" (UID: "adaf4de5-0b3c-4b48-a232-45157864a0f7"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:13:28 crc kubenswrapper[4853]: I1122 07:13:28.157598 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:28 crc kubenswrapper[4853]: I1122 07:13:28.158042 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ef1b69bb-e10e-4e77-83ef-2467f1ffdcaa-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 22 07:13:28 crc kubenswrapper[4853]: I1122 07:13:28.158073 4853 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/adaf4de5-0b3c-4b48-a232-45157864a0f7-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 22 07:13:28 crc kubenswrapper[4853]: I1122 07:13:28.158086 4853 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/adaf4de5-0b3c-4b48-a232-45157864a0f7-config-volume\") on node \"crc\" DevicePath \"\"" Nov 22 07:13:28 crc kubenswrapper[4853]: E1122 07:13:28.158120 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:28.658085416 +0000 UTC m=+207.498708042 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:28 crc kubenswrapper[4853]: I1122 07:13:28.158189 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vg24r\" (UniqueName: \"kubernetes.io/projected/adaf4de5-0b3c-4b48-a232-45157864a0f7-kube-api-access-vg24r\") on node \"crc\" DevicePath \"\"" Nov 22 07:13:28 crc kubenswrapper[4853]: I1122 07:13:28.259857 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2p6qj\" (UID: \"541af556-5dce-45ed-bf9e-f6faf6b146ca\") " pod="openshift-image-registry/image-registry-697d97f7c8-2p6qj" Nov 22 07:13:28 crc kubenswrapper[4853]: E1122 07:13:28.260285 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:28.760272131 +0000 UTC m=+207.600894757 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2p6qj" (UID: "541af556-5dce-45ed-bf9e-f6faf6b146ca") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:28 crc kubenswrapper[4853]: I1122 07:13:28.361365 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:28 crc kubenswrapper[4853]: E1122 07:13:28.361851 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:28.861830298 +0000 UTC m=+207.702452924 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:28 crc kubenswrapper[4853]: I1122 07:13:28.416458 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-bqk2r" Nov 22 07:13:28 crc kubenswrapper[4853]: I1122 07:13:28.463500 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2p6qj\" (UID: \"541af556-5dce-45ed-bf9e-f6faf6b146ca\") " pod="openshift-image-registry/image-registry-697d97f7c8-2p6qj" Nov 22 07:13:28 crc kubenswrapper[4853]: E1122 07:13:28.464099 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:28.964081355 +0000 UTC m=+207.804703981 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2p6qj" (UID: "541af556-5dce-45ed-bf9e-f6faf6b146ca") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:28 crc kubenswrapper[4853]: I1122 07:13:28.564982 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:28 crc kubenswrapper[4853]: E1122 07:13:28.565199 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:29.065163159 +0000 UTC m=+207.905785785 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:28 crc kubenswrapper[4853]: I1122 07:13:28.565421 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2p6qj\" (UID: \"541af556-5dce-45ed-bf9e-f6faf6b146ca\") " pod="openshift-image-registry/image-registry-697d97f7c8-2p6qj" Nov 22 07:13:28 crc kubenswrapper[4853]: E1122 07:13:28.565831 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:29.065812737 +0000 UTC m=+207.906435363 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2p6qj" (UID: "541af556-5dce-45ed-bf9e-f6faf6b146ca") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:28 crc kubenswrapper[4853]: I1122 07:13:28.603254 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29396580-xlrxm" event={"ID":"adaf4de5-0b3c-4b48-a232-45157864a0f7","Type":"ContainerDied","Data":"089d9f12a515b48b6322b0f2126e0bd80c9b9351704fa04e4630ac77b518d905"} Nov 22 07:13:28 crc kubenswrapper[4853]: I1122 07:13:28.603291 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396580-xlrxm" Nov 22 07:13:28 crc kubenswrapper[4853]: I1122 07:13:28.603304 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="089d9f12a515b48b6322b0f2126e0bd80c9b9351704fa04e4630ac77b518d905" Nov 22 07:13:28 crc kubenswrapper[4853]: I1122 07:13:28.605392 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"ef1b69bb-e10e-4e77-83ef-2467f1ffdcaa","Type":"ContainerDied","Data":"61030eac1bee2d43ff7e66cdaf432b25b0c1650dba1f5bb5ff9949871ca4b724"} Nov 22 07:13:28 crc kubenswrapper[4853]: I1122 07:13:28.605449 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 22 07:13:28 crc kubenswrapper[4853]: I1122 07:13:28.605468 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="61030eac1bee2d43ff7e66cdaf432b25b0c1650dba1f5bb5ff9949871ca4b724" Nov 22 07:13:28 crc kubenswrapper[4853]: I1122 07:13:28.665977 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:28 crc kubenswrapper[4853]: E1122 07:13:28.666209 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:29.166174831 +0000 UTC m=+208.006797457 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:28 crc kubenswrapper[4853]: I1122 07:13:28.666450 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2p6qj\" (UID: \"541af556-5dce-45ed-bf9e-f6faf6b146ca\") " pod="openshift-image-registry/image-registry-697d97f7c8-2p6qj" Nov 22 07:13:28 crc kubenswrapper[4853]: E1122 07:13:28.667018 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:29.167008505 +0000 UTC m=+208.007631131 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2p6qj" (UID: "541af556-5dce-45ed-bf9e-f6faf6b146ca") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:28 crc kubenswrapper[4853]: I1122 07:13:28.685818 4853 patch_prober.go:28] interesting pod/router-default-5444994796-h2jnh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 22 07:13:28 crc kubenswrapper[4853]: [-]has-synced failed: reason withheld Nov 22 07:13:28 crc kubenswrapper[4853]: [+]process-running ok Nov 22 07:13:28 crc kubenswrapper[4853]: healthz check failed Nov 22 07:13:28 crc kubenswrapper[4853]: I1122 07:13:28.686032 4853 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h2jnh" podUID="6c313448-9287-4014-b36e-ae4e14b9ee4e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 22 07:13:28 crc kubenswrapper[4853]: I1122 07:13:28.768131 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:28 crc kubenswrapper[4853]: E1122 07:13:28.768400 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:29.268358536 +0000 UTC m=+208.108981162 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:28 crc kubenswrapper[4853]: I1122 07:13:28.768719 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2p6qj\" (UID: \"541af556-5dce-45ed-bf9e-f6faf6b146ca\") " pod="openshift-image-registry/image-registry-697d97f7c8-2p6qj" Nov 22 07:13:28 crc kubenswrapper[4853]: E1122 07:13:28.769227 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:29.269211189 +0000 UTC m=+208.109833815 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2p6qj" (UID: "541af556-5dce-45ed-bf9e-f6faf6b146ca") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:28 crc kubenswrapper[4853]: I1122 07:13:28.870136 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:28 crc kubenswrapper[4853]: E1122 07:13:28.870519 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:29.370476379 +0000 UTC m=+208.211099005 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:28 crc kubenswrapper[4853]: I1122 07:13:28.870618 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2p6qj\" (UID: \"541af556-5dce-45ed-bf9e-f6faf6b146ca\") " pod="openshift-image-registry/image-registry-697d97f7c8-2p6qj" Nov 22 07:13:28 crc kubenswrapper[4853]: E1122 07:13:28.871197 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:29.371178558 +0000 UTC m=+208.211801314 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2p6qj" (UID: "541af556-5dce-45ed-bf9e-f6faf6b146ca") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:28 crc kubenswrapper[4853]: I1122 07:13:28.974221 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:28 crc kubenswrapper[4853]: E1122 07:13:28.974505 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:29.474462903 +0000 UTC m=+208.315085529 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:28 crc kubenswrapper[4853]: I1122 07:13:28.974586 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2p6qj\" (UID: \"541af556-5dce-45ed-bf9e-f6faf6b146ca\") " pod="openshift-image-registry/image-registry-697d97f7c8-2p6qj" Nov 22 07:13:28 crc kubenswrapper[4853]: E1122 07:13:28.975232 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:29.475205592 +0000 UTC m=+208.315828368 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2p6qj" (UID: "541af556-5dce-45ed-bf9e-f6faf6b146ca") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:29 crc kubenswrapper[4853]: I1122 07:13:29.075959 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:29 crc kubenswrapper[4853]: E1122 07:13:29.076608 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:29.576576364 +0000 UTC m=+208.417198990 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:29 crc kubenswrapper[4853]: I1122 07:13:29.077550 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2p6qj\" (UID: \"541af556-5dce-45ed-bf9e-f6faf6b146ca\") " pod="openshift-image-registry/image-registry-697d97f7c8-2p6qj" Nov 22 07:13:29 crc kubenswrapper[4853]: E1122 07:13:29.078972 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:29.578944119 +0000 UTC m=+208.419566745 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2p6qj" (UID: "541af556-5dce-45ed-bf9e-f6faf6b146ca") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:29 crc kubenswrapper[4853]: I1122 07:13:29.178648 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:29 crc kubenswrapper[4853]: E1122 07:13:29.178863 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:29.678821621 +0000 UTC m=+208.519444247 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:29 crc kubenswrapper[4853]: I1122 07:13:29.280786 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2p6qj\" (UID: \"541af556-5dce-45ed-bf9e-f6faf6b146ca\") " pod="openshift-image-registry/image-registry-697d97f7c8-2p6qj" Nov 22 07:13:29 crc kubenswrapper[4853]: E1122 07:13:29.281322 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:29.781300194 +0000 UTC m=+208.621922820 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2p6qj" (UID: "541af556-5dce-45ed-bf9e-f6faf6b146ca") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:29 crc kubenswrapper[4853]: I1122 07:13:29.381901 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:29 crc kubenswrapper[4853]: E1122 07:13:29.382238 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:29.882199843 +0000 UTC m=+208.722822479 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:29 crc kubenswrapper[4853]: I1122 07:13:29.483486 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2p6qj\" (UID: \"541af556-5dce-45ed-bf9e-f6faf6b146ca\") " pod="openshift-image-registry/image-registry-697d97f7c8-2p6qj" Nov 22 07:13:29 crc kubenswrapper[4853]: E1122 07:13:29.484029 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:29.984007117 +0000 UTC m=+208.824629743 (durationBeforeRetry 500ms). 
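The block above is one iteration of a tight reconcile loop. The same two operations fail roughly every 100ms: UnmountVolume.TearDown for the volume still held by the old image-registry pod (UID 8f668bae-612b-4b75-9490-919e737c6a3b) and MountVolume.MountDevice for its replacement (UID 541af556-5dce-45ed-bf9e-f6faf6b146ca). Both fail at the same first step, resolving the driver name against the kubelet's list of registered CSI drivers, apparently because the hostpath-provisioner plugin has not yet re-registered with the restarted kubelet. The Go sketch below (illustrative names and types, not kubelet's actual code) shows the shape of that failure plus the 500ms per-operation backoff reported as durationBeforeRetry:

    // Minimal sketch: every CSI operation first resolves the driver name
    // against a registry that is only populated once the plugin's
    // registration socket has been processed. All names are illustrative.
    package main

    import (
        "fmt"
        "sync"
        "time"
    )

    type csiRegistry struct {
        mu      sync.RWMutex
        drivers map[string]string // driver name -> endpoint socket
    }

    func (r *csiRegistry) client(driver string) (string, error) {
        r.mu.RLock()
        defer r.mu.RUnlock()
        ep, ok := r.drivers[driver]
        if !ok {
            // The condition surfaced repeatedly in the log above.
            return "", fmt.Errorf("driver name %s not found in the list of registered CSI drivers", driver)
        }
        return ep, nil
    }

    // retryGate mimics the nestedpendingoperations behavior: after a failure,
    // the same operation is not retried until the backoff expires.
    type retryGate struct{ notBefore time.Time }

    func (g *retryGate) tryRun(op func() error) {
        if time.Now().Before(g.notBefore) {
            return // "No retries permitted until ..."
        }
        if err := op(); err != nil {
            g.notBefore = time.Now().Add(500 * time.Millisecond) // durationBeforeRetry 500ms
            fmt.Println("operation failed:", err)
        }
    }

    func main() {
        reg := &csiRegistry{drivers: map[string]string{}}
        gate := &retryGate{}
        for i := 0; i < 3; i++ {
            gate.tryRun(func() error {
                _, err := reg.client("kubevirt.io.hostpath-provisioner")
                return err
            })
            time.Sleep(200 * time.Millisecond)
        }
    }

The loop is noisy but harmless: the backoff is per operation, so the reconciler re-queues both operations about twice a second until the registry lookup succeeds.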
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2p6qj" (UID: "541af556-5dce-45ed-bf9e-f6faf6b146ca") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:29 crc kubenswrapper[4853]: I1122 07:13:29.561538 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Nov 22 07:13:29 crc kubenswrapper[4853]: E1122 07:13:29.562379 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="adaf4de5-0b3c-4b48-a232-45157864a0f7" containerName="collect-profiles" Nov 22 07:13:29 crc kubenswrapper[4853]: I1122 07:13:29.562401 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="adaf4de5-0b3c-4b48-a232-45157864a0f7" containerName="collect-profiles" Nov 22 07:13:29 crc kubenswrapper[4853]: E1122 07:13:29.562416 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef1b69bb-e10e-4e77-83ef-2467f1ffdcaa" containerName="pruner" Nov 22 07:13:29 crc kubenswrapper[4853]: I1122 07:13:29.562425 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef1b69bb-e10e-4e77-83ef-2467f1ffdcaa" containerName="pruner" Nov 22 07:13:29 crc kubenswrapper[4853]: I1122 07:13:29.562563 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="adaf4de5-0b3c-4b48-a232-45157864a0f7" containerName="collect-profiles" Nov 22 07:13:29 crc kubenswrapper[4853]: I1122 07:13:29.562600 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef1b69bb-e10e-4e77-83ef-2467f1ffdcaa" containerName="pruner" Nov 22 07:13:29 crc kubenswrapper[4853]: I1122 07:13:29.563186 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 22 07:13:29 crc kubenswrapper[4853]: I1122 07:13:29.564911 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Nov 22 07:13:29 crc kubenswrapper[4853]: I1122 07:13:29.565767 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Nov 22 07:13:29 crc kubenswrapper[4853]: I1122 07:13:29.565768 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Nov 22 07:13:29 crc kubenswrapper[4853]: I1122 07:13:29.585491 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:29 crc kubenswrapper[4853]: I1122 07:13:29.585686 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a423d848-68b1-49f1-af43-17d8f79c9562-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"a423d848-68b1-49f1-af43-17d8f79c9562\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 22 07:13:29 crc kubenswrapper[4853]: I1122 07:13:29.585802 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a423d848-68b1-49f1-af43-17d8f79c9562-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"a423d848-68b1-49f1-af43-17d8f79c9562\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 22 07:13:29 crc kubenswrapper[4853]: E1122 07:13:29.586031 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:30.086004196 +0000 UTC m=+208.926626822 (durationBeforeRetry 500ms). 
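Interleaved with the volume retries, the kubelet admits openshift-kube-apiserver/revision-pruner-8-crc: the SyncLoop ADD record, RemoveStaleState entries from cpu_manager and memory_manager clearing resource assignments left behind by earlier containers (collect-profiles, pruner), and "No sandbox for pod can be found. Need to start a new one", which is the kubelet deciding to create a fresh pod sandbox through the CRI runtime. A simplified sketch of that decision (stand-in types, not the real CRI API):

    // Illustrative sketch of the sandbox decision logged above: on each pod
    // sync the kubelet compares desired state with the sandboxes the runtime
    // reports, and creates one when none is running.
    package main

    import "fmt"

    type sandbox struct {
        podUID string
        ready  bool
    }

    func syncPod(podUID string, existing []sandbox) {
        for _, s := range existing {
            if s.podUID == podUID && s.ready {
                fmt.Println("reusing sandbox for", podUID)
                return
            }
        }
        fmt.Printf("No sandbox for pod %s can be found. Need to start a new one\n", podUID)
        // Here the real kubelet would call RunPodSandbox over CRI.
    }

    func main() {
        syncPod("a423d848-68b1-49f1-af43-17d8f79c9562", nil)
    }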
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:29 crc kubenswrapper[4853]: I1122 07:13:29.687212 4853 patch_prober.go:28] interesting pod/router-default-5444994796-h2jnh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 22 07:13:29 crc kubenswrapper[4853]: [-]has-synced failed: reason withheld Nov 22 07:13:29 crc kubenswrapper[4853]: [+]process-running ok Nov 22 07:13:29 crc kubenswrapper[4853]: healthz check failed Nov 22 07:13:29 crc kubenswrapper[4853]: I1122 07:13:29.687299 4853 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h2jnh" podUID="6c313448-9287-4014-b36e-ae4e14b9ee4e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 22 07:13:29 crc kubenswrapper[4853]: I1122 07:13:29.687578 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a423d848-68b1-49f1-af43-17d8f79c9562-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"a423d848-68b1-49f1-af43-17d8f79c9562\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 22 07:13:29 crc kubenswrapper[4853]: I1122 07:13:29.687686 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a423d848-68b1-49f1-af43-17d8f79c9562-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"a423d848-68b1-49f1-af43-17d8f79c9562\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 22 07:13:29 crc kubenswrapper[4853]: I1122 07:13:29.687727 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2p6qj\" (UID: \"541af556-5dce-45ed-bf9e-f6faf6b146ca\") " pod="openshift-image-registry/image-registry-697d97f7c8-2p6qj" Nov 22 07:13:29 crc kubenswrapper[4853]: I1122 07:13:29.687813 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a423d848-68b1-49f1-af43-17d8f79c9562-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"a423d848-68b1-49f1-af43-17d8f79c9562\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 22 07:13:29 crc kubenswrapper[4853]: E1122 07:13:29.688346 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:30.188309304 +0000 UTC m=+209.028931990 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2p6qj" (UID: "541af556-5dce-45ed-bf9e-f6faf6b146ca") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:29 crc kubenswrapper[4853]: I1122 07:13:29.722236 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a423d848-68b1-49f1-af43-17d8f79c9562-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"a423d848-68b1-49f1-af43-17d8f79c9562\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 22 07:13:29 crc kubenswrapper[4853]: I1122 07:13:29.788793 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:29 crc kubenswrapper[4853]: E1122 07:13:29.788952 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:30.288925876 +0000 UTC m=+209.129548502 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:29 crc kubenswrapper[4853]: I1122 07:13:29.789067 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2p6qj\" (UID: \"541af556-5dce-45ed-bf9e-f6faf6b146ca\") " pod="openshift-image-registry/image-registry-697d97f7c8-2p6qj" Nov 22 07:13:29 crc kubenswrapper[4853]: E1122 07:13:29.789551 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:30.289539743 +0000 UTC m=+209.130162369 (durationBeforeRetry 500ms). 
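Note the contrast with the stuck PVC: the revision-pruner pod's two volumes, kubelet-dir (a hostPath mount) and kube-api-access (a projected service-account token), both report MountVolume.SetUp succeeded immediately, because neither depends on a CSI driver. A sketch of the analogous volume definitions using the k8s.io/api types (the path and token filename here are illustrative):

    // Two volume kinds the kubelet can satisfy locally, with no CSI driver
    // involved. Requires k8s.io/api in go.mod; values are illustrative.
    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        vols := []corev1.Volume{
            {
                Name: "kubelet-dir",
                VolumeSource: corev1.VolumeSource{
                    HostPath: &corev1.HostPathVolumeSource{Path: "/var/lib/kubelet"},
                },
            },
            {
                Name: "kube-api-access",
                VolumeSource: corev1.VolumeSource{
                    Projected: &corev1.ProjectedVolumeSource{
                        Sources: []corev1.VolumeProjection{
                            {ServiceAccountToken: &corev1.ServiceAccountTokenProjection{Path: "token"}},
                        },
                    },
                },
            },
        }
        for _, v := range vols {
            fmt.Println("local volume, no CSI driver needed:", v.Name)
        }
    }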
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2p6qj" (UID: "541af556-5dce-45ed-bf9e-f6faf6b146ca") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:29 crc kubenswrapper[4853]: I1122 07:13:29.890046 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:29 crc kubenswrapper[4853]: E1122 07:13:29.890184 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:30.390163656 +0000 UTC m=+209.230786282 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:29 crc kubenswrapper[4853]: I1122 07:13:29.890616 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2p6qj\" (UID: \"541af556-5dce-45ed-bf9e-f6faf6b146ca\") " pod="openshift-image-registry/image-registry-697d97f7c8-2p6qj" Nov 22 07:13:29 crc kubenswrapper[4853]: E1122 07:13:29.891017 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:30.391008908 +0000 UTC m=+209.231631534 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2p6qj" (UID: "541af556-5dce-45ed-bf9e-f6faf6b146ca") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:29 crc kubenswrapper[4853]: I1122 07:13:29.891132 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 22 07:13:29 crc kubenswrapper[4853]: I1122 07:13:29.991727 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:29 crc kubenswrapper[4853]: E1122 07:13:29.992118 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:30.492037901 +0000 UTC m=+209.332660537 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:29 crc kubenswrapper[4853]: I1122 07:13:29.992517 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2p6qj\" (UID: \"541af556-5dce-45ed-bf9e-f6faf6b146ca\") " pod="openshift-image-registry/image-registry-697d97f7c8-2p6qj" Nov 22 07:13:29 crc kubenswrapper[4853]: E1122 07:13:29.993015 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:30.492995317 +0000 UTC m=+209.333617953 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2p6qj" (UID: "541af556-5dce-45ed-bf9e-f6faf6b146ca") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:30 crc kubenswrapper[4853]: I1122 07:13:30.093408 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:30 crc kubenswrapper[4853]: E1122 07:13:30.093720 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:30.59366782 +0000 UTC m=+209.434290446 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:30 crc kubenswrapper[4853]: I1122 07:13:30.094250 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2p6qj\" (UID: \"541af556-5dce-45ed-bf9e-f6faf6b146ca\") " pod="openshift-image-registry/image-registry-697d97f7c8-2p6qj" Nov 22 07:13:30 crc kubenswrapper[4853]: E1122 07:13:30.094949 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:30.594906773 +0000 UTC m=+209.435529399 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2p6qj" (UID: "541af556-5dce-45ed-bf9e-f6faf6b146ca") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:30 crc kubenswrapper[4853]: I1122 07:13:30.194893 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:30 crc kubenswrapper[4853]: E1122 07:13:30.195152 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:30.695106543 +0000 UTC m=+209.535729169 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:30 crc kubenswrapper[4853]: I1122 07:13:30.195573 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2p6qj\" (UID: \"541af556-5dce-45ed-bf9e-f6faf6b146ca\") " pod="openshift-image-registry/image-registry-697d97f7c8-2p6qj" Nov 22 07:13:30 crc kubenswrapper[4853]: E1122 07:13:30.195951 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:30.695936196 +0000 UTC m=+209.536558822 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2p6qj" (UID: "541af556-5dce-45ed-bf9e-f6faf6b146ca") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:30 crc kubenswrapper[4853]: I1122 07:13:30.296823 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:30 crc kubenswrapper[4853]: E1122 07:13:30.296986 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:30.796963539 +0000 UTC m=+209.637586165 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:30 crc kubenswrapper[4853]: I1122 07:13:30.297135 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2p6qj\" (UID: \"541af556-5dce-45ed-bf9e-f6faf6b146ca\") " pod="openshift-image-registry/image-registry-697d97f7c8-2p6qj" Nov 22 07:13:30 crc kubenswrapper[4853]: E1122 07:13:30.297512 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:30.797497743 +0000 UTC m=+209.638120369 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2p6qj" (UID: "541af556-5dce-45ed-bf9e-f6faf6b146ca") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:30 crc kubenswrapper[4853]: I1122 07:13:30.364184 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-xmnqz" Nov 22 07:13:30 crc kubenswrapper[4853]: I1122 07:13:30.374016 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-xmnqz" Nov 22 07:13:30 crc kubenswrapper[4853]: I1122 07:13:30.397868 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:30 crc kubenswrapper[4853]: E1122 07:13:30.398556 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:30.898512777 +0000 UTC m=+209.739135413 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:30 crc kubenswrapper[4853]: I1122 07:13:30.499574 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2p6qj\" (UID: \"541af556-5dce-45ed-bf9e-f6faf6b146ca\") " pod="openshift-image-registry/image-registry-697d97f7c8-2p6qj" Nov 22 07:13:30 crc kubenswrapper[4853]: E1122 07:13:30.500137 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:31.000116255 +0000 UTC m=+209.840738881 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2p6qj" (UID: "541af556-5dce-45ed-bf9e-f6faf6b146ca") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:30 crc kubenswrapper[4853]: I1122 07:13:30.601248 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:30 crc kubenswrapper[4853]: E1122 07:13:30.601490 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:31.101459527 +0000 UTC m=+209.942082153 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:30 crc kubenswrapper[4853]: I1122 07:13:30.601686 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2p6qj\" (UID: \"541af556-5dce-45ed-bf9e-f6faf6b146ca\") " pod="openshift-image-registry/image-registry-697d97f7c8-2p6qj" Nov 22 07:13:30 crc kubenswrapper[4853]: E1122 07:13:30.602050 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:31.102038133 +0000 UTC m=+209.942660759 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2p6qj" (UID: "541af556-5dce-45ed-bf9e-f6faf6b146ca") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:30 crc kubenswrapper[4853]: I1122 07:13:30.685497 4853 patch_prober.go:28] interesting pod/router-default-5444994796-h2jnh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 22 07:13:30 crc kubenswrapper[4853]: [-]has-synced failed: reason withheld Nov 22 07:13:30 crc kubenswrapper[4853]: [+]process-running ok Nov 22 07:13:30 crc kubenswrapper[4853]: healthz check failed Nov 22 07:13:30 crc kubenswrapper[4853]: I1122 07:13:30.685682 4853 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h2jnh" podUID="6c313448-9287-4014-b36e-ae4e14b9ee4e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 22 07:13:30 crc kubenswrapper[4853]: I1122 07:13:30.703998 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:30 crc kubenswrapper[4853]: E1122 07:13:30.704131 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:31.204102054 +0000 UTC m=+210.044724680 (durationBeforeRetry 500ms). 
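The recurring router-default startup-probe failure is an aggregated health check: the probe body lists each sub-check, with [-]backend-http and [-]has-synced failing (reasons withheld) while [+]process-running passes, and the endpoint returns HTTP 500 with "healthz check failed" until every sub-check is green. A minimal handler in the same spirit (not the router's actual code):

    // Aggregated healthz endpoint: each named check contributes a [+]/[-]
    // line, and any failure turns the whole response into HTTP 500.
    package main

    import (
        "fmt"
        "net/http"
    )

    type check struct {
        name string
        fn   func() error
    }

    func healthz(checks []check) http.HandlerFunc {
        return func(w http.ResponseWriter, r *http.Request) {
            body, failed := "", false
            for _, c := range checks {
                if err := c.fn(); err != nil {
                    failed = true
                    body += fmt.Sprintf("[-]%s failed: reason withheld\n", c.name)
                } else {
                    body += fmt.Sprintf("[+]%s ok\n", c.name)
                }
            }
            if failed {
                w.WriteHeader(http.StatusInternalServerError)
                body += "healthz check failed\n"
            }
            fmt.Fprint(w, body)
        }
    }

    func main() {
        checks := []check{
            {"backend-http", func() error { return fmt.Errorf("not ready") }},
            {"has-synced", func() error { return fmt.Errorf("not ready") }},
            {"process-running", func() error { return nil }},
        }
        http.Handle("/healthz", healthz(checks))
        _ = http.ListenAndServe(":8080", nil) // a probe GET /healthz would see 500
    }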
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:30 crc kubenswrapper[4853]: I1122 07:13:30.704777 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2p6qj\" (UID: \"541af556-5dce-45ed-bf9e-f6faf6b146ca\") " pod="openshift-image-registry/image-registry-697d97f7c8-2p6qj" Nov 22 07:13:30 crc kubenswrapper[4853]: E1122 07:13:30.705282 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:31.205258055 +0000 UTC m=+210.045880681 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2p6qj" (UID: "541af556-5dce-45ed-bf9e-f6faf6b146ca") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:30 crc kubenswrapper[4853]: I1122 07:13:30.806169 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:30 crc kubenswrapper[4853]: E1122 07:13:30.806342 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:31.306312379 +0000 UTC m=+210.146935005 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:30 crc kubenswrapper[4853]: I1122 07:13:30.806578 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2p6qj\" (UID: \"541af556-5dce-45ed-bf9e-f6faf6b146ca\") " pod="openshift-image-registry/image-registry-697d97f7c8-2p6qj" Nov 22 07:13:30 crc kubenswrapper[4853]: E1122 07:13:30.806945 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:31.306937226 +0000 UTC m=+210.147559852 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2p6qj" (UID: "541af556-5dce-45ed-bf9e-f6faf6b146ca") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:30 crc kubenswrapper[4853]: I1122 07:13:30.907195 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:30 crc kubenswrapper[4853]: E1122 07:13:30.907456 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:31.407406484 +0000 UTC m=+210.248029280 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:30 crc kubenswrapper[4853]: I1122 07:13:30.907657 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2p6qj\" (UID: \"541af556-5dce-45ed-bf9e-f6faf6b146ca\") " pod="openshift-image-registry/image-registry-697d97f7c8-2p6qj" Nov 22 07:13:30 crc kubenswrapper[4853]: E1122 07:13:30.908391 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:31.40838132 +0000 UTC m=+210.249003946 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2p6qj" (UID: "541af556-5dce-45ed-bf9e-f6faf6b146ca") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:31 crc kubenswrapper[4853]: I1122 07:13:31.008699 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:31 crc kubenswrapper[4853]: E1122 07:13:31.008903 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:31.508872809 +0000 UTC m=+210.349495435 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:31 crc kubenswrapper[4853]: I1122 07:13:31.009142 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2p6qj\" (UID: \"541af556-5dce-45ed-bf9e-f6faf6b146ca\") " pod="openshift-image-registry/image-registry-697d97f7c8-2p6qj" Nov 22 07:13:31 crc kubenswrapper[4853]: E1122 07:13:31.009586 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:31.509573678 +0000 UTC m=+210.350196304 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2p6qj" (UID: "541af556-5dce-45ed-bf9e-f6faf6b146ca") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:31 crc kubenswrapper[4853]: I1122 07:13:31.110234 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:31 crc kubenswrapper[4853]: E1122 07:13:31.110581 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:31.610554659 +0000 UTC m=+210.451177285 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:31 crc kubenswrapper[4853]: I1122 07:13:31.211670 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2p6qj\" (UID: \"541af556-5dce-45ed-bf9e-f6faf6b146ca\") " pod="openshift-image-registry/image-registry-697d97f7c8-2p6qj" Nov 22 07:13:31 crc kubenswrapper[4853]: E1122 07:13:31.212175 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:31.712148808 +0000 UTC m=+210.552771424 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2p6qj" (UID: "541af556-5dce-45ed-bf9e-f6faf6b146ca") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:31 crc kubenswrapper[4853]: I1122 07:13:31.297647 4853 patch_prober.go:28] interesting pod/machine-config-daemon-fflvd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 07:13:31 crc kubenswrapper[4853]: I1122 07:13:31.297781 4853 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 07:13:31 crc kubenswrapper[4853]: I1122 07:13:31.313440 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:31 crc kubenswrapper[4853]: E1122 07:13:31.313768 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:31.813711635 +0000 UTC m=+210.654334261 (durationBeforeRetry 500ms). 
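The machine-config-daemon liveness failure is a different mode from the router's: its probe cannot connect at all (dial tcp 127.0.0.1:8798: connect: connection refused), meaning nothing is listening on that port yet, whereas the router answers but reports 500. A small client-side sketch distinguishing the two outcomes (the URL is the one from the log; the probe logic is illustrative):

    // Two probe failure modes: a transport error (connection refused) versus
    // a reachable server returning a non-200 status.
    package main

    import (
        "fmt"
        "net/http"
    )

    func probe(url string) {
        resp, err := http.Get(url)
        if err != nil {
            fmt.Println("probe failed:", err) // e.g. dial tcp ...: connect: connection refused
            return
        }
        defer resp.Body.Close()
        if resp.StatusCode != http.StatusOK {
            fmt.Printf("HTTP probe failed with statuscode: %d\n", resp.StatusCode)
        }
    }

    func main() {
        probe("http://127.0.0.1:8798/health")
    }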
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:31 crc kubenswrapper[4853]: I1122 07:13:31.314222 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2p6qj\" (UID: \"541af556-5dce-45ed-bf9e-f6faf6b146ca\") " pod="openshift-image-registry/image-registry-697d97f7c8-2p6qj" Nov 22 07:13:31 crc kubenswrapper[4853]: E1122 07:13:31.314651 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:31.814642381 +0000 UTC m=+210.655265007 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2p6qj" (UID: "541af556-5dce-45ed-bf9e-f6faf6b146ca") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:31 crc kubenswrapper[4853]: I1122 07:13:31.376429 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-pqtsz" Nov 22 07:13:31 crc kubenswrapper[4853]: I1122 07:13:31.415508 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:31 crc kubenswrapper[4853]: E1122 07:13:31.415633 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:31.915608602 +0000 UTC m=+210.756231228 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:31 crc kubenswrapper[4853]: I1122 07:13:31.416579 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2p6qj\" (UID: \"541af556-5dce-45ed-bf9e-f6faf6b146ca\") " pod="openshift-image-registry/image-registry-697d97f7c8-2p6qj" Nov 22 07:13:31 crc kubenswrapper[4853]: E1122 07:13:31.417467 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:31.917457442 +0000 UTC m=+210.758080068 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2p6qj" (UID: "541af556-5dce-45ed-bf9e-f6faf6b146ca") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:31 crc kubenswrapper[4853]: I1122 07:13:31.517637 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:31 crc kubenswrapper[4853]: E1122 07:13:31.517979 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:32.01793823 +0000 UTC m=+210.858560856 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:31 crc kubenswrapper[4853]: I1122 07:13:31.619385 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2p6qj\" (UID: \"541af556-5dce-45ed-bf9e-f6faf6b146ca\") " pod="openshift-image-registry/image-registry-697d97f7c8-2p6qj" Nov 22 07:13:31 crc kubenswrapper[4853]: E1122 07:13:31.619837 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:32.119817237 +0000 UTC m=+210.960439863 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2p6qj" (UID: "541af556-5dce-45ed-bf9e-f6faf6b146ca") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:31 crc kubenswrapper[4853]: I1122 07:13:31.638630 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-w6jpc" event={"ID":"8c51e400-95dc-4b1b-ab28-e3f2e5780758","Type":"ContainerStarted","Data":"67cf2b65a3471cc6081f5a5c66287f59df4aaadbb804b61c6adb641d563a7e48"} Nov 22 07:13:31 crc kubenswrapper[4853]: I1122 07:13:31.684616 4853 patch_prober.go:28] interesting pod/router-default-5444994796-h2jnh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 22 07:13:31 crc kubenswrapper[4853]: [-]has-synced failed: reason withheld Nov 22 07:13:31 crc kubenswrapper[4853]: [+]process-running ok Nov 22 07:13:31 crc kubenswrapper[4853]: healthz check failed Nov 22 07:13:31 crc kubenswrapper[4853]: I1122 07:13:31.684693 4853 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h2jnh" podUID="6c313448-9287-4014-b36e-ae4e14b9ee4e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 22 07:13:31 crc kubenswrapper[4853]: I1122 07:13:31.720388 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:31 crc kubenswrapper[4853]: E1122 07:13:31.721136 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-11-22 07:13:32.221100056 +0000 UTC m=+211.061722692 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:31 crc kubenswrapper[4853]: I1122 07:13:31.807295 4853 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Nov 22 07:13:31 crc kubenswrapper[4853]: I1122 07:13:31.822205 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2p6qj\" (UID: \"541af556-5dce-45ed-bf9e-f6faf6b146ca\") " pod="openshift-image-registry/image-registry-697d97f7c8-2p6qj" Nov 22 07:13:31 crc kubenswrapper[4853]: E1122 07:13:31.822834 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:32.322809958 +0000 UTC m=+211.163432614 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2p6qj" (UID: "541af556-5dce-45ed-bf9e-f6faf6b146ca") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:31 crc kubenswrapper[4853]: I1122 07:13:31.923323 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:31 crc kubenswrapper[4853]: E1122 07:13:31.924232 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:32.424206201 +0000 UTC m=+211.264828847 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:32 crc kubenswrapper[4853]: I1122 07:13:32.024909 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2p6qj\" (UID: \"541af556-5dce-45ed-bf9e-f6faf6b146ca\") " pod="openshift-image-registry/image-registry-697d97f7c8-2p6qj" Nov 22 07:13:32 crc kubenswrapper[4853]: E1122 07:13:32.025364 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:32.525343458 +0000 UTC m=+211.365966124 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2p6qj" (UID: "541af556-5dce-45ed-bf9e-f6faf6b146ca") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:32 crc kubenswrapper[4853]: I1122 07:13:32.107524 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-99wl6" Nov 22 07:13:32 crc kubenswrapper[4853]: I1122 07:13:32.128034 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:32 crc kubenswrapper[4853]: E1122 07:13:32.128615 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:13:32.62859806 +0000 UTC m=+211.469220686 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:32 crc kubenswrapper[4853]: I1122 07:13:32.225575 4853 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2025-11-22T07:13:31.807330728Z","Handler":null,"Name":""} Nov 22 07:13:32 crc kubenswrapper[4853]: I1122 07:13:32.229906 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2p6qj\" (UID: \"541af556-5dce-45ed-bf9e-f6faf6b146ca\") " pod="openshift-image-registry/image-registry-697d97f7c8-2p6qj" Nov 22 07:13:32 crc kubenswrapper[4853]: E1122 07:13:32.230224 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:13:32.73021098 +0000 UTC m=+211.570833606 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-2p6qj" (UID: "541af556-5dce-45ed-bf9e-f6faf6b146ca") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:13:32 crc kubenswrapper[4853]: I1122 07:13:32.246317 4853 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Nov 22 07:13:32 crc kubenswrapper[4853]: I1122 07:13:32.246409 4853 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Nov 22 07:13:32 crc kubenswrapper[4853]: I1122 07:13:32.331129 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:13:32 crc kubenswrapper[4853]: I1122 07:13:32.336116 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Nov 22 07:13:32 crc kubenswrapper[4853]: I1122 07:13:32.433069 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2p6qj\" (UID: \"541af556-5dce-45ed-bf9e-f6faf6b146ca\") " pod="openshift-image-registry/image-registry-697d97f7c8-2p6qj" Nov 22 07:13:32 crc kubenswrapper[4853]: I1122 07:13:32.436258 4853 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Nov 22 07:13:32 crc kubenswrapper[4853]: I1122 07:13:32.436303 4853 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2p6qj\" (UID: \"541af556-5dce-45ed-bf9e-f6faf6b146ca\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-2p6qj" Nov 22 07:13:32 crc kubenswrapper[4853]: I1122 07:13:32.553074 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-2p6qj\" (UID: \"541af556-5dce-45ed-bf9e-f6faf6b146ca\") " pod="openshift-image-registry/image-registry-697d97f7c8-2p6qj" Nov 22 07:13:32 crc kubenswrapper[4853]: I1122 07:13:32.682247 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Nov 22 07:13:32 crc kubenswrapper[4853]: I1122 07:13:32.685326 4853 patch_prober.go:28] interesting pod/router-default-5444994796-h2jnh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 22 07:13:32 crc kubenswrapper[4853]: [+]has-synced ok Nov 22 07:13:32 crc kubenswrapper[4853]: [+]process-running ok Nov 22 07:13:32 crc kubenswrapper[4853]: healthz check failed Nov 22 07:13:32 crc kubenswrapper[4853]: I1122 07:13:32.685408 4853 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h2jnh" podUID="6c313448-9287-4014-b36e-ae4e14b9ee4e" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 22 07:13:32 crc kubenswrapper[4853]: I1122 07:13:32.688317 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-2p6qj" Nov 22 07:13:33 crc kubenswrapper[4853]: I1122 07:13:33.685435 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-h2jnh" Nov 22 07:13:33 crc kubenswrapper[4853]: I1122 07:13:33.688740 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-h2jnh" Nov 22 07:13:33 crc kubenswrapper[4853]: I1122 07:13:33.813518 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Nov 22 07:13:34 crc kubenswrapper[4853]: I1122 07:13:34.081065 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-2p6qj"] Nov 22 07:13:34 crc kubenswrapper[4853]: I1122 07:13:34.240442 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Nov 22 07:13:34 crc kubenswrapper[4853]: W1122 07:13:34.269363 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-poda423d848_68b1_49f1_af43_17d8f79c9562.slice/crio-8fe17c4232d3b04a34feb01f88653a6d41c2aedd46939b2795bc5c8f3e237605 WatchSource:0}: Error finding container 8fe17c4232d3b04a34feb01f88653a6d41c2aedd46939b2795bc5c8f3e237605: Status 404 returned error can't find the container with id 8fe17c4232d3b04a34feb01f88653a6d41c2aedd46939b2795bc5c8f3e237605 Nov 22 07:13:34 crc kubenswrapper[4853]: I1122 07:13:34.668119 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-2p6qj" event={"ID":"541af556-5dce-45ed-bf9e-f6faf6b146ca","Type":"ContainerStarted","Data":"f4f39b93f94d6246c83cd61360244d28ad7d33d8c88382c36531634d21d2027c"} Nov 22 07:13:34 crc kubenswrapper[4853]: I1122 07:13:34.668528 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-2p6qj" event={"ID":"541af556-5dce-45ed-bf9e-f6faf6b146ca","Type":"ContainerStarted","Data":"90a058e804145bfe1168c745466a923a062ec5370e5ed9af59db6a62a529e8ae"} Nov 22 07:13:34 crc kubenswrapper[4853]: I1122 07:13:34.671843 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-w6jpc" event={"ID":"8c51e400-95dc-4b1b-ab28-e3f2e5780758","Type":"ContainerStarted","Data":"698e526d1ef2a02df9d6283894fa5c94eec68e85b683091cea4860a95b480d0b"} Nov 22 07:13:34 crc kubenswrapper[4853]: I1122 07:13:34.674238 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"a423d848-68b1-49f1-af43-17d8f79c9562","Type":"ContainerStarted","Data":"8fe17c4232d3b04a34feb01f88653a6d41c2aedd46939b2795bc5c8f3e237605"} Nov 22 07:13:35 crc kubenswrapper[4853]: I1122 07:13:35.378266 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-5nds5" Nov 22 07:13:35 crc kubenswrapper[4853]: I1122 07:13:35.383600 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-5nds5" Nov 22 07:13:35 crc kubenswrapper[4853]: I1122 07:13:35.549559 4853 patch_prober.go:28] interesting pod/downloads-7954f5f757-hpb7j container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.24:8080/\": dial tcp 10.217.0.24:8080: 
connect: connection refused" start-of-body= Nov 22 07:13:35 crc kubenswrapper[4853]: I1122 07:13:35.549645 4853 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-hpb7j" podUID="bcd72804-cd09-4ec3-ae4a-f539958eb90c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.24:8080/\": dial tcp 10.217.0.24:8080: connect: connection refused" Nov 22 07:13:35 crc kubenswrapper[4853]: I1122 07:13:35.549827 4853 patch_prober.go:28] interesting pod/downloads-7954f5f757-hpb7j container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.24:8080/\": dial tcp 10.217.0.24:8080: connect: connection refused" start-of-body= Nov 22 07:13:35 crc kubenswrapper[4853]: I1122 07:13:35.549861 4853 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-hpb7j" podUID="bcd72804-cd09-4ec3-ae4a-f539958eb90c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.24:8080/\": dial tcp 10.217.0.24:8080: connect: connection refused" Nov 22 07:13:35 crc kubenswrapper[4853]: I1122 07:13:35.689153 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"a423d848-68b1-49f1-af43-17d8f79c9562","Type":"ContainerStarted","Data":"0f47b7aff566e687beaadac059e8999484774461cc01c981433c73efb6f2a905"} Nov 22 07:13:35 crc kubenswrapper[4853]: I1122 07:13:35.696041 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-w6jpc" event={"ID":"8c51e400-95dc-4b1b-ab28-e3f2e5780758","Type":"ContainerStarted","Data":"6f39a504e3e1b44c92e779dde97976aea668ca5a1dcbdf47fca2251bba42f76a"} Nov 22 07:13:35 crc kubenswrapper[4853]: I1122 07:13:35.712660 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=6.712628392 podStartE2EDuration="6.712628392s" podCreationTimestamp="2025-11-22 07:13:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:13:35.704550223 +0000 UTC m=+214.545172849" watchObservedRunningTime="2025-11-22 07:13:35.712628392 +0000 UTC m=+214.553251018" Nov 22 07:13:35 crc kubenswrapper[4853]: I1122 07:13:35.729595 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-w6jpc" podStartSLOduration=22.729572062 podStartE2EDuration="22.729572062s" podCreationTimestamp="2025-11-22 07:13:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:13:35.726105419 +0000 UTC m=+214.566728045" watchObservedRunningTime="2025-11-22 07:13:35.729572062 +0000 UTC m=+214.570194688" Nov 22 07:13:35 crc kubenswrapper[4853]: I1122 07:13:35.750462 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-2p6qj" podStartSLOduration=155.750410658 podStartE2EDuration="2m35.750410658s" podCreationTimestamp="2025-11-22 07:11:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:13:35.748949819 +0000 UTC m=+214.589572445" watchObservedRunningTime="2025-11-22 07:13:35.750410658 +0000 UTC m=+214.591033284" Nov 22 07:13:36 crc kubenswrapper[4853]: I1122 
07:13:36.706621 4853 generic.go:334] "Generic (PLEG): container finished" podID="a423d848-68b1-49f1-af43-17d8f79c9562" containerID="0f47b7aff566e687beaadac059e8999484774461cc01c981433c73efb6f2a905" exitCode=0 Nov 22 07:13:36 crc kubenswrapper[4853]: I1122 07:13:36.706724 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"a423d848-68b1-49f1-af43-17d8f79c9562","Type":"ContainerDied","Data":"0f47b7aff566e687beaadac059e8999484774461cc01c981433c73efb6f2a905"} Nov 22 07:13:38 crc kubenswrapper[4853]: I1122 07:13:38.042665 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 22 07:13:38 crc kubenswrapper[4853]: I1122 07:13:38.149094 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a423d848-68b1-49f1-af43-17d8f79c9562-kube-api-access\") pod \"a423d848-68b1-49f1-af43-17d8f79c9562\" (UID: \"a423d848-68b1-49f1-af43-17d8f79c9562\") " Nov 22 07:13:38 crc kubenswrapper[4853]: I1122 07:13:38.149665 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a423d848-68b1-49f1-af43-17d8f79c9562-kubelet-dir\") pod \"a423d848-68b1-49f1-af43-17d8f79c9562\" (UID: \"a423d848-68b1-49f1-af43-17d8f79c9562\") " Nov 22 07:13:38 crc kubenswrapper[4853]: I1122 07:13:38.149806 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a423d848-68b1-49f1-af43-17d8f79c9562-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "a423d848-68b1-49f1-af43-17d8f79c9562" (UID: "a423d848-68b1-49f1-af43-17d8f79c9562"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 22 07:13:38 crc kubenswrapper[4853]: I1122 07:13:38.150420 4853 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a423d848-68b1-49f1-af43-17d8f79c9562-kubelet-dir\") on node \"crc\" DevicePath \"\"" Nov 22 07:13:38 crc kubenswrapper[4853]: I1122 07:13:38.183083 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a423d848-68b1-49f1-af43-17d8f79c9562-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "a423d848-68b1-49f1-af43-17d8f79c9562" (UID: "a423d848-68b1-49f1-af43-17d8f79c9562"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:13:38 crc kubenswrapper[4853]: I1122 07:13:38.252508 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a423d848-68b1-49f1-af43-17d8f79c9562-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 22 07:13:38 crc kubenswrapper[4853]: I1122 07:13:38.747586 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"a423d848-68b1-49f1-af43-17d8f79c9562","Type":"ContainerDied","Data":"8fe17c4232d3b04a34feb01f88653a6d41c2aedd46939b2795bc5c8f3e237605"} Nov 22 07:13:38 crc kubenswrapper[4853]: I1122 07:13:38.747638 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8fe17c4232d3b04a34feb01f88653a6d41c2aedd46939b2795bc5c8f3e237605" Nov 22 07:13:38 crc kubenswrapper[4853]: I1122 07:13:38.747782 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 22 07:13:42 crc kubenswrapper[4853]: I1122 07:13:42.689016 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-2p6qj" Nov 22 07:13:45 crc kubenswrapper[4853]: I1122 07:13:45.559453 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-hpb7j" Nov 22 07:13:52 crc kubenswrapper[4853]: I1122 07:13:52.696663 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-2p6qj" Nov 22 07:13:54 crc kubenswrapper[4853]: E1122 07:13:54.617725 4853 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Nov 22 07:13:54 crc kubenswrapper[4853]: E1122 07:13:54.617991 4853 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4lg2h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-b4zvh_openshift-marketplace(30996d2a-faed-48ba-80d6-d86b88fd5282): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 22 07:13:54 crc kubenswrapper[4853]: E1122 07:13:54.621862 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-b4zvh" podUID="30996d2a-faed-48ba-80d6-d86b88fd5282" Nov 22 07:13:56 crc kubenswrapper[4853]: I1122 07:13:56.071532 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2stwm" Nov 22 07:13:59 crc kubenswrapper[4853]: E1122 
07:13:59.898341 4853 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Nov 22 07:13:59 crc kubenswrapper[4853]: E1122 07:13:59.898954 4853 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-67rjz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-k7t4t_openshift-marketplace(30cbccc7-41e5-46d2-b805-bbb03b8bb67c): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 22 07:13:59 crc kubenswrapper[4853]: E1122 07:13:59.900190 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-k7t4t" podUID="30cbccc7-41e5-46d2-b805-bbb03b8bb67c" Nov 22 07:14:01 crc kubenswrapper[4853]: I1122 07:14:01.297851 4853 patch_prober.go:28] interesting pod/machine-config-daemon-fflvd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 07:14:01 crc kubenswrapper[4853]: I1122 07:14:01.299198 4853 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 07:14:18 crc kubenswrapper[4853]: E1122 07:14:18.155614 4853 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context 
canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Nov 22 07:14:18 crc kubenswrapper[4853]: E1122 07:14:18.156819 4853 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jfvfv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-4z6bc_openshift-marketplace(6240b5f2-c1bb-4478-8935-b2579e37e8af): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 22 07:14:18 crc kubenswrapper[4853]: E1122 07:14:18.158031 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-4z6bc" podUID="6240b5f2-c1bb-4478-8935-b2579e37e8af" Nov 22 07:14:18 crc kubenswrapper[4853]: E1122 07:14:18.191702 4853 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Nov 22 07:14:18 crc kubenswrapper[4853]: E1122 07:14:18.191958 4853 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hctnl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-sk8bz_openshift-marketplace(a81b49b7-c4a0-4397-8524-ffaa67583496): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 22 07:14:18 crc kubenswrapper[4853]: E1122 07:14:18.193129 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-sk8bz" podUID="a81b49b7-c4a0-4397-8524-ffaa67583496" Nov 22 07:14:18 crc kubenswrapper[4853]: E1122 07:14:18.194391 4853 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Nov 22 07:14:18 crc kubenswrapper[4853]: E1122 07:14:18.194597 4853 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ghkhc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-wfjvm_openshift-marketplace(34bd417d-67dc-4eb8-be82-c0e268ae3cd6): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 22 07:14:18 crc kubenswrapper[4853]: E1122 07:14:18.195799 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-wfjvm" podUID="34bd417d-67dc-4eb8-be82-c0e268ae3cd6" Nov 22 07:14:21 crc kubenswrapper[4853]: E1122 07:14:21.542618 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-sk8bz" podUID="a81b49b7-c4a0-4397-8524-ffaa67583496" Nov 22 07:14:21 crc kubenswrapper[4853]: E1122 07:14:21.603637 4853 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Nov 22 07:14:21 crc kubenswrapper[4853]: E1122 07:14:21.603876 4853 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-k4wfq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-fx9sl_openshift-marketplace(5a52e070-929c-4194-8197-d66d88780fdc): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 22 07:14:21 crc kubenswrapper[4853]: E1122 07:14:21.605294 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-fx9sl" podUID="5a52e070-929c-4194-8197-d66d88780fdc" Nov 22 07:14:21 crc kubenswrapper[4853]: E1122 07:14:21.648420 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-fx9sl" podUID="5a52e070-929c-4194-8197-d66d88780fdc" Nov 22 07:14:21 crc kubenswrapper[4853]: E1122 07:14:21.729246 4853 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Nov 22 07:14:21 crc kubenswrapper[4853]: E1122 07:14:21.729903 4853 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cjggf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-klwzw_openshift-marketplace(ce89388a-728c-4afc-b155-2813e35a8413): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 22 07:14:21 crc kubenswrapper[4853]: E1122 07:14:21.731429 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-klwzw" podUID="ce89388a-728c-4afc-b155-2813e35a8413" Nov 22 07:14:22 crc kubenswrapper[4853]: I1122 07:14:22.642555 4853 generic.go:334] "Generic (PLEG): container finished" podID="30996d2a-faed-48ba-80d6-d86b88fd5282" containerID="401761bdae17a38cd53e5a9cac4c052a5f6a87dde696dcc7c1fd8d39d30b6bc6" exitCode=0 Nov 22 07:14:22 crc kubenswrapper[4853]: I1122 07:14:22.642783 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b4zvh" event={"ID":"30996d2a-faed-48ba-80d6-d86b88fd5282","Type":"ContainerDied","Data":"401761bdae17a38cd53e5a9cac4c052a5f6a87dde696dcc7c1fd8d39d30b6bc6"} Nov 22 07:14:22 crc kubenswrapper[4853]: I1122 07:14:22.645655 4853 generic.go:334] "Generic (PLEG): container finished" podID="30cbccc7-41e5-46d2-b805-bbb03b8bb67c" containerID="e4ea890fbda26c7ce4cb13e6c78064287ff7c452d59daba5b436d0ed101d21b6" exitCode=0 Nov 22 07:14:22 crc kubenswrapper[4853]: I1122 07:14:22.645733 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k7t4t" event={"ID":"30cbccc7-41e5-46d2-b805-bbb03b8bb67c","Type":"ContainerDied","Data":"e4ea890fbda26c7ce4cb13e6c78064287ff7c452d59daba5b436d0ed101d21b6"} Nov 22 07:14:22 crc kubenswrapper[4853]: I1122 07:14:22.648808 4853 generic.go:334] "Generic (PLEG): container finished" podID="3daf1927-a46c-4be1-ace4-f62d448fb994" containerID="ad140e888f74e50133625d29b0cd5503cdacd705ef6d777395b0631a3651587e" exitCode=0 Nov 22 07:14:22 crc kubenswrapper[4853]: I1122 07:14:22.648927 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9gwch" 
event={"ID":"3daf1927-a46c-4be1-ace4-f62d448fb994","Type":"ContainerDied","Data":"ad140e888f74e50133625d29b0cd5503cdacd705ef6d777395b0631a3651587e"} Nov 22 07:14:22 crc kubenswrapper[4853]: E1122 07:14:22.652192 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-klwzw" podUID="ce89388a-728c-4afc-b155-2813e35a8413" Nov 22 07:14:23 crc kubenswrapper[4853]: I1122 07:14:23.656893 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k7t4t" event={"ID":"30cbccc7-41e5-46d2-b805-bbb03b8bb67c","Type":"ContainerStarted","Data":"abf906d3ce7e9b6a605b6d0a2ead1c9f08dcc654f83c17f02b28628c11fe7044"} Nov 22 07:14:23 crc kubenswrapper[4853]: I1122 07:14:23.660329 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9gwch" event={"ID":"3daf1927-a46c-4be1-ace4-f62d448fb994","Type":"ContainerStarted","Data":"e34441b2dbd38b5f28a7f4d07b19e6449122a161d9dfc853cba3b144cf86ff33"} Nov 22 07:14:23 crc kubenswrapper[4853]: I1122 07:14:23.666412 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b4zvh" event={"ID":"30996d2a-faed-48ba-80d6-d86b88fd5282","Type":"ContainerStarted","Data":"b9576d80a7d84ec9df7764b6890e954222207aca2744438d6198a0e34e8e2631"} Nov 22 07:14:23 crc kubenswrapper[4853]: I1122 07:14:23.717022 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-k7t4t" podStartSLOduration=6.046905838 podStartE2EDuration="1m3.716993391s" podCreationTimestamp="2025-11-22 07:13:20 +0000 UTC" firstStartedPulling="2025-11-22 07:13:25.570972663 +0000 UTC m=+204.411595289" lastFinishedPulling="2025-11-22 07:14:23.241060206 +0000 UTC m=+262.081682842" observedRunningTime="2025-11-22 07:14:23.681925848 +0000 UTC m=+262.522548484" watchObservedRunningTime="2025-11-22 07:14:23.716993391 +0000 UTC m=+262.557616017" Nov 22 07:14:23 crc kubenswrapper[4853]: I1122 07:14:23.720576 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-b4zvh" podStartSLOduration=6.107780151 podStartE2EDuration="1m3.720548661s" podCreationTimestamp="2025-11-22 07:13:20 +0000 UTC" firstStartedPulling="2025-11-22 07:13:25.575330481 +0000 UTC m=+204.415953107" lastFinishedPulling="2025-11-22 07:14:23.188098991 +0000 UTC m=+262.028721617" observedRunningTime="2025-11-22 07:14:23.713407131 +0000 UTC m=+262.554029767" watchObservedRunningTime="2025-11-22 07:14:23.720548661 +0000 UTC m=+262.561171367" Nov 22 07:14:23 crc kubenswrapper[4853]: I1122 07:14:23.737544 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-9gwch" podStartSLOduration=6.01138869 podStartE2EDuration="1m3.737515217s" podCreationTimestamp="2025-11-22 07:13:20 +0000 UTC" firstStartedPulling="2025-11-22 07:13:25.573957104 +0000 UTC m=+204.414579730" lastFinishedPulling="2025-11-22 07:14:23.300083641 +0000 UTC m=+262.140706257" observedRunningTime="2025-11-22 07:14:23.734333628 +0000 UTC m=+262.574956274" watchObservedRunningTime="2025-11-22 07:14:23.737515217 +0000 UTC m=+262.578137853" Nov 22 07:14:30 crc kubenswrapper[4853]: I1122 07:14:30.627611 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/certified-operators-b4zvh" Nov 22 07:14:30 crc kubenswrapper[4853]: I1122 07:14:30.629091 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-b4zvh" Nov 22 07:14:30 crc kubenswrapper[4853]: I1122 07:14:30.709710 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wfjvm" event={"ID":"34bd417d-67dc-4eb8-be82-c0e268ae3cd6","Type":"ContainerStarted","Data":"57910b0b75c7ed961ac9b8a0d93c1b2693b48bc83d4212c4005c58a6931974bf"} Nov 22 07:14:30 crc kubenswrapper[4853]: I1122 07:14:30.744202 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-9gwch" Nov 22 07:14:30 crc kubenswrapper[4853]: I1122 07:14:30.744270 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-9gwch" Nov 22 07:14:30 crc kubenswrapper[4853]: I1122 07:14:30.944390 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-b4zvh" Nov 22 07:14:30 crc kubenswrapper[4853]: I1122 07:14:30.946294 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-9gwch" Nov 22 07:14:30 crc kubenswrapper[4853]: I1122 07:14:30.992500 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-b4zvh" Nov 22 07:14:31 crc kubenswrapper[4853]: I1122 07:14:31.026690 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-k7t4t" Nov 22 07:14:31 crc kubenswrapper[4853]: I1122 07:14:31.027797 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-k7t4t" Nov 22 07:14:31 crc kubenswrapper[4853]: I1122 07:14:31.068705 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-k7t4t" Nov 22 07:14:31 crc kubenswrapper[4853]: I1122 07:14:31.297412 4853 patch_prober.go:28] interesting pod/machine-config-daemon-fflvd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 07:14:31 crc kubenswrapper[4853]: I1122 07:14:31.297484 4853 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 07:14:31 crc kubenswrapper[4853]: I1122 07:14:31.297545 4853 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" Nov 22 07:14:31 crc kubenswrapper[4853]: I1122 07:14:31.298272 4853 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"28cf28a4f0e05df5ad55eff2ab13e375fb1d5725f1d0e3b85c7c9cd785cf4453"} pod="openshift-machine-config-operator/machine-config-daemon-fflvd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 22 07:14:31 crc kubenswrapper[4853]: I1122 07:14:31.298339 4853 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" containerName="machine-config-daemon" containerID="cri-o://28cf28a4f0e05df5ad55eff2ab13e375fb1d5725f1d0e3b85c7c9cd785cf4453" gracePeriod=600 Nov 22 07:14:31 crc kubenswrapper[4853]: I1122 07:14:31.717585 4853 generic.go:334] "Generic (PLEG): container finished" podID="476c875a-2b87-419a-8042-0ba059620fd8" containerID="28cf28a4f0e05df5ad55eff2ab13e375fb1d5725f1d0e3b85c7c9cd785cf4453" exitCode=0 Nov 22 07:14:31 crc kubenswrapper[4853]: I1122 07:14:31.717816 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" event={"ID":"476c875a-2b87-419a-8042-0ba059620fd8","Type":"ContainerDied","Data":"28cf28a4f0e05df5ad55eff2ab13e375fb1d5725f1d0e3b85c7c9cd785cf4453"} Nov 22 07:14:31 crc kubenswrapper[4853]: I1122 07:14:31.718987 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" event={"ID":"476c875a-2b87-419a-8042-0ba059620fd8","Type":"ContainerStarted","Data":"1534e0876d5be06d823b8de17b8b10504cf7555aab496f4dc301e85f1b2d8572"} Nov 22 07:14:31 crc kubenswrapper[4853]: I1122 07:14:31.721714 4853 generic.go:334] "Generic (PLEG): container finished" podID="34bd417d-67dc-4eb8-be82-c0e268ae3cd6" containerID="57910b0b75c7ed961ac9b8a0d93c1b2693b48bc83d4212c4005c58a6931974bf" exitCode=0 Nov 22 07:14:31 crc kubenswrapper[4853]: I1122 07:14:31.721952 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wfjvm" event={"ID":"34bd417d-67dc-4eb8-be82-c0e268ae3cd6","Type":"ContainerDied","Data":"57910b0b75c7ed961ac9b8a0d93c1b2693b48bc83d4212c4005c58a6931974bf"} Nov 22 07:14:31 crc kubenswrapper[4853]: I1122 07:14:31.724906 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4z6bc" event={"ID":"6240b5f2-c1bb-4478-8935-b2579e37e8af","Type":"ContainerStarted","Data":"fa813139d0e11e491f6420a1067ba3a563f8d898194b8c5244c98e097c4b9e5f"} Nov 22 07:14:31 crc kubenswrapper[4853]: I1122 07:14:31.775461 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-k7t4t" Nov 22 07:14:31 crc kubenswrapper[4853]: I1122 07:14:31.783725 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-9gwch" Nov 22 07:14:32 crc kubenswrapper[4853]: I1122 07:14:32.744011 4853 generic.go:334] "Generic (PLEG): container finished" podID="6240b5f2-c1bb-4478-8935-b2579e37e8af" containerID="fa813139d0e11e491f6420a1067ba3a563f8d898194b8c5244c98e097c4b9e5f" exitCode=0 Nov 22 07:14:32 crc kubenswrapper[4853]: I1122 07:14:32.744225 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4z6bc" event={"ID":"6240b5f2-c1bb-4478-8935-b2579e37e8af","Type":"ContainerDied","Data":"fa813139d0e11e491f6420a1067ba3a563f8d898194b8c5244c98e097c4b9e5f"} Nov 22 07:14:33 crc kubenswrapper[4853]: I1122 07:14:33.387554 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-k7t4t"] Nov 22 07:14:33 crc kubenswrapper[4853]: I1122 07:14:33.755885 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wfjvm" 
event={"ID":"34bd417d-67dc-4eb8-be82-c0e268ae3cd6","Type":"ContainerStarted","Data":"06e54fa25ae91cfbff78e5b0242d8cec1e9cc57c4a0246ed071cbbc1f618ab39"} Nov 22 07:14:33 crc kubenswrapper[4853]: I1122 07:14:33.795519 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-wfjvm" podStartSLOduration=6.243463125 podStartE2EDuration="1m11.795492839s" podCreationTimestamp="2025-11-22 07:13:22 +0000 UTC" firstStartedPulling="2025-11-22 07:13:26.587980446 +0000 UTC m=+205.428603072" lastFinishedPulling="2025-11-22 07:14:32.14001016 +0000 UTC m=+270.980632786" observedRunningTime="2025-11-22 07:14:33.793048521 +0000 UTC m=+272.633671147" watchObservedRunningTime="2025-11-22 07:14:33.795492839 +0000 UTC m=+272.636115465" Nov 22 07:14:34 crc kubenswrapper[4853]: I1122 07:14:34.760111 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-k7t4t" podUID="30cbccc7-41e5-46d2-b805-bbb03b8bb67c" containerName="registry-server" containerID="cri-o://abf906d3ce7e9b6a605b6d0a2ead1c9f08dcc654f83c17f02b28628c11fe7044" gracePeriod=2 Nov 22 07:14:35 crc kubenswrapper[4853]: I1122 07:14:35.789773 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9gwch"] Nov 22 07:14:35 crc kubenswrapper[4853]: I1122 07:14:35.790717 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-9gwch" podUID="3daf1927-a46c-4be1-ace4-f62d448fb994" containerName="registry-server" containerID="cri-o://e34441b2dbd38b5f28a7f4d07b19e6449122a161d9dfc853cba3b144cf86ff33" gracePeriod=2 Nov 22 07:14:36 crc kubenswrapper[4853]: I1122 07:14:36.777784 4853 generic.go:334] "Generic (PLEG): container finished" podID="30cbccc7-41e5-46d2-b805-bbb03b8bb67c" containerID="abf906d3ce7e9b6a605b6d0a2ead1c9f08dcc654f83c17f02b28628c11fe7044" exitCode=0 Nov 22 07:14:36 crc kubenswrapper[4853]: I1122 07:14:36.777987 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k7t4t" event={"ID":"30cbccc7-41e5-46d2-b805-bbb03b8bb67c","Type":"ContainerDied","Data":"abf906d3ce7e9b6a605b6d0a2ead1c9f08dcc654f83c17f02b28628c11fe7044"} Nov 22 07:14:36 crc kubenswrapper[4853]: I1122 07:14:36.781343 4853 generic.go:334] "Generic (PLEG): container finished" podID="3daf1927-a46c-4be1-ace4-f62d448fb994" containerID="e34441b2dbd38b5f28a7f4d07b19e6449122a161d9dfc853cba3b144cf86ff33" exitCode=0 Nov 22 07:14:36 crc kubenswrapper[4853]: I1122 07:14:36.781377 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9gwch" event={"ID":"3daf1927-a46c-4be1-ace4-f62d448fb994","Type":"ContainerDied","Data":"e34441b2dbd38b5f28a7f4d07b19e6449122a161d9dfc853cba3b144cf86ff33"} Nov 22 07:14:36 crc kubenswrapper[4853]: I1122 07:14:36.912635 4853 util.go:48] "No ready sandbox for pod can be found. 
Nov 22 07:14:36 crc kubenswrapper[4853]: I1122 07:14:36.912635 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-k7t4t"
Nov 22 07:14:37 crc kubenswrapper[4853]: I1122 07:14:37.029001 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-67rjz\" (UniqueName: \"kubernetes.io/projected/30cbccc7-41e5-46d2-b805-bbb03b8bb67c-kube-api-access-67rjz\") pod \"30cbccc7-41e5-46d2-b805-bbb03b8bb67c\" (UID: \"30cbccc7-41e5-46d2-b805-bbb03b8bb67c\") "
Nov 22 07:14:37 crc kubenswrapper[4853]: I1122 07:14:37.029084 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/30cbccc7-41e5-46d2-b805-bbb03b8bb67c-catalog-content\") pod \"30cbccc7-41e5-46d2-b805-bbb03b8bb67c\" (UID: \"30cbccc7-41e5-46d2-b805-bbb03b8bb67c\") "
Nov 22 07:14:37 crc kubenswrapper[4853]: I1122 07:14:37.029219 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/30cbccc7-41e5-46d2-b805-bbb03b8bb67c-utilities\") pod \"30cbccc7-41e5-46d2-b805-bbb03b8bb67c\" (UID: \"30cbccc7-41e5-46d2-b805-bbb03b8bb67c\") "
Nov 22 07:14:37 crc kubenswrapper[4853]: I1122 07:14:37.031152 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/30cbccc7-41e5-46d2-b805-bbb03b8bb67c-utilities" (OuterVolumeSpecName: "utilities") pod "30cbccc7-41e5-46d2-b805-bbb03b8bb67c" (UID: "30cbccc7-41e5-46d2-b805-bbb03b8bb67c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 22 07:14:37 crc kubenswrapper[4853]: I1122 07:14:37.040521 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30cbccc7-41e5-46d2-b805-bbb03b8bb67c-kube-api-access-67rjz" (OuterVolumeSpecName: "kube-api-access-67rjz") pod "30cbccc7-41e5-46d2-b805-bbb03b8bb67c" (UID: "30cbccc7-41e5-46d2-b805-bbb03b8bb67c"). InnerVolumeSpecName "kube-api-access-67rjz". PluginName "kubernetes.io/projected", VolumeGidValue ""
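
The unmount entries above show the standard three-stage volume teardown for a deleted pod: reconciler_common.go:159 records "UnmountVolume started" for each volume still credited to the pod UID, operation_generator.go:803 confirms "UnmountVolume.TearDown succeeded" per volume, and reconciler_common.go:293 later reports "Volume detached" once the volume is gone from the actual state of world. A small std-lib Go sketch (hypothetical helper, assuming journal text in the shape shown here) that pulls that per-volume progression out of a saved log on stdin:

    // volstages.go: group kubelet journal lines by volume display name and
    // record which teardown stages were seen. The regexes tolerate both the
    // escaped (\") and unescaped (") quoting styles in the entries above.
    package main

    import (
        "bufio"
        "fmt"
        "os"
        "regexp"
    )

    var stages = map[string]*regexp.Regexp{
        "unmount-started":    regexp.MustCompile(`UnmountVolume started for volume \\?"([^"\\]+)\\?"`),
        "teardown-succeeded": regexp.MustCompile(`OuterVolumeSpecName: \\?"([^"\\]+)\\?"`),
        "detached":           regexp.MustCompile(`Volume detached for volume \\?"([^"\\]+)\\?"`),
    }

    func main() {
        seen := map[string][]string{} // volume name -> stages observed
        sc := bufio.NewScanner(os.Stdin)
        sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines can be long
        for sc.Scan() {
            for name, re := range stages {
                if m := re.FindStringSubmatch(sc.Text()); m != nil {
                    seen[m[1]] = append(seen[m[1]], name)
                }
            }
        }
        for vol, s := range seen {
            fmt.Println(vol, s)
        }
    }

Fed the lines above, each of utilities, catalog-content and kube-api-access-67rjz should accumulate all three stages; a volume stuck at "unmount-started" with no "detached" would be the thing to chase.
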
Nov 22 07:14:37 crc kubenswrapper[4853]: I1122 07:14:37.113029 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9gwch"
Nov 22 07:14:37 crc kubenswrapper[4853]: I1122 07:14:37.135888 4853 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/30cbccc7-41e5-46d2-b805-bbb03b8bb67c-utilities\") on node \"crc\" DevicePath \"\""
Nov 22 07:14:37 crc kubenswrapper[4853]: I1122 07:14:37.135938 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-67rjz\" (UniqueName: \"kubernetes.io/projected/30cbccc7-41e5-46d2-b805-bbb03b8bb67c-kube-api-access-67rjz\") on node \"crc\" DevicePath \"\""
Nov 22 07:14:37 crc kubenswrapper[4853]: I1122 07:14:37.237695 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3daf1927-a46c-4be1-ace4-f62d448fb994-catalog-content\") pod \"3daf1927-a46c-4be1-ace4-f62d448fb994\" (UID: \"3daf1927-a46c-4be1-ace4-f62d448fb994\") "
Nov 22 07:14:37 crc kubenswrapper[4853]: I1122 07:14:37.237894 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-svrzp\" (UniqueName: \"kubernetes.io/projected/3daf1927-a46c-4be1-ace4-f62d448fb994-kube-api-access-svrzp\") pod \"3daf1927-a46c-4be1-ace4-f62d448fb994\" (UID: \"3daf1927-a46c-4be1-ace4-f62d448fb994\") "
Nov 22 07:14:37 crc kubenswrapper[4853]: I1122 07:14:37.237951 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3daf1927-a46c-4be1-ace4-f62d448fb994-utilities\") pod \"3daf1927-a46c-4be1-ace4-f62d448fb994\" (UID: \"3daf1927-a46c-4be1-ace4-f62d448fb994\") "
Nov 22 07:14:37 crc kubenswrapper[4853]: I1122 07:14:37.238654 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3daf1927-a46c-4be1-ace4-f62d448fb994-utilities" (OuterVolumeSpecName: "utilities") pod "3daf1927-a46c-4be1-ace4-f62d448fb994" (UID: "3daf1927-a46c-4be1-ace4-f62d448fb994"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 22 07:14:37 crc kubenswrapper[4853]: I1122 07:14:37.240828 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3daf1927-a46c-4be1-ace4-f62d448fb994-kube-api-access-svrzp" (OuterVolumeSpecName: "kube-api-access-svrzp") pod "3daf1927-a46c-4be1-ace4-f62d448fb994" (UID: "3daf1927-a46c-4be1-ace4-f62d448fb994"). InnerVolumeSpecName "kube-api-access-svrzp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 22 07:14:37 crc kubenswrapper[4853]: I1122 07:14:37.339403 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-svrzp\" (UniqueName: \"kubernetes.io/projected/3daf1927-a46c-4be1-ace4-f62d448fb994-kube-api-access-svrzp\") on node \"crc\" DevicePath \"\""
Nov 22 07:14:37 crc kubenswrapper[4853]: I1122 07:14:37.339452 4853 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3daf1927-a46c-4be1-ace4-f62d448fb994-utilities\") on node \"crc\" DevicePath \"\""
Nov 22 07:14:37 crc kubenswrapper[4853]: I1122 07:14:37.433350 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/30cbccc7-41e5-46d2-b805-bbb03b8bb67c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "30cbccc7-41e5-46d2-b805-bbb03b8bb67c" (UID: "30cbccc7-41e5-46d2-b805-bbb03b8bb67c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:14:37 crc kubenswrapper[4853]: I1122 07:14:37.440601 4853 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/30cbccc7-41e5-46d2-b805-bbb03b8bb67c-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 07:14:37 crc kubenswrapper[4853]: I1122 07:14:37.790878 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k7t4t" event={"ID":"30cbccc7-41e5-46d2-b805-bbb03b8bb67c","Type":"ContainerDied","Data":"b98febe14372212dfa466687e7db671005c266bac32bbe15b78e216f0b785da1"} Nov 22 07:14:37 crc kubenswrapper[4853]: I1122 07:14:37.790948 4853 scope.go:117] "RemoveContainer" containerID="abf906d3ce7e9b6a605b6d0a2ead1c9f08dcc654f83c17f02b28628c11fe7044" Nov 22 07:14:37 crc kubenswrapper[4853]: I1122 07:14:37.790965 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-k7t4t" Nov 22 07:14:37 crc kubenswrapper[4853]: I1122 07:14:37.795035 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9gwch" event={"ID":"3daf1927-a46c-4be1-ace4-f62d448fb994","Type":"ContainerDied","Data":"abb8bb4f9b66ac5cf2d6f377b4c8cf4c27f07c6c8d836d1db603dedf014358d9"} Nov 22 07:14:37 crc kubenswrapper[4853]: I1122 07:14:37.795254 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9gwch" Nov 22 07:14:37 crc kubenswrapper[4853]: I1122 07:14:37.817129 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-k7t4t"] Nov 22 07:14:37 crc kubenswrapper[4853]: I1122 07:14:37.822848 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-k7t4t"] Nov 22 07:14:37 crc kubenswrapper[4853]: I1122 07:14:37.881741 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3daf1927-a46c-4be1-ace4-f62d448fb994-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3daf1927-a46c-4be1-ace4-f62d448fb994" (UID: "3daf1927-a46c-4be1-ace4-f62d448fb994"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:14:37 crc kubenswrapper[4853]: I1122 07:14:37.949804 4853 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3daf1927-a46c-4be1-ace4-f62d448fb994-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 07:14:38 crc kubenswrapper[4853]: I1122 07:14:38.134791 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9gwch"] Nov 22 07:14:38 crc kubenswrapper[4853]: I1122 07:14:38.146065 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-9gwch"] Nov 22 07:14:38 crc kubenswrapper[4853]: I1122 07:14:38.595678 4853 scope.go:117] "RemoveContainer" containerID="e4ea890fbda26c7ce4cb13e6c78064287ff7c452d59daba5b436d0ed101d21b6" Nov 22 07:14:38 crc kubenswrapper[4853]: I1122 07:14:38.619110 4853 scope.go:117] "RemoveContainer" containerID="7a7151e0fbfe8c259f178a774826e282af20e9d72f0190989970ddc47ee9871f" Nov 22 07:14:38 crc kubenswrapper[4853]: I1122 07:14:38.639272 4853 scope.go:117] "RemoveContainer" containerID="e34441b2dbd38b5f28a7f4d07b19e6449122a161d9dfc853cba3b144cf86ff33" Nov 22 07:14:38 crc kubenswrapper[4853]: I1122 07:14:38.657024 4853 scope.go:117] "RemoveContainer" containerID="ad140e888f74e50133625d29b0cd5503cdacd705ef6d777395b0631a3651587e" Nov 22 07:14:38 crc kubenswrapper[4853]: I1122 07:14:38.674069 4853 scope.go:117] "RemoveContainer" containerID="d526940204a98c51ae087c06d2b92d34256d7274fa7bef4bcb58d72b1ef1276a" Nov 22 07:14:39 crc kubenswrapper[4853]: I1122 07:14:39.760078 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="30cbccc7-41e5-46d2-b805-bbb03b8bb67c" path="/var/lib/kubelet/pods/30cbccc7-41e5-46d2-b805-bbb03b8bb67c/volumes" Nov 22 07:14:39 crc kubenswrapper[4853]: I1122 07:14:39.761158 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3daf1927-a46c-4be1-ace4-f62d448fb994" path="/var/lib/kubelet/pods/3daf1927-a46c-4be1-ace4-f62d448fb994/volumes" Nov 22 07:14:39 crc kubenswrapper[4853]: I1122 07:14:39.812661 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4z6bc" event={"ID":"6240b5f2-c1bb-4478-8935-b2579e37e8af","Type":"ContainerStarted","Data":"c408445515066b72c0989b35237d102fa17fd3336632715dda43b99b2990eafa"} Nov 22 07:14:40 crc kubenswrapper[4853]: I1122 07:14:40.838663 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-4z6bc" podStartSLOduration=7.830486266 podStartE2EDuration="1m19.838640587s" podCreationTimestamp="2025-11-22 07:13:21 +0000 UTC" firstStartedPulling="2025-11-22 07:13:26.587693448 +0000 UTC m=+205.428316074" lastFinishedPulling="2025-11-22 07:14:38.595847769 +0000 UTC m=+277.436470395" observedRunningTime="2025-11-22 07:14:40.837488674 +0000 UTC m=+279.678111310" watchObservedRunningTime="2025-11-22 07:14:40.838640587 +0000 UTC m=+279.679263213" Nov 22 07:14:42 crc kubenswrapper[4853]: I1122 07:14:42.305419 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-4z6bc" Nov 22 07:14:42 crc kubenswrapper[4853]: I1122 07:14:42.305512 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-4z6bc" Nov 22 07:14:42 crc kubenswrapper[4853]: I1122 07:14:42.359104 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/redhat-marketplace-4z6bc" Nov 22 07:14:42 crc kubenswrapper[4853]: I1122 07:14:42.986138 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-wfjvm" Nov 22 07:14:42 crc kubenswrapper[4853]: I1122 07:14:42.986487 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-wfjvm" Nov 22 07:14:43 crc kubenswrapper[4853]: I1122 07:14:43.035715 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-wfjvm" Nov 22 07:14:43 crc kubenswrapper[4853]: I1122 07:14:43.877084 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-wfjvm" Nov 22 07:14:44 crc kubenswrapper[4853]: I1122 07:14:44.841834 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fx9sl" event={"ID":"5a52e070-929c-4194-8197-d66d88780fdc","Type":"ContainerStarted","Data":"dc512a8aad5fb5914cf9878fb00512593189a082fe453a7064f92c481efa3cf4"} Nov 22 07:14:44 crc kubenswrapper[4853]: I1122 07:14:44.844176 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sk8bz" event={"ID":"a81b49b7-c4a0-4397-8524-ffaa67583496","Type":"ContainerStarted","Data":"d2608771e3196312e6a1a9a580cb736a5c38a807e55380253c7c5f97eb69d6ad"} Nov 22 07:14:44 crc kubenswrapper[4853]: I1122 07:14:44.848176 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-klwzw" event={"ID":"ce89388a-728c-4afc-b155-2813e35a8413","Type":"ContainerStarted","Data":"58b4e816eef2c49610060f8c5bf6ce020eb4acbd37268ae731a60140aa67be3d"} Nov 22 07:14:45 crc kubenswrapper[4853]: I1122 07:14:45.859457 4853 generic.go:334] "Generic (PLEG): container finished" podID="a81b49b7-c4a0-4397-8524-ffaa67583496" containerID="d2608771e3196312e6a1a9a580cb736a5c38a807e55380253c7c5f97eb69d6ad" exitCode=0 Nov 22 07:14:45 crc kubenswrapper[4853]: I1122 07:14:45.859642 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sk8bz" event={"ID":"a81b49b7-c4a0-4397-8524-ffaa67583496","Type":"ContainerDied","Data":"d2608771e3196312e6a1a9a580cb736a5c38a807e55380253c7c5f97eb69d6ad"} Nov 22 07:14:45 crc kubenswrapper[4853]: I1122 07:14:45.864995 4853 generic.go:334] "Generic (PLEG): container finished" podID="ce89388a-728c-4afc-b155-2813e35a8413" containerID="58b4e816eef2c49610060f8c5bf6ce020eb4acbd37268ae731a60140aa67be3d" exitCode=0 Nov 22 07:14:45 crc kubenswrapper[4853]: I1122 07:14:45.865120 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-klwzw" event={"ID":"ce89388a-728c-4afc-b155-2813e35a8413","Type":"ContainerDied","Data":"58b4e816eef2c49610060f8c5bf6ce020eb4acbd37268ae731a60140aa67be3d"} Nov 22 07:14:45 crc kubenswrapper[4853]: I1122 07:14:45.871961 4853 generic.go:334] "Generic (PLEG): container finished" podID="5a52e070-929c-4194-8197-d66d88780fdc" containerID="dc512a8aad5fb5914cf9878fb00512593189a082fe453a7064f92c481efa3cf4" exitCode=0 Nov 22 07:14:45 crc kubenswrapper[4853]: I1122 07:14:45.872143 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fx9sl" event={"ID":"5a52e070-929c-4194-8197-d66d88780fdc","Type":"ContainerDied","Data":"dc512a8aad5fb5914cf9878fb00512593189a082fe453a7064f92c481efa3cf4"} Nov 22 07:14:46 crc kubenswrapper[4853]: I1122 
Nov 22 07:14:46 crc kubenswrapper[4853]: I1122 07:14:46.388476 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wfjvm"]
Nov 22 07:14:46 crc kubenswrapper[4853]: I1122 07:14:46.880480 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sk8bz" event={"ID":"a81b49b7-c4a0-4397-8524-ffaa67583496","Type":"ContainerStarted","Data":"642c66c685f84d555e8fc63dd88ced89d5b9c418a9ca00fcc80ccf4a12a6f77e"}
Nov 22 07:14:46 crc kubenswrapper[4853]: I1122 07:14:46.882991 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-klwzw" event={"ID":"ce89388a-728c-4afc-b155-2813e35a8413","Type":"ContainerStarted","Data":"469cac3e136d62abd03882b740c6d4f2e2b473348383e12b7ac63c2694e31af4"}
Nov 22 07:14:46 crc kubenswrapper[4853]: I1122 07:14:46.885384 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fx9sl" event={"ID":"5a52e070-929c-4194-8197-d66d88780fdc","Type":"ContainerStarted","Data":"18371a6de09757d68b80b447ba86dd227d3ad3629d978422058909d68b730b0c"}
Nov 22 07:14:46 crc kubenswrapper[4853]: I1122 07:14:46.885635 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-wfjvm" podUID="34bd417d-67dc-4eb8-be82-c0e268ae3cd6" containerName="registry-server" containerID="cri-o://06e54fa25ae91cfbff78e5b0242d8cec1e9cc57c4a0246ed071cbbc1f618ab39" gracePeriod=2
Nov 22 07:14:46 crc kubenswrapper[4853]: I1122 07:14:46.935308 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-fx9sl" podStartSLOduration=4.184635547 podStartE2EDuration="1m23.935283857s" podCreationTimestamp="2025-11-22 07:13:23 +0000 UTC" firstStartedPulling="2025-11-22 07:13:26.586136576 +0000 UTC m=+205.426759202" lastFinishedPulling="2025-11-22 07:14:46.336784866 +0000 UTC m=+285.177407512" observedRunningTime="2025-11-22 07:14:46.93431514 +0000 UTC m=+285.774937766" watchObservedRunningTime="2025-11-22 07:14:46.935283857 +0000 UTC m=+285.775906493"
Nov 22 07:14:46 crc kubenswrapper[4853]: I1122 07:14:46.936114 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-sk8bz" podStartSLOduration=6.148598875 podStartE2EDuration="1m27.936104881s" podCreationTimestamp="2025-11-22 07:13:19 +0000 UTC" firstStartedPulling="2025-11-22 07:13:24.519321499 +0000 UTC m=+203.359944125" lastFinishedPulling="2025-11-22 07:14:46.306827505 +0000 UTC m=+285.147450131" observedRunningTime="2025-11-22 07:14:46.911973403 +0000 UTC m=+285.752596049" watchObservedRunningTime="2025-11-22 07:14:46.936104881 +0000 UTC m=+285.776727517"
Nov 22 07:14:46 crc kubenswrapper[4853]: I1122 07:14:46.961461 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-klwzw" podStartSLOduration=4.155980563 podStartE2EDuration="1m23.961437411s" podCreationTimestamp="2025-11-22 07:13:23 +0000 UTC" firstStartedPulling="2025-11-22 07:13:26.586790413 +0000 UTC m=+205.427413039" lastFinishedPulling="2025-11-22 07:14:46.392247261 +0000 UTC m=+285.232869887" observedRunningTime="2025-11-22 07:14:46.957897011 +0000 UTC m=+285.798519637" watchObservedRunningTime="2025-11-22 07:14:46.961437411 +0000 UTC m=+285.802060037"
Nov 22 07:14:47 crc kubenswrapper[4853]: I1122 07:14:47.286640 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wfjvm"
Nov 22 07:14:47 crc kubenswrapper[4853]: I1122 07:14:47.394851 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34bd417d-67dc-4eb8-be82-c0e268ae3cd6-catalog-content\") pod \"34bd417d-67dc-4eb8-be82-c0e268ae3cd6\" (UID: \"34bd417d-67dc-4eb8-be82-c0e268ae3cd6\") "
Nov 22 07:14:47 crc kubenswrapper[4853]: I1122 07:14:47.394942 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ghkhc\" (UniqueName: \"kubernetes.io/projected/34bd417d-67dc-4eb8-be82-c0e268ae3cd6-kube-api-access-ghkhc\") pod \"34bd417d-67dc-4eb8-be82-c0e268ae3cd6\" (UID: \"34bd417d-67dc-4eb8-be82-c0e268ae3cd6\") "
Nov 22 07:14:47 crc kubenswrapper[4853]: I1122 07:14:47.395035 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/34bd417d-67dc-4eb8-be82-c0e268ae3cd6-utilities\") pod \"34bd417d-67dc-4eb8-be82-c0e268ae3cd6\" (UID: \"34bd417d-67dc-4eb8-be82-c0e268ae3cd6\") "
Nov 22 07:14:47 crc kubenswrapper[4853]: I1122 07:14:47.395902 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/34bd417d-67dc-4eb8-be82-c0e268ae3cd6-utilities" (OuterVolumeSpecName: "utilities") pod "34bd417d-67dc-4eb8-be82-c0e268ae3cd6" (UID: "34bd417d-67dc-4eb8-be82-c0e268ae3cd6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 22 07:14:47 crc kubenswrapper[4853]: I1122 07:14:47.396332 4853 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/34bd417d-67dc-4eb8-be82-c0e268ae3cd6-utilities\") on node \"crc\" DevicePath \"\""
Nov 22 07:14:47 crc kubenswrapper[4853]: I1122 07:14:47.401595 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34bd417d-67dc-4eb8-be82-c0e268ae3cd6-kube-api-access-ghkhc" (OuterVolumeSpecName: "kube-api-access-ghkhc") pod "34bd417d-67dc-4eb8-be82-c0e268ae3cd6" (UID: "34bd417d-67dc-4eb8-be82-c0e268ae3cd6"). InnerVolumeSpecName "kube-api-access-ghkhc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 22 07:14:47 crc kubenswrapper[4853]: I1122 07:14:47.414119 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/34bd417d-67dc-4eb8-be82-c0e268ae3cd6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "34bd417d-67dc-4eb8-be82-c0e268ae3cd6" (UID: "34bd417d-67dc-4eb8-be82-c0e268ae3cd6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 22 07:14:47 crc kubenswrapper[4853]: I1122 07:14:47.497840 4853 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34bd417d-67dc-4eb8-be82-c0e268ae3cd6-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 22 07:14:47 crc kubenswrapper[4853]: I1122 07:14:47.497895 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ghkhc\" (UniqueName: \"kubernetes.io/projected/34bd417d-67dc-4eb8-be82-c0e268ae3cd6-kube-api-access-ghkhc\") on node \"crc\" DevicePath \"\""
Nov 22 07:14:47 crc kubenswrapper[4853]: I1122 07:14:47.894902 4853 generic.go:334] "Generic (PLEG): container finished" podID="34bd417d-67dc-4eb8-be82-c0e268ae3cd6" containerID="06e54fa25ae91cfbff78e5b0242d8cec1e9cc57c4a0246ed071cbbc1f618ab39" exitCode=0
Nov 22 07:14:47 crc kubenswrapper[4853]: I1122 07:14:47.894961 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wfjvm" event={"ID":"34bd417d-67dc-4eb8-be82-c0e268ae3cd6","Type":"ContainerDied","Data":"06e54fa25ae91cfbff78e5b0242d8cec1e9cc57c4a0246ed071cbbc1f618ab39"}
Nov 22 07:14:47 crc kubenswrapper[4853]: I1122 07:14:47.894997 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wfjvm" event={"ID":"34bd417d-67dc-4eb8-be82-c0e268ae3cd6","Type":"ContainerDied","Data":"b3878bfbc583bc65a94e11d667aefca3994c3d5d7d02004bebd92ea20c7a101e"}
Nov 22 07:14:47 crc kubenswrapper[4853]: I1122 07:14:47.895019 4853 scope.go:117] "RemoveContainer" containerID="06e54fa25ae91cfbff78e5b0242d8cec1e9cc57c4a0246ed071cbbc1f618ab39"
Nov 22 07:14:47 crc kubenswrapper[4853]: I1122 07:14:47.895063 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wfjvm"
Nov 22 07:14:47 crc kubenswrapper[4853]: I1122 07:14:47.908846 4853 scope.go:117] "RemoveContainer" containerID="57910b0b75c7ed961ac9b8a0d93c1b2693b48bc83d4212c4005c58a6931974bf"
Nov 22 07:14:47 crc kubenswrapper[4853]: I1122 07:14:47.918018 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wfjvm"]
Nov 22 07:14:47 crc kubenswrapper[4853]: I1122 07:14:47.921291 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-wfjvm"]
Nov 22 07:14:47 crc kubenswrapper[4853]: I1122 07:14:47.943961 4853 scope.go:117] "RemoveContainer" containerID="10f057f275d8e2255c9c3068a45432bfd235d56044aad3ea1d13cb9df7dd7f44"
Nov 22 07:14:47 crc kubenswrapper[4853]: I1122 07:14:47.957839 4853 scope.go:117] "RemoveContainer" containerID="06e54fa25ae91cfbff78e5b0242d8cec1e9cc57c4a0246ed071cbbc1f618ab39"
Nov 22 07:14:47 crc kubenswrapper[4853]: E1122 07:14:47.958255 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"06e54fa25ae91cfbff78e5b0242d8cec1e9cc57c4a0246ed071cbbc1f618ab39\": container with ID starting with 06e54fa25ae91cfbff78e5b0242d8cec1e9cc57c4a0246ed071cbbc1f618ab39 not found: ID does not exist" containerID="06e54fa25ae91cfbff78e5b0242d8cec1e9cc57c4a0246ed071cbbc1f618ab39"
Nov 22 07:14:47 crc kubenswrapper[4853]: I1122 07:14:47.958293 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"06e54fa25ae91cfbff78e5b0242d8cec1e9cc57c4a0246ed071cbbc1f618ab39"} err="failed to get container status \"06e54fa25ae91cfbff78e5b0242d8cec1e9cc57c4a0246ed071cbbc1f618ab39\": rpc error: code = NotFound desc = could not find container \"06e54fa25ae91cfbff78e5b0242d8cec1e9cc57c4a0246ed071cbbc1f618ab39\": container with ID starting with 06e54fa25ae91cfbff78e5b0242d8cec1e9cc57c4a0246ed071cbbc1f618ab39 not found: ID does not exist"
Nov 22 07:14:47 crc kubenswrapper[4853]: I1122 07:14:47.958322 4853 scope.go:117] "RemoveContainer" containerID="57910b0b75c7ed961ac9b8a0d93c1b2693b48bc83d4212c4005c58a6931974bf"
Nov 22 07:14:47 crc kubenswrapper[4853]: E1122 07:14:47.958524 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"57910b0b75c7ed961ac9b8a0d93c1b2693b48bc83d4212c4005c58a6931974bf\": container with ID starting with 57910b0b75c7ed961ac9b8a0d93c1b2693b48bc83d4212c4005c58a6931974bf not found: ID does not exist" containerID="57910b0b75c7ed961ac9b8a0d93c1b2693b48bc83d4212c4005c58a6931974bf"
Nov 22 07:14:47 crc kubenswrapper[4853]: I1122 07:14:47.958554 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"57910b0b75c7ed961ac9b8a0d93c1b2693b48bc83d4212c4005c58a6931974bf"} err="failed to get container status \"57910b0b75c7ed961ac9b8a0d93c1b2693b48bc83d4212c4005c58a6931974bf\": rpc error: code = NotFound desc = could not find container \"57910b0b75c7ed961ac9b8a0d93c1b2693b48bc83d4212c4005c58a6931974bf\": container with ID starting with 57910b0b75c7ed961ac9b8a0d93c1b2693b48bc83d4212c4005c58a6931974bf not found: ID does not exist"
Nov 22 07:14:47 crc kubenswrapper[4853]: I1122 07:14:47.958571 4853 scope.go:117] "RemoveContainer" containerID="10f057f275d8e2255c9c3068a45432bfd235d56044aad3ea1d13cb9df7dd7f44"
failed" err="rpc error: code = NotFound desc = could not find container \"10f057f275d8e2255c9c3068a45432bfd235d56044aad3ea1d13cb9df7dd7f44\": container with ID starting with 10f057f275d8e2255c9c3068a45432bfd235d56044aad3ea1d13cb9df7dd7f44 not found: ID does not exist" containerID="10f057f275d8e2255c9c3068a45432bfd235d56044aad3ea1d13cb9df7dd7f44" Nov 22 07:14:47 crc kubenswrapper[4853]: I1122 07:14:47.958860 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"10f057f275d8e2255c9c3068a45432bfd235d56044aad3ea1d13cb9df7dd7f44"} err="failed to get container status \"10f057f275d8e2255c9c3068a45432bfd235d56044aad3ea1d13cb9df7dd7f44\": rpc error: code = NotFound desc = could not find container \"10f057f275d8e2255c9c3068a45432bfd235d56044aad3ea1d13cb9df7dd7f44\": container with ID starting with 10f057f275d8e2255c9c3068a45432bfd235d56044aad3ea1d13cb9df7dd7f44 not found: ID does not exist" Nov 22 07:14:49 crc kubenswrapper[4853]: I1122 07:14:49.756556 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="34bd417d-67dc-4eb8-be82-c0e268ae3cd6" path="/var/lib/kubelet/pods/34bd417d-67dc-4eb8-be82-c0e268ae3cd6/volumes" Nov 22 07:14:50 crc kubenswrapper[4853]: I1122 07:14:50.489691 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-sk8bz" Nov 22 07:14:50 crc kubenswrapper[4853]: I1122 07:14:50.489785 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-sk8bz" Nov 22 07:14:50 crc kubenswrapper[4853]: I1122 07:14:50.550556 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-sk8bz" Nov 22 07:14:52 crc kubenswrapper[4853]: I1122 07:14:52.347498 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-4z6bc" Nov 22 07:14:53 crc kubenswrapper[4853]: I1122 07:14:53.689228 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-klwzw" Nov 22 07:14:53 crc kubenswrapper[4853]: I1122 07:14:53.689301 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-klwzw" Nov 22 07:14:53 crc kubenswrapper[4853]: I1122 07:14:53.735313 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-klwzw" Nov 22 07:14:53 crc kubenswrapper[4853]: I1122 07:14:53.986422 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-klwzw" Nov 22 07:14:54 crc kubenswrapper[4853]: I1122 07:14:54.126269 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-fx9sl" Nov 22 07:14:54 crc kubenswrapper[4853]: I1122 07:14:54.126348 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-fx9sl" Nov 22 07:14:54 crc kubenswrapper[4853]: I1122 07:14:54.168154 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-fx9sl" Nov 22 07:14:54 crc kubenswrapper[4853]: I1122 07:14:54.991607 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-fx9sl" Nov 22 07:14:55 crc kubenswrapper[4853]: I1122 07:14:55.557672 4853 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-9qfvq"] Nov 22 07:14:56 crc kubenswrapper[4853]: I1122 07:14:56.389921 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-fx9sl"] Nov 22 07:14:56 crc kubenswrapper[4853]: I1122 07:14:56.951730 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-fx9sl" podUID="5a52e070-929c-4194-8197-d66d88780fdc" containerName="registry-server" containerID="cri-o://18371a6de09757d68b80b447ba86dd227d3ad3629d978422058909d68b730b0c" gracePeriod=2 Nov 22 07:14:59 crc kubenswrapper[4853]: I1122 07:14:59.970258 4853 generic.go:334] "Generic (PLEG): container finished" podID="5a52e070-929c-4194-8197-d66d88780fdc" containerID="18371a6de09757d68b80b447ba86dd227d3ad3629d978422058909d68b730b0c" exitCode=0 Nov 22 07:14:59 crc kubenswrapper[4853]: I1122 07:14:59.970338 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fx9sl" event={"ID":"5a52e070-929c-4194-8197-d66d88780fdc","Type":"ContainerDied","Data":"18371a6de09757d68b80b447ba86dd227d3ad3629d978422058909d68b730b0c"} Nov 22 07:15:00 crc kubenswrapper[4853]: I1122 07:15:00.148040 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396595-r8svh"] Nov 22 07:15:00 crc kubenswrapper[4853]: E1122 07:15:00.148829 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30cbccc7-41e5-46d2-b805-bbb03b8bb67c" containerName="registry-server" Nov 22 07:15:00 crc kubenswrapper[4853]: I1122 07:15:00.148944 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="30cbccc7-41e5-46d2-b805-bbb03b8bb67c" containerName="registry-server" Nov 22 07:15:00 crc kubenswrapper[4853]: E1122 07:15:00.149033 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34bd417d-67dc-4eb8-be82-c0e268ae3cd6" containerName="registry-server" Nov 22 07:15:00 crc kubenswrapper[4853]: I1122 07:15:00.149104 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="34bd417d-67dc-4eb8-be82-c0e268ae3cd6" containerName="registry-server" Nov 22 07:15:00 crc kubenswrapper[4853]: E1122 07:15:00.149193 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34bd417d-67dc-4eb8-be82-c0e268ae3cd6" containerName="extract-utilities" Nov 22 07:15:00 crc kubenswrapper[4853]: I1122 07:15:00.149279 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="34bd417d-67dc-4eb8-be82-c0e268ae3cd6" containerName="extract-utilities" Nov 22 07:15:00 crc kubenswrapper[4853]: E1122 07:15:00.149368 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3daf1927-a46c-4be1-ace4-f62d448fb994" containerName="registry-server" Nov 22 07:15:00 crc kubenswrapper[4853]: I1122 07:15:00.149451 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="3daf1927-a46c-4be1-ace4-f62d448fb994" containerName="registry-server" Nov 22 07:15:00 crc kubenswrapper[4853]: E1122 07:15:00.149579 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3daf1927-a46c-4be1-ace4-f62d448fb994" containerName="extract-utilities" Nov 22 07:15:00 crc kubenswrapper[4853]: I1122 07:15:00.149668 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="3daf1927-a46c-4be1-ace4-f62d448fb994" containerName="extract-utilities" Nov 22 07:15:00 crc kubenswrapper[4853]: E1122 07:15:00.149777 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34bd417d-67dc-4eb8-be82-c0e268ae3cd6" 
containerName="extract-content" Nov 22 07:15:00 crc kubenswrapper[4853]: I1122 07:15:00.149870 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="34bd417d-67dc-4eb8-be82-c0e268ae3cd6" containerName="extract-content" Nov 22 07:15:00 crc kubenswrapper[4853]: E1122 07:15:00.149955 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30cbccc7-41e5-46d2-b805-bbb03b8bb67c" containerName="extract-content" Nov 22 07:15:00 crc kubenswrapper[4853]: I1122 07:15:00.150055 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="30cbccc7-41e5-46d2-b805-bbb03b8bb67c" containerName="extract-content" Nov 22 07:15:00 crc kubenswrapper[4853]: E1122 07:15:00.150143 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a423d848-68b1-49f1-af43-17d8f79c9562" containerName="pruner" Nov 22 07:15:00 crc kubenswrapper[4853]: I1122 07:15:00.150230 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="a423d848-68b1-49f1-af43-17d8f79c9562" containerName="pruner" Nov 22 07:15:00 crc kubenswrapper[4853]: E1122 07:15:00.150333 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3daf1927-a46c-4be1-ace4-f62d448fb994" containerName="extract-content" Nov 22 07:15:00 crc kubenswrapper[4853]: I1122 07:15:00.150425 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="3daf1927-a46c-4be1-ace4-f62d448fb994" containerName="extract-content" Nov 22 07:15:00 crc kubenswrapper[4853]: E1122 07:15:00.150509 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30cbccc7-41e5-46d2-b805-bbb03b8bb67c" containerName="extract-utilities" Nov 22 07:15:00 crc kubenswrapper[4853]: I1122 07:15:00.150568 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="30cbccc7-41e5-46d2-b805-bbb03b8bb67c" containerName="extract-utilities" Nov 22 07:15:00 crc kubenswrapper[4853]: I1122 07:15:00.150781 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="a423d848-68b1-49f1-af43-17d8f79c9562" containerName="pruner" Nov 22 07:15:00 crc kubenswrapper[4853]: I1122 07:15:00.150867 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="34bd417d-67dc-4eb8-be82-c0e268ae3cd6" containerName="registry-server" Nov 22 07:15:00 crc kubenswrapper[4853]: I1122 07:15:00.150928 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="3daf1927-a46c-4be1-ace4-f62d448fb994" containerName="registry-server" Nov 22 07:15:00 crc kubenswrapper[4853]: I1122 07:15:00.150989 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="30cbccc7-41e5-46d2-b805-bbb03b8bb67c" containerName="registry-server" Nov 22 07:15:00 crc kubenswrapper[4853]: I1122 07:15:00.151521 4853 util.go:30] "No sandbox for pod can be found. 
Nov 22 07:15:00 crc kubenswrapper[4853]: I1122 07:15:00.154870 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Nov 22 07:15:00 crc kubenswrapper[4853]: I1122 07:15:00.156656 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Nov 22 07:15:00 crc kubenswrapper[4853]: I1122 07:15:00.159442 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396595-r8svh"]
Nov 22 07:15:00 crc kubenswrapper[4853]: I1122 07:15:00.292440 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8d65d22f-c53e-4a25-9571-3bbb65e04d66-secret-volume\") pod \"collect-profiles-29396595-r8svh\" (UID: \"8d65d22f-c53e-4a25-9571-3bbb65e04d66\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396595-r8svh"
Nov 22 07:15:00 crc kubenswrapper[4853]: I1122 07:15:00.292545 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8d65d22f-c53e-4a25-9571-3bbb65e04d66-config-volume\") pod \"collect-profiles-29396595-r8svh\" (UID: \"8d65d22f-c53e-4a25-9571-3bbb65e04d66\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396595-r8svh"
Nov 22 07:15:00 crc kubenswrapper[4853]: I1122 07:15:00.292593 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8fgs8\" (UniqueName: \"kubernetes.io/projected/8d65d22f-c53e-4a25-9571-3bbb65e04d66-kube-api-access-8fgs8\") pod \"collect-profiles-29396595-r8svh\" (UID: \"8d65d22f-c53e-4a25-9571-3bbb65e04d66\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396595-r8svh"
Nov 22 07:15:00 crc kubenswrapper[4853]: I1122 07:15:00.393457 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8fgs8\" (UniqueName: \"kubernetes.io/projected/8d65d22f-c53e-4a25-9571-3bbb65e04d66-kube-api-access-8fgs8\") pod \"collect-profiles-29396595-r8svh\" (UID: \"8d65d22f-c53e-4a25-9571-3bbb65e04d66\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396595-r8svh"
Nov 22 07:15:00 crc kubenswrapper[4853]: I1122 07:15:00.393582 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8d65d22f-c53e-4a25-9571-3bbb65e04d66-secret-volume\") pod \"collect-profiles-29396595-r8svh\" (UID: \"8d65d22f-c53e-4a25-9571-3bbb65e04d66\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396595-r8svh"
Nov 22 07:15:00 crc kubenswrapper[4853]: I1122 07:15:00.393656 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8d65d22f-c53e-4a25-9571-3bbb65e04d66-config-volume\") pod \"collect-profiles-29396595-r8svh\" (UID: \"8d65d22f-c53e-4a25-9571-3bbb65e04d66\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396595-r8svh"
Nov 22 07:15:00 crc kubenswrapper[4853]: I1122 07:15:00.394844 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8d65d22f-c53e-4a25-9571-3bbb65e04d66-config-volume\") pod \"collect-profiles-29396595-r8svh\" (UID: \"8d65d22f-c53e-4a25-9571-3bbb65e04d66\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396595-r8svh"
Nov 22 07:15:00 crc kubenswrapper[4853]: I1122 07:15:00.403886 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8d65d22f-c53e-4a25-9571-3bbb65e04d66-secret-volume\") pod \"collect-profiles-29396595-r8svh\" (UID: \"8d65d22f-c53e-4a25-9571-3bbb65e04d66\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396595-r8svh"
Nov 22 07:15:00 crc kubenswrapper[4853]: I1122 07:15:00.412002 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8fgs8\" (UniqueName: \"kubernetes.io/projected/8d65d22f-c53e-4a25-9571-3bbb65e04d66-kube-api-access-8fgs8\") pod \"collect-profiles-29396595-r8svh\" (UID: \"8d65d22f-c53e-4a25-9571-3bbb65e04d66\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396595-r8svh"
Nov 22 07:15:00 crc kubenswrapper[4853]: I1122 07:15:00.475912 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396595-r8svh"
Nov 22 07:15:00 crc kubenswrapper[4853]: I1122 07:15:00.537882 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-sk8bz"
Nov 22 07:15:00 crc kubenswrapper[4853]: I1122 07:15:00.701700 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396595-r8svh"]
Nov 22 07:15:00 crc kubenswrapper[4853]: I1122 07:15:00.982076 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29396595-r8svh" event={"ID":"8d65d22f-c53e-4a25-9571-3bbb65e04d66","Type":"ContainerStarted","Data":"8886c3bc5e8b0fd6b2ae1bd675850aeb514db47b2c6cdffe2923697a33f6a9df"}
Nov 22 07:15:01 crc kubenswrapper[4853]: I1122 07:15:01.710902 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fx9sl"
Nov 22 07:15:01 crc kubenswrapper[4853]: I1122 07:15:01.818324 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a52e070-929c-4194-8197-d66d88780fdc-catalog-content\") pod \"5a52e070-929c-4194-8197-d66d88780fdc\" (UID: \"5a52e070-929c-4194-8197-d66d88780fdc\") "
Nov 22 07:15:01 crc kubenswrapper[4853]: I1122 07:15:01.818401 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a52e070-929c-4194-8197-d66d88780fdc-utilities\") pod \"5a52e070-929c-4194-8197-d66d88780fdc\" (UID: \"5a52e070-929c-4194-8197-d66d88780fdc\") "
Nov 22 07:15:01 crc kubenswrapper[4853]: I1122 07:15:01.818554 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k4wfq\" (UniqueName: \"kubernetes.io/projected/5a52e070-929c-4194-8197-d66d88780fdc-kube-api-access-k4wfq\") pod \"5a52e070-929c-4194-8197-d66d88780fdc\" (UID: \"5a52e070-929c-4194-8197-d66d88780fdc\") "
Nov 22 07:15:01 crc kubenswrapper[4853]: I1122 07:15:01.819349 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5a52e070-929c-4194-8197-d66d88780fdc-utilities" (OuterVolumeSpecName: "utilities") pod "5a52e070-929c-4194-8197-d66d88780fdc" (UID: "5a52e070-929c-4194-8197-d66d88780fdc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 22 07:15:01 crc kubenswrapper[4853]: I1122 07:15:01.819892 4853 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a52e070-929c-4194-8197-d66d88780fdc-utilities\") on node \"crc\" DevicePath \"\""
Nov 22 07:15:01 crc kubenswrapper[4853]: I1122 07:15:01.826584 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a52e070-929c-4194-8197-d66d88780fdc-kube-api-access-k4wfq" (OuterVolumeSpecName: "kube-api-access-k4wfq") pod "5a52e070-929c-4194-8197-d66d88780fdc" (UID: "5a52e070-929c-4194-8197-d66d88780fdc"). InnerVolumeSpecName "kube-api-access-k4wfq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 22 07:15:01 crc kubenswrapper[4853]: I1122 07:15:01.921998 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k4wfq\" (UniqueName: \"kubernetes.io/projected/5a52e070-929c-4194-8197-d66d88780fdc-kube-api-access-k4wfq\") on node \"crc\" DevicePath \"\""
Nov 22 07:15:01 crc kubenswrapper[4853]: I1122 07:15:01.993782 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fx9sl" event={"ID":"5a52e070-929c-4194-8197-d66d88780fdc","Type":"ContainerDied","Data":"3c1c5c2ba629e0e64d79fc45329296bd18baabec417372171a8475ed5d8c8ab1"}
Nov 22 07:15:01 crc kubenswrapper[4853]: I1122 07:15:01.993865 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fx9sl"
Nov 22 07:15:01 crc kubenswrapper[4853]: I1122 07:15:01.993878 4853 scope.go:117] "RemoveContainer" containerID="18371a6de09757d68b80b447ba86dd227d3ad3629d978422058909d68b730b0c"
Nov 22 07:15:02 crc kubenswrapper[4853]: I1122 07:15:02.012617 4853 scope.go:117] "RemoveContainer" containerID="dc512a8aad5fb5914cf9878fb00512593189a082fe453a7064f92c481efa3cf4"
Nov 22 07:15:02 crc kubenswrapper[4853]: I1122 07:15:02.026557 4853 scope.go:117] "RemoveContainer" containerID="86f478c6b96d883cc60f3dd4918f0e4c53aea142d09c77cea7199c187383fc87"
Nov 22 07:15:02 crc kubenswrapper[4853]: I1122 07:15:02.377182 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5a52e070-929c-4194-8197-d66d88780fdc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5a52e070-929c-4194-8197-d66d88780fdc" (UID: "5a52e070-929c-4194-8197-d66d88780fdc"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 22 07:15:02 crc kubenswrapper[4853]: I1122 07:15:02.428927 4853 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a52e070-929c-4194-8197-d66d88780fdc-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 22 07:15:02 crc kubenswrapper[4853]: I1122 07:15:02.624795 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-fx9sl"]
Nov 22 07:15:02 crc kubenswrapper[4853]: I1122 07:15:02.628395 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-fx9sl"]
Nov 22 07:15:03 crc kubenswrapper[4853]: I1122 07:15:03.002148 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29396595-r8svh" event={"ID":"8d65d22f-c53e-4a25-9571-3bbb65e04d66","Type":"ContainerStarted","Data":"88e958cfcf7fe586bd93929691c7dd38d777f4d5878426723e44157c988b40e7"}
Nov 22 07:15:03 crc kubenswrapper[4853]: I1122 07:15:03.837526 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5a52e070-929c-4194-8197-d66d88780fdc" path="/var/lib/kubelet/pods/5a52e070-929c-4194-8197-d66d88780fdc/volumes"
Nov 22 07:15:05 crc kubenswrapper[4853]: I1122 07:15:05.030288 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29396595-r8svh" podStartSLOduration=5.030262796 podStartE2EDuration="5.030262796s" podCreationTimestamp="2025-11-22 07:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:15:05.028144346 +0000 UTC m=+303.868766992" watchObservedRunningTime="2025-11-22 07:15:05.030262796 +0000 UTC m=+303.870885422"
Nov 22 07:15:05 crc kubenswrapper[4853]: I1122 07:15:05.072312 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 22 07:15:05 crc kubenswrapper[4853]: I1122 07:15:05.072459 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 22 07:15:05 crc kubenswrapper[4853]: I1122 07:15:05.072540 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 22 07:15:05 crc kubenswrapper[4853]: I1122 07:15:05.072590 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 22 07:15:05 crc kubenswrapper[4853]: I1122 07:15:05.084623 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin"
Nov 22 07:15:05 crc kubenswrapper[4853]: I1122 07:15:05.084804 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert"
Nov 22 07:15:05 crc kubenswrapper[4853]: I1122 07:15:05.084929 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Nov 22 07:15:05 crc kubenswrapper[4853]: I1122 07:15:05.095158 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 22 07:15:05 crc kubenswrapper[4853]: I1122 07:15:05.095298 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Nov 22 07:15:05 crc kubenswrapper[4853]: I1122 07:15:05.101925 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 22 07:15:05 crc kubenswrapper[4853]: I1122 07:15:05.106497 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 22 07:15:05 crc kubenswrapper[4853]: I1122 07:15:05.108625 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 22 07:15:05 crc kubenswrapper[4853]: I1122 07:15:05.265596 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 22 07:15:05 crc kubenswrapper[4853]: I1122 07:15:05.272110 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 22 07:15:05 crc kubenswrapper[4853]: I1122 07:15:05.279499 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 22 07:15:06 crc kubenswrapper[4853]: I1122 07:15:06.019306 4853 generic.go:334] "Generic (PLEG): container finished" podID="8d65d22f-c53e-4a25-9571-3bbb65e04d66" containerID="88e958cfcf7fe586bd93929691c7dd38d777f4d5878426723e44157c988b40e7" exitCode=0
Nov 22 07:15:06 crc kubenswrapper[4853]: I1122 07:15:06.019377 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29396595-r8svh" event={"ID":"8d65d22f-c53e-4a25-9571-3bbb65e04d66","Type":"ContainerDied","Data":"88e958cfcf7fe586bd93929691c7dd38d777f4d5878426723e44157c988b40e7"}
Nov 22 07:15:06 crc kubenswrapper[4853]: W1122 07:15:06.159692 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b6479f0_333b_4a96_9adf_2099afdc2447.slice/crio-5002e833e54efdeecd9dfe9b6ed5098f6ba06960635ab981f46661868daa19a5 WatchSource:0}: Error finding container 5002e833e54efdeecd9dfe9b6ed5098f6ba06960635ab981f46661868daa19a5: Status 404 returned error can't find the container with id 5002e833e54efdeecd9dfe9b6ed5098f6ba06960635ab981f46661868daa19a5
Nov 22 07:15:06 crc kubenswrapper[4853]: W1122 07:15:06.161551 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d751cbb_f2e2_430d_9754_c882a5e924a5.slice/crio-1eb61d2a73367225fa4250b9b470410000e33993dcd00689f8148c8be60fd77b WatchSource:0}: Error finding container 1eb61d2a73367225fa4250b9b470410000e33993dcd00689f8148c8be60fd77b: Status 404 returned error can't find the container with id 1eb61d2a73367225fa4250b9b470410000e33993dcd00689f8148c8be60fd77b
Nov 22 07:15:07 crc kubenswrapper[4853]: I1122 07:15:07.027665 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"5002e833e54efdeecd9dfe9b6ed5098f6ba06960635ab981f46661868daa19a5"}
Nov 22 07:15:07 crc kubenswrapper[4853]: I1122 07:15:07.031053 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"f8610fc87f62517be02b305aecd682adf7f89ce5dde575017b6b254ad20e6ff0"}
Nov 22 07:15:07 crc kubenswrapper[4853]: I1122 07:15:07.032845 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"1eb61d2a73367225fa4250b9b470410000e33993dcd00689f8148c8be60fd77b"}
Nov 22 07:15:07 crc kubenswrapper[4853]: I1122 07:15:07.301971 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396595-r8svh"
Nov 22 07:15:07 crc kubenswrapper[4853]: I1122 07:15:07.307089 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8d65d22f-c53e-4a25-9571-3bbb65e04d66-secret-volume\") pod \"8d65d22f-c53e-4a25-9571-3bbb65e04d66\" (UID: \"8d65d22f-c53e-4a25-9571-3bbb65e04d66\") "
Nov 22 07:15:07 crc kubenswrapper[4853]: I1122 07:15:07.307185 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8fgs8\" (UniqueName: \"kubernetes.io/projected/8d65d22f-c53e-4a25-9571-3bbb65e04d66-kube-api-access-8fgs8\") pod \"8d65d22f-c53e-4a25-9571-3bbb65e04d66\" (UID: \"8d65d22f-c53e-4a25-9571-3bbb65e04d66\") "
Nov 22 07:15:07 crc kubenswrapper[4853]: I1122 07:15:07.307241 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8d65d22f-c53e-4a25-9571-3bbb65e04d66-config-volume\") pod \"8d65d22f-c53e-4a25-9571-3bbb65e04d66\" (UID: \"8d65d22f-c53e-4a25-9571-3bbb65e04d66\") "
Nov 22 07:15:07 crc kubenswrapper[4853]: I1122 07:15:07.310263 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8d65d22f-c53e-4a25-9571-3bbb65e04d66-config-volume" (OuterVolumeSpecName: "config-volume") pod "8d65d22f-c53e-4a25-9571-3bbb65e04d66" (UID: "8d65d22f-c53e-4a25-9571-3bbb65e04d66"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 22 07:15:07 crc kubenswrapper[4853]: I1122 07:15:07.316191 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d65d22f-c53e-4a25-9571-3bbb65e04d66-kube-api-access-8fgs8" (OuterVolumeSpecName: "kube-api-access-8fgs8") pod "8d65d22f-c53e-4a25-9571-3bbb65e04d66" (UID: "8d65d22f-c53e-4a25-9571-3bbb65e04d66"). InnerVolumeSpecName "kube-api-access-8fgs8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 22 07:15:07 crc kubenswrapper[4853]: I1122 07:15:07.317347 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d65d22f-c53e-4a25-9571-3bbb65e04d66-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "8d65d22f-c53e-4a25-9571-3bbb65e04d66" (UID: "8d65d22f-c53e-4a25-9571-3bbb65e04d66"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 22 07:15:07 crc kubenswrapper[4853]: I1122 07:15:07.408513 4853 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8d65d22f-c53e-4a25-9571-3bbb65e04d66-secret-volume\") on node \"crc\" DevicePath \"\""
Nov 22 07:15:07 crc kubenswrapper[4853]: I1122 07:15:07.408565 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8fgs8\" (UniqueName: \"kubernetes.io/projected/8d65d22f-c53e-4a25-9571-3bbb65e04d66-kube-api-access-8fgs8\") on node \"crc\" DevicePath \"\""
Nov 22 07:15:07 crc kubenswrapper[4853]: I1122 07:15:07.408576 4853 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8d65d22f-c53e-4a25-9571-3bbb65e04d66-config-volume\") on node \"crc\" DevicePath \"\""
Nov 22 07:15:08 crc kubenswrapper[4853]: I1122 07:15:08.048342 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"c9213dfe4ba371c8108d64f4710ce9650b7bfb9bcc4f09da36bd43bb9b2d8245"}
Nov 22 07:15:08 crc kubenswrapper[4853]: I1122 07:15:08.048435 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 22 07:15:08 crc kubenswrapper[4853]: I1122 07:15:08.051550 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"cd889bee2f239b079a29f157c6ca9fafa8c99aef241fc1410a961d3348a83c44"}
Nov 22 07:15:08 crc kubenswrapper[4853]: I1122 07:15:08.053599 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"cd381f506c3765852af1a382e1bacdcdbe9e919971237450f6a48f929f5216a3"}
Nov 22 07:15:08 crc kubenswrapper[4853]: I1122 07:15:08.055544 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29396595-r8svh" event={"ID":"8d65d22f-c53e-4a25-9571-3bbb65e04d66","Type":"ContainerDied","Data":"8886c3bc5e8b0fd6b2ae1bd675850aeb514db47b2c6cdffe2923697a33f6a9df"}
Nov 22 07:15:08 crc kubenswrapper[4853]: I1122 07:15:08.055606 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8886c3bc5e8b0fd6b2ae1bd675850aeb514db47b2c6cdffe2923697a33f6a9df"
Nov 22 07:15:08 crc kubenswrapper[4853]: I1122 07:15:08.055601 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396595-r8svh"
Nov 22 07:15:20 crc kubenswrapper[4853]: I1122 07:15:20.589583 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-9qfvq" podUID="b512d3e0-dee2-48d3-87e3-b05fb2cf8ed7" containerName="oauth-openshift" containerID="cri-o://2b15c550b629a8fcf44d4c427eb867445e535e0b1a7931aab022d30e1cc55994" gracePeriod=15
Nov 22 07:15:20 crc kubenswrapper[4853]: I1122 07:15:20.983864 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-9qfvq"
Nov 22 07:15:21 crc kubenswrapper[4853]: I1122 07:15:21.010190 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b512d3e0-dee2-48d3-87e3-b05fb2cf8ed7-v4-0-config-system-session\") pod \"b512d3e0-dee2-48d3-87e3-b05fb2cf8ed7\" (UID: \"b512d3e0-dee2-48d3-87e3-b05fb2cf8ed7\") "
Nov 22 07:15:21 crc kubenswrapper[4853]: I1122 07:15:21.010593 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b512d3e0-dee2-48d3-87e3-b05fb2cf8ed7-v4-0-config-system-serving-cert\") pod \"b512d3e0-dee2-48d3-87e3-b05fb2cf8ed7\" (UID: \"b512d3e0-dee2-48d3-87e3-b05fb2cf8ed7\") "
Nov 22 07:15:21 crc kubenswrapper[4853]: I1122 07:15:21.012323 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/b512d3e0-dee2-48d3-87e3-b05fb2cf8ed7-v4-0-config-user-idp-0-file-data\") pod \"b512d3e0-dee2-48d3-87e3-b05fb2cf8ed7\" (UID: \"b512d3e0-dee2-48d3-87e3-b05fb2cf8ed7\") "
Nov 22 07:15:21 crc kubenswrapper[4853]: I1122 07:15:21.012825 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b512d3e0-dee2-48d3-87e3-b05fb2cf8ed7-v4-0-config-system-cliconfig\") pod \"b512d3e0-dee2-48d3-87e3-b05fb2cf8ed7\" (UID: \"b512d3e0-dee2-48d3-87e3-b05fb2cf8ed7\") "
Nov 22 07:15:21 crc kubenswrapper[4853]: I1122 07:15:21.012897 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b512d3e0-dee2-48d3-87e3-b05fb2cf8ed7-v4-0-config-system-service-ca\") pod \"b512d3e0-dee2-48d3-87e3-b05fb2cf8ed7\" (UID: \"b512d3e0-dee2-48d3-87e3-b05fb2cf8ed7\") "
Nov 22 07:15:21 crc kubenswrapper[4853]: I1122 07:15:21.012975 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b512d3e0-dee2-48d3-87e3-b05fb2cf8ed7-v4-0-config-user-template-provider-selection\") pod \"b512d3e0-dee2-48d3-87e3-b05fb2cf8ed7\" (UID: \"b512d3e0-dee2-48d3-87e3-b05fb2cf8ed7\") "
Nov 22 07:15:21 crc kubenswrapper[4853]: I1122 07:15:21.013065 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b512d3e0-dee2-48d3-87e3-b05fb2cf8ed7-audit-dir\") pod \"b512d3e0-dee2-48d3-87e3-b05fb2cf8ed7\" (UID: \"b512d3e0-dee2-48d3-87e3-b05fb2cf8ed7\") "
Nov 22 07:15:21 crc kubenswrapper[4853]: I1122 07:15:21.013144 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b512d3e0-dee2-48d3-87e3-b05fb2cf8ed7-v4-0-config-user-template-error\") pod \"b512d3e0-dee2-48d3-87e3-b05fb2cf8ed7\" (UID: \"b512d3e0-dee2-48d3-87e3-b05fb2cf8ed7\") "
Nov 22 07:15:21 crc kubenswrapper[4853]: I1122 07:15:21.013224 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b512d3e0-dee2-48d3-87e3-b05fb2cf8ed7-v4-0-config-system-ocp-branding-template\") pod \"b512d3e0-dee2-48d3-87e3-b05fb2cf8ed7\" (UID: \"b512d3e0-dee2-48d3-87e3-b05fb2cf8ed7\") "
Nov 22
07:15:21 crc kubenswrapper[4853]: I1122 07:15:21.013292 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b512d3e0-dee2-48d3-87e3-b05fb2cf8ed7-v4-0-config-system-router-certs\") pod \"b512d3e0-dee2-48d3-87e3-b05fb2cf8ed7\" (UID: \"b512d3e0-dee2-48d3-87e3-b05fb2cf8ed7\") " Nov 22 07:15:21 crc kubenswrapper[4853]: I1122 07:15:21.013323 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b512d3e0-dee2-48d3-87e3-b05fb2cf8ed7-v4-0-config-system-trusted-ca-bundle\") pod \"b512d3e0-dee2-48d3-87e3-b05fb2cf8ed7\" (UID: \"b512d3e0-dee2-48d3-87e3-b05fb2cf8ed7\") " Nov 22 07:15:21 crc kubenswrapper[4853]: I1122 07:15:21.013410 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qws8t\" (UniqueName: \"kubernetes.io/projected/b512d3e0-dee2-48d3-87e3-b05fb2cf8ed7-kube-api-access-qws8t\") pod \"b512d3e0-dee2-48d3-87e3-b05fb2cf8ed7\" (UID: \"b512d3e0-dee2-48d3-87e3-b05fb2cf8ed7\") " Nov 22 07:15:21 crc kubenswrapper[4853]: I1122 07:15:21.013451 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b512d3e0-dee2-48d3-87e3-b05fb2cf8ed7-audit-policies\") pod \"b512d3e0-dee2-48d3-87e3-b05fb2cf8ed7\" (UID: \"b512d3e0-dee2-48d3-87e3-b05fb2cf8ed7\") " Nov 22 07:15:21 crc kubenswrapper[4853]: I1122 07:15:21.013481 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b512d3e0-dee2-48d3-87e3-b05fb2cf8ed7-v4-0-config-user-template-login\") pod \"b512d3e0-dee2-48d3-87e3-b05fb2cf8ed7\" (UID: \"b512d3e0-dee2-48d3-87e3-b05fb2cf8ed7\") " Nov 22 07:15:21 crc kubenswrapper[4853]: I1122 07:15:21.013907 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b512d3e0-dee2-48d3-87e3-b05fb2cf8ed7-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "b512d3e0-dee2-48d3-87e3-b05fb2cf8ed7" (UID: "b512d3e0-dee2-48d3-87e3-b05fb2cf8ed7"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 22 07:15:21 crc kubenswrapper[4853]: I1122 07:15:21.017070 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b512d3e0-dee2-48d3-87e3-b05fb2cf8ed7-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "b512d3e0-dee2-48d3-87e3-b05fb2cf8ed7" (UID: "b512d3e0-dee2-48d3-87e3-b05fb2cf8ed7"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:15:21 crc kubenswrapper[4853]: I1122 07:15:21.018712 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b512d3e0-dee2-48d3-87e3-b05fb2cf8ed7-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "b512d3e0-dee2-48d3-87e3-b05fb2cf8ed7" (UID: "b512d3e0-dee2-48d3-87e3-b05fb2cf8ed7"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:15:21 crc kubenswrapper[4853]: I1122 07:15:21.018948 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b512d3e0-dee2-48d3-87e3-b05fb2cf8ed7-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "b512d3e0-dee2-48d3-87e3-b05fb2cf8ed7" (UID: "b512d3e0-dee2-48d3-87e3-b05fb2cf8ed7"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:15:21 crc kubenswrapper[4853]: I1122 07:15:21.022273 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b512d3e0-dee2-48d3-87e3-b05fb2cf8ed7-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "b512d3e0-dee2-48d3-87e3-b05fb2cf8ed7" (UID: "b512d3e0-dee2-48d3-87e3-b05fb2cf8ed7"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:15:21 crc kubenswrapper[4853]: I1122 07:15:21.027357 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b512d3e0-dee2-48d3-87e3-b05fb2cf8ed7-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "b512d3e0-dee2-48d3-87e3-b05fb2cf8ed7" (UID: "b512d3e0-dee2-48d3-87e3-b05fb2cf8ed7"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:15:21 crc kubenswrapper[4853]: I1122 07:15:21.027807 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b512d3e0-dee2-48d3-87e3-b05fb2cf8ed7-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "b512d3e0-dee2-48d3-87e3-b05fb2cf8ed7" (UID: "b512d3e0-dee2-48d3-87e3-b05fb2cf8ed7"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:15:21 crc kubenswrapper[4853]: I1122 07:15:21.034211 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b512d3e0-dee2-48d3-87e3-b05fb2cf8ed7-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "b512d3e0-dee2-48d3-87e3-b05fb2cf8ed7" (UID: "b512d3e0-dee2-48d3-87e3-b05fb2cf8ed7"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:15:21 crc kubenswrapper[4853]: I1122 07:15:21.035567 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b512d3e0-dee2-48d3-87e3-b05fb2cf8ed7-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "b512d3e0-dee2-48d3-87e3-b05fb2cf8ed7" (UID: "b512d3e0-dee2-48d3-87e3-b05fb2cf8ed7"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:15:21 crc kubenswrapper[4853]: I1122 07:15:21.036694 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b512d3e0-dee2-48d3-87e3-b05fb2cf8ed7-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "b512d3e0-dee2-48d3-87e3-b05fb2cf8ed7" (UID: "b512d3e0-dee2-48d3-87e3-b05fb2cf8ed7"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:15:21 crc kubenswrapper[4853]: I1122 07:15:21.037457 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b512d3e0-dee2-48d3-87e3-b05fb2cf8ed7-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "b512d3e0-dee2-48d3-87e3-b05fb2cf8ed7" (UID: "b512d3e0-dee2-48d3-87e3-b05fb2cf8ed7"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:15:21 crc kubenswrapper[4853]: I1122 07:15:21.038251 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b512d3e0-dee2-48d3-87e3-b05fb2cf8ed7-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "b512d3e0-dee2-48d3-87e3-b05fb2cf8ed7" (UID: "b512d3e0-dee2-48d3-87e3-b05fb2cf8ed7"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:15:21 crc kubenswrapper[4853]: I1122 07:15:21.041836 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-546468998b-lshhx"] Nov 22 07:15:21 crc kubenswrapper[4853]: E1122 07:15:21.042541 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a52e070-929c-4194-8197-d66d88780fdc" containerName="extract-utilities" Nov 22 07:15:21 crc kubenswrapper[4853]: I1122 07:15:21.042573 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a52e070-929c-4194-8197-d66d88780fdc" containerName="extract-utilities" Nov 22 07:15:21 crc kubenswrapper[4853]: E1122 07:15:21.042614 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a52e070-929c-4194-8197-d66d88780fdc" containerName="registry-server" Nov 22 07:15:21 crc kubenswrapper[4853]: I1122 07:15:21.042629 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a52e070-929c-4194-8197-d66d88780fdc" containerName="registry-server" Nov 22 07:15:21 crc kubenswrapper[4853]: E1122 07:15:21.042661 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d65d22f-c53e-4a25-9571-3bbb65e04d66" containerName="collect-profiles" Nov 22 07:15:21 crc kubenswrapper[4853]: I1122 07:15:21.042680 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d65d22f-c53e-4a25-9571-3bbb65e04d66" containerName="collect-profiles" Nov 22 07:15:21 crc kubenswrapper[4853]: E1122 07:15:21.042706 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b512d3e0-dee2-48d3-87e3-b05fb2cf8ed7" containerName="oauth-openshift" Nov 22 07:15:21 crc kubenswrapper[4853]: I1122 07:15:21.042718 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="b512d3e0-dee2-48d3-87e3-b05fb2cf8ed7" containerName="oauth-openshift" Nov 22 07:15:21 crc kubenswrapper[4853]: E1122 07:15:21.042790 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a52e070-929c-4194-8197-d66d88780fdc" containerName="extract-content" Nov 22 07:15:21 crc kubenswrapper[4853]: I1122 07:15:21.042804 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a52e070-929c-4194-8197-d66d88780fdc" containerName="extract-content" Nov 22 07:15:21 crc kubenswrapper[4853]: I1122 07:15:21.043152 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a52e070-929c-4194-8197-d66d88780fdc" containerName="registry-server" Nov 22 07:15:21 crc kubenswrapper[4853]: I1122 07:15:21.043216 4853 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="8d65d22f-c53e-4a25-9571-3bbb65e04d66" containerName="collect-profiles" Nov 22 07:15:21 crc kubenswrapper[4853]: I1122 07:15:21.043243 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="b512d3e0-dee2-48d3-87e3-b05fb2cf8ed7" containerName="oauth-openshift" Nov 22 07:15:21 crc kubenswrapper[4853]: I1122 07:15:21.044538 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-546468998b-lshhx" Nov 22 07:15:21 crc kubenswrapper[4853]: I1122 07:15:21.051469 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b512d3e0-dee2-48d3-87e3-b05fb2cf8ed7-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "b512d3e0-dee2-48d3-87e3-b05fb2cf8ed7" (UID: "b512d3e0-dee2-48d3-87e3-b05fb2cf8ed7"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:15:21 crc kubenswrapper[4853]: I1122 07:15:21.051288 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b512d3e0-dee2-48d3-87e3-b05fb2cf8ed7-kube-api-access-qws8t" (OuterVolumeSpecName: "kube-api-access-qws8t") pod "b512d3e0-dee2-48d3-87e3-b05fb2cf8ed7" (UID: "b512d3e0-dee2-48d3-87e3-b05fb2cf8ed7"). InnerVolumeSpecName "kube-api-access-qws8t". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:15:21 crc kubenswrapper[4853]: I1122 07:15:21.062666 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-546468998b-lshhx"] Nov 22 07:15:21 crc kubenswrapper[4853]: I1122 07:15:21.114785 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6c890bc3-aeca-405a-b781-0e4204091c64-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-546468998b-lshhx\" (UID: \"6c890bc3-aeca-405a-b781-0e4204091c64\") " pod="openshift-authentication/oauth-openshift-546468998b-lshhx" Nov 22 07:15:21 crc kubenswrapper[4853]: I1122 07:15:21.114833 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6c890bc3-aeca-405a-b781-0e4204091c64-audit-dir\") pod \"oauth-openshift-546468998b-lshhx\" (UID: \"6c890bc3-aeca-405a-b781-0e4204091c64\") " pod="openshift-authentication/oauth-openshift-546468998b-lshhx" Nov 22 07:15:21 crc kubenswrapper[4853]: I1122 07:15:21.114853 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6c890bc3-aeca-405a-b781-0e4204091c64-v4-0-config-system-service-ca\") pod \"oauth-openshift-546468998b-lshhx\" (UID: \"6c890bc3-aeca-405a-b781-0e4204091c64\") " pod="openshift-authentication/oauth-openshift-546468998b-lshhx" Nov 22 07:15:21 crc kubenswrapper[4853]: I1122 07:15:21.114875 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6c890bc3-aeca-405a-b781-0e4204091c64-v4-0-config-user-template-error\") pod \"oauth-openshift-546468998b-lshhx\" (UID: \"6c890bc3-aeca-405a-b781-0e4204091c64\") " pod="openshift-authentication/oauth-openshift-546468998b-lshhx" Nov 22 07:15:21 crc kubenswrapper[4853]: I1122 07:15:21.114897 4853 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6c890bc3-aeca-405a-b781-0e4204091c64-audit-policies\") pod \"oauth-openshift-546468998b-lshhx\" (UID: \"6c890bc3-aeca-405a-b781-0e4204091c64\") " pod="openshift-authentication/oauth-openshift-546468998b-lshhx" Nov 22 07:15:21 crc kubenswrapper[4853]: I1122 07:15:21.115148 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6c890bc3-aeca-405a-b781-0e4204091c64-v4-0-config-system-router-certs\") pod \"oauth-openshift-546468998b-lshhx\" (UID: \"6c890bc3-aeca-405a-b781-0e4204091c64\") " pod="openshift-authentication/oauth-openshift-546468998b-lshhx" Nov 22 07:15:21 crc kubenswrapper[4853]: I1122 07:15:21.115181 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrb4n\" (UniqueName: \"kubernetes.io/projected/6c890bc3-aeca-405a-b781-0e4204091c64-kube-api-access-lrb4n\") pod \"oauth-openshift-546468998b-lshhx\" (UID: \"6c890bc3-aeca-405a-b781-0e4204091c64\") " pod="openshift-authentication/oauth-openshift-546468998b-lshhx" Nov 22 07:15:21 crc kubenswrapper[4853]: I1122 07:15:21.115206 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/6c890bc3-aeca-405a-b781-0e4204091c64-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-546468998b-lshhx\" (UID: \"6c890bc3-aeca-405a-b781-0e4204091c64\") " pod="openshift-authentication/oauth-openshift-546468998b-lshhx" Nov 22 07:15:21 crc kubenswrapper[4853]: I1122 07:15:21.115228 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6c890bc3-aeca-405a-b781-0e4204091c64-v4-0-config-system-serving-cert\") pod \"oauth-openshift-546468998b-lshhx\" (UID: \"6c890bc3-aeca-405a-b781-0e4204091c64\") " pod="openshift-authentication/oauth-openshift-546468998b-lshhx" Nov 22 07:15:21 crc kubenswrapper[4853]: I1122 07:15:21.115251 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/6c890bc3-aeca-405a-b781-0e4204091c64-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-546468998b-lshhx\" (UID: \"6c890bc3-aeca-405a-b781-0e4204091c64\") " pod="openshift-authentication/oauth-openshift-546468998b-lshhx" Nov 22 07:15:21 crc kubenswrapper[4853]: I1122 07:15:21.115341 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6c890bc3-aeca-405a-b781-0e4204091c64-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-546468998b-lshhx\" (UID: \"6c890bc3-aeca-405a-b781-0e4204091c64\") " pod="openshift-authentication/oauth-openshift-546468998b-lshhx" Nov 22 07:15:21 crc kubenswrapper[4853]: I1122 07:15:21.115629 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6c890bc3-aeca-405a-b781-0e4204091c64-v4-0-config-user-template-login\") pod \"oauth-openshift-546468998b-lshhx\" (UID: 
\"6c890bc3-aeca-405a-b781-0e4204091c64\") " pod="openshift-authentication/oauth-openshift-546468998b-lshhx" Nov 22 07:15:21 crc kubenswrapper[4853]: I1122 07:15:21.115814 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6c890bc3-aeca-405a-b781-0e4204091c64-v4-0-config-system-session\") pod \"oauth-openshift-546468998b-lshhx\" (UID: \"6c890bc3-aeca-405a-b781-0e4204091c64\") " pod="openshift-authentication/oauth-openshift-546468998b-lshhx" Nov 22 07:15:21 crc kubenswrapper[4853]: I1122 07:15:21.115890 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6c890bc3-aeca-405a-b781-0e4204091c64-v4-0-config-system-cliconfig\") pod \"oauth-openshift-546468998b-lshhx\" (UID: \"6c890bc3-aeca-405a-b781-0e4204091c64\") " pod="openshift-authentication/oauth-openshift-546468998b-lshhx" Nov 22 07:15:21 crc kubenswrapper[4853]: I1122 07:15:21.116713 4853 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b512d3e0-dee2-48d3-87e3-b05fb2cf8ed7-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 22 07:15:21 crc kubenswrapper[4853]: I1122 07:15:21.116800 4853 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/b512d3e0-dee2-48d3-87e3-b05fb2cf8ed7-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Nov 22 07:15:21 crc kubenswrapper[4853]: I1122 07:15:21.116836 4853 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b512d3e0-dee2-48d3-87e3-b05fb2cf8ed7-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Nov 22 07:15:21 crc kubenswrapper[4853]: I1122 07:15:21.116904 4853 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b512d3e0-dee2-48d3-87e3-b05fb2cf8ed7-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Nov 22 07:15:21 crc kubenswrapper[4853]: I1122 07:15:21.116938 4853 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b512d3e0-dee2-48d3-87e3-b05fb2cf8ed7-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Nov 22 07:15:21 crc kubenswrapper[4853]: I1122 07:15:21.116973 4853 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b512d3e0-dee2-48d3-87e3-b05fb2cf8ed7-audit-dir\") on node \"crc\" DevicePath \"\"" Nov 22 07:15:21 crc kubenswrapper[4853]: I1122 07:15:21.117001 4853 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b512d3e0-dee2-48d3-87e3-b05fb2cf8ed7-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Nov 22 07:15:21 crc kubenswrapper[4853]: I1122 07:15:21.117029 4853 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b512d3e0-dee2-48d3-87e3-b05fb2cf8ed7-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Nov 22 07:15:21 crc kubenswrapper[4853]: I1122 07:15:21.117058 4853 reconciler_common.go:293] "Volume detached for volume 
\"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b512d3e0-dee2-48d3-87e3-b05fb2cf8ed7-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Nov 22 07:15:21 crc kubenswrapper[4853]: I1122 07:15:21.117086 4853 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b512d3e0-dee2-48d3-87e3-b05fb2cf8ed7-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:15:21 crc kubenswrapper[4853]: I1122 07:15:21.117115 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qws8t\" (UniqueName: \"kubernetes.io/projected/b512d3e0-dee2-48d3-87e3-b05fb2cf8ed7-kube-api-access-qws8t\") on node \"crc\" DevicePath \"\"" Nov 22 07:15:21 crc kubenswrapper[4853]: I1122 07:15:21.117142 4853 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b512d3e0-dee2-48d3-87e3-b05fb2cf8ed7-audit-policies\") on node \"crc\" DevicePath \"\"" Nov 22 07:15:21 crc kubenswrapper[4853]: I1122 07:15:21.117168 4853 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b512d3e0-dee2-48d3-87e3-b05fb2cf8ed7-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Nov 22 07:15:21 crc kubenswrapper[4853]: I1122 07:15:21.117197 4853 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b512d3e0-dee2-48d3-87e3-b05fb2cf8ed7-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Nov 22 07:15:21 crc kubenswrapper[4853]: I1122 07:15:21.130889 4853 generic.go:334] "Generic (PLEG): container finished" podID="b512d3e0-dee2-48d3-87e3-b05fb2cf8ed7" containerID="2b15c550b629a8fcf44d4c427eb867445e535e0b1a7931aab022d30e1cc55994" exitCode=0 Nov 22 07:15:21 crc kubenswrapper[4853]: I1122 07:15:21.130941 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-9qfvq" event={"ID":"b512d3e0-dee2-48d3-87e3-b05fb2cf8ed7","Type":"ContainerDied","Data":"2b15c550b629a8fcf44d4c427eb867445e535e0b1a7931aab022d30e1cc55994"} Nov 22 07:15:21 crc kubenswrapper[4853]: I1122 07:15:21.130974 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-9qfvq" event={"ID":"b512d3e0-dee2-48d3-87e3-b05fb2cf8ed7","Type":"ContainerDied","Data":"0904dec92d1299cc3aef2e71988c4159b63191d58c04c0a4c4636177c6081a86"} Nov 22 07:15:21 crc kubenswrapper[4853]: I1122 07:15:21.130993 4853 scope.go:117] "RemoveContainer" containerID="2b15c550b629a8fcf44d4c427eb867445e535e0b1a7931aab022d30e1cc55994" Nov 22 07:15:21 crc kubenswrapper[4853]: I1122 07:15:21.131145 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-9qfvq" Nov 22 07:15:21 crc kubenswrapper[4853]: I1122 07:15:21.153975 4853 scope.go:117] "RemoveContainer" containerID="2b15c550b629a8fcf44d4c427eb867445e535e0b1a7931aab022d30e1cc55994" Nov 22 07:15:21 crc kubenswrapper[4853]: E1122 07:15:21.155334 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2b15c550b629a8fcf44d4c427eb867445e535e0b1a7931aab022d30e1cc55994\": container with ID starting with 2b15c550b629a8fcf44d4c427eb867445e535e0b1a7931aab022d30e1cc55994 not found: ID does not exist" containerID="2b15c550b629a8fcf44d4c427eb867445e535e0b1a7931aab022d30e1cc55994" Nov 22 07:15:21 crc kubenswrapper[4853]: I1122 07:15:21.155385 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2b15c550b629a8fcf44d4c427eb867445e535e0b1a7931aab022d30e1cc55994"} err="failed to get container status \"2b15c550b629a8fcf44d4c427eb867445e535e0b1a7931aab022d30e1cc55994\": rpc error: code = NotFound desc = could not find container \"2b15c550b629a8fcf44d4c427eb867445e535e0b1a7931aab022d30e1cc55994\": container with ID starting with 2b15c550b629a8fcf44d4c427eb867445e535e0b1a7931aab022d30e1cc55994 not found: ID does not exist" Nov 22 07:15:21 crc kubenswrapper[4853]: I1122 07:15:21.168311 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-9qfvq"] Nov 22 07:15:21 crc kubenswrapper[4853]: I1122 07:15:21.171235 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-9qfvq"] Nov 22 07:15:21 crc kubenswrapper[4853]: I1122 07:15:21.218709 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6c890bc3-aeca-405a-b781-0e4204091c64-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-546468998b-lshhx\" (UID: \"6c890bc3-aeca-405a-b781-0e4204091c64\") " pod="openshift-authentication/oauth-openshift-546468998b-lshhx" Nov 22 07:15:21 crc kubenswrapper[4853]: I1122 07:15:21.218797 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6c890bc3-aeca-405a-b781-0e4204091c64-audit-dir\") pod \"oauth-openshift-546468998b-lshhx\" (UID: \"6c890bc3-aeca-405a-b781-0e4204091c64\") " pod="openshift-authentication/oauth-openshift-546468998b-lshhx" Nov 22 07:15:21 crc kubenswrapper[4853]: I1122 07:15:21.218837 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6c890bc3-aeca-405a-b781-0e4204091c64-v4-0-config-system-service-ca\") pod \"oauth-openshift-546468998b-lshhx\" (UID: \"6c890bc3-aeca-405a-b781-0e4204091c64\") " pod="openshift-authentication/oauth-openshift-546468998b-lshhx" Nov 22 07:15:21 crc kubenswrapper[4853]: I1122 07:15:21.218867 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6c890bc3-aeca-405a-b781-0e4204091c64-v4-0-config-user-template-error\") pod \"oauth-openshift-546468998b-lshhx\" (UID: \"6c890bc3-aeca-405a-b781-0e4204091c64\") " pod="openshift-authentication/oauth-openshift-546468998b-lshhx" Nov 22 07:15:21 crc kubenswrapper[4853]: I1122 07:15:21.218901 4853 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6c890bc3-aeca-405a-b781-0e4204091c64-audit-policies\") pod \"oauth-openshift-546468998b-lshhx\" (UID: \"6c890bc3-aeca-405a-b781-0e4204091c64\") " pod="openshift-authentication/oauth-openshift-546468998b-lshhx" Nov 22 07:15:21 crc kubenswrapper[4853]: I1122 07:15:21.218937 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6c890bc3-aeca-405a-b781-0e4204091c64-v4-0-config-system-router-certs\") pod \"oauth-openshift-546468998b-lshhx\" (UID: \"6c890bc3-aeca-405a-b781-0e4204091c64\") " pod="openshift-authentication/oauth-openshift-546468998b-lshhx" Nov 22 07:15:21 crc kubenswrapper[4853]: I1122 07:15:21.218961 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lrb4n\" (UniqueName: \"kubernetes.io/projected/6c890bc3-aeca-405a-b781-0e4204091c64-kube-api-access-lrb4n\") pod \"oauth-openshift-546468998b-lshhx\" (UID: \"6c890bc3-aeca-405a-b781-0e4204091c64\") " pod="openshift-authentication/oauth-openshift-546468998b-lshhx" Nov 22 07:15:21 crc kubenswrapper[4853]: I1122 07:15:21.219003 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6c890bc3-aeca-405a-b781-0e4204091c64-audit-dir\") pod \"oauth-openshift-546468998b-lshhx\" (UID: \"6c890bc3-aeca-405a-b781-0e4204091c64\") " pod="openshift-authentication/oauth-openshift-546468998b-lshhx" Nov 22 07:15:21 crc kubenswrapper[4853]: I1122 07:15:21.219019 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/6c890bc3-aeca-405a-b781-0e4204091c64-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-546468998b-lshhx\" (UID: \"6c890bc3-aeca-405a-b781-0e4204091c64\") " pod="openshift-authentication/oauth-openshift-546468998b-lshhx" Nov 22 07:15:21 crc kubenswrapper[4853]: I1122 07:15:21.219934 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6c890bc3-aeca-405a-b781-0e4204091c64-v4-0-config-system-serving-cert\") pod \"oauth-openshift-546468998b-lshhx\" (UID: \"6c890bc3-aeca-405a-b781-0e4204091c64\") " pod="openshift-authentication/oauth-openshift-546468998b-lshhx" Nov 22 07:15:21 crc kubenswrapper[4853]: I1122 07:15:21.220048 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/6c890bc3-aeca-405a-b781-0e4204091c64-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-546468998b-lshhx\" (UID: \"6c890bc3-aeca-405a-b781-0e4204091c64\") " pod="openshift-authentication/oauth-openshift-546468998b-lshhx" Nov 22 07:15:21 crc kubenswrapper[4853]: I1122 07:15:21.220184 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6c890bc3-aeca-405a-b781-0e4204091c64-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-546468998b-lshhx\" (UID: \"6c890bc3-aeca-405a-b781-0e4204091c64\") " pod="openshift-authentication/oauth-openshift-546468998b-lshhx" Nov 22 07:15:21 crc kubenswrapper[4853]: I1122 07:15:21.220339 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6c890bc3-aeca-405a-b781-0e4204091c64-v4-0-config-user-template-login\") pod \"oauth-openshift-546468998b-lshhx\" (UID: \"6c890bc3-aeca-405a-b781-0e4204091c64\") " pod="openshift-authentication/oauth-openshift-546468998b-lshhx" Nov 22 07:15:21 crc kubenswrapper[4853]: I1122 07:15:21.220346 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6c890bc3-aeca-405a-b781-0e4204091c64-audit-policies\") pod \"oauth-openshift-546468998b-lshhx\" (UID: \"6c890bc3-aeca-405a-b781-0e4204091c64\") " pod="openshift-authentication/oauth-openshift-546468998b-lshhx" Nov 22 07:15:21 crc kubenswrapper[4853]: I1122 07:15:21.220655 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6c890bc3-aeca-405a-b781-0e4204091c64-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-546468998b-lshhx\" (UID: \"6c890bc3-aeca-405a-b781-0e4204091c64\") " pod="openshift-authentication/oauth-openshift-546468998b-lshhx" Nov 22 07:15:21 crc kubenswrapper[4853]: I1122 07:15:21.221094 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6c890bc3-aeca-405a-b781-0e4204091c64-v4-0-config-system-service-ca\") pod \"oauth-openshift-546468998b-lshhx\" (UID: \"6c890bc3-aeca-405a-b781-0e4204091c64\") " pod="openshift-authentication/oauth-openshift-546468998b-lshhx" Nov 22 07:15:21 crc kubenswrapper[4853]: I1122 07:15:21.221198 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6c890bc3-aeca-405a-b781-0e4204091c64-v4-0-config-system-session\") pod \"oauth-openshift-546468998b-lshhx\" (UID: \"6c890bc3-aeca-405a-b781-0e4204091c64\") " pod="openshift-authentication/oauth-openshift-546468998b-lshhx" Nov 22 07:15:21 crc kubenswrapper[4853]: I1122 07:15:21.221379 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6c890bc3-aeca-405a-b781-0e4204091c64-v4-0-config-system-cliconfig\") pod \"oauth-openshift-546468998b-lshhx\" (UID: \"6c890bc3-aeca-405a-b781-0e4204091c64\") " pod="openshift-authentication/oauth-openshift-546468998b-lshhx" Nov 22 07:15:21 crc kubenswrapper[4853]: I1122 07:15:21.222007 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6c890bc3-aeca-405a-b781-0e4204091c64-v4-0-config-system-cliconfig\") pod \"oauth-openshift-546468998b-lshhx\" (UID: \"6c890bc3-aeca-405a-b781-0e4204091c64\") " pod="openshift-authentication/oauth-openshift-546468998b-lshhx" Nov 22 07:15:21 crc kubenswrapper[4853]: I1122 07:15:21.223908 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6c890bc3-aeca-405a-b781-0e4204091c64-v4-0-config-user-template-error\") pod \"oauth-openshift-546468998b-lshhx\" (UID: \"6c890bc3-aeca-405a-b781-0e4204091c64\") " pod="openshift-authentication/oauth-openshift-546468998b-lshhx" Nov 22 07:15:21 crc kubenswrapper[4853]: I1122 07:15:21.224531 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: 
\"kubernetes.io/secret/6c890bc3-aeca-405a-b781-0e4204091c64-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-546468998b-lshhx\" (UID: \"6c890bc3-aeca-405a-b781-0e4204091c64\") " pod="openshift-authentication/oauth-openshift-546468998b-lshhx" Nov 22 07:15:21 crc kubenswrapper[4853]: I1122 07:15:21.224887 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6c890bc3-aeca-405a-b781-0e4204091c64-v4-0-config-system-router-certs\") pod \"oauth-openshift-546468998b-lshhx\" (UID: \"6c890bc3-aeca-405a-b781-0e4204091c64\") " pod="openshift-authentication/oauth-openshift-546468998b-lshhx" Nov 22 07:15:21 crc kubenswrapper[4853]: I1122 07:15:21.225263 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6c890bc3-aeca-405a-b781-0e4204091c64-v4-0-config-system-serving-cert\") pod \"oauth-openshift-546468998b-lshhx\" (UID: \"6c890bc3-aeca-405a-b781-0e4204091c64\") " pod="openshift-authentication/oauth-openshift-546468998b-lshhx" Nov 22 07:15:21 crc kubenswrapper[4853]: I1122 07:15:21.225484 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/6c890bc3-aeca-405a-b781-0e4204091c64-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-546468998b-lshhx\" (UID: \"6c890bc3-aeca-405a-b781-0e4204091c64\") " pod="openshift-authentication/oauth-openshift-546468998b-lshhx" Nov 22 07:15:21 crc kubenswrapper[4853]: I1122 07:15:21.225581 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6c890bc3-aeca-405a-b781-0e4204091c64-v4-0-config-system-session\") pod \"oauth-openshift-546468998b-lshhx\" (UID: \"6c890bc3-aeca-405a-b781-0e4204091c64\") " pod="openshift-authentication/oauth-openshift-546468998b-lshhx" Nov 22 07:15:21 crc kubenswrapper[4853]: I1122 07:15:21.226239 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6c890bc3-aeca-405a-b781-0e4204091c64-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-546468998b-lshhx\" (UID: \"6c890bc3-aeca-405a-b781-0e4204091c64\") " pod="openshift-authentication/oauth-openshift-546468998b-lshhx" Nov 22 07:15:21 crc kubenswrapper[4853]: I1122 07:15:21.226393 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6c890bc3-aeca-405a-b781-0e4204091c64-v4-0-config-user-template-login\") pod \"oauth-openshift-546468998b-lshhx\" (UID: \"6c890bc3-aeca-405a-b781-0e4204091c64\") " pod="openshift-authentication/oauth-openshift-546468998b-lshhx" Nov 22 07:15:21 crc kubenswrapper[4853]: I1122 07:15:21.240731 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lrb4n\" (UniqueName: \"kubernetes.io/projected/6c890bc3-aeca-405a-b781-0e4204091c64-kube-api-access-lrb4n\") pod \"oauth-openshift-546468998b-lshhx\" (UID: \"6c890bc3-aeca-405a-b781-0e4204091c64\") " pod="openshift-authentication/oauth-openshift-546468998b-lshhx" Nov 22 07:15:21 crc kubenswrapper[4853]: I1122 07:15:21.395934 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-546468998b-lshhx" Nov 22 07:15:21 crc kubenswrapper[4853]: I1122 07:15:21.600145 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-546468998b-lshhx"] Nov 22 07:15:21 crc kubenswrapper[4853]: W1122 07:15:21.613460 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6c890bc3_aeca_405a_b781_0e4204091c64.slice/crio-63d2ef6a949d5bba83e5e5484234d127399561ba6ffd18a69ca254ab97429443 WatchSource:0}: Error finding container 63d2ef6a949d5bba83e5e5484234d127399561ba6ffd18a69ca254ab97429443: Status 404 returned error can't find the container with id 63d2ef6a949d5bba83e5e5484234d127399561ba6ffd18a69ca254ab97429443 Nov 22 07:15:21 crc kubenswrapper[4853]: I1122 07:15:21.759335 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b512d3e0-dee2-48d3-87e3-b05fb2cf8ed7" path="/var/lib/kubelet/pods/b512d3e0-dee2-48d3-87e3-b05fb2cf8ed7/volumes" Nov 22 07:15:22 crc kubenswrapper[4853]: I1122 07:15:22.139908 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-546468998b-lshhx" event={"ID":"6c890bc3-aeca-405a-b781-0e4204091c64","Type":"ContainerStarted","Data":"bec21308946db0dc4cbdca5518b19c0b346f40711bd244b0bfebc920f21ca928"} Nov 22 07:15:22 crc kubenswrapper[4853]: I1122 07:15:22.139997 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-546468998b-lshhx" event={"ID":"6c890bc3-aeca-405a-b781-0e4204091c64","Type":"ContainerStarted","Data":"63d2ef6a949d5bba83e5e5484234d127399561ba6ffd18a69ca254ab97429443"} Nov 22 07:15:22 crc kubenswrapper[4853]: I1122 07:15:22.140029 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-546468998b-lshhx" Nov 22 07:15:22 crc kubenswrapper[4853]: I1122 07:15:22.167129 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-546468998b-lshhx" podStartSLOduration=27.167067087 podStartE2EDuration="27.167067087s" podCreationTimestamp="2025-11-22 07:14:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:15:22.160263796 +0000 UTC m=+321.000886422" watchObservedRunningTime="2025-11-22 07:15:22.167067087 +0000 UTC m=+321.007689713" Nov 22 07:15:22 crc kubenswrapper[4853]: I1122 07:15:22.320861 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-546468998b-lshhx" Nov 22 07:15:45 crc kubenswrapper[4853]: I1122 07:15:45.273433 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 07:16:31 crc kubenswrapper[4853]: I1122 07:16:31.297979 4853 patch_prober.go:28] interesting pod/machine-config-daemon-fflvd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 07:16:31 crc kubenswrapper[4853]: I1122 07:16:31.299880 4853 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" containerName="machine-config-daemon" 
probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 07:16:52 crc kubenswrapper[4853]: I1122 07:16:52.180793 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-b4zvh"] Nov 22 07:16:52 crc kubenswrapper[4853]: I1122 07:16:52.181965 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-b4zvh" podUID="30996d2a-faed-48ba-80d6-d86b88fd5282" containerName="registry-server" containerID="cri-o://b9576d80a7d84ec9df7764b6890e954222207aca2744438d6198a0e34e8e2631" gracePeriod=30 Nov 22 07:16:52 crc kubenswrapper[4853]: I1122 07:16:52.187643 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-sk8bz"] Nov 22 07:16:52 crc kubenswrapper[4853]: I1122 07:16:52.187977 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-sk8bz" podUID="a81b49b7-c4a0-4397-8524-ffaa67583496" containerName="registry-server" containerID="cri-o://642c66c685f84d555e8fc63dd88ced89d5b9c418a9ca00fcc80ccf4a12a6f77e" gracePeriod=30 Nov 22 07:16:52 crc kubenswrapper[4853]: I1122 07:16:52.199747 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-gwwg5"] Nov 22 07:16:52 crc kubenswrapper[4853]: I1122 07:16:52.200980 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-gwwg5" podUID="b8aed1ad-ec7d-4e5d-b60a-b8d7bee2a0cc" containerName="marketplace-operator" containerID="cri-o://fc14df829aaa8de5d98e277ca0b0264dd4fab417c2ddd11c50ac00d38543b964" gracePeriod=30 Nov 22 07:16:52 crc kubenswrapper[4853]: I1122 07:16:52.216928 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4z6bc"] Nov 22 07:16:52 crc kubenswrapper[4853]: I1122 07:16:52.217310 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-4z6bc" podUID="6240b5f2-c1bb-4478-8935-b2579e37e8af" containerName="registry-server" containerID="cri-o://c408445515066b72c0989b35237d102fa17fd3336632715dda43b99b2990eafa" gracePeriod=30 Nov 22 07:16:52 crc kubenswrapper[4853]: I1122 07:16:52.221613 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-klwzw"] Nov 22 07:16:52 crc kubenswrapper[4853]: I1122 07:16:52.221849 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-klwzw" podUID="ce89388a-728c-4afc-b155-2813e35a8413" containerName="registry-server" containerID="cri-o://469cac3e136d62abd03882b740c6d4f2e2b473348383e12b7ac63c2694e31af4" gracePeriod=30 Nov 22 07:16:52 crc kubenswrapper[4853]: I1122 07:16:52.231305 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-nr2sr"] Nov 22 07:16:52 crc kubenswrapper[4853]: I1122 07:16:52.234223 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-nr2sr" Nov 22 07:16:52 crc kubenswrapper[4853]: I1122 07:16:52.258347 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-nr2sr"] Nov 22 07:16:52 crc kubenswrapper[4853]: E1122 07:16:52.305265 4853 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c408445515066b72c0989b35237d102fa17fd3336632715dda43b99b2990eafa is running failed: container process not found" containerID="c408445515066b72c0989b35237d102fa17fd3336632715dda43b99b2990eafa" cmd=["grpc_health_probe","-addr=:50051"] Nov 22 07:16:52 crc kubenswrapper[4853]: E1122 07:16:52.305622 4853 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c408445515066b72c0989b35237d102fa17fd3336632715dda43b99b2990eafa is running failed: container process not found" containerID="c408445515066b72c0989b35237d102fa17fd3336632715dda43b99b2990eafa" cmd=["grpc_health_probe","-addr=:50051"] Nov 22 07:16:52 crc kubenswrapper[4853]: E1122 07:16:52.306003 4853 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c408445515066b72c0989b35237d102fa17fd3336632715dda43b99b2990eafa is running failed: container process not found" containerID="c408445515066b72c0989b35237d102fa17fd3336632715dda43b99b2990eafa" cmd=["grpc_health_probe","-addr=:50051"] Nov 22 07:16:52 crc kubenswrapper[4853]: E1122 07:16:52.306043 4853 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c408445515066b72c0989b35237d102fa17fd3336632715dda43b99b2990eafa is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-4z6bc" podUID="6240b5f2-c1bb-4478-8935-b2579e37e8af" containerName="registry-server" Nov 22 07:16:52 crc kubenswrapper[4853]: I1122 07:16:52.430187 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ql9lg\" (UniqueName: \"kubernetes.io/projected/c54d72ed-4fd1-4c17-a3ac-ba1e743e2307-kube-api-access-ql9lg\") pod \"marketplace-operator-79b997595-nr2sr\" (UID: \"c54d72ed-4fd1-4c17-a3ac-ba1e743e2307\") " pod="openshift-marketplace/marketplace-operator-79b997595-nr2sr" Nov 22 07:16:52 crc kubenswrapper[4853]: I1122 07:16:52.430262 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c54d72ed-4fd1-4c17-a3ac-ba1e743e2307-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-nr2sr\" (UID: \"c54d72ed-4fd1-4c17-a3ac-ba1e743e2307\") " pod="openshift-marketplace/marketplace-operator-79b997595-nr2sr" Nov 22 07:16:52 crc kubenswrapper[4853]: I1122 07:16:52.430296 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/c54d72ed-4fd1-4c17-a3ac-ba1e743e2307-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-nr2sr\" (UID: \"c54d72ed-4fd1-4c17-a3ac-ba1e743e2307\") " pod="openshift-marketplace/marketplace-operator-79b997595-nr2sr" Nov 22 07:16:52 crc kubenswrapper[4853]: I1122 07:16:52.531803 4853 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-ql9lg\" (UniqueName: \"kubernetes.io/projected/c54d72ed-4fd1-4c17-a3ac-ba1e743e2307-kube-api-access-ql9lg\") pod \"marketplace-operator-79b997595-nr2sr\" (UID: \"c54d72ed-4fd1-4c17-a3ac-ba1e743e2307\") " pod="openshift-marketplace/marketplace-operator-79b997595-nr2sr" Nov 22 07:16:52 crc kubenswrapper[4853]: I1122 07:16:52.531889 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c54d72ed-4fd1-4c17-a3ac-ba1e743e2307-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-nr2sr\" (UID: \"c54d72ed-4fd1-4c17-a3ac-ba1e743e2307\") " pod="openshift-marketplace/marketplace-operator-79b997595-nr2sr" Nov 22 07:16:52 crc kubenswrapper[4853]: I1122 07:16:52.531914 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/c54d72ed-4fd1-4c17-a3ac-ba1e743e2307-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-nr2sr\" (UID: \"c54d72ed-4fd1-4c17-a3ac-ba1e743e2307\") " pod="openshift-marketplace/marketplace-operator-79b997595-nr2sr" Nov 22 07:16:52 crc kubenswrapper[4853]: I1122 07:16:52.533899 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c54d72ed-4fd1-4c17-a3ac-ba1e743e2307-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-nr2sr\" (UID: \"c54d72ed-4fd1-4c17-a3ac-ba1e743e2307\") " pod="openshift-marketplace/marketplace-operator-79b997595-nr2sr" Nov 22 07:16:52 crc kubenswrapper[4853]: I1122 07:16:52.543997 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/c54d72ed-4fd1-4c17-a3ac-ba1e743e2307-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-nr2sr\" (UID: \"c54d72ed-4fd1-4c17-a3ac-ba1e743e2307\") " pod="openshift-marketplace/marketplace-operator-79b997595-nr2sr" Nov 22 07:16:52 crc kubenswrapper[4853]: I1122 07:16:52.566719 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ql9lg\" (UniqueName: \"kubernetes.io/projected/c54d72ed-4fd1-4c17-a3ac-ba1e743e2307-kube-api-access-ql9lg\") pod \"marketplace-operator-79b997595-nr2sr\" (UID: \"c54d72ed-4fd1-4c17-a3ac-ba1e743e2307\") " pod="openshift-marketplace/marketplace-operator-79b997595-nr2sr" Nov 22 07:16:52 crc kubenswrapper[4853]: I1122 07:16:52.695799 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sk8bz" Nov 22 07:16:52 crc kubenswrapper[4853]: I1122 07:16:52.700865 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-klwzw" Nov 22 07:16:52 crc kubenswrapper[4853]: I1122 07:16:52.759844 4853 generic.go:334] "Generic (PLEG): container finished" podID="30996d2a-faed-48ba-80d6-d86b88fd5282" containerID="b9576d80a7d84ec9df7764b6890e954222207aca2744438d6198a0e34e8e2631" exitCode=0 Nov 22 07:16:52 crc kubenswrapper[4853]: I1122 07:16:52.759937 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b4zvh" event={"ID":"30996d2a-faed-48ba-80d6-d86b88fd5282","Type":"ContainerDied","Data":"b9576d80a7d84ec9df7764b6890e954222207aca2744438d6198a0e34e8e2631"} Nov 22 07:16:52 crc kubenswrapper[4853]: I1122 07:16:52.762357 4853 generic.go:334] "Generic (PLEG): container finished" podID="ce89388a-728c-4afc-b155-2813e35a8413" containerID="469cac3e136d62abd03882b740c6d4f2e2b473348383e12b7ac63c2694e31af4" exitCode=0 Nov 22 07:16:52 crc kubenswrapper[4853]: I1122 07:16:52.762433 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-klwzw" event={"ID":"ce89388a-728c-4afc-b155-2813e35a8413","Type":"ContainerDied","Data":"469cac3e136d62abd03882b740c6d4f2e2b473348383e12b7ac63c2694e31af4"} Nov 22 07:16:52 crc kubenswrapper[4853]: I1122 07:16:52.762513 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-klwzw" event={"ID":"ce89388a-728c-4afc-b155-2813e35a8413","Type":"ContainerDied","Data":"4c8562832d4c146788aa6775494a35f49c77b82513395c18a74bbdc09c23bc59"} Nov 22 07:16:52 crc kubenswrapper[4853]: I1122 07:16:52.762542 4853 scope.go:117] "RemoveContainer" containerID="469cac3e136d62abd03882b740c6d4f2e2b473348383e12b7ac63c2694e31af4" Nov 22 07:16:52 crc kubenswrapper[4853]: I1122 07:16:52.762704 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-klwzw" Nov 22 07:16:52 crc kubenswrapper[4853]: I1122 07:16:52.765333 4853 generic.go:334] "Generic (PLEG): container finished" podID="6240b5f2-c1bb-4478-8935-b2579e37e8af" containerID="c408445515066b72c0989b35237d102fa17fd3336632715dda43b99b2990eafa" exitCode=0 Nov 22 07:16:52 crc kubenswrapper[4853]: I1122 07:16:52.765410 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4z6bc" event={"ID":"6240b5f2-c1bb-4478-8935-b2579e37e8af","Type":"ContainerDied","Data":"c408445515066b72c0989b35237d102fa17fd3336632715dda43b99b2990eafa"} Nov 22 07:16:52 crc kubenswrapper[4853]: I1122 07:16:52.767262 4853 generic.go:334] "Generic (PLEG): container finished" podID="b8aed1ad-ec7d-4e5d-b60a-b8d7bee2a0cc" containerID="fc14df829aaa8de5d98e277ca0b0264dd4fab417c2ddd11c50ac00d38543b964" exitCode=0 Nov 22 07:16:52 crc kubenswrapper[4853]: I1122 07:16:52.767323 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-gwwg5" event={"ID":"b8aed1ad-ec7d-4e5d-b60a-b8d7bee2a0cc","Type":"ContainerDied","Data":"fc14df829aaa8de5d98e277ca0b0264dd4fab417c2ddd11c50ac00d38543b964"} Nov 22 07:16:52 crc kubenswrapper[4853]: I1122 07:16:52.769575 4853 generic.go:334] "Generic (PLEG): container finished" podID="a81b49b7-c4a0-4397-8524-ffaa67583496" containerID="642c66c685f84d555e8fc63dd88ced89d5b9c418a9ca00fcc80ccf4a12a6f77e" exitCode=0 Nov 22 07:16:52 crc kubenswrapper[4853]: I1122 07:16:52.769607 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sk8bz" event={"ID":"a81b49b7-c4a0-4397-8524-ffaa67583496","Type":"ContainerDied","Data":"642c66c685f84d555e8fc63dd88ced89d5b9c418a9ca00fcc80ccf4a12a6f77e"} Nov 22 07:16:52 crc kubenswrapper[4853]: I1122 07:16:52.769625 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sk8bz" event={"ID":"a81b49b7-c4a0-4397-8524-ffaa67583496","Type":"ContainerDied","Data":"7e44e6ed2b4f392aa935319d3fba61fff428961451a3d7acea225d90811a6372"} Nov 22 07:16:52 crc kubenswrapper[4853]: I1122 07:16:52.769707 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-sk8bz" Nov 22 07:16:52 crc kubenswrapper[4853]: I1122 07:16:52.782740 4853 scope.go:117] "RemoveContainer" containerID="58b4e816eef2c49610060f8c5bf6ce020eb4acbd37268ae731a60140aa67be3d" Nov 22 07:16:52 crc kubenswrapper[4853]: I1122 07:16:52.800530 4853 scope.go:117] "RemoveContainer" containerID="de4f59ed20c8ccc82595c5ca26cc3987cdaeb6fcd8b9c31f8fc64319d3fafc6c" Nov 22 07:16:52 crc kubenswrapper[4853]: I1122 07:16:52.816306 4853 scope.go:117] "RemoveContainer" containerID="469cac3e136d62abd03882b740c6d4f2e2b473348383e12b7ac63c2694e31af4" Nov 22 07:16:52 crc kubenswrapper[4853]: E1122 07:16:52.816916 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"469cac3e136d62abd03882b740c6d4f2e2b473348383e12b7ac63c2694e31af4\": container with ID starting with 469cac3e136d62abd03882b740c6d4f2e2b473348383e12b7ac63c2694e31af4 not found: ID does not exist" containerID="469cac3e136d62abd03882b740c6d4f2e2b473348383e12b7ac63c2694e31af4" Nov 22 07:16:52 crc kubenswrapper[4853]: I1122 07:16:52.816982 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"469cac3e136d62abd03882b740c6d4f2e2b473348383e12b7ac63c2694e31af4"} err="failed to get container status \"469cac3e136d62abd03882b740c6d4f2e2b473348383e12b7ac63c2694e31af4\": rpc error: code = NotFound desc = could not find container \"469cac3e136d62abd03882b740c6d4f2e2b473348383e12b7ac63c2694e31af4\": container with ID starting with 469cac3e136d62abd03882b740c6d4f2e2b473348383e12b7ac63c2694e31af4 not found: ID does not exist" Nov 22 07:16:52 crc kubenswrapper[4853]: I1122 07:16:52.817018 4853 scope.go:117] "RemoveContainer" containerID="58b4e816eef2c49610060f8c5bf6ce020eb4acbd37268ae731a60140aa67be3d" Nov 22 07:16:52 crc kubenswrapper[4853]: E1122 07:16:52.817643 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"58b4e816eef2c49610060f8c5bf6ce020eb4acbd37268ae731a60140aa67be3d\": container with ID starting with 58b4e816eef2c49610060f8c5bf6ce020eb4acbd37268ae731a60140aa67be3d not found: ID does not exist" containerID="58b4e816eef2c49610060f8c5bf6ce020eb4acbd37268ae731a60140aa67be3d" Nov 22 07:16:52 crc kubenswrapper[4853]: I1122 07:16:52.817669 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"58b4e816eef2c49610060f8c5bf6ce020eb4acbd37268ae731a60140aa67be3d"} err="failed to get container status \"58b4e816eef2c49610060f8c5bf6ce020eb4acbd37268ae731a60140aa67be3d\": rpc error: code = NotFound desc = could not find container \"58b4e816eef2c49610060f8c5bf6ce020eb4acbd37268ae731a60140aa67be3d\": container with ID starting with 58b4e816eef2c49610060f8c5bf6ce020eb4acbd37268ae731a60140aa67be3d not found: ID does not exist" Nov 22 07:16:52 crc kubenswrapper[4853]: I1122 07:16:52.817681 4853 scope.go:117] "RemoveContainer" containerID="de4f59ed20c8ccc82595c5ca26cc3987cdaeb6fcd8b9c31f8fc64319d3fafc6c" Nov 22 07:16:52 crc kubenswrapper[4853]: E1122 07:16:52.818188 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"de4f59ed20c8ccc82595c5ca26cc3987cdaeb6fcd8b9c31f8fc64319d3fafc6c\": container with ID starting with de4f59ed20c8ccc82595c5ca26cc3987cdaeb6fcd8b9c31f8fc64319d3fafc6c not found: ID does not exist" containerID="de4f59ed20c8ccc82595c5ca26cc3987cdaeb6fcd8b9c31f8fc64319d3fafc6c" 
Nov 22 07:16:52 crc kubenswrapper[4853]: I1122 07:16:52.818246 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de4f59ed20c8ccc82595c5ca26cc3987cdaeb6fcd8b9c31f8fc64319d3fafc6c"} err="failed to get container status \"de4f59ed20c8ccc82595c5ca26cc3987cdaeb6fcd8b9c31f8fc64319d3fafc6c\": rpc error: code = NotFound desc = could not find container \"de4f59ed20c8ccc82595c5ca26cc3987cdaeb6fcd8b9c31f8fc64319d3fafc6c\": container with ID starting with de4f59ed20c8ccc82595c5ca26cc3987cdaeb6fcd8b9c31f8fc64319d3fafc6c not found: ID does not exist" Nov 22 07:16:52 crc kubenswrapper[4853]: I1122 07:16:52.818273 4853 scope.go:117] "RemoveContainer" containerID="642c66c685f84d555e8fc63dd88ced89d5b9c418a9ca00fcc80ccf4a12a6f77e" Nov 22 07:16:52 crc kubenswrapper[4853]: I1122 07:16:52.832334 4853 scope.go:117] "RemoveContainer" containerID="d2608771e3196312e6a1a9a580cb736a5c38a807e55380253c7c5f97eb69d6ad" Nov 22 07:16:52 crc kubenswrapper[4853]: I1122 07:16:52.837025 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cjggf\" (UniqueName: \"kubernetes.io/projected/ce89388a-728c-4afc-b155-2813e35a8413-kube-api-access-cjggf\") pod \"ce89388a-728c-4afc-b155-2813e35a8413\" (UID: \"ce89388a-728c-4afc-b155-2813e35a8413\") " Nov 22 07:16:52 crc kubenswrapper[4853]: I1122 07:16:52.837151 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce89388a-728c-4afc-b155-2813e35a8413-catalog-content\") pod \"ce89388a-728c-4afc-b155-2813e35a8413\" (UID: \"ce89388a-728c-4afc-b155-2813e35a8413\") " Nov 22 07:16:52 crc kubenswrapper[4853]: I1122 07:16:52.838427 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ce89388a-728c-4afc-b155-2813e35a8413-utilities" (OuterVolumeSpecName: "utilities") pod "ce89388a-728c-4afc-b155-2813e35a8413" (UID: "ce89388a-728c-4afc-b155-2813e35a8413"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:16:52 crc kubenswrapper[4853]: I1122 07:16:52.837271 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce89388a-728c-4afc-b155-2813e35a8413-utilities\") pod \"ce89388a-728c-4afc-b155-2813e35a8413\" (UID: \"ce89388a-728c-4afc-b155-2813e35a8413\") " Nov 22 07:16:52 crc kubenswrapper[4853]: I1122 07:16:52.839348 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a81b49b7-c4a0-4397-8524-ffaa67583496-catalog-content\") pod \"a81b49b7-c4a0-4397-8524-ffaa67583496\" (UID: \"a81b49b7-c4a0-4397-8524-ffaa67583496\") " Nov 22 07:16:52 crc kubenswrapper[4853]: I1122 07:16:52.842383 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce89388a-728c-4afc-b155-2813e35a8413-kube-api-access-cjggf" (OuterVolumeSpecName: "kube-api-access-cjggf") pod "ce89388a-728c-4afc-b155-2813e35a8413" (UID: "ce89388a-728c-4afc-b155-2813e35a8413"). InnerVolumeSpecName "kube-api-access-cjggf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:16:52 crc kubenswrapper[4853]: I1122 07:16:52.843065 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a81b49b7-c4a0-4397-8524-ffaa67583496-utilities\") pod \"a81b49b7-c4a0-4397-8524-ffaa67583496\" (UID: \"a81b49b7-c4a0-4397-8524-ffaa67583496\") " Nov 22 07:16:52 crc kubenswrapper[4853]: I1122 07:16:52.843194 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hctnl\" (UniqueName: \"kubernetes.io/projected/a81b49b7-c4a0-4397-8524-ffaa67583496-kube-api-access-hctnl\") pod \"a81b49b7-c4a0-4397-8524-ffaa67583496\" (UID: \"a81b49b7-c4a0-4397-8524-ffaa67583496\") " Nov 22 07:16:52 crc kubenswrapper[4853]: I1122 07:16:52.844110 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a81b49b7-c4a0-4397-8524-ffaa67583496-utilities" (OuterVolumeSpecName: "utilities") pod "a81b49b7-c4a0-4397-8524-ffaa67583496" (UID: "a81b49b7-c4a0-4397-8524-ffaa67583496"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:16:52 crc kubenswrapper[4853]: I1122 07:16:52.845472 4853 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce89388a-728c-4afc-b155-2813e35a8413-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 07:16:52 crc kubenswrapper[4853]: I1122 07:16:52.845506 4853 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a81b49b7-c4a0-4397-8524-ffaa67583496-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 07:16:52 crc kubenswrapper[4853]: I1122 07:16:52.845520 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cjggf\" (UniqueName: \"kubernetes.io/projected/ce89388a-728c-4afc-b155-2813e35a8413-kube-api-access-cjggf\") on node \"crc\" DevicePath \"\"" Nov 22 07:16:52 crc kubenswrapper[4853]: I1122 07:16:52.846965 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a81b49b7-c4a0-4397-8524-ffaa67583496-kube-api-access-hctnl" (OuterVolumeSpecName: "kube-api-access-hctnl") pod "a81b49b7-c4a0-4397-8524-ffaa67583496" (UID: "a81b49b7-c4a0-4397-8524-ffaa67583496"). InnerVolumeSpecName "kube-api-access-hctnl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:16:52 crc kubenswrapper[4853]: I1122 07:16:52.854642 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-nr2sr" Nov 22 07:16:52 crc kubenswrapper[4853]: I1122 07:16:52.858149 4853 scope.go:117] "RemoveContainer" containerID="90e104069b22913209c42e42a8803206e38551b680e778dbd63a83e6f2af5f4c" Nov 22 07:16:52 crc kubenswrapper[4853]: I1122 07:16:52.877142 4853 scope.go:117] "RemoveContainer" containerID="642c66c685f84d555e8fc63dd88ced89d5b9c418a9ca00fcc80ccf4a12a6f77e" Nov 22 07:16:52 crc kubenswrapper[4853]: E1122 07:16:52.877952 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"642c66c685f84d555e8fc63dd88ced89d5b9c418a9ca00fcc80ccf4a12a6f77e\": container with ID starting with 642c66c685f84d555e8fc63dd88ced89d5b9c418a9ca00fcc80ccf4a12a6f77e not found: ID does not exist" containerID="642c66c685f84d555e8fc63dd88ced89d5b9c418a9ca00fcc80ccf4a12a6f77e" Nov 22 07:16:52 crc kubenswrapper[4853]: I1122 07:16:52.877990 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"642c66c685f84d555e8fc63dd88ced89d5b9c418a9ca00fcc80ccf4a12a6f77e"} err="failed to get container status \"642c66c685f84d555e8fc63dd88ced89d5b9c418a9ca00fcc80ccf4a12a6f77e\": rpc error: code = NotFound desc = could not find container \"642c66c685f84d555e8fc63dd88ced89d5b9c418a9ca00fcc80ccf4a12a6f77e\": container with ID starting with 642c66c685f84d555e8fc63dd88ced89d5b9c418a9ca00fcc80ccf4a12a6f77e not found: ID does not exist" Nov 22 07:16:52 crc kubenswrapper[4853]: I1122 07:16:52.878015 4853 scope.go:117] "RemoveContainer" containerID="d2608771e3196312e6a1a9a580cb736a5c38a807e55380253c7c5f97eb69d6ad" Nov 22 07:16:52 crc kubenswrapper[4853]: E1122 07:16:52.878408 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d2608771e3196312e6a1a9a580cb736a5c38a807e55380253c7c5f97eb69d6ad\": container with ID starting with d2608771e3196312e6a1a9a580cb736a5c38a807e55380253c7c5f97eb69d6ad not found: ID does not exist" containerID="d2608771e3196312e6a1a9a580cb736a5c38a807e55380253c7c5f97eb69d6ad" Nov 22 07:16:52 crc kubenswrapper[4853]: I1122 07:16:52.878459 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d2608771e3196312e6a1a9a580cb736a5c38a807e55380253c7c5f97eb69d6ad"} err="failed to get container status \"d2608771e3196312e6a1a9a580cb736a5c38a807e55380253c7c5f97eb69d6ad\": rpc error: code = NotFound desc = could not find container \"d2608771e3196312e6a1a9a580cb736a5c38a807e55380253c7c5f97eb69d6ad\": container with ID starting with d2608771e3196312e6a1a9a580cb736a5c38a807e55380253c7c5f97eb69d6ad not found: ID does not exist" Nov 22 07:16:52 crc kubenswrapper[4853]: I1122 07:16:52.878496 4853 scope.go:117] "RemoveContainer" containerID="90e104069b22913209c42e42a8803206e38551b680e778dbd63a83e6f2af5f4c" Nov 22 07:16:52 crc kubenswrapper[4853]: E1122 07:16:52.878926 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"90e104069b22913209c42e42a8803206e38551b680e778dbd63a83e6f2af5f4c\": container with ID starting with 90e104069b22913209c42e42a8803206e38551b680e778dbd63a83e6f2af5f4c not found: ID does not exist" containerID="90e104069b22913209c42e42a8803206e38551b680e778dbd63a83e6f2af5f4c" Nov 22 07:16:52 crc kubenswrapper[4853]: I1122 07:16:52.878956 4853 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"90e104069b22913209c42e42a8803206e38551b680e778dbd63a83e6f2af5f4c"} err="failed to get container status \"90e104069b22913209c42e42a8803206e38551b680e778dbd63a83e6f2af5f4c\": rpc error: code = NotFound desc = could not find container \"90e104069b22913209c42e42a8803206e38551b680e778dbd63a83e6f2af5f4c\": container with ID starting with 90e104069b22913209c42e42a8803206e38551b680e778dbd63a83e6f2af5f4c not found: ID does not exist" Nov 22 07:16:52 crc kubenswrapper[4853]: I1122 07:16:52.903468 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a81b49b7-c4a0-4397-8524-ffaa67583496-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a81b49b7-c4a0-4397-8524-ffaa67583496" (UID: "a81b49b7-c4a0-4397-8524-ffaa67583496"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:16:52 crc kubenswrapper[4853]: I1122 07:16:52.935589 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ce89388a-728c-4afc-b155-2813e35a8413-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ce89388a-728c-4afc-b155-2813e35a8413" (UID: "ce89388a-728c-4afc-b155-2813e35a8413"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:16:52 crc kubenswrapper[4853]: I1122 07:16:52.946520 4853 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a81b49b7-c4a0-4397-8524-ffaa67583496-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 07:16:52 crc kubenswrapper[4853]: I1122 07:16:52.946564 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hctnl\" (UniqueName: \"kubernetes.io/projected/a81b49b7-c4a0-4397-8524-ffaa67583496-kube-api-access-hctnl\") on node \"crc\" DevicePath \"\"" Nov 22 07:16:52 crc kubenswrapper[4853]: I1122 07:16:52.946579 4853 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce89388a-728c-4afc-b155-2813e35a8413-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 07:16:53 crc kubenswrapper[4853]: I1122 07:16:53.097704 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-b4zvh" Nov 22 07:16:53 crc kubenswrapper[4853]: I1122 07:16:53.102261 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-klwzw"] Nov 22 07:16:53 crc kubenswrapper[4853]: I1122 07:16:53.104570 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-klwzw"] Nov 22 07:16:53 crc kubenswrapper[4853]: I1122 07:16:53.111824 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-sk8bz"] Nov 22 07:16:53 crc kubenswrapper[4853]: I1122 07:16:53.115204 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-sk8bz"] Nov 22 07:16:53 crc kubenswrapper[4853]: I1122 07:16:53.149827 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4lg2h\" (UniqueName: \"kubernetes.io/projected/30996d2a-faed-48ba-80d6-d86b88fd5282-kube-api-access-4lg2h\") pod \"30996d2a-faed-48ba-80d6-d86b88fd5282\" (UID: \"30996d2a-faed-48ba-80d6-d86b88fd5282\") " Nov 22 07:16:53 crc kubenswrapper[4853]: I1122 07:16:53.149883 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/30996d2a-faed-48ba-80d6-d86b88fd5282-catalog-content\") pod \"30996d2a-faed-48ba-80d6-d86b88fd5282\" (UID: \"30996d2a-faed-48ba-80d6-d86b88fd5282\") " Nov 22 07:16:53 crc kubenswrapper[4853]: I1122 07:16:53.149949 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/30996d2a-faed-48ba-80d6-d86b88fd5282-utilities\") pod \"30996d2a-faed-48ba-80d6-d86b88fd5282\" (UID: \"30996d2a-faed-48ba-80d6-d86b88fd5282\") " Nov 22 07:16:53 crc kubenswrapper[4853]: I1122 07:16:53.151194 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/30996d2a-faed-48ba-80d6-d86b88fd5282-utilities" (OuterVolumeSpecName: "utilities") pod "30996d2a-faed-48ba-80d6-d86b88fd5282" (UID: "30996d2a-faed-48ba-80d6-d86b88fd5282"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:16:53 crc kubenswrapper[4853]: I1122 07:16:53.155550 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30996d2a-faed-48ba-80d6-d86b88fd5282-kube-api-access-4lg2h" (OuterVolumeSpecName: "kube-api-access-4lg2h") pod "30996d2a-faed-48ba-80d6-d86b88fd5282" (UID: "30996d2a-faed-48ba-80d6-d86b88fd5282"). InnerVolumeSpecName "kube-api-access-4lg2h". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:16:53 crc kubenswrapper[4853]: I1122 07:16:53.202423 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/30996d2a-faed-48ba-80d6-d86b88fd5282-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "30996d2a-faed-48ba-80d6-d86b88fd5282" (UID: "30996d2a-faed-48ba-80d6-d86b88fd5282"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:16:53 crc kubenswrapper[4853]: I1122 07:16:53.230984 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-gwwg5" Nov 22 07:16:53 crc kubenswrapper[4853]: I1122 07:16:53.245410 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4z6bc" Nov 22 07:16:53 crc kubenswrapper[4853]: I1122 07:16:53.250852 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b8aed1ad-ec7d-4e5d-b60a-b8d7bee2a0cc-marketplace-trusted-ca\") pod \"b8aed1ad-ec7d-4e5d-b60a-b8d7bee2a0cc\" (UID: \"b8aed1ad-ec7d-4e5d-b60a-b8d7bee2a0cc\") " Nov 22 07:16:53 crc kubenswrapper[4853]: I1122 07:16:53.250938 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vzh8s\" (UniqueName: \"kubernetes.io/projected/b8aed1ad-ec7d-4e5d-b60a-b8d7bee2a0cc-kube-api-access-vzh8s\") pod \"b8aed1ad-ec7d-4e5d-b60a-b8d7bee2a0cc\" (UID: \"b8aed1ad-ec7d-4e5d-b60a-b8d7bee2a0cc\") " Nov 22 07:16:53 crc kubenswrapper[4853]: I1122 07:16:53.251093 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b8aed1ad-ec7d-4e5d-b60a-b8d7bee2a0cc-marketplace-operator-metrics\") pod \"b8aed1ad-ec7d-4e5d-b60a-b8d7bee2a0cc\" (UID: \"b8aed1ad-ec7d-4e5d-b60a-b8d7bee2a0cc\") " Nov 22 07:16:53 crc kubenswrapper[4853]: I1122 07:16:53.251358 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4lg2h\" (UniqueName: \"kubernetes.io/projected/30996d2a-faed-48ba-80d6-d86b88fd5282-kube-api-access-4lg2h\") on node \"crc\" DevicePath \"\"" Nov 22 07:16:53 crc kubenswrapper[4853]: I1122 07:16:53.251379 4853 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/30996d2a-faed-48ba-80d6-d86b88fd5282-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 07:16:53 crc kubenswrapper[4853]: I1122 07:16:53.251390 4853 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/30996d2a-faed-48ba-80d6-d86b88fd5282-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 07:16:53 crc kubenswrapper[4853]: I1122 07:16:53.251830 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b8aed1ad-ec7d-4e5d-b60a-b8d7bee2a0cc-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b8aed1ad-ec7d-4e5d-b60a-b8d7bee2a0cc" (UID: "b8aed1ad-ec7d-4e5d-b60a-b8d7bee2a0cc"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:16:53 crc kubenswrapper[4853]: I1122 07:16:53.258397 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b8aed1ad-ec7d-4e5d-b60a-b8d7bee2a0cc-kube-api-access-vzh8s" (OuterVolumeSpecName: "kube-api-access-vzh8s") pod "b8aed1ad-ec7d-4e5d-b60a-b8d7bee2a0cc" (UID: "b8aed1ad-ec7d-4e5d-b60a-b8d7bee2a0cc"). InnerVolumeSpecName "kube-api-access-vzh8s". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:16:53 crc kubenswrapper[4853]: I1122 07:16:53.267453 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8aed1ad-ec7d-4e5d-b60a-b8d7bee2a0cc-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b8aed1ad-ec7d-4e5d-b60a-b8d7bee2a0cc" (UID: "b8aed1ad-ec7d-4e5d-b60a-b8d7bee2a0cc"). InnerVolumeSpecName "marketplace-operator-metrics". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:16:53 crc kubenswrapper[4853]: I1122 07:16:53.306591 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-nr2sr"] Nov 22 07:16:53 crc kubenswrapper[4853]: I1122 07:16:53.354637 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6240b5f2-c1bb-4478-8935-b2579e37e8af-utilities\") pod \"6240b5f2-c1bb-4478-8935-b2579e37e8af\" (UID: \"6240b5f2-c1bb-4478-8935-b2579e37e8af\") " Nov 22 07:16:53 crc kubenswrapper[4853]: I1122 07:16:53.354718 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6240b5f2-c1bb-4478-8935-b2579e37e8af-catalog-content\") pod \"6240b5f2-c1bb-4478-8935-b2579e37e8af\" (UID: \"6240b5f2-c1bb-4478-8935-b2579e37e8af\") " Nov 22 07:16:53 crc kubenswrapper[4853]: I1122 07:16:53.354792 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jfvfv\" (UniqueName: \"kubernetes.io/projected/6240b5f2-c1bb-4478-8935-b2579e37e8af-kube-api-access-jfvfv\") pod \"6240b5f2-c1bb-4478-8935-b2579e37e8af\" (UID: \"6240b5f2-c1bb-4478-8935-b2579e37e8af\") " Nov 22 07:16:53 crc kubenswrapper[4853]: I1122 07:16:53.354991 4853 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b8aed1ad-ec7d-4e5d-b60a-b8d7bee2a0cc-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Nov 22 07:16:53 crc kubenswrapper[4853]: I1122 07:16:53.355004 4853 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b8aed1ad-ec7d-4e5d-b60a-b8d7bee2a0cc-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 22 07:16:53 crc kubenswrapper[4853]: I1122 07:16:53.355013 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vzh8s\" (UniqueName: \"kubernetes.io/projected/b8aed1ad-ec7d-4e5d-b60a-b8d7bee2a0cc-kube-api-access-vzh8s\") on node \"crc\" DevicePath \"\"" Nov 22 07:16:53 crc kubenswrapper[4853]: I1122 07:16:53.356496 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6240b5f2-c1bb-4478-8935-b2579e37e8af-utilities" (OuterVolumeSpecName: "utilities") pod "6240b5f2-c1bb-4478-8935-b2579e37e8af" (UID: "6240b5f2-c1bb-4478-8935-b2579e37e8af"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:16:53 crc kubenswrapper[4853]: I1122 07:16:53.361690 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6240b5f2-c1bb-4478-8935-b2579e37e8af-kube-api-access-jfvfv" (OuterVolumeSpecName: "kube-api-access-jfvfv") pod "6240b5f2-c1bb-4478-8935-b2579e37e8af" (UID: "6240b5f2-c1bb-4478-8935-b2579e37e8af"). InnerVolumeSpecName "kube-api-access-jfvfv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:16:53 crc kubenswrapper[4853]: I1122 07:16:53.373971 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6240b5f2-c1bb-4478-8935-b2579e37e8af-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6240b5f2-c1bb-4478-8935-b2579e37e8af" (UID: "6240b5f2-c1bb-4478-8935-b2579e37e8af"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:16:53 crc kubenswrapper[4853]: I1122 07:16:53.456909 4853 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6240b5f2-c1bb-4478-8935-b2579e37e8af-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 07:16:53 crc kubenswrapper[4853]: I1122 07:16:53.456949 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jfvfv\" (UniqueName: \"kubernetes.io/projected/6240b5f2-c1bb-4478-8935-b2579e37e8af-kube-api-access-jfvfv\") on node \"crc\" DevicePath \"\"" Nov 22 07:16:53 crc kubenswrapper[4853]: I1122 07:16:53.456964 4853 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6240b5f2-c1bb-4478-8935-b2579e37e8af-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 07:16:53 crc kubenswrapper[4853]: I1122 07:16:53.755176 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a81b49b7-c4a0-4397-8524-ffaa67583496" path="/var/lib/kubelet/pods/a81b49b7-c4a0-4397-8524-ffaa67583496/volumes" Nov 22 07:16:53 crc kubenswrapper[4853]: I1122 07:16:53.755961 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce89388a-728c-4afc-b155-2813e35a8413" path="/var/lib/kubelet/pods/ce89388a-728c-4afc-b155-2813e35a8413/volumes" Nov 22 07:16:53 crc kubenswrapper[4853]: I1122 07:16:53.787119 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-nr2sr" event={"ID":"c54d72ed-4fd1-4c17-a3ac-ba1e743e2307","Type":"ContainerStarted","Data":"8abfd079d4f519685c3acec365870c8b55a59129e022cd1e9af325e71ec8d5aa"} Nov 22 07:16:53 crc kubenswrapper[4853]: I1122 07:16:53.787179 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-nr2sr" event={"ID":"c54d72ed-4fd1-4c17-a3ac-ba1e743e2307","Type":"ContainerStarted","Data":"66fd74e238031f51212e5ba06a757dbc1d1a4f94a0ffe4b35a7a43fb518c4c12"} Nov 22 07:16:53 crc kubenswrapper[4853]: I1122 07:16:53.787472 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-nr2sr" Nov 22 07:16:53 crc kubenswrapper[4853]: I1122 07:16:53.790094 4853 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-nr2sr container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.56:8080/healthz\": dial tcp 10.217.0.56:8080: connect: connection refused" start-of-body= Nov 22 07:16:53 crc kubenswrapper[4853]: I1122 07:16:53.790144 4853 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-nr2sr" podUID="c54d72ed-4fd1-4c17-a3ac-ba1e743e2307" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.56:8080/healthz\": dial tcp 10.217.0.56:8080: connect: connection refused" Nov 22 07:16:53 crc kubenswrapper[4853]: I1122 07:16:53.792254 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4z6bc" event={"ID":"6240b5f2-c1bb-4478-8935-b2579e37e8af","Type":"ContainerDied","Data":"36a0cef9a28378820b968eaf4f3de291f99d28bf7f1af1e70581fc0d4f092229"} Nov 22 07:16:53 crc kubenswrapper[4853]: I1122 07:16:53.792318 4853 scope.go:117] "RemoveContainer" containerID="c408445515066b72c0989b35237d102fa17fd3336632715dda43b99b2990eafa" Nov 22 07:16:53 crc kubenswrapper[4853]: I1122 07:16:53.792474 4853 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4z6bc" Nov 22 07:16:53 crc kubenswrapper[4853]: I1122 07:16:53.795891 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-gwwg5" Nov 22 07:16:53 crc kubenswrapper[4853]: I1122 07:16:53.795928 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-gwwg5" event={"ID":"b8aed1ad-ec7d-4e5d-b60a-b8d7bee2a0cc","Type":"ContainerDied","Data":"3f6925ea92175ec909d297d881bfa02c835a129c59b7784f5e83c1bffd0c6b12"} Nov 22 07:16:53 crc kubenswrapper[4853]: I1122 07:16:53.802201 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b4zvh" event={"ID":"30996d2a-faed-48ba-80d6-d86b88fd5282","Type":"ContainerDied","Data":"3f1dd7437e6a5f83eb4e7bc95ce38a31028e16cb0f72f9f3ceab5e9be0d91f94"} Nov 22 07:16:53 crc kubenswrapper[4853]: I1122 07:16:53.802328 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-b4zvh" Nov 22 07:16:53 crc kubenswrapper[4853]: I1122 07:16:53.817564 4853 scope.go:117] "RemoveContainer" containerID="fa813139d0e11e491f6420a1067ba3a563f8d898194b8c5244c98e097c4b9e5f" Nov 22 07:16:53 crc kubenswrapper[4853]: I1122 07:16:53.818686 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-nr2sr" podStartSLOduration=1.818665779 podStartE2EDuration="1.818665779s" podCreationTimestamp="2025-11-22 07:16:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:16:53.815423008 +0000 UTC m=+412.656045634" watchObservedRunningTime="2025-11-22 07:16:53.818665779 +0000 UTC m=+412.659288395" Nov 22 07:16:53 crc kubenswrapper[4853]: I1122 07:16:53.843607 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-gwwg5"] Nov 22 07:16:53 crc kubenswrapper[4853]: I1122 07:16:53.847816 4853 scope.go:117] "RemoveContainer" containerID="023df8ad7d3b428ca68b4657b8f182d601d08dc192f24082658845e04bf5d75e" Nov 22 07:16:53 crc kubenswrapper[4853]: I1122 07:16:53.849674 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-gwwg5"] Nov 22 07:16:53 crc kubenswrapper[4853]: I1122 07:16:53.863595 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-b4zvh"] Nov 22 07:16:53 crc kubenswrapper[4853]: I1122 07:16:53.888454 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-b4zvh"] Nov 22 07:16:53 crc kubenswrapper[4853]: I1122 07:16:53.893309 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4z6bc"] Nov 22 07:16:53 crc kubenswrapper[4853]: I1122 07:16:53.900982 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-4z6bc"] Nov 22 07:16:53 crc kubenswrapper[4853]: I1122 07:16:53.905669 4853 scope.go:117] "RemoveContainer" containerID="fc14df829aaa8de5d98e277ca0b0264dd4fab417c2ddd11c50ac00d38543b964" Nov 22 07:16:53 crc kubenswrapper[4853]: I1122 07:16:53.925362 4853 scope.go:117] "RemoveContainer" containerID="b9576d80a7d84ec9df7764b6890e954222207aca2744438d6198a0e34e8e2631" 
Nov 22 07:16:53 crc kubenswrapper[4853]: I1122 07:16:53.941782 4853 scope.go:117] "RemoveContainer" containerID="401761bdae17a38cd53e5a9cac4c052a5f6a87dde696dcc7c1fd8d39d30b6bc6" Nov 22 07:16:53 crc kubenswrapper[4853]: I1122 07:16:53.956619 4853 scope.go:117] "RemoveContainer" containerID="8d77d7e22e6011d7944d1b73d7189e71f1a9abca66043265f8500c11d25ae5ae" Nov 22 07:16:54 crc kubenswrapper[4853]: I1122 07:16:54.407733 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-4mqss"] Nov 22 07:16:54 crc kubenswrapper[4853]: E1122 07:16:54.408060 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6240b5f2-c1bb-4478-8935-b2579e37e8af" containerName="extract-utilities" Nov 22 07:16:54 crc kubenswrapper[4853]: I1122 07:16:54.408079 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="6240b5f2-c1bb-4478-8935-b2579e37e8af" containerName="extract-utilities" Nov 22 07:16:54 crc kubenswrapper[4853]: E1122 07:16:54.408088 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a81b49b7-c4a0-4397-8524-ffaa67583496" containerName="registry-server" Nov 22 07:16:54 crc kubenswrapper[4853]: I1122 07:16:54.408095 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="a81b49b7-c4a0-4397-8524-ffaa67583496" containerName="registry-server" Nov 22 07:16:54 crc kubenswrapper[4853]: E1122 07:16:54.408107 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30996d2a-faed-48ba-80d6-d86b88fd5282" containerName="extract-content" Nov 22 07:16:54 crc kubenswrapper[4853]: I1122 07:16:54.408115 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="30996d2a-faed-48ba-80d6-d86b88fd5282" containerName="extract-content" Nov 22 07:16:54 crc kubenswrapper[4853]: E1122 07:16:54.408126 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a81b49b7-c4a0-4397-8524-ffaa67583496" containerName="extract-utilities" Nov 22 07:16:54 crc kubenswrapper[4853]: I1122 07:16:54.408138 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="a81b49b7-c4a0-4397-8524-ffaa67583496" containerName="extract-utilities" Nov 22 07:16:54 crc kubenswrapper[4853]: E1122 07:16:54.408151 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a81b49b7-c4a0-4397-8524-ffaa67583496" containerName="extract-content" Nov 22 07:16:54 crc kubenswrapper[4853]: I1122 07:16:54.408159 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="a81b49b7-c4a0-4397-8524-ffaa67583496" containerName="extract-content" Nov 22 07:16:54 crc kubenswrapper[4853]: E1122 07:16:54.408170 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce89388a-728c-4afc-b155-2813e35a8413" containerName="extract-content" Nov 22 07:16:54 crc kubenswrapper[4853]: I1122 07:16:54.408177 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce89388a-728c-4afc-b155-2813e35a8413" containerName="extract-content" Nov 22 07:16:54 crc kubenswrapper[4853]: E1122 07:16:54.408186 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6240b5f2-c1bb-4478-8935-b2579e37e8af" containerName="extract-content" Nov 22 07:16:54 crc kubenswrapper[4853]: I1122 07:16:54.408192 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="6240b5f2-c1bb-4478-8935-b2579e37e8af" containerName="extract-content" Nov 22 07:16:54 crc kubenswrapper[4853]: E1122 07:16:54.408204 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8aed1ad-ec7d-4e5d-b60a-b8d7bee2a0cc" containerName="marketplace-operator" Nov 22 07:16:54 crc 
kubenswrapper[4853]: I1122 07:16:54.408211 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8aed1ad-ec7d-4e5d-b60a-b8d7bee2a0cc" containerName="marketplace-operator" Nov 22 07:16:54 crc kubenswrapper[4853]: E1122 07:16:54.408221 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30996d2a-faed-48ba-80d6-d86b88fd5282" containerName="extract-utilities" Nov 22 07:16:54 crc kubenswrapper[4853]: I1122 07:16:54.408228 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="30996d2a-faed-48ba-80d6-d86b88fd5282" containerName="extract-utilities" Nov 22 07:16:54 crc kubenswrapper[4853]: E1122 07:16:54.408241 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6240b5f2-c1bb-4478-8935-b2579e37e8af" containerName="registry-server" Nov 22 07:16:54 crc kubenswrapper[4853]: I1122 07:16:54.408250 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="6240b5f2-c1bb-4478-8935-b2579e37e8af" containerName="registry-server" Nov 22 07:16:54 crc kubenswrapper[4853]: E1122 07:16:54.408259 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce89388a-728c-4afc-b155-2813e35a8413" containerName="extract-utilities" Nov 22 07:16:54 crc kubenswrapper[4853]: I1122 07:16:54.408266 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce89388a-728c-4afc-b155-2813e35a8413" containerName="extract-utilities" Nov 22 07:16:54 crc kubenswrapper[4853]: E1122 07:16:54.408279 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30996d2a-faed-48ba-80d6-d86b88fd5282" containerName="registry-server" Nov 22 07:16:54 crc kubenswrapper[4853]: I1122 07:16:54.408288 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="30996d2a-faed-48ba-80d6-d86b88fd5282" containerName="registry-server" Nov 22 07:16:54 crc kubenswrapper[4853]: E1122 07:16:54.408296 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce89388a-728c-4afc-b155-2813e35a8413" containerName="registry-server" Nov 22 07:16:54 crc kubenswrapper[4853]: I1122 07:16:54.408303 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce89388a-728c-4afc-b155-2813e35a8413" containerName="registry-server" Nov 22 07:16:54 crc kubenswrapper[4853]: I1122 07:16:54.408425 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8aed1ad-ec7d-4e5d-b60a-b8d7bee2a0cc" containerName="marketplace-operator" Nov 22 07:16:54 crc kubenswrapper[4853]: I1122 07:16:54.408444 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="30996d2a-faed-48ba-80d6-d86b88fd5282" containerName="registry-server" Nov 22 07:16:54 crc kubenswrapper[4853]: I1122 07:16:54.408458 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="a81b49b7-c4a0-4397-8524-ffaa67583496" containerName="registry-server" Nov 22 07:16:54 crc kubenswrapper[4853]: I1122 07:16:54.408472 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="6240b5f2-c1bb-4478-8935-b2579e37e8af" containerName="registry-server" Nov 22 07:16:54 crc kubenswrapper[4853]: I1122 07:16:54.408482 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce89388a-728c-4afc-b155-2813e35a8413" containerName="registry-server" Nov 22 07:16:54 crc kubenswrapper[4853]: I1122 07:16:54.409367 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4mqss" Nov 22 07:16:54 crc kubenswrapper[4853]: I1122 07:16:54.412463 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Nov 22 07:16:54 crc kubenswrapper[4853]: I1122 07:16:54.415286 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4mqss"] Nov 22 07:16:54 crc kubenswrapper[4853]: I1122 07:16:54.471071 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd697e52-9abd-4be8-a245-625d1dde804e-utilities\") pod \"redhat-marketplace-4mqss\" (UID: \"dd697e52-9abd-4be8-a245-625d1dde804e\") " pod="openshift-marketplace/redhat-marketplace-4mqss" Nov 22 07:16:54 crc kubenswrapper[4853]: I1122 07:16:54.471125 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd697e52-9abd-4be8-a245-625d1dde804e-catalog-content\") pod \"redhat-marketplace-4mqss\" (UID: \"dd697e52-9abd-4be8-a245-625d1dde804e\") " pod="openshift-marketplace/redhat-marketplace-4mqss" Nov 22 07:16:54 crc kubenswrapper[4853]: I1122 07:16:54.471159 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4kd99\" (UniqueName: \"kubernetes.io/projected/dd697e52-9abd-4be8-a245-625d1dde804e-kube-api-access-4kd99\") pod \"redhat-marketplace-4mqss\" (UID: \"dd697e52-9abd-4be8-a245-625d1dde804e\") " pod="openshift-marketplace/redhat-marketplace-4mqss" Nov 22 07:16:54 crc kubenswrapper[4853]: I1122 07:16:54.571991 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd697e52-9abd-4be8-a245-625d1dde804e-utilities\") pod \"redhat-marketplace-4mqss\" (UID: \"dd697e52-9abd-4be8-a245-625d1dde804e\") " pod="openshift-marketplace/redhat-marketplace-4mqss" Nov 22 07:16:54 crc kubenswrapper[4853]: I1122 07:16:54.572041 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd697e52-9abd-4be8-a245-625d1dde804e-catalog-content\") pod \"redhat-marketplace-4mqss\" (UID: \"dd697e52-9abd-4be8-a245-625d1dde804e\") " pod="openshift-marketplace/redhat-marketplace-4mqss" Nov 22 07:16:54 crc kubenswrapper[4853]: I1122 07:16:54.572073 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4kd99\" (UniqueName: \"kubernetes.io/projected/dd697e52-9abd-4be8-a245-625d1dde804e-kube-api-access-4kd99\") pod \"redhat-marketplace-4mqss\" (UID: \"dd697e52-9abd-4be8-a245-625d1dde804e\") " pod="openshift-marketplace/redhat-marketplace-4mqss" Nov 22 07:16:54 crc kubenswrapper[4853]: I1122 07:16:54.572869 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd697e52-9abd-4be8-a245-625d1dde804e-utilities\") pod \"redhat-marketplace-4mqss\" (UID: \"dd697e52-9abd-4be8-a245-625d1dde804e\") " pod="openshift-marketplace/redhat-marketplace-4mqss" Nov 22 07:16:54 crc kubenswrapper[4853]: I1122 07:16:54.573106 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd697e52-9abd-4be8-a245-625d1dde804e-catalog-content\") pod \"redhat-marketplace-4mqss\" (UID: 
\"dd697e52-9abd-4be8-a245-625d1dde804e\") " pod="openshift-marketplace/redhat-marketplace-4mqss" Nov 22 07:16:54 crc kubenswrapper[4853]: I1122 07:16:54.593321 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4kd99\" (UniqueName: \"kubernetes.io/projected/dd697e52-9abd-4be8-a245-625d1dde804e-kube-api-access-4kd99\") pod \"redhat-marketplace-4mqss\" (UID: \"dd697e52-9abd-4be8-a245-625d1dde804e\") " pod="openshift-marketplace/redhat-marketplace-4mqss" Nov 22 07:16:54 crc kubenswrapper[4853]: I1122 07:16:54.602096 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-m28tt"] Nov 22 07:16:54 crc kubenswrapper[4853]: I1122 07:16:54.603353 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-m28tt" Nov 22 07:16:54 crc kubenswrapper[4853]: I1122 07:16:54.605206 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Nov 22 07:16:54 crc kubenswrapper[4853]: I1122 07:16:54.613289 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-m28tt"] Nov 22 07:16:54 crc kubenswrapper[4853]: I1122 07:16:54.672990 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dg46l\" (UniqueName: \"kubernetes.io/projected/c1265d82-d3bb-4d83-bb9e-05cbb5960004-kube-api-access-dg46l\") pod \"redhat-operators-m28tt\" (UID: \"c1265d82-d3bb-4d83-bb9e-05cbb5960004\") " pod="openshift-marketplace/redhat-operators-m28tt" Nov 22 07:16:54 crc kubenswrapper[4853]: I1122 07:16:54.673072 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c1265d82-d3bb-4d83-bb9e-05cbb5960004-utilities\") pod \"redhat-operators-m28tt\" (UID: \"c1265d82-d3bb-4d83-bb9e-05cbb5960004\") " pod="openshift-marketplace/redhat-operators-m28tt" Nov 22 07:16:54 crc kubenswrapper[4853]: I1122 07:16:54.673284 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c1265d82-d3bb-4d83-bb9e-05cbb5960004-catalog-content\") pod \"redhat-operators-m28tt\" (UID: \"c1265d82-d3bb-4d83-bb9e-05cbb5960004\") " pod="openshift-marketplace/redhat-operators-m28tt" Nov 22 07:16:54 crc kubenswrapper[4853]: I1122 07:16:54.730884 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4mqss" Nov 22 07:16:54 crc kubenswrapper[4853]: I1122 07:16:54.774677 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c1265d82-d3bb-4d83-bb9e-05cbb5960004-catalog-content\") pod \"redhat-operators-m28tt\" (UID: \"c1265d82-d3bb-4d83-bb9e-05cbb5960004\") " pod="openshift-marketplace/redhat-operators-m28tt" Nov 22 07:16:54 crc kubenswrapper[4853]: I1122 07:16:54.775116 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dg46l\" (UniqueName: \"kubernetes.io/projected/c1265d82-d3bb-4d83-bb9e-05cbb5960004-kube-api-access-dg46l\") pod \"redhat-operators-m28tt\" (UID: \"c1265d82-d3bb-4d83-bb9e-05cbb5960004\") " pod="openshift-marketplace/redhat-operators-m28tt" Nov 22 07:16:54 crc kubenswrapper[4853]: I1122 07:16:54.775188 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c1265d82-d3bb-4d83-bb9e-05cbb5960004-utilities\") pod \"redhat-operators-m28tt\" (UID: \"c1265d82-d3bb-4d83-bb9e-05cbb5960004\") " pod="openshift-marketplace/redhat-operators-m28tt" Nov 22 07:16:54 crc kubenswrapper[4853]: I1122 07:16:54.776358 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c1265d82-d3bb-4d83-bb9e-05cbb5960004-catalog-content\") pod \"redhat-operators-m28tt\" (UID: \"c1265d82-d3bb-4d83-bb9e-05cbb5960004\") " pod="openshift-marketplace/redhat-operators-m28tt" Nov 22 07:16:54 crc kubenswrapper[4853]: I1122 07:16:54.777428 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c1265d82-d3bb-4d83-bb9e-05cbb5960004-utilities\") pod \"redhat-operators-m28tt\" (UID: \"c1265d82-d3bb-4d83-bb9e-05cbb5960004\") " pod="openshift-marketplace/redhat-operators-m28tt" Nov 22 07:16:54 crc kubenswrapper[4853]: I1122 07:16:54.798923 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dg46l\" (UniqueName: \"kubernetes.io/projected/c1265d82-d3bb-4d83-bb9e-05cbb5960004-kube-api-access-dg46l\") pod \"redhat-operators-m28tt\" (UID: \"c1265d82-d3bb-4d83-bb9e-05cbb5960004\") " pod="openshift-marketplace/redhat-operators-m28tt" Nov 22 07:16:54 crc kubenswrapper[4853]: I1122 07:16:54.832750 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-nr2sr" Nov 22 07:16:54 crc kubenswrapper[4853]: I1122 07:16:54.943932 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4mqss"] Nov 22 07:16:54 crc kubenswrapper[4853]: I1122 07:16:54.971203 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-m28tt" Nov 22 07:16:55 crc kubenswrapper[4853]: I1122 07:16:55.163579 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-m28tt"] Nov 22 07:16:55 crc kubenswrapper[4853]: W1122 07:16:55.195557 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc1265d82_d3bb_4d83_bb9e_05cbb5960004.slice/crio-7597621dc4bc04efd56285ce5da8cd2d81d949f1132be1778d0c953bb32be1ad WatchSource:0}: Error finding container 7597621dc4bc04efd56285ce5da8cd2d81d949f1132be1778d0c953bb32be1ad: Status 404 returned error can't find the container with id 7597621dc4bc04efd56285ce5da8cd2d81d949f1132be1778d0c953bb32be1ad Nov 22 07:16:55 crc kubenswrapper[4853]: I1122 07:16:55.757372 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="30996d2a-faed-48ba-80d6-d86b88fd5282" path="/var/lib/kubelet/pods/30996d2a-faed-48ba-80d6-d86b88fd5282/volumes" Nov 22 07:16:55 crc kubenswrapper[4853]: I1122 07:16:55.758340 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6240b5f2-c1bb-4478-8935-b2579e37e8af" path="/var/lib/kubelet/pods/6240b5f2-c1bb-4478-8935-b2579e37e8af/volumes" Nov 22 07:16:55 crc kubenswrapper[4853]: I1122 07:16:55.760359 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b8aed1ad-ec7d-4e5d-b60a-b8d7bee2a0cc" path="/var/lib/kubelet/pods/b8aed1ad-ec7d-4e5d-b60a-b8d7bee2a0cc/volumes" Nov 22 07:16:55 crc kubenswrapper[4853]: I1122 07:16:55.830287 4853 generic.go:334] "Generic (PLEG): container finished" podID="dd697e52-9abd-4be8-a245-625d1dde804e" containerID="17aa7968e495440da89a5c2644b0d105d78b10f86c8d77da4c452fb8b4f51ceb" exitCode=0 Nov 22 07:16:55 crc kubenswrapper[4853]: I1122 07:16:55.830386 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4mqss" event={"ID":"dd697e52-9abd-4be8-a245-625d1dde804e","Type":"ContainerDied","Data":"17aa7968e495440da89a5c2644b0d105d78b10f86c8d77da4c452fb8b4f51ceb"} Nov 22 07:16:55 crc kubenswrapper[4853]: I1122 07:16:55.830431 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4mqss" event={"ID":"dd697e52-9abd-4be8-a245-625d1dde804e","Type":"ContainerStarted","Data":"e6367d7dd794468180c20e8e35b0746c79c23bf6d127bd568a26fca671c4b607"} Nov 22 07:16:55 crc kubenswrapper[4853]: I1122 07:16:55.831955 4853 generic.go:334] "Generic (PLEG): container finished" podID="c1265d82-d3bb-4d83-bb9e-05cbb5960004" containerID="8d21c1cd44b97564031230a93b06af564112217c67b721e6eaf47fe6ccc26f08" exitCode=0 Nov 22 07:16:55 crc kubenswrapper[4853]: I1122 07:16:55.833081 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m28tt" event={"ID":"c1265d82-d3bb-4d83-bb9e-05cbb5960004","Type":"ContainerDied","Data":"8d21c1cd44b97564031230a93b06af564112217c67b721e6eaf47fe6ccc26f08"} Nov 22 07:16:55 crc kubenswrapper[4853]: I1122 07:16:55.833107 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m28tt" event={"ID":"c1265d82-d3bb-4d83-bb9e-05cbb5960004","Type":"ContainerStarted","Data":"7597621dc4bc04efd56285ce5da8cd2d81d949f1132be1778d0c953bb32be1ad"} Nov 22 07:16:56 crc kubenswrapper[4853]: I1122 07:16:56.800218 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-tdfrh"] Nov 22 07:16:56 crc kubenswrapper[4853]: I1122 
07:16:56.801907 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tdfrh" Nov 22 07:16:56 crc kubenswrapper[4853]: I1122 07:16:56.805778 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Nov 22 07:16:56 crc kubenswrapper[4853]: I1122 07:16:56.814902 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tdfrh"] Nov 22 07:16:56 crc kubenswrapper[4853]: I1122 07:16:56.840004 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m28tt" event={"ID":"c1265d82-d3bb-4d83-bb9e-05cbb5960004","Type":"ContainerStarted","Data":"c25362cf679d3947aff366474f5394b57aa109c36bc3ce946481fe1c5c282a86"} Nov 22 07:16:56 crc kubenswrapper[4853]: I1122 07:16:56.857541 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4mqss" event={"ID":"dd697e52-9abd-4be8-a245-625d1dde804e","Type":"ContainerStarted","Data":"cc0774378b8b594a8790f267c01c78fbd1bfd23b96e518b6d9c3774977ccb8ee"} Nov 22 07:16:56 crc kubenswrapper[4853]: I1122 07:16:56.900960 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e241aed-043d-4b92-9f04-2a36511cff3b-utilities\") pod \"community-operators-tdfrh\" (UID: \"4e241aed-043d-4b92-9f04-2a36511cff3b\") " pod="openshift-marketplace/community-operators-tdfrh" Nov 22 07:16:56 crc kubenswrapper[4853]: I1122 07:16:56.901056 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9nv2v\" (UniqueName: \"kubernetes.io/projected/4e241aed-043d-4b92-9f04-2a36511cff3b-kube-api-access-9nv2v\") pod \"community-operators-tdfrh\" (UID: \"4e241aed-043d-4b92-9f04-2a36511cff3b\") " pod="openshift-marketplace/community-operators-tdfrh" Nov 22 07:16:56 crc kubenswrapper[4853]: I1122 07:16:56.901083 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e241aed-043d-4b92-9f04-2a36511cff3b-catalog-content\") pod \"community-operators-tdfrh\" (UID: \"4e241aed-043d-4b92-9f04-2a36511cff3b\") " pod="openshift-marketplace/community-operators-tdfrh" Nov 22 07:16:56 crc kubenswrapper[4853]: I1122 07:16:56.998846 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-8dxcs"] Nov 22 07:16:57 crc kubenswrapper[4853]: I1122 07:16:57.000473 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8dxcs" Nov 22 07:16:57 crc kubenswrapper[4853]: I1122 07:16:57.002055 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9nv2v\" (UniqueName: \"kubernetes.io/projected/4e241aed-043d-4b92-9f04-2a36511cff3b-kube-api-access-9nv2v\") pod \"community-operators-tdfrh\" (UID: \"4e241aed-043d-4b92-9f04-2a36511cff3b\") " pod="openshift-marketplace/community-operators-tdfrh" Nov 22 07:16:57 crc kubenswrapper[4853]: I1122 07:16:57.002162 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e241aed-043d-4b92-9f04-2a36511cff3b-catalog-content\") pod \"community-operators-tdfrh\" (UID: \"4e241aed-043d-4b92-9f04-2a36511cff3b\") " pod="openshift-marketplace/community-operators-tdfrh" Nov 22 07:16:57 crc kubenswrapper[4853]: I1122 07:16:57.002266 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e241aed-043d-4b92-9f04-2a36511cff3b-utilities\") pod \"community-operators-tdfrh\" (UID: \"4e241aed-043d-4b92-9f04-2a36511cff3b\") " pod="openshift-marketplace/community-operators-tdfrh" Nov 22 07:16:57 crc kubenswrapper[4853]: I1122 07:16:57.003417 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Nov 22 07:16:57 crc kubenswrapper[4853]: I1122 07:16:57.003729 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e241aed-043d-4b92-9f04-2a36511cff3b-utilities\") pod \"community-operators-tdfrh\" (UID: \"4e241aed-043d-4b92-9f04-2a36511cff3b\") " pod="openshift-marketplace/community-operators-tdfrh" Nov 22 07:16:57 crc kubenswrapper[4853]: I1122 07:16:57.003922 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e241aed-043d-4b92-9f04-2a36511cff3b-catalog-content\") pod \"community-operators-tdfrh\" (UID: \"4e241aed-043d-4b92-9f04-2a36511cff3b\") " pod="openshift-marketplace/community-operators-tdfrh" Nov 22 07:16:57 crc kubenswrapper[4853]: I1122 07:16:57.016738 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8dxcs"] Nov 22 07:16:57 crc kubenswrapper[4853]: I1122 07:16:57.038901 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9nv2v\" (UniqueName: \"kubernetes.io/projected/4e241aed-043d-4b92-9f04-2a36511cff3b-kube-api-access-9nv2v\") pod \"community-operators-tdfrh\" (UID: \"4e241aed-043d-4b92-9f04-2a36511cff3b\") " pod="openshift-marketplace/community-operators-tdfrh" Nov 22 07:16:57 crc kubenswrapper[4853]: I1122 07:16:57.103968 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cdc57f0c-a9c1-4b48-9a08-209f3a27727f-utilities\") pod \"certified-operators-8dxcs\" (UID: \"cdc57f0c-a9c1-4b48-9a08-209f3a27727f\") " pod="openshift-marketplace/certified-operators-8dxcs" Nov 22 07:16:57 crc kubenswrapper[4853]: I1122 07:16:57.104410 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cdc57f0c-a9c1-4b48-9a08-209f3a27727f-catalog-content\") pod \"certified-operators-8dxcs\" (UID: 
\"cdc57f0c-a9c1-4b48-9a08-209f3a27727f\") " pod="openshift-marketplace/certified-operators-8dxcs" Nov 22 07:16:57 crc kubenswrapper[4853]: I1122 07:16:57.104455 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vc75x\" (UniqueName: \"kubernetes.io/projected/cdc57f0c-a9c1-4b48-9a08-209f3a27727f-kube-api-access-vc75x\") pod \"certified-operators-8dxcs\" (UID: \"cdc57f0c-a9c1-4b48-9a08-209f3a27727f\") " pod="openshift-marketplace/certified-operators-8dxcs" Nov 22 07:16:57 crc kubenswrapper[4853]: I1122 07:16:57.205685 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cdc57f0c-a9c1-4b48-9a08-209f3a27727f-utilities\") pod \"certified-operators-8dxcs\" (UID: \"cdc57f0c-a9c1-4b48-9a08-209f3a27727f\") " pod="openshift-marketplace/certified-operators-8dxcs" Nov 22 07:16:57 crc kubenswrapper[4853]: I1122 07:16:57.205781 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cdc57f0c-a9c1-4b48-9a08-209f3a27727f-catalog-content\") pod \"certified-operators-8dxcs\" (UID: \"cdc57f0c-a9c1-4b48-9a08-209f3a27727f\") " pod="openshift-marketplace/certified-operators-8dxcs" Nov 22 07:16:57 crc kubenswrapper[4853]: I1122 07:16:57.205827 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vc75x\" (UniqueName: \"kubernetes.io/projected/cdc57f0c-a9c1-4b48-9a08-209f3a27727f-kube-api-access-vc75x\") pod \"certified-operators-8dxcs\" (UID: \"cdc57f0c-a9c1-4b48-9a08-209f3a27727f\") " pod="openshift-marketplace/certified-operators-8dxcs" Nov 22 07:16:57 crc kubenswrapper[4853]: I1122 07:16:57.206268 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cdc57f0c-a9c1-4b48-9a08-209f3a27727f-utilities\") pod \"certified-operators-8dxcs\" (UID: \"cdc57f0c-a9c1-4b48-9a08-209f3a27727f\") " pod="openshift-marketplace/certified-operators-8dxcs" Nov 22 07:16:57 crc kubenswrapper[4853]: I1122 07:16:57.206537 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cdc57f0c-a9c1-4b48-9a08-209f3a27727f-catalog-content\") pod \"certified-operators-8dxcs\" (UID: \"cdc57f0c-a9c1-4b48-9a08-209f3a27727f\") " pod="openshift-marketplace/certified-operators-8dxcs" Nov 22 07:16:57 crc kubenswrapper[4853]: I1122 07:16:57.224002 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vc75x\" (UniqueName: \"kubernetes.io/projected/cdc57f0c-a9c1-4b48-9a08-209f3a27727f-kube-api-access-vc75x\") pod \"certified-operators-8dxcs\" (UID: \"cdc57f0c-a9c1-4b48-9a08-209f3a27727f\") " pod="openshift-marketplace/certified-operators-8dxcs" Nov 22 07:16:57 crc kubenswrapper[4853]: I1122 07:16:57.240567 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tdfrh" Nov 22 07:16:57 crc kubenswrapper[4853]: I1122 07:16:57.331351 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8dxcs" Nov 22 07:16:57 crc kubenswrapper[4853]: I1122 07:16:57.463368 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tdfrh"] Nov 22 07:16:57 crc kubenswrapper[4853]: W1122 07:16:57.473927 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4e241aed_043d_4b92_9f04_2a36511cff3b.slice/crio-a41dbaf8c5f6a4396bd27d36f769ca6a4ad4ea8ea2eb05b38104a822159fd768 WatchSource:0}: Error finding container a41dbaf8c5f6a4396bd27d36f769ca6a4ad4ea8ea2eb05b38104a822159fd768: Status 404 returned error can't find the container with id a41dbaf8c5f6a4396bd27d36f769ca6a4ad4ea8ea2eb05b38104a822159fd768 Nov 22 07:16:57 crc kubenswrapper[4853]: I1122 07:16:57.552221 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8dxcs"] Nov 22 07:16:57 crc kubenswrapper[4853]: I1122 07:16:57.866364 4853 generic.go:334] "Generic (PLEG): container finished" podID="4e241aed-043d-4b92-9f04-2a36511cff3b" containerID="5a950d09cc06e00b39bb489bb05e204a79434de4855956fcd9c6854b34967658" exitCode=0 Nov 22 07:16:57 crc kubenswrapper[4853]: I1122 07:16:57.866454 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tdfrh" event={"ID":"4e241aed-043d-4b92-9f04-2a36511cff3b","Type":"ContainerDied","Data":"5a950d09cc06e00b39bb489bb05e204a79434de4855956fcd9c6854b34967658"} Nov 22 07:16:57 crc kubenswrapper[4853]: I1122 07:16:57.866550 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tdfrh" event={"ID":"4e241aed-043d-4b92-9f04-2a36511cff3b","Type":"ContainerStarted","Data":"a41dbaf8c5f6a4396bd27d36f769ca6a4ad4ea8ea2eb05b38104a822159fd768"} Nov 22 07:16:57 crc kubenswrapper[4853]: I1122 07:16:57.872586 4853 generic.go:334] "Generic (PLEG): container finished" podID="c1265d82-d3bb-4d83-bb9e-05cbb5960004" containerID="c25362cf679d3947aff366474f5394b57aa109c36bc3ce946481fe1c5c282a86" exitCode=0 Nov 22 07:16:57 crc kubenswrapper[4853]: I1122 07:16:57.872621 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m28tt" event={"ID":"c1265d82-d3bb-4d83-bb9e-05cbb5960004","Type":"ContainerDied","Data":"c25362cf679d3947aff366474f5394b57aa109c36bc3ce946481fe1c5c282a86"} Nov 22 07:16:57 crc kubenswrapper[4853]: I1122 07:16:57.883191 4853 generic.go:334] "Generic (PLEG): container finished" podID="dd697e52-9abd-4be8-a245-625d1dde804e" containerID="cc0774378b8b594a8790f267c01c78fbd1bfd23b96e518b6d9c3774977ccb8ee" exitCode=0 Nov 22 07:16:57 crc kubenswrapper[4853]: I1122 07:16:57.883300 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4mqss" event={"ID":"dd697e52-9abd-4be8-a245-625d1dde804e","Type":"ContainerDied","Data":"cc0774378b8b594a8790f267c01c78fbd1bfd23b96e518b6d9c3774977ccb8ee"} Nov 22 07:16:57 crc kubenswrapper[4853]: I1122 07:16:57.885735 4853 generic.go:334] "Generic (PLEG): container finished" podID="cdc57f0c-a9c1-4b48-9a08-209f3a27727f" containerID="dc15e23a5958ba9537c601fa47682b020fb1f4b6dfdaa5608f7b99b26beca984" exitCode=0 Nov 22 07:16:57 crc kubenswrapper[4853]: I1122 07:16:57.885775 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8dxcs" 
event={"ID":"cdc57f0c-a9c1-4b48-9a08-209f3a27727f","Type":"ContainerDied","Data":"dc15e23a5958ba9537c601fa47682b020fb1f4b6dfdaa5608f7b99b26beca984"} Nov 22 07:16:57 crc kubenswrapper[4853]: I1122 07:16:57.885796 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8dxcs" event={"ID":"cdc57f0c-a9c1-4b48-9a08-209f3a27727f","Type":"ContainerStarted","Data":"98af1cb18384c7a3d7f856c67f3f5a6384f5ef9a7b9054ef1d1ba94d515ca8ce"} Nov 22 07:16:58 crc kubenswrapper[4853]: I1122 07:16:58.895367 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4mqss" event={"ID":"dd697e52-9abd-4be8-a245-625d1dde804e","Type":"ContainerStarted","Data":"b56c133a58821cb7ea3de87cfa63fec579431edd689c608fa12516582d1a4cd1"} Nov 22 07:16:58 crc kubenswrapper[4853]: I1122 07:16:58.897513 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m28tt" event={"ID":"c1265d82-d3bb-4d83-bb9e-05cbb5960004","Type":"ContainerStarted","Data":"2b3ad3bea29943c44e9cea9ee3f0bc048b006035e8faf20c8263a91cd3e18456"} Nov 22 07:16:58 crc kubenswrapper[4853]: I1122 07:16:58.924196 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-4mqss" podStartSLOduration=2.184287189 podStartE2EDuration="4.924174778s" podCreationTimestamp="2025-11-22 07:16:54 +0000 UTC" firstStartedPulling="2025-11-22 07:16:55.83265781 +0000 UTC m=+414.673280436" lastFinishedPulling="2025-11-22 07:16:58.572545399 +0000 UTC m=+417.413168025" observedRunningTime="2025-11-22 07:16:58.923180627 +0000 UTC m=+417.763803253" watchObservedRunningTime="2025-11-22 07:16:58.924174778 +0000 UTC m=+417.764797404" Nov 22 07:16:58 crc kubenswrapper[4853]: I1122 07:16:58.944465 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-m28tt" podStartSLOduration=2.410539657 podStartE2EDuration="4.944438804s" podCreationTimestamp="2025-11-22 07:16:54 +0000 UTC" firstStartedPulling="2025-11-22 07:16:55.834393143 +0000 UTC m=+414.675015769" lastFinishedPulling="2025-11-22 07:16:58.36829229 +0000 UTC m=+417.208914916" observedRunningTime="2025-11-22 07:16:58.944416784 +0000 UTC m=+417.785039420" watchObservedRunningTime="2025-11-22 07:16:58.944438804 +0000 UTC m=+417.785061430" Nov 22 07:16:59 crc kubenswrapper[4853]: I1122 07:16:59.905003 4853 generic.go:334] "Generic (PLEG): container finished" podID="cdc57f0c-a9c1-4b48-9a08-209f3a27727f" containerID="c9f58d97bdec7ec3cecafb5659d0042bd14d7d53809cbba313f9dce492cd4dfd" exitCode=0 Nov 22 07:16:59 crc kubenswrapper[4853]: I1122 07:16:59.905067 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8dxcs" event={"ID":"cdc57f0c-a9c1-4b48-9a08-209f3a27727f","Type":"ContainerDied","Data":"c9f58d97bdec7ec3cecafb5659d0042bd14d7d53809cbba313f9dce492cd4dfd"} Nov 22 07:16:59 crc kubenswrapper[4853]: I1122 07:16:59.908194 4853 generic.go:334] "Generic (PLEG): container finished" podID="4e241aed-043d-4b92-9f04-2a36511cff3b" containerID="80a844bb7c41bbd15ee0c57e4dd83c07112803ff2aae049b196d7437703899fc" exitCode=0 Nov 22 07:16:59 crc kubenswrapper[4853]: I1122 07:16:59.908378 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tdfrh" event={"ID":"4e241aed-043d-4b92-9f04-2a36511cff3b","Type":"ContainerDied","Data":"80a844bb7c41bbd15ee0c57e4dd83c07112803ff2aae049b196d7437703899fc"} Nov 22 07:17:00 crc 
kubenswrapper[4853]: I1122 07:17:00.917875 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8dxcs" event={"ID":"cdc57f0c-a9c1-4b48-9a08-209f3a27727f","Type":"ContainerStarted","Data":"8aa62f5980fbb0a8e2f41b6109887db14d97b69b3cc61dbfa76e136dfffb0374"} Nov 22 07:17:00 crc kubenswrapper[4853]: I1122 07:17:00.922439 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tdfrh" event={"ID":"4e241aed-043d-4b92-9f04-2a36511cff3b","Type":"ContainerStarted","Data":"1a56524613430b8914b9ad2a7b0102b9775d79d1b1f35a102b1b5d13aec37ee4"} Nov 22 07:17:00 crc kubenswrapper[4853]: I1122 07:17:00.941588 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-8dxcs" podStartSLOduration=2.474187406 podStartE2EDuration="4.941561964s" podCreationTimestamp="2025-11-22 07:16:56 +0000 UTC" firstStartedPulling="2025-11-22 07:16:57.887605948 +0000 UTC m=+416.728228564" lastFinishedPulling="2025-11-22 07:17:00.354980496 +0000 UTC m=+419.195603122" observedRunningTime="2025-11-22 07:17:00.940491111 +0000 UTC m=+419.781113737" watchObservedRunningTime="2025-11-22 07:17:00.941561964 +0000 UTC m=+419.782184590" Nov 22 07:17:00 crc kubenswrapper[4853]: I1122 07:17:00.968633 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-tdfrh" podStartSLOduration=2.509747816 podStartE2EDuration="4.96859558s" podCreationTimestamp="2025-11-22 07:16:56 +0000 UTC" firstStartedPulling="2025-11-22 07:16:57.868431154 +0000 UTC m=+416.709053780" lastFinishedPulling="2025-11-22 07:17:00.327278918 +0000 UTC m=+419.167901544" observedRunningTime="2025-11-22 07:17:00.96471897 +0000 UTC m=+419.805341596" watchObservedRunningTime="2025-11-22 07:17:00.96859558 +0000 UTC m=+419.809218206" Nov 22 07:17:01 crc kubenswrapper[4853]: I1122 07:17:01.297565 4853 patch_prober.go:28] interesting pod/machine-config-daemon-fflvd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 07:17:01 crc kubenswrapper[4853]: I1122 07:17:01.297650 4853 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 07:17:04 crc kubenswrapper[4853]: I1122 07:17:04.731460 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-4mqss" Nov 22 07:17:04 crc kubenswrapper[4853]: I1122 07:17:04.732119 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-4mqss" Nov 22 07:17:04 crc kubenswrapper[4853]: I1122 07:17:04.812625 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-4mqss" Nov 22 07:17:04 crc kubenswrapper[4853]: I1122 07:17:04.972321 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-m28tt" Nov 22 07:17:04 crc kubenswrapper[4853]: I1122 07:17:04.972404 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/redhat-operators-m28tt" Nov 22 07:17:04 crc kubenswrapper[4853]: I1122 07:17:04.993978 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-4mqss" Nov 22 07:17:05 crc kubenswrapper[4853]: I1122 07:17:05.023398 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-m28tt" Nov 22 07:17:06 crc kubenswrapper[4853]: I1122 07:17:06.020765 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-m28tt" Nov 22 07:17:07 crc kubenswrapper[4853]: I1122 07:17:07.241278 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-tdfrh" Nov 22 07:17:07 crc kubenswrapper[4853]: I1122 07:17:07.242137 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-tdfrh" Nov 22 07:17:07 crc kubenswrapper[4853]: I1122 07:17:07.290825 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-tdfrh" Nov 22 07:17:07 crc kubenswrapper[4853]: I1122 07:17:07.332991 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-8dxcs" Nov 22 07:17:07 crc kubenswrapper[4853]: I1122 07:17:07.333066 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-8dxcs" Nov 22 07:17:07 crc kubenswrapper[4853]: I1122 07:17:07.383812 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-8dxcs" Nov 22 07:17:08 crc kubenswrapper[4853]: I1122 07:17:08.007440 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-8dxcs" Nov 22 07:17:08 crc kubenswrapper[4853]: I1122 07:17:08.020110 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-tdfrh" Nov 22 07:17:23 crc kubenswrapper[4853]: I1122 07:17:23.108905 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-6d5b84845-jfq5l"] Nov 22 07:17:23 crc kubenswrapper[4853]: I1122 07:17:23.110398 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-jfq5l" Nov 22 07:17:23 crc kubenswrapper[4853]: I1122 07:17:23.113010 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"openshift-service-ca.crt" Nov 22 07:17:23 crc kubenswrapper[4853]: I1122 07:17:23.114059 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-dockercfg-wwt9l" Nov 22 07:17:23 crc kubenswrapper[4853]: I1122 07:17:23.114512 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-root-ca.crt" Nov 22 07:17:23 crc kubenswrapper[4853]: I1122 07:17:23.114733 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemetry-config" Nov 22 07:17:23 crc kubenswrapper[4853]: I1122 07:17:23.117991 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-tls" Nov 22 07:17:23 crc kubenswrapper[4853]: I1122 07:17:23.124366 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-6d5b84845-jfq5l"] Nov 22 07:17:23 crc kubenswrapper[4853]: I1122 07:17:23.320175 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/8bca0ee1-ccb1-4dfd-8e7d-48e3b1152cd7-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6d5b84845-jfq5l\" (UID: \"8bca0ee1-ccb1-4dfd-8e7d-48e3b1152cd7\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-jfq5l" Nov 22 07:17:23 crc kubenswrapper[4853]: I1122 07:17:23.320255 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/8bca0ee1-ccb1-4dfd-8e7d-48e3b1152cd7-telemetry-config\") pod \"cluster-monitoring-operator-6d5b84845-jfq5l\" (UID: \"8bca0ee1-ccb1-4dfd-8e7d-48e3b1152cd7\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-jfq5l" Nov 22 07:17:23 crc kubenswrapper[4853]: I1122 07:17:23.320289 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5cp4r\" (UniqueName: \"kubernetes.io/projected/8bca0ee1-ccb1-4dfd-8e7d-48e3b1152cd7-kube-api-access-5cp4r\") pod \"cluster-monitoring-operator-6d5b84845-jfq5l\" (UID: \"8bca0ee1-ccb1-4dfd-8e7d-48e3b1152cd7\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-jfq5l" Nov 22 07:17:23 crc kubenswrapper[4853]: I1122 07:17:23.421391 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/8bca0ee1-ccb1-4dfd-8e7d-48e3b1152cd7-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6d5b84845-jfq5l\" (UID: \"8bca0ee1-ccb1-4dfd-8e7d-48e3b1152cd7\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-jfq5l" Nov 22 07:17:23 crc kubenswrapper[4853]: I1122 07:17:23.421898 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/8bca0ee1-ccb1-4dfd-8e7d-48e3b1152cd7-telemetry-config\") pod \"cluster-monitoring-operator-6d5b84845-jfq5l\" (UID: \"8bca0ee1-ccb1-4dfd-8e7d-48e3b1152cd7\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-jfq5l" Nov 22 07:17:23 crc kubenswrapper[4853]: I1122 
07:17:23.422030 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5cp4r\" (UniqueName: \"kubernetes.io/projected/8bca0ee1-ccb1-4dfd-8e7d-48e3b1152cd7-kube-api-access-5cp4r\") pod \"cluster-monitoring-operator-6d5b84845-jfq5l\" (UID: \"8bca0ee1-ccb1-4dfd-8e7d-48e3b1152cd7\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-jfq5l" Nov 22 07:17:23 crc kubenswrapper[4853]: I1122 07:17:23.423099 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/8bca0ee1-ccb1-4dfd-8e7d-48e3b1152cd7-telemetry-config\") pod \"cluster-monitoring-operator-6d5b84845-jfq5l\" (UID: \"8bca0ee1-ccb1-4dfd-8e7d-48e3b1152cd7\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-jfq5l" Nov 22 07:17:23 crc kubenswrapper[4853]: I1122 07:17:23.430014 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/8bca0ee1-ccb1-4dfd-8e7d-48e3b1152cd7-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6d5b84845-jfq5l\" (UID: \"8bca0ee1-ccb1-4dfd-8e7d-48e3b1152cd7\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-jfq5l" Nov 22 07:17:23 crc kubenswrapper[4853]: I1122 07:17:23.440473 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5cp4r\" (UniqueName: \"kubernetes.io/projected/8bca0ee1-ccb1-4dfd-8e7d-48e3b1152cd7-kube-api-access-5cp4r\") pod \"cluster-monitoring-operator-6d5b84845-jfq5l\" (UID: \"8bca0ee1-ccb1-4dfd-8e7d-48e3b1152cd7\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-jfq5l" Nov 22 07:17:23 crc kubenswrapper[4853]: I1122 07:17:23.734617 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-jfq5l" Nov 22 07:17:23 crc kubenswrapper[4853]: I1122 07:17:23.948610 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-6d5b84845-jfq5l"] Nov 22 07:17:23 crc kubenswrapper[4853]: W1122 07:17:23.960739 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8bca0ee1_ccb1_4dfd_8e7d_48e3b1152cd7.slice/crio-1dd55ad8474661407cf1b55a1e6c8b1088b6bb70c3432192f996e4814b63b3c8 WatchSource:0}: Error finding container 1dd55ad8474661407cf1b55a1e6c8b1088b6bb70c3432192f996e4814b63b3c8: Status 404 returned error can't find the container with id 1dd55ad8474661407cf1b55a1e6c8b1088b6bb70c3432192f996e4814b63b3c8 Nov 22 07:17:24 crc kubenswrapper[4853]: I1122 07:17:24.056393 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-jfq5l" event={"ID":"8bca0ee1-ccb1-4dfd-8e7d-48e3b1152cd7","Type":"ContainerStarted","Data":"1dd55ad8474661407cf1b55a1e6c8b1088b6bb70c3432192f996e4814b63b3c8"} Nov 22 07:17:28 crc kubenswrapper[4853]: I1122 07:17:28.919772 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-2czlg"] Nov 22 07:17:28 crc kubenswrapper[4853]: I1122 07:17:28.921298 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-2czlg" Nov 22 07:17:28 crc kubenswrapper[4853]: I1122 07:17:28.933239 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-2czlg"] Nov 22 07:17:29 crc kubenswrapper[4853]: I1122 07:17:29.007074 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/4e55ab2e-354c-40d3-b521-fc9558daef7f-ca-trust-extracted\") pod \"image-registry-66df7c8f76-2czlg\" (UID: \"4e55ab2e-354c-40d3-b521-fc9558daef7f\") " pod="openshift-image-registry/image-registry-66df7c8f76-2czlg" Nov 22 07:17:29 crc kubenswrapper[4853]: I1122 07:17:29.007140 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/4e55ab2e-354c-40d3-b521-fc9558daef7f-registry-tls\") pod \"image-registry-66df7c8f76-2czlg\" (UID: \"4e55ab2e-354c-40d3-b521-fc9558daef7f\") " pod="openshift-image-registry/image-registry-66df7c8f76-2czlg" Nov 22 07:17:29 crc kubenswrapper[4853]: I1122 07:17:29.007169 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/4e55ab2e-354c-40d3-b521-fc9558daef7f-installation-pull-secrets\") pod \"image-registry-66df7c8f76-2czlg\" (UID: \"4e55ab2e-354c-40d3-b521-fc9558daef7f\") " pod="openshift-image-registry/image-registry-66df7c8f76-2czlg" Nov 22 07:17:29 crc kubenswrapper[4853]: I1122 07:17:29.007208 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4e55ab2e-354c-40d3-b521-fc9558daef7f-bound-sa-token\") pod \"image-registry-66df7c8f76-2czlg\" (UID: \"4e55ab2e-354c-40d3-b521-fc9558daef7f\") " pod="openshift-image-registry/image-registry-66df7c8f76-2czlg" Nov 22 07:17:29 crc kubenswrapper[4853]: I1122 07:17:29.007255 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-2czlg\" (UID: \"4e55ab2e-354c-40d3-b521-fc9558daef7f\") " pod="openshift-image-registry/image-registry-66df7c8f76-2czlg" Nov 22 07:17:29 crc kubenswrapper[4853]: I1122 07:17:29.007275 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4e55ab2e-354c-40d3-b521-fc9558daef7f-trusted-ca\") pod \"image-registry-66df7c8f76-2czlg\" (UID: \"4e55ab2e-354c-40d3-b521-fc9558daef7f\") " pod="openshift-image-registry/image-registry-66df7c8f76-2czlg" Nov 22 07:17:29 crc kubenswrapper[4853]: I1122 07:17:29.007333 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k2k42\" (UniqueName: \"kubernetes.io/projected/4e55ab2e-354c-40d3-b521-fc9558daef7f-kube-api-access-k2k42\") pod \"image-registry-66df7c8f76-2czlg\" (UID: \"4e55ab2e-354c-40d3-b521-fc9558daef7f\") " pod="openshift-image-registry/image-registry-66df7c8f76-2czlg" Nov 22 07:17:29 crc kubenswrapper[4853]: I1122 07:17:29.007363 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: 
\"kubernetes.io/configmap/4e55ab2e-354c-40d3-b521-fc9558daef7f-registry-certificates\") pod \"image-registry-66df7c8f76-2czlg\" (UID: \"4e55ab2e-354c-40d3-b521-fc9558daef7f\") " pod="openshift-image-registry/image-registry-66df7c8f76-2czlg" Nov 22 07:17:29 crc kubenswrapper[4853]: I1122 07:17:29.040467 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-tltvx"] Nov 22 07:17:29 crc kubenswrapper[4853]: I1122 07:17:29.041358 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-tltvx" Nov 22 07:17:29 crc kubenswrapper[4853]: I1122 07:17:29.044661 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-tls" Nov 22 07:17:29 crc kubenswrapper[4853]: I1122 07:17:29.045034 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-dockercfg-hldbb" Nov 22 07:17:29 crc kubenswrapper[4853]: I1122 07:17:29.045055 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-2czlg\" (UID: \"4e55ab2e-354c-40d3-b521-fc9558daef7f\") " pod="openshift-image-registry/image-registry-66df7c8f76-2czlg" Nov 22 07:17:29 crc kubenswrapper[4853]: I1122 07:17:29.056843 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-tltvx"] Nov 22 07:17:29 crc kubenswrapper[4853]: I1122 07:17:29.089928 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-jfq5l" event={"ID":"8bca0ee1-ccb1-4dfd-8e7d-48e3b1152cd7","Type":"ContainerStarted","Data":"72f193ab97a1e460055907bdb9663723629745d426c717829bb5d03667e24223"} Nov 22 07:17:29 crc kubenswrapper[4853]: I1122 07:17:29.107916 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4e55ab2e-354c-40d3-b521-fc9558daef7f-trusted-ca\") pod \"image-registry-66df7c8f76-2czlg\" (UID: \"4e55ab2e-354c-40d3-b521-fc9558daef7f\") " pod="openshift-image-registry/image-registry-66df7c8f76-2czlg" Nov 22 07:17:29 crc kubenswrapper[4853]: I1122 07:17:29.107985 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k2k42\" (UniqueName: \"kubernetes.io/projected/4e55ab2e-354c-40d3-b521-fc9558daef7f-kube-api-access-k2k42\") pod \"image-registry-66df7c8f76-2czlg\" (UID: \"4e55ab2e-354c-40d3-b521-fc9558daef7f\") " pod="openshift-image-registry/image-registry-66df7c8f76-2czlg" Nov 22 07:17:29 crc kubenswrapper[4853]: I1122 07:17:29.108011 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/4e55ab2e-354c-40d3-b521-fc9558daef7f-registry-certificates\") pod \"image-registry-66df7c8f76-2czlg\" (UID: \"4e55ab2e-354c-40d3-b521-fc9558daef7f\") " pod="openshift-image-registry/image-registry-66df7c8f76-2czlg" Nov 22 07:17:29 crc kubenswrapper[4853]: I1122 07:17:29.108042 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: 
\"kubernetes.io/empty-dir/4e55ab2e-354c-40d3-b521-fc9558daef7f-ca-trust-extracted\") pod \"image-registry-66df7c8f76-2czlg\" (UID: \"4e55ab2e-354c-40d3-b521-fc9558daef7f\") " pod="openshift-image-registry/image-registry-66df7c8f76-2czlg" Nov 22 07:17:29 crc kubenswrapper[4853]: I1122 07:17:29.108061 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/4e55ab2e-354c-40d3-b521-fc9558daef7f-registry-tls\") pod \"image-registry-66df7c8f76-2czlg\" (UID: \"4e55ab2e-354c-40d3-b521-fc9558daef7f\") " pod="openshift-image-registry/image-registry-66df7c8f76-2czlg" Nov 22 07:17:29 crc kubenswrapper[4853]: I1122 07:17:29.108082 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/4e55ab2e-354c-40d3-b521-fc9558daef7f-installation-pull-secrets\") pod \"image-registry-66df7c8f76-2czlg\" (UID: \"4e55ab2e-354c-40d3-b521-fc9558daef7f\") " pod="openshift-image-registry/image-registry-66df7c8f76-2czlg" Nov 22 07:17:29 crc kubenswrapper[4853]: I1122 07:17:29.108117 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4e55ab2e-354c-40d3-b521-fc9558daef7f-bound-sa-token\") pod \"image-registry-66df7c8f76-2czlg\" (UID: \"4e55ab2e-354c-40d3-b521-fc9558daef7f\") " pod="openshift-image-registry/image-registry-66df7c8f76-2czlg" Nov 22 07:17:29 crc kubenswrapper[4853]: I1122 07:17:29.108157 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/7b0b38d8-e5ea-41a6-8566-8d24ea403083-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-tltvx\" (UID: \"7b0b38d8-e5ea-41a6-8566-8d24ea403083\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-tltvx" Nov 22 07:17:29 crc kubenswrapper[4853]: I1122 07:17:29.109628 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/4e55ab2e-354c-40d3-b521-fc9558daef7f-ca-trust-extracted\") pod \"image-registry-66df7c8f76-2czlg\" (UID: \"4e55ab2e-354c-40d3-b521-fc9558daef7f\") " pod="openshift-image-registry/image-registry-66df7c8f76-2czlg" Nov 22 07:17:29 crc kubenswrapper[4853]: I1122 07:17:29.109653 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/4e55ab2e-354c-40d3-b521-fc9558daef7f-registry-certificates\") pod \"image-registry-66df7c8f76-2czlg\" (UID: \"4e55ab2e-354c-40d3-b521-fc9558daef7f\") " pod="openshift-image-registry/image-registry-66df7c8f76-2czlg" Nov 22 07:17:29 crc kubenswrapper[4853]: I1122 07:17:29.109875 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4e55ab2e-354c-40d3-b521-fc9558daef7f-trusted-ca\") pod \"image-registry-66df7c8f76-2czlg\" (UID: \"4e55ab2e-354c-40d3-b521-fc9558daef7f\") " pod="openshift-image-registry/image-registry-66df7c8f76-2czlg" Nov 22 07:17:29 crc kubenswrapper[4853]: I1122 07:17:29.111103 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-jfq5l" podStartSLOduration=2.011090937 podStartE2EDuration="6.111077421s" podCreationTimestamp="2025-11-22 07:17:23 +0000 UTC" firstStartedPulling="2025-11-22 07:17:23.964092543 
+0000 UTC m=+442.804715169" lastFinishedPulling="2025-11-22 07:17:28.064079017 +0000 UTC m=+446.904701653" observedRunningTime="2025-11-22 07:17:29.107953736 +0000 UTC m=+447.948576382" watchObservedRunningTime="2025-11-22 07:17:29.111077421 +0000 UTC m=+447.951700047" Nov 22 07:17:29 crc kubenswrapper[4853]: I1122 07:17:29.115697 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/4e55ab2e-354c-40d3-b521-fc9558daef7f-installation-pull-secrets\") pod \"image-registry-66df7c8f76-2czlg\" (UID: \"4e55ab2e-354c-40d3-b521-fc9558daef7f\") " pod="openshift-image-registry/image-registry-66df7c8f76-2czlg" Nov 22 07:17:29 crc kubenswrapper[4853]: I1122 07:17:29.115778 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/4e55ab2e-354c-40d3-b521-fc9558daef7f-registry-tls\") pod \"image-registry-66df7c8f76-2czlg\" (UID: \"4e55ab2e-354c-40d3-b521-fc9558daef7f\") " pod="openshift-image-registry/image-registry-66df7c8f76-2czlg" Nov 22 07:17:29 crc kubenswrapper[4853]: I1122 07:17:29.126521 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4e55ab2e-354c-40d3-b521-fc9558daef7f-bound-sa-token\") pod \"image-registry-66df7c8f76-2czlg\" (UID: \"4e55ab2e-354c-40d3-b521-fc9558daef7f\") " pod="openshift-image-registry/image-registry-66df7c8f76-2czlg" Nov 22 07:17:29 crc kubenswrapper[4853]: I1122 07:17:29.127016 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k2k42\" (UniqueName: \"kubernetes.io/projected/4e55ab2e-354c-40d3-b521-fc9558daef7f-kube-api-access-k2k42\") pod \"image-registry-66df7c8f76-2czlg\" (UID: \"4e55ab2e-354c-40d3-b521-fc9558daef7f\") " pod="openshift-image-registry/image-registry-66df7c8f76-2czlg" Nov 22 07:17:29 crc kubenswrapper[4853]: I1122 07:17:29.208710 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/7b0b38d8-e5ea-41a6-8566-8d24ea403083-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-tltvx\" (UID: \"7b0b38d8-e5ea-41a6-8566-8d24ea403083\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-tltvx" Nov 22 07:17:29 crc kubenswrapper[4853]: E1122 07:17:29.208908 4853 secret.go:188] Couldn't get secret openshift-monitoring/prometheus-operator-admission-webhook-tls: secret "prometheus-operator-admission-webhook-tls" not found Nov 22 07:17:29 crc kubenswrapper[4853]: E1122 07:17:29.208980 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7b0b38d8-e5ea-41a6-8566-8d24ea403083-tls-certificates podName:7b0b38d8-e5ea-41a6-8566-8d24ea403083 nodeName:}" failed. No retries permitted until 2025-11-22 07:17:29.708955241 +0000 UTC m=+448.549577867 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-certificates" (UniqueName: "kubernetes.io/secret/7b0b38d8-e5ea-41a6-8566-8d24ea403083-tls-certificates") pod "prometheus-operator-admission-webhook-f54c54754-tltvx" (UID: "7b0b38d8-e5ea-41a6-8566-8d24ea403083") : secret "prometheus-operator-admission-webhook-tls" not found Nov 22 07:17:29 crc kubenswrapper[4853]: I1122 07:17:29.241723 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-2czlg" Nov 22 07:17:29 crc kubenswrapper[4853]: I1122 07:17:29.439535 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-2czlg"] Nov 22 07:17:29 crc kubenswrapper[4853]: W1122 07:17:29.446995 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4e55ab2e_354c_40d3_b521_fc9558daef7f.slice/crio-400ad812c0d2a19e8972cbfbb21d3e024a2000a987531d77b66671eae5289662 WatchSource:0}: Error finding container 400ad812c0d2a19e8972cbfbb21d3e024a2000a987531d77b66671eae5289662: Status 404 returned error can't find the container with id 400ad812c0d2a19e8972cbfbb21d3e024a2000a987531d77b66671eae5289662 Nov 22 07:17:29 crc kubenswrapper[4853]: I1122 07:17:29.717802 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/7b0b38d8-e5ea-41a6-8566-8d24ea403083-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-tltvx\" (UID: \"7b0b38d8-e5ea-41a6-8566-8d24ea403083\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-tltvx" Nov 22 07:17:29 crc kubenswrapper[4853]: I1122 07:17:29.724315 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/7b0b38d8-e5ea-41a6-8566-8d24ea403083-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-tltvx\" (UID: \"7b0b38d8-e5ea-41a6-8566-8d24ea403083\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-tltvx" Nov 22 07:17:29 crc kubenswrapper[4853]: I1122 07:17:29.973256 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-tltvx" Nov 22 07:17:30 crc kubenswrapper[4853]: I1122 07:17:30.098666 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-2czlg" event={"ID":"4e55ab2e-354c-40d3-b521-fc9558daef7f","Type":"ContainerStarted","Data":"9c609d18714c54906497c4a1d7fad5b842837a3cbc651144eb2eb2153a740ff6"} Nov 22 07:17:30 crc kubenswrapper[4853]: I1122 07:17:30.099206 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-2czlg" event={"ID":"4e55ab2e-354c-40d3-b521-fc9558daef7f","Type":"ContainerStarted","Data":"400ad812c0d2a19e8972cbfbb21d3e024a2000a987531d77b66671eae5289662"} Nov 22 07:17:30 crc kubenswrapper[4853]: I1122 07:17:30.118571 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-2czlg" podStartSLOduration=2.1185483019999998 podStartE2EDuration="2.118548302s" podCreationTimestamp="2025-11-22 07:17:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:17:30.116827135 +0000 UTC m=+448.957449761" watchObservedRunningTime="2025-11-22 07:17:30.118548302 +0000 UTC m=+448.959170918" Nov 22 07:17:30 crc kubenswrapper[4853]: I1122 07:17:30.171737 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-tltvx"] Nov 22 07:17:30 crc kubenswrapper[4853]: W1122 07:17:30.179167 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7b0b38d8_e5ea_41a6_8566_8d24ea403083.slice/crio-372f9cb9d7c5109f787ec20f932f6aab57a5f587596672b6ce96366fef8d41bb WatchSource:0}: Error finding container 372f9cb9d7c5109f787ec20f932f6aab57a5f587596672b6ce96366fef8d41bb: Status 404 returned error can't find the container with id 372f9cb9d7c5109f787ec20f932f6aab57a5f587596672b6ce96366fef8d41bb Nov 22 07:17:31 crc kubenswrapper[4853]: I1122 07:17:31.107421 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-tltvx" event={"ID":"7b0b38d8-e5ea-41a6-8566-8d24ea403083","Type":"ContainerStarted","Data":"372f9cb9d7c5109f787ec20f932f6aab57a5f587596672b6ce96366fef8d41bb"} Nov 22 07:17:31 crc kubenswrapper[4853]: I1122 07:17:31.107951 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-2czlg" Nov 22 07:17:31 crc kubenswrapper[4853]: I1122 07:17:31.298014 4853 patch_prober.go:28] interesting pod/machine-config-daemon-fflvd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 07:17:31 crc kubenswrapper[4853]: I1122 07:17:31.298211 4853 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 07:17:31 crc kubenswrapper[4853]: I1122 07:17:31.298274 4853 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openshift-machine-config-operator/machine-config-daemon-fflvd" Nov 22 07:17:31 crc kubenswrapper[4853]: I1122 07:17:31.299236 4853 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1534e0876d5be06d823b8de17b8b10504cf7555aab496f4dc301e85f1b2d8572"} pod="openshift-machine-config-operator/machine-config-daemon-fflvd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 22 07:17:31 crc kubenswrapper[4853]: I1122 07:17:31.299346 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" containerName="machine-config-daemon" containerID="cri-o://1534e0876d5be06d823b8de17b8b10504cf7555aab496f4dc301e85f1b2d8572" gracePeriod=600 Nov 22 07:17:32 crc kubenswrapper[4853]: I1122 07:17:32.117314 4853 generic.go:334] "Generic (PLEG): container finished" podID="476c875a-2b87-419a-8042-0ba059620fd8" containerID="1534e0876d5be06d823b8de17b8b10504cf7555aab496f4dc301e85f1b2d8572" exitCode=0 Nov 22 07:17:32 crc kubenswrapper[4853]: I1122 07:17:32.117514 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" event={"ID":"476c875a-2b87-419a-8042-0ba059620fd8","Type":"ContainerDied","Data":"1534e0876d5be06d823b8de17b8b10504cf7555aab496f4dc301e85f1b2d8572"} Nov 22 07:17:32 crc kubenswrapper[4853]: I1122 07:17:32.118169 4853 scope.go:117] "RemoveContainer" containerID="28cf28a4f0e05df5ad55eff2ab13e375fb1d5725f1d0e3b85c7c9cd785cf4453" Nov 22 07:17:33 crc kubenswrapper[4853]: I1122 07:17:33.128969 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-tltvx" event={"ID":"7b0b38d8-e5ea-41a6-8566-8d24ea403083","Type":"ContainerStarted","Data":"a4b71e254e4ba6b78817e62f7fa5bdf9e0040327fed9dff40dcb92e4c3d27498"} Nov 22 07:17:33 crc kubenswrapper[4853]: I1122 07:17:33.129551 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-tltvx" Nov 22 07:17:33 crc kubenswrapper[4853]: I1122 07:17:33.132194 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" event={"ID":"476c875a-2b87-419a-8042-0ba059620fd8","Type":"ContainerStarted","Data":"7523a60199034cbb4e53ad78b590aa431d7e2d4c9ba4923e7f266cfff6902684"} Nov 22 07:17:33 crc kubenswrapper[4853]: I1122 07:17:33.137063 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-tltvx" Nov 22 07:17:33 crc kubenswrapper[4853]: I1122 07:17:33.152845 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-tltvx" podStartSLOduration=1.481962415 podStartE2EDuration="4.15281779s" podCreationTimestamp="2025-11-22 07:17:29 +0000 UTC" firstStartedPulling="2025-11-22 07:17:30.181443544 +0000 UTC m=+449.022066170" lastFinishedPulling="2025-11-22 07:17:32.852298919 +0000 UTC m=+451.692921545" observedRunningTime="2025-11-22 07:17:33.149392613 +0000 UTC m=+451.990015259" watchObservedRunningTime="2025-11-22 07:17:33.15281779 +0000 UTC m=+451.993440436" Nov 22 07:17:34 crc kubenswrapper[4853]: I1122 07:17:34.103356 4853 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-monitoring/prometheus-operator-db54df47d-q252d"] Nov 22 07:17:34 crc kubenswrapper[4853]: I1122 07:17:34.105150 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-db54df47d-q252d" Nov 22 07:17:34 crc kubenswrapper[4853]: I1122 07:17:34.108387 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-client-ca" Nov 22 07:17:34 crc kubenswrapper[4853]: I1122 07:17:34.108611 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-tls" Nov 22 07:17:34 crc kubenswrapper[4853]: I1122 07:17:34.108690 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config" Nov 22 07:17:34 crc kubenswrapper[4853]: I1122 07:17:34.108837 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-dockercfg-znm58" Nov 22 07:17:34 crc kubenswrapper[4853]: I1122 07:17:34.116500 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-db54df47d-q252d"] Nov 22 07:17:34 crc kubenswrapper[4853]: I1122 07:17:34.179793 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/f0396740-4da5-4d03-8961-6ce6473dbb09-prometheus-operator-tls\") pod \"prometheus-operator-db54df47d-q252d\" (UID: \"f0396740-4da5-4d03-8961-6ce6473dbb09\") " pod="openshift-monitoring/prometheus-operator-db54df47d-q252d" Nov 22 07:17:34 crc kubenswrapper[4853]: I1122 07:17:34.179851 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/f0396740-4da5-4d03-8961-6ce6473dbb09-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-db54df47d-q252d\" (UID: \"f0396740-4da5-4d03-8961-6ce6473dbb09\") " pod="openshift-monitoring/prometheus-operator-db54df47d-q252d" Nov 22 07:17:34 crc kubenswrapper[4853]: I1122 07:17:34.179883 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zt54r\" (UniqueName: \"kubernetes.io/projected/f0396740-4da5-4d03-8961-6ce6473dbb09-kube-api-access-zt54r\") pod \"prometheus-operator-db54df47d-q252d\" (UID: \"f0396740-4da5-4d03-8961-6ce6473dbb09\") " pod="openshift-monitoring/prometheus-operator-db54df47d-q252d" Nov 22 07:17:34 crc kubenswrapper[4853]: I1122 07:17:34.179960 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/f0396740-4da5-4d03-8961-6ce6473dbb09-metrics-client-ca\") pod \"prometheus-operator-db54df47d-q252d\" (UID: \"f0396740-4da5-4d03-8961-6ce6473dbb09\") " pod="openshift-monitoring/prometheus-operator-db54df47d-q252d" Nov 22 07:17:34 crc kubenswrapper[4853]: I1122 07:17:34.281709 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/f0396740-4da5-4d03-8961-6ce6473dbb09-metrics-client-ca\") pod \"prometheus-operator-db54df47d-q252d\" (UID: \"f0396740-4da5-4d03-8961-6ce6473dbb09\") " pod="openshift-monitoring/prometheus-operator-db54df47d-q252d" Nov 22 07:17:34 crc kubenswrapper[4853]: I1122 07:17:34.281932 4853 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/f0396740-4da5-4d03-8961-6ce6473dbb09-prometheus-operator-tls\") pod \"prometheus-operator-db54df47d-q252d\" (UID: \"f0396740-4da5-4d03-8961-6ce6473dbb09\") " pod="openshift-monitoring/prometheus-operator-db54df47d-q252d" Nov 22 07:17:34 crc kubenswrapper[4853]: I1122 07:17:34.282008 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/f0396740-4da5-4d03-8961-6ce6473dbb09-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-db54df47d-q252d\" (UID: \"f0396740-4da5-4d03-8961-6ce6473dbb09\") " pod="openshift-monitoring/prometheus-operator-db54df47d-q252d" Nov 22 07:17:34 crc kubenswrapper[4853]: I1122 07:17:34.282043 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zt54r\" (UniqueName: \"kubernetes.io/projected/f0396740-4da5-4d03-8961-6ce6473dbb09-kube-api-access-zt54r\") pod \"prometheus-operator-db54df47d-q252d\" (UID: \"f0396740-4da5-4d03-8961-6ce6473dbb09\") " pod="openshift-monitoring/prometheus-operator-db54df47d-q252d" Nov 22 07:17:34 crc kubenswrapper[4853]: I1122 07:17:34.283081 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/f0396740-4da5-4d03-8961-6ce6473dbb09-metrics-client-ca\") pod \"prometheus-operator-db54df47d-q252d\" (UID: \"f0396740-4da5-4d03-8961-6ce6473dbb09\") " pod="openshift-monitoring/prometheus-operator-db54df47d-q252d" Nov 22 07:17:34 crc kubenswrapper[4853]: I1122 07:17:34.294222 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/f0396740-4da5-4d03-8961-6ce6473dbb09-prometheus-operator-tls\") pod \"prometheus-operator-db54df47d-q252d\" (UID: \"f0396740-4da5-4d03-8961-6ce6473dbb09\") " pod="openshift-monitoring/prometheus-operator-db54df47d-q252d" Nov 22 07:17:34 crc kubenswrapper[4853]: I1122 07:17:34.294675 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/f0396740-4da5-4d03-8961-6ce6473dbb09-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-db54df47d-q252d\" (UID: \"f0396740-4da5-4d03-8961-6ce6473dbb09\") " pod="openshift-monitoring/prometheus-operator-db54df47d-q252d" Nov 22 07:17:34 crc kubenswrapper[4853]: I1122 07:17:34.302906 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zt54r\" (UniqueName: \"kubernetes.io/projected/f0396740-4da5-4d03-8961-6ce6473dbb09-kube-api-access-zt54r\") pod \"prometheus-operator-db54df47d-q252d\" (UID: \"f0396740-4da5-4d03-8961-6ce6473dbb09\") " pod="openshift-monitoring/prometheus-operator-db54df47d-q252d" Nov 22 07:17:34 crc kubenswrapper[4853]: I1122 07:17:34.473206 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-operator-db54df47d-q252d" Nov 22 07:17:34 crc kubenswrapper[4853]: I1122 07:17:34.706061 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-db54df47d-q252d"] Nov 22 07:17:35 crc kubenswrapper[4853]: I1122 07:17:35.157507 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-db54df47d-q252d" event={"ID":"f0396740-4da5-4d03-8961-6ce6473dbb09","Type":"ContainerStarted","Data":"b84b98ab4070244be7480fb69bd5b4ba938daad8760ff27e77353828d7b0ce6f"} Nov 22 07:17:37 crc kubenswrapper[4853]: I1122 07:17:37.170970 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-db54df47d-q252d" event={"ID":"f0396740-4da5-4d03-8961-6ce6473dbb09","Type":"ContainerStarted","Data":"fb1e8e417a747aff8302b926f71fd72238c3c7a6a998266e4680e150f138ac05"} Nov 22 07:17:37 crc kubenswrapper[4853]: I1122 07:17:37.171847 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-db54df47d-q252d" event={"ID":"f0396740-4da5-4d03-8961-6ce6473dbb09","Type":"ContainerStarted","Data":"6958d5bccfd2d97d6ab051e03a362c3e40e24ca025e7e517c58de1ffcde1a29a"} Nov 22 07:17:37 crc kubenswrapper[4853]: I1122 07:17:37.201184 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-operator-db54df47d-q252d" podStartSLOduration=1.770875076 podStartE2EDuration="3.20116247s" podCreationTimestamp="2025-11-22 07:17:34 +0000 UTC" firstStartedPulling="2025-11-22 07:17:34.718380606 +0000 UTC m=+453.559003232" lastFinishedPulling="2025-11-22 07:17:36.148668 +0000 UTC m=+454.989290626" observedRunningTime="2025-11-22 07:17:37.198011461 +0000 UTC m=+456.038634077" watchObservedRunningTime="2025-11-22 07:17:37.20116247 +0000 UTC m=+456.041785096" Nov 22 07:17:39 crc kubenswrapper[4853]: I1122 07:17:39.479603 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/openshift-state-metrics-566fddb674-949qz"] Nov 22 07:17:39 crc kubenswrapper[4853]: I1122 07:17:39.481908 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-566fddb674-949qz" Nov 22 07:17:39 crc kubenswrapper[4853]: I1122 07:17:39.482873 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/node-exporter-th96f"] Nov 22 07:17:39 crc kubenswrapper[4853]: I1122 07:17:39.484195 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/node-exporter-th96f" Nov 22 07:17:39 crc kubenswrapper[4853]: I1122 07:17:39.484388 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-tls" Nov 22 07:17:39 crc kubenswrapper[4853]: I1122 07:17:39.489428 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-kube-rbac-proxy-config" Nov 22 07:17:39 crc kubenswrapper[4853]: I1122 07:17:39.490646 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-dockercfg-s9zr5" Nov 22 07:17:39 crc kubenswrapper[4853]: I1122 07:17:39.490655 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-dockercfg-5wdkj" Nov 22 07:17:39 crc kubenswrapper[4853]: I1122 07:17:39.493548 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config" Nov 22 07:17:39 crc kubenswrapper[4853]: I1122 07:17:39.497344 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-tls" Nov 22 07:17:39 crc kubenswrapper[4853]: I1122 07:17:39.499171 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/openshift-state-metrics-566fddb674-949qz"] Nov 22 07:17:39 crc kubenswrapper[4853]: I1122 07:17:39.544705 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/kube-state-metrics-777cb5bd5d-hj7h4"] Nov 22 07:17:39 crc kubenswrapper[4853]: I1122 07:17:39.546314 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-hj7h4" Nov 22 07:17:39 crc kubenswrapper[4853]: I1122 07:17:39.548329 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-tls" Nov 22 07:17:39 crc kubenswrapper[4853]: I1122 07:17:39.549028 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config" Nov 22 07:17:39 crc kubenswrapper[4853]: I1122 07:17:39.549107 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap" Nov 22 07:17:39 crc kubenswrapper[4853]: I1122 07:17:39.549184 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-dockercfg-nqwb9" Nov 22 07:17:39 crc kubenswrapper[4853]: I1122 07:17:39.562375 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/55914b5e-a464-4996-850c-aaf4d61800c8-sys\") pod \"node-exporter-th96f\" (UID: \"55914b5e-a464-4996-850c-aaf4d61800c8\") " pod="openshift-monitoring/node-exporter-th96f" Nov 22 07:17:39 crc kubenswrapper[4853]: I1122 07:17:39.562416 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/55914b5e-a464-4996-850c-aaf4d61800c8-root\") pod \"node-exporter-th96f\" (UID: \"55914b5e-a464-4996-850c-aaf4d61800c8\") " pod="openshift-monitoring/node-exporter-th96f" Nov 22 07:17:39 crc kubenswrapper[4853]: I1122 07:17:39.562507 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/a4197f2a-c240-4173-9afe-06b85e9598fe-openshift-state-metrics-tls\") pod \"openshift-state-metrics-566fddb674-949qz\" (UID: \"a4197f2a-c240-4173-9afe-06b85e9598fe\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-949qz" Nov 22 07:17:39 crc kubenswrapper[4853]: I1122 07:17:39.562540 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/55914b5e-a464-4996-850c-aaf4d61800c8-node-exporter-textfile\") pod \"node-exporter-th96f\" (UID: \"55914b5e-a464-4996-850c-aaf4d61800c8\") " pod="openshift-monitoring/node-exporter-th96f" Nov 22 07:17:39 crc kubenswrapper[4853]: I1122 07:17:39.562696 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/55914b5e-a464-4996-850c-aaf4d61800c8-node-exporter-tls\") pod \"node-exporter-th96f\" (UID: \"55914b5e-a464-4996-850c-aaf4d61800c8\") " pod="openshift-monitoring/node-exporter-th96f" Nov 22 07:17:39 crc kubenswrapper[4853]: I1122 07:17:39.562774 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/55914b5e-a464-4996-850c-aaf4d61800c8-metrics-client-ca\") pod \"node-exporter-th96f\" (UID: \"55914b5e-a464-4996-850c-aaf4d61800c8\") " pod="openshift-monitoring/node-exporter-th96f" Nov 22 07:17:39 crc kubenswrapper[4853]: I1122 07:17:39.562806 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/55914b5e-a464-4996-850c-aaf4d61800c8-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-th96f\" (UID: \"55914b5e-a464-4996-850c-aaf4d61800c8\") " pod="openshift-monitoring/node-exporter-th96f" Nov 22 07:17:39 crc kubenswrapper[4853]: I1122 07:17:39.562835 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/a4197f2a-c240-4173-9afe-06b85e9598fe-metrics-client-ca\") pod \"openshift-state-metrics-566fddb674-949qz\" (UID: \"a4197f2a-c240-4173-9afe-06b85e9598fe\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-949qz" Nov 22 07:17:39 crc kubenswrapper[4853]: I1122 07:17:39.562899 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/55914b5e-a464-4996-850c-aaf4d61800c8-node-exporter-wtmp\") pod \"node-exporter-th96f\" (UID: \"55914b5e-a464-4996-850c-aaf4d61800c8\") " pod="openshift-monitoring/node-exporter-th96f" Nov 22 07:17:39 crc kubenswrapper[4853]: I1122 07:17:39.562933 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r5krp\" (UniqueName: \"kubernetes.io/projected/55914b5e-a464-4996-850c-aaf4d61800c8-kube-api-access-r5krp\") pod \"node-exporter-th96f\" (UID: \"55914b5e-a464-4996-850c-aaf4d61800c8\") " pod="openshift-monitoring/node-exporter-th96f" Nov 22 07:17:39 crc kubenswrapper[4853]: I1122 07:17:39.562953 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/a4197f2a-c240-4173-9afe-06b85e9598fe-openshift-state-metrics-kube-rbac-proxy-config\") pod 
\"openshift-state-metrics-566fddb674-949qz\" (UID: \"a4197f2a-c240-4173-9afe-06b85e9598fe\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-949qz" Nov 22 07:17:39 crc kubenswrapper[4853]: I1122 07:17:39.562975 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fbxpp\" (UniqueName: \"kubernetes.io/projected/a4197f2a-c240-4173-9afe-06b85e9598fe-kube-api-access-fbxpp\") pod \"openshift-state-metrics-566fddb674-949qz\" (UID: \"a4197f2a-c240-4173-9afe-06b85e9598fe\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-949qz" Nov 22 07:17:39 crc kubenswrapper[4853]: I1122 07:17:39.576436 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/kube-state-metrics-777cb5bd5d-hj7h4"] Nov 22 07:17:39 crc kubenswrapper[4853]: I1122 07:17:39.664800 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/3a708b5b-c519-4aee-b9a8-0991b7f464b5-kube-state-metrics-tls\") pod \"kube-state-metrics-777cb5bd5d-hj7h4\" (UID: \"3a708b5b-c519-4aee-b9a8-0991b7f464b5\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-hj7h4" Nov 22 07:17:39 crc kubenswrapper[4853]: I1122 07:17:39.664866 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/55914b5e-a464-4996-850c-aaf4d61800c8-root\") pod \"node-exporter-th96f\" (UID: \"55914b5e-a464-4996-850c-aaf4d61800c8\") " pod="openshift-monitoring/node-exporter-th96f" Nov 22 07:17:39 crc kubenswrapper[4853]: I1122 07:17:39.664900 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/3a708b5b-c519-4aee-b9a8-0991b7f464b5-volume-directive-shadow\") pod \"kube-state-metrics-777cb5bd5d-hj7h4\" (UID: \"3a708b5b-c519-4aee-b9a8-0991b7f464b5\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-hj7h4" Nov 22 07:17:39 crc kubenswrapper[4853]: I1122 07:17:39.664930 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n5578\" (UniqueName: \"kubernetes.io/projected/3a708b5b-c519-4aee-b9a8-0991b7f464b5-kube-api-access-n5578\") pod \"kube-state-metrics-777cb5bd5d-hj7h4\" (UID: \"3a708b5b-c519-4aee-b9a8-0991b7f464b5\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-hj7h4" Nov 22 07:17:39 crc kubenswrapper[4853]: I1122 07:17:39.665051 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"root\" (UniqueName: \"kubernetes.io/host-path/55914b5e-a464-4996-850c-aaf4d61800c8-root\") pod \"node-exporter-th96f\" (UID: \"55914b5e-a464-4996-850c-aaf4d61800c8\") " pod="openshift-monitoring/node-exporter-th96f" Nov 22 07:17:39 crc kubenswrapper[4853]: I1122 07:17:39.665285 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/a4197f2a-c240-4173-9afe-06b85e9598fe-openshift-state-metrics-tls\") pod \"openshift-state-metrics-566fddb674-949qz\" (UID: \"a4197f2a-c240-4173-9afe-06b85e9598fe\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-949qz" Nov 22 07:17:39 crc kubenswrapper[4853]: I1122 07:17:39.665385 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-textfile\" (UniqueName: 
\"kubernetes.io/empty-dir/55914b5e-a464-4996-850c-aaf4d61800c8-node-exporter-textfile\") pod \"node-exporter-th96f\" (UID: \"55914b5e-a464-4996-850c-aaf4d61800c8\") " pod="openshift-monitoring/node-exporter-th96f" Nov 22 07:17:39 crc kubenswrapper[4853]: I1122 07:17:39.665440 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/55914b5e-a464-4996-850c-aaf4d61800c8-node-exporter-tls\") pod \"node-exporter-th96f\" (UID: \"55914b5e-a464-4996-850c-aaf4d61800c8\") " pod="openshift-monitoring/node-exporter-th96f" Nov 22 07:17:39 crc kubenswrapper[4853]: I1122 07:17:39.665476 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/3a708b5b-c519-4aee-b9a8-0991b7f464b5-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-777cb5bd5d-hj7h4\" (UID: \"3a708b5b-c519-4aee-b9a8-0991b7f464b5\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-hj7h4" Nov 22 07:17:39 crc kubenswrapper[4853]: I1122 07:17:39.665527 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/55914b5e-a464-4996-850c-aaf4d61800c8-metrics-client-ca\") pod \"node-exporter-th96f\" (UID: \"55914b5e-a464-4996-850c-aaf4d61800c8\") " pod="openshift-monitoring/node-exporter-th96f" Nov 22 07:17:39 crc kubenswrapper[4853]: I1122 07:17:39.665565 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/55914b5e-a464-4996-850c-aaf4d61800c8-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-th96f\" (UID: \"55914b5e-a464-4996-850c-aaf4d61800c8\") " pod="openshift-monitoring/node-exporter-th96f" Nov 22 07:17:39 crc kubenswrapper[4853]: I1122 07:17:39.665600 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/a4197f2a-c240-4173-9afe-06b85e9598fe-metrics-client-ca\") pod \"openshift-state-metrics-566fddb674-949qz\" (UID: \"a4197f2a-c240-4173-9afe-06b85e9598fe\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-949qz" Nov 22 07:17:39 crc kubenswrapper[4853]: I1122 07:17:39.665696 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/3a708b5b-c519-4aee-b9a8-0991b7f464b5-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-777cb5bd5d-hj7h4\" (UID: \"3a708b5b-c519-4aee-b9a8-0991b7f464b5\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-hj7h4" Nov 22 07:17:39 crc kubenswrapper[4853]: I1122 07:17:39.665733 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/55914b5e-a464-4996-850c-aaf4d61800c8-node-exporter-wtmp\") pod \"node-exporter-th96f\" (UID: \"55914b5e-a464-4996-850c-aaf4d61800c8\") " pod="openshift-monitoring/node-exporter-th96f" Nov 22 07:17:39 crc kubenswrapper[4853]: E1122 07:17:39.665730 4853 secret.go:188] Couldn't get secret openshift-monitoring/node-exporter-tls: secret "node-exporter-tls" not found Nov 22 07:17:39 crc kubenswrapper[4853]: I1122 07:17:39.665804 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/3a708b5b-c519-4aee-b9a8-0991b7f464b5-metrics-client-ca\") pod \"kube-state-metrics-777cb5bd5d-hj7h4\" (UID: \"3a708b5b-c519-4aee-b9a8-0991b7f464b5\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-hj7h4" Nov 22 07:17:39 crc kubenswrapper[4853]: I1122 07:17:39.665837 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r5krp\" (UniqueName: \"kubernetes.io/projected/55914b5e-a464-4996-850c-aaf4d61800c8-kube-api-access-r5krp\") pod \"node-exporter-th96f\" (UID: \"55914b5e-a464-4996-850c-aaf4d61800c8\") " pod="openshift-monitoring/node-exporter-th96f" Nov 22 07:17:39 crc kubenswrapper[4853]: E1122 07:17:39.665864 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/55914b5e-a464-4996-850c-aaf4d61800c8-node-exporter-tls podName:55914b5e-a464-4996-850c-aaf4d61800c8 nodeName:}" failed. No retries permitted until 2025-11-22 07:17:40.16583419 +0000 UTC m=+459.006456816 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "node-exporter-tls" (UniqueName: "kubernetes.io/secret/55914b5e-a464-4996-850c-aaf4d61800c8-node-exporter-tls") pod "node-exporter-th96f" (UID: "55914b5e-a464-4996-850c-aaf4d61800c8") : secret "node-exporter-tls" not found Nov 22 07:17:39 crc kubenswrapper[4853]: I1122 07:17:39.665903 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/a4197f2a-c240-4173-9afe-06b85e9598fe-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-566fddb674-949qz\" (UID: \"a4197f2a-c240-4173-9afe-06b85e9598fe\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-949qz" Nov 22 07:17:39 crc kubenswrapper[4853]: I1122 07:17:39.665966 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fbxpp\" (UniqueName: \"kubernetes.io/projected/a4197f2a-c240-4173-9afe-06b85e9598fe-kube-api-access-fbxpp\") pod \"openshift-state-metrics-566fddb674-949qz\" (UID: \"a4197f2a-c240-4173-9afe-06b85e9598fe\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-949qz" Nov 22 07:17:39 crc kubenswrapper[4853]: I1122 07:17:39.665986 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/55914b5e-a464-4996-850c-aaf4d61800c8-node-exporter-textfile\") pod \"node-exporter-th96f\" (UID: \"55914b5e-a464-4996-850c-aaf4d61800c8\") " pod="openshift-monitoring/node-exporter-th96f" Nov 22 07:17:39 crc kubenswrapper[4853]: I1122 07:17:39.666311 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/55914b5e-a464-4996-850c-aaf4d61800c8-sys\") pod \"node-exporter-th96f\" (UID: \"55914b5e-a464-4996-850c-aaf4d61800c8\") " pod="openshift-monitoring/node-exporter-th96f" Nov 22 07:17:39 crc kubenswrapper[4853]: I1122 07:17:39.666544 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/55914b5e-a464-4996-850c-aaf4d61800c8-node-exporter-wtmp\") pod \"node-exporter-th96f\" (UID: \"55914b5e-a464-4996-850c-aaf4d61800c8\") " pod="openshift-monitoring/node-exporter-th96f" Nov 22 07:17:39 crc kubenswrapper[4853]: I1122 07:17:39.666574 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: 
\"kubernetes.io/host-path/55914b5e-a464-4996-850c-aaf4d61800c8-sys\") pod \"node-exporter-th96f\" (UID: \"55914b5e-a464-4996-850c-aaf4d61800c8\") " pod="openshift-monitoring/node-exporter-th96f" Nov 22 07:17:39 crc kubenswrapper[4853]: I1122 07:17:39.666547 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/55914b5e-a464-4996-850c-aaf4d61800c8-metrics-client-ca\") pod \"node-exporter-th96f\" (UID: \"55914b5e-a464-4996-850c-aaf4d61800c8\") " pod="openshift-monitoring/node-exporter-th96f" Nov 22 07:17:39 crc kubenswrapper[4853]: I1122 07:17:39.668619 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/a4197f2a-c240-4173-9afe-06b85e9598fe-metrics-client-ca\") pod \"openshift-state-metrics-566fddb674-949qz\" (UID: \"a4197f2a-c240-4173-9afe-06b85e9598fe\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-949qz" Nov 22 07:17:39 crc kubenswrapper[4853]: I1122 07:17:39.675171 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/55914b5e-a464-4996-850c-aaf4d61800c8-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-th96f\" (UID: \"55914b5e-a464-4996-850c-aaf4d61800c8\") " pod="openshift-monitoring/node-exporter-th96f" Nov 22 07:17:39 crc kubenswrapper[4853]: I1122 07:17:39.678440 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/a4197f2a-c240-4173-9afe-06b85e9598fe-openshift-state-metrics-tls\") pod \"openshift-state-metrics-566fddb674-949qz\" (UID: \"a4197f2a-c240-4173-9afe-06b85e9598fe\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-949qz" Nov 22 07:17:39 crc kubenswrapper[4853]: I1122 07:17:39.683074 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/a4197f2a-c240-4173-9afe-06b85e9598fe-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-566fddb674-949qz\" (UID: \"a4197f2a-c240-4173-9afe-06b85e9598fe\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-949qz" Nov 22 07:17:39 crc kubenswrapper[4853]: I1122 07:17:39.684722 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r5krp\" (UniqueName: \"kubernetes.io/projected/55914b5e-a464-4996-850c-aaf4d61800c8-kube-api-access-r5krp\") pod \"node-exporter-th96f\" (UID: \"55914b5e-a464-4996-850c-aaf4d61800c8\") " pod="openshift-monitoring/node-exporter-th96f" Nov 22 07:17:39 crc kubenswrapper[4853]: I1122 07:17:39.686374 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fbxpp\" (UniqueName: \"kubernetes.io/projected/a4197f2a-c240-4173-9afe-06b85e9598fe-kube-api-access-fbxpp\") pod \"openshift-state-metrics-566fddb674-949qz\" (UID: \"a4197f2a-c240-4173-9afe-06b85e9598fe\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-949qz" Nov 22 07:17:39 crc kubenswrapper[4853]: I1122 07:17:39.772706 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/3a708b5b-c519-4aee-b9a8-0991b7f464b5-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-777cb5bd5d-hj7h4\" (UID: 
\"3a708b5b-c519-4aee-b9a8-0991b7f464b5\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-hj7h4" Nov 22 07:17:39 crc kubenswrapper[4853]: I1122 07:17:39.773611 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/3a708b5b-c519-4aee-b9a8-0991b7f464b5-metrics-client-ca\") pod \"kube-state-metrics-777cb5bd5d-hj7h4\" (UID: \"3a708b5b-c519-4aee-b9a8-0991b7f464b5\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-hj7h4" Nov 22 07:17:39 crc kubenswrapper[4853]: I1122 07:17:39.773840 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/3a708b5b-c519-4aee-b9a8-0991b7f464b5-kube-state-metrics-tls\") pod \"kube-state-metrics-777cb5bd5d-hj7h4\" (UID: \"3a708b5b-c519-4aee-b9a8-0991b7f464b5\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-hj7h4" Nov 22 07:17:39 crc kubenswrapper[4853]: I1122 07:17:39.774171 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/3a708b5b-c519-4aee-b9a8-0991b7f464b5-volume-directive-shadow\") pod \"kube-state-metrics-777cb5bd5d-hj7h4\" (UID: \"3a708b5b-c519-4aee-b9a8-0991b7f464b5\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-hj7h4" Nov 22 07:17:39 crc kubenswrapper[4853]: I1122 07:17:39.774307 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n5578\" (UniqueName: \"kubernetes.io/projected/3a708b5b-c519-4aee-b9a8-0991b7f464b5-kube-api-access-n5578\") pod \"kube-state-metrics-777cb5bd5d-hj7h4\" (UID: \"3a708b5b-c519-4aee-b9a8-0991b7f464b5\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-hj7h4" Nov 22 07:17:39 crc kubenswrapper[4853]: I1122 07:17:39.774782 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/3a708b5b-c519-4aee-b9a8-0991b7f464b5-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-777cb5bd5d-hj7h4\" (UID: \"3a708b5b-c519-4aee-b9a8-0991b7f464b5\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-hj7h4" Nov 22 07:17:39 crc kubenswrapper[4853]: I1122 07:17:39.774924 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/3a708b5b-c519-4aee-b9a8-0991b7f464b5-volume-directive-shadow\") pod \"kube-state-metrics-777cb5bd5d-hj7h4\" (UID: \"3a708b5b-c519-4aee-b9a8-0991b7f464b5\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-hj7h4" Nov 22 07:17:39 crc kubenswrapper[4853]: I1122 07:17:39.775210 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/3a708b5b-c519-4aee-b9a8-0991b7f464b5-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-777cb5bd5d-hj7h4\" (UID: \"3a708b5b-c519-4aee-b9a8-0991b7f464b5\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-hj7h4" Nov 22 07:17:39 crc kubenswrapper[4853]: I1122 07:17:39.775244 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/3a708b5b-c519-4aee-b9a8-0991b7f464b5-metrics-client-ca\") pod \"kube-state-metrics-777cb5bd5d-hj7h4\" (UID: \"3a708b5b-c519-4aee-b9a8-0991b7f464b5\") " 
pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-hj7h4" Nov 22 07:17:39 crc kubenswrapper[4853]: I1122 07:17:39.778002 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/3a708b5b-c519-4aee-b9a8-0991b7f464b5-kube-state-metrics-tls\") pod \"kube-state-metrics-777cb5bd5d-hj7h4\" (UID: \"3a708b5b-c519-4aee-b9a8-0991b7f464b5\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-hj7h4" Nov 22 07:17:39 crc kubenswrapper[4853]: I1122 07:17:39.782487 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/3a708b5b-c519-4aee-b9a8-0991b7f464b5-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-777cb5bd5d-hj7h4\" (UID: \"3a708b5b-c519-4aee-b9a8-0991b7f464b5\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-hj7h4" Nov 22 07:17:39 crc kubenswrapper[4853]: I1122 07:17:39.800881 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-566fddb674-949qz" Nov 22 07:17:39 crc kubenswrapper[4853]: I1122 07:17:39.814560 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n5578\" (UniqueName: \"kubernetes.io/projected/3a708b5b-c519-4aee-b9a8-0991b7f464b5-kube-api-access-n5578\") pod \"kube-state-metrics-777cb5bd5d-hj7h4\" (UID: \"3a708b5b-c519-4aee-b9a8-0991b7f464b5\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-hj7h4" Nov 22 07:17:39 crc kubenswrapper[4853]: I1122 07:17:39.864741 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-hj7h4" Nov 22 07:17:40 crc kubenswrapper[4853]: I1122 07:17:40.183936 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/55914b5e-a464-4996-850c-aaf4d61800c8-node-exporter-tls\") pod \"node-exporter-th96f\" (UID: \"55914b5e-a464-4996-850c-aaf4d61800c8\") " pod="openshift-monitoring/node-exporter-th96f" Nov 22 07:17:40 crc kubenswrapper[4853]: I1122 07:17:40.188505 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/55914b5e-a464-4996-850c-aaf4d61800c8-node-exporter-tls\") pod \"node-exporter-th96f\" (UID: \"55914b5e-a464-4996-850c-aaf4d61800c8\") " pod="openshift-monitoring/node-exporter-th96f" Nov 22 07:17:40 crc kubenswrapper[4853]: I1122 07:17:40.263598 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/openshift-state-metrics-566fddb674-949qz"] Nov 22 07:17:40 crc kubenswrapper[4853]: W1122 07:17:40.265636 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda4197f2a_c240_4173_9afe_06b85e9598fe.slice/crio-3a1ceeb40504d46c3b184248ca5222b62e1d370dca0a7dd6c9e8e8a61df4ae1f WatchSource:0}: Error finding container 3a1ceeb40504d46c3b184248ca5222b62e1d370dca0a7dd6c9e8e8a61df4ae1f: Status 404 returned error can't find the container with id 3a1ceeb40504d46c3b184248ca5222b62e1d370dca0a7dd6c9e8e8a61df4ae1f Nov 22 07:17:40 crc kubenswrapper[4853]: I1122 07:17:40.347624 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/kube-state-metrics-777cb5bd5d-hj7h4"] Nov 22 07:17:40 crc kubenswrapper[4853]: I1122 07:17:40.410683 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/node-exporter-th96f" Nov 22 07:17:40 crc kubenswrapper[4853]: W1122 07:17:40.433535 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod55914b5e_a464_4996_850c_aaf4d61800c8.slice/crio-59952eee68ba4a3ff509b42e1c8305227e3c41c1c37ffae95d315c45c2d40fd8 WatchSource:0}: Error finding container 59952eee68ba4a3ff509b42e1c8305227e3c41c1c37ffae95d315c45c2d40fd8: Status 404 returned error can't find the container with id 59952eee68ba4a3ff509b42e1c8305227e3c41c1c37ffae95d315c45c2d40fd8 Nov 22 07:17:40 crc kubenswrapper[4853]: I1122 07:17:40.617601 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Nov 22 07:17:40 crc kubenswrapper[4853]: I1122 07:17:40.621416 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Nov 22 07:17:40 crc kubenswrapper[4853]: I1122 07:17:40.624807 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy" Nov 22 07:17:40 crc kubenswrapper[4853]: I1122 07:17:40.625014 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-metric" Nov 22 07:17:40 crc kubenswrapper[4853]: I1122 07:17:40.625311 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-web-config" Nov 22 07:17:40 crc kubenswrapper[4853]: I1122 07:17:40.625446 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-web" Nov 22 07:17:40 crc kubenswrapper[4853]: I1122 07:17:40.625989 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-dockercfg-fb9kf" Nov 22 07:17:40 crc kubenswrapper[4853]: I1122 07:17:40.626159 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls" Nov 22 07:17:40 crc kubenswrapper[4853]: I1122 07:17:40.626997 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-generated" Nov 22 07:17:40 crc kubenswrapper[4853]: I1122 07:17:40.627486 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls-assets-0" Nov 22 07:17:40 crc kubenswrapper[4853]: I1122 07:17:40.635932 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Nov 22 07:17:40 crc kubenswrapper[4853]: I1122 07:17:40.638494 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"alertmanager-trusted-ca-bundle" Nov 22 07:17:40 crc kubenswrapper[4853]: I1122 07:17:40.693060 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/e2e2352f-fe33-40ab-83fa-91e356ce1693-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"e2e2352f-fe33-40ab-83fa-91e356ce1693\") " pod="openshift-monitoring/alertmanager-main-0" Nov 22 07:17:40 crc kubenswrapper[4853]: I1122 07:17:40.693295 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/e2e2352f-fe33-40ab-83fa-91e356ce1693-tls-assets\") pod \"alertmanager-main-0\" (UID: 
\"e2e2352f-fe33-40ab-83fa-91e356ce1693\") " pod="openshift-monitoring/alertmanager-main-0" Nov 22 07:17:40 crc kubenswrapper[4853]: I1122 07:17:40.693487 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/e2e2352f-fe33-40ab-83fa-91e356ce1693-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"e2e2352f-fe33-40ab-83fa-91e356ce1693\") " pod="openshift-monitoring/alertmanager-main-0" Nov 22 07:17:40 crc kubenswrapper[4853]: I1122 07:17:40.694068 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/e2e2352f-fe33-40ab-83fa-91e356ce1693-config-volume\") pod \"alertmanager-main-0\" (UID: \"e2e2352f-fe33-40ab-83fa-91e356ce1693\") " pod="openshift-monitoring/alertmanager-main-0" Nov 22 07:17:40 crc kubenswrapper[4853]: I1122 07:17:40.694211 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e2e2352f-fe33-40ab-83fa-91e356ce1693-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"e2e2352f-fe33-40ab-83fa-91e356ce1693\") " pod="openshift-monitoring/alertmanager-main-0" Nov 22 07:17:40 crc kubenswrapper[4853]: I1122 07:17:40.694626 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nvzvx\" (UniqueName: \"kubernetes.io/projected/e2e2352f-fe33-40ab-83fa-91e356ce1693-kube-api-access-nvzvx\") pod \"alertmanager-main-0\" (UID: \"e2e2352f-fe33-40ab-83fa-91e356ce1693\") " pod="openshift-monitoring/alertmanager-main-0" Nov 22 07:17:40 crc kubenswrapper[4853]: I1122 07:17:40.694868 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/e2e2352f-fe33-40ab-83fa-91e356ce1693-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"e2e2352f-fe33-40ab-83fa-91e356ce1693\") " pod="openshift-monitoring/alertmanager-main-0" Nov 22 07:17:40 crc kubenswrapper[4853]: I1122 07:17:40.695010 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/e2e2352f-fe33-40ab-83fa-91e356ce1693-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"e2e2352f-fe33-40ab-83fa-91e356ce1693\") " pod="openshift-monitoring/alertmanager-main-0" Nov 22 07:17:40 crc kubenswrapper[4853]: I1122 07:17:40.695177 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/e2e2352f-fe33-40ab-83fa-91e356ce1693-web-config\") pod \"alertmanager-main-0\" (UID: \"e2e2352f-fe33-40ab-83fa-91e356ce1693\") " pod="openshift-monitoring/alertmanager-main-0" Nov 22 07:17:40 crc kubenswrapper[4853]: I1122 07:17:40.695356 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/e2e2352f-fe33-40ab-83fa-91e356ce1693-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"e2e2352f-fe33-40ab-83fa-91e356ce1693\") " pod="openshift-monitoring/alertmanager-main-0" Nov 22 07:17:40 crc kubenswrapper[4853]: I1122 07:17:40.695510 4853 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/e2e2352f-fe33-40ab-83fa-91e356ce1693-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"e2e2352f-fe33-40ab-83fa-91e356ce1693\") " pod="openshift-monitoring/alertmanager-main-0" Nov 22 07:17:40 crc kubenswrapper[4853]: I1122 07:17:40.695677 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/e2e2352f-fe33-40ab-83fa-91e356ce1693-config-out\") pod \"alertmanager-main-0\" (UID: \"e2e2352f-fe33-40ab-83fa-91e356ce1693\") " pod="openshift-monitoring/alertmanager-main-0" Nov 22 07:17:40 crc kubenswrapper[4853]: I1122 07:17:40.797382 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/e2e2352f-fe33-40ab-83fa-91e356ce1693-config-out\") pod \"alertmanager-main-0\" (UID: \"e2e2352f-fe33-40ab-83fa-91e356ce1693\") " pod="openshift-monitoring/alertmanager-main-0" Nov 22 07:17:40 crc kubenswrapper[4853]: I1122 07:17:40.797995 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/e2e2352f-fe33-40ab-83fa-91e356ce1693-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"e2e2352f-fe33-40ab-83fa-91e356ce1693\") " pod="openshift-monitoring/alertmanager-main-0" Nov 22 07:17:40 crc kubenswrapper[4853]: I1122 07:17:40.798025 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/e2e2352f-fe33-40ab-83fa-91e356ce1693-tls-assets\") pod \"alertmanager-main-0\" (UID: \"e2e2352f-fe33-40ab-83fa-91e356ce1693\") " pod="openshift-monitoring/alertmanager-main-0" Nov 22 07:17:40 crc kubenswrapper[4853]: I1122 07:17:40.798058 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/e2e2352f-fe33-40ab-83fa-91e356ce1693-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"e2e2352f-fe33-40ab-83fa-91e356ce1693\") " pod="openshift-monitoring/alertmanager-main-0" Nov 22 07:17:40 crc kubenswrapper[4853]: I1122 07:17:40.798091 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/e2e2352f-fe33-40ab-83fa-91e356ce1693-config-volume\") pod \"alertmanager-main-0\" (UID: \"e2e2352f-fe33-40ab-83fa-91e356ce1693\") " pod="openshift-monitoring/alertmanager-main-0" Nov 22 07:17:40 crc kubenswrapper[4853]: I1122 07:17:40.798108 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e2e2352f-fe33-40ab-83fa-91e356ce1693-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"e2e2352f-fe33-40ab-83fa-91e356ce1693\") " pod="openshift-monitoring/alertmanager-main-0" Nov 22 07:17:40 crc kubenswrapper[4853]: I1122 07:17:40.798135 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nvzvx\" (UniqueName: \"kubernetes.io/projected/e2e2352f-fe33-40ab-83fa-91e356ce1693-kube-api-access-nvzvx\") pod \"alertmanager-main-0\" (UID: \"e2e2352f-fe33-40ab-83fa-91e356ce1693\") " pod="openshift-monitoring/alertmanager-main-0" Nov 22 07:17:40 crc kubenswrapper[4853]: I1122 07:17:40.798153 4853 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/e2e2352f-fe33-40ab-83fa-91e356ce1693-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"e2e2352f-fe33-40ab-83fa-91e356ce1693\") " pod="openshift-monitoring/alertmanager-main-0" Nov 22 07:17:40 crc kubenswrapper[4853]: I1122 07:17:40.798180 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/e2e2352f-fe33-40ab-83fa-91e356ce1693-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"e2e2352f-fe33-40ab-83fa-91e356ce1693\") " pod="openshift-monitoring/alertmanager-main-0" Nov 22 07:17:40 crc kubenswrapper[4853]: I1122 07:17:40.798197 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/e2e2352f-fe33-40ab-83fa-91e356ce1693-web-config\") pod \"alertmanager-main-0\" (UID: \"e2e2352f-fe33-40ab-83fa-91e356ce1693\") " pod="openshift-monitoring/alertmanager-main-0" Nov 22 07:17:40 crc kubenswrapper[4853]: I1122 07:17:40.798222 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/e2e2352f-fe33-40ab-83fa-91e356ce1693-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"e2e2352f-fe33-40ab-83fa-91e356ce1693\") " pod="openshift-monitoring/alertmanager-main-0" Nov 22 07:17:40 crc kubenswrapper[4853]: I1122 07:17:40.798250 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/e2e2352f-fe33-40ab-83fa-91e356ce1693-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"e2e2352f-fe33-40ab-83fa-91e356ce1693\") " pod="openshift-monitoring/alertmanager-main-0" Nov 22 07:17:40 crc kubenswrapper[4853]: I1122 07:17:40.800454 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/e2e2352f-fe33-40ab-83fa-91e356ce1693-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"e2e2352f-fe33-40ab-83fa-91e356ce1693\") " pod="openshift-monitoring/alertmanager-main-0" Nov 22 07:17:40 crc kubenswrapper[4853]: I1122 07:17:40.800632 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e2e2352f-fe33-40ab-83fa-91e356ce1693-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"e2e2352f-fe33-40ab-83fa-91e356ce1693\") " pod="openshift-monitoring/alertmanager-main-0" Nov 22 07:17:40 crc kubenswrapper[4853]: I1122 07:17:40.801331 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/e2e2352f-fe33-40ab-83fa-91e356ce1693-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"e2e2352f-fe33-40ab-83fa-91e356ce1693\") " pod="openshift-monitoring/alertmanager-main-0" Nov 22 07:17:40 crc kubenswrapper[4853]: I1122 07:17:40.806402 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/e2e2352f-fe33-40ab-83fa-91e356ce1693-config-volume\") pod \"alertmanager-main-0\" (UID: \"e2e2352f-fe33-40ab-83fa-91e356ce1693\") " pod="openshift-monitoring/alertmanager-main-0" Nov 22 07:17:40 crc kubenswrapper[4853]: I1122 
07:17:40.807234 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/e2e2352f-fe33-40ab-83fa-91e356ce1693-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"e2e2352f-fe33-40ab-83fa-91e356ce1693\") " pod="openshift-monitoring/alertmanager-main-0" Nov 22 07:17:40 crc kubenswrapper[4853]: I1122 07:17:40.807642 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/e2e2352f-fe33-40ab-83fa-91e356ce1693-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"e2e2352f-fe33-40ab-83fa-91e356ce1693\") " pod="openshift-monitoring/alertmanager-main-0" Nov 22 07:17:40 crc kubenswrapper[4853]: I1122 07:17:40.808362 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/e2e2352f-fe33-40ab-83fa-91e356ce1693-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"e2e2352f-fe33-40ab-83fa-91e356ce1693\") " pod="openshift-monitoring/alertmanager-main-0" Nov 22 07:17:40 crc kubenswrapper[4853]: I1122 07:17:40.811728 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/e2e2352f-fe33-40ab-83fa-91e356ce1693-config-out\") pod \"alertmanager-main-0\" (UID: \"e2e2352f-fe33-40ab-83fa-91e356ce1693\") " pod="openshift-monitoring/alertmanager-main-0" Nov 22 07:17:40 crc kubenswrapper[4853]: I1122 07:17:40.816540 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/e2e2352f-fe33-40ab-83fa-91e356ce1693-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"e2e2352f-fe33-40ab-83fa-91e356ce1693\") " pod="openshift-monitoring/alertmanager-main-0" Nov 22 07:17:40 crc kubenswrapper[4853]: I1122 07:17:40.816588 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/e2e2352f-fe33-40ab-83fa-91e356ce1693-web-config\") pod \"alertmanager-main-0\" (UID: \"e2e2352f-fe33-40ab-83fa-91e356ce1693\") " pod="openshift-monitoring/alertmanager-main-0" Nov 22 07:17:40 crc kubenswrapper[4853]: I1122 07:17:40.816920 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/e2e2352f-fe33-40ab-83fa-91e356ce1693-tls-assets\") pod \"alertmanager-main-0\" (UID: \"e2e2352f-fe33-40ab-83fa-91e356ce1693\") " pod="openshift-monitoring/alertmanager-main-0" Nov 22 07:17:40 crc kubenswrapper[4853]: I1122 07:17:40.824026 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nvzvx\" (UniqueName: \"kubernetes.io/projected/e2e2352f-fe33-40ab-83fa-91e356ce1693-kube-api-access-nvzvx\") pod \"alertmanager-main-0\" (UID: \"e2e2352f-fe33-40ab-83fa-91e356ce1693\") " pod="openshift-monitoring/alertmanager-main-0" Nov 22 07:17:40 crc kubenswrapper[4853]: I1122 07:17:40.942741 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Nov 22 07:17:41 crc kubenswrapper[4853]: I1122 07:17:41.173636 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Nov 22 07:17:41 crc kubenswrapper[4853]: W1122 07:17:41.180021 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode2e2352f_fe33_40ab_83fa_91e356ce1693.slice/crio-7031a2865a231b0a4492d307f2ecc789b2c092c28dc43ccb43b5acdd810934a1 WatchSource:0}: Error finding container 7031a2865a231b0a4492d307f2ecc789b2c092c28dc43ccb43b5acdd810934a1: Status 404 returned error can't find the container with id 7031a2865a231b0a4492d307f2ecc789b2c092c28dc43ccb43b5acdd810934a1 Nov 22 07:17:41 crc kubenswrapper[4853]: I1122 07:17:41.193633 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-hj7h4" event={"ID":"3a708b5b-c519-4aee-b9a8-0991b7f464b5","Type":"ContainerStarted","Data":"49831b23f62489896ab96979df3340046d6c2679f2912b54f680afc975704fe4"} Nov 22 07:17:41 crc kubenswrapper[4853]: I1122 07:17:41.195036 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-th96f" event={"ID":"55914b5e-a464-4996-850c-aaf4d61800c8","Type":"ContainerStarted","Data":"59952eee68ba4a3ff509b42e1c8305227e3c41c1c37ffae95d315c45c2d40fd8"} Nov 22 07:17:41 crc kubenswrapper[4853]: I1122 07:17:41.197013 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-949qz" event={"ID":"a4197f2a-c240-4173-9afe-06b85e9598fe","Type":"ContainerStarted","Data":"b89ab993c6859c6a4d75ac198fbde17690c11c146c939df7480abca52ddefe44"} Nov 22 07:17:41 crc kubenswrapper[4853]: I1122 07:17:41.197111 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-949qz" event={"ID":"a4197f2a-c240-4173-9afe-06b85e9598fe","Type":"ContainerStarted","Data":"bd90d14a14125465cbaeb5e14bd040214833942ba7b05a80cbd0d3845f9d1f8d"} Nov 22 07:17:41 crc kubenswrapper[4853]: I1122 07:17:41.197168 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-949qz" event={"ID":"a4197f2a-c240-4173-9afe-06b85e9598fe","Type":"ContainerStarted","Data":"3a1ceeb40504d46c3b184248ca5222b62e1d370dca0a7dd6c9e8e8a61df4ae1f"} Nov 22 07:17:41 crc kubenswrapper[4853]: I1122 07:17:41.198111 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"e2e2352f-fe33-40ab-83fa-91e356ce1693","Type":"ContainerStarted","Data":"7031a2865a231b0a4492d307f2ecc789b2c092c28dc43ccb43b5acdd810934a1"} Nov 22 07:17:41 crc kubenswrapper[4853]: I1122 07:17:41.522100 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/thanos-querier-57797c7b65-9s8jq"] Nov 22 07:17:41 crc kubenswrapper[4853]: I1122 07:17:41.526303 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/thanos-querier-57797c7b65-9s8jq" Nov 22 07:17:41 crc kubenswrapper[4853]: I1122 07:17:41.552767 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-web" Nov 22 07:17:41 crc kubenswrapper[4853]: I1122 07:17:41.553179 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-dockercfg-lpvkg" Nov 22 07:17:41 crc kubenswrapper[4853]: I1122 07:17:41.553356 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy" Nov 22 07:17:41 crc kubenswrapper[4853]: I1122 07:17:41.553522 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-grpc-tls-8lh7llubl2l7o" Nov 22 07:17:41 crc kubenswrapper[4853]: I1122 07:17:41.554373 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-rules" Nov 22 07:17:41 crc kubenswrapper[4853]: I1122 07:17:41.559396 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-metrics" Nov 22 07:17:41 crc kubenswrapper[4853]: I1122 07:17:41.562209 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-tls" Nov 22 07:17:41 crc kubenswrapper[4853]: I1122 07:17:41.568097 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/thanos-querier-57797c7b65-9s8jq"] Nov 22 07:17:41 crc kubenswrapper[4853]: I1122 07:17:41.613351 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/4a6ccfdc-cb2e-419d-931f-723fd4895077-secret-grpc-tls\") pod \"thanos-querier-57797c7b65-9s8jq\" (UID: \"4a6ccfdc-cb2e-419d-931f-723fd4895077\") " pod="openshift-monitoring/thanos-querier-57797c7b65-9s8jq" Nov 22 07:17:41 crc kubenswrapper[4853]: I1122 07:17:41.613925 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/4a6ccfdc-cb2e-419d-931f-723fd4895077-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-57797c7b65-9s8jq\" (UID: \"4a6ccfdc-cb2e-419d-931f-723fd4895077\") " pod="openshift-monitoring/thanos-querier-57797c7b65-9s8jq" Nov 22 07:17:41 crc kubenswrapper[4853]: I1122 07:17:41.613977 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/4a6ccfdc-cb2e-419d-931f-723fd4895077-metrics-client-ca\") pod \"thanos-querier-57797c7b65-9s8jq\" (UID: \"4a6ccfdc-cb2e-419d-931f-723fd4895077\") " pod="openshift-monitoring/thanos-querier-57797c7b65-9s8jq" Nov 22 07:17:41 crc kubenswrapper[4853]: I1122 07:17:41.614007 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/4a6ccfdc-cb2e-419d-931f-723fd4895077-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-57797c7b65-9s8jq\" (UID: \"4a6ccfdc-cb2e-419d-931f-723fd4895077\") " pod="openshift-monitoring/thanos-querier-57797c7b65-9s8jq" Nov 22 07:17:41 crc kubenswrapper[4853]: I1122 07:17:41.614031 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-drdb5\" (UniqueName: \"kubernetes.io/projected/4a6ccfdc-cb2e-419d-931f-723fd4895077-kube-api-access-drdb5\") pod \"thanos-querier-57797c7b65-9s8jq\" (UID: \"4a6ccfdc-cb2e-419d-931f-723fd4895077\") " pod="openshift-monitoring/thanos-querier-57797c7b65-9s8jq" Nov 22 07:17:41 crc kubenswrapper[4853]: I1122 07:17:41.614104 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/4a6ccfdc-cb2e-419d-931f-723fd4895077-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-57797c7b65-9s8jq\" (UID: \"4a6ccfdc-cb2e-419d-931f-723fd4895077\") " pod="openshift-monitoring/thanos-querier-57797c7b65-9s8jq" Nov 22 07:17:41 crc kubenswrapper[4853]: I1122 07:17:41.614130 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/4a6ccfdc-cb2e-419d-931f-723fd4895077-secret-thanos-querier-tls\") pod \"thanos-querier-57797c7b65-9s8jq\" (UID: \"4a6ccfdc-cb2e-419d-931f-723fd4895077\") " pod="openshift-monitoring/thanos-querier-57797c7b65-9s8jq" Nov 22 07:17:41 crc kubenswrapper[4853]: I1122 07:17:41.614178 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/4a6ccfdc-cb2e-419d-931f-723fd4895077-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-57797c7b65-9s8jq\" (UID: \"4a6ccfdc-cb2e-419d-931f-723fd4895077\") " pod="openshift-monitoring/thanos-querier-57797c7b65-9s8jq" Nov 22 07:17:41 crc kubenswrapper[4853]: I1122 07:17:41.715900 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/4a6ccfdc-cb2e-419d-931f-723fd4895077-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-57797c7b65-9s8jq\" (UID: \"4a6ccfdc-cb2e-419d-931f-723fd4895077\") " pod="openshift-monitoring/thanos-querier-57797c7b65-9s8jq" Nov 22 07:17:41 crc kubenswrapper[4853]: I1122 07:17:41.715966 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/4a6ccfdc-cb2e-419d-931f-723fd4895077-secret-thanos-querier-tls\") pod \"thanos-querier-57797c7b65-9s8jq\" (UID: \"4a6ccfdc-cb2e-419d-931f-723fd4895077\") " pod="openshift-monitoring/thanos-querier-57797c7b65-9s8jq" Nov 22 07:17:41 crc kubenswrapper[4853]: I1122 07:17:41.716017 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/4a6ccfdc-cb2e-419d-931f-723fd4895077-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-57797c7b65-9s8jq\" (UID: \"4a6ccfdc-cb2e-419d-931f-723fd4895077\") " pod="openshift-monitoring/thanos-querier-57797c7b65-9s8jq" Nov 22 07:17:41 crc kubenswrapper[4853]: I1122 07:17:41.716052 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/4a6ccfdc-cb2e-419d-931f-723fd4895077-secret-grpc-tls\") pod \"thanos-querier-57797c7b65-9s8jq\" (UID: \"4a6ccfdc-cb2e-419d-931f-723fd4895077\") " pod="openshift-monitoring/thanos-querier-57797c7b65-9s8jq" Nov 22 07:17:41 crc kubenswrapper[4853]: I1122 07:17:41.716089 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/4a6ccfdc-cb2e-419d-931f-723fd4895077-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-57797c7b65-9s8jq\" (UID: \"4a6ccfdc-cb2e-419d-931f-723fd4895077\") " pod="openshift-monitoring/thanos-querier-57797c7b65-9s8jq" Nov 22 07:17:41 crc kubenswrapper[4853]: I1122 07:17:41.716123 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/4a6ccfdc-cb2e-419d-931f-723fd4895077-metrics-client-ca\") pod \"thanos-querier-57797c7b65-9s8jq\" (UID: \"4a6ccfdc-cb2e-419d-931f-723fd4895077\") " pod="openshift-monitoring/thanos-querier-57797c7b65-9s8jq" Nov 22 07:17:41 crc kubenswrapper[4853]: I1122 07:17:41.716148 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/4a6ccfdc-cb2e-419d-931f-723fd4895077-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-57797c7b65-9s8jq\" (UID: \"4a6ccfdc-cb2e-419d-931f-723fd4895077\") " pod="openshift-monitoring/thanos-querier-57797c7b65-9s8jq" Nov 22 07:17:41 crc kubenswrapper[4853]: I1122 07:17:41.716174 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-drdb5\" (UniqueName: \"kubernetes.io/projected/4a6ccfdc-cb2e-419d-931f-723fd4895077-kube-api-access-drdb5\") pod \"thanos-querier-57797c7b65-9s8jq\" (UID: \"4a6ccfdc-cb2e-419d-931f-723fd4895077\") " pod="openshift-monitoring/thanos-querier-57797c7b65-9s8jq" Nov 22 07:17:41 crc kubenswrapper[4853]: I1122 07:17:41.718462 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/4a6ccfdc-cb2e-419d-931f-723fd4895077-metrics-client-ca\") pod \"thanos-querier-57797c7b65-9s8jq\" (UID: \"4a6ccfdc-cb2e-419d-931f-723fd4895077\") " pod="openshift-monitoring/thanos-querier-57797c7b65-9s8jq" Nov 22 07:17:41 crc kubenswrapper[4853]: I1122 07:17:41.723310 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/4a6ccfdc-cb2e-419d-931f-723fd4895077-secret-grpc-tls\") pod \"thanos-querier-57797c7b65-9s8jq\" (UID: \"4a6ccfdc-cb2e-419d-931f-723fd4895077\") " pod="openshift-monitoring/thanos-querier-57797c7b65-9s8jq" Nov 22 07:17:41 crc kubenswrapper[4853]: I1122 07:17:41.723298 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/4a6ccfdc-cb2e-419d-931f-723fd4895077-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-57797c7b65-9s8jq\" (UID: \"4a6ccfdc-cb2e-419d-931f-723fd4895077\") " pod="openshift-monitoring/thanos-querier-57797c7b65-9s8jq" Nov 22 07:17:41 crc kubenswrapper[4853]: I1122 07:17:41.723874 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/4a6ccfdc-cb2e-419d-931f-723fd4895077-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-57797c7b65-9s8jq\" (UID: \"4a6ccfdc-cb2e-419d-931f-723fd4895077\") " pod="openshift-monitoring/thanos-querier-57797c7b65-9s8jq" Nov 22 07:17:41 crc kubenswrapper[4853]: I1122 07:17:41.724281 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-tls\" (UniqueName: 
\"kubernetes.io/secret/4a6ccfdc-cb2e-419d-931f-723fd4895077-secret-thanos-querier-tls\") pod \"thanos-querier-57797c7b65-9s8jq\" (UID: \"4a6ccfdc-cb2e-419d-931f-723fd4895077\") " pod="openshift-monitoring/thanos-querier-57797c7b65-9s8jq" Nov 22 07:17:41 crc kubenswrapper[4853]: I1122 07:17:41.725687 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/4a6ccfdc-cb2e-419d-931f-723fd4895077-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-57797c7b65-9s8jq\" (UID: \"4a6ccfdc-cb2e-419d-931f-723fd4895077\") " pod="openshift-monitoring/thanos-querier-57797c7b65-9s8jq" Nov 22 07:17:41 crc kubenswrapper[4853]: I1122 07:17:41.729480 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/4a6ccfdc-cb2e-419d-931f-723fd4895077-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-57797c7b65-9s8jq\" (UID: \"4a6ccfdc-cb2e-419d-931f-723fd4895077\") " pod="openshift-monitoring/thanos-querier-57797c7b65-9s8jq" Nov 22 07:17:41 crc kubenswrapper[4853]: I1122 07:17:41.734737 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-drdb5\" (UniqueName: \"kubernetes.io/projected/4a6ccfdc-cb2e-419d-931f-723fd4895077-kube-api-access-drdb5\") pod \"thanos-querier-57797c7b65-9s8jq\" (UID: \"4a6ccfdc-cb2e-419d-931f-723fd4895077\") " pod="openshift-monitoring/thanos-querier-57797c7b65-9s8jq" Nov 22 07:17:41 crc kubenswrapper[4853]: I1122 07:17:41.877284 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/thanos-querier-57797c7b65-9s8jq" Nov 22 07:17:43 crc kubenswrapper[4853]: I1122 07:17:43.275322 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/thanos-querier-57797c7b65-9s8jq"] Nov 22 07:17:43 crc kubenswrapper[4853]: W1122 07:17:43.514871 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4a6ccfdc_cb2e_419d_931f_723fd4895077.slice/crio-ccea469f35efaef3658a302023eca0dc7d1c48d5c276bb26eebafd5a373e2e9a WatchSource:0}: Error finding container ccea469f35efaef3658a302023eca0dc7d1c48d5c276bb26eebafd5a373e2e9a: Status 404 returned error can't find the container with id ccea469f35efaef3658a302023eca0dc7d1c48d5c276bb26eebafd5a373e2e9a Nov 22 07:17:44 crc kubenswrapper[4853]: I1122 07:17:44.236617 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-57797c7b65-9s8jq" event={"ID":"4a6ccfdc-cb2e-419d-931f-723fd4895077","Type":"ContainerStarted","Data":"ccea469f35efaef3658a302023eca0dc7d1c48d5c276bb26eebafd5a373e2e9a"} Nov 22 07:17:44 crc kubenswrapper[4853]: I1122 07:17:44.241245 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-hj7h4" event={"ID":"3a708b5b-c519-4aee-b9a8-0991b7f464b5","Type":"ContainerStarted","Data":"ba841c27692ccd52388958a8770a974385cc35a2bc1fb06aeb99df19bbf49554"} Nov 22 07:17:44 crc kubenswrapper[4853]: I1122 07:17:44.241287 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-hj7h4" event={"ID":"3a708b5b-c519-4aee-b9a8-0991b7f464b5","Type":"ContainerStarted","Data":"93230c7bce6d949104c6eec48da9693994e3d0442fa940faa3b9a5fa87ee944b"} Nov 22 07:17:44 crc kubenswrapper[4853]: I1122 07:17:44.241323 4853 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-hj7h4" event={"ID":"3a708b5b-c519-4aee-b9a8-0991b7f464b5","Type":"ContainerStarted","Data":"53f38da2d29ea7d1014bb0e78cd91950597ed519578aaac787ed0ab07e4c7f21"} Nov 22 07:17:44 crc kubenswrapper[4853]: I1122 07:17:44.256627 4853 generic.go:334] "Generic (PLEG): container finished" podID="55914b5e-a464-4996-850c-aaf4d61800c8" containerID="b62ecdbdc7905c1f1305fb4919ee0e53812eaafa2e5c3db2c6b5dba4d8dd56ce" exitCode=0 Nov 22 07:17:44 crc kubenswrapper[4853]: I1122 07:17:44.256790 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-th96f" event={"ID":"55914b5e-a464-4996-850c-aaf4d61800c8","Type":"ContainerDied","Data":"b62ecdbdc7905c1f1305fb4919ee0e53812eaafa2e5c3db2c6b5dba4d8dd56ce"} Nov 22 07:17:44 crc kubenswrapper[4853]: I1122 07:17:44.267721 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-hj7h4" podStartSLOduration=2.13428943 podStartE2EDuration="5.267699755s" podCreationTimestamp="2025-11-22 07:17:39 +0000 UTC" firstStartedPulling="2025-11-22 07:17:40.358995509 +0000 UTC m=+459.199618125" lastFinishedPulling="2025-11-22 07:17:43.492405824 +0000 UTC m=+462.333028450" observedRunningTime="2025-11-22 07:17:44.26542538 +0000 UTC m=+463.106048006" watchObservedRunningTime="2025-11-22 07:17:44.267699755 +0000 UTC m=+463.108322381" Nov 22 07:17:44 crc kubenswrapper[4853]: I1122 07:17:44.280222 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-949qz" event={"ID":"a4197f2a-c240-4173-9afe-06b85e9598fe","Type":"ContainerStarted","Data":"0870833274d536759bb00780450fab8ce14c09f3d2f840fc52d152fc25d91993"} Nov 22 07:17:44 crc kubenswrapper[4853]: I1122 07:17:44.310939 4853 generic.go:334] "Generic (PLEG): container finished" podID="e2e2352f-fe33-40ab-83fa-91e356ce1693" containerID="7a633f1c267f8fbfc5730ec0682e44a4654695d02e8b11badaca718210ddc07a" exitCode=0 Nov 22 07:17:44 crc kubenswrapper[4853]: I1122 07:17:44.311006 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"e2e2352f-fe33-40ab-83fa-91e356ce1693","Type":"ContainerDied","Data":"7a633f1c267f8fbfc5730ec0682e44a4654695d02e8b11badaca718210ddc07a"} Nov 22 07:17:44 crc kubenswrapper[4853]: I1122 07:17:44.343781 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/openshift-state-metrics-566fddb674-949qz" podStartSLOduration=3.379333045 podStartE2EDuration="5.343735203s" podCreationTimestamp="2025-11-22 07:17:39 +0000 UTC" firstStartedPulling="2025-11-22 07:17:41.111696838 +0000 UTC m=+459.952319464" lastFinishedPulling="2025-11-22 07:17:43.076098996 +0000 UTC m=+461.916721622" observedRunningTime="2025-11-22 07:17:44.336731444 +0000 UTC m=+463.177354080" watchObservedRunningTime="2025-11-22 07:17:44.343735203 +0000 UTC m=+463.184357829" Nov 22 07:17:44 crc kubenswrapper[4853]: I1122 07:17:44.376908 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-75447c4646-f42jp"] Nov 22 07:17:44 crc kubenswrapper[4853]: I1122 07:17:44.378008 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-75447c4646-f42jp" Nov 22 07:17:44 crc kubenswrapper[4853]: I1122 07:17:44.385533 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-75447c4646-f42jp"] Nov 22 07:17:44 crc kubenswrapper[4853]: I1122 07:17:44.493708 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/d7614e6a-b9fe-4e08-9a68-28ea9b652739-console-serving-cert\") pod \"console-75447c4646-f42jp\" (UID: \"d7614e6a-b9fe-4e08-9a68-28ea9b652739\") " pod="openshift-console/console-75447c4646-f42jp" Nov 22 07:17:44 crc kubenswrapper[4853]: I1122 07:17:44.494033 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/d7614e6a-b9fe-4e08-9a68-28ea9b652739-console-config\") pod \"console-75447c4646-f42jp\" (UID: \"d7614e6a-b9fe-4e08-9a68-28ea9b652739\") " pod="openshift-console/console-75447c4646-f42jp" Nov 22 07:17:44 crc kubenswrapper[4853]: I1122 07:17:44.494280 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d7614e6a-b9fe-4e08-9a68-28ea9b652739-trusted-ca-bundle\") pod \"console-75447c4646-f42jp\" (UID: \"d7614e6a-b9fe-4e08-9a68-28ea9b652739\") " pod="openshift-console/console-75447c4646-f42jp" Nov 22 07:17:44 crc kubenswrapper[4853]: I1122 07:17:44.494352 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-npql6\" (UniqueName: \"kubernetes.io/projected/d7614e6a-b9fe-4e08-9a68-28ea9b652739-kube-api-access-npql6\") pod \"console-75447c4646-f42jp\" (UID: \"d7614e6a-b9fe-4e08-9a68-28ea9b652739\") " pod="openshift-console/console-75447c4646-f42jp" Nov 22 07:17:44 crc kubenswrapper[4853]: I1122 07:17:44.494395 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d7614e6a-b9fe-4e08-9a68-28ea9b652739-service-ca\") pod \"console-75447c4646-f42jp\" (UID: \"d7614e6a-b9fe-4e08-9a68-28ea9b652739\") " pod="openshift-console/console-75447c4646-f42jp" Nov 22 07:17:44 crc kubenswrapper[4853]: I1122 07:17:44.494423 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/d7614e6a-b9fe-4e08-9a68-28ea9b652739-oauth-serving-cert\") pod \"console-75447c4646-f42jp\" (UID: \"d7614e6a-b9fe-4e08-9a68-28ea9b652739\") " pod="openshift-console/console-75447c4646-f42jp" Nov 22 07:17:44 crc kubenswrapper[4853]: I1122 07:17:44.494470 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/d7614e6a-b9fe-4e08-9a68-28ea9b652739-console-oauth-config\") pod \"console-75447c4646-f42jp\" (UID: \"d7614e6a-b9fe-4e08-9a68-28ea9b652739\") " pod="openshift-console/console-75447c4646-f42jp" Nov 22 07:17:44 crc kubenswrapper[4853]: I1122 07:17:44.595829 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-npql6\" (UniqueName: \"kubernetes.io/projected/d7614e6a-b9fe-4e08-9a68-28ea9b652739-kube-api-access-npql6\") pod \"console-75447c4646-f42jp\" (UID: \"d7614e6a-b9fe-4e08-9a68-28ea9b652739\") " pod="openshift-console/console-75447c4646-f42jp" Nov 22 07:17:44 
crc kubenswrapper[4853]: I1122 07:17:44.595911 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d7614e6a-b9fe-4e08-9a68-28ea9b652739-service-ca\") pod \"console-75447c4646-f42jp\" (UID: \"d7614e6a-b9fe-4e08-9a68-28ea9b652739\") " pod="openshift-console/console-75447c4646-f42jp" Nov 22 07:17:44 crc kubenswrapper[4853]: I1122 07:17:44.595941 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/d7614e6a-b9fe-4e08-9a68-28ea9b652739-oauth-serving-cert\") pod \"console-75447c4646-f42jp\" (UID: \"d7614e6a-b9fe-4e08-9a68-28ea9b652739\") " pod="openshift-console/console-75447c4646-f42jp" Nov 22 07:17:44 crc kubenswrapper[4853]: I1122 07:17:44.595982 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/d7614e6a-b9fe-4e08-9a68-28ea9b652739-console-oauth-config\") pod \"console-75447c4646-f42jp\" (UID: \"d7614e6a-b9fe-4e08-9a68-28ea9b652739\") " pod="openshift-console/console-75447c4646-f42jp" Nov 22 07:17:44 crc kubenswrapper[4853]: I1122 07:17:44.596036 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/d7614e6a-b9fe-4e08-9a68-28ea9b652739-console-serving-cert\") pod \"console-75447c4646-f42jp\" (UID: \"d7614e6a-b9fe-4e08-9a68-28ea9b652739\") " pod="openshift-console/console-75447c4646-f42jp" Nov 22 07:17:44 crc kubenswrapper[4853]: I1122 07:17:44.596064 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/d7614e6a-b9fe-4e08-9a68-28ea9b652739-console-config\") pod \"console-75447c4646-f42jp\" (UID: \"d7614e6a-b9fe-4e08-9a68-28ea9b652739\") " pod="openshift-console/console-75447c4646-f42jp" Nov 22 07:17:44 crc kubenswrapper[4853]: I1122 07:17:44.596130 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d7614e6a-b9fe-4e08-9a68-28ea9b652739-trusted-ca-bundle\") pod \"console-75447c4646-f42jp\" (UID: \"d7614e6a-b9fe-4e08-9a68-28ea9b652739\") " pod="openshift-console/console-75447c4646-f42jp" Nov 22 07:17:44 crc kubenswrapper[4853]: I1122 07:17:44.597801 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d7614e6a-b9fe-4e08-9a68-28ea9b652739-trusted-ca-bundle\") pod \"console-75447c4646-f42jp\" (UID: \"d7614e6a-b9fe-4e08-9a68-28ea9b652739\") " pod="openshift-console/console-75447c4646-f42jp" Nov 22 07:17:44 crc kubenswrapper[4853]: I1122 07:17:44.599674 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d7614e6a-b9fe-4e08-9a68-28ea9b652739-service-ca\") pod \"console-75447c4646-f42jp\" (UID: \"d7614e6a-b9fe-4e08-9a68-28ea9b652739\") " pod="openshift-console/console-75447c4646-f42jp" Nov 22 07:17:44 crc kubenswrapper[4853]: I1122 07:17:44.600530 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/d7614e6a-b9fe-4e08-9a68-28ea9b652739-oauth-serving-cert\") pod \"console-75447c4646-f42jp\" (UID: \"d7614e6a-b9fe-4e08-9a68-28ea9b652739\") " pod="openshift-console/console-75447c4646-f42jp" Nov 22 07:17:44 crc kubenswrapper[4853]: I1122 07:17:44.602051 
4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/d7614e6a-b9fe-4e08-9a68-28ea9b652739-console-config\") pod \"console-75447c4646-f42jp\" (UID: \"d7614e6a-b9fe-4e08-9a68-28ea9b652739\") " pod="openshift-console/console-75447c4646-f42jp" Nov 22 07:17:44 crc kubenswrapper[4853]: I1122 07:17:44.608847 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/d7614e6a-b9fe-4e08-9a68-28ea9b652739-console-serving-cert\") pod \"console-75447c4646-f42jp\" (UID: \"d7614e6a-b9fe-4e08-9a68-28ea9b652739\") " pod="openshift-console/console-75447c4646-f42jp" Nov 22 07:17:44 crc kubenswrapper[4853]: I1122 07:17:44.609043 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/d7614e6a-b9fe-4e08-9a68-28ea9b652739-console-oauth-config\") pod \"console-75447c4646-f42jp\" (UID: \"d7614e6a-b9fe-4e08-9a68-28ea9b652739\") " pod="openshift-console/console-75447c4646-f42jp" Nov 22 07:17:44 crc kubenswrapper[4853]: I1122 07:17:44.626901 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-npql6\" (UniqueName: \"kubernetes.io/projected/d7614e6a-b9fe-4e08-9a68-28ea9b652739-kube-api-access-npql6\") pod \"console-75447c4646-f42jp\" (UID: \"d7614e6a-b9fe-4e08-9a68-28ea9b652739\") " pod="openshift-console/console-75447c4646-f42jp" Nov 22 07:17:44 crc kubenswrapper[4853]: I1122 07:17:44.717958 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-75447c4646-f42jp" Nov 22 07:17:44 crc kubenswrapper[4853]: I1122 07:17:44.924902 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/metrics-server-788dbc4c78-5xql9"] Nov 22 07:17:44 crc kubenswrapper[4853]: I1122 07:17:44.926040 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/metrics-server-788dbc4c78-5xql9" Nov 22 07:17:44 crc kubenswrapper[4853]: I1122 07:17:44.928474 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kubelet-serving-ca-bundle" Nov 22 07:17:44 crc kubenswrapper[4853]: I1122 07:17:44.930548 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-dockercfg-jxzs4" Nov 22 07:17:44 crc kubenswrapper[4853]: I1122 07:17:44.930776 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-tls" Nov 22 07:17:44 crc kubenswrapper[4853]: I1122 07:17:44.930983 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-7uuugplj1qv5i" Nov 22 07:17:44 crc kubenswrapper[4853]: I1122 07:17:44.931881 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-server-audit-profiles" Nov 22 07:17:44 crc kubenswrapper[4853]: I1122 07:17:44.931959 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-client-certs" Nov 22 07:17:44 crc kubenswrapper[4853]: I1122 07:17:44.935921 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-788dbc4c78-5xql9"] Nov 22 07:17:44 crc kubenswrapper[4853]: I1122 07:17:44.947884 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-75447c4646-f42jp"] Nov 22 07:17:44 crc kubenswrapper[4853]: W1122 07:17:44.969131 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd7614e6a_b9fe_4e08_9a68_28ea9b652739.slice/crio-030bb7a179f60fc6a829c962b6327b855e100cab7005a353f3b93a4d01386227 WatchSource:0}: Error finding container 030bb7a179f60fc6a829c962b6327b855e100cab7005a353f3b93a4d01386227: Status 404 returned error can't find the container with id 030bb7a179f60fc6a829c962b6327b855e100cab7005a353f3b93a4d01386227 Nov 22 07:17:45 crc kubenswrapper[4853]: I1122 07:17:45.105201 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/a24285e8-0933-4eb4-bd16-1687c9d23cc0-metrics-server-audit-profiles\") pod \"metrics-server-788dbc4c78-5xql9\" (UID: \"a24285e8-0933-4eb4-bd16-1687c9d23cc0\") " pod="openshift-monitoring/metrics-server-788dbc4c78-5xql9" Nov 22 07:17:45 crc kubenswrapper[4853]: I1122 07:17:45.105291 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/a24285e8-0933-4eb4-bd16-1687c9d23cc0-secret-metrics-client-certs\") pod \"metrics-server-788dbc4c78-5xql9\" (UID: \"a24285e8-0933-4eb4-bd16-1687c9d23cc0\") " pod="openshift-monitoring/metrics-server-788dbc4c78-5xql9" Nov 22 07:17:45 crc kubenswrapper[4853]: I1122 07:17:45.105352 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tv5cw\" (UniqueName: \"kubernetes.io/projected/a24285e8-0933-4eb4-bd16-1687c9d23cc0-kube-api-access-tv5cw\") pod \"metrics-server-788dbc4c78-5xql9\" (UID: \"a24285e8-0933-4eb4-bd16-1687c9d23cc0\") " pod="openshift-monitoring/metrics-server-788dbc4c78-5xql9" Nov 22 07:17:45 crc kubenswrapper[4853]: I1122 07:17:45.105388 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/a24285e8-0933-4eb4-bd16-1687c9d23cc0-audit-log\") pod \"metrics-server-788dbc4c78-5xql9\" (UID: \"a24285e8-0933-4eb4-bd16-1687c9d23cc0\") " pod="openshift-monitoring/metrics-server-788dbc4c78-5xql9" Nov 22 07:17:45 crc kubenswrapper[4853]: I1122 07:17:45.105434 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/a24285e8-0933-4eb4-bd16-1687c9d23cc0-secret-metrics-server-tls\") pod \"metrics-server-788dbc4c78-5xql9\" (UID: \"a24285e8-0933-4eb4-bd16-1687c9d23cc0\") " pod="openshift-monitoring/metrics-server-788dbc4c78-5xql9" Nov 22 07:17:45 crc kubenswrapper[4853]: I1122 07:17:45.105469 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a24285e8-0933-4eb4-bd16-1687c9d23cc0-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-788dbc4c78-5xql9\" (UID: \"a24285e8-0933-4eb4-bd16-1687c9d23cc0\") " pod="openshift-monitoring/metrics-server-788dbc4c78-5xql9" Nov 22 07:17:45 crc kubenswrapper[4853]: I1122 07:17:45.105501 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a24285e8-0933-4eb4-bd16-1687c9d23cc0-client-ca-bundle\") pod \"metrics-server-788dbc4c78-5xql9\" (UID: \"a24285e8-0933-4eb4-bd16-1687c9d23cc0\") " pod="openshift-monitoring/metrics-server-788dbc4c78-5xql9" Nov 22 07:17:45 crc kubenswrapper[4853]: I1122 07:17:45.207538 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/a24285e8-0933-4eb4-bd16-1687c9d23cc0-secret-metrics-server-tls\") pod \"metrics-server-788dbc4c78-5xql9\" (UID: \"a24285e8-0933-4eb4-bd16-1687c9d23cc0\") " pod="openshift-monitoring/metrics-server-788dbc4c78-5xql9" Nov 22 07:17:45 crc kubenswrapper[4853]: I1122 07:17:45.207599 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a24285e8-0933-4eb4-bd16-1687c9d23cc0-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-788dbc4c78-5xql9\" (UID: \"a24285e8-0933-4eb4-bd16-1687c9d23cc0\") " pod="openshift-monitoring/metrics-server-788dbc4c78-5xql9" Nov 22 07:17:45 crc kubenswrapper[4853]: I1122 07:17:45.207645 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a24285e8-0933-4eb4-bd16-1687c9d23cc0-client-ca-bundle\") pod \"metrics-server-788dbc4c78-5xql9\" (UID: \"a24285e8-0933-4eb4-bd16-1687c9d23cc0\") " pod="openshift-monitoring/metrics-server-788dbc4c78-5xql9" Nov 22 07:17:45 crc kubenswrapper[4853]: I1122 07:17:45.207697 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/a24285e8-0933-4eb4-bd16-1687c9d23cc0-metrics-server-audit-profiles\") pod \"metrics-server-788dbc4c78-5xql9\" (UID: \"a24285e8-0933-4eb4-bd16-1687c9d23cc0\") " pod="openshift-monitoring/metrics-server-788dbc4c78-5xql9" Nov 22 07:17:45 crc kubenswrapper[4853]: I1122 07:17:45.209024 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/a24285e8-0933-4eb4-bd16-1687c9d23cc0-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-788dbc4c78-5xql9\" (UID: \"a24285e8-0933-4eb4-bd16-1687c9d23cc0\") " pod="openshift-monitoring/metrics-server-788dbc4c78-5xql9" Nov 22 07:17:45 crc kubenswrapper[4853]: I1122 07:17:45.209374 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/a24285e8-0933-4eb4-bd16-1687c9d23cc0-metrics-server-audit-profiles\") pod \"metrics-server-788dbc4c78-5xql9\" (UID: \"a24285e8-0933-4eb4-bd16-1687c9d23cc0\") " pod="openshift-monitoring/metrics-server-788dbc4c78-5xql9" Nov 22 07:17:45 crc kubenswrapper[4853]: I1122 07:17:45.209436 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/a24285e8-0933-4eb4-bd16-1687c9d23cc0-secret-metrics-client-certs\") pod \"metrics-server-788dbc4c78-5xql9\" (UID: \"a24285e8-0933-4eb4-bd16-1687c9d23cc0\") " pod="openshift-monitoring/metrics-server-788dbc4c78-5xql9" Nov 22 07:17:45 crc kubenswrapper[4853]: I1122 07:17:45.209519 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tv5cw\" (UniqueName: \"kubernetes.io/projected/a24285e8-0933-4eb4-bd16-1687c9d23cc0-kube-api-access-tv5cw\") pod \"metrics-server-788dbc4c78-5xql9\" (UID: \"a24285e8-0933-4eb4-bd16-1687c9d23cc0\") " pod="openshift-monitoring/metrics-server-788dbc4c78-5xql9" Nov 22 07:17:45 crc kubenswrapper[4853]: I1122 07:17:45.218224 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/a24285e8-0933-4eb4-bd16-1687c9d23cc0-audit-log\") pod \"metrics-server-788dbc4c78-5xql9\" (UID: \"a24285e8-0933-4eb4-bd16-1687c9d23cc0\") " pod="openshift-monitoring/metrics-server-788dbc4c78-5xql9" Nov 22 07:17:45 crc kubenswrapper[4853]: I1122 07:17:45.218343 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/a24285e8-0933-4eb4-bd16-1687c9d23cc0-audit-log\") pod \"metrics-server-788dbc4c78-5xql9\" (UID: \"a24285e8-0933-4eb4-bd16-1687c9d23cc0\") " pod="openshift-monitoring/metrics-server-788dbc4c78-5xql9" Nov 22 07:17:45 crc kubenswrapper[4853]: I1122 07:17:45.221029 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/a24285e8-0933-4eb4-bd16-1687c9d23cc0-secret-metrics-client-certs\") pod \"metrics-server-788dbc4c78-5xql9\" (UID: \"a24285e8-0933-4eb4-bd16-1687c9d23cc0\") " pod="openshift-monitoring/metrics-server-788dbc4c78-5xql9" Nov 22 07:17:45 crc kubenswrapper[4853]: I1122 07:17:45.223452 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/a24285e8-0933-4eb4-bd16-1687c9d23cc0-secret-metrics-server-tls\") pod \"metrics-server-788dbc4c78-5xql9\" (UID: \"a24285e8-0933-4eb4-bd16-1687c9d23cc0\") " pod="openshift-monitoring/metrics-server-788dbc4c78-5xql9" Nov 22 07:17:45 crc kubenswrapper[4853]: I1122 07:17:45.224885 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a24285e8-0933-4eb4-bd16-1687c9d23cc0-client-ca-bundle\") pod \"metrics-server-788dbc4c78-5xql9\" (UID: \"a24285e8-0933-4eb4-bd16-1687c9d23cc0\") " pod="openshift-monitoring/metrics-server-788dbc4c78-5xql9" Nov 
22 07:17:45 crc kubenswrapper[4853]: I1122 07:17:45.231009 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tv5cw\" (UniqueName: \"kubernetes.io/projected/a24285e8-0933-4eb4-bd16-1687c9d23cc0-kube-api-access-tv5cw\") pod \"metrics-server-788dbc4c78-5xql9\" (UID: \"a24285e8-0933-4eb4-bd16-1687c9d23cc0\") " pod="openshift-monitoring/metrics-server-788dbc4c78-5xql9" Nov 22 07:17:45 crc kubenswrapper[4853]: I1122 07:17:45.254432 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-788dbc4c78-5xql9" Nov 22 07:17:45 crc kubenswrapper[4853]: I1122 07:17:45.260280 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/monitoring-plugin-7466549cc8-q8xnn"] Nov 22 07:17:45 crc kubenswrapper[4853]: I1122 07:17:45.261136 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/monitoring-plugin-7466549cc8-q8xnn" Nov 22 07:17:45 crc kubenswrapper[4853]: I1122 07:17:45.266572 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"monitoring-plugin-cert" Nov 22 07:17:45 crc kubenswrapper[4853]: I1122 07:17:45.266821 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"default-dockercfg-6tstp" Nov 22 07:17:45 crc kubenswrapper[4853]: I1122 07:17:45.291836 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/monitoring-plugin-7466549cc8-q8xnn"] Nov 22 07:17:45 crc kubenswrapper[4853]: I1122 07:17:45.330782 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-th96f" event={"ID":"55914b5e-a464-4996-850c-aaf4d61800c8","Type":"ContainerStarted","Data":"649cbabc98b63a5f220d1e149a9fa73d727c9643f2fa0cfee334e8c1e1ae0f46"} Nov 22 07:17:45 crc kubenswrapper[4853]: I1122 07:17:45.331201 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-th96f" event={"ID":"55914b5e-a464-4996-850c-aaf4d61800c8","Type":"ContainerStarted","Data":"a0b05ca92463a98510e9b5fdc2e8a6a9319ce6a4cfc7df7b74c184bbc2957b9e"} Nov 22 07:17:45 crc kubenswrapper[4853]: I1122 07:17:45.335664 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-75447c4646-f42jp" event={"ID":"d7614e6a-b9fe-4e08-9a68-28ea9b652739","Type":"ContainerStarted","Data":"117c7b90ca358125f58cab4e094555650d807441ba8ba28e5ab4ff0f469dbb80"} Nov 22 07:17:45 crc kubenswrapper[4853]: I1122 07:17:45.335732 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-75447c4646-f42jp" event={"ID":"d7614e6a-b9fe-4e08-9a68-28ea9b652739","Type":"ContainerStarted","Data":"030bb7a179f60fc6a829c962b6327b855e100cab7005a353f3b93a4d01386227"} Nov 22 07:17:45 crc kubenswrapper[4853]: I1122 07:17:45.351508 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/node-exporter-th96f" podStartSLOduration=3.768103513 podStartE2EDuration="6.351484873s" podCreationTimestamp="2025-11-22 07:17:39 +0000 UTC" firstStartedPulling="2025-11-22 07:17:40.449454927 +0000 UTC m=+459.290077553" lastFinishedPulling="2025-11-22 07:17:43.032836287 +0000 UTC m=+461.873458913" observedRunningTime="2025-11-22 07:17:45.349531047 +0000 UTC m=+464.190153683" watchObservedRunningTime="2025-11-22 07:17:45.351484873 +0000 UTC m=+464.192107499" Nov 22 07:17:45 crc kubenswrapper[4853]: I1122 07:17:45.388170 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-console/console-75447c4646-f42jp" podStartSLOduration=1.388140234 podStartE2EDuration="1.388140234s" podCreationTimestamp="2025-11-22 07:17:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:17:45.377391938 +0000 UTC m=+464.218014574" watchObservedRunningTime="2025-11-22 07:17:45.388140234 +0000 UTC m=+464.228762860" Nov 22 07:17:45 crc kubenswrapper[4853]: I1122 07:17:45.421103 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/3c801a6a-2675-4771-a51e-60aa0a61bbee-monitoring-plugin-cert\") pod \"monitoring-plugin-7466549cc8-q8xnn\" (UID: \"3c801a6a-2675-4771-a51e-60aa0a61bbee\") " pod="openshift-monitoring/monitoring-plugin-7466549cc8-q8xnn" Nov 22 07:17:45 crc kubenswrapper[4853]: I1122 07:17:45.523649 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/3c801a6a-2675-4771-a51e-60aa0a61bbee-monitoring-plugin-cert\") pod \"monitoring-plugin-7466549cc8-q8xnn\" (UID: \"3c801a6a-2675-4771-a51e-60aa0a61bbee\") " pod="openshift-monitoring/monitoring-plugin-7466549cc8-q8xnn" Nov 22 07:17:45 crc kubenswrapper[4853]: I1122 07:17:45.532420 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/3c801a6a-2675-4771-a51e-60aa0a61bbee-monitoring-plugin-cert\") pod \"monitoring-plugin-7466549cc8-q8xnn\" (UID: \"3c801a6a-2675-4771-a51e-60aa0a61bbee\") " pod="openshift-monitoring/monitoring-plugin-7466549cc8-q8xnn" Nov 22 07:17:45 crc kubenswrapper[4853]: I1122 07:17:45.656898 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/monitoring-plugin-7466549cc8-q8xnn" Nov 22 07:17:45 crc kubenswrapper[4853]: I1122 07:17:45.764740 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-788dbc4c78-5xql9"] Nov 22 07:17:45 crc kubenswrapper[4853]: I1122 07:17:45.880810 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Nov 22 07:17:45 crc kubenswrapper[4853]: I1122 07:17:45.883365 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Nov 22 07:17:45 crc kubenswrapper[4853]: I1122 07:17:45.885907 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-web-config" Nov 22 07:17:45 crc kubenswrapper[4853]: I1122 07:17:45.886152 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls-assets-0" Nov 22 07:17:45 crc kubenswrapper[4853]: I1122 07:17:45.887912 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls" Nov 22 07:17:45 crc kubenswrapper[4853]: I1122 07:17:45.888391 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-rbac-proxy" Nov 22 07:17:45 crc kubenswrapper[4853]: I1122 07:17:45.888771 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file" Nov 22 07:17:45 crc kubenswrapper[4853]: I1122 07:17:45.889168 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-kube-rbac-proxy-web" Nov 22 07:17:45 crc kubenswrapper[4853]: I1122 07:17:45.890158 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-dockercfg-bnjnq" Nov 22 07:17:45 crc kubenswrapper[4853]: I1122 07:17:45.890316 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-grpc-tls-d766p4ns68psj" Nov 22 07:17:45 crc kubenswrapper[4853]: I1122 07:17:45.890447 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-sidecar-tls" Nov 22 07:17:45 crc kubenswrapper[4853]: I1122 07:17:45.890664 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s" Nov 22 07:17:45 crc kubenswrapper[4853]: I1122 07:17:45.895547 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"serving-certs-ca-bundle" Nov 22 07:17:45 crc kubenswrapper[4853]: I1122 07:17:45.919676 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-k8s-rulefiles-0" Nov 22 07:17:45 crc kubenswrapper[4853]: I1122 07:17:45.932263 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-trusted-ca-bundle" Nov 22 07:17:45 crc kubenswrapper[4853]: I1122 07:17:45.943184 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Nov 22 07:17:46 crc kubenswrapper[4853]: I1122 07:17:46.045889 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4fqnd\" (UniqueName: \"kubernetes.io/projected/aa25b342-38ae-4493-8129-710611d886fa-kube-api-access-4fqnd\") pod \"prometheus-k8s-0\" (UID: \"aa25b342-38ae-4493-8129-710611d886fa\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 22 07:17:46 crc kubenswrapper[4853]: I1122 07:17:46.045956 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/aa25b342-38ae-4493-8129-710611d886fa-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"aa25b342-38ae-4493-8129-710611d886fa\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 22 07:17:46 crc kubenswrapper[4853]: I1122 07:17:46.045982 4853 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aa25b342-38ae-4493-8129-710611d886fa-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"aa25b342-38ae-4493-8129-710611d886fa\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 22 07:17:46 crc kubenswrapper[4853]: I1122 07:17:46.046004 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/aa25b342-38ae-4493-8129-710611d886fa-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"aa25b342-38ae-4493-8129-710611d886fa\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 22 07:17:46 crc kubenswrapper[4853]: I1122 07:17:46.046026 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/aa25b342-38ae-4493-8129-710611d886fa-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"aa25b342-38ae-4493-8129-710611d886fa\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 22 07:17:46 crc kubenswrapper[4853]: I1122 07:17:46.046051 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/aa25b342-38ae-4493-8129-710611d886fa-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"aa25b342-38ae-4493-8129-710611d886fa\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 22 07:17:46 crc kubenswrapper[4853]: I1122 07:17:46.046069 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/aa25b342-38ae-4493-8129-710611d886fa-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"aa25b342-38ae-4493-8129-710611d886fa\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 22 07:17:46 crc kubenswrapper[4853]: I1122 07:17:46.046083 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/aa25b342-38ae-4493-8129-710611d886fa-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"aa25b342-38ae-4493-8129-710611d886fa\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 22 07:17:46 crc kubenswrapper[4853]: I1122 07:17:46.046098 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/aa25b342-38ae-4493-8129-710611d886fa-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"aa25b342-38ae-4493-8129-710611d886fa\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 22 07:17:46 crc kubenswrapper[4853]: I1122 07:17:46.046116 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/aa25b342-38ae-4493-8129-710611d886fa-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"aa25b342-38ae-4493-8129-710611d886fa\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 22 07:17:46 crc kubenswrapper[4853]: I1122 07:17:46.046173 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/aa25b342-38ae-4493-8129-710611d886fa-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"aa25b342-38ae-4493-8129-710611d886fa\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 22 07:17:46 crc kubenswrapper[4853]: I1122 07:17:46.046196 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/aa25b342-38ae-4493-8129-710611d886fa-config-out\") pod \"prometheus-k8s-0\" (UID: \"aa25b342-38ae-4493-8129-710611d886fa\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 22 07:17:46 crc kubenswrapper[4853]: I1122 07:17:46.046221 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/aa25b342-38ae-4493-8129-710611d886fa-web-config\") pod \"prometheus-k8s-0\" (UID: \"aa25b342-38ae-4493-8129-710611d886fa\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 22 07:17:46 crc kubenswrapper[4853]: I1122 07:17:46.046246 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/aa25b342-38ae-4493-8129-710611d886fa-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"aa25b342-38ae-4493-8129-710611d886fa\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 22 07:17:46 crc kubenswrapper[4853]: I1122 07:17:46.046279 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aa25b342-38ae-4493-8129-710611d886fa-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"aa25b342-38ae-4493-8129-710611d886fa\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 22 07:17:46 crc kubenswrapper[4853]: I1122 07:17:46.046301 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/aa25b342-38ae-4493-8129-710611d886fa-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"aa25b342-38ae-4493-8129-710611d886fa\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 22 07:17:46 crc kubenswrapper[4853]: I1122 07:17:46.046331 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/aa25b342-38ae-4493-8129-710611d886fa-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"aa25b342-38ae-4493-8129-710611d886fa\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 22 07:17:46 crc kubenswrapper[4853]: I1122 07:17:46.046367 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/aa25b342-38ae-4493-8129-710611d886fa-config\") pod \"prometheus-k8s-0\" (UID: \"aa25b342-38ae-4493-8129-710611d886fa\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 22 07:17:46 crc kubenswrapper[4853]: I1122 07:17:46.147545 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4fqnd\" (UniqueName: \"kubernetes.io/projected/aa25b342-38ae-4493-8129-710611d886fa-kube-api-access-4fqnd\") pod \"prometheus-k8s-0\" (UID: \"aa25b342-38ae-4493-8129-710611d886fa\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 22 07:17:46 crc kubenswrapper[4853]: I1122 07:17:46.147597 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/aa25b342-38ae-4493-8129-710611d886fa-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"aa25b342-38ae-4493-8129-710611d886fa\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 22 07:17:46 crc kubenswrapper[4853]: I1122 07:17:46.147620 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aa25b342-38ae-4493-8129-710611d886fa-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"aa25b342-38ae-4493-8129-710611d886fa\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 22 07:17:46 crc kubenswrapper[4853]: I1122 07:17:46.147638 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/aa25b342-38ae-4493-8129-710611d886fa-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"aa25b342-38ae-4493-8129-710611d886fa\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 22 07:17:46 crc kubenswrapper[4853]: I1122 07:17:46.147660 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/aa25b342-38ae-4493-8129-710611d886fa-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"aa25b342-38ae-4493-8129-710611d886fa\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 22 07:17:46 crc kubenswrapper[4853]: I1122 07:17:46.147682 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/aa25b342-38ae-4493-8129-710611d886fa-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"aa25b342-38ae-4493-8129-710611d886fa\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 22 07:17:46 crc kubenswrapper[4853]: I1122 07:17:46.147698 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/aa25b342-38ae-4493-8129-710611d886fa-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"aa25b342-38ae-4493-8129-710611d886fa\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 22 07:17:46 crc kubenswrapper[4853]: I1122 07:17:46.147713 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/aa25b342-38ae-4493-8129-710611d886fa-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"aa25b342-38ae-4493-8129-710611d886fa\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 22 07:17:46 crc kubenswrapper[4853]: I1122 07:17:46.147728 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/aa25b342-38ae-4493-8129-710611d886fa-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"aa25b342-38ae-4493-8129-710611d886fa\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 22 07:17:46 crc kubenswrapper[4853]: I1122 07:17:46.147764 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/aa25b342-38ae-4493-8129-710611d886fa-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"aa25b342-38ae-4493-8129-710611d886fa\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 22 07:17:46 crc kubenswrapper[4853]: I1122 07:17:46.147781 4853 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aa25b342-38ae-4493-8129-710611d886fa-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"aa25b342-38ae-4493-8129-710611d886fa\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 22 07:17:46 crc kubenswrapper[4853]: I1122 07:17:46.147803 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/aa25b342-38ae-4493-8129-710611d886fa-config-out\") pod \"prometheus-k8s-0\" (UID: \"aa25b342-38ae-4493-8129-710611d886fa\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 22 07:17:46 crc kubenswrapper[4853]: I1122 07:17:46.147838 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/aa25b342-38ae-4493-8129-710611d886fa-web-config\") pod \"prometheus-k8s-0\" (UID: \"aa25b342-38ae-4493-8129-710611d886fa\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 22 07:17:46 crc kubenswrapper[4853]: I1122 07:17:46.147865 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/aa25b342-38ae-4493-8129-710611d886fa-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"aa25b342-38ae-4493-8129-710611d886fa\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 22 07:17:46 crc kubenswrapper[4853]: I1122 07:17:46.147898 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aa25b342-38ae-4493-8129-710611d886fa-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"aa25b342-38ae-4493-8129-710611d886fa\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 22 07:17:46 crc kubenswrapper[4853]: I1122 07:17:46.147914 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/aa25b342-38ae-4493-8129-710611d886fa-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"aa25b342-38ae-4493-8129-710611d886fa\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 22 07:17:46 crc kubenswrapper[4853]: I1122 07:17:46.147929 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/aa25b342-38ae-4493-8129-710611d886fa-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"aa25b342-38ae-4493-8129-710611d886fa\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 22 07:17:46 crc kubenswrapper[4853]: I1122 07:17:46.147952 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/aa25b342-38ae-4493-8129-710611d886fa-config\") pod \"prometheus-k8s-0\" (UID: \"aa25b342-38ae-4493-8129-710611d886fa\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 22 07:17:46 crc kubenswrapper[4853]: I1122 07:17:46.150128 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/aa25b342-38ae-4493-8129-710611d886fa-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"aa25b342-38ae-4493-8129-710611d886fa\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 22 07:17:46 crc kubenswrapper[4853]: I1122 07:17:46.154044 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aa25b342-38ae-4493-8129-710611d886fa-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"aa25b342-38ae-4493-8129-710611d886fa\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 22 07:17:46 crc kubenswrapper[4853]: I1122 07:17:46.154046 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/aa25b342-38ae-4493-8129-710611d886fa-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"aa25b342-38ae-4493-8129-710611d886fa\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 22 07:17:46 crc kubenswrapper[4853]: I1122 07:17:46.155074 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aa25b342-38ae-4493-8129-710611d886fa-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"aa25b342-38ae-4493-8129-710611d886fa\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 22 07:17:46 crc kubenswrapper[4853]: I1122 07:17:46.157177 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/aa25b342-38ae-4493-8129-710611d886fa-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"aa25b342-38ae-4493-8129-710611d886fa\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 22 07:17:46 crc kubenswrapper[4853]: I1122 07:17:46.157418 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/aa25b342-38ae-4493-8129-710611d886fa-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"aa25b342-38ae-4493-8129-710611d886fa\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 22 07:17:46 crc kubenswrapper[4853]: I1122 07:17:46.157519 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/aa25b342-38ae-4493-8129-710611d886fa-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"aa25b342-38ae-4493-8129-710611d886fa\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 22 07:17:46 crc kubenswrapper[4853]: I1122 07:17:46.157635 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/aa25b342-38ae-4493-8129-710611d886fa-web-config\") pod \"prometheus-k8s-0\" (UID: \"aa25b342-38ae-4493-8129-710611d886fa\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 22 07:17:46 crc kubenswrapper[4853]: I1122 07:17:46.158571 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/aa25b342-38ae-4493-8129-710611d886fa-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"aa25b342-38ae-4493-8129-710611d886fa\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 22 07:17:46 crc kubenswrapper[4853]: I1122 07:17:46.160158 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/aa25b342-38ae-4493-8129-710611d886fa-config-out\") pod \"prometheus-k8s-0\" (UID: \"aa25b342-38ae-4493-8129-710611d886fa\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 22 07:17:46 crc kubenswrapper[4853]: I1122 07:17:46.160268 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/aa25b342-38ae-4493-8129-710611d886fa-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"aa25b342-38ae-4493-8129-710611d886fa\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 22 07:17:46 crc kubenswrapper[4853]: I1122 07:17:46.161004 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/aa25b342-38ae-4493-8129-710611d886fa-config\") pod \"prometheus-k8s-0\" (UID: \"aa25b342-38ae-4493-8129-710611d886fa\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 22 07:17:46 crc kubenswrapper[4853]: I1122 07:17:46.163118 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/aa25b342-38ae-4493-8129-710611d886fa-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"aa25b342-38ae-4493-8129-710611d886fa\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 22 07:17:46 crc kubenswrapper[4853]: I1122 07:17:46.164032 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/aa25b342-38ae-4493-8129-710611d886fa-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"aa25b342-38ae-4493-8129-710611d886fa\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 22 07:17:46 crc kubenswrapper[4853]: I1122 07:17:46.164659 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/aa25b342-38ae-4493-8129-710611d886fa-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"aa25b342-38ae-4493-8129-710611d886fa\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 22 07:17:46 crc kubenswrapper[4853]: I1122 07:17:46.165841 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/aa25b342-38ae-4493-8129-710611d886fa-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"aa25b342-38ae-4493-8129-710611d886fa\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 22 07:17:46 crc kubenswrapper[4853]: I1122 07:17:46.167997 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4fqnd\" (UniqueName: \"kubernetes.io/projected/aa25b342-38ae-4493-8129-710611d886fa-kube-api-access-4fqnd\") pod \"prometheus-k8s-0\" (UID: \"aa25b342-38ae-4493-8129-710611d886fa\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 22 07:17:46 crc kubenswrapper[4853]: I1122 07:17:46.180686 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/aa25b342-38ae-4493-8129-710611d886fa-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"aa25b342-38ae-4493-8129-710611d886fa\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 22 07:17:46 crc kubenswrapper[4853]: I1122 07:17:46.220592 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Nov 22 07:17:47 crc kubenswrapper[4853]: I1122 07:17:47.342939 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/monitoring-plugin-7466549cc8-q8xnn"] Nov 22 07:17:47 crc kubenswrapper[4853]: I1122 07:17:47.354707 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"e2e2352f-fe33-40ab-83fa-91e356ce1693","Type":"ContainerStarted","Data":"52d5cf064bbefc48c9b6ccf997e87cbd3cdf367f8382975bcb51af49bf108794"} Nov 22 07:17:47 crc kubenswrapper[4853]: I1122 07:17:47.356057 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-788dbc4c78-5xql9" event={"ID":"a24285e8-0933-4eb4-bd16-1687c9d23cc0","Type":"ContainerStarted","Data":"b146efe53eda9fa84b5122b7caf425546e11a7b0272dec70fe1398fd1aea20b9"} Nov 22 07:17:47 crc kubenswrapper[4853]: I1122 07:17:47.358620 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-57797c7b65-9s8jq" event={"ID":"4a6ccfdc-cb2e-419d-931f-723fd4895077","Type":"ContainerStarted","Data":"70cf7ad2ea1924af5ebcfa1bb74b18aae05b3889c0dc0b2056779920c270bc65"} Nov 22 07:17:47 crc kubenswrapper[4853]: W1122 07:17:47.385897 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3c801a6a_2675_4771_a51e_60aa0a61bbee.slice/crio-97c09959ffc05325f2b8195d8f5aa20710dceafca7d7ed3e10818b4484a12979 WatchSource:0}: Error finding container 97c09959ffc05325f2b8195d8f5aa20710dceafca7d7ed3e10818b4484a12979: Status 404 returned error can't find the container with id 97c09959ffc05325f2b8195d8f5aa20710dceafca7d7ed3e10818b4484a12979 Nov 22 07:17:47 crc kubenswrapper[4853]: I1122 07:17:47.421452 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Nov 22 07:17:47 crc kubenswrapper[4853]: W1122 07:17:47.428689 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaa25b342_38ae_4493_8129_710611d886fa.slice/crio-09f78a9cbd0abe1ca1115da2019cd36de38302085f79cb29f632f8cf77877fc9 WatchSource:0}: Error finding container 09f78a9cbd0abe1ca1115da2019cd36de38302085f79cb29f632f8cf77877fc9: Status 404 returned error can't find the container with id 09f78a9cbd0abe1ca1115da2019cd36de38302085f79cb29f632f8cf77877fc9 Nov 22 07:17:48 crc kubenswrapper[4853]: I1122 07:17:48.376257 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-57797c7b65-9s8jq" event={"ID":"4a6ccfdc-cb2e-419d-931f-723fd4895077","Type":"ContainerStarted","Data":"39005b2e9d4f7b2736a6f8693ae235b160ef23a32beca7bad8b978b06ad78cd7"} Nov 22 07:17:48 crc kubenswrapper[4853]: I1122 07:17:48.376351 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-57797c7b65-9s8jq" event={"ID":"4a6ccfdc-cb2e-419d-931f-723fd4895077","Type":"ContainerStarted","Data":"44396d6a4857d673180402b96da1cd63d0562150a862b82b54495e1f8e5df313"} Nov 22 07:17:48 crc kubenswrapper[4853]: I1122 07:17:48.380028 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"e2e2352f-fe33-40ab-83fa-91e356ce1693","Type":"ContainerStarted","Data":"e6b04eeb9bc2ec0c00f2daf1d42d49f0b98fe0055c9a4412b90591d87bc46478"} Nov 22 07:17:48 crc kubenswrapper[4853]: I1122 07:17:48.380064 4853 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"e2e2352f-fe33-40ab-83fa-91e356ce1693","Type":"ContainerStarted","Data":"fd13f80dd62d85d6133ae217e988f54dba7103541a99b11af71d9c824fbca58c"} Nov 22 07:17:48 crc kubenswrapper[4853]: I1122 07:17:48.380080 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"e2e2352f-fe33-40ab-83fa-91e356ce1693","Type":"ContainerStarted","Data":"f71c4e81e112912c569aaf8bc335b4119d4193f2635651a30d419d70769f8a88"} Nov 22 07:17:48 crc kubenswrapper[4853]: I1122 07:17:48.380094 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"e2e2352f-fe33-40ab-83fa-91e356ce1693","Type":"ContainerStarted","Data":"e2a06f3c234ab5fff3f7a8085a1332c90e43a24ef8d6dfe9bdef463f8068a7c0"} Nov 22 07:17:48 crc kubenswrapper[4853]: I1122 07:17:48.381845 4853 generic.go:334] "Generic (PLEG): container finished" podID="aa25b342-38ae-4493-8129-710611d886fa" containerID="cafb946ac66bce1813fb6c178d359996e6f66d6839fce089cc346917d7a038f8" exitCode=0 Nov 22 07:17:48 crc kubenswrapper[4853]: I1122 07:17:48.381912 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"aa25b342-38ae-4493-8129-710611d886fa","Type":"ContainerDied","Data":"cafb946ac66bce1813fb6c178d359996e6f66d6839fce089cc346917d7a038f8"} Nov 22 07:17:48 crc kubenswrapper[4853]: I1122 07:17:48.381937 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"aa25b342-38ae-4493-8129-710611d886fa","Type":"ContainerStarted","Data":"09f78a9cbd0abe1ca1115da2019cd36de38302085f79cb29f632f8cf77877fc9"} Nov 22 07:17:48 crc kubenswrapper[4853]: I1122 07:17:48.384644 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-7466549cc8-q8xnn" event={"ID":"3c801a6a-2675-4771-a51e-60aa0a61bbee","Type":"ContainerStarted","Data":"97c09959ffc05325f2b8195d8f5aa20710dceafca7d7ed3e10818b4484a12979"} Nov 22 07:17:49 crc kubenswrapper[4853]: I1122 07:17:49.248364 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-2czlg" Nov 22 07:17:49 crc kubenswrapper[4853]: I1122 07:17:49.317162 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-2p6qj"] Nov 22 07:17:51 crc kubenswrapper[4853]: I1122 07:17:51.412712 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-57797c7b65-9s8jq" event={"ID":"4a6ccfdc-cb2e-419d-931f-723fd4895077","Type":"ContainerStarted","Data":"a6ce9da734a32f9081e249d589bd9fcf89de12d555ef36999e30deaa9be2f6f7"} Nov 22 07:17:51 crc kubenswrapper[4853]: I1122 07:17:51.413492 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-57797c7b65-9s8jq" event={"ID":"4a6ccfdc-cb2e-419d-931f-723fd4895077","Type":"ContainerStarted","Data":"a9729aa59634e3afe3ecab07ccf995a9207ff15a0659aec5ebab8aa8a65e197b"} Nov 22 07:17:51 crc kubenswrapper[4853]: I1122 07:17:51.413510 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-57797c7b65-9s8jq" event={"ID":"4a6ccfdc-cb2e-419d-931f-723fd4895077","Type":"ContainerStarted","Data":"ec13bbd9b77a347c06ee08cd9b35649b43a22334b9d1405a84157750309af22c"} Nov 22 07:17:51 crc kubenswrapper[4853]: I1122 07:17:51.413556 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-monitoring/thanos-querier-57797c7b65-9s8jq" Nov 22 07:17:51 crc kubenswrapper[4853]: I1122 07:17:51.420284 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"e2e2352f-fe33-40ab-83fa-91e356ce1693","Type":"ContainerStarted","Data":"ca8e0841bbb6b7fc1920adf65df381e1a77be072fa97c6a1548fdee3f1b1192f"} Nov 22 07:17:51 crc kubenswrapper[4853]: I1122 07:17:51.422839 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-7466549cc8-q8xnn" event={"ID":"3c801a6a-2675-4771-a51e-60aa0a61bbee","Type":"ContainerStarted","Data":"2bab48593e93b00b558d031d3823638fc6848c9a8eb6fe3b5812f4691b69fa5b"} Nov 22 07:17:51 crc kubenswrapper[4853]: I1122 07:17:51.424910 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/monitoring-plugin-7466549cc8-q8xnn" Nov 22 07:17:51 crc kubenswrapper[4853]: I1122 07:17:51.426594 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-788dbc4c78-5xql9" event={"ID":"a24285e8-0933-4eb4-bd16-1687c9d23cc0","Type":"ContainerStarted","Data":"19945c665d0246ed9d14143dcd07c6071c98517ba4de7801d695c2e4538019f4"} Nov 22 07:17:51 crc kubenswrapper[4853]: I1122 07:17:51.431035 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/monitoring-plugin-7466549cc8-q8xnn" Nov 22 07:17:51 crc kubenswrapper[4853]: I1122 07:17:51.441134 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/thanos-querier-57797c7b65-9s8jq" podStartSLOduration=3.6940398070000002 podStartE2EDuration="10.441109873s" podCreationTimestamp="2025-11-22 07:17:41 +0000 UTC" firstStartedPulling="2025-11-22 07:17:43.52467116 +0000 UTC m=+462.365293786" lastFinishedPulling="2025-11-22 07:17:50.271741226 +0000 UTC m=+469.112363852" observedRunningTime="2025-11-22 07:17:51.439255421 +0000 UTC m=+470.279878037" watchObservedRunningTime="2025-11-22 07:17:51.441109873 +0000 UTC m=+470.281732499" Nov 22 07:17:51 crc kubenswrapper[4853]: I1122 07:17:51.465694 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/metrics-server-788dbc4c78-5xql9" podStartSLOduration=4.357189083 podStartE2EDuration="7.46565979s" podCreationTimestamp="2025-11-22 07:17:44 +0000 UTC" firstStartedPulling="2025-11-22 07:17:47.110222982 +0000 UTC m=+465.950845608" lastFinishedPulling="2025-11-22 07:17:50.218693689 +0000 UTC m=+469.059316315" observedRunningTime="2025-11-22 07:17:51.458338042 +0000 UTC m=+470.298960678" watchObservedRunningTime="2025-11-22 07:17:51.46565979 +0000 UTC m=+470.306282416" Nov 22 07:17:51 crc kubenswrapper[4853]: I1122 07:17:51.486908 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/monitoring-plugin-7466549cc8-q8xnn" podStartSLOduration=3.630455322 podStartE2EDuration="6.486880553s" podCreationTimestamp="2025-11-22 07:17:45 +0000 UTC" firstStartedPulling="2025-11-22 07:17:47.406203125 +0000 UTC m=+466.246825741" lastFinishedPulling="2025-11-22 07:17:50.262628346 +0000 UTC m=+469.103250972" observedRunningTime="2025-11-22 07:17:51.477523877 +0000 UTC m=+470.318146503" watchObservedRunningTime="2025-11-22 07:17:51.486880553 +0000 UTC m=+470.327503179" Nov 22 07:17:51 crc kubenswrapper[4853]: I1122 07:17:51.513060 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/alertmanager-main-0" 
podStartSLOduration=2.3152268559999998 podStartE2EDuration="11.513039276s" podCreationTimestamp="2025-11-22 07:17:40 +0000 UTC" firstStartedPulling="2025-11-22 07:17:41.18365449 +0000 UTC m=+460.024277116" lastFinishedPulling="2025-11-22 07:17:50.38146691 +0000 UTC m=+469.222089536" observedRunningTime="2025-11-22 07:17:51.512316075 +0000 UTC m=+470.352938701" watchObservedRunningTime="2025-11-22 07:17:51.513039276 +0000 UTC m=+470.353661902" Nov 22 07:17:52 crc kubenswrapper[4853]: I1122 07:17:52.458470 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/thanos-querier-57797c7b65-9s8jq" Nov 22 07:17:53 crc kubenswrapper[4853]: I1122 07:17:53.446862 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"aa25b342-38ae-4493-8129-710611d886fa","Type":"ContainerStarted","Data":"0b51033557e679fc29529e6c04d4055fe19b1de276aa95483aeff8ff43227392"} Nov 22 07:17:53 crc kubenswrapper[4853]: I1122 07:17:53.447200 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"aa25b342-38ae-4493-8129-710611d886fa","Type":"ContainerStarted","Data":"1a78fd39f02105805689d1112761697b593180ca98fb519d19a706552be0adf3"} Nov 22 07:17:53 crc kubenswrapper[4853]: I1122 07:17:53.447215 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"aa25b342-38ae-4493-8129-710611d886fa","Type":"ContainerStarted","Data":"9a0ef0905e7d72c6d40e2371c2aec633a9432170db96db44014049d8ed2681c4"} Nov 22 07:17:54 crc kubenswrapper[4853]: I1122 07:17:54.465208 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"aa25b342-38ae-4493-8129-710611d886fa","Type":"ContainerStarted","Data":"65a5a68cfd655c5176161e341780fe0421258202bd05e0922421c61791e81c33"} Nov 22 07:17:54 crc kubenswrapper[4853]: I1122 07:17:54.465540 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"aa25b342-38ae-4493-8129-710611d886fa","Type":"ContainerStarted","Data":"c08f6721dc806d978be24bdfec2abbf7cd778224a3df46fc63afb4b2fb70deca"} Nov 22 07:17:54 crc kubenswrapper[4853]: I1122 07:17:54.465553 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"aa25b342-38ae-4493-8129-710611d886fa","Type":"ContainerStarted","Data":"544f183a5d81ceea3431a0a69cf2c1e0e14bd4311f2abab9a5e030fa2412ad19"} Nov 22 07:17:54 crc kubenswrapper[4853]: I1122 07:17:54.509735 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-k8s-0" podStartSLOduration=5.073657853 podStartE2EDuration="9.509711989s" podCreationTimestamp="2025-11-22 07:17:45 +0000 UTC" firstStartedPulling="2025-11-22 07:17:48.384114037 +0000 UTC m=+467.224736683" lastFinishedPulling="2025-11-22 07:17:52.820168183 +0000 UTC m=+471.660790819" observedRunningTime="2025-11-22 07:17:54.499702225 +0000 UTC m=+473.340324881" watchObservedRunningTime="2025-11-22 07:17:54.509711989 +0000 UTC m=+473.350334625" Nov 22 07:17:54 crc kubenswrapper[4853]: I1122 07:17:54.718095 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-75447c4646-f42jp" Nov 22 07:17:54 crc kubenswrapper[4853]: I1122 07:17:54.718162 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-75447c4646-f42jp" Nov 22 07:17:54 crc 
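
The pod_startup_latency_tracker entries encode a simple relationship: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration is that E2E figure minus the image-pull window (lastFinishedPulling minus firstStartedPulling), i.e. startup latency with pull time excluded. The alertmanager-main-0 numbers check out: 11.513039276s minus a 9.19781242s pull window leaves 2.315226856s, which the tracker prints as the float 2.3152268559999998. A small Go check using the timestamps copied from that entry:

    package main

    import (
    	"fmt"
    	"time"
    )

    // layout matches how Go's time.Time.String() formats these fields,
    // which is why they parse back cleanly.
    const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

    func mustParse(s string) time.Time {
    	t, err := time.Parse(layout, s)
    	if err != nil {
    		panic(err)
    	}
    	return t
    }

    func main() {
    	created := mustParse("2025-11-22 07:17:40 +0000 UTC")
    	firstPull := mustParse("2025-11-22 07:17:41.18365449 +0000 UTC")
    	lastPull := mustParse("2025-11-22 07:17:50.38146691 +0000 UTC")
    	running := mustParse("2025-11-22 07:17:51.513039276 +0000 UTC")

    	e2e := running.Sub(created)
    	slo := e2e - lastPull.Sub(firstPull) // E2E minus image-pull time
    	fmt.Println(e2e, slo)                // 11.513039276s 2.315226856s
    }

Note that the E2E figure lines up with watchObservedRunningTime rather than observedRunningTime; creation timestamps have whole-second granularity, which is also visible above.
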
Nov 22 07:17:55 crc kubenswrapper[4853]: I1122 07:17:55.478079 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-75447c4646-f42jp"
Nov 22 07:17:55 crc kubenswrapper[4853]: I1122 07:17:55.569856 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-5nds5"]
Nov 22 07:17:56 crc kubenswrapper[4853]: I1122 07:17:56.221066 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-k8s-0"
Nov 22 07:18:05 crc kubenswrapper[4853]: I1122 07:18:05.254853 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/metrics-server-788dbc4c78-5xql9"
Nov 22 07:18:05 crc kubenswrapper[4853]: I1122 07:18:05.255879 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/metrics-server-788dbc4c78-5xql9"
Nov 22 07:18:14 crc kubenswrapper[4853]: I1122 07:18:14.366541 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-2p6qj" podUID="541af556-5dce-45ed-bf9e-f6faf6b146ca" containerName="registry" containerID="cri-o://f4f39b93f94d6246c83cd61360244d28ad7d33d8c88382c36531634d21d2027c" gracePeriod=30
Nov 22 07:18:14 crc kubenswrapper[4853]: I1122 07:18:14.768986 4853 generic.go:334] "Generic (PLEG): container finished" podID="541af556-5dce-45ed-bf9e-f6faf6b146ca" containerID="f4f39b93f94d6246c83cd61360244d28ad7d33d8c88382c36531634d21d2027c" exitCode=0
Nov 22 07:18:14 crc kubenswrapper[4853]: I1122 07:18:14.769062 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-2p6qj" event={"ID":"541af556-5dce-45ed-bf9e-f6faf6b146ca","Type":"ContainerDied","Data":"f4f39b93f94d6246c83cd61360244d28ad7d33d8c88382c36531634d21d2027c"}
Nov 22 07:18:14 crc kubenswrapper[4853]: I1122 07:18:14.769359 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-2p6qj" event={"ID":"541af556-5dce-45ed-bf9e-f6faf6b146ca","Type":"ContainerDied","Data":"90a058e804145bfe1168c745466a923a062ec5370e5ed9af59db6a62a529e8ae"}
Nov 22 07:18:14 crc kubenswrapper[4853]: I1122 07:18:14.769381 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="90a058e804145bfe1168c745466a923a062ec5370e5ed9af59db6a62a529e8ae"
Nov 22 07:18:14 crc kubenswrapper[4853]: I1122 07:18:14.777286 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-2p6qj"
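
Here the registry container is asked to stop with gracePeriod=30 and reports ContainerDied with exitCode=0 about 400ms later: it shut down cleanly on SIGTERM well inside its budget, so the runtime never had to escalate to SIGKILL. A sketch of the underlying TERM-then-KILL pattern (illustrative only; the kubelet delegates the actual kill to CRI-O over CRI):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"syscall"
    	"time"
    )

    // stopWithGrace sends SIGTERM, waits up to grace, then SIGKILLs.
    func stopWithGrace(cmd *exec.Cmd, grace time.Duration) error {
    	_ = cmd.Process.Signal(syscall.SIGTERM)
    	done := make(chan error, 1)
    	go func() { done <- cmd.Wait() }()
    	select {
    	case err := <-done:
    		return err // exited on its own: the exitCode=0 case above
    	case <-time.After(grace):
    		_ = cmd.Process.Kill() // grace period expired
    		return <-done
    	}
    }

    func main() {
    	cmd := exec.Command("sleep", "60")
    	if err := cmd.Start(); err != nil {
    		panic(err)
    	}
    	fmt.Println(stopWithGrace(cmd, 2*time.Second))
    }

Compare the console containers further down, which leave the same kill path with exitCode=2 rather than a clean zero.
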
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-2p6qj" Nov 22 07:18:14 crc kubenswrapper[4853]: I1122 07:18:14.903208 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/541af556-5dce-45ed-bf9e-f6faf6b146ca-registry-certificates\") pod \"541af556-5dce-45ed-bf9e-f6faf6b146ca\" (UID: \"541af556-5dce-45ed-bf9e-f6faf6b146ca\") " Nov 22 07:18:14 crc kubenswrapper[4853]: I1122 07:18:14.903602 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/541af556-5dce-45ed-bf9e-f6faf6b146ca-trusted-ca\") pod \"541af556-5dce-45ed-bf9e-f6faf6b146ca\" (UID: \"541af556-5dce-45ed-bf9e-f6faf6b146ca\") " Nov 22 07:18:14 crc kubenswrapper[4853]: I1122 07:18:14.903652 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/541af556-5dce-45ed-bf9e-f6faf6b146ca-ca-trust-extracted\") pod \"541af556-5dce-45ed-bf9e-f6faf6b146ca\" (UID: \"541af556-5dce-45ed-bf9e-f6faf6b146ca\") " Nov 22 07:18:14 crc kubenswrapper[4853]: I1122 07:18:14.903839 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/541af556-5dce-45ed-bf9e-f6faf6b146ca-registry-tls\") pod \"541af556-5dce-45ed-bf9e-f6faf6b146ca\" (UID: \"541af556-5dce-45ed-bf9e-f6faf6b146ca\") " Nov 22 07:18:14 crc kubenswrapper[4853]: I1122 07:18:14.903908 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w2rbq\" (UniqueName: \"kubernetes.io/projected/541af556-5dce-45ed-bf9e-f6faf6b146ca-kube-api-access-w2rbq\") pod \"541af556-5dce-45ed-bf9e-f6faf6b146ca\" (UID: \"541af556-5dce-45ed-bf9e-f6faf6b146ca\") " Nov 22 07:18:14 crc kubenswrapper[4853]: I1122 07:18:14.903979 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/541af556-5dce-45ed-bf9e-f6faf6b146ca-bound-sa-token\") pod \"541af556-5dce-45ed-bf9e-f6faf6b146ca\" (UID: \"541af556-5dce-45ed-bf9e-f6faf6b146ca\") " Nov 22 07:18:14 crc kubenswrapper[4853]: I1122 07:18:14.904056 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/541af556-5dce-45ed-bf9e-f6faf6b146ca-installation-pull-secrets\") pod \"541af556-5dce-45ed-bf9e-f6faf6b146ca\" (UID: \"541af556-5dce-45ed-bf9e-f6faf6b146ca\") " Nov 22 07:18:14 crc kubenswrapper[4853]: I1122 07:18:14.904154 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/541af556-5dce-45ed-bf9e-f6faf6b146ca-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "541af556-5dce-45ed-bf9e-f6faf6b146ca" (UID: "541af556-5dce-45ed-bf9e-f6faf6b146ca"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:18:14 crc kubenswrapper[4853]: I1122 07:18:14.904256 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"541af556-5dce-45ed-bf9e-f6faf6b146ca\" (UID: \"541af556-5dce-45ed-bf9e-f6faf6b146ca\") " Nov 22 07:18:14 crc kubenswrapper[4853]: I1122 07:18:14.904572 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/541af556-5dce-45ed-bf9e-f6faf6b146ca-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "541af556-5dce-45ed-bf9e-f6faf6b146ca" (UID: "541af556-5dce-45ed-bf9e-f6faf6b146ca"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:18:14 crc kubenswrapper[4853]: I1122 07:18:14.904954 4853 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/541af556-5dce-45ed-bf9e-f6faf6b146ca-registry-certificates\") on node \"crc\" DevicePath \"\"" Nov 22 07:18:14 crc kubenswrapper[4853]: I1122 07:18:14.913043 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/541af556-5dce-45ed-bf9e-f6faf6b146ca-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "541af556-5dce-45ed-bf9e-f6faf6b146ca" (UID: "541af556-5dce-45ed-bf9e-f6faf6b146ca"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:18:14 crc kubenswrapper[4853]: I1122 07:18:14.918265 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/541af556-5dce-45ed-bf9e-f6faf6b146ca-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "541af556-5dce-45ed-bf9e-f6faf6b146ca" (UID: "541af556-5dce-45ed-bf9e-f6faf6b146ca"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:18:14 crc kubenswrapper[4853]: I1122 07:18:14.918740 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/541af556-5dce-45ed-bf9e-f6faf6b146ca-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "541af556-5dce-45ed-bf9e-f6faf6b146ca" (UID: "541af556-5dce-45ed-bf9e-f6faf6b146ca"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:18:14 crc kubenswrapper[4853]: I1122 07:18:14.919236 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/541af556-5dce-45ed-bf9e-f6faf6b146ca-kube-api-access-w2rbq" (OuterVolumeSpecName: "kube-api-access-w2rbq") pod "541af556-5dce-45ed-bf9e-f6faf6b146ca" (UID: "541af556-5dce-45ed-bf9e-f6faf6b146ca"). InnerVolumeSpecName "kube-api-access-w2rbq". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:18:14 crc kubenswrapper[4853]: I1122 07:18:14.928328 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "541af556-5dce-45ed-bf9e-f6faf6b146ca" (UID: "541af556-5dce-45ed-bf9e-f6faf6b146ca"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Nov 22 07:18:14 crc kubenswrapper[4853]: I1122 07:18:14.937484 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/541af556-5dce-45ed-bf9e-f6faf6b146ca-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "541af556-5dce-45ed-bf9e-f6faf6b146ca" (UID: "541af556-5dce-45ed-bf9e-f6faf6b146ca"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:18:15 crc kubenswrapper[4853]: I1122 07:18:15.006700 4853 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/541af556-5dce-45ed-bf9e-f6faf6b146ca-registry-tls\") on node \"crc\" DevicePath \"\"" Nov 22 07:18:15 crc kubenswrapper[4853]: I1122 07:18:15.006743 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w2rbq\" (UniqueName: \"kubernetes.io/projected/541af556-5dce-45ed-bf9e-f6faf6b146ca-kube-api-access-w2rbq\") on node \"crc\" DevicePath \"\"" Nov 22 07:18:15 crc kubenswrapper[4853]: I1122 07:18:15.006774 4853 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/541af556-5dce-45ed-bf9e-f6faf6b146ca-bound-sa-token\") on node \"crc\" DevicePath \"\"" Nov 22 07:18:15 crc kubenswrapper[4853]: I1122 07:18:15.006786 4853 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/541af556-5dce-45ed-bf9e-f6faf6b146ca-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Nov 22 07:18:15 crc kubenswrapper[4853]: I1122 07:18:15.006797 4853 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/541af556-5dce-45ed-bf9e-f6faf6b146ca-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 22 07:18:15 crc kubenswrapper[4853]: I1122 07:18:15.006808 4853 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/541af556-5dce-45ed-bf9e-f6faf6b146ca-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Nov 22 07:18:15 crc kubenswrapper[4853]: I1122 07:18:15.776985 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-2p6qj" Nov 22 07:18:15 crc kubenswrapper[4853]: I1122 07:18:15.795795 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-2p6qj"] Nov 22 07:18:15 crc kubenswrapper[4853]: I1122 07:18:15.802353 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-2p6qj"] Nov 22 07:18:17 crc kubenswrapper[4853]: I1122 07:18:17.757652 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="541af556-5dce-45ed-bf9e-f6faf6b146ca" path="/var/lib/kubelet/pods/541af556-5dce-45ed-bf9e-f6faf6b146ca/volumes" Nov 22 07:18:20 crc kubenswrapper[4853]: I1122 07:18:20.667555 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-5nds5" podUID="6d3c61d5-518d-443e-beb3-a0bf27a07be4" containerName="console" containerID="cri-o://91182026ad1b4e92eaba8dd93f41008201a720fdd63ef719be0d56786dbe22d7" gracePeriod=15 Nov 22 07:18:21 crc kubenswrapper[4853]: E1122 07:18:21.017303 4853 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod541af556_5dce_45ed_bf9e_f6faf6b146ca.slice\": RecentStats: unable to find data in memory cache]" Nov 22 07:18:21 crc kubenswrapper[4853]: E1122 07:18:21.017855 4853 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod541af556_5dce_45ed_bf9e_f6faf6b146ca.slice\": RecentStats: unable to find data in memory cache]" Nov 22 07:18:21 crc kubenswrapper[4853]: I1122 07:18:21.556883 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-5nds5_6d3c61d5-518d-443e-beb3-a0bf27a07be4/console/0.log" Nov 22 07:18:21 crc kubenswrapper[4853]: I1122 07:18:21.557390 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-5nds5" Nov 22 07:18:21 crc kubenswrapper[4853]: I1122 07:18:21.725319 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6d3c61d5-518d-443e-beb3-a0bf27a07be4-oauth-serving-cert\") pod \"6d3c61d5-518d-443e-beb3-a0bf27a07be4\" (UID: \"6d3c61d5-518d-443e-beb3-a0bf27a07be4\") " Nov 22 07:18:21 crc kubenswrapper[4853]: I1122 07:18:21.725467 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6d3c61d5-518d-443e-beb3-a0bf27a07be4-console-config\") pod \"6d3c61d5-518d-443e-beb3-a0bf27a07be4\" (UID: \"6d3c61d5-518d-443e-beb3-a0bf27a07be4\") " Nov 22 07:18:21 crc kubenswrapper[4853]: I1122 07:18:21.725539 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6d3c61d5-518d-443e-beb3-a0bf27a07be4-service-ca\") pod \"6d3c61d5-518d-443e-beb3-a0bf27a07be4\" (UID: \"6d3c61d5-518d-443e-beb3-a0bf27a07be4\") " Nov 22 07:18:21 crc kubenswrapper[4853]: I1122 07:18:21.725580 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6d3c61d5-518d-443e-beb3-a0bf27a07be4-console-oauth-config\") pod \"6d3c61d5-518d-443e-beb3-a0bf27a07be4\" (UID: \"6d3c61d5-518d-443e-beb3-a0bf27a07be4\") " Nov 22 07:18:21 crc kubenswrapper[4853]: I1122 07:18:21.725601 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qhxxn\" (UniqueName: \"kubernetes.io/projected/6d3c61d5-518d-443e-beb3-a0bf27a07be4-kube-api-access-qhxxn\") pod \"6d3c61d5-518d-443e-beb3-a0bf27a07be4\" (UID: \"6d3c61d5-518d-443e-beb3-a0bf27a07be4\") " Nov 22 07:18:21 crc kubenswrapper[4853]: I1122 07:18:21.725625 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6d3c61d5-518d-443e-beb3-a0bf27a07be4-console-serving-cert\") pod \"6d3c61d5-518d-443e-beb3-a0bf27a07be4\" (UID: \"6d3c61d5-518d-443e-beb3-a0bf27a07be4\") " Nov 22 07:18:21 crc kubenswrapper[4853]: I1122 07:18:21.725660 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6d3c61d5-518d-443e-beb3-a0bf27a07be4-trusted-ca-bundle\") pod \"6d3c61d5-518d-443e-beb3-a0bf27a07be4\" (UID: \"6d3c61d5-518d-443e-beb3-a0bf27a07be4\") " Nov 22 07:18:21 crc kubenswrapper[4853]: I1122 07:18:21.726691 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6d3c61d5-518d-443e-beb3-a0bf27a07be4-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6d3c61d5-518d-443e-beb3-a0bf27a07be4" (UID: "6d3c61d5-518d-443e-beb3-a0bf27a07be4"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:18:21 crc kubenswrapper[4853]: I1122 07:18:21.726831 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6d3c61d5-518d-443e-beb3-a0bf27a07be4-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "6d3c61d5-518d-443e-beb3-a0bf27a07be4" (UID: "6d3c61d5-518d-443e-beb3-a0bf27a07be4"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:18:21 crc kubenswrapper[4853]: I1122 07:18:21.726851 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6d3c61d5-518d-443e-beb3-a0bf27a07be4-service-ca" (OuterVolumeSpecName: "service-ca") pod "6d3c61d5-518d-443e-beb3-a0bf27a07be4" (UID: "6d3c61d5-518d-443e-beb3-a0bf27a07be4"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:18:21 crc kubenswrapper[4853]: I1122 07:18:21.726863 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6d3c61d5-518d-443e-beb3-a0bf27a07be4-console-config" (OuterVolumeSpecName: "console-config") pod "6d3c61d5-518d-443e-beb3-a0bf27a07be4" (UID: "6d3c61d5-518d-443e-beb3-a0bf27a07be4"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:18:21 crc kubenswrapper[4853]: I1122 07:18:21.734363 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d3c61d5-518d-443e-beb3-a0bf27a07be4-kube-api-access-qhxxn" (OuterVolumeSpecName: "kube-api-access-qhxxn") pod "6d3c61d5-518d-443e-beb3-a0bf27a07be4" (UID: "6d3c61d5-518d-443e-beb3-a0bf27a07be4"). InnerVolumeSpecName "kube-api-access-qhxxn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:18:21 crc kubenswrapper[4853]: I1122 07:18:21.735266 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d3c61d5-518d-443e-beb3-a0bf27a07be4-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "6d3c61d5-518d-443e-beb3-a0bf27a07be4" (UID: "6d3c61d5-518d-443e-beb3-a0bf27a07be4"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:18:21 crc kubenswrapper[4853]: I1122 07:18:21.736221 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d3c61d5-518d-443e-beb3-a0bf27a07be4-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "6d3c61d5-518d-443e-beb3-a0bf27a07be4" (UID: "6d3c61d5-518d-443e-beb3-a0bf27a07be4"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:18:21 crc kubenswrapper[4853]: I1122 07:18:21.819603 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-5nds5_6d3c61d5-518d-443e-beb3-a0bf27a07be4/console/0.log" Nov 22 07:18:21 crc kubenswrapper[4853]: I1122 07:18:21.819710 4853 generic.go:334] "Generic (PLEG): container finished" podID="6d3c61d5-518d-443e-beb3-a0bf27a07be4" containerID="91182026ad1b4e92eaba8dd93f41008201a720fdd63ef719be0d56786dbe22d7" exitCode=2 Nov 22 07:18:21 crc kubenswrapper[4853]: I1122 07:18:21.819818 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-5nds5" event={"ID":"6d3c61d5-518d-443e-beb3-a0bf27a07be4","Type":"ContainerDied","Data":"91182026ad1b4e92eaba8dd93f41008201a720fdd63ef719be0d56786dbe22d7"} Nov 22 07:18:21 crc kubenswrapper[4853]: I1122 07:18:21.819850 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-5nds5" Nov 22 07:18:21 crc kubenswrapper[4853]: I1122 07:18:21.819884 4853 scope.go:117] "RemoveContainer" containerID="91182026ad1b4e92eaba8dd93f41008201a720fdd63ef719be0d56786dbe22d7" Nov 22 07:18:21 crc kubenswrapper[4853]: I1122 07:18:21.819869 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-5nds5" event={"ID":"6d3c61d5-518d-443e-beb3-a0bf27a07be4","Type":"ContainerDied","Data":"b17a7802bc0213ba96a5bca0eb6b8a0c92a507db5071f8650d60e7b03c987d3a"} Nov 22 07:18:21 crc kubenswrapper[4853]: I1122 07:18:21.827238 4853 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6d3c61d5-518d-443e-beb3-a0bf27a07be4-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 22 07:18:21 crc kubenswrapper[4853]: I1122 07:18:21.827266 4853 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6d3c61d5-518d-443e-beb3-a0bf27a07be4-console-config\") on node \"crc\" DevicePath \"\"" Nov 22 07:18:21 crc kubenswrapper[4853]: I1122 07:18:21.827277 4853 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6d3c61d5-518d-443e-beb3-a0bf27a07be4-service-ca\") on node \"crc\" DevicePath \"\"" Nov 22 07:18:21 crc kubenswrapper[4853]: I1122 07:18:21.827289 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qhxxn\" (UniqueName: \"kubernetes.io/projected/6d3c61d5-518d-443e-beb3-a0bf27a07be4-kube-api-access-qhxxn\") on node \"crc\" DevicePath \"\"" Nov 22 07:18:21 crc kubenswrapper[4853]: I1122 07:18:21.827300 4853 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6d3c61d5-518d-443e-beb3-a0bf27a07be4-console-oauth-config\") on node \"crc\" DevicePath \"\"" Nov 22 07:18:21 crc kubenswrapper[4853]: I1122 07:18:21.827310 4853 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6d3c61d5-518d-443e-beb3-a0bf27a07be4-console-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 22 07:18:21 crc kubenswrapper[4853]: I1122 07:18:21.827320 4853 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6d3c61d5-518d-443e-beb3-a0bf27a07be4-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:18:21 crc kubenswrapper[4853]: I1122 07:18:21.841195 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-5nds5"] Nov 22 07:18:21 crc kubenswrapper[4853]: I1122 07:18:21.842815 4853 scope.go:117] "RemoveContainer" containerID="91182026ad1b4e92eaba8dd93f41008201a720fdd63ef719be0d56786dbe22d7" Nov 22 07:18:21 crc kubenswrapper[4853]: E1122 07:18:21.843543 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"91182026ad1b4e92eaba8dd93f41008201a720fdd63ef719be0d56786dbe22d7\": container with ID starting with 91182026ad1b4e92eaba8dd93f41008201a720fdd63ef719be0d56786dbe22d7 not found: ID does not exist" containerID="91182026ad1b4e92eaba8dd93f41008201a720fdd63ef719be0d56786dbe22d7" Nov 22 07:18:21 crc kubenswrapper[4853]: I1122 07:18:21.843583 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"91182026ad1b4e92eaba8dd93f41008201a720fdd63ef719be0d56786dbe22d7"} err="failed to get container 
status \"91182026ad1b4e92eaba8dd93f41008201a720fdd63ef719be0d56786dbe22d7\": rpc error: code = NotFound desc = could not find container \"91182026ad1b4e92eaba8dd93f41008201a720fdd63ef719be0d56786dbe22d7\": container with ID starting with 91182026ad1b4e92eaba8dd93f41008201a720fdd63ef719be0d56786dbe22d7 not found: ID does not exist" Nov 22 07:18:21 crc kubenswrapper[4853]: I1122 07:18:21.846398 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-5nds5"] Nov 22 07:18:23 crc kubenswrapper[4853]: I1122 07:18:23.759582 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6d3c61d5-518d-443e-beb3-a0bf27a07be4" path="/var/lib/kubelet/pods/6d3c61d5-518d-443e-beb3-a0bf27a07be4/volumes" Nov 22 07:18:25 crc kubenswrapper[4853]: I1122 07:18:25.261966 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/metrics-server-788dbc4c78-5xql9" Nov 22 07:18:25 crc kubenswrapper[4853]: I1122 07:18:25.267917 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/metrics-server-788dbc4c78-5xql9" Nov 22 07:18:31 crc kubenswrapper[4853]: E1122 07:18:31.164676 4853 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod541af556_5dce_45ed_bf9e_f6faf6b146ca.slice\": RecentStats: unable to find data in memory cache]" Nov 22 07:18:35 crc kubenswrapper[4853]: E1122 07:18:35.933211 4853 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod541af556_5dce_45ed_bf9e_f6faf6b146ca.slice\": RecentStats: unable to find data in memory cache]" Nov 22 07:18:41 crc kubenswrapper[4853]: E1122 07:18:41.344442 4853 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod541af556_5dce_45ed_bf9e_f6faf6b146ca.slice\": RecentStats: unable to find data in memory cache]" Nov 22 07:18:46 crc kubenswrapper[4853]: I1122 07:18:46.221943 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/prometheus-k8s-0" Nov 22 07:18:46 crc kubenswrapper[4853]: I1122 07:18:46.268665 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/prometheus-k8s-0" Nov 22 07:18:47 crc kubenswrapper[4853]: I1122 07:18:47.022041 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-k8s-0" Nov 22 07:18:48 crc kubenswrapper[4853]: E1122 07:18:48.198199 4853 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod541af556_5dce_45ed_bf9e_f6faf6b146ca.slice\": RecentStats: unable to find data in memory cache]" Nov 22 07:18:48 crc kubenswrapper[4853]: E1122 07:18:48.198310 4853 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod541af556_5dce_45ed_bf9e_f6faf6b146ca.slice\": RecentStats: unable to find data in memory cache]" Nov 22 07:18:50 crc kubenswrapper[4853]: E1122 07:18:50.933759 4853 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod541af556_5dce_45ed_bf9e_f6faf6b146ca.slice\": RecentStats: unable to find data in memory cache]" Nov 22 07:18:51 crc kubenswrapper[4853]: E1122 07:18:51.369868 4853 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod541af556_5dce_45ed_bf9e_f6faf6b146ca.slice\": RecentStats: unable to find data in memory cache]" Nov 22 07:19:01 crc kubenswrapper[4853]: E1122 07:19:01.526691 4853 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod541af556_5dce_45ed_bf9e_f6faf6b146ca.slice\": RecentStats: unable to find data in memory cache]" Nov 22 07:19:05 crc kubenswrapper[4853]: E1122 07:19:05.934101 4853 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod541af556_5dce_45ed_bf9e_f6faf6b146ca.slice\": RecentStats: unable to find data in memory cache]" Nov 22 07:19:11 crc kubenswrapper[4853]: E1122 07:19:11.687923 4853 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod541af556_5dce_45ed_bf9e_f6faf6b146ca.slice\": RecentStats: unable to find data in memory cache]" Nov 22 07:19:38 crc kubenswrapper[4853]: I1122 07:19:38.545324 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-fd7cb74df-54pkh"] Nov 22 07:19:38 crc kubenswrapper[4853]: E1122 07:19:38.546666 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="541af556-5dce-45ed-bf9e-f6faf6b146ca" containerName="registry" Nov 22 07:19:38 crc kubenswrapper[4853]: I1122 07:19:38.546692 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="541af556-5dce-45ed-bf9e-f6faf6b146ca" containerName="registry" Nov 22 07:19:38 crc kubenswrapper[4853]: E1122 07:19:38.546714 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d3c61d5-518d-443e-beb3-a0bf27a07be4" containerName="console" Nov 22 07:19:38 crc kubenswrapper[4853]: I1122 07:19:38.546725 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d3c61d5-518d-443e-beb3-a0bf27a07be4" containerName="console" Nov 22 07:19:38 crc kubenswrapper[4853]: I1122 07:19:38.546916 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d3c61d5-518d-443e-beb3-a0bf27a07be4" containerName="console" Nov 22 07:19:38 crc kubenswrapper[4853]: I1122 07:19:38.546945 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="541af556-5dce-45ed-bf9e-f6faf6b146ca" containerName="registry" Nov 22 07:19:38 crc kubenswrapper[4853]: I1122 07:19:38.547687 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-fd7cb74df-54pkh" Nov 22 07:19:38 crc kubenswrapper[4853]: I1122 07:19:38.567503 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-fd7cb74df-54pkh"] Nov 22 07:19:38 crc kubenswrapper[4853]: I1122 07:19:38.612043 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cnsmn\" (UniqueName: \"kubernetes.io/projected/770673d6-8086-419e-82fd-275359586fc8-kube-api-access-cnsmn\") pod \"console-fd7cb74df-54pkh\" (UID: \"770673d6-8086-419e-82fd-275359586fc8\") " pod="openshift-console/console-fd7cb74df-54pkh" Nov 22 07:19:38 crc kubenswrapper[4853]: I1122 07:19:38.612106 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/770673d6-8086-419e-82fd-275359586fc8-service-ca\") pod \"console-fd7cb74df-54pkh\" (UID: \"770673d6-8086-419e-82fd-275359586fc8\") " pod="openshift-console/console-fd7cb74df-54pkh" Nov 22 07:19:38 crc kubenswrapper[4853]: I1122 07:19:38.612143 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/770673d6-8086-419e-82fd-275359586fc8-console-oauth-config\") pod \"console-fd7cb74df-54pkh\" (UID: \"770673d6-8086-419e-82fd-275359586fc8\") " pod="openshift-console/console-fd7cb74df-54pkh" Nov 22 07:19:38 crc kubenswrapper[4853]: I1122 07:19:38.612288 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/770673d6-8086-419e-82fd-275359586fc8-oauth-serving-cert\") pod \"console-fd7cb74df-54pkh\" (UID: \"770673d6-8086-419e-82fd-275359586fc8\") " pod="openshift-console/console-fd7cb74df-54pkh" Nov 22 07:19:38 crc kubenswrapper[4853]: I1122 07:19:38.612383 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/770673d6-8086-419e-82fd-275359586fc8-console-serving-cert\") pod \"console-fd7cb74df-54pkh\" (UID: \"770673d6-8086-419e-82fd-275359586fc8\") " pod="openshift-console/console-fd7cb74df-54pkh" Nov 22 07:19:38 crc kubenswrapper[4853]: I1122 07:19:38.612425 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/770673d6-8086-419e-82fd-275359586fc8-trusted-ca-bundle\") pod \"console-fd7cb74df-54pkh\" (UID: \"770673d6-8086-419e-82fd-275359586fc8\") " pod="openshift-console/console-fd7cb74df-54pkh" Nov 22 07:19:38 crc kubenswrapper[4853]: I1122 07:19:38.612450 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/770673d6-8086-419e-82fd-275359586fc8-console-config\") pod \"console-fd7cb74df-54pkh\" (UID: \"770673d6-8086-419e-82fd-275359586fc8\") " pod="openshift-console/console-fd7cb74df-54pkh" Nov 22 07:19:38 crc kubenswrapper[4853]: I1122 07:19:38.713222 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/770673d6-8086-419e-82fd-275359586fc8-console-serving-cert\") pod \"console-fd7cb74df-54pkh\" (UID: \"770673d6-8086-419e-82fd-275359586fc8\") " pod="openshift-console/console-fd7cb74df-54pkh" Nov 22 07:19:38 crc 
kubenswrapper[4853]: I1122 07:19:38.713303 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/770673d6-8086-419e-82fd-275359586fc8-trusted-ca-bundle\") pod \"console-fd7cb74df-54pkh\" (UID: \"770673d6-8086-419e-82fd-275359586fc8\") " pod="openshift-console/console-fd7cb74df-54pkh" Nov 22 07:19:38 crc kubenswrapper[4853]: I1122 07:19:38.713321 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/770673d6-8086-419e-82fd-275359586fc8-console-config\") pod \"console-fd7cb74df-54pkh\" (UID: \"770673d6-8086-419e-82fd-275359586fc8\") " pod="openshift-console/console-fd7cb74df-54pkh" Nov 22 07:19:38 crc kubenswrapper[4853]: I1122 07:19:38.713381 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cnsmn\" (UniqueName: \"kubernetes.io/projected/770673d6-8086-419e-82fd-275359586fc8-kube-api-access-cnsmn\") pod \"console-fd7cb74df-54pkh\" (UID: \"770673d6-8086-419e-82fd-275359586fc8\") " pod="openshift-console/console-fd7cb74df-54pkh" Nov 22 07:19:38 crc kubenswrapper[4853]: I1122 07:19:38.713412 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/770673d6-8086-419e-82fd-275359586fc8-service-ca\") pod \"console-fd7cb74df-54pkh\" (UID: \"770673d6-8086-419e-82fd-275359586fc8\") " pod="openshift-console/console-fd7cb74df-54pkh" Nov 22 07:19:38 crc kubenswrapper[4853]: I1122 07:19:38.713430 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/770673d6-8086-419e-82fd-275359586fc8-console-oauth-config\") pod \"console-fd7cb74df-54pkh\" (UID: \"770673d6-8086-419e-82fd-275359586fc8\") " pod="openshift-console/console-fd7cb74df-54pkh" Nov 22 07:19:38 crc kubenswrapper[4853]: I1122 07:19:38.713454 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/770673d6-8086-419e-82fd-275359586fc8-oauth-serving-cert\") pod \"console-fd7cb74df-54pkh\" (UID: \"770673d6-8086-419e-82fd-275359586fc8\") " pod="openshift-console/console-fd7cb74df-54pkh" Nov 22 07:19:38 crc kubenswrapper[4853]: I1122 07:19:38.715234 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/770673d6-8086-419e-82fd-275359586fc8-trusted-ca-bundle\") pod \"console-fd7cb74df-54pkh\" (UID: \"770673d6-8086-419e-82fd-275359586fc8\") " pod="openshift-console/console-fd7cb74df-54pkh" Nov 22 07:19:38 crc kubenswrapper[4853]: I1122 07:19:38.715293 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/770673d6-8086-419e-82fd-275359586fc8-console-config\") pod \"console-fd7cb74df-54pkh\" (UID: \"770673d6-8086-419e-82fd-275359586fc8\") " pod="openshift-console/console-fd7cb74df-54pkh" Nov 22 07:19:38 crc kubenswrapper[4853]: I1122 07:19:38.715315 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/770673d6-8086-419e-82fd-275359586fc8-service-ca\") pod \"console-fd7cb74df-54pkh\" (UID: \"770673d6-8086-419e-82fd-275359586fc8\") " pod="openshift-console/console-fd7cb74df-54pkh" Nov 22 07:19:38 crc kubenswrapper[4853]: I1122 07:19:38.715904 4853 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/770673d6-8086-419e-82fd-275359586fc8-oauth-serving-cert\") pod \"console-fd7cb74df-54pkh\" (UID: \"770673d6-8086-419e-82fd-275359586fc8\") " pod="openshift-console/console-fd7cb74df-54pkh" Nov 22 07:19:38 crc kubenswrapper[4853]: I1122 07:19:38.723020 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/770673d6-8086-419e-82fd-275359586fc8-console-serving-cert\") pod \"console-fd7cb74df-54pkh\" (UID: \"770673d6-8086-419e-82fd-275359586fc8\") " pod="openshift-console/console-fd7cb74df-54pkh" Nov 22 07:19:38 crc kubenswrapper[4853]: I1122 07:19:38.726897 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/770673d6-8086-419e-82fd-275359586fc8-console-oauth-config\") pod \"console-fd7cb74df-54pkh\" (UID: \"770673d6-8086-419e-82fd-275359586fc8\") " pod="openshift-console/console-fd7cb74df-54pkh" Nov 22 07:19:38 crc kubenswrapper[4853]: I1122 07:19:38.740249 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cnsmn\" (UniqueName: \"kubernetes.io/projected/770673d6-8086-419e-82fd-275359586fc8-kube-api-access-cnsmn\") pod \"console-fd7cb74df-54pkh\" (UID: \"770673d6-8086-419e-82fd-275359586fc8\") " pod="openshift-console/console-fd7cb74df-54pkh" Nov 22 07:19:38 crc kubenswrapper[4853]: I1122 07:19:38.872187 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-fd7cb74df-54pkh" Nov 22 07:19:39 crc kubenswrapper[4853]: I1122 07:19:39.090175 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-fd7cb74df-54pkh"] Nov 22 07:19:39 crc kubenswrapper[4853]: I1122 07:19:39.320197 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-fd7cb74df-54pkh" event={"ID":"770673d6-8086-419e-82fd-275359586fc8","Type":"ContainerStarted","Data":"6510898f759c3dee620e420553b32941ae50f3647a9f47adc1766ba81216a480"} Nov 22 07:19:39 crc kubenswrapper[4853]: I1122 07:19:39.321023 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-fd7cb74df-54pkh" event={"ID":"770673d6-8086-419e-82fd-275359586fc8","Type":"ContainerStarted","Data":"fe3d4ce43dad4080bcac442739372a6a224a4289ef219acaa37b68ca39755831"} Nov 22 07:19:39 crc kubenswrapper[4853]: I1122 07:19:39.339805 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-fd7cb74df-54pkh" podStartSLOduration=1.339783097 podStartE2EDuration="1.339783097s" podCreationTimestamp="2025-11-22 07:19:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:19:39.338687417 +0000 UTC m=+578.179310053" watchObservedRunningTime="2025-11-22 07:19:39.339783097 +0000 UTC m=+578.180405723" Nov 22 07:19:48 crc kubenswrapper[4853]: I1122 07:19:48.872986 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-fd7cb74df-54pkh" Nov 22 07:19:48 crc kubenswrapper[4853]: I1122 07:19:48.873853 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-fd7cb74df-54pkh" Nov 22 07:19:48 crc kubenswrapper[4853]: I1122 07:19:48.878124 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-console/console-fd7cb74df-54pkh" Nov 22 07:19:49 crc kubenswrapper[4853]: I1122 07:19:49.390534 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-fd7cb74df-54pkh" Nov 22 07:19:49 crc kubenswrapper[4853]: I1122 07:19:49.453312 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-75447c4646-f42jp"] Nov 22 07:20:01 crc kubenswrapper[4853]: I1122 07:20:01.297604 4853 patch_prober.go:28] interesting pod/machine-config-daemon-fflvd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 07:20:01 crc kubenswrapper[4853]: I1122 07:20:01.298363 4853 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 07:20:14 crc kubenswrapper[4853]: I1122 07:20:14.514517 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-75447c4646-f42jp" podUID="d7614e6a-b9fe-4e08-9a68-28ea9b652739" containerName="console" containerID="cri-o://117c7b90ca358125f58cab4e094555650d807441ba8ba28e5ab4ff0f469dbb80" gracePeriod=15 Nov 22 07:20:14 crc kubenswrapper[4853]: I1122 07:20:14.719435 4853 patch_prober.go:28] interesting pod/console-75447c4646-f42jp container/console namespace/openshift-console: Readiness probe status=failure output="Get \"https://10.217.0.70:8443/health\": dial tcp 10.217.0.70:8443: connect: connection refused" start-of-body= Nov 22 07:20:14 crc kubenswrapper[4853]: I1122 07:20:14.719533 4853 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/console-75447c4646-f42jp" podUID="d7614e6a-b9fe-4e08-9a68-28ea9b652739" containerName="console" probeResult="failure" output="Get \"https://10.217.0.70:8443/health\": dial tcp 10.217.0.70:8443: connect: connection refused" Nov 22 07:20:14 crc kubenswrapper[4853]: I1122 07:20:14.900908 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-75447c4646-f42jp_d7614e6a-b9fe-4e08-9a68-28ea9b652739/console/0.log" Nov 22 07:20:14 crc kubenswrapper[4853]: I1122 07:20:14.901352 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-75447c4646-f42jp" Nov 22 07:20:15 crc kubenswrapper[4853]: I1122 07:20:15.017826 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/d7614e6a-b9fe-4e08-9a68-28ea9b652739-oauth-serving-cert\") pod \"d7614e6a-b9fe-4e08-9a68-28ea9b652739\" (UID: \"d7614e6a-b9fe-4e08-9a68-28ea9b652739\") " Nov 22 07:20:15 crc kubenswrapper[4853]: I1122 07:20:15.017893 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/d7614e6a-b9fe-4e08-9a68-28ea9b652739-console-serving-cert\") pod \"d7614e6a-b9fe-4e08-9a68-28ea9b652739\" (UID: \"d7614e6a-b9fe-4e08-9a68-28ea9b652739\") " Nov 22 07:20:15 crc kubenswrapper[4853]: I1122 07:20:15.017935 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-npql6\" (UniqueName: \"kubernetes.io/projected/d7614e6a-b9fe-4e08-9a68-28ea9b652739-kube-api-access-npql6\") pod \"d7614e6a-b9fe-4e08-9a68-28ea9b652739\" (UID: \"d7614e6a-b9fe-4e08-9a68-28ea9b652739\") " Nov 22 07:20:15 crc kubenswrapper[4853]: I1122 07:20:15.017966 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d7614e6a-b9fe-4e08-9a68-28ea9b652739-service-ca\") pod \"d7614e6a-b9fe-4e08-9a68-28ea9b652739\" (UID: \"d7614e6a-b9fe-4e08-9a68-28ea9b652739\") " Nov 22 07:20:15 crc kubenswrapper[4853]: I1122 07:20:15.018053 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/d7614e6a-b9fe-4e08-9a68-28ea9b652739-console-oauth-config\") pod \"d7614e6a-b9fe-4e08-9a68-28ea9b652739\" (UID: \"d7614e6a-b9fe-4e08-9a68-28ea9b652739\") " Nov 22 07:20:15 crc kubenswrapper[4853]: I1122 07:20:15.018094 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/d7614e6a-b9fe-4e08-9a68-28ea9b652739-console-config\") pod \"d7614e6a-b9fe-4e08-9a68-28ea9b652739\" (UID: \"d7614e6a-b9fe-4e08-9a68-28ea9b652739\") " Nov 22 07:20:15 crc kubenswrapper[4853]: I1122 07:20:15.018148 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d7614e6a-b9fe-4e08-9a68-28ea9b652739-trusted-ca-bundle\") pod \"d7614e6a-b9fe-4e08-9a68-28ea9b652739\" (UID: \"d7614e6a-b9fe-4e08-9a68-28ea9b652739\") " Nov 22 07:20:15 crc kubenswrapper[4853]: I1122 07:20:15.019161 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7614e6a-b9fe-4e08-9a68-28ea9b652739-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "d7614e6a-b9fe-4e08-9a68-28ea9b652739" (UID: "d7614e6a-b9fe-4e08-9a68-28ea9b652739"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:20:15 crc kubenswrapper[4853]: I1122 07:20:15.019156 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7614e6a-b9fe-4e08-9a68-28ea9b652739-service-ca" (OuterVolumeSpecName: "service-ca") pod "d7614e6a-b9fe-4e08-9a68-28ea9b652739" (UID: "d7614e6a-b9fe-4e08-9a68-28ea9b652739"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:20:15 crc kubenswrapper[4853]: I1122 07:20:15.019250 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7614e6a-b9fe-4e08-9a68-28ea9b652739-console-config" (OuterVolumeSpecName: "console-config") pod "d7614e6a-b9fe-4e08-9a68-28ea9b652739" (UID: "d7614e6a-b9fe-4e08-9a68-28ea9b652739"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:20:15 crc kubenswrapper[4853]: I1122 07:20:15.019696 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7614e6a-b9fe-4e08-9a68-28ea9b652739-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "d7614e6a-b9fe-4e08-9a68-28ea9b652739" (UID: "d7614e6a-b9fe-4e08-9a68-28ea9b652739"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:20:15 crc kubenswrapper[4853]: I1122 07:20:15.024863 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7614e6a-b9fe-4e08-9a68-28ea9b652739-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "d7614e6a-b9fe-4e08-9a68-28ea9b652739" (UID: "d7614e6a-b9fe-4e08-9a68-28ea9b652739"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:20:15 crc kubenswrapper[4853]: I1122 07:20:15.025077 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7614e6a-b9fe-4e08-9a68-28ea9b652739-kube-api-access-npql6" (OuterVolumeSpecName: "kube-api-access-npql6") pod "d7614e6a-b9fe-4e08-9a68-28ea9b652739" (UID: "d7614e6a-b9fe-4e08-9a68-28ea9b652739"). InnerVolumeSpecName "kube-api-access-npql6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:20:15 crc kubenswrapper[4853]: I1122 07:20:15.026119 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7614e6a-b9fe-4e08-9a68-28ea9b652739-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "d7614e6a-b9fe-4e08-9a68-28ea9b652739" (UID: "d7614e6a-b9fe-4e08-9a68-28ea9b652739"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:20:15 crc kubenswrapper[4853]: I1122 07:20:15.120444 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-npql6\" (UniqueName: \"kubernetes.io/projected/d7614e6a-b9fe-4e08-9a68-28ea9b652739-kube-api-access-npql6\") on node \"crc\" DevicePath \"\"" Nov 22 07:20:15 crc kubenswrapper[4853]: I1122 07:20:15.120513 4853 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d7614e6a-b9fe-4e08-9a68-28ea9b652739-service-ca\") on node \"crc\" DevicePath \"\"" Nov 22 07:20:15 crc kubenswrapper[4853]: I1122 07:20:15.120525 4853 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/d7614e6a-b9fe-4e08-9a68-28ea9b652739-console-oauth-config\") on node \"crc\" DevicePath \"\"" Nov 22 07:20:15 crc kubenswrapper[4853]: I1122 07:20:15.120534 4853 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/d7614e6a-b9fe-4e08-9a68-28ea9b652739-console-config\") on node \"crc\" DevicePath \"\"" Nov 22 07:20:15 crc kubenswrapper[4853]: I1122 07:20:15.120544 4853 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d7614e6a-b9fe-4e08-9a68-28ea9b652739-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:20:15 crc kubenswrapper[4853]: I1122 07:20:15.120553 4853 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/d7614e6a-b9fe-4e08-9a68-28ea9b652739-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 22 07:20:15 crc kubenswrapper[4853]: I1122 07:20:15.120561 4853 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/d7614e6a-b9fe-4e08-9a68-28ea9b652739-console-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 22 07:20:15 crc kubenswrapper[4853]: I1122 07:20:15.603140 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-75447c4646-f42jp_d7614e6a-b9fe-4e08-9a68-28ea9b652739/console/0.log" Nov 22 07:20:15 crc kubenswrapper[4853]: I1122 07:20:15.603211 4853 generic.go:334] "Generic (PLEG): container finished" podID="d7614e6a-b9fe-4e08-9a68-28ea9b652739" containerID="117c7b90ca358125f58cab4e094555650d807441ba8ba28e5ab4ff0f469dbb80" exitCode=2 Nov 22 07:20:15 crc kubenswrapper[4853]: I1122 07:20:15.603257 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-75447c4646-f42jp" event={"ID":"d7614e6a-b9fe-4e08-9a68-28ea9b652739","Type":"ContainerDied","Data":"117c7b90ca358125f58cab4e094555650d807441ba8ba28e5ab4ff0f469dbb80"} Nov 22 07:20:15 crc kubenswrapper[4853]: I1122 07:20:15.603291 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-75447c4646-f42jp" event={"ID":"d7614e6a-b9fe-4e08-9a68-28ea9b652739","Type":"ContainerDied","Data":"030bb7a179f60fc6a829c962b6327b855e100cab7005a353f3b93a4d01386227"} Nov 22 07:20:15 crc kubenswrapper[4853]: I1122 07:20:15.603312 4853 scope.go:117] "RemoveContainer" containerID="117c7b90ca358125f58cab4e094555650d807441ba8ba28e5ab4ff0f469dbb80" Nov 22 07:20:15 crc kubenswrapper[4853]: I1122 07:20:15.603370 4853 util.go:48] "No ready sandbox for pod can be found. 
Nov 22 07:20:15 crc kubenswrapper[4853]: I1122 07:20:15.624265 4853 scope.go:117] "RemoveContainer" containerID="117c7b90ca358125f58cab4e094555650d807441ba8ba28e5ab4ff0f469dbb80"
Nov 22 07:20:15 crc kubenswrapper[4853]: E1122 07:20:15.626127 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"117c7b90ca358125f58cab4e094555650d807441ba8ba28e5ab4ff0f469dbb80\": container with ID starting with 117c7b90ca358125f58cab4e094555650d807441ba8ba28e5ab4ff0f469dbb80 not found: ID does not exist" containerID="117c7b90ca358125f58cab4e094555650d807441ba8ba28e5ab4ff0f469dbb80"
Nov 22 07:20:15 crc kubenswrapper[4853]: I1122 07:20:15.626215 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"117c7b90ca358125f58cab4e094555650d807441ba8ba28e5ab4ff0f469dbb80"} err="failed to get container status \"117c7b90ca358125f58cab4e094555650d807441ba8ba28e5ab4ff0f469dbb80\": rpc error: code = NotFound desc = could not find container \"117c7b90ca358125f58cab4e094555650d807441ba8ba28e5ab4ff0f469dbb80\": container with ID starting with 117c7b90ca358125f58cab4e094555650d807441ba8ba28e5ab4ff0f469dbb80 not found: ID does not exist"
Nov 22 07:20:15 crc kubenswrapper[4853]: I1122 07:20:15.638552 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-75447c4646-f42jp"]
Nov 22 07:20:15 crc kubenswrapper[4853]: I1122 07:20:15.641657 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-75447c4646-f42jp"]
Nov 22 07:20:15 crc kubenswrapper[4853]: I1122 07:20:15.757186 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7614e6a-b9fe-4e08-9a68-28ea9b652739" path="/var/lib/kubelet/pods/d7614e6a-b9fe-4e08-9a68-28ea9b652739/volumes"
Nov 22 07:20:16 crc kubenswrapper[4853]: I1122 07:20:16.014207 4853 scope.go:117] "RemoveContainer" containerID="f4f39b93f94d6246c83cd61360244d28ad7d33d8c88382c36531634d21d2027c"
Nov 22 07:20:31 crc kubenswrapper[4853]: I1122 07:20:31.297809 4853 patch_prober.go:28] interesting pod/machine-config-daemon-fflvd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 22 07:20:31 crc kubenswrapper[4853]: I1122 07:20:31.298665 4853 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 22 07:21:01 crc kubenswrapper[4853]: I1122 07:21:01.298295 4853 patch_prober.go:28] interesting pod/machine-config-daemon-fflvd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 22 07:21:01 crc kubenswrapper[4853]: I1122 07:21:01.299997 4853 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 22 07:21:01 crc kubenswrapper[4853]: I1122 07:21:01.300092 4853 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fflvd"
Nov 22 07:21:01 crc kubenswrapper[4853]: I1122 07:21:01.300919 4853 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7523a60199034cbb4e53ad78b590aa431d7e2d4c9ba4923e7f266cfff6902684"} pod="openshift-machine-config-operator/machine-config-daemon-fflvd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Nov 22 07:21:01 crc kubenswrapper[4853]: I1122 07:21:01.301003 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" containerName="machine-config-daemon" containerID="cri-o://7523a60199034cbb4e53ad78b590aa431d7e2d4c9ba4923e7f266cfff6902684" gracePeriod=600
Nov 22 07:21:01 crc kubenswrapper[4853]: I1122 07:21:01.915522 4853 generic.go:334] "Generic (PLEG): container finished" podID="476c875a-2b87-419a-8042-0ba059620fd8" containerID="7523a60199034cbb4e53ad78b590aa431d7e2d4c9ba4923e7f266cfff6902684" exitCode=0
Nov 22 07:21:01 crc kubenswrapper[4853]: I1122 07:21:01.915800 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" event={"ID":"476c875a-2b87-419a-8042-0ba059620fd8","Type":"ContainerDied","Data":"7523a60199034cbb4e53ad78b590aa431d7e2d4c9ba4923e7f266cfff6902684"}
Nov 22 07:21:01 crc kubenswrapper[4853]: I1122 07:21:01.916399 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" event={"ID":"476c875a-2b87-419a-8042-0ba059620fd8","Type":"ContainerStarted","Data":"d536b8e86c6cc6b7e2a4743a840157e3f85808df82d57450ab2cf611ca0528d7"}
Nov 22 07:21:01 crc kubenswrapper[4853]: I1122 07:21:01.916440 4853 scope.go:117] "RemoveContainer" containerID="1534e0876d5be06d823b8de17b8b10504cf7555aab496f4dc301e85f1b2d8572"
Nov 22 07:23:01 crc kubenswrapper[4853]: I1122 07:23:01.298082 4853 patch_prober.go:28] interesting pod/machine-config-daemon-fflvd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 22 07:23:01 crc kubenswrapper[4853]: I1122 07:23:01.299327 4853 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 22 07:23:21 crc kubenswrapper[4853]: I1122 07:23:21.081768 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-9qkgc"]
Nov 22 07:23:21 crc kubenswrapper[4853]: I1122 07:23:21.082785 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-9qkgc" podUID="90b00b61-4e40-4e08-b164-643608e91dd0" containerName="controller-manager" containerID="cri-o://f066e8531d48ef7201244030a6fd47cd8dc984d8c01d05e471ed6a0c4bfa0740" gracePeriod=30
Nov 22 07:23:21 crc kubenswrapper[4853]: I1122 07:23:21.182306 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-p5l64"]
Nov 22 07:23:21 crc kubenswrapper[4853]: I1122 07:23:21.182578 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-p5l64" podUID="0fdfc9f2-e63f-48f4-89ad-94ef8b642d04" containerName="route-controller-manager" containerID="cri-o://5f7a813bb766decb026fd11f3854aebab8f531556a25eb4638b2831bee31cc40" gracePeriod=30
Nov 22 07:23:21 crc kubenswrapper[4853]: I1122 07:23:21.473920 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-9qkgc"
Nov 22 07:23:21 crc kubenswrapper[4853]: I1122 07:23:21.486928 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/90b00b61-4e40-4e08-b164-643608e91dd0-client-ca\") pod \"90b00b61-4e40-4e08-b164-643608e91dd0\" (UID: \"90b00b61-4e40-4e08-b164-643608e91dd0\") "
Nov 22 07:23:21 crc kubenswrapper[4853]: I1122 07:23:21.487007 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/90b00b61-4e40-4e08-b164-643608e91dd0-config\") pod \"90b00b61-4e40-4e08-b164-643608e91dd0\" (UID: \"90b00b61-4e40-4e08-b164-643608e91dd0\") "
Nov 22 07:23:21 crc kubenswrapper[4853]: I1122 07:23:21.487097 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ccw22\" (UniqueName: \"kubernetes.io/projected/90b00b61-4e40-4e08-b164-643608e91dd0-kube-api-access-ccw22\") pod \"90b00b61-4e40-4e08-b164-643608e91dd0\" (UID: \"90b00b61-4e40-4e08-b164-643608e91dd0\") "
Nov 22 07:23:21 crc kubenswrapper[4853]: I1122 07:23:21.487171 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/90b00b61-4e40-4e08-b164-643608e91dd0-serving-cert\") pod \"90b00b61-4e40-4e08-b164-643608e91dd0\" (UID: \"90b00b61-4e40-4e08-b164-643608e91dd0\") "
Nov 22 07:23:21 crc kubenswrapper[4853]: I1122 07:23:21.487196 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/90b00b61-4e40-4e08-b164-643608e91dd0-proxy-ca-bundles\") pod \"90b00b61-4e40-4e08-b164-643608e91dd0\" (UID: \"90b00b61-4e40-4e08-b164-643608e91dd0\") "
Nov 22 07:23:21 crc kubenswrapper[4853]: I1122 07:23:21.487996 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/90b00b61-4e40-4e08-b164-643608e91dd0-client-ca" (OuterVolumeSpecName: "client-ca") pod "90b00b61-4e40-4e08-b164-643608e91dd0" (UID: "90b00b61-4e40-4e08-b164-643608e91dd0"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 22 07:23:21 crc kubenswrapper[4853]: I1122 07:23:21.488253 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/90b00b61-4e40-4e08-b164-643608e91dd0-config" (OuterVolumeSpecName: "config") pod "90b00b61-4e40-4e08-b164-643608e91dd0" (UID: "90b00b61-4e40-4e08-b164-643608e91dd0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:23:21 crc kubenswrapper[4853]: I1122 07:23:21.488964 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/90b00b61-4e40-4e08-b164-643608e91dd0-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "90b00b61-4e40-4e08-b164-643608e91dd0" (UID: "90b00b61-4e40-4e08-b164-643608e91dd0"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:23:21 crc kubenswrapper[4853]: I1122 07:23:21.507291 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90b00b61-4e40-4e08-b164-643608e91dd0-kube-api-access-ccw22" (OuterVolumeSpecName: "kube-api-access-ccw22") pod "90b00b61-4e40-4e08-b164-643608e91dd0" (UID: "90b00b61-4e40-4e08-b164-643608e91dd0"). InnerVolumeSpecName "kube-api-access-ccw22". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:23:21 crc kubenswrapper[4853]: I1122 07:23:21.507343 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90b00b61-4e40-4e08-b164-643608e91dd0-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "90b00b61-4e40-4e08-b164-643608e91dd0" (UID: "90b00b61-4e40-4e08-b164-643608e91dd0"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:23:21 crc kubenswrapper[4853]: I1122 07:23:21.541764 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-p5l64" Nov 22 07:23:21 crc kubenswrapper[4853]: I1122 07:23:21.588203 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0fdfc9f2-e63f-48f4-89ad-94ef8b642d04-config\") pod \"0fdfc9f2-e63f-48f4-89ad-94ef8b642d04\" (UID: \"0fdfc9f2-e63f-48f4-89ad-94ef8b642d04\") " Nov 22 07:23:21 crc kubenswrapper[4853]: I1122 07:23:21.588274 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sq7n2\" (UniqueName: \"kubernetes.io/projected/0fdfc9f2-e63f-48f4-89ad-94ef8b642d04-kube-api-access-sq7n2\") pod \"0fdfc9f2-e63f-48f4-89ad-94ef8b642d04\" (UID: \"0fdfc9f2-e63f-48f4-89ad-94ef8b642d04\") " Nov 22 07:23:21 crc kubenswrapper[4853]: I1122 07:23:21.588385 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0fdfc9f2-e63f-48f4-89ad-94ef8b642d04-client-ca\") pod \"0fdfc9f2-e63f-48f4-89ad-94ef8b642d04\" (UID: \"0fdfc9f2-e63f-48f4-89ad-94ef8b642d04\") " Nov 22 07:23:21 crc kubenswrapper[4853]: I1122 07:23:21.588424 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0fdfc9f2-e63f-48f4-89ad-94ef8b642d04-serving-cert\") pod \"0fdfc9f2-e63f-48f4-89ad-94ef8b642d04\" (UID: \"0fdfc9f2-e63f-48f4-89ad-94ef8b642d04\") " Nov 22 07:23:21 crc kubenswrapper[4853]: I1122 07:23:21.588632 4853 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/90b00b61-4e40-4e08-b164-643608e91dd0-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 22 07:23:21 crc kubenswrapper[4853]: I1122 07:23:21.588647 4853 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/90b00b61-4e40-4e08-b164-643608e91dd0-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" 
Nov 22 07:23:21 crc kubenswrapper[4853]: I1122 07:23:21.588662 4853 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/90b00b61-4e40-4e08-b164-643608e91dd0-client-ca\") on node \"crc\" DevicePath \"\""
Nov 22 07:23:21 crc kubenswrapper[4853]: I1122 07:23:21.588684 4853 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/90b00b61-4e40-4e08-b164-643608e91dd0-config\") on node \"crc\" DevicePath \"\""
Nov 22 07:23:21 crc kubenswrapper[4853]: I1122 07:23:21.588695 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ccw22\" (UniqueName: \"kubernetes.io/projected/90b00b61-4e40-4e08-b164-643608e91dd0-kube-api-access-ccw22\") on node \"crc\" DevicePath \"\""
Nov 22 07:23:21 crc kubenswrapper[4853]: I1122 07:23:21.589534 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0fdfc9f2-e63f-48f4-89ad-94ef8b642d04-config" (OuterVolumeSpecName: "config") pod "0fdfc9f2-e63f-48f4-89ad-94ef8b642d04" (UID: "0fdfc9f2-e63f-48f4-89ad-94ef8b642d04"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 22 07:23:21 crc kubenswrapper[4853]: I1122 07:23:21.590047 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0fdfc9f2-e63f-48f4-89ad-94ef8b642d04-client-ca" (OuterVolumeSpecName: "client-ca") pod "0fdfc9f2-e63f-48f4-89ad-94ef8b642d04" (UID: "0fdfc9f2-e63f-48f4-89ad-94ef8b642d04"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 22 07:23:21 crc kubenswrapper[4853]: I1122 07:23:21.592137 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0fdfc9f2-e63f-48f4-89ad-94ef8b642d04-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0fdfc9f2-e63f-48f4-89ad-94ef8b642d04" (UID: "0fdfc9f2-e63f-48f4-89ad-94ef8b642d04"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 22 07:23:21 crc kubenswrapper[4853]: I1122 07:23:21.592122 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0fdfc9f2-e63f-48f4-89ad-94ef8b642d04-kube-api-access-sq7n2" (OuterVolumeSpecName: "kube-api-access-sq7n2") pod "0fdfc9f2-e63f-48f4-89ad-94ef8b642d04" (UID: "0fdfc9f2-e63f-48f4-89ad-94ef8b642d04"). InnerVolumeSpecName "kube-api-access-sq7n2". PluginName "kubernetes.io/projected", VolumeGidValue ""
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:23:21 crc kubenswrapper[4853]: I1122 07:23:21.689901 4853 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0fdfc9f2-e63f-48f4-89ad-94ef8b642d04-client-ca\") on node \"crc\" DevicePath \"\"" Nov 22 07:23:21 crc kubenswrapper[4853]: I1122 07:23:21.689945 4853 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0fdfc9f2-e63f-48f4-89ad-94ef8b642d04-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 22 07:23:21 crc kubenswrapper[4853]: I1122 07:23:21.689956 4853 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0fdfc9f2-e63f-48f4-89ad-94ef8b642d04-config\") on node \"crc\" DevicePath \"\"" Nov 22 07:23:21 crc kubenswrapper[4853]: I1122 07:23:21.689965 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sq7n2\" (UniqueName: \"kubernetes.io/projected/0fdfc9f2-e63f-48f4-89ad-94ef8b642d04-kube-api-access-sq7n2\") on node \"crc\" DevicePath \"\"" Nov 22 07:23:21 crc kubenswrapper[4853]: I1122 07:23:21.843204 4853 generic.go:334] "Generic (PLEG): container finished" podID="0fdfc9f2-e63f-48f4-89ad-94ef8b642d04" containerID="5f7a813bb766decb026fd11f3854aebab8f531556a25eb4638b2831bee31cc40" exitCode=0 Nov 22 07:23:21 crc kubenswrapper[4853]: I1122 07:23:21.843257 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-p5l64" event={"ID":"0fdfc9f2-e63f-48f4-89ad-94ef8b642d04","Type":"ContainerDied","Data":"5f7a813bb766decb026fd11f3854aebab8f531556a25eb4638b2831bee31cc40"} Nov 22 07:23:21 crc kubenswrapper[4853]: I1122 07:23:21.843285 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-p5l64" Nov 22 07:23:21 crc kubenswrapper[4853]: I1122 07:23:21.843311 4853 scope.go:117] "RemoveContainer" containerID="5f7a813bb766decb026fd11f3854aebab8f531556a25eb4638b2831bee31cc40" Nov 22 07:23:21 crc kubenswrapper[4853]: I1122 07:23:21.843299 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-p5l64" event={"ID":"0fdfc9f2-e63f-48f4-89ad-94ef8b642d04","Type":"ContainerDied","Data":"d1a570d7d0fcbd082657ae94277267c71c703e315462d5203da50e815875681f"} Nov 22 07:23:21 crc kubenswrapper[4853]: I1122 07:23:21.845082 4853 generic.go:334] "Generic (PLEG): container finished" podID="90b00b61-4e40-4e08-b164-643608e91dd0" containerID="f066e8531d48ef7201244030a6fd47cd8dc984d8c01d05e471ed6a0c4bfa0740" exitCode=0 Nov 22 07:23:21 crc kubenswrapper[4853]: I1122 07:23:21.845136 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-9qkgc" event={"ID":"90b00b61-4e40-4e08-b164-643608e91dd0","Type":"ContainerDied","Data":"f066e8531d48ef7201244030a6fd47cd8dc984d8c01d05e471ed6a0c4bfa0740"} Nov 22 07:23:21 crc kubenswrapper[4853]: I1122 07:23:21.845181 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-9qkgc" event={"ID":"90b00b61-4e40-4e08-b164-643608e91dd0","Type":"ContainerDied","Data":"4246803e0ef4ed600ec0927d6385f1e8e217847eef221a71eb58cb0a20a28737"} Nov 22 07:23:21 crc kubenswrapper[4853]: I1122 07:23:21.845254 4853 util.go:48] "No ready sandbox for pod can be found. 
Nov 22 07:23:21 crc kubenswrapper[4853]: I1122 07:23:21.861976 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-9qkgc"]
Nov 22 07:23:21 crc kubenswrapper[4853]: I1122 07:23:21.867345 4853 scope.go:117] "RemoveContainer" containerID="5f7a813bb766decb026fd11f3854aebab8f531556a25eb4638b2831bee31cc40"
Nov 22 07:23:21 crc kubenswrapper[4853]: E1122 07:23:21.869267 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5f7a813bb766decb026fd11f3854aebab8f531556a25eb4638b2831bee31cc40\": container with ID starting with 5f7a813bb766decb026fd11f3854aebab8f531556a25eb4638b2831bee31cc40 not found: ID does not exist" containerID="5f7a813bb766decb026fd11f3854aebab8f531556a25eb4638b2831bee31cc40"
Nov 22 07:23:21 crc kubenswrapper[4853]: I1122 07:23:21.869311 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f7a813bb766decb026fd11f3854aebab8f531556a25eb4638b2831bee31cc40"} err="failed to get container status \"5f7a813bb766decb026fd11f3854aebab8f531556a25eb4638b2831bee31cc40\": rpc error: code = NotFound desc = could not find container \"5f7a813bb766decb026fd11f3854aebab8f531556a25eb4638b2831bee31cc40\": container with ID starting with 5f7a813bb766decb026fd11f3854aebab8f531556a25eb4638b2831bee31cc40 not found: ID does not exist"
Nov 22 07:23:21 crc kubenswrapper[4853]: I1122 07:23:21.869338 4853 scope.go:117] "RemoveContainer" containerID="f066e8531d48ef7201244030a6fd47cd8dc984d8c01d05e471ed6a0c4bfa0740"
Nov 22 07:23:21 crc kubenswrapper[4853]: I1122 07:23:21.871491 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-9qkgc"]
Nov 22 07:23:21 crc kubenswrapper[4853]: I1122 07:23:21.875162 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-p5l64"]
Nov 22 07:23:21 crc kubenswrapper[4853]: I1122 07:23:21.878037 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-p5l64"]
Nov 22 07:23:21 crc kubenswrapper[4853]: I1122 07:23:21.885454 4853 scope.go:117] "RemoveContainer" containerID="f066e8531d48ef7201244030a6fd47cd8dc984d8c01d05e471ed6a0c4bfa0740"
Nov 22 07:23:21 crc kubenswrapper[4853]: E1122 07:23:21.885983 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f066e8531d48ef7201244030a6fd47cd8dc984d8c01d05e471ed6a0c4bfa0740\": container with ID starting with f066e8531d48ef7201244030a6fd47cd8dc984d8c01d05e471ed6a0c4bfa0740 not found: ID does not exist" containerID="f066e8531d48ef7201244030a6fd47cd8dc984d8c01d05e471ed6a0c4bfa0740"
Nov 22 07:23:21 crc kubenswrapper[4853]: I1122 07:23:21.886027 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f066e8531d48ef7201244030a6fd47cd8dc984d8c01d05e471ed6a0c4bfa0740"} err="failed to get container status \"f066e8531d48ef7201244030a6fd47cd8dc984d8c01d05e471ed6a0c4bfa0740\": rpc error: code = NotFound desc = could not find container \"f066e8531d48ef7201244030a6fd47cd8dc984d8c01d05e471ed6a0c4bfa0740\": container with ID starting with f066e8531d48ef7201244030a6fd47cd8dc984d8c01d05e471ed6a0c4bfa0740 not found: ID does not exist"
Nov 22 07:23:22 crc kubenswrapper[4853]: I1122 07:23:22.503369 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-8bc5fc695-85bwl"]
Nov 22 07:23:22 crc kubenswrapper[4853]: E1122 07:23:22.503628 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0fdfc9f2-e63f-48f4-89ad-94ef8b642d04" containerName="route-controller-manager"
Nov 22 07:23:22 crc kubenswrapper[4853]: I1122 07:23:22.503640 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fdfc9f2-e63f-48f4-89ad-94ef8b642d04" containerName="route-controller-manager"
Nov 22 07:23:22 crc kubenswrapper[4853]: E1122 07:23:22.503657 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90b00b61-4e40-4e08-b164-643608e91dd0" containerName="controller-manager"
Nov 22 07:23:22 crc kubenswrapper[4853]: I1122 07:23:22.503664 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="90b00b61-4e40-4e08-b164-643608e91dd0" containerName="controller-manager"
Nov 22 07:23:22 crc kubenswrapper[4853]: E1122 07:23:22.503685 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7614e6a-b9fe-4e08-9a68-28ea9b652739" containerName="console"
Nov 22 07:23:22 crc kubenswrapper[4853]: I1122 07:23:22.503693 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7614e6a-b9fe-4e08-9a68-28ea9b652739" containerName="console"
Nov 22 07:23:22 crc kubenswrapper[4853]: I1122 07:23:22.503844 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="90b00b61-4e40-4e08-b164-643608e91dd0" containerName="controller-manager"
Nov 22 07:23:22 crc kubenswrapper[4853]: I1122 07:23:22.503855 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="0fdfc9f2-e63f-48f4-89ad-94ef8b642d04" containerName="route-controller-manager"
Nov 22 07:23:22 crc kubenswrapper[4853]: I1122 07:23:22.503865 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7614e6a-b9fe-4e08-9a68-28ea9b652739" containerName="console"
Nov 22 07:23:22 crc kubenswrapper[4853]: I1122 07:23:22.504305 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-8bc5fc695-85bwl"
Nov 22 07:23:22 crc kubenswrapper[4853]: I1122 07:23:22.506323 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Nov 22 07:23:22 crc kubenswrapper[4853]: I1122 07:23:22.506388 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Nov 22 07:23:22 crc kubenswrapper[4853]: I1122 07:23:22.506499 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Nov 22 07:23:22 crc kubenswrapper[4853]: I1122 07:23:22.507016 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Nov 22 07:23:22 crc kubenswrapper[4853]: I1122 07:23:22.507158 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Nov 22 07:23:22 crc kubenswrapper[4853]: I1122 07:23:22.516646 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-8bc5fc695-85bwl"]
Nov 22 07:23:22 crc kubenswrapper[4853]: I1122 07:23:22.520855 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Nov 22 07:23:22 crc kubenswrapper[4853]: I1122 07:23:22.701970 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d82f312-242f-401e-b9fe-549ac711a878-serving-cert\") pod \"route-controller-manager-8bc5fc695-85bwl\" (UID: \"9d82f312-242f-401e-b9fe-549ac711a878\") " pod="openshift-route-controller-manager/route-controller-manager-8bc5fc695-85bwl"
Nov 22 07:23:22 crc kubenswrapper[4853]: I1122 07:23:22.702034 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9d82f312-242f-401e-b9fe-549ac711a878-client-ca\") pod \"route-controller-manager-8bc5fc695-85bwl\" (UID: \"9d82f312-242f-401e-b9fe-549ac711a878\") " pod="openshift-route-controller-manager/route-controller-manager-8bc5fc695-85bwl"
Nov 22 07:23:22 crc kubenswrapper[4853]: I1122 07:23:22.702076 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d89qp\" (UniqueName: \"kubernetes.io/projected/9d82f312-242f-401e-b9fe-549ac711a878-kube-api-access-d89qp\") pod \"route-controller-manager-8bc5fc695-85bwl\" (UID: \"9d82f312-242f-401e-b9fe-549ac711a878\") " pod="openshift-route-controller-manager/route-controller-manager-8bc5fc695-85bwl"
Nov 22 07:23:22 crc kubenswrapper[4853]: I1122 07:23:22.702273 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d82f312-242f-401e-b9fe-549ac711a878-config\") pod \"route-controller-manager-8bc5fc695-85bwl\" (UID: \"9d82f312-242f-401e-b9fe-549ac711a878\") " pod="openshift-route-controller-manager/route-controller-manager-8bc5fc695-85bwl"
Nov 22 07:23:22 crc kubenswrapper[4853]: I1122 07:23:22.802989 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d82f312-242f-401e-b9fe-549ac711a878-serving-cert\") pod \"route-controller-manager-8bc5fc695-85bwl\" (UID: \"9d82f312-242f-401e-b9fe-549ac711a878\") " pod="openshift-route-controller-manager/route-controller-manager-8bc5fc695-85bwl"
\"route-controller-manager-8bc5fc695-85bwl\" (UID: \"9d82f312-242f-401e-b9fe-549ac711a878\") " pod="openshift-route-controller-manager/route-controller-manager-8bc5fc695-85bwl" Nov 22 07:23:22 crc kubenswrapper[4853]: I1122 07:23:22.803038 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9d82f312-242f-401e-b9fe-549ac711a878-client-ca\") pod \"route-controller-manager-8bc5fc695-85bwl\" (UID: \"9d82f312-242f-401e-b9fe-549ac711a878\") " pod="openshift-route-controller-manager/route-controller-manager-8bc5fc695-85bwl" Nov 22 07:23:22 crc kubenswrapper[4853]: I1122 07:23:22.803071 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d89qp\" (UniqueName: \"kubernetes.io/projected/9d82f312-242f-401e-b9fe-549ac711a878-kube-api-access-d89qp\") pod \"route-controller-manager-8bc5fc695-85bwl\" (UID: \"9d82f312-242f-401e-b9fe-549ac711a878\") " pod="openshift-route-controller-manager/route-controller-manager-8bc5fc695-85bwl" Nov 22 07:23:22 crc kubenswrapper[4853]: I1122 07:23:22.803111 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d82f312-242f-401e-b9fe-549ac711a878-config\") pod \"route-controller-manager-8bc5fc695-85bwl\" (UID: \"9d82f312-242f-401e-b9fe-549ac711a878\") " pod="openshift-route-controller-manager/route-controller-manager-8bc5fc695-85bwl" Nov 22 07:23:22 crc kubenswrapper[4853]: I1122 07:23:22.804532 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9d82f312-242f-401e-b9fe-549ac711a878-client-ca\") pod \"route-controller-manager-8bc5fc695-85bwl\" (UID: \"9d82f312-242f-401e-b9fe-549ac711a878\") " pod="openshift-route-controller-manager/route-controller-manager-8bc5fc695-85bwl" Nov 22 07:23:22 crc kubenswrapper[4853]: I1122 07:23:22.805130 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d82f312-242f-401e-b9fe-549ac711a878-config\") pod \"route-controller-manager-8bc5fc695-85bwl\" (UID: \"9d82f312-242f-401e-b9fe-549ac711a878\") " pod="openshift-route-controller-manager/route-controller-manager-8bc5fc695-85bwl" Nov 22 07:23:22 crc kubenswrapper[4853]: I1122 07:23:22.808297 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d82f312-242f-401e-b9fe-549ac711a878-serving-cert\") pod \"route-controller-manager-8bc5fc695-85bwl\" (UID: \"9d82f312-242f-401e-b9fe-549ac711a878\") " pod="openshift-route-controller-manager/route-controller-manager-8bc5fc695-85bwl" Nov 22 07:23:22 crc kubenswrapper[4853]: I1122 07:23:22.833223 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d89qp\" (UniqueName: \"kubernetes.io/projected/9d82f312-242f-401e-b9fe-549ac711a878-kube-api-access-d89qp\") pod \"route-controller-manager-8bc5fc695-85bwl\" (UID: \"9d82f312-242f-401e-b9fe-549ac711a878\") " pod="openshift-route-controller-manager/route-controller-manager-8bc5fc695-85bwl" Nov 22 07:23:22 crc kubenswrapper[4853]: I1122 07:23:22.906845 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-57b48b96b9-wpnqg"] Nov 22 07:23:22 crc kubenswrapper[4853]: I1122 07:23:22.907697 4853 util.go:30] "No sandbox for pod can be found. 
Nov 22 07:23:22 crc kubenswrapper[4853]: I1122 07:23:22.911019 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Nov 22 07:23:22 crc kubenswrapper[4853]: I1122 07:23:22.911073 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Nov 22 07:23:22 crc kubenswrapper[4853]: I1122 07:23:22.911101 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Nov 22 07:23:22 crc kubenswrapper[4853]: I1122 07:23:22.911243 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Nov 22 07:23:22 crc kubenswrapper[4853]: I1122 07:23:22.911640 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Nov 22 07:23:22 crc kubenswrapper[4853]: I1122 07:23:22.911929 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Nov 22 07:23:22 crc kubenswrapper[4853]: I1122 07:23:22.918700 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Nov 22 07:23:22 crc kubenswrapper[4853]: I1122 07:23:22.949691 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-57b48b96b9-wpnqg"]
Nov 22 07:23:23 crc kubenswrapper[4853]: I1122 07:23:23.005008 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/336d7580-c86d-457f-9ba9-f57ef9818da4-config\") pod \"controller-manager-57b48b96b9-wpnqg\" (UID: \"336d7580-c86d-457f-9ba9-f57ef9818da4\") " pod="openshift-controller-manager/controller-manager-57b48b96b9-wpnqg"
Nov 22 07:23:23 crc kubenswrapper[4853]: I1122 07:23:23.005056 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qndv4\" (UniqueName: \"kubernetes.io/projected/336d7580-c86d-457f-9ba9-f57ef9818da4-kube-api-access-qndv4\") pod \"controller-manager-57b48b96b9-wpnqg\" (UID: \"336d7580-c86d-457f-9ba9-f57ef9818da4\") " pod="openshift-controller-manager/controller-manager-57b48b96b9-wpnqg"
Nov 22 07:23:23 crc kubenswrapper[4853]: I1122 07:23:23.005109 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/336d7580-c86d-457f-9ba9-f57ef9818da4-client-ca\") pod \"controller-manager-57b48b96b9-wpnqg\" (UID: \"336d7580-c86d-457f-9ba9-f57ef9818da4\") " pod="openshift-controller-manager/controller-manager-57b48b96b9-wpnqg"
Nov 22 07:23:23 crc kubenswrapper[4853]: I1122 07:23:23.005134 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/336d7580-c86d-457f-9ba9-f57ef9818da4-proxy-ca-bundles\") pod \"controller-manager-57b48b96b9-wpnqg\" (UID: \"336d7580-c86d-457f-9ba9-f57ef9818da4\") " pod="openshift-controller-manager/controller-manager-57b48b96b9-wpnqg"
Nov 22 07:23:23 crc kubenswrapper[4853]: I1122 07:23:23.005168 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/336d7580-c86d-457f-9ba9-f57ef9818da4-serving-cert\") pod \"controller-manager-57b48b96b9-wpnqg\" (UID: \"336d7580-c86d-457f-9ba9-f57ef9818da4\") " pod="openshift-controller-manager/controller-manager-57b48b96b9-wpnqg"
\"kubernetes.io/secret/336d7580-c86d-457f-9ba9-f57ef9818da4-serving-cert\") pod \"controller-manager-57b48b96b9-wpnqg\" (UID: \"336d7580-c86d-457f-9ba9-f57ef9818da4\") " pod="openshift-controller-manager/controller-manager-57b48b96b9-wpnqg" Nov 22 07:23:23 crc kubenswrapper[4853]: I1122 07:23:23.106051 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/336d7580-c86d-457f-9ba9-f57ef9818da4-client-ca\") pod \"controller-manager-57b48b96b9-wpnqg\" (UID: \"336d7580-c86d-457f-9ba9-f57ef9818da4\") " pod="openshift-controller-manager/controller-manager-57b48b96b9-wpnqg" Nov 22 07:23:23 crc kubenswrapper[4853]: I1122 07:23:23.106105 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/336d7580-c86d-457f-9ba9-f57ef9818da4-proxy-ca-bundles\") pod \"controller-manager-57b48b96b9-wpnqg\" (UID: \"336d7580-c86d-457f-9ba9-f57ef9818da4\") " pod="openshift-controller-manager/controller-manager-57b48b96b9-wpnqg" Nov 22 07:23:23 crc kubenswrapper[4853]: I1122 07:23:23.106142 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/336d7580-c86d-457f-9ba9-f57ef9818da4-serving-cert\") pod \"controller-manager-57b48b96b9-wpnqg\" (UID: \"336d7580-c86d-457f-9ba9-f57ef9818da4\") " pod="openshift-controller-manager/controller-manager-57b48b96b9-wpnqg" Nov 22 07:23:23 crc kubenswrapper[4853]: I1122 07:23:23.106166 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/336d7580-c86d-457f-9ba9-f57ef9818da4-config\") pod \"controller-manager-57b48b96b9-wpnqg\" (UID: \"336d7580-c86d-457f-9ba9-f57ef9818da4\") " pod="openshift-controller-manager/controller-manager-57b48b96b9-wpnqg" Nov 22 07:23:23 crc kubenswrapper[4853]: I1122 07:23:23.106209 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qndv4\" (UniqueName: \"kubernetes.io/projected/336d7580-c86d-457f-9ba9-f57ef9818da4-kube-api-access-qndv4\") pod \"controller-manager-57b48b96b9-wpnqg\" (UID: \"336d7580-c86d-457f-9ba9-f57ef9818da4\") " pod="openshift-controller-manager/controller-manager-57b48b96b9-wpnqg" Nov 22 07:23:23 crc kubenswrapper[4853]: I1122 07:23:23.107183 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/336d7580-c86d-457f-9ba9-f57ef9818da4-client-ca\") pod \"controller-manager-57b48b96b9-wpnqg\" (UID: \"336d7580-c86d-457f-9ba9-f57ef9818da4\") " pod="openshift-controller-manager/controller-manager-57b48b96b9-wpnqg" Nov 22 07:23:23 crc kubenswrapper[4853]: I1122 07:23:23.107521 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/336d7580-c86d-457f-9ba9-f57ef9818da4-config\") pod \"controller-manager-57b48b96b9-wpnqg\" (UID: \"336d7580-c86d-457f-9ba9-f57ef9818da4\") " pod="openshift-controller-manager/controller-manager-57b48b96b9-wpnqg" Nov 22 07:23:23 crc kubenswrapper[4853]: I1122 07:23:23.107783 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/336d7580-c86d-457f-9ba9-f57ef9818da4-proxy-ca-bundles\") pod \"controller-manager-57b48b96b9-wpnqg\" (UID: \"336d7580-c86d-457f-9ba9-f57ef9818da4\") " 
pod="openshift-controller-manager/controller-manager-57b48b96b9-wpnqg" Nov 22 07:23:23 crc kubenswrapper[4853]: I1122 07:23:23.112435 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/336d7580-c86d-457f-9ba9-f57ef9818da4-serving-cert\") pod \"controller-manager-57b48b96b9-wpnqg\" (UID: \"336d7580-c86d-457f-9ba9-f57ef9818da4\") " pod="openshift-controller-manager/controller-manager-57b48b96b9-wpnqg" Nov 22 07:23:23 crc kubenswrapper[4853]: I1122 07:23:23.123160 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-8bc5fc695-85bwl" Nov 22 07:23:23 crc kubenswrapper[4853]: I1122 07:23:23.128458 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qndv4\" (UniqueName: \"kubernetes.io/projected/336d7580-c86d-457f-9ba9-f57ef9818da4-kube-api-access-qndv4\") pod \"controller-manager-57b48b96b9-wpnqg\" (UID: \"336d7580-c86d-457f-9ba9-f57ef9818da4\") " pod="openshift-controller-manager/controller-manager-57b48b96b9-wpnqg" Nov 22 07:23:23 crc kubenswrapper[4853]: I1122 07:23:23.235066 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-57b48b96b9-wpnqg" Nov 22 07:23:23 crc kubenswrapper[4853]: I1122 07:23:23.366718 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-8bc5fc695-85bwl"] Nov 22 07:23:23 crc kubenswrapper[4853]: I1122 07:23:23.701813 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-57b48b96b9-wpnqg"] Nov 22 07:23:23 crc kubenswrapper[4853]: W1122 07:23:23.709824 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod336d7580_c86d_457f_9ba9_f57ef9818da4.slice/crio-9343b44cf43c20cefaeb9a24ed3812135dcde433400d93e05d29ad900482112d WatchSource:0}: Error finding container 9343b44cf43c20cefaeb9a24ed3812135dcde433400d93e05d29ad900482112d: Status 404 returned error can't find the container with id 9343b44cf43c20cefaeb9a24ed3812135dcde433400d93e05d29ad900482112d Nov 22 07:23:23 crc kubenswrapper[4853]: I1122 07:23:23.756844 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0fdfc9f2-e63f-48f4-89ad-94ef8b642d04" path="/var/lib/kubelet/pods/0fdfc9f2-e63f-48f4-89ad-94ef8b642d04/volumes" Nov 22 07:23:23 crc kubenswrapper[4853]: I1122 07:23:23.758051 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="90b00b61-4e40-4e08-b164-643608e91dd0" path="/var/lib/kubelet/pods/90b00b61-4e40-4e08-b164-643608e91dd0/volumes" Nov 22 07:23:23 crc kubenswrapper[4853]: I1122 07:23:23.862066 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-8bc5fc695-85bwl" event={"ID":"9d82f312-242f-401e-b9fe-549ac711a878","Type":"ContainerStarted","Data":"8f7388f2643bd32bca80fa3ce662dbf6f4131992a99907a13f55e5be92542140"} Nov 22 07:23:23 crc kubenswrapper[4853]: I1122 07:23:23.864067 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-57b48b96b9-wpnqg" event={"ID":"336d7580-c86d-457f-9ba9-f57ef9818da4","Type":"ContainerStarted","Data":"9343b44cf43c20cefaeb9a24ed3812135dcde433400d93e05d29ad900482112d"} Nov 22 07:23:24 crc kubenswrapper[4853]: I1122 07:23:24.872310 4853 kubelet.go:2453] "SyncLoop 
Nov 22 07:23:24 crc kubenswrapper[4853]: I1122 07:23:24.873659 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-57b48b96b9-wpnqg"
Nov 22 07:23:24 crc kubenswrapper[4853]: I1122 07:23:24.877047 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-8bc5fc695-85bwl" event={"ID":"9d82f312-242f-401e-b9fe-549ac711a878","Type":"ContainerStarted","Data":"ae6348017c72a8d21c53ceea1ef4759e3d3adb384448c4e3cc40c5e0a5c49ac9"}
Nov 22 07:23:24 crc kubenswrapper[4853]: I1122 07:23:24.877264 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-8bc5fc695-85bwl"
Nov 22 07:23:24 crc kubenswrapper[4853]: I1122 07:23:24.881446 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-57b48b96b9-wpnqg"
Nov 22 07:23:24 crc kubenswrapper[4853]: I1122 07:23:24.908737 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-57b48b96b9-wpnqg" podStartSLOduration=3.908710129 podStartE2EDuration="3.908710129s" podCreationTimestamp="2025-11-22 07:23:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:23:24.90611985 +0000 UTC m=+803.746742496" watchObservedRunningTime="2025-11-22 07:23:24.908710129 +0000 UTC m=+803.749332755"
Nov 22 07:23:24 crc kubenswrapper[4853]: I1122 07:23:24.995611 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-8bc5fc695-85bwl" podStartSLOduration=2.995592274 podStartE2EDuration="2.995592274s" podCreationTimestamp="2025-11-22 07:23:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:23:24.99239904 +0000 UTC m=+803.833021676" watchObservedRunningTime="2025-11-22 07:23:24.995592274 +0000 UTC m=+803.836214900"
Nov 22 07:23:25 crc kubenswrapper[4853]: I1122 07:23:25.156881 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-8bc5fc695-85bwl"
Nov 22 07:23:31 crc kubenswrapper[4853]: I1122 07:23:31.297893 4853 patch_prober.go:28] interesting pod/machine-config-daemon-fflvd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 22 07:23:31 crc kubenswrapper[4853]: I1122 07:23:31.298364 4853 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 22 07:23:34 crc kubenswrapper[4853]: I1122 07:23:34.624270 4853 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Nov 22 07:24:01 crc kubenswrapper[4853]: I1122 07:24:01.297495 4853 patch_prober.go:28] interesting pod/machine-config-daemon-fflvd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 07:24:01 crc kubenswrapper[4853]: I1122 07:24:01.298435 4853 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 07:24:01 crc kubenswrapper[4853]: I1122 07:24:01.298524 4853 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" Nov 22 07:24:01 crc kubenswrapper[4853]: I1122 07:24:01.299627 4853 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d536b8e86c6cc6b7e2a4743a840157e3f85808df82d57450ab2cf611ca0528d7"} pod="openshift-machine-config-operator/machine-config-daemon-fflvd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 22 07:24:01 crc kubenswrapper[4853]: I1122 07:24:01.299700 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" containerName="machine-config-daemon" containerID="cri-o://d536b8e86c6cc6b7e2a4743a840157e3f85808df82d57450ab2cf611ca0528d7" gracePeriod=600 Nov 22 07:24:02 crc kubenswrapper[4853]: I1122 07:24:02.130519 4853 generic.go:334] "Generic (PLEG): container finished" podID="476c875a-2b87-419a-8042-0ba059620fd8" containerID="d536b8e86c6cc6b7e2a4743a840157e3f85808df82d57450ab2cf611ca0528d7" exitCode=0 Nov 22 07:24:02 crc kubenswrapper[4853]: I1122 07:24:02.130593 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" event={"ID":"476c875a-2b87-419a-8042-0ba059620fd8","Type":"ContainerDied","Data":"d536b8e86c6cc6b7e2a4743a840157e3f85808df82d57450ab2cf611ca0528d7"} Nov 22 07:24:02 crc kubenswrapper[4853]: I1122 07:24:02.131031 4853 scope.go:117] "RemoveContainer" containerID="7523a60199034cbb4e53ad78b590aa431d7e2d4c9ba4923e7f266cfff6902684" Nov 22 07:24:03 crc kubenswrapper[4853]: I1122 07:24:03.139718 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" event={"ID":"476c875a-2b87-419a-8042-0ba059620fd8","Type":"ContainerStarted","Data":"453b1ef38ab6b08bb125d45890335ad304d3ef7d9d0a68f91fb10cfac32c00e8"} Nov 22 07:24:17 crc kubenswrapper[4853]: I1122 07:24:17.557457 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92107q65h"] Nov 22 07:24:17 crc kubenswrapper[4853]: I1122 07:24:17.559819 4853 util.go:30] "No sandbox for pod can be found. 
Nov 22 07:24:17 crc kubenswrapper[4853]: I1122 07:24:17.562771 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc"
Nov 22 07:24:17 crc kubenswrapper[4853]: I1122 07:24:17.570513 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92107q65h"]
Nov 22 07:24:17 crc kubenswrapper[4853]: I1122 07:24:17.650229 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/683f3f0d-d7fe-42b9-8deb-2358f0f8d572-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92107q65h\" (UID: \"683f3f0d-d7fe-42b9-8deb-2358f0f8d572\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92107q65h"
Nov 22 07:24:17 crc kubenswrapper[4853]: I1122 07:24:17.650546 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/683f3f0d-d7fe-42b9-8deb-2358f0f8d572-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92107q65h\" (UID: \"683f3f0d-d7fe-42b9-8deb-2358f0f8d572\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92107q65h"
Nov 22 07:24:17 crc kubenswrapper[4853]: I1122 07:24:17.650616 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hh8sw\" (UniqueName: \"kubernetes.io/projected/683f3f0d-d7fe-42b9-8deb-2358f0f8d572-kube-api-access-hh8sw\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92107q65h\" (UID: \"683f3f0d-d7fe-42b9-8deb-2358f0f8d572\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92107q65h"
Nov 22 07:24:17 crc kubenswrapper[4853]: I1122 07:24:17.752663 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/683f3f0d-d7fe-42b9-8deb-2358f0f8d572-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92107q65h\" (UID: \"683f3f0d-d7fe-42b9-8deb-2358f0f8d572\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92107q65h"
Nov 22 07:24:17 crc kubenswrapper[4853]: I1122 07:24:17.752743 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/683f3f0d-d7fe-42b9-8deb-2358f0f8d572-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92107q65h\" (UID: \"683f3f0d-d7fe-42b9-8deb-2358f0f8d572\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92107q65h"
Nov 22 07:24:17 crc kubenswrapper[4853]: I1122 07:24:17.752788 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hh8sw\" (UniqueName: \"kubernetes.io/projected/683f3f0d-d7fe-42b9-8deb-2358f0f8d572-kube-api-access-hh8sw\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92107q65h\" (UID: \"683f3f0d-d7fe-42b9-8deb-2358f0f8d572\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92107q65h"
Nov 22 07:24:17 crc kubenswrapper[4853]: I1122 07:24:17.753657 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/683f3f0d-d7fe-42b9-8deb-2358f0f8d572-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92107q65h\" (UID: \"683f3f0d-d7fe-42b9-8deb-2358f0f8d572\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92107q65h"
\"kubernetes.io/empty-dir/683f3f0d-d7fe-42b9-8deb-2358f0f8d572-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92107q65h\" (UID: \"683f3f0d-d7fe-42b9-8deb-2358f0f8d572\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92107q65h" Nov 22 07:24:17 crc kubenswrapper[4853]: I1122 07:24:17.753666 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/683f3f0d-d7fe-42b9-8deb-2358f0f8d572-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92107q65h\" (UID: \"683f3f0d-d7fe-42b9-8deb-2358f0f8d572\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92107q65h" Nov 22 07:24:17 crc kubenswrapper[4853]: I1122 07:24:17.774981 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hh8sw\" (UniqueName: \"kubernetes.io/projected/683f3f0d-d7fe-42b9-8deb-2358f0f8d572-kube-api-access-hh8sw\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92107q65h\" (UID: \"683f3f0d-d7fe-42b9-8deb-2358f0f8d572\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92107q65h" Nov 22 07:24:17 crc kubenswrapper[4853]: I1122 07:24:17.878381 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92107q65h" Nov 22 07:24:18 crc kubenswrapper[4853]: I1122 07:24:18.345256 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92107q65h"] Nov 22 07:24:19 crc kubenswrapper[4853]: I1122 07:24:19.258862 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92107q65h" event={"ID":"683f3f0d-d7fe-42b9-8deb-2358f0f8d572","Type":"ContainerStarted","Data":"38e7496ed3fbdb49b8cc5a67bf52c6aaa2fcaa198bab66b4fb7e5f229ae3d9b0"} Nov 22 07:24:19 crc kubenswrapper[4853]: I1122 07:24:19.259657 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92107q65h" event={"ID":"683f3f0d-d7fe-42b9-8deb-2358f0f8d572","Type":"ContainerStarted","Data":"89f7a6853b0abe7b14e67180d921556fc467931420aa204cff2a6aadd47ef050"} Nov 22 07:24:19 crc kubenswrapper[4853]: I1122 07:24:19.894497 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-hcc5b"] Nov 22 07:24:19 crc kubenswrapper[4853]: I1122 07:24:19.896395 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-hcc5b" Nov 22 07:24:19 crc kubenswrapper[4853]: I1122 07:24:19.909931 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hcc5b"] Nov 22 07:24:19 crc kubenswrapper[4853]: I1122 07:24:19.996709 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2a4a0b48-b667-4c6b-acca-108007024d7a-utilities\") pod \"redhat-operators-hcc5b\" (UID: \"2a4a0b48-b667-4c6b-acca-108007024d7a\") " pod="openshift-marketplace/redhat-operators-hcc5b" Nov 22 07:24:19 crc kubenswrapper[4853]: I1122 07:24:19.996776 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4xg8b\" (UniqueName: \"kubernetes.io/projected/2a4a0b48-b667-4c6b-acca-108007024d7a-kube-api-access-4xg8b\") pod \"redhat-operators-hcc5b\" (UID: \"2a4a0b48-b667-4c6b-acca-108007024d7a\") " pod="openshift-marketplace/redhat-operators-hcc5b" Nov 22 07:24:19 crc kubenswrapper[4853]: I1122 07:24:19.996843 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2a4a0b48-b667-4c6b-acca-108007024d7a-catalog-content\") pod \"redhat-operators-hcc5b\" (UID: \"2a4a0b48-b667-4c6b-acca-108007024d7a\") " pod="openshift-marketplace/redhat-operators-hcc5b" Nov 22 07:24:20 crc kubenswrapper[4853]: I1122 07:24:20.098082 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2a4a0b48-b667-4c6b-acca-108007024d7a-utilities\") pod \"redhat-operators-hcc5b\" (UID: \"2a4a0b48-b667-4c6b-acca-108007024d7a\") " pod="openshift-marketplace/redhat-operators-hcc5b" Nov 22 07:24:20 crc kubenswrapper[4853]: I1122 07:24:20.098156 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4xg8b\" (UniqueName: \"kubernetes.io/projected/2a4a0b48-b667-4c6b-acca-108007024d7a-kube-api-access-4xg8b\") pod \"redhat-operators-hcc5b\" (UID: \"2a4a0b48-b667-4c6b-acca-108007024d7a\") " pod="openshift-marketplace/redhat-operators-hcc5b" Nov 22 07:24:20 crc kubenswrapper[4853]: I1122 07:24:20.098195 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2a4a0b48-b667-4c6b-acca-108007024d7a-catalog-content\") pod \"redhat-operators-hcc5b\" (UID: \"2a4a0b48-b667-4c6b-acca-108007024d7a\") " pod="openshift-marketplace/redhat-operators-hcc5b" Nov 22 07:24:20 crc kubenswrapper[4853]: I1122 07:24:20.098783 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2a4a0b48-b667-4c6b-acca-108007024d7a-utilities\") pod \"redhat-operators-hcc5b\" (UID: \"2a4a0b48-b667-4c6b-acca-108007024d7a\") " pod="openshift-marketplace/redhat-operators-hcc5b" Nov 22 07:24:20 crc kubenswrapper[4853]: I1122 07:24:20.098888 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2a4a0b48-b667-4c6b-acca-108007024d7a-catalog-content\") pod \"redhat-operators-hcc5b\" (UID: \"2a4a0b48-b667-4c6b-acca-108007024d7a\") " pod="openshift-marketplace/redhat-operators-hcc5b" Nov 22 07:24:20 crc kubenswrapper[4853]: I1122 07:24:20.125184 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-4xg8b\" (UniqueName: \"kubernetes.io/projected/2a4a0b48-b667-4c6b-acca-108007024d7a-kube-api-access-4xg8b\") pod \"redhat-operators-hcc5b\" (UID: \"2a4a0b48-b667-4c6b-acca-108007024d7a\") " pod="openshift-marketplace/redhat-operators-hcc5b" Nov 22 07:24:20 crc kubenswrapper[4853]: I1122 07:24:20.212516 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hcc5b" Nov 22 07:24:20 crc kubenswrapper[4853]: I1122 07:24:20.267510 4853 generic.go:334] "Generic (PLEG): container finished" podID="683f3f0d-d7fe-42b9-8deb-2358f0f8d572" containerID="38e7496ed3fbdb49b8cc5a67bf52c6aaa2fcaa198bab66b4fb7e5f229ae3d9b0" exitCode=0 Nov 22 07:24:20 crc kubenswrapper[4853]: I1122 07:24:20.267617 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92107q65h" event={"ID":"683f3f0d-d7fe-42b9-8deb-2358f0f8d572","Type":"ContainerDied","Data":"38e7496ed3fbdb49b8cc5a67bf52c6aaa2fcaa198bab66b4fb7e5f229ae3d9b0"} Nov 22 07:24:20 crc kubenswrapper[4853]: I1122 07:24:20.270423 4853 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 22 07:24:20 crc kubenswrapper[4853]: I1122 07:24:20.700085 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hcc5b"] Nov 22 07:24:21 crc kubenswrapper[4853]: I1122 07:24:21.275768 4853 generic.go:334] "Generic (PLEG): container finished" podID="2a4a0b48-b667-4c6b-acca-108007024d7a" containerID="8546fedb7e4a4931887323ddb7571be0d3a249fa1e37a6236173eb330644439c" exitCode=0 Nov 22 07:24:21 crc kubenswrapper[4853]: I1122 07:24:21.275910 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hcc5b" event={"ID":"2a4a0b48-b667-4c6b-acca-108007024d7a","Type":"ContainerDied","Data":"8546fedb7e4a4931887323ddb7571be0d3a249fa1e37a6236173eb330644439c"} Nov 22 07:24:21 crc kubenswrapper[4853]: I1122 07:24:21.276359 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hcc5b" event={"ID":"2a4a0b48-b667-4c6b-acca-108007024d7a","Type":"ContainerStarted","Data":"afe843a9f9f0ee9d3661d87eb649ed42446935c3b755deb11313d9f2c907e261"} Nov 22 07:24:22 crc kubenswrapper[4853]: I1122 07:24:22.287415 4853 generic.go:334] "Generic (PLEG): container finished" podID="683f3f0d-d7fe-42b9-8deb-2358f0f8d572" containerID="3e8c90f3736b8525043e34e65dc53b8c971d3ecec14dd396624d8061f6e988df" exitCode=0 Nov 22 07:24:22 crc kubenswrapper[4853]: I1122 07:24:22.287487 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92107q65h" event={"ID":"683f3f0d-d7fe-42b9-8deb-2358f0f8d572","Type":"ContainerDied","Data":"3e8c90f3736b8525043e34e65dc53b8c971d3ecec14dd396624d8061f6e988df"} Nov 22 07:24:22 crc kubenswrapper[4853]: I1122 07:24:22.290080 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hcc5b" event={"ID":"2a4a0b48-b667-4c6b-acca-108007024d7a","Type":"ContainerStarted","Data":"fc33635ff7f536cf5f9b9e541835085ba5d805e395459feec815c858175533c4"} Nov 22 07:24:23 crc kubenswrapper[4853]: I1122 07:24:23.300161 4853 generic.go:334] "Generic (PLEG): container finished" podID="2a4a0b48-b667-4c6b-acca-108007024d7a" containerID="fc33635ff7f536cf5f9b9e541835085ba5d805e395459feec815c858175533c4" exitCode=0 Nov 22 07:24:23 crc kubenswrapper[4853]: I1122 07:24:23.300740 4853 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hcc5b" event={"ID":"2a4a0b48-b667-4c6b-acca-108007024d7a","Type":"ContainerDied","Data":"fc33635ff7f536cf5f9b9e541835085ba5d805e395459feec815c858175533c4"} Nov 22 07:24:23 crc kubenswrapper[4853]: I1122 07:24:23.309798 4853 generic.go:334] "Generic (PLEG): container finished" podID="683f3f0d-d7fe-42b9-8deb-2358f0f8d572" containerID="0b3495b00931d01189aee0f7318d87b5eeca7681295893cb1e9e3041fa23bb53" exitCode=0 Nov 22 07:24:23 crc kubenswrapper[4853]: I1122 07:24:23.309846 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92107q65h" event={"ID":"683f3f0d-d7fe-42b9-8deb-2358f0f8d572","Type":"ContainerDied","Data":"0b3495b00931d01189aee0f7318d87b5eeca7681295893cb1e9e3041fa23bb53"} Nov 22 07:24:24 crc kubenswrapper[4853]: I1122 07:24:24.319343 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hcc5b" event={"ID":"2a4a0b48-b667-4c6b-acca-108007024d7a","Type":"ContainerStarted","Data":"214be9db8c7a199942db894d3ff968007ffeb68fe4c1b3c36f29ce7a6715ef45"} Nov 22 07:24:24 crc kubenswrapper[4853]: I1122 07:24:24.339073 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-hcc5b" podStartSLOduration=2.839976509 podStartE2EDuration="5.33905381s" podCreationTimestamp="2025-11-22 07:24:19 +0000 UTC" firstStartedPulling="2025-11-22 07:24:21.277704037 +0000 UTC m=+860.118326663" lastFinishedPulling="2025-11-22 07:24:23.776781348 +0000 UTC m=+862.617403964" observedRunningTime="2025-11-22 07:24:24.336242302 +0000 UTC m=+863.176864948" watchObservedRunningTime="2025-11-22 07:24:24.33905381 +0000 UTC m=+863.179676436" Nov 22 07:24:24 crc kubenswrapper[4853]: I1122 07:24:24.608731 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92107q65h" Nov 22 07:24:24 crc kubenswrapper[4853]: I1122 07:24:24.674877 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/683f3f0d-d7fe-42b9-8deb-2358f0f8d572-util\") pod \"683f3f0d-d7fe-42b9-8deb-2358f0f8d572\" (UID: \"683f3f0d-d7fe-42b9-8deb-2358f0f8d572\") " Nov 22 07:24:24 crc kubenswrapper[4853]: I1122 07:24:24.674962 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/683f3f0d-d7fe-42b9-8deb-2358f0f8d572-bundle\") pod \"683f3f0d-d7fe-42b9-8deb-2358f0f8d572\" (UID: \"683f3f0d-d7fe-42b9-8deb-2358f0f8d572\") " Nov 22 07:24:24 crc kubenswrapper[4853]: I1122 07:24:24.675059 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hh8sw\" (UniqueName: \"kubernetes.io/projected/683f3f0d-d7fe-42b9-8deb-2358f0f8d572-kube-api-access-hh8sw\") pod \"683f3f0d-d7fe-42b9-8deb-2358f0f8d572\" (UID: \"683f3f0d-d7fe-42b9-8deb-2358f0f8d572\") " Nov 22 07:24:24 crc kubenswrapper[4853]: I1122 07:24:24.677785 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/683f3f0d-d7fe-42b9-8deb-2358f0f8d572-bundle" (OuterVolumeSpecName: "bundle") pod "683f3f0d-d7fe-42b9-8deb-2358f0f8d572" (UID: "683f3f0d-d7fe-42b9-8deb-2358f0f8d572"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:24:24 crc kubenswrapper[4853]: I1122 07:24:24.685764 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/683f3f0d-d7fe-42b9-8deb-2358f0f8d572-util" (OuterVolumeSpecName: "util") pod "683f3f0d-d7fe-42b9-8deb-2358f0f8d572" (UID: "683f3f0d-d7fe-42b9-8deb-2358f0f8d572"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:24:24 crc kubenswrapper[4853]: I1122 07:24:24.689082 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/683f3f0d-d7fe-42b9-8deb-2358f0f8d572-kube-api-access-hh8sw" (OuterVolumeSpecName: "kube-api-access-hh8sw") pod "683f3f0d-d7fe-42b9-8deb-2358f0f8d572" (UID: "683f3f0d-d7fe-42b9-8deb-2358f0f8d572"). InnerVolumeSpecName "kube-api-access-hh8sw". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:24:24 crc kubenswrapper[4853]: I1122 07:24:24.777461 4853 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/683f3f0d-d7fe-42b9-8deb-2358f0f8d572-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:24:24 crc kubenswrapper[4853]: I1122 07:24:24.777511 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hh8sw\" (UniqueName: \"kubernetes.io/projected/683f3f0d-d7fe-42b9-8deb-2358f0f8d572-kube-api-access-hh8sw\") on node \"crc\" DevicePath \"\"" Nov 22 07:24:24 crc kubenswrapper[4853]: I1122 07:24:24.777527 4853 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/683f3f0d-d7fe-42b9-8deb-2358f0f8d572-util\") on node \"crc\" DevicePath \"\"" Nov 22 07:24:25 crc kubenswrapper[4853]: I1122 07:24:25.327877 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92107q65h" Nov 22 07:24:25 crc kubenswrapper[4853]: I1122 07:24:25.327862 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92107q65h" event={"ID":"683f3f0d-d7fe-42b9-8deb-2358f0f8d572","Type":"ContainerDied","Data":"89f7a6853b0abe7b14e67180d921556fc467931420aa204cff2a6aadd47ef050"} Nov 22 07:24:25 crc kubenswrapper[4853]: I1122 07:24:25.327951 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="89f7a6853b0abe7b14e67180d921556fc467931420aa204cff2a6aadd47ef050" Nov 22 07:24:28 crc kubenswrapper[4853]: I1122 07:24:28.470069 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-pqtsz"] Nov 22 07:24:28 crc kubenswrapper[4853]: I1122 07:24:28.472296 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-pqtsz" podUID="893f7e02-580a-4093-ab42-ea73ffffcfe6" containerName="ovn-controller" containerID="cri-o://1a08eb68f9d8b071226ac64744097fe0097ab78f86242872c4e0aacdd213adbc" gracePeriod=30 Nov 22 07:24:28 crc kubenswrapper[4853]: I1122 07:24:28.472356 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-pqtsz" podUID="893f7e02-580a-4093-ab42-ea73ffffcfe6" containerName="nbdb" containerID="cri-o://34d7101a46b49cedde91b3502f1bc962179253a1bfbe5f21599ce89a91ab905d" gracePeriod=30 Nov 22 07:24:28 crc kubenswrapper[4853]: I1122 07:24:28.472391 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-pqtsz" podUID="893f7e02-580a-4093-ab42-ea73ffffcfe6" containerName="kube-rbac-proxy-node" containerID="cri-o://959a487fe36b2bf7615c5ed65bc8255b52332f6e2d973912a5e028ff35324f2f" gracePeriod=30 Nov 22 07:24:28 crc kubenswrapper[4853]: I1122 07:24:28.472611 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-pqtsz" podUID="893f7e02-580a-4093-ab42-ea73ffffcfe6" containerName="northd" containerID="cri-o://28ab876013e6c4eb3a4806b0184f1d6615a478a34b6d8ee790d29bccd77f33df" gracePeriod=30 Nov 22 07:24:28 crc kubenswrapper[4853]: I1122 07:24:28.472593 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-pqtsz" podUID="893f7e02-580a-4093-ab42-ea73ffffcfe6" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://979f6b48ff6c91deca2bb0784d1ec745c4c0d70abdfd6f519eb82058fec05c5f" gracePeriod=30 Nov 22 07:24:28 crc kubenswrapper[4853]: I1122 07:24:28.472623 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-pqtsz" podUID="893f7e02-580a-4093-ab42-ea73ffffcfe6" containerName="sbdb" containerID="cri-o://902e41148618e1099d651f2a4cbfa00ca1dd11533b086564f090dd498ecc1b95" gracePeriod=30 Nov 22 07:24:28 crc kubenswrapper[4853]: I1122 07:24:28.472630 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-pqtsz" podUID="893f7e02-580a-4093-ab42-ea73ffffcfe6" containerName="ovn-acl-logging" containerID="cri-o://ccf377912acc39f2d30013ef3bd58a8268b927fd89da2306fcf00cb89c30893e" gracePeriod=30 Nov 22 07:24:28 crc kubenswrapper[4853]: I1122 07:24:28.549239 4853 kuberuntime_container.go:808] "Killing container with a grace 
period" pod="openshift-ovn-kubernetes/ovnkube-node-pqtsz" podUID="893f7e02-580a-4093-ab42-ea73ffffcfe6" containerName="ovnkube-controller" containerID="cri-o://d52086eb6365bd264b8e88b5080611781a72c88a17061a9a7d7db1ce43507d3f" gracePeriod=30 Nov 22 07:24:30 crc kubenswrapper[4853]: I1122 07:24:30.213366 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-hcc5b" Nov 22 07:24:30 crc kubenswrapper[4853]: I1122 07:24:30.213809 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-hcc5b" Nov 22 07:24:30 crc kubenswrapper[4853]: I1122 07:24:30.357301 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pqtsz_893f7e02-580a-4093-ab42-ea73ffffcfe6/ovnkube-controller/3.log" Nov 22 07:24:30 crc kubenswrapper[4853]: I1122 07:24:30.358864 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pqtsz_893f7e02-580a-4093-ab42-ea73ffffcfe6/ovn-acl-logging/0.log" Nov 22 07:24:30 crc kubenswrapper[4853]: I1122 07:24:30.359664 4853 generic.go:334] "Generic (PLEG): container finished" podID="893f7e02-580a-4093-ab42-ea73ffffcfe6" containerID="979f6b48ff6c91deca2bb0784d1ec745c4c0d70abdfd6f519eb82058fec05c5f" exitCode=0 Nov 22 07:24:30 crc kubenswrapper[4853]: I1122 07:24:30.359691 4853 generic.go:334] "Generic (PLEG): container finished" podID="893f7e02-580a-4093-ab42-ea73ffffcfe6" containerID="959a487fe36b2bf7615c5ed65bc8255b52332f6e2d973912a5e028ff35324f2f" exitCode=0 Nov 22 07:24:30 crc kubenswrapper[4853]: I1122 07:24:30.359700 4853 generic.go:334] "Generic (PLEG): container finished" podID="893f7e02-580a-4093-ab42-ea73ffffcfe6" containerID="ccf377912acc39f2d30013ef3bd58a8268b927fd89da2306fcf00cb89c30893e" exitCode=143 Nov 22 07:24:30 crc kubenswrapper[4853]: I1122 07:24:30.359723 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqtsz" event={"ID":"893f7e02-580a-4093-ab42-ea73ffffcfe6","Type":"ContainerDied","Data":"979f6b48ff6c91deca2bb0784d1ec745c4c0d70abdfd6f519eb82058fec05c5f"} Nov 22 07:24:30 crc kubenswrapper[4853]: I1122 07:24:30.359767 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqtsz" event={"ID":"893f7e02-580a-4093-ab42-ea73ffffcfe6","Type":"ContainerDied","Data":"959a487fe36b2bf7615c5ed65bc8255b52332f6e2d973912a5e028ff35324f2f"} Nov 22 07:24:30 crc kubenswrapper[4853]: I1122 07:24:30.359778 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqtsz" event={"ID":"893f7e02-580a-4093-ab42-ea73ffffcfe6","Type":"ContainerDied","Data":"ccf377912acc39f2d30013ef3bd58a8268b927fd89da2306fcf00cb89c30893e"} Nov 22 07:24:30 crc kubenswrapper[4853]: I1122 07:24:30.370255 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-hcc5b" Nov 22 07:24:30 crc kubenswrapper[4853]: I1122 07:24:30.512868 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-hcc5b" Nov 22 07:24:31 crc kubenswrapper[4853]: E1122 07:24:31.324934 4853 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 902e41148618e1099d651f2a4cbfa00ca1dd11533b086564f090dd498ecc1b95 is running failed: container process not found" 
containerID="902e41148618e1099d651f2a4cbfa00ca1dd11533b086564f090dd498ecc1b95" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"sb\"\n"] Nov 22 07:24:31 crc kubenswrapper[4853]: E1122 07:24:31.324911 4853 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d52086eb6365bd264b8e88b5080611781a72c88a17061a9a7d7db1ce43507d3f is running failed: container process not found" containerID="d52086eb6365bd264b8e88b5080611781a72c88a17061a9a7d7db1ce43507d3f" cmd=["/bin/bash","-c","#!/bin/bash\ntest -f /etc/cni/net.d/10-ovn-kubernetes.conf\n"] Nov 22 07:24:31 crc kubenswrapper[4853]: E1122 07:24:31.324934 4853 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 34d7101a46b49cedde91b3502f1bc962179253a1bfbe5f21599ce89a91ab905d is running failed: container process not found" containerID="34d7101a46b49cedde91b3502f1bc962179253a1bfbe5f21599ce89a91ab905d" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"nb\"\n"] Nov 22 07:24:31 crc kubenswrapper[4853]: E1122 07:24:31.325949 4853 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d52086eb6365bd264b8e88b5080611781a72c88a17061a9a7d7db1ce43507d3f is running failed: container process not found" containerID="d52086eb6365bd264b8e88b5080611781a72c88a17061a9a7d7db1ce43507d3f" cmd=["/bin/bash","-c","#!/bin/bash\ntest -f /etc/cni/net.d/10-ovn-kubernetes.conf\n"] Nov 22 07:24:31 crc kubenswrapper[4853]: E1122 07:24:31.326073 4853 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 34d7101a46b49cedde91b3502f1bc962179253a1bfbe5f21599ce89a91ab905d is running failed: container process not found" containerID="34d7101a46b49cedde91b3502f1bc962179253a1bfbe5f21599ce89a91ab905d" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"nb\"\n"] Nov 22 07:24:31 crc kubenswrapper[4853]: E1122 07:24:31.326132 4853 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 902e41148618e1099d651f2a4cbfa00ca1dd11533b086564f090dd498ecc1b95 is running failed: container process not found" containerID="902e41148618e1099d651f2a4cbfa00ca1dd11533b086564f090dd498ecc1b95" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"sb\"\n"] Nov 22 07:24:31 crc kubenswrapper[4853]: E1122 07:24:31.326487 4853 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 34d7101a46b49cedde91b3502f1bc962179253a1bfbe5f21599ce89a91ab905d is running failed: container process not found" containerID="34d7101a46b49cedde91b3502f1bc962179253a1bfbe5f21599ce89a91ab905d" cmd=["/bin/bash","-c","set -xeo pipefail\n. 
Nov 22 07:24:31 crc kubenswrapper[4853]: E1122 07:24:31.326487 4853 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 34d7101a46b49cedde91b3502f1bc962179253a1bfbe5f21599ce89a91ab905d is running failed: container process not found" containerID="34d7101a46b49cedde91b3502f1bc962179253a1bfbe5f21599ce89a91ab905d" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"nb\"\n"] Nov 22 07:24:31 crc kubenswrapper[4853]: E1122 07:24:31.326525 4853 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 34d7101a46b49cedde91b3502f1bc962179253a1bfbe5f21599ce89a91ab905d is running failed: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-pqtsz" podUID="893f7e02-580a-4093-ab42-ea73ffffcfe6" containerName="nbdb" Nov 22 07:24:31 crc kubenswrapper[4853]: E1122 07:24:31.326502 4853 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 902e41148618e1099d651f2a4cbfa00ca1dd11533b086564f090dd498ecc1b95 is running failed: container process not found" containerID="902e41148618e1099d651f2a4cbfa00ca1dd11533b086564f090dd498ecc1b95" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"sb\"\n"] Nov 22 07:24:31 crc kubenswrapper[4853]: E1122 07:24:31.326722 4853 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 902e41148618e1099d651f2a4cbfa00ca1dd11533b086564f090dd498ecc1b95 is running failed: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-pqtsz" podUID="893f7e02-580a-4093-ab42-ea73ffffcfe6" containerName="sbdb" Nov 22 07:24:31 crc kubenswrapper[4853]: E1122 07:24:31.326924 4853 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d52086eb6365bd264b8e88b5080611781a72c88a17061a9a7d7db1ce43507d3f is running failed: container process not found" containerID="d52086eb6365bd264b8e88b5080611781a72c88a17061a9a7d7db1ce43507d3f" cmd=["/bin/bash","-c","#!/bin/bash\ntest -f /etc/cni/net.d/10-ovn-kubernetes.conf\n"] Nov 22 07:24:31 crc kubenswrapper[4853]: E1122 07:24:31.326993 4853 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d52086eb6365bd264b8e88b5080611781a72c88a17061a9a7d7db1ce43507d3f is running failed: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-pqtsz" podUID="893f7e02-580a-4093-ab42-ea73ffffcfe6" containerName="ovnkube-controller" Nov 22 07:24:31 crc kubenswrapper[4853]: I1122 07:24:31.369672 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pqtsz_893f7e02-580a-4093-ab42-ea73ffffcfe6/ovnkube-controller/3.log" Nov 22 07:24:31 crc kubenswrapper[4853]: I1122 07:24:31.373436 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pqtsz_893f7e02-580a-4093-ab42-ea73ffffcfe6/ovn-acl-logging/0.log" Nov 22 07:24:31 crc kubenswrapper[4853]: I1122 07:24:31.374026 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pqtsz_893f7e02-580a-4093-ab42-ea73ffffcfe6/ovn-controller/0.log" Nov 22 07:24:31 crc kubenswrapper[4853]: I1122 07:24:31.374523 4853 generic.go:334] "Generic (PLEG): container finished" podID="893f7e02-580a-4093-ab42-ea73ffffcfe6" containerID="902e41148618e1099d651f2a4cbfa00ca1dd11533b086564f090dd498ecc1b95" exitCode=0 Nov 22 07:24:31 crc kubenswrapper[4853]: I1122 07:24:31.374562 4853 generic.go:334] "Generic (PLEG): container finished" 
podID="893f7e02-580a-4093-ab42-ea73ffffcfe6" containerID="34d7101a46b49cedde91b3502f1bc962179253a1bfbe5f21599ce89a91ab905d" exitCode=0 Nov 22 07:24:31 crc kubenswrapper[4853]: I1122 07:24:31.374575 4853 generic.go:334] "Generic (PLEG): container finished" podID="893f7e02-580a-4093-ab42-ea73ffffcfe6" containerID="1a08eb68f9d8b071226ac64744097fe0097ab78f86242872c4e0aacdd213adbc" exitCode=143 Nov 22 07:24:31 crc kubenswrapper[4853]: I1122 07:24:31.374625 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqtsz" event={"ID":"893f7e02-580a-4093-ab42-ea73ffffcfe6","Type":"ContainerDied","Data":"902e41148618e1099d651f2a4cbfa00ca1dd11533b086564f090dd498ecc1b95"} Nov 22 07:24:31 crc kubenswrapper[4853]: I1122 07:24:31.374709 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqtsz" event={"ID":"893f7e02-580a-4093-ab42-ea73ffffcfe6","Type":"ContainerDied","Data":"34d7101a46b49cedde91b3502f1bc962179253a1bfbe5f21599ce89a91ab905d"} Nov 22 07:24:31 crc kubenswrapper[4853]: I1122 07:24:31.374741 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqtsz" event={"ID":"893f7e02-580a-4093-ab42-ea73ffffcfe6","Type":"ContainerDied","Data":"1a08eb68f9d8b071226ac64744097fe0097ab78f86242872c4e0aacdd213adbc"} Nov 22 07:24:32 crc kubenswrapper[4853]: I1122 07:24:32.382792 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-rvgxj_dbbe3472-17cc-48dd-8e46-393b00149429/kube-multus/2.log" Nov 22 07:24:32 crc kubenswrapper[4853]: I1122 07:24:32.384673 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-rvgxj_dbbe3472-17cc-48dd-8e46-393b00149429/kube-multus/1.log" Nov 22 07:24:32 crc kubenswrapper[4853]: I1122 07:24:32.384788 4853 generic.go:334] "Generic (PLEG): container finished" podID="dbbe3472-17cc-48dd-8e46-393b00149429" containerID="fc6f64218dd1813a9ea5797839ef5c7d90de0212464d216ff37e24c2c36128fe" exitCode=2 Nov 22 07:24:32 crc kubenswrapper[4853]: I1122 07:24:32.384910 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-rvgxj" event={"ID":"dbbe3472-17cc-48dd-8e46-393b00149429","Type":"ContainerDied","Data":"fc6f64218dd1813a9ea5797839ef5c7d90de0212464d216ff37e24c2c36128fe"} Nov 22 07:24:32 crc kubenswrapper[4853]: I1122 07:24:32.384999 4853 scope.go:117] "RemoveContainer" containerID="338e7cc28de696b2bd165b4b7d21bb9029ee9f270cf1d43c65ea3934262f0d7d" Nov 22 07:24:32 crc kubenswrapper[4853]: I1122 07:24:32.385652 4853 scope.go:117] "RemoveContainer" containerID="fc6f64218dd1813a9ea5797839ef5c7d90de0212464d216ff37e24c2c36128fe" Nov 22 07:24:32 crc kubenswrapper[4853]: I1122 07:24:32.401723 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pqtsz_893f7e02-580a-4093-ab42-ea73ffffcfe6/ovnkube-controller/3.log" Nov 22 07:24:32 crc kubenswrapper[4853]: I1122 07:24:32.434126 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pqtsz_893f7e02-580a-4093-ab42-ea73ffffcfe6/ovn-acl-logging/0.log" Nov 22 07:24:32 crc kubenswrapper[4853]: I1122 07:24:32.440865 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pqtsz_893f7e02-580a-4093-ab42-ea73ffffcfe6/ovn-controller/0.log" Nov 22 07:24:32 crc kubenswrapper[4853]: I1122 07:24:32.441408 4853 generic.go:334] "Generic (PLEG): container finished" podID="893f7e02-580a-4093-ab42-ea73ffffcfe6" 
containerID="d52086eb6365bd264b8e88b5080611781a72c88a17061a9a7d7db1ce43507d3f" exitCode=0 Nov 22 07:24:32 crc kubenswrapper[4853]: I1122 07:24:32.441440 4853 generic.go:334] "Generic (PLEG): container finished" podID="893f7e02-580a-4093-ab42-ea73ffffcfe6" containerID="28ab876013e6c4eb3a4806b0184f1d6615a478a34b6d8ee790d29bccd77f33df" exitCode=0 Nov 22 07:24:32 crc kubenswrapper[4853]: I1122 07:24:32.441469 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqtsz" event={"ID":"893f7e02-580a-4093-ab42-ea73ffffcfe6","Type":"ContainerDied","Data":"d52086eb6365bd264b8e88b5080611781a72c88a17061a9a7d7db1ce43507d3f"} Nov 22 07:24:32 crc kubenswrapper[4853]: I1122 07:24:32.441507 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqtsz" event={"ID":"893f7e02-580a-4093-ab42-ea73ffffcfe6","Type":"ContainerDied","Data":"28ab876013e6c4eb3a4806b0184f1d6615a478a34b6d8ee790d29bccd77f33df"} Nov 22 07:24:32 crc kubenswrapper[4853]: I1122 07:24:32.531961 4853 scope.go:117] "RemoveContainer" containerID="6bc7f34ec100b4e47e45d01a9176361b33c988b6033ef25f9b662050421df6ef" Nov 22 07:24:32 crc kubenswrapper[4853]: I1122 07:24:32.594815 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pqtsz_893f7e02-580a-4093-ab42-ea73ffffcfe6/ovn-acl-logging/0.log" Nov 22 07:24:32 crc kubenswrapper[4853]: I1122 07:24:32.595333 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pqtsz_893f7e02-580a-4093-ab42-ea73ffffcfe6/ovn-controller/0.log" Nov 22 07:24:32 crc kubenswrapper[4853]: I1122 07:24:32.595870 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-pqtsz" Nov 22 07:24:32 crc kubenswrapper[4853]: I1122 07:24:32.649017 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hcc5b"] Nov 22 07:24:32 crc kubenswrapper[4853]: I1122 07:24:32.649432 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-hcc5b" podUID="2a4a0b48-b667-4c6b-acca-108007024d7a" containerName="registry-server" containerID="cri-o://214be9db8c7a199942db894d3ff968007ffeb68fe4c1b3c36f29ce7a6715ef45" gracePeriod=2 Nov 22 07:24:32 crc kubenswrapper[4853]: I1122 07:24:32.700326 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/893f7e02-580a-4093-ab42-ea73ffffcfe6-env-overrides\") pod \"893f7e02-580a-4093-ab42-ea73ffffcfe6\" (UID: \"893f7e02-580a-4093-ab42-ea73ffffcfe6\") " Nov 22 07:24:32 crc kubenswrapper[4853]: I1122 07:24:32.700725 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/893f7e02-580a-4093-ab42-ea73ffffcfe6-host-kubelet\") pod \"893f7e02-580a-4093-ab42-ea73ffffcfe6\" (UID: \"893f7e02-580a-4093-ab42-ea73ffffcfe6\") " Nov 22 07:24:32 crc kubenswrapper[4853]: I1122 07:24:32.700917 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/893f7e02-580a-4093-ab42-ea73ffffcfe6-host-run-netns\") pod \"893f7e02-580a-4093-ab42-ea73ffffcfe6\" (UID: \"893f7e02-580a-4093-ab42-ea73ffffcfe6\") " Nov 22 07:24:32 crc kubenswrapper[4853]: I1122 07:24:32.701026 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"node-log\" (UniqueName: \"kubernetes.io/host-path/893f7e02-580a-4093-ab42-ea73ffffcfe6-node-log\") pod \"893f7e02-580a-4093-ab42-ea73ffffcfe6\" (UID: \"893f7e02-580a-4093-ab42-ea73ffffcfe6\") " Nov 22 07:24:32 crc kubenswrapper[4853]: I1122 07:24:32.701121 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/893f7e02-580a-4093-ab42-ea73ffffcfe6-host-cni-netd\") pod \"893f7e02-580a-4093-ab42-ea73ffffcfe6\" (UID: \"893f7e02-580a-4093-ab42-ea73ffffcfe6\") " Nov 22 07:24:32 crc kubenswrapper[4853]: I1122 07:24:32.701213 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-89zdr\" (UniqueName: \"kubernetes.io/projected/893f7e02-580a-4093-ab42-ea73ffffcfe6-kube-api-access-89zdr\") pod \"893f7e02-580a-4093-ab42-ea73ffffcfe6\" (UID: \"893f7e02-580a-4093-ab42-ea73ffffcfe6\") " Nov 22 07:24:32 crc kubenswrapper[4853]: I1122 07:24:32.701324 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/893f7e02-580a-4093-ab42-ea73ffffcfe6-host-cni-bin\") pod \"893f7e02-580a-4093-ab42-ea73ffffcfe6\" (UID: \"893f7e02-580a-4093-ab42-ea73ffffcfe6\") " Nov 22 07:24:32 crc kubenswrapper[4853]: I1122 07:24:32.701415 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/893f7e02-580a-4093-ab42-ea73ffffcfe6-host-var-lib-cni-networks-ovn-kubernetes\") pod \"893f7e02-580a-4093-ab42-ea73ffffcfe6\" (UID: \"893f7e02-580a-4093-ab42-ea73ffffcfe6\") " Nov 22 07:24:32 crc kubenswrapper[4853]: I1122 07:24:32.700902 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/893f7e02-580a-4093-ab42-ea73ffffcfe6-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "893f7e02-580a-4093-ab42-ea73ffffcfe6" (UID: "893f7e02-580a-4093-ab42-ea73ffffcfe6"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 22 07:24:32 crc kubenswrapper[4853]: I1122 07:24:32.700971 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/893f7e02-580a-4093-ab42-ea73ffffcfe6-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "893f7e02-580a-4093-ab42-ea73ffffcfe6" (UID: "893f7e02-580a-4093-ab42-ea73ffffcfe6"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 22 07:24:32 crc kubenswrapper[4853]: I1122 07:24:32.701425 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/893f7e02-580a-4093-ab42-ea73ffffcfe6-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "893f7e02-580a-4093-ab42-ea73ffffcfe6" (UID: "893f7e02-580a-4093-ab42-ea73ffffcfe6"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:24:32 crc kubenswrapper[4853]: I1122 07:24:32.701482 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/893f7e02-580a-4093-ab42-ea73ffffcfe6-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "893f7e02-580a-4093-ab42-ea73ffffcfe6" (UID: "893f7e02-580a-4093-ab42-ea73ffffcfe6"). InnerVolumeSpecName "host-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 22 07:24:32 crc kubenswrapper[4853]: I1122 07:24:32.701593 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/893f7e02-580a-4093-ab42-ea73ffffcfe6-node-log" (OuterVolumeSpecName: "node-log") pod "893f7e02-580a-4093-ab42-ea73ffffcfe6" (UID: "893f7e02-580a-4093-ab42-ea73ffffcfe6"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 22 07:24:32 crc kubenswrapper[4853]: I1122 07:24:32.701646 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/893f7e02-580a-4093-ab42-ea73ffffcfe6-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "893f7e02-580a-4093-ab42-ea73ffffcfe6" (UID: "893f7e02-580a-4093-ab42-ea73ffffcfe6"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 22 07:24:32 crc kubenswrapper[4853]: I1122 07:24:32.701501 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/893f7e02-580a-4093-ab42-ea73ffffcfe6-host-slash\") pod \"893f7e02-580a-4093-ab42-ea73ffffcfe6\" (UID: \"893f7e02-580a-4093-ab42-ea73ffffcfe6\") " Nov 22 07:24:32 crc kubenswrapper[4853]: I1122 07:24:32.701910 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/893f7e02-580a-4093-ab42-ea73ffffcfe6-run-ovn\") pod \"893f7e02-580a-4093-ab42-ea73ffffcfe6\" (UID: \"893f7e02-580a-4093-ab42-ea73ffffcfe6\") " Nov 22 07:24:32 crc kubenswrapper[4853]: I1122 07:24:32.701995 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/893f7e02-580a-4093-ab42-ea73ffffcfe6-host-run-ovn-kubernetes\") pod \"893f7e02-580a-4093-ab42-ea73ffffcfe6\" (UID: \"893f7e02-580a-4093-ab42-ea73ffffcfe6\") " Nov 22 07:24:32 crc kubenswrapper[4853]: I1122 07:24:32.702223 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/893f7e02-580a-4093-ab42-ea73ffffcfe6-ovnkube-script-lib\") pod \"893f7e02-580a-4093-ab42-ea73ffffcfe6\" (UID: \"893f7e02-580a-4093-ab42-ea73ffffcfe6\") " Nov 22 07:24:32 crc kubenswrapper[4853]: I1122 07:24:32.702380 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/893f7e02-580a-4093-ab42-ea73ffffcfe6-ovnkube-config\") pod \"893f7e02-580a-4093-ab42-ea73ffffcfe6\" (UID: \"893f7e02-580a-4093-ab42-ea73ffffcfe6\") " Nov 22 07:24:32 crc kubenswrapper[4853]: I1122 07:24:32.702486 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/893f7e02-580a-4093-ab42-ea73ffffcfe6-var-lib-openvswitch\") pod \"893f7e02-580a-4093-ab42-ea73ffffcfe6\" (UID: \"893f7e02-580a-4093-ab42-ea73ffffcfe6\") " Nov 22 07:24:32 crc kubenswrapper[4853]: I1122 07:24:32.703091 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/893f7e02-580a-4093-ab42-ea73ffffcfe6-run-openvswitch\") pod \"893f7e02-580a-4093-ab42-ea73ffffcfe6\" (UID: \"893f7e02-580a-4093-ab42-ea73ffffcfe6\") " Nov 22 07:24:32 crc kubenswrapper[4853]: I1122 07:24:32.703248 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/893f7e02-580a-4093-ab42-ea73ffffcfe6-ovn-node-metrics-cert\") pod \"893f7e02-580a-4093-ab42-ea73ffffcfe6\" (UID: \"893f7e02-580a-4093-ab42-ea73ffffcfe6\") " Nov 22 07:24:32 crc kubenswrapper[4853]: I1122 07:24:32.702253 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/893f7e02-580a-4093-ab42-ea73ffffcfe6-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "893f7e02-580a-4093-ab42-ea73ffffcfe6" (UID: "893f7e02-580a-4093-ab42-ea73ffffcfe6"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 22 07:24:32 crc kubenswrapper[4853]: I1122 07:24:32.702278 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/893f7e02-580a-4093-ab42-ea73ffffcfe6-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "893f7e02-580a-4093-ab42-ea73ffffcfe6" (UID: "893f7e02-580a-4093-ab42-ea73ffffcfe6"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 22 07:24:32 crc kubenswrapper[4853]: I1122 07:24:32.702312 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/893f7e02-580a-4093-ab42-ea73ffffcfe6-host-slash" (OuterVolumeSpecName: "host-slash") pod "893f7e02-580a-4093-ab42-ea73ffffcfe6" (UID: "893f7e02-580a-4093-ab42-ea73ffffcfe6"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 22 07:24:32 crc kubenswrapper[4853]: I1122 07:24:32.703415 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/893f7e02-580a-4093-ab42-ea73ffffcfe6-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "893f7e02-580a-4093-ab42-ea73ffffcfe6" (UID: "893f7e02-580a-4093-ab42-ea73ffffcfe6"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:24:32 crc kubenswrapper[4853]: I1122 07:24:32.702329 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/893f7e02-580a-4093-ab42-ea73ffffcfe6-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "893f7e02-580a-4093-ab42-ea73ffffcfe6" (UID: "893f7e02-580a-4093-ab42-ea73ffffcfe6"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 22 07:24:32 crc kubenswrapper[4853]: I1122 07:24:32.702789 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/893f7e02-580a-4093-ab42-ea73ffffcfe6-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "893f7e02-580a-4093-ab42-ea73ffffcfe6" (UID: "893f7e02-580a-4093-ab42-ea73ffffcfe6"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:24:32 crc kubenswrapper[4853]: I1122 07:24:32.703289 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/893f7e02-580a-4093-ab42-ea73ffffcfe6-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "893f7e02-580a-4093-ab42-ea73ffffcfe6" (UID: "893f7e02-580a-4093-ab42-ea73ffffcfe6"). InnerVolumeSpecName "var-lib-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 22 07:24:32 crc kubenswrapper[4853]: I1122 07:24:32.703349 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/893f7e02-580a-4093-ab42-ea73ffffcfe6-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "893f7e02-580a-4093-ab42-ea73ffffcfe6" (UID: "893f7e02-580a-4093-ab42-ea73ffffcfe6"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 22 07:24:32 crc kubenswrapper[4853]: I1122 07:24:32.703694 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/893f7e02-580a-4093-ab42-ea73ffffcfe6-log-socket" (OuterVolumeSpecName: "log-socket") pod "893f7e02-580a-4093-ab42-ea73ffffcfe6" (UID: "893f7e02-580a-4093-ab42-ea73ffffcfe6"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 22 07:24:32 crc kubenswrapper[4853]: I1122 07:24:32.703384 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/893f7e02-580a-4093-ab42-ea73ffffcfe6-log-socket\") pod \"893f7e02-580a-4093-ab42-ea73ffffcfe6\" (UID: \"893f7e02-580a-4093-ab42-ea73ffffcfe6\") " Nov 22 07:24:32 crc kubenswrapper[4853]: I1122 07:24:32.704184 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/893f7e02-580a-4093-ab42-ea73ffffcfe6-systemd-units\") pod \"893f7e02-580a-4093-ab42-ea73ffffcfe6\" (UID: \"893f7e02-580a-4093-ab42-ea73ffffcfe6\") " Nov 22 07:24:32 crc kubenswrapper[4853]: I1122 07:24:32.704320 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/893f7e02-580a-4093-ab42-ea73ffffcfe6-run-systemd\") pod \"893f7e02-580a-4093-ab42-ea73ffffcfe6\" (UID: \"893f7e02-580a-4093-ab42-ea73ffffcfe6\") " Nov 22 07:24:32 crc kubenswrapper[4853]: I1122 07:24:32.704438 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/893f7e02-580a-4093-ab42-ea73ffffcfe6-etc-openvswitch\") pod \"893f7e02-580a-4093-ab42-ea73ffffcfe6\" (UID: \"893f7e02-580a-4093-ab42-ea73ffffcfe6\") " Nov 22 07:24:32 crc kubenswrapper[4853]: I1122 07:24:32.705085 4853 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/893f7e02-580a-4093-ab42-ea73ffffcfe6-env-overrides\") on node \"crc\" DevicePath \"\"" Nov 22 07:24:32 crc kubenswrapper[4853]: I1122 07:24:32.705200 4853 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/893f7e02-580a-4093-ab42-ea73ffffcfe6-host-kubelet\") on node \"crc\" DevicePath \"\"" Nov 22 07:24:32 crc kubenswrapper[4853]: I1122 07:24:32.705313 4853 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/893f7e02-580a-4093-ab42-ea73ffffcfe6-host-run-netns\") on node \"crc\" DevicePath \"\"" Nov 22 07:24:32 crc kubenswrapper[4853]: I1122 07:24:32.705393 4853 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/893f7e02-580a-4093-ab42-ea73ffffcfe6-node-log\") on node \"crc\" DevicePath \"\"" Nov 22 07:24:32 crc kubenswrapper[4853]: I1122 07:24:32.705467 4853 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/893f7e02-580a-4093-ab42-ea73ffffcfe6-host-cni-netd\") on node \"crc\" DevicePath \"\"" Nov 22 07:24:32 crc kubenswrapper[4853]: I1122 07:24:32.705533 4853 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/893f7e02-580a-4093-ab42-ea73ffffcfe6-host-cni-bin\") on node \"crc\" DevicePath \"\"" Nov 22 07:24:32 crc kubenswrapper[4853]: I1122 07:24:32.705607 4853 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/893f7e02-580a-4093-ab42-ea73ffffcfe6-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Nov 22 07:24:32 crc kubenswrapper[4853]: I1122 07:24:32.705673 4853 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/893f7e02-580a-4093-ab42-ea73ffffcfe6-host-slash\") on node \"crc\" DevicePath \"\"" Nov 22 07:24:32 crc kubenswrapper[4853]: I1122 07:24:32.706275 4853 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/893f7e02-580a-4093-ab42-ea73ffffcfe6-run-ovn\") on node \"crc\" DevicePath \"\"" Nov 22 07:24:32 crc kubenswrapper[4853]: I1122 07:24:32.706351 4853 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/893f7e02-580a-4093-ab42-ea73ffffcfe6-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Nov 22 07:24:32 crc kubenswrapper[4853]: I1122 07:24:32.706423 4853 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/893f7e02-580a-4093-ab42-ea73ffffcfe6-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Nov 22 07:24:32 crc kubenswrapper[4853]: I1122 07:24:32.706499 4853 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/893f7e02-580a-4093-ab42-ea73ffffcfe6-ovnkube-config\") on node \"crc\" DevicePath \"\"" Nov 22 07:24:32 crc kubenswrapper[4853]: I1122 07:24:32.706573 4853 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/893f7e02-580a-4093-ab42-ea73ffffcfe6-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Nov 22 07:24:32 crc kubenswrapper[4853]: I1122 07:24:32.706640 4853 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/893f7e02-580a-4093-ab42-ea73ffffcfe6-run-openvswitch\") on node \"crc\" DevicePath \"\"" Nov 22 07:24:32 crc kubenswrapper[4853]: I1122 07:24:32.706705 4853 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/893f7e02-580a-4093-ab42-ea73ffffcfe6-log-socket\") on node \"crc\" DevicePath \"\"" Nov 22 07:24:32 crc kubenswrapper[4853]: I1122 07:24:32.704445 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/893f7e02-580a-4093-ab42-ea73ffffcfe6-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "893f7e02-580a-4093-ab42-ea73ffffcfe6" (UID: "893f7e02-580a-4093-ab42-ea73ffffcfe6"). InnerVolumeSpecName "systemd-units". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 22 07:24:32 crc kubenswrapper[4853]: I1122 07:24:32.704526 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/893f7e02-580a-4093-ab42-ea73ffffcfe6-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "893f7e02-580a-4093-ab42-ea73ffffcfe6" (UID: "893f7e02-580a-4093-ab42-ea73ffffcfe6"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 22 07:24:32 crc kubenswrapper[4853]: I1122 07:24:32.717537 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-8b45j"] Nov 22 07:24:32 crc kubenswrapper[4853]: E1122 07:24:32.718571 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="893f7e02-580a-4093-ab42-ea73ffffcfe6" containerName="ovn-controller" Nov 22 07:24:32 crc kubenswrapper[4853]: I1122 07:24:32.718603 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="893f7e02-580a-4093-ab42-ea73ffffcfe6" containerName="ovn-controller" Nov 22 07:24:32 crc kubenswrapper[4853]: E1122 07:24:32.718614 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="893f7e02-580a-4093-ab42-ea73ffffcfe6" containerName="nbdb" Nov 22 07:24:32 crc kubenswrapper[4853]: I1122 07:24:32.718621 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="893f7e02-580a-4093-ab42-ea73ffffcfe6" containerName="nbdb" Nov 22 07:24:32 crc kubenswrapper[4853]: E1122 07:24:32.718640 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="893f7e02-580a-4093-ab42-ea73ffffcfe6" containerName="ovnkube-controller" Nov 22 07:24:32 crc kubenswrapper[4853]: I1122 07:24:32.718648 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="893f7e02-580a-4093-ab42-ea73ffffcfe6" containerName="ovnkube-controller" Nov 22 07:24:32 crc kubenswrapper[4853]: E1122 07:24:32.718654 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="893f7e02-580a-4093-ab42-ea73ffffcfe6" containerName="ovnkube-controller" Nov 22 07:24:32 crc kubenswrapper[4853]: I1122 07:24:32.718660 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="893f7e02-580a-4093-ab42-ea73ffffcfe6" containerName="ovnkube-controller" Nov 22 07:24:32 crc kubenswrapper[4853]: E1122 07:24:32.718667 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="893f7e02-580a-4093-ab42-ea73ffffcfe6" containerName="ovnkube-controller" Nov 22 07:24:32 crc kubenswrapper[4853]: I1122 07:24:32.718672 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="893f7e02-580a-4093-ab42-ea73ffffcfe6" containerName="ovnkube-controller" Nov 22 07:24:32 crc kubenswrapper[4853]: E1122 07:24:32.718683 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="893f7e02-580a-4093-ab42-ea73ffffcfe6" containerName="kube-rbac-proxy-node" Nov 22 07:24:32 crc kubenswrapper[4853]: I1122 07:24:32.718690 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="893f7e02-580a-4093-ab42-ea73ffffcfe6" containerName="kube-rbac-proxy-node" Nov 22 07:24:32 crc kubenswrapper[4853]: E1122 07:24:32.718696 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="893f7e02-580a-4093-ab42-ea73ffffcfe6" containerName="northd" Nov 22 07:24:32 crc kubenswrapper[4853]: I1122 07:24:32.718702 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="893f7e02-580a-4093-ab42-ea73ffffcfe6" containerName="northd" Nov 22 07:24:32 crc kubenswrapper[4853]: E1122 07:24:32.718711 4853 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="893f7e02-580a-4093-ab42-ea73ffffcfe6" containerName="kube-rbac-proxy-ovn-metrics" Nov 22 07:24:32 crc kubenswrapper[4853]: I1122 07:24:32.718717 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="893f7e02-580a-4093-ab42-ea73ffffcfe6" containerName="kube-rbac-proxy-ovn-metrics" Nov 22 07:24:32 crc kubenswrapper[4853]: E1122 07:24:32.718726 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="893f7e02-580a-4093-ab42-ea73ffffcfe6" containerName="ovn-acl-logging" Nov 22 07:24:32 crc kubenswrapper[4853]: I1122 07:24:32.718735 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="893f7e02-580a-4093-ab42-ea73ffffcfe6" containerName="ovn-acl-logging" Nov 22 07:24:32 crc kubenswrapper[4853]: E1122 07:24:32.718763 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="893f7e02-580a-4093-ab42-ea73ffffcfe6" containerName="sbdb" Nov 22 07:24:32 crc kubenswrapper[4853]: I1122 07:24:32.718770 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="893f7e02-580a-4093-ab42-ea73ffffcfe6" containerName="sbdb" Nov 22 07:24:32 crc kubenswrapper[4853]: E1122 07:24:32.718781 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="893f7e02-580a-4093-ab42-ea73ffffcfe6" containerName="kubecfg-setup" Nov 22 07:24:32 crc kubenswrapper[4853]: I1122 07:24:32.718789 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="893f7e02-580a-4093-ab42-ea73ffffcfe6" containerName="kubecfg-setup" Nov 22 07:24:32 crc kubenswrapper[4853]: E1122 07:24:32.718799 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="683f3f0d-d7fe-42b9-8deb-2358f0f8d572" containerName="util" Nov 22 07:24:32 crc kubenswrapper[4853]: I1122 07:24:32.718805 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="683f3f0d-d7fe-42b9-8deb-2358f0f8d572" containerName="util" Nov 22 07:24:32 crc kubenswrapper[4853]: E1122 07:24:32.718811 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="683f3f0d-d7fe-42b9-8deb-2358f0f8d572" containerName="pull" Nov 22 07:24:32 crc kubenswrapper[4853]: I1122 07:24:32.718816 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="683f3f0d-d7fe-42b9-8deb-2358f0f8d572" containerName="pull" Nov 22 07:24:32 crc kubenswrapper[4853]: E1122 07:24:32.718823 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="683f3f0d-d7fe-42b9-8deb-2358f0f8d572" containerName="extract" Nov 22 07:24:32 crc kubenswrapper[4853]: I1122 07:24:32.718829 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="683f3f0d-d7fe-42b9-8deb-2358f0f8d572" containerName="extract" Nov 22 07:24:32 crc kubenswrapper[4853]: I1122 07:24:32.718951 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="893f7e02-580a-4093-ab42-ea73ffffcfe6" containerName="sbdb" Nov 22 07:24:32 crc kubenswrapper[4853]: I1122 07:24:32.718967 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="893f7e02-580a-4093-ab42-ea73ffffcfe6" containerName="ovnkube-controller" Nov 22 07:24:32 crc kubenswrapper[4853]: I1122 07:24:32.718980 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="893f7e02-580a-4093-ab42-ea73ffffcfe6" containerName="ovnkube-controller" Nov 22 07:24:32 crc kubenswrapper[4853]: I1122 07:24:32.718992 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="893f7e02-580a-4093-ab42-ea73ffffcfe6" containerName="kube-rbac-proxy-node" Nov 22 07:24:32 crc kubenswrapper[4853]: I1122 07:24:32.719006 4853 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="683f3f0d-d7fe-42b9-8deb-2358f0f8d572" containerName="extract" Nov 22 07:24:32 crc kubenswrapper[4853]: I1122 07:24:32.719018 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="893f7e02-580a-4093-ab42-ea73ffffcfe6" containerName="northd" Nov 22 07:24:32 crc kubenswrapper[4853]: I1122 07:24:32.719024 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="893f7e02-580a-4093-ab42-ea73ffffcfe6" containerName="ovn-acl-logging" Nov 22 07:24:32 crc kubenswrapper[4853]: I1122 07:24:32.719033 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="893f7e02-580a-4093-ab42-ea73ffffcfe6" containerName="nbdb" Nov 22 07:24:32 crc kubenswrapper[4853]: I1122 07:24:32.719041 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="893f7e02-580a-4093-ab42-ea73ffffcfe6" containerName="kube-rbac-proxy-ovn-metrics" Nov 22 07:24:32 crc kubenswrapper[4853]: I1122 07:24:32.719049 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="893f7e02-580a-4093-ab42-ea73ffffcfe6" containerName="ovnkube-controller" Nov 22 07:24:32 crc kubenswrapper[4853]: I1122 07:24:32.719056 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="893f7e02-580a-4093-ab42-ea73ffffcfe6" containerName="ovn-controller" Nov 22 07:24:32 crc kubenswrapper[4853]: E1122 07:24:32.719206 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="893f7e02-580a-4093-ab42-ea73ffffcfe6" containerName="ovnkube-controller" Nov 22 07:24:32 crc kubenswrapper[4853]: I1122 07:24:32.719216 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="893f7e02-580a-4093-ab42-ea73ffffcfe6" containerName="ovnkube-controller" Nov 22 07:24:32 crc kubenswrapper[4853]: E1122 07:24:32.719224 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="893f7e02-580a-4093-ab42-ea73ffffcfe6" containerName="ovnkube-controller" Nov 22 07:24:32 crc kubenswrapper[4853]: I1122 07:24:32.719231 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="893f7e02-580a-4093-ab42-ea73ffffcfe6" containerName="ovnkube-controller" Nov 22 07:24:32 crc kubenswrapper[4853]: I1122 07:24:32.719342 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="893f7e02-580a-4093-ab42-ea73ffffcfe6" containerName="ovnkube-controller" Nov 22 07:24:32 crc kubenswrapper[4853]: I1122 07:24:32.719351 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="893f7e02-580a-4093-ab42-ea73ffffcfe6" containerName="ovnkube-controller" Nov 22 07:24:32 crc kubenswrapper[4853]: I1122 07:24:32.721611 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-8b45j" Nov 22 07:24:32 crc kubenswrapper[4853]: I1122 07:24:32.724071 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/893f7e02-580a-4093-ab42-ea73ffffcfe6-kube-api-access-89zdr" (OuterVolumeSpecName: "kube-api-access-89zdr") pod "893f7e02-580a-4093-ab42-ea73ffffcfe6" (UID: "893f7e02-580a-4093-ab42-ea73ffffcfe6"). InnerVolumeSpecName "kube-api-access-89zdr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:24:32 crc kubenswrapper[4853]: I1122 07:24:32.728409 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/893f7e02-580a-4093-ab42-ea73ffffcfe6-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "893f7e02-580a-4093-ab42-ea73ffffcfe6" (UID: "893f7e02-580a-4093-ab42-ea73ffffcfe6"). 
InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:24:32 crc kubenswrapper[4853]: I1122 07:24:32.753024 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/893f7e02-580a-4093-ab42-ea73ffffcfe6-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "893f7e02-580a-4093-ab42-ea73ffffcfe6" (UID: "893f7e02-580a-4093-ab42-ea73ffffcfe6"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 22 07:24:32 crc kubenswrapper[4853]: I1122 07:24:32.809135 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/15690643-1b1a-4ced-9755-a8731ea4fd74-etc-openvswitch\") pod \"ovnkube-node-8b45j\" (UID: \"15690643-1b1a-4ced-9755-a8731ea4fd74\") " pod="openshift-ovn-kubernetes/ovnkube-node-8b45j" Nov 22 07:24:32 crc kubenswrapper[4853]: I1122 07:24:32.809254 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/15690643-1b1a-4ced-9755-a8731ea4fd74-host-slash\") pod \"ovnkube-node-8b45j\" (UID: \"15690643-1b1a-4ced-9755-a8731ea4fd74\") " pod="openshift-ovn-kubernetes/ovnkube-node-8b45j" Nov 22 07:24:32 crc kubenswrapper[4853]: I1122 07:24:32.809278 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/15690643-1b1a-4ced-9755-a8731ea4fd74-run-openvswitch\") pod \"ovnkube-node-8b45j\" (UID: \"15690643-1b1a-4ced-9755-a8731ea4fd74\") " pod="openshift-ovn-kubernetes/ovnkube-node-8b45j" Nov 22 07:24:32 crc kubenswrapper[4853]: I1122 07:24:32.809325 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/15690643-1b1a-4ced-9755-a8731ea4fd74-host-kubelet\") pod \"ovnkube-node-8b45j\" (UID: \"15690643-1b1a-4ced-9755-a8731ea4fd74\") " pod="openshift-ovn-kubernetes/ovnkube-node-8b45j" Nov 22 07:24:32 crc kubenswrapper[4853]: I1122 07:24:32.809345 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/15690643-1b1a-4ced-9755-a8731ea4fd74-log-socket\") pod \"ovnkube-node-8b45j\" (UID: \"15690643-1b1a-4ced-9755-a8731ea4fd74\") " pod="openshift-ovn-kubernetes/ovnkube-node-8b45j" Nov 22 07:24:32 crc kubenswrapper[4853]: I1122 07:24:32.809364 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/15690643-1b1a-4ced-9755-a8731ea4fd74-ovnkube-config\") pod \"ovnkube-node-8b45j\" (UID: \"15690643-1b1a-4ced-9755-a8731ea4fd74\") " pod="openshift-ovn-kubernetes/ovnkube-node-8b45j" Nov 22 07:24:32 crc kubenswrapper[4853]: I1122 07:24:32.809396 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tdzxj\" (UniqueName: \"kubernetes.io/projected/15690643-1b1a-4ced-9755-a8731ea4fd74-kube-api-access-tdzxj\") pod \"ovnkube-node-8b45j\" (UID: \"15690643-1b1a-4ced-9755-a8731ea4fd74\") " pod="openshift-ovn-kubernetes/ovnkube-node-8b45j" Nov 22 07:24:32 crc kubenswrapper[4853]: I1122 07:24:32.809419 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/15690643-1b1a-4ced-9755-a8731ea4fd74-host-cni-netd\") pod \"ovnkube-node-8b45j\" (UID: \"15690643-1b1a-4ced-9755-a8731ea4fd74\") " pod="openshift-ovn-kubernetes/ovnkube-node-8b45j" Nov 22 07:24:32 crc kubenswrapper[4853]: I1122 07:24:32.809444 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/15690643-1b1a-4ced-9755-a8731ea4fd74-systemd-units\") pod \"ovnkube-node-8b45j\" (UID: \"15690643-1b1a-4ced-9755-a8731ea4fd74\") " pod="openshift-ovn-kubernetes/ovnkube-node-8b45j" Nov 22 07:24:32 crc kubenswrapper[4853]: I1122 07:24:32.809488 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/15690643-1b1a-4ced-9755-a8731ea4fd74-run-ovn\") pod \"ovnkube-node-8b45j\" (UID: \"15690643-1b1a-4ced-9755-a8731ea4fd74\") " pod="openshift-ovn-kubernetes/ovnkube-node-8b45j" Nov 22 07:24:32 crc kubenswrapper[4853]: I1122 07:24:32.809513 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/15690643-1b1a-4ced-9755-a8731ea4fd74-env-overrides\") pod \"ovnkube-node-8b45j\" (UID: \"15690643-1b1a-4ced-9755-a8731ea4fd74\") " pod="openshift-ovn-kubernetes/ovnkube-node-8b45j" Nov 22 07:24:32 crc kubenswrapper[4853]: I1122 07:24:32.809553 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/15690643-1b1a-4ced-9755-a8731ea4fd74-node-log\") pod \"ovnkube-node-8b45j\" (UID: \"15690643-1b1a-4ced-9755-a8731ea4fd74\") " pod="openshift-ovn-kubernetes/ovnkube-node-8b45j" Nov 22 07:24:32 crc kubenswrapper[4853]: I1122 07:24:32.809575 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/15690643-1b1a-4ced-9755-a8731ea4fd74-host-cni-bin\") pod \"ovnkube-node-8b45j\" (UID: \"15690643-1b1a-4ced-9755-a8731ea4fd74\") " pod="openshift-ovn-kubernetes/ovnkube-node-8b45j" Nov 22 07:24:32 crc kubenswrapper[4853]: I1122 07:24:32.809594 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/15690643-1b1a-4ced-9755-a8731ea4fd74-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-8b45j\" (UID: \"15690643-1b1a-4ced-9755-a8731ea4fd74\") " pod="openshift-ovn-kubernetes/ovnkube-node-8b45j" Nov 22 07:24:32 crc kubenswrapper[4853]: I1122 07:24:32.809635 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/15690643-1b1a-4ced-9755-a8731ea4fd74-host-run-ovn-kubernetes\") pod \"ovnkube-node-8b45j\" (UID: \"15690643-1b1a-4ced-9755-a8731ea4fd74\") " pod="openshift-ovn-kubernetes/ovnkube-node-8b45j" Nov 22 07:24:32 crc kubenswrapper[4853]: I1122 07:24:32.809655 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/15690643-1b1a-4ced-9755-a8731ea4fd74-var-lib-openvswitch\") pod \"ovnkube-node-8b45j\" (UID: \"15690643-1b1a-4ced-9755-a8731ea4fd74\") " pod="openshift-ovn-kubernetes/ovnkube-node-8b45j" Nov 22 07:24:32 crc kubenswrapper[4853]: I1122 07:24:32.809672 4853 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/15690643-1b1a-4ced-9755-a8731ea4fd74-ovnkube-script-lib\") pod \"ovnkube-node-8b45j\" (UID: \"15690643-1b1a-4ced-9755-a8731ea4fd74\") " pod="openshift-ovn-kubernetes/ovnkube-node-8b45j" Nov 22 07:24:32 crc kubenswrapper[4853]: I1122 07:24:32.809704 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/15690643-1b1a-4ced-9755-a8731ea4fd74-run-systemd\") pod \"ovnkube-node-8b45j\" (UID: \"15690643-1b1a-4ced-9755-a8731ea4fd74\") " pod="openshift-ovn-kubernetes/ovnkube-node-8b45j" Nov 22 07:24:32 crc kubenswrapper[4853]: I1122 07:24:32.809727 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/15690643-1b1a-4ced-9755-a8731ea4fd74-host-run-netns\") pod \"ovnkube-node-8b45j\" (UID: \"15690643-1b1a-4ced-9755-a8731ea4fd74\") " pod="openshift-ovn-kubernetes/ovnkube-node-8b45j" Nov 22 07:24:32 crc kubenswrapper[4853]: I1122 07:24:32.809784 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/15690643-1b1a-4ced-9755-a8731ea4fd74-ovn-node-metrics-cert\") pod \"ovnkube-node-8b45j\" (UID: \"15690643-1b1a-4ced-9755-a8731ea4fd74\") " pod="openshift-ovn-kubernetes/ovnkube-node-8b45j" Nov 22 07:24:32 crc kubenswrapper[4853]: I1122 07:24:32.809866 4853 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/893f7e02-580a-4093-ab42-ea73ffffcfe6-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Nov 22 07:24:32 crc kubenswrapper[4853]: I1122 07:24:32.809886 4853 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/893f7e02-580a-4093-ab42-ea73ffffcfe6-systemd-units\") on node \"crc\" DevicePath \"\"" Nov 22 07:24:32 crc kubenswrapper[4853]: I1122 07:24:32.809897 4853 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/893f7e02-580a-4093-ab42-ea73ffffcfe6-run-systemd\") on node \"crc\" DevicePath \"\"" Nov 22 07:24:32 crc kubenswrapper[4853]: I1122 07:24:32.809908 4853 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/893f7e02-580a-4093-ab42-ea73ffffcfe6-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Nov 22 07:24:32 crc kubenswrapper[4853]: I1122 07:24:32.809944 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-89zdr\" (UniqueName: \"kubernetes.io/projected/893f7e02-580a-4093-ab42-ea73ffffcfe6-kube-api-access-89zdr\") on node \"crc\" DevicePath \"\"" Nov 22 07:24:32 crc kubenswrapper[4853]: I1122 07:24:32.911418 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/15690643-1b1a-4ced-9755-a8731ea4fd74-host-run-ovn-kubernetes\") pod \"ovnkube-node-8b45j\" (UID: \"15690643-1b1a-4ced-9755-a8731ea4fd74\") " pod="openshift-ovn-kubernetes/ovnkube-node-8b45j" Nov 22 07:24:32 crc kubenswrapper[4853]: I1122 07:24:32.911487 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/15690643-1b1a-4ced-9755-a8731ea4fd74-var-lib-openvswitch\") pod \"ovnkube-node-8b45j\" (UID: \"15690643-1b1a-4ced-9755-a8731ea4fd74\") " pod="openshift-ovn-kubernetes/ovnkube-node-8b45j" Nov 22 07:24:32 crc kubenswrapper[4853]: I1122 07:24:32.911518 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/15690643-1b1a-4ced-9755-a8731ea4fd74-ovnkube-script-lib\") pod \"ovnkube-node-8b45j\" (UID: \"15690643-1b1a-4ced-9755-a8731ea4fd74\") " pod="openshift-ovn-kubernetes/ovnkube-node-8b45j" Nov 22 07:24:32 crc kubenswrapper[4853]: I1122 07:24:32.911545 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/15690643-1b1a-4ced-9755-a8731ea4fd74-run-systemd\") pod \"ovnkube-node-8b45j\" (UID: \"15690643-1b1a-4ced-9755-a8731ea4fd74\") " pod="openshift-ovn-kubernetes/ovnkube-node-8b45j" Nov 22 07:24:32 crc kubenswrapper[4853]: I1122 07:24:32.911579 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/15690643-1b1a-4ced-9755-a8731ea4fd74-host-run-netns\") pod \"ovnkube-node-8b45j\" (UID: \"15690643-1b1a-4ced-9755-a8731ea4fd74\") " pod="openshift-ovn-kubernetes/ovnkube-node-8b45j" Nov 22 07:24:32 crc kubenswrapper[4853]: I1122 07:24:32.911614 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/15690643-1b1a-4ced-9755-a8731ea4fd74-ovn-node-metrics-cert\") pod \"ovnkube-node-8b45j\" (UID: \"15690643-1b1a-4ced-9755-a8731ea4fd74\") " pod="openshift-ovn-kubernetes/ovnkube-node-8b45j" Nov 22 07:24:32 crc kubenswrapper[4853]: I1122 07:24:32.911655 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/15690643-1b1a-4ced-9755-a8731ea4fd74-etc-openvswitch\") pod \"ovnkube-node-8b45j\" (UID: \"15690643-1b1a-4ced-9755-a8731ea4fd74\") " pod="openshift-ovn-kubernetes/ovnkube-node-8b45j" Nov 22 07:24:32 crc kubenswrapper[4853]: I1122 07:24:32.911682 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/15690643-1b1a-4ced-9755-a8731ea4fd74-host-slash\") pod \"ovnkube-node-8b45j\" (UID: \"15690643-1b1a-4ced-9755-a8731ea4fd74\") " pod="openshift-ovn-kubernetes/ovnkube-node-8b45j" Nov 22 07:24:32 crc kubenswrapper[4853]: I1122 07:24:32.911704 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/15690643-1b1a-4ced-9755-a8731ea4fd74-run-openvswitch\") pod \"ovnkube-node-8b45j\" (UID: \"15690643-1b1a-4ced-9755-a8731ea4fd74\") " pod="openshift-ovn-kubernetes/ovnkube-node-8b45j" Nov 22 07:24:32 crc kubenswrapper[4853]: I1122 07:24:32.911732 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/15690643-1b1a-4ced-9755-a8731ea4fd74-host-kubelet\") pod \"ovnkube-node-8b45j\" (UID: \"15690643-1b1a-4ced-9755-a8731ea4fd74\") " pod="openshift-ovn-kubernetes/ovnkube-node-8b45j" Nov 22 07:24:32 crc kubenswrapper[4853]: I1122 07:24:32.911865 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/15690643-1b1a-4ced-9755-a8731ea4fd74-log-socket\") pod 
\"ovnkube-node-8b45j\" (UID: \"15690643-1b1a-4ced-9755-a8731ea4fd74\") " pod="openshift-ovn-kubernetes/ovnkube-node-8b45j" Nov 22 07:24:32 crc kubenswrapper[4853]: I1122 07:24:32.911894 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/15690643-1b1a-4ced-9755-a8731ea4fd74-ovnkube-config\") pod \"ovnkube-node-8b45j\" (UID: \"15690643-1b1a-4ced-9755-a8731ea4fd74\") " pod="openshift-ovn-kubernetes/ovnkube-node-8b45j" Nov 22 07:24:32 crc kubenswrapper[4853]: I1122 07:24:32.911912 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tdzxj\" (UniqueName: \"kubernetes.io/projected/15690643-1b1a-4ced-9755-a8731ea4fd74-kube-api-access-tdzxj\") pod \"ovnkube-node-8b45j\" (UID: \"15690643-1b1a-4ced-9755-a8731ea4fd74\") " pod="openshift-ovn-kubernetes/ovnkube-node-8b45j" Nov 22 07:24:32 crc kubenswrapper[4853]: I1122 07:24:32.911934 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/15690643-1b1a-4ced-9755-a8731ea4fd74-host-cni-netd\") pod \"ovnkube-node-8b45j\" (UID: \"15690643-1b1a-4ced-9755-a8731ea4fd74\") " pod="openshift-ovn-kubernetes/ovnkube-node-8b45j" Nov 22 07:24:32 crc kubenswrapper[4853]: I1122 07:24:32.911959 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/15690643-1b1a-4ced-9755-a8731ea4fd74-systemd-units\") pod \"ovnkube-node-8b45j\" (UID: \"15690643-1b1a-4ced-9755-a8731ea4fd74\") " pod="openshift-ovn-kubernetes/ovnkube-node-8b45j" Nov 22 07:24:32 crc kubenswrapper[4853]: I1122 07:24:32.911994 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/15690643-1b1a-4ced-9755-a8731ea4fd74-run-ovn\") pod \"ovnkube-node-8b45j\" (UID: \"15690643-1b1a-4ced-9755-a8731ea4fd74\") " pod="openshift-ovn-kubernetes/ovnkube-node-8b45j" Nov 22 07:24:32 crc kubenswrapper[4853]: I1122 07:24:32.912020 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/15690643-1b1a-4ced-9755-a8731ea4fd74-env-overrides\") pod \"ovnkube-node-8b45j\" (UID: \"15690643-1b1a-4ced-9755-a8731ea4fd74\") " pod="openshift-ovn-kubernetes/ovnkube-node-8b45j" Nov 22 07:24:32 crc kubenswrapper[4853]: I1122 07:24:32.912041 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/15690643-1b1a-4ced-9755-a8731ea4fd74-node-log\") pod \"ovnkube-node-8b45j\" (UID: \"15690643-1b1a-4ced-9755-a8731ea4fd74\") " pod="openshift-ovn-kubernetes/ovnkube-node-8b45j" Nov 22 07:24:32 crc kubenswrapper[4853]: I1122 07:24:32.912067 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/15690643-1b1a-4ced-9755-a8731ea4fd74-host-cni-bin\") pod \"ovnkube-node-8b45j\" (UID: \"15690643-1b1a-4ced-9755-a8731ea4fd74\") " pod="openshift-ovn-kubernetes/ovnkube-node-8b45j" Nov 22 07:24:32 crc kubenswrapper[4853]: I1122 07:24:32.912089 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/15690643-1b1a-4ced-9755-a8731ea4fd74-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-8b45j\" (UID: 
\"15690643-1b1a-4ced-9755-a8731ea4fd74\") " pod="openshift-ovn-kubernetes/ovnkube-node-8b45j" Nov 22 07:24:32 crc kubenswrapper[4853]: I1122 07:24:32.912182 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/15690643-1b1a-4ced-9755-a8731ea4fd74-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-8b45j\" (UID: \"15690643-1b1a-4ced-9755-a8731ea4fd74\") " pod="openshift-ovn-kubernetes/ovnkube-node-8b45j" Nov 22 07:24:32 crc kubenswrapper[4853]: I1122 07:24:32.912232 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/15690643-1b1a-4ced-9755-a8731ea4fd74-host-run-ovn-kubernetes\") pod \"ovnkube-node-8b45j\" (UID: \"15690643-1b1a-4ced-9755-a8731ea4fd74\") " pod="openshift-ovn-kubernetes/ovnkube-node-8b45j" Nov 22 07:24:32 crc kubenswrapper[4853]: I1122 07:24:32.912256 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/15690643-1b1a-4ced-9755-a8731ea4fd74-var-lib-openvswitch\") pod \"ovnkube-node-8b45j\" (UID: \"15690643-1b1a-4ced-9755-a8731ea4fd74\") " pod="openshift-ovn-kubernetes/ovnkube-node-8b45j" Nov 22 07:24:32 crc kubenswrapper[4853]: I1122 07:24:32.912602 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/15690643-1b1a-4ced-9755-a8731ea4fd74-log-socket\") pod \"ovnkube-node-8b45j\" (UID: \"15690643-1b1a-4ced-9755-a8731ea4fd74\") " pod="openshift-ovn-kubernetes/ovnkube-node-8b45j" Nov 22 07:24:32 crc kubenswrapper[4853]: I1122 07:24:32.912644 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/15690643-1b1a-4ced-9755-a8731ea4fd74-etc-openvswitch\") pod \"ovnkube-node-8b45j\" (UID: \"15690643-1b1a-4ced-9755-a8731ea4fd74\") " pod="openshift-ovn-kubernetes/ovnkube-node-8b45j" Nov 22 07:24:32 crc kubenswrapper[4853]: I1122 07:24:32.912801 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/15690643-1b1a-4ced-9755-a8731ea4fd74-run-openvswitch\") pod \"ovnkube-node-8b45j\" (UID: \"15690643-1b1a-4ced-9755-a8731ea4fd74\") " pod="openshift-ovn-kubernetes/ovnkube-node-8b45j" Nov 22 07:24:32 crc kubenswrapper[4853]: I1122 07:24:32.912842 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/15690643-1b1a-4ced-9755-a8731ea4fd74-host-slash\") pod \"ovnkube-node-8b45j\" (UID: \"15690643-1b1a-4ced-9755-a8731ea4fd74\") " pod="openshift-ovn-kubernetes/ovnkube-node-8b45j" Nov 22 07:24:32 crc kubenswrapper[4853]: I1122 07:24:32.912688 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/15690643-1b1a-4ced-9755-a8731ea4fd74-run-systemd\") pod \"ovnkube-node-8b45j\" (UID: \"15690643-1b1a-4ced-9755-a8731ea4fd74\") " pod="openshift-ovn-kubernetes/ovnkube-node-8b45j" Nov 22 07:24:32 crc kubenswrapper[4853]: I1122 07:24:32.912901 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/15690643-1b1a-4ced-9755-a8731ea4fd74-node-log\") pod \"ovnkube-node-8b45j\" (UID: \"15690643-1b1a-4ced-9755-a8731ea4fd74\") " pod="openshift-ovn-kubernetes/ovnkube-node-8b45j" Nov 22 07:24:32 crc 
kubenswrapper[4853]: I1122 07:24:32.912927 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/15690643-1b1a-4ced-9755-a8731ea4fd74-run-ovn\") pod \"ovnkube-node-8b45j\" (UID: \"15690643-1b1a-4ced-9755-a8731ea4fd74\") " pod="openshift-ovn-kubernetes/ovnkube-node-8b45j" Nov 22 07:24:32 crc kubenswrapper[4853]: I1122 07:24:32.912951 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/15690643-1b1a-4ced-9755-a8731ea4fd74-host-cni-bin\") pod \"ovnkube-node-8b45j\" (UID: \"15690643-1b1a-4ced-9755-a8731ea4fd74\") " pod="openshift-ovn-kubernetes/ovnkube-node-8b45j" Nov 22 07:24:32 crc kubenswrapper[4853]: I1122 07:24:32.912928 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/15690643-1b1a-4ced-9755-a8731ea4fd74-host-cni-netd\") pod \"ovnkube-node-8b45j\" (UID: \"15690643-1b1a-4ced-9755-a8731ea4fd74\") " pod="openshift-ovn-kubernetes/ovnkube-node-8b45j" Nov 22 07:24:32 crc kubenswrapper[4853]: I1122 07:24:32.913017 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/15690643-1b1a-4ced-9755-a8731ea4fd74-systemd-units\") pod \"ovnkube-node-8b45j\" (UID: \"15690643-1b1a-4ced-9755-a8731ea4fd74\") " pod="openshift-ovn-kubernetes/ovnkube-node-8b45j" Nov 22 07:24:32 crc kubenswrapper[4853]: I1122 07:24:32.913101 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/15690643-1b1a-4ced-9755-a8731ea4fd74-ovnkube-script-lib\") pod \"ovnkube-node-8b45j\" (UID: \"15690643-1b1a-4ced-9755-a8731ea4fd74\") " pod="openshift-ovn-kubernetes/ovnkube-node-8b45j" Nov 22 07:24:32 crc kubenswrapper[4853]: I1122 07:24:32.913274 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/15690643-1b1a-4ced-9755-a8731ea4fd74-host-kubelet\") pod \"ovnkube-node-8b45j\" (UID: \"15690643-1b1a-4ced-9755-a8731ea4fd74\") " pod="openshift-ovn-kubernetes/ovnkube-node-8b45j" Nov 22 07:24:32 crc kubenswrapper[4853]: I1122 07:24:32.912708 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/15690643-1b1a-4ced-9755-a8731ea4fd74-host-run-netns\") pod \"ovnkube-node-8b45j\" (UID: \"15690643-1b1a-4ced-9755-a8731ea4fd74\") " pod="openshift-ovn-kubernetes/ovnkube-node-8b45j" Nov 22 07:24:32 crc kubenswrapper[4853]: I1122 07:24:32.913733 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/15690643-1b1a-4ced-9755-a8731ea4fd74-env-overrides\") pod \"ovnkube-node-8b45j\" (UID: \"15690643-1b1a-4ced-9755-a8731ea4fd74\") " pod="openshift-ovn-kubernetes/ovnkube-node-8b45j" Nov 22 07:24:32 crc kubenswrapper[4853]: I1122 07:24:32.913826 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/15690643-1b1a-4ced-9755-a8731ea4fd74-ovnkube-config\") pod \"ovnkube-node-8b45j\" (UID: \"15690643-1b1a-4ced-9755-a8731ea4fd74\") " pod="openshift-ovn-kubernetes/ovnkube-node-8b45j" Nov 22 07:24:32 crc kubenswrapper[4853]: I1122 07:24:32.918422 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/15690643-1b1a-4ced-9755-a8731ea4fd74-ovn-node-metrics-cert\") pod \"ovnkube-node-8b45j\" (UID: \"15690643-1b1a-4ced-9755-a8731ea4fd74\") " pod="openshift-ovn-kubernetes/ovnkube-node-8b45j" Nov 22 07:24:32 crc kubenswrapper[4853]: I1122 07:24:32.960587 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tdzxj\" (UniqueName: \"kubernetes.io/projected/15690643-1b1a-4ced-9755-a8731ea4fd74-kube-api-access-tdzxj\") pod \"ovnkube-node-8b45j\" (UID: \"15690643-1b1a-4ced-9755-a8731ea4fd74\") " pod="openshift-ovn-kubernetes/ovnkube-node-8b45j" Nov 22 07:24:33 crc kubenswrapper[4853]: I1122 07:24:33.048306 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-8b45j" Nov 22 07:24:33 crc kubenswrapper[4853]: I1122 07:24:33.476358 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hcc5b" Nov 22 07:24:33 crc kubenswrapper[4853]: I1122 07:24:33.485216 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pqtsz_893f7e02-580a-4093-ab42-ea73ffffcfe6/ovn-acl-logging/0.log" Nov 22 07:24:33 crc kubenswrapper[4853]: I1122 07:24:33.485698 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pqtsz_893f7e02-580a-4093-ab42-ea73ffffcfe6/ovn-controller/0.log" Nov 22 07:24:33 crc kubenswrapper[4853]: I1122 07:24:33.490357 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pqtsz" event={"ID":"893f7e02-580a-4093-ab42-ea73ffffcfe6","Type":"ContainerDied","Data":"2abee242a2ef10fc8ab292ffe4ace663b8351bea615a8b07e23d54fa800f7783"} Nov 22 07:24:33 crc kubenswrapper[4853]: I1122 07:24:33.490418 4853 scope.go:117] "RemoveContainer" containerID="d52086eb6365bd264b8e88b5080611781a72c88a17061a9a7d7db1ce43507d3f" Nov 22 07:24:33 crc kubenswrapper[4853]: I1122 07:24:33.490593 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-pqtsz" Nov 22 07:24:33 crc kubenswrapper[4853]: I1122 07:24:33.519536 4853 generic.go:334] "Generic (PLEG): container finished" podID="2a4a0b48-b667-4c6b-acca-108007024d7a" containerID="214be9db8c7a199942db894d3ff968007ffeb68fe4c1b3c36f29ce7a6715ef45" exitCode=0 Nov 22 07:24:33 crc kubenswrapper[4853]: I1122 07:24:33.519664 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hcc5b" event={"ID":"2a4a0b48-b667-4c6b-acca-108007024d7a","Type":"ContainerDied","Data":"214be9db8c7a199942db894d3ff968007ffeb68fe4c1b3c36f29ce7a6715ef45"} Nov 22 07:24:33 crc kubenswrapper[4853]: I1122 07:24:33.519713 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hcc5b" event={"ID":"2a4a0b48-b667-4c6b-acca-108007024d7a","Type":"ContainerDied","Data":"afe843a9f9f0ee9d3661d87eb649ed42446935c3b755deb11313d9f2c907e261"} Nov 22 07:24:33 crc kubenswrapper[4853]: I1122 07:24:33.519832 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-hcc5b" Nov 22 07:24:33 crc kubenswrapper[4853]: I1122 07:24:33.526922 4853 generic.go:334] "Generic (PLEG): container finished" podID="15690643-1b1a-4ced-9755-a8731ea4fd74" containerID="bd707f280f3dcd69ad76992abe10faf84ae5f337e8a8b1645e127dadf072d828" exitCode=0 Nov 22 07:24:33 crc kubenswrapper[4853]: I1122 07:24:33.526969 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8b45j" event={"ID":"15690643-1b1a-4ced-9755-a8731ea4fd74","Type":"ContainerDied","Data":"bd707f280f3dcd69ad76992abe10faf84ae5f337e8a8b1645e127dadf072d828"} Nov 22 07:24:33 crc kubenswrapper[4853]: I1122 07:24:33.526988 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8b45j" event={"ID":"15690643-1b1a-4ced-9755-a8731ea4fd74","Type":"ContainerStarted","Data":"4e1aae398f15ce24d40df207f556fd52081f6e67e7d219a7cf8917c477915eb7"} Nov 22 07:24:33 crc kubenswrapper[4853]: I1122 07:24:33.542733 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-rvgxj_dbbe3472-17cc-48dd-8e46-393b00149429/kube-multus/2.log" Nov 22 07:24:33 crc kubenswrapper[4853]: I1122 07:24:33.543119 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-rvgxj" event={"ID":"dbbe3472-17cc-48dd-8e46-393b00149429","Type":"ContainerStarted","Data":"17f24cb62916d917b3f56d60c7574e29e44704fbb238893ba1c73ff9c92c0132"} Nov 22 07:24:33 crc kubenswrapper[4853]: I1122 07:24:33.546639 4853 scope.go:117] "RemoveContainer" containerID="902e41148618e1099d651f2a4cbfa00ca1dd11533b086564f090dd498ecc1b95" Nov 22 07:24:33 crc kubenswrapper[4853]: I1122 07:24:33.569215 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-pqtsz"] Nov 22 07:24:33 crc kubenswrapper[4853]: I1122 07:24:33.590347 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-pqtsz"] Nov 22 07:24:33 crc kubenswrapper[4853]: I1122 07:24:33.605404 4853 scope.go:117] "RemoveContainer" containerID="34d7101a46b49cedde91b3502f1bc962179253a1bfbe5f21599ce89a91ab905d" Nov 22 07:24:33 crc kubenswrapper[4853]: I1122 07:24:33.625414 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2a4a0b48-b667-4c6b-acca-108007024d7a-utilities\") pod \"2a4a0b48-b667-4c6b-acca-108007024d7a\" (UID: \"2a4a0b48-b667-4c6b-acca-108007024d7a\") " Nov 22 07:24:33 crc kubenswrapper[4853]: I1122 07:24:33.625596 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2a4a0b48-b667-4c6b-acca-108007024d7a-catalog-content\") pod \"2a4a0b48-b667-4c6b-acca-108007024d7a\" (UID: \"2a4a0b48-b667-4c6b-acca-108007024d7a\") " Nov 22 07:24:33 crc kubenswrapper[4853]: I1122 07:24:33.625663 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4xg8b\" (UniqueName: \"kubernetes.io/projected/2a4a0b48-b667-4c6b-acca-108007024d7a-kube-api-access-4xg8b\") pod \"2a4a0b48-b667-4c6b-acca-108007024d7a\" (UID: \"2a4a0b48-b667-4c6b-acca-108007024d7a\") " Nov 22 07:24:33 crc kubenswrapper[4853]: I1122 07:24:33.627444 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2a4a0b48-b667-4c6b-acca-108007024d7a-utilities" (OuterVolumeSpecName: "utilities") pod "2a4a0b48-b667-4c6b-acca-108007024d7a" (UID: 
"2a4a0b48-b667-4c6b-acca-108007024d7a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:24:33 crc kubenswrapper[4853]: I1122 07:24:33.633476 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a4a0b48-b667-4c6b-acca-108007024d7a-kube-api-access-4xg8b" (OuterVolumeSpecName: "kube-api-access-4xg8b") pod "2a4a0b48-b667-4c6b-acca-108007024d7a" (UID: "2a4a0b48-b667-4c6b-acca-108007024d7a"). InnerVolumeSpecName "kube-api-access-4xg8b". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:24:33 crc kubenswrapper[4853]: I1122 07:24:33.688022 4853 scope.go:117] "RemoveContainer" containerID="28ab876013e6c4eb3a4806b0184f1d6615a478a34b6d8ee790d29bccd77f33df" Nov 22 07:24:33 crc kubenswrapper[4853]: I1122 07:24:33.730931 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4xg8b\" (UniqueName: \"kubernetes.io/projected/2a4a0b48-b667-4c6b-acca-108007024d7a-kube-api-access-4xg8b\") on node \"crc\" DevicePath \"\"" Nov 22 07:24:33 crc kubenswrapper[4853]: I1122 07:24:33.731277 4853 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2a4a0b48-b667-4c6b-acca-108007024d7a-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 07:24:33 crc kubenswrapper[4853]: I1122 07:24:33.757984 4853 scope.go:117] "RemoveContainer" containerID="979f6b48ff6c91deca2bb0784d1ec745c4c0d70abdfd6f519eb82058fec05c5f" Nov 22 07:24:33 crc kubenswrapper[4853]: I1122 07:24:33.764338 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2a4a0b48-b667-4c6b-acca-108007024d7a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2a4a0b48-b667-4c6b-acca-108007024d7a" (UID: "2a4a0b48-b667-4c6b-acca-108007024d7a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:24:33 crc kubenswrapper[4853]: I1122 07:24:33.768689 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="893f7e02-580a-4093-ab42-ea73ffffcfe6" path="/var/lib/kubelet/pods/893f7e02-580a-4093-ab42-ea73ffffcfe6/volumes" Nov 22 07:24:33 crc kubenswrapper[4853]: I1122 07:24:33.806728 4853 scope.go:117] "RemoveContainer" containerID="959a487fe36b2bf7615c5ed65bc8255b52332f6e2d973912a5e028ff35324f2f" Nov 22 07:24:33 crc kubenswrapper[4853]: I1122 07:24:33.834079 4853 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2a4a0b48-b667-4c6b-acca-108007024d7a-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 07:24:33 crc kubenswrapper[4853]: I1122 07:24:33.847007 4853 scope.go:117] "RemoveContainer" containerID="ccf377912acc39f2d30013ef3bd58a8268b927fd89da2306fcf00cb89c30893e" Nov 22 07:24:33 crc kubenswrapper[4853]: I1122 07:24:33.855294 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hcc5b"] Nov 22 07:24:33 crc kubenswrapper[4853]: I1122 07:24:33.862300 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-hcc5b"] Nov 22 07:24:33 crc kubenswrapper[4853]: I1122 07:24:33.872426 4853 scope.go:117] "RemoveContainer" containerID="1a08eb68f9d8b071226ac64744097fe0097ab78f86242872c4e0aacdd213adbc" Nov 22 07:24:33 crc kubenswrapper[4853]: I1122 07:24:33.911618 4853 scope.go:117] "RemoveContainer" containerID="a7b9396399dd9787c1addcb4e2e7697d78fee817d56cafd408b08b2b7d0cc5f5" Nov 22 07:24:33 crc kubenswrapper[4853]: I1122 07:24:33.968612 4853 scope.go:117] "RemoveContainer" containerID="214be9db8c7a199942db894d3ff968007ffeb68fe4c1b3c36f29ce7a6715ef45" Nov 22 07:24:33 crc kubenswrapper[4853]: I1122 07:24:33.994900 4853 scope.go:117] "RemoveContainer" containerID="fc33635ff7f536cf5f9b9e541835085ba5d805e395459feec815c858175533c4" Nov 22 07:24:34 crc kubenswrapper[4853]: I1122 07:24:34.043353 4853 scope.go:117] "RemoveContainer" containerID="8546fedb7e4a4931887323ddb7571be0d3a249fa1e37a6236173eb330644439c" Nov 22 07:24:34 crc kubenswrapper[4853]: I1122 07:24:34.070372 4853 scope.go:117] "RemoveContainer" containerID="214be9db8c7a199942db894d3ff968007ffeb68fe4c1b3c36f29ce7a6715ef45" Nov 22 07:24:34 crc kubenswrapper[4853]: E1122 07:24:34.071036 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"214be9db8c7a199942db894d3ff968007ffeb68fe4c1b3c36f29ce7a6715ef45\": container with ID starting with 214be9db8c7a199942db894d3ff968007ffeb68fe4c1b3c36f29ce7a6715ef45 not found: ID does not exist" containerID="214be9db8c7a199942db894d3ff968007ffeb68fe4c1b3c36f29ce7a6715ef45" Nov 22 07:24:34 crc kubenswrapper[4853]: I1122 07:24:34.071072 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"214be9db8c7a199942db894d3ff968007ffeb68fe4c1b3c36f29ce7a6715ef45"} err="failed to get container status \"214be9db8c7a199942db894d3ff968007ffeb68fe4c1b3c36f29ce7a6715ef45\": rpc error: code = NotFound desc = could not find container \"214be9db8c7a199942db894d3ff968007ffeb68fe4c1b3c36f29ce7a6715ef45\": container with ID starting with 214be9db8c7a199942db894d3ff968007ffeb68fe4c1b3c36f29ce7a6715ef45 not found: ID does not exist" Nov 22 07:24:34 crc kubenswrapper[4853]: I1122 07:24:34.071095 4853 scope.go:117] "RemoveContainer" 
containerID="fc33635ff7f536cf5f9b9e541835085ba5d805e395459feec815c858175533c4" Nov 22 07:24:34 crc kubenswrapper[4853]: E1122 07:24:34.071583 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fc33635ff7f536cf5f9b9e541835085ba5d805e395459feec815c858175533c4\": container with ID starting with fc33635ff7f536cf5f9b9e541835085ba5d805e395459feec815c858175533c4 not found: ID does not exist" containerID="fc33635ff7f536cf5f9b9e541835085ba5d805e395459feec815c858175533c4" Nov 22 07:24:34 crc kubenswrapper[4853]: I1122 07:24:34.071601 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc33635ff7f536cf5f9b9e541835085ba5d805e395459feec815c858175533c4"} err="failed to get container status \"fc33635ff7f536cf5f9b9e541835085ba5d805e395459feec815c858175533c4\": rpc error: code = NotFound desc = could not find container \"fc33635ff7f536cf5f9b9e541835085ba5d805e395459feec815c858175533c4\": container with ID starting with fc33635ff7f536cf5f9b9e541835085ba5d805e395459feec815c858175533c4 not found: ID does not exist" Nov 22 07:24:34 crc kubenswrapper[4853]: I1122 07:24:34.071612 4853 scope.go:117] "RemoveContainer" containerID="8546fedb7e4a4931887323ddb7571be0d3a249fa1e37a6236173eb330644439c" Nov 22 07:24:34 crc kubenswrapper[4853]: E1122 07:24:34.071914 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8546fedb7e4a4931887323ddb7571be0d3a249fa1e37a6236173eb330644439c\": container with ID starting with 8546fedb7e4a4931887323ddb7571be0d3a249fa1e37a6236173eb330644439c not found: ID does not exist" containerID="8546fedb7e4a4931887323ddb7571be0d3a249fa1e37a6236173eb330644439c" Nov 22 07:24:34 crc kubenswrapper[4853]: I1122 07:24:34.071970 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8546fedb7e4a4931887323ddb7571be0d3a249fa1e37a6236173eb330644439c"} err="failed to get container status \"8546fedb7e4a4931887323ddb7571be0d3a249fa1e37a6236173eb330644439c\": rpc error: code = NotFound desc = could not find container \"8546fedb7e4a4931887323ddb7571be0d3a249fa1e37a6236173eb330644439c\": container with ID starting with 8546fedb7e4a4931887323ddb7571be0d3a249fa1e37a6236173eb330644439c not found: ID does not exist" Nov 22 07:24:34 crc kubenswrapper[4853]: I1122 07:24:34.553301 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8b45j" event={"ID":"15690643-1b1a-4ced-9755-a8731ea4fd74","Type":"ContainerStarted","Data":"5cfeca9c35c52f63e5def8db6678202da1d7a0892e3fb278c2f87893b67ab2ed"} Nov 22 07:24:34 crc kubenswrapper[4853]: I1122 07:24:34.553811 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8b45j" event={"ID":"15690643-1b1a-4ced-9755-a8731ea4fd74","Type":"ContainerStarted","Data":"86f46112d4c46fd9b0f108d792f60d5bf0adfe6f74ace469d1642fb7848284c9"} Nov 22 07:24:35 crc kubenswrapper[4853]: I1122 07:24:35.573873 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8b45j" event={"ID":"15690643-1b1a-4ced-9755-a8731ea4fd74","Type":"ContainerStarted","Data":"d5999b38eabb4b3303ef5fe43926d7d4e5b4133e5ef60ace11924486f91b4ef3"} Nov 22 07:24:35 crc kubenswrapper[4853]: I1122 07:24:35.574356 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8b45j" 
event={"ID":"15690643-1b1a-4ced-9755-a8731ea4fd74","Type":"ContainerStarted","Data":"e3a9944ee761c44f9f137dc55daa956857ed4aeb849bb90e17fd3d263523330a"} Nov 22 07:24:35 crc kubenswrapper[4853]: I1122 07:24:35.574368 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8b45j" event={"ID":"15690643-1b1a-4ced-9755-a8731ea4fd74","Type":"ContainerStarted","Data":"3e148d89267f361e26bc5e94e0884575bd4dc891e08d445c1bd3f3b2231bf779"} Nov 22 07:24:35 crc kubenswrapper[4853]: I1122 07:24:35.574377 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8b45j" event={"ID":"15690643-1b1a-4ced-9755-a8731ea4fd74","Type":"ContainerStarted","Data":"1aa5a1ff9947499725672b5ca2aeb0250a4412e83cb44edda4c0d58a292f7e5e"} Nov 22 07:24:35 crc kubenswrapper[4853]: I1122 07:24:35.707463 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-668cf9dfbb-p974c"] Nov 22 07:24:35 crc kubenswrapper[4853]: E1122 07:24:35.707823 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a4a0b48-b667-4c6b-acca-108007024d7a" containerName="extract-utilities" Nov 22 07:24:35 crc kubenswrapper[4853]: I1122 07:24:35.707840 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a4a0b48-b667-4c6b-acca-108007024d7a" containerName="extract-utilities" Nov 22 07:24:35 crc kubenswrapper[4853]: E1122 07:24:35.707848 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a4a0b48-b667-4c6b-acca-108007024d7a" containerName="registry-server" Nov 22 07:24:35 crc kubenswrapper[4853]: I1122 07:24:35.707854 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a4a0b48-b667-4c6b-acca-108007024d7a" containerName="registry-server" Nov 22 07:24:35 crc kubenswrapper[4853]: E1122 07:24:35.707869 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a4a0b48-b667-4c6b-acca-108007024d7a" containerName="extract-content" Nov 22 07:24:35 crc kubenswrapper[4853]: I1122 07:24:35.707875 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a4a0b48-b667-4c6b-acca-108007024d7a" containerName="extract-content" Nov 22 07:24:35 crc kubenswrapper[4853]: I1122 07:24:35.708014 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a4a0b48-b667-4c6b-acca-108007024d7a" containerName="registry-server" Nov 22 07:24:35 crc kubenswrapper[4853]: I1122 07:24:35.708572 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-p974c" Nov 22 07:24:35 crc kubenswrapper[4853]: I1122 07:24:35.711032 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Nov 22 07:24:35 crc kubenswrapper[4853]: I1122 07:24:35.711333 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-hkzkr" Nov 22 07:24:35 crc kubenswrapper[4853]: I1122 07:24:35.715682 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Nov 22 07:24:35 crc kubenswrapper[4853]: I1122 07:24:35.757597 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2a4a0b48-b667-4c6b-acca-108007024d7a" path="/var/lib/kubelet/pods/2a4a0b48-b667-4c6b-acca-108007024d7a/volumes" Nov 22 07:24:35 crc kubenswrapper[4853]: I1122 07:24:35.766039 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qdfnx\" (UniqueName: \"kubernetes.io/projected/f95bfaef-313c-4412-a8ce-ab9e8bd2d244-kube-api-access-qdfnx\") pod \"obo-prometheus-operator-668cf9dfbb-p974c\" (UID: \"f95bfaef-313c-4412-a8ce-ab9e8bd2d244\") " pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-p974c" Nov 22 07:24:35 crc kubenswrapper[4853]: I1122 07:24:35.773153 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-75686ff6c7-b7jq4"] Nov 22 07:24:35 crc kubenswrapper[4853]: I1122 07:24:35.774422 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-75686ff6c7-b7jq4" Nov 22 07:24:35 crc kubenswrapper[4853]: I1122 07:24:35.776942 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-gwkqj" Nov 22 07:24:35 crc kubenswrapper[4853]: I1122 07:24:35.777263 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Nov 22 07:24:35 crc kubenswrapper[4853]: I1122 07:24:35.784398 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-75686ff6c7-dppxx"] Nov 22 07:24:35 crc kubenswrapper[4853]: I1122 07:24:35.785721 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-75686ff6c7-dppxx" Nov 22 07:24:35 crc kubenswrapper[4853]: I1122 07:24:35.867297 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/988cd804-b3e5-4b0f-aec4-cc7186845189-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-75686ff6c7-dppxx\" (UID: \"988cd804-b3e5-4b0f-aec4-cc7186845189\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-75686ff6c7-dppxx" Nov 22 07:24:35 crc kubenswrapper[4853]: I1122 07:24:35.867399 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/988cd804-b3e5-4b0f-aec4-cc7186845189-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-75686ff6c7-dppxx\" (UID: \"988cd804-b3e5-4b0f-aec4-cc7186845189\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-75686ff6c7-dppxx" Nov 22 07:24:35 crc kubenswrapper[4853]: I1122 07:24:35.867468 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6204a708-d77f-4350-806f-25ef39e98551-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-75686ff6c7-b7jq4\" (UID: \"6204a708-d77f-4350-806f-25ef39e98551\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-75686ff6c7-b7jq4" Nov 22 07:24:35 crc kubenswrapper[4853]: I1122 07:24:35.867583 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6204a708-d77f-4350-806f-25ef39e98551-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-75686ff6c7-b7jq4\" (UID: \"6204a708-d77f-4350-806f-25ef39e98551\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-75686ff6c7-b7jq4" Nov 22 07:24:35 crc kubenswrapper[4853]: I1122 07:24:35.867665 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qdfnx\" (UniqueName: \"kubernetes.io/projected/f95bfaef-313c-4412-a8ce-ab9e8bd2d244-kube-api-access-qdfnx\") pod \"obo-prometheus-operator-668cf9dfbb-p974c\" (UID: \"f95bfaef-313c-4412-a8ce-ab9e8bd2d244\") " pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-p974c" Nov 22 07:24:35 crc kubenswrapper[4853]: I1122 07:24:35.890486 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qdfnx\" (UniqueName: \"kubernetes.io/projected/f95bfaef-313c-4412-a8ce-ab9e8bd2d244-kube-api-access-qdfnx\") pod \"obo-prometheus-operator-668cf9dfbb-p974c\" (UID: \"f95bfaef-313c-4412-a8ce-ab9e8bd2d244\") " pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-p974c" Nov 22 07:24:35 crc kubenswrapper[4853]: I1122 07:24:35.944086 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-d8bb48f5d-6mnv6"] Nov 22 07:24:35 crc kubenswrapper[4853]: I1122 07:24:35.945216 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-d8bb48f5d-6mnv6" Nov 22 07:24:35 crc kubenswrapper[4853]: I1122 07:24:35.947465 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-m8vn9" Nov 22 07:24:35 crc kubenswrapper[4853]: I1122 07:24:35.955430 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Nov 22 07:24:35 crc kubenswrapper[4853]: I1122 07:24:35.969611 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/988cd804-b3e5-4b0f-aec4-cc7186845189-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-75686ff6c7-dppxx\" (UID: \"988cd804-b3e5-4b0f-aec4-cc7186845189\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-75686ff6c7-dppxx" Nov 22 07:24:35 crc kubenswrapper[4853]: I1122 07:24:35.969672 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/988cd804-b3e5-4b0f-aec4-cc7186845189-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-75686ff6c7-dppxx\" (UID: \"988cd804-b3e5-4b0f-aec4-cc7186845189\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-75686ff6c7-dppxx" Nov 22 07:24:35 crc kubenswrapper[4853]: I1122 07:24:35.969718 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6204a708-d77f-4350-806f-25ef39e98551-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-75686ff6c7-b7jq4\" (UID: \"6204a708-d77f-4350-806f-25ef39e98551\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-75686ff6c7-b7jq4" Nov 22 07:24:35 crc kubenswrapper[4853]: I1122 07:24:35.969765 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6204a708-d77f-4350-806f-25ef39e98551-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-75686ff6c7-b7jq4\" (UID: \"6204a708-d77f-4350-806f-25ef39e98551\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-75686ff6c7-b7jq4" Nov 22 07:24:35 crc kubenswrapper[4853]: I1122 07:24:35.975949 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/988cd804-b3e5-4b0f-aec4-cc7186845189-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-75686ff6c7-dppxx\" (UID: \"988cd804-b3e5-4b0f-aec4-cc7186845189\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-75686ff6c7-dppxx" Nov 22 07:24:35 crc kubenswrapper[4853]: I1122 07:24:35.977578 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6204a708-d77f-4350-806f-25ef39e98551-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-75686ff6c7-b7jq4\" (UID: \"6204a708-d77f-4350-806f-25ef39e98551\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-75686ff6c7-b7jq4" Nov 22 07:24:35 crc kubenswrapper[4853]: I1122 07:24:35.978950 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/988cd804-b3e5-4b0f-aec4-cc7186845189-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-75686ff6c7-dppxx\" (UID: \"988cd804-b3e5-4b0f-aec4-cc7186845189\") " 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-75686ff6c7-dppxx" Nov 22 07:24:35 crc kubenswrapper[4853]: I1122 07:24:35.982427 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6204a708-d77f-4350-806f-25ef39e98551-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-75686ff6c7-b7jq4\" (UID: \"6204a708-d77f-4350-806f-25ef39e98551\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-75686ff6c7-b7jq4" Nov 22 07:24:36 crc kubenswrapper[4853]: I1122 07:24:36.029868 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-p974c" Nov 22 07:24:36 crc kubenswrapper[4853]: E1122 07:24:36.062777 4853 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-668cf9dfbb-p974c_openshift-operators_f95bfaef-313c-4412-a8ce-ab9e8bd2d244_0(1e7663e5e6b53cb57f64d95314bc8d14163cad8398b217ba5b63b0cbd85e89d3): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 22 07:24:36 crc kubenswrapper[4853]: E1122 07:24:36.062869 4853 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-668cf9dfbb-p974c_openshift-operators_f95bfaef-313c-4412-a8ce-ab9e8bd2d244_0(1e7663e5e6b53cb57f64d95314bc8d14163cad8398b217ba5b63b0cbd85e89d3): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-p974c" Nov 22 07:24:36 crc kubenswrapper[4853]: E1122 07:24:36.062898 4853 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-668cf9dfbb-p974c_openshift-operators_f95bfaef-313c-4412-a8ce-ab9e8bd2d244_0(1e7663e5e6b53cb57f64d95314bc8d14163cad8398b217ba5b63b0cbd85e89d3): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-p974c" Nov 22 07:24:36 crc kubenswrapper[4853]: E1122 07:24:36.062957 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-668cf9dfbb-p974c_openshift-operators(f95bfaef-313c-4412-a8ce-ab9e8bd2d244)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-668cf9dfbb-p974c_openshift-operators(f95bfaef-313c-4412-a8ce-ab9e8bd2d244)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-668cf9dfbb-p974c_openshift-operators_f95bfaef-313c-4412-a8ce-ab9e8bd2d244_0(1e7663e5e6b53cb57f64d95314bc8d14163cad8398b217ba5b63b0cbd85e89d3): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-p974c" podUID="f95bfaef-313c-4412-a8ce-ab9e8bd2d244" Nov 22 07:24:36 crc kubenswrapper[4853]: I1122 07:24:36.071431 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5pmbb\" (UniqueName: \"kubernetes.io/projected/838479bf-7b77-403c-915a-ed8b62d9c970-kube-api-access-5pmbb\") pod \"observability-operator-d8bb48f5d-6mnv6\" (UID: \"838479bf-7b77-403c-915a-ed8b62d9c970\") " pod="openshift-operators/observability-operator-d8bb48f5d-6mnv6" Nov 22 07:24:36 crc kubenswrapper[4853]: I1122 07:24:36.071529 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/838479bf-7b77-403c-915a-ed8b62d9c970-observability-operator-tls\") pod \"observability-operator-d8bb48f5d-6mnv6\" (UID: \"838479bf-7b77-403c-915a-ed8b62d9c970\") " pod="openshift-operators/observability-operator-d8bb48f5d-6mnv6" Nov 22 07:24:36 crc kubenswrapper[4853]: I1122 07:24:36.098783 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-75686ff6c7-b7jq4" Nov 22 07:24:36 crc kubenswrapper[4853]: I1122 07:24:36.111868 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-75686ff6c7-dppxx" Nov 22 07:24:36 crc kubenswrapper[4853]: E1122 07:24:36.139067 4853 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-75686ff6c7-b7jq4_openshift-operators_6204a708-d77f-4350-806f-25ef39e98551_0(d8df4a5d8bb71b4c497e281c0374f52486d6413066b94435851b07866525d2e7): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 22 07:24:36 crc kubenswrapper[4853]: E1122 07:24:36.139166 4853 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-75686ff6c7-b7jq4_openshift-operators_6204a708-d77f-4350-806f-25ef39e98551_0(d8df4a5d8bb71b4c497e281c0374f52486d6413066b94435851b07866525d2e7): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-75686ff6c7-b7jq4" Nov 22 07:24:36 crc kubenswrapper[4853]: E1122 07:24:36.139209 4853 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-75686ff6c7-b7jq4_openshift-operators_6204a708-d77f-4350-806f-25ef39e98551_0(d8df4a5d8bb71b4c497e281c0374f52486d6413066b94435851b07866525d2e7): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-75686ff6c7-b7jq4" Nov 22 07:24:36 crc kubenswrapper[4853]: E1122 07:24:36.139293 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-75686ff6c7-b7jq4_openshift-operators(6204a708-d77f-4350-806f-25ef39e98551)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-75686ff6c7-b7jq4_openshift-operators(6204a708-d77f-4350-806f-25ef39e98551)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-75686ff6c7-b7jq4_openshift-operators_6204a708-d77f-4350-806f-25ef39e98551_0(d8df4a5d8bb71b4c497e281c0374f52486d6413066b94435851b07866525d2e7): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-75686ff6c7-b7jq4" podUID="6204a708-d77f-4350-806f-25ef39e98551" Nov 22 07:24:36 crc kubenswrapper[4853]: E1122 07:24:36.174933 4853 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-75686ff6c7-dppxx_openshift-operators_988cd804-b3e5-4b0f-aec4-cc7186845189_0(2be82de49838530088e3c2d8d782403cb20ca9629f23d5457ef8bffc93bb5f6b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 22 07:24:36 crc kubenswrapper[4853]: E1122 07:24:36.175007 4853 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-75686ff6c7-dppxx_openshift-operators_988cd804-b3e5-4b0f-aec4-cc7186845189_0(2be82de49838530088e3c2d8d782403cb20ca9629f23d5457ef8bffc93bb5f6b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-75686ff6c7-dppxx" Nov 22 07:24:36 crc kubenswrapper[4853]: E1122 07:24:36.175033 4853 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-75686ff6c7-dppxx_openshift-operators_988cd804-b3e5-4b0f-aec4-cc7186845189_0(2be82de49838530088e3c2d8d782403cb20ca9629f23d5457ef8bffc93bb5f6b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-75686ff6c7-dppxx" Nov 22 07:24:36 crc kubenswrapper[4853]: E1122 07:24:36.175090 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-75686ff6c7-dppxx_openshift-operators(988cd804-b3e5-4b0f-aec4-cc7186845189)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-75686ff6c7-dppxx_openshift-operators(988cd804-b3e5-4b0f-aec4-cc7186845189)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-75686ff6c7-dppxx_openshift-operators_988cd804-b3e5-4b0f-aec4-cc7186845189_0(2be82de49838530088e3c2d8d782403cb20ca9629f23d5457ef8bffc93bb5f6b): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-75686ff6c7-dppxx" podUID="988cd804-b3e5-4b0f-aec4-cc7186845189" Nov 22 07:24:36 crc kubenswrapper[4853]: I1122 07:24:36.175947 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5pmbb\" (UniqueName: \"kubernetes.io/projected/838479bf-7b77-403c-915a-ed8b62d9c970-kube-api-access-5pmbb\") pod \"observability-operator-d8bb48f5d-6mnv6\" (UID: \"838479bf-7b77-403c-915a-ed8b62d9c970\") " pod="openshift-operators/observability-operator-d8bb48f5d-6mnv6" Nov 22 07:24:36 crc kubenswrapper[4853]: I1122 07:24:36.176050 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/838479bf-7b77-403c-915a-ed8b62d9c970-observability-operator-tls\") pod \"observability-operator-d8bb48f5d-6mnv6\" (UID: \"838479bf-7b77-403c-915a-ed8b62d9c970\") " pod="openshift-operators/observability-operator-d8bb48f5d-6mnv6" Nov 22 07:24:36 crc kubenswrapper[4853]: I1122 07:24:36.185832 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/838479bf-7b77-403c-915a-ed8b62d9c970-observability-operator-tls\") pod \"observability-operator-d8bb48f5d-6mnv6\" (UID: \"838479bf-7b77-403c-915a-ed8b62d9c970\") " pod="openshift-operators/observability-operator-d8bb48f5d-6mnv6" Nov 22 07:24:36 crc kubenswrapper[4853]: I1122 07:24:36.196417 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5446b9c989-56f68"] Nov 22 07:24:36 crc kubenswrapper[4853]: I1122 07:24:36.202679 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5446b9c989-56f68" Nov 22 07:24:36 crc kubenswrapper[4853]: I1122 07:24:36.208215 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-rcgtc" Nov 22 07:24:36 crc kubenswrapper[4853]: I1122 07:24:36.224358 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5pmbb\" (UniqueName: \"kubernetes.io/projected/838479bf-7b77-403c-915a-ed8b62d9c970-kube-api-access-5pmbb\") pod \"observability-operator-d8bb48f5d-6mnv6\" (UID: \"838479bf-7b77-403c-915a-ed8b62d9c970\") " pod="openshift-operators/observability-operator-d8bb48f5d-6mnv6" Nov 22 07:24:36 crc kubenswrapper[4853]: I1122 07:24:36.273283 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-d8bb48f5d-6mnv6" Nov 22 07:24:36 crc kubenswrapper[4853]: I1122 07:24:36.283045 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/0bea3315-6c33-4754-95a6-e465983de5b7-openshift-service-ca\") pod \"perses-operator-5446b9c989-56f68\" (UID: \"0bea3315-6c33-4754-95a6-e465983de5b7\") " pod="openshift-operators/perses-operator-5446b9c989-56f68" Nov 22 07:24:36 crc kubenswrapper[4853]: I1122 07:24:36.283199 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2qdrj\" (UniqueName: \"kubernetes.io/projected/0bea3315-6c33-4754-95a6-e465983de5b7-kube-api-access-2qdrj\") pod \"perses-operator-5446b9c989-56f68\" (UID: \"0bea3315-6c33-4754-95a6-e465983de5b7\") " pod="openshift-operators/perses-operator-5446b9c989-56f68" Nov 22 07:24:36 crc kubenswrapper[4853]: E1122 07:24:36.321934 4853 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-d8bb48f5d-6mnv6_openshift-operators_838479bf-7b77-403c-915a-ed8b62d9c970_0(3abd31d811ab863c5374530ee957107f4ff125df958dd392e28b6054e354750e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 22 07:24:36 crc kubenswrapper[4853]: E1122 07:24:36.322011 4853 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-d8bb48f5d-6mnv6_openshift-operators_838479bf-7b77-403c-915a-ed8b62d9c970_0(3abd31d811ab863c5374530ee957107f4ff125df958dd392e28b6054e354750e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-d8bb48f5d-6mnv6" Nov 22 07:24:36 crc kubenswrapper[4853]: E1122 07:24:36.322038 4853 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-d8bb48f5d-6mnv6_openshift-operators_838479bf-7b77-403c-915a-ed8b62d9c970_0(3abd31d811ab863c5374530ee957107f4ff125df958dd392e28b6054e354750e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-d8bb48f5d-6mnv6" Nov 22 07:24:36 crc kubenswrapper[4853]: E1122 07:24:36.322081 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-d8bb48f5d-6mnv6_openshift-operators(838479bf-7b77-403c-915a-ed8b62d9c970)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-d8bb48f5d-6mnv6_openshift-operators(838479bf-7b77-403c-915a-ed8b62d9c970)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-d8bb48f5d-6mnv6_openshift-operators_838479bf-7b77-403c-915a-ed8b62d9c970_0(3abd31d811ab863c5374530ee957107f4ff125df958dd392e28b6054e354750e): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/observability-operator-d8bb48f5d-6mnv6" podUID="838479bf-7b77-403c-915a-ed8b62d9c970" Nov 22 07:24:36 crc kubenswrapper[4853]: I1122 07:24:36.385179 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/0bea3315-6c33-4754-95a6-e465983de5b7-openshift-service-ca\") pod \"perses-operator-5446b9c989-56f68\" (UID: \"0bea3315-6c33-4754-95a6-e465983de5b7\") " pod="openshift-operators/perses-operator-5446b9c989-56f68" Nov 22 07:24:36 crc kubenswrapper[4853]: I1122 07:24:36.385352 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2qdrj\" (UniqueName: \"kubernetes.io/projected/0bea3315-6c33-4754-95a6-e465983de5b7-kube-api-access-2qdrj\") pod \"perses-operator-5446b9c989-56f68\" (UID: \"0bea3315-6c33-4754-95a6-e465983de5b7\") " pod="openshift-operators/perses-operator-5446b9c989-56f68" Nov 22 07:24:36 crc kubenswrapper[4853]: I1122 07:24:36.386303 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/0bea3315-6c33-4754-95a6-e465983de5b7-openshift-service-ca\") pod \"perses-operator-5446b9c989-56f68\" (UID: \"0bea3315-6c33-4754-95a6-e465983de5b7\") " pod="openshift-operators/perses-operator-5446b9c989-56f68" Nov 22 07:24:36 crc kubenswrapper[4853]: I1122 07:24:36.414893 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2qdrj\" (UniqueName: \"kubernetes.io/projected/0bea3315-6c33-4754-95a6-e465983de5b7-kube-api-access-2qdrj\") pod \"perses-operator-5446b9c989-56f68\" (UID: \"0bea3315-6c33-4754-95a6-e465983de5b7\") " pod="openshift-operators/perses-operator-5446b9c989-56f68" Nov 22 07:24:36 crc kubenswrapper[4853]: I1122 07:24:36.573916 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5446b9c989-56f68" Nov 22 07:24:36 crc kubenswrapper[4853]: E1122 07:24:36.607270 4853 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5446b9c989-56f68_openshift-operators_0bea3315-6c33-4754-95a6-e465983de5b7_0(d139e1d2796d3bbc7d7ce9a092ded1d1a0c51620a14826ccbf9d8ee6ac2c6c64): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 22 07:24:36 crc kubenswrapper[4853]: E1122 07:24:36.607375 4853 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5446b9c989-56f68_openshift-operators_0bea3315-6c33-4754-95a6-e465983de5b7_0(d139e1d2796d3bbc7d7ce9a092ded1d1a0c51620a14826ccbf9d8ee6ac2c6c64): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5446b9c989-56f68" Nov 22 07:24:36 crc kubenswrapper[4853]: E1122 07:24:36.607437 4853 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5446b9c989-56f68_openshift-operators_0bea3315-6c33-4754-95a6-e465983de5b7_0(d139e1d2796d3bbc7d7ce9a092ded1d1a0c51620a14826ccbf9d8ee6ac2c6c64): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/perses-operator-5446b9c989-56f68" Nov 22 07:24:36 crc kubenswrapper[4853]: E1122 07:24:36.607508 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-5446b9c989-56f68_openshift-operators(0bea3315-6c33-4754-95a6-e465983de5b7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-5446b9c989-56f68_openshift-operators(0bea3315-6c33-4754-95a6-e465983de5b7)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5446b9c989-56f68_openshift-operators_0bea3315-6c33-4754-95a6-e465983de5b7_0(d139e1d2796d3bbc7d7ce9a092ded1d1a0c51620a14826ccbf9d8ee6ac2c6c64): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/perses-operator-5446b9c989-56f68" podUID="0bea3315-6c33-4754-95a6-e465983de5b7" Nov 22 07:24:37 crc kubenswrapper[4853]: I1122 07:24:37.596232 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8b45j" event={"ID":"15690643-1b1a-4ced-9755-a8731ea4fd74","Type":"ContainerStarted","Data":"cc1246d2953f70b9bc2ccba3fe847b5d2395dcd1188f4164e63c75429a213759"} Nov 22 07:24:40 crc kubenswrapper[4853]: I1122 07:24:40.628850 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8b45j" event={"ID":"15690643-1b1a-4ced-9755-a8731ea4fd74","Type":"ContainerStarted","Data":"7cd3f9fd951384db1061ed8970510c32403de40ffb4613e19ef83cb6d562b66e"} Nov 22 07:24:40 crc kubenswrapper[4853]: I1122 07:24:40.629965 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-8b45j" Nov 22 07:24:40 crc kubenswrapper[4853]: I1122 07:24:40.630114 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-8b45j" Nov 22 07:24:40 crc kubenswrapper[4853]: I1122 07:24:40.630198 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-8b45j" Nov 22 07:24:40 crc kubenswrapper[4853]: I1122 07:24:40.664277 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-8b45j" Nov 22 07:24:40 crc kubenswrapper[4853]: I1122 07:24:40.666003 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-8b45j" Nov 22 07:24:40 crc kubenswrapper[4853]: I1122 07:24:40.675461 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-8b45j" podStartSLOduration=8.675443255 podStartE2EDuration="8.675443255s" podCreationTimestamp="2025-11-22 07:24:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:24:40.667865096 +0000 UTC m=+879.508487722" watchObservedRunningTime="2025-11-22 07:24:40.675443255 +0000 UTC m=+879.516065881" Nov 22 07:24:41 crc kubenswrapper[4853]: I1122 07:24:41.631261 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-668cf9dfbb-p974c"] Nov 22 07:24:41 crc kubenswrapper[4853]: I1122 07:24:41.631425 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-p974c" Nov 22 07:24:41 crc kubenswrapper[4853]: I1122 07:24:41.632024 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-p974c" Nov 22 07:24:41 crc kubenswrapper[4853]: I1122 07:24:41.662531 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-75686ff6c7-b7jq4"] Nov 22 07:24:41 crc kubenswrapper[4853]: I1122 07:24:41.662663 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-75686ff6c7-b7jq4" Nov 22 07:24:41 crc kubenswrapper[4853]: I1122 07:24:41.663229 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-75686ff6c7-b7jq4" Nov 22 07:24:41 crc kubenswrapper[4853]: E1122 07:24:41.666986 4853 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-668cf9dfbb-p974c_openshift-operators_f95bfaef-313c-4412-a8ce-ab9e8bd2d244_0(b5f0fc727edbcab71ea7fd9ab65087e5da05ba2b4cd03a8c5f82361d09f83f63): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 22 07:24:41 crc kubenswrapper[4853]: E1122 07:24:41.667047 4853 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-668cf9dfbb-p974c_openshift-operators_f95bfaef-313c-4412-a8ce-ab9e8bd2d244_0(b5f0fc727edbcab71ea7fd9ab65087e5da05ba2b4cd03a8c5f82361d09f83f63): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-p974c" Nov 22 07:24:41 crc kubenswrapper[4853]: E1122 07:24:41.667081 4853 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-668cf9dfbb-p974c_openshift-operators_f95bfaef-313c-4412-a8ce-ab9e8bd2d244_0(b5f0fc727edbcab71ea7fd9ab65087e5da05ba2b4cd03a8c5f82361d09f83f63): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-p974c" Nov 22 07:24:41 crc kubenswrapper[4853]: E1122 07:24:41.667128 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-668cf9dfbb-p974c_openshift-operators(f95bfaef-313c-4412-a8ce-ab9e8bd2d244)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-668cf9dfbb-p974c_openshift-operators(f95bfaef-313c-4412-a8ce-ab9e8bd2d244)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-668cf9dfbb-p974c_openshift-operators_f95bfaef-313c-4412-a8ce-ab9e8bd2d244_0(b5f0fc727edbcab71ea7fd9ab65087e5da05ba2b4cd03a8c5f82361d09f83f63): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-p974c" podUID="f95bfaef-313c-4412-a8ce-ab9e8bd2d244" Nov 22 07:24:41 crc kubenswrapper[4853]: I1122 07:24:41.717148 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-d8bb48f5d-6mnv6"] Nov 22 07:24:41 crc kubenswrapper[4853]: I1122 07:24:41.717356 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-d8bb48f5d-6mnv6" Nov 22 07:24:41 crc kubenswrapper[4853]: I1122 07:24:41.721376 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-d8bb48f5d-6mnv6" Nov 22 07:24:41 crc kubenswrapper[4853]: E1122 07:24:41.737928 4853 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-75686ff6c7-b7jq4_openshift-operators_6204a708-d77f-4350-806f-25ef39e98551_0(82452605384e62fdfa57126c5dc38c1d69389c34f87ef4a63c5e0b4b10bed932): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 22 07:24:41 crc kubenswrapper[4853]: E1122 07:24:41.738033 4853 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-75686ff6c7-b7jq4_openshift-operators_6204a708-d77f-4350-806f-25ef39e98551_0(82452605384e62fdfa57126c5dc38c1d69389c34f87ef4a63c5e0b4b10bed932): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-75686ff6c7-b7jq4" Nov 22 07:24:41 crc kubenswrapper[4853]: E1122 07:24:41.738060 4853 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-75686ff6c7-b7jq4_openshift-operators_6204a708-d77f-4350-806f-25ef39e98551_0(82452605384e62fdfa57126c5dc38c1d69389c34f87ef4a63c5e0b4b10bed932): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-75686ff6c7-b7jq4" Nov 22 07:24:41 crc kubenswrapper[4853]: E1122 07:24:41.738119 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-75686ff6c7-b7jq4_openshift-operators(6204a708-d77f-4350-806f-25ef39e98551)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-75686ff6c7-b7jq4_openshift-operators(6204a708-d77f-4350-806f-25ef39e98551)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-75686ff6c7-b7jq4_openshift-operators_6204a708-d77f-4350-806f-25ef39e98551_0(82452605384e62fdfa57126c5dc38c1d69389c34f87ef4a63c5e0b4b10bed932): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-75686ff6c7-b7jq4" podUID="6204a708-d77f-4350-806f-25ef39e98551" Nov 22 07:24:41 crc kubenswrapper[4853]: I1122 07:24:41.758494 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5446b9c989-56f68"] Nov 22 07:24:41 crc kubenswrapper[4853]: I1122 07:24:41.758638 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5446b9c989-56f68" Nov 22 07:24:41 crc kubenswrapper[4853]: I1122 07:24:41.761284 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5446b9c989-56f68" Nov 22 07:24:41 crc kubenswrapper[4853]: I1122 07:24:41.791722 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-75686ff6c7-dppxx"] Nov 22 07:24:41 crc kubenswrapper[4853]: I1122 07:24:41.791914 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-75686ff6c7-dppxx" Nov 22 07:24:41 crc kubenswrapper[4853]: I1122 07:24:41.793091 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-75686ff6c7-dppxx" Nov 22 07:24:41 crc kubenswrapper[4853]: E1122 07:24:41.798984 4853 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-d8bb48f5d-6mnv6_openshift-operators_838479bf-7b77-403c-915a-ed8b62d9c970_0(ef4c8d9e7c8394021a2d192f8ef50fbbe3af245acea7498f3de7252e7c1ae3b7): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 22 07:24:41 crc kubenswrapper[4853]: E1122 07:24:41.799096 4853 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-d8bb48f5d-6mnv6_openshift-operators_838479bf-7b77-403c-915a-ed8b62d9c970_0(ef4c8d9e7c8394021a2d192f8ef50fbbe3af245acea7498f3de7252e7c1ae3b7): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-d8bb48f5d-6mnv6" Nov 22 07:24:41 crc kubenswrapper[4853]: E1122 07:24:41.799144 4853 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-d8bb48f5d-6mnv6_openshift-operators_838479bf-7b77-403c-915a-ed8b62d9c970_0(ef4c8d9e7c8394021a2d192f8ef50fbbe3af245acea7498f3de7252e7c1ae3b7): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-d8bb48f5d-6mnv6" Nov 22 07:24:41 crc kubenswrapper[4853]: E1122 07:24:41.799225 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-d8bb48f5d-6mnv6_openshift-operators(838479bf-7b77-403c-915a-ed8b62d9c970)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-d8bb48f5d-6mnv6_openshift-operators(838479bf-7b77-403c-915a-ed8b62d9c970)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-d8bb48f5d-6mnv6_openshift-operators_838479bf-7b77-403c-915a-ed8b62d9c970_0(ef4c8d9e7c8394021a2d192f8ef50fbbe3af245acea7498f3de7252e7c1ae3b7): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/observability-operator-d8bb48f5d-6mnv6" podUID="838479bf-7b77-403c-915a-ed8b62d9c970" Nov 22 07:24:41 crc kubenswrapper[4853]: E1122 07:24:41.866091 4853 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5446b9c989-56f68_openshift-operators_0bea3315-6c33-4754-95a6-e465983de5b7_0(0e84df46b1983f3af1055dcaacd06215697e9017106b20a31ba3d2ed0984a560): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" Nov 22 07:24:41 crc kubenswrapper[4853]: E1122 07:24:41.866218 4853 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5446b9c989-56f68_openshift-operators_0bea3315-6c33-4754-95a6-e465983de5b7_0(0e84df46b1983f3af1055dcaacd06215697e9017106b20a31ba3d2ed0984a560): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5446b9c989-56f68" Nov 22 07:24:41 crc kubenswrapper[4853]: E1122 07:24:41.866255 4853 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5446b9c989-56f68_openshift-operators_0bea3315-6c33-4754-95a6-e465983de5b7_0(0e84df46b1983f3af1055dcaacd06215697e9017106b20a31ba3d2ed0984a560): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5446b9c989-56f68" Nov 22 07:24:41 crc kubenswrapper[4853]: E1122 07:24:41.866338 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-5446b9c989-56f68_openshift-operators(0bea3315-6c33-4754-95a6-e465983de5b7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-5446b9c989-56f68_openshift-operators(0bea3315-6c33-4754-95a6-e465983de5b7)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5446b9c989-56f68_openshift-operators_0bea3315-6c33-4754-95a6-e465983de5b7_0(0e84df46b1983f3af1055dcaacd06215697e9017106b20a31ba3d2ed0984a560): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/perses-operator-5446b9c989-56f68" podUID="0bea3315-6c33-4754-95a6-e465983de5b7" Nov 22 07:24:41 crc kubenswrapper[4853]: E1122 07:24:41.866109 4853 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-75686ff6c7-dppxx_openshift-operators_988cd804-b3e5-4b0f-aec4-cc7186845189_0(2d5591ba19c6917981b7fe0a90322fcc692b292169ce24c65d103df851a76ce8): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 22 07:24:41 crc kubenswrapper[4853]: E1122 07:24:41.866590 4853 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-75686ff6c7-dppxx_openshift-operators_988cd804-b3e5-4b0f-aec4-cc7186845189_0(2d5591ba19c6917981b7fe0a90322fcc692b292169ce24c65d103df851a76ce8): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-75686ff6c7-dppxx" Nov 22 07:24:41 crc kubenswrapper[4853]: E1122 07:24:41.866608 4853 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-75686ff6c7-dppxx_openshift-operators_988cd804-b3e5-4b0f-aec4-cc7186845189_0(2d5591ba19c6917981b7fe0a90322fcc692b292169ce24c65d103df851a76ce8): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-75686ff6c7-dppxx" Nov 22 07:24:41 crc kubenswrapper[4853]: E1122 07:24:41.866639 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-75686ff6c7-dppxx_openshift-operators(988cd804-b3e5-4b0f-aec4-cc7186845189)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-75686ff6c7-dppxx_openshift-operators(988cd804-b3e5-4b0f-aec4-cc7186845189)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-75686ff6c7-dppxx_openshift-operators_988cd804-b3e5-4b0f-aec4-cc7186845189_0(2d5591ba19c6917981b7fe0a90322fcc692b292169ce24c65d103df851a76ce8): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-75686ff6c7-dppxx" podUID="988cd804-b3e5-4b0f-aec4-cc7186845189" Nov 22 07:24:45 crc kubenswrapper[4853]: I1122 07:24:45.545890 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-d96fl"] Nov 22 07:24:45 crc kubenswrapper[4853]: I1122 07:24:45.548540 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-d96fl" Nov 22 07:24:45 crc kubenswrapper[4853]: I1122 07:24:45.571727 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-d96fl"] Nov 22 07:24:45 crc kubenswrapper[4853]: I1122 07:24:45.646891 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dksg2\" (UniqueName: \"kubernetes.io/projected/0f14b95f-f300-4d66-bbaa-5d92b3ffe1d0-kube-api-access-dksg2\") pod \"redhat-marketplace-d96fl\" (UID: \"0f14b95f-f300-4d66-bbaa-5d92b3ffe1d0\") " pod="openshift-marketplace/redhat-marketplace-d96fl" Nov 22 07:24:45 crc kubenswrapper[4853]: I1122 07:24:45.647263 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0f14b95f-f300-4d66-bbaa-5d92b3ffe1d0-utilities\") pod \"redhat-marketplace-d96fl\" (UID: \"0f14b95f-f300-4d66-bbaa-5d92b3ffe1d0\") " pod="openshift-marketplace/redhat-marketplace-d96fl" Nov 22 07:24:45 crc kubenswrapper[4853]: I1122 07:24:45.647371 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0f14b95f-f300-4d66-bbaa-5d92b3ffe1d0-catalog-content\") pod \"redhat-marketplace-d96fl\" (UID: \"0f14b95f-f300-4d66-bbaa-5d92b3ffe1d0\") " pod="openshift-marketplace/redhat-marketplace-d96fl" Nov 22 07:24:45 crc kubenswrapper[4853]: I1122 07:24:45.748858 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dksg2\" (UniqueName: \"kubernetes.io/projected/0f14b95f-f300-4d66-bbaa-5d92b3ffe1d0-kube-api-access-dksg2\") pod \"redhat-marketplace-d96fl\" (UID: \"0f14b95f-f300-4d66-bbaa-5d92b3ffe1d0\") " pod="openshift-marketplace/redhat-marketplace-d96fl" Nov 22 07:24:45 crc kubenswrapper[4853]: I1122 07:24:45.748954 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0f14b95f-f300-4d66-bbaa-5d92b3ffe1d0-utilities\") pod \"redhat-marketplace-d96fl\" (UID: \"0f14b95f-f300-4d66-bbaa-5d92b3ffe1d0\") " 
pod="openshift-marketplace/redhat-marketplace-d96fl" Nov 22 07:24:45 crc kubenswrapper[4853]: I1122 07:24:45.749013 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0f14b95f-f300-4d66-bbaa-5d92b3ffe1d0-catalog-content\") pod \"redhat-marketplace-d96fl\" (UID: \"0f14b95f-f300-4d66-bbaa-5d92b3ffe1d0\") " pod="openshift-marketplace/redhat-marketplace-d96fl" Nov 22 07:24:45 crc kubenswrapper[4853]: I1122 07:24:45.749781 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0f14b95f-f300-4d66-bbaa-5d92b3ffe1d0-catalog-content\") pod \"redhat-marketplace-d96fl\" (UID: \"0f14b95f-f300-4d66-bbaa-5d92b3ffe1d0\") " pod="openshift-marketplace/redhat-marketplace-d96fl" Nov 22 07:24:45 crc kubenswrapper[4853]: I1122 07:24:45.750071 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0f14b95f-f300-4d66-bbaa-5d92b3ffe1d0-utilities\") pod \"redhat-marketplace-d96fl\" (UID: \"0f14b95f-f300-4d66-bbaa-5d92b3ffe1d0\") " pod="openshift-marketplace/redhat-marketplace-d96fl" Nov 22 07:24:45 crc kubenswrapper[4853]: I1122 07:24:45.776661 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dksg2\" (UniqueName: \"kubernetes.io/projected/0f14b95f-f300-4d66-bbaa-5d92b3ffe1d0-kube-api-access-dksg2\") pod \"redhat-marketplace-d96fl\" (UID: \"0f14b95f-f300-4d66-bbaa-5d92b3ffe1d0\") " pod="openshift-marketplace/redhat-marketplace-d96fl" Nov 22 07:24:45 crc kubenswrapper[4853]: I1122 07:24:45.871949 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-d96fl" Nov 22 07:24:46 crc kubenswrapper[4853]: I1122 07:24:46.237969 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-d96fl"] Nov 22 07:24:46 crc kubenswrapper[4853]: W1122 07:24:46.254037 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0f14b95f_f300_4d66_bbaa_5d92b3ffe1d0.slice/crio-1350442caeceafb9568e69c645f376e4fce17f8da2b74619d27db11f4bb77f4a WatchSource:0}: Error finding container 1350442caeceafb9568e69c645f376e4fce17f8da2b74619d27db11f4bb77f4a: Status 404 returned error can't find the container with id 1350442caeceafb9568e69c645f376e4fce17f8da2b74619d27db11f4bb77f4a Nov 22 07:24:46 crc kubenswrapper[4853]: I1122 07:24:46.675262 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d96fl" event={"ID":"0f14b95f-f300-4d66-bbaa-5d92b3ffe1d0","Type":"ContainerStarted","Data":"d358a219c82998d12c528a6a37e882325de018a32925c0c43fc588b9b5e19963"} Nov 22 07:24:46 crc kubenswrapper[4853]: I1122 07:24:46.675360 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d96fl" event={"ID":"0f14b95f-f300-4d66-bbaa-5d92b3ffe1d0","Type":"ContainerStarted","Data":"1350442caeceafb9568e69c645f376e4fce17f8da2b74619d27db11f4bb77f4a"} Nov 22 07:24:47 crc kubenswrapper[4853]: I1122 07:24:47.683153 4853 generic.go:334] "Generic (PLEG): container finished" podID="0f14b95f-f300-4d66-bbaa-5d92b3ffe1d0" containerID="d358a219c82998d12c528a6a37e882325de018a32925c0c43fc588b9b5e19963" exitCode=0 Nov 22 07:24:47 crc kubenswrapper[4853]: I1122 07:24:47.683683 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-d96fl" event={"ID":"0f14b95f-f300-4d66-bbaa-5d92b3ffe1d0","Type":"ContainerDied","Data":"d358a219c82998d12c528a6a37e882325de018a32925c0c43fc588b9b5e19963"} Nov 22 07:24:49 crc kubenswrapper[4853]: I1122 07:24:49.699484 4853 generic.go:334] "Generic (PLEG): container finished" podID="0f14b95f-f300-4d66-bbaa-5d92b3ffe1d0" containerID="994d606b1b55b275519883806d010b09845d81423fcfba7ed5bc57177994e1d9" exitCode=0 Nov 22 07:24:49 crc kubenswrapper[4853]: I1122 07:24:49.699681 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d96fl" event={"ID":"0f14b95f-f300-4d66-bbaa-5d92b3ffe1d0","Type":"ContainerDied","Data":"994d606b1b55b275519883806d010b09845d81423fcfba7ed5bc57177994e1d9"} Nov 22 07:24:51 crc kubenswrapper[4853]: I1122 07:24:51.715660 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d96fl" event={"ID":"0f14b95f-f300-4d66-bbaa-5d92b3ffe1d0","Type":"ContainerStarted","Data":"6a0b791929b94aacb61e050ab95ca5ab93b1fecc34c2b897f7820fe77d0f1488"} Nov 22 07:24:51 crc kubenswrapper[4853]: I1122 07:24:51.739958 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-d96fl" podStartSLOduration=3.594192104 podStartE2EDuration="6.739931387s" podCreationTimestamp="2025-11-22 07:24:45 +0000 UTC" firstStartedPulling="2025-11-22 07:24:47.68611325 +0000 UTC m=+886.526735876" lastFinishedPulling="2025-11-22 07:24:50.831852533 +0000 UTC m=+889.672475159" observedRunningTime="2025-11-22 07:24:51.736933185 +0000 UTC m=+890.577555831" watchObservedRunningTime="2025-11-22 07:24:51.739931387 +0000 UTC m=+890.580554013" Nov 22 07:24:53 crc kubenswrapper[4853]: I1122 07:24:53.748096 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-d8bb48f5d-6mnv6" Nov 22 07:24:53 crc kubenswrapper[4853]: I1122 07:24:53.748156 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-p974c" Nov 22 07:24:53 crc kubenswrapper[4853]: I1122 07:24:53.748435 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-d8bb48f5d-6mnv6" Nov 22 07:24:53 crc kubenswrapper[4853]: I1122 07:24:53.749020 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-p974c" Nov 22 07:24:54 crc kubenswrapper[4853]: I1122 07:24:54.045709 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-d8bb48f5d-6mnv6"] Nov 22 07:24:54 crc kubenswrapper[4853]: I1122 07:24:54.356076 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-668cf9dfbb-p974c"] Nov 22 07:24:54 crc kubenswrapper[4853]: I1122 07:24:54.736792 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-d8bb48f5d-6mnv6" event={"ID":"838479bf-7b77-403c-915a-ed8b62d9c970","Type":"ContainerStarted","Data":"64a3caea06af3835f32364313a269217a5fdcec9bb1406de8a9131d3055183ea"} Nov 22 07:24:54 crc kubenswrapper[4853]: I1122 07:24:54.738534 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-p974c" event={"ID":"f95bfaef-313c-4412-a8ce-ab9e8bd2d244","Type":"ContainerStarted","Data":"e95b96e1ef5ae5cdb5c7c84f3a8777c6ca742be51285e802443f3c1917e0d53d"} Nov 22 07:24:55 crc kubenswrapper[4853]: I1122 07:24:55.746958 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-75686ff6c7-b7jq4" Nov 22 07:24:55 crc kubenswrapper[4853]: I1122 07:24:55.747058 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5446b9c989-56f68" Nov 22 07:24:55 crc kubenswrapper[4853]: I1122 07:24:55.753919 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5446b9c989-56f68" Nov 22 07:24:55 crc kubenswrapper[4853]: I1122 07:24:55.754040 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-75686ff6c7-b7jq4" Nov 22 07:24:55 crc kubenswrapper[4853]: I1122 07:24:55.872859 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-d96fl" Nov 22 07:24:55 crc kubenswrapper[4853]: I1122 07:24:55.873499 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-d96fl" Nov 22 07:24:55 crc kubenswrapper[4853]: I1122 07:24:55.931975 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-d96fl" Nov 22 07:24:56 crc kubenswrapper[4853]: I1122 07:24:56.064851 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-75686ff6c7-b7jq4"] Nov 22 07:24:56 crc kubenswrapper[4853]: I1122 07:24:56.335898 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5446b9c989-56f68"] Nov 22 07:24:56 crc kubenswrapper[4853]: I1122 07:24:56.746686 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-75686ff6c7-dppxx" Nov 22 07:24:56 crc kubenswrapper[4853]: I1122 07:24:56.747239 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-75686ff6c7-dppxx" Nov 22 07:24:56 crc kubenswrapper[4853]: I1122 07:24:56.754555 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5446b9c989-56f68" event={"ID":"0bea3315-6c33-4754-95a6-e465983de5b7","Type":"ContainerStarted","Data":"a479cb54ad71a9d95202166e56278867d7fa23acc80f6d7fbbe322402b695126"} Nov 22 07:24:56 crc kubenswrapper[4853]: I1122 07:24:56.757210 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-75686ff6c7-b7jq4" event={"ID":"6204a708-d77f-4350-806f-25ef39e98551","Type":"ContainerStarted","Data":"37e57acedfd55af3d992d5c776703da0e486e4a8ac4e5a98f4a73dcf9874768e"} Nov 22 07:24:56 crc kubenswrapper[4853]: I1122 07:24:56.811324 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-d96fl" Nov 22 07:24:57 crc kubenswrapper[4853]: I1122 07:24:57.248774 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-75686ff6c7-dppxx"] Nov 22 07:24:57 crc kubenswrapper[4853]: W1122 07:24:57.287152 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod988cd804_b3e5_4b0f_aec4_cc7186845189.slice/crio-bdf904b3b75feb99d4f55b430cbe8e62c4c6fd78b9816576b6a0f5277cb1b2f6 WatchSource:0}: Error finding container bdf904b3b75feb99d4f55b430cbe8e62c4c6fd78b9816576b6a0f5277cb1b2f6: Status 404 returned error can't find the container with id bdf904b3b75feb99d4f55b430cbe8e62c4c6fd78b9816576b6a0f5277cb1b2f6 Nov 22 07:24:57 crc kubenswrapper[4853]: I1122 07:24:57.785044 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-75686ff6c7-dppxx" event={"ID":"988cd804-b3e5-4b0f-aec4-cc7186845189","Type":"ContainerStarted","Data":"bdf904b3b75feb99d4f55b430cbe8e62c4c6fd78b9816576b6a0f5277cb1b2f6"} Nov 22 07:24:58 crc kubenswrapper[4853]: I1122 07:24:58.326695 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-d96fl"] Nov 22 07:24:59 crc kubenswrapper[4853]: I1122 07:24:59.824174 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-d96fl" podUID="0f14b95f-f300-4d66-bbaa-5d92b3ffe1d0" containerName="registry-server" containerID="cri-o://6a0b791929b94aacb61e050ab95ca5ab93b1fecc34c2b897f7820fe77d0f1488" gracePeriod=2 Nov 22 07:25:03 crc kubenswrapper[4853]: I1122 07:25:03.081193 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-8b45j" Nov 22 07:25:05 crc kubenswrapper[4853]: E1122 07:25:05.874916 4853 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6a0b791929b94aacb61e050ab95ca5ab93b1fecc34c2b897f7820fe77d0f1488 is running failed: container process not found" containerID="6a0b791929b94aacb61e050ab95ca5ab93b1fecc34c2b897f7820fe77d0f1488" cmd=["grpc_health_probe","-addr=:50051"] Nov 22 07:25:05 crc kubenswrapper[4853]: E1122 07:25:05.875878 4853 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6a0b791929b94aacb61e050ab95ca5ab93b1fecc34c2b897f7820fe77d0f1488 is running 
failed: container process not found" containerID="6a0b791929b94aacb61e050ab95ca5ab93b1fecc34c2b897f7820fe77d0f1488" cmd=["grpc_health_probe","-addr=:50051"] Nov 22 07:25:05 crc kubenswrapper[4853]: E1122 07:25:05.876341 4853 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6a0b791929b94aacb61e050ab95ca5ab93b1fecc34c2b897f7820fe77d0f1488 is running failed: container process not found" containerID="6a0b791929b94aacb61e050ab95ca5ab93b1fecc34c2b897f7820fe77d0f1488" cmd=["grpc_health_probe","-addr=:50051"] Nov 22 07:25:05 crc kubenswrapper[4853]: E1122 07:25:05.876398 4853 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6a0b791929b94aacb61e050ab95ca5ab93b1fecc34c2b897f7820fe77d0f1488 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-d96fl" podUID="0f14b95f-f300-4d66-bbaa-5d92b3ffe1d0" containerName="registry-server" Nov 22 07:25:06 crc kubenswrapper[4853]: I1122 07:25:06.929703 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-6qzqw"] Nov 22 07:25:06 crc kubenswrapper[4853]: I1122 07:25:06.931335 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6qzqw" Nov 22 07:25:06 crc kubenswrapper[4853]: I1122 07:25:06.960374 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6qzqw"] Nov 22 07:25:07 crc kubenswrapper[4853]: I1122 07:25:07.056246 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5lpfx\" (UniqueName: \"kubernetes.io/projected/affbbb61-0428-456b-bbec-259deacac8e4-kube-api-access-5lpfx\") pod \"community-operators-6qzqw\" (UID: \"affbbb61-0428-456b-bbec-259deacac8e4\") " pod="openshift-marketplace/community-operators-6qzqw" Nov 22 07:25:07 crc kubenswrapper[4853]: I1122 07:25:07.056451 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/affbbb61-0428-456b-bbec-259deacac8e4-utilities\") pod \"community-operators-6qzqw\" (UID: \"affbbb61-0428-456b-bbec-259deacac8e4\") " pod="openshift-marketplace/community-operators-6qzqw" Nov 22 07:25:07 crc kubenswrapper[4853]: I1122 07:25:07.056492 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/affbbb61-0428-456b-bbec-259deacac8e4-catalog-content\") pod \"community-operators-6qzqw\" (UID: \"affbbb61-0428-456b-bbec-259deacac8e4\") " pod="openshift-marketplace/community-operators-6qzqw" Nov 22 07:25:07 crc kubenswrapper[4853]: I1122 07:25:07.158550 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/affbbb61-0428-456b-bbec-259deacac8e4-utilities\") pod \"community-operators-6qzqw\" (UID: \"affbbb61-0428-456b-bbec-259deacac8e4\") " pod="openshift-marketplace/community-operators-6qzqw" Nov 22 07:25:07 crc kubenswrapper[4853]: I1122 07:25:07.158699 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/affbbb61-0428-456b-bbec-259deacac8e4-catalog-content\") pod \"community-operators-6qzqw\" (UID: 
\"affbbb61-0428-456b-bbec-259deacac8e4\") " pod="openshift-marketplace/community-operators-6qzqw" Nov 22 07:25:07 crc kubenswrapper[4853]: I1122 07:25:07.158861 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5lpfx\" (UniqueName: \"kubernetes.io/projected/affbbb61-0428-456b-bbec-259deacac8e4-kube-api-access-5lpfx\") pod \"community-operators-6qzqw\" (UID: \"affbbb61-0428-456b-bbec-259deacac8e4\") " pod="openshift-marketplace/community-operators-6qzqw" Nov 22 07:25:07 crc kubenswrapper[4853]: I1122 07:25:07.159251 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/affbbb61-0428-456b-bbec-259deacac8e4-utilities\") pod \"community-operators-6qzqw\" (UID: \"affbbb61-0428-456b-bbec-259deacac8e4\") " pod="openshift-marketplace/community-operators-6qzqw" Nov 22 07:25:07 crc kubenswrapper[4853]: I1122 07:25:07.159366 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/affbbb61-0428-456b-bbec-259deacac8e4-catalog-content\") pod \"community-operators-6qzqw\" (UID: \"affbbb61-0428-456b-bbec-259deacac8e4\") " pod="openshift-marketplace/community-operators-6qzqw" Nov 22 07:25:07 crc kubenswrapper[4853]: I1122 07:25:07.185560 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5lpfx\" (UniqueName: \"kubernetes.io/projected/affbbb61-0428-456b-bbec-259deacac8e4-kube-api-access-5lpfx\") pod \"community-operators-6qzqw\" (UID: \"affbbb61-0428-456b-bbec-259deacac8e4\") " pod="openshift-marketplace/community-operators-6qzqw" Nov 22 07:25:07 crc kubenswrapper[4853]: I1122 07:25:07.276539 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-6qzqw" Nov 22 07:25:09 crc kubenswrapper[4853]: I1122 07:25:09.606214 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-d96fl_0f14b95f-f300-4d66-bbaa-5d92b3ffe1d0/registry-server/0.log" Nov 22 07:25:09 crc kubenswrapper[4853]: I1122 07:25:09.608254 4853 generic.go:334] "Generic (PLEG): container finished" podID="0f14b95f-f300-4d66-bbaa-5d92b3ffe1d0" containerID="6a0b791929b94aacb61e050ab95ca5ab93b1fecc34c2b897f7820fe77d0f1488" exitCode=-1 Nov 22 07:25:09 crc kubenswrapper[4853]: I1122 07:25:09.608292 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d96fl" event={"ID":"0f14b95f-f300-4d66-bbaa-5d92b3ffe1d0","Type":"ContainerDied","Data":"6a0b791929b94aacb61e050ab95ca5ab93b1fecc34c2b897f7820fe77d0f1488"} Nov 22 07:25:15 crc kubenswrapper[4853]: E1122 07:25:15.873930 4853 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6a0b791929b94aacb61e050ab95ca5ab93b1fecc34c2b897f7820fe77d0f1488 is running failed: container process not found" containerID="6a0b791929b94aacb61e050ab95ca5ab93b1fecc34c2b897f7820fe77d0f1488" cmd=["grpc_health_probe","-addr=:50051"] Nov 22 07:25:15 crc kubenswrapper[4853]: E1122 07:25:15.875563 4853 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6a0b791929b94aacb61e050ab95ca5ab93b1fecc34c2b897f7820fe77d0f1488 is running failed: container process not found" containerID="6a0b791929b94aacb61e050ab95ca5ab93b1fecc34c2b897f7820fe77d0f1488" cmd=["grpc_health_probe","-addr=:50051"] Nov 22 07:25:15 crc kubenswrapper[4853]: E1122 07:25:15.876430 4853 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6a0b791929b94aacb61e050ab95ca5ab93b1fecc34c2b897f7820fe77d0f1488 is running failed: container process not found" containerID="6a0b791929b94aacb61e050ab95ca5ab93b1fecc34c2b897f7820fe77d0f1488" cmd=["grpc_health_probe","-addr=:50051"] Nov 22 07:25:15 crc kubenswrapper[4853]: E1122 07:25:15.876474 4853 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6a0b791929b94aacb61e050ab95ca5ab93b1fecc34c2b897f7820fe77d0f1488 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-d96fl" podUID="0f14b95f-f300-4d66-bbaa-5d92b3ffe1d0" containerName="registry-server" Nov 22 07:25:17 crc kubenswrapper[4853]: I1122 07:25:17.952380 4853 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-7jnds container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.34:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Nov 22 07:25:17 crc kubenswrapper[4853]: I1122 07:25:17.952867 4853 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7jnds" podUID="472b3cc8-386e-4828-a725-263057fb299b" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.34:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" 
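A note on the recurring "no CNI configuration file in /etc/kubernetes/cni/net.d/" errors above: that message is returned by the container runtime over the CRI whenever its CNI configuration directory is still empty, so every RunPodSandbox call fails and the kubelet retries with backoff until the network provider writes a config file there. That matches the sequence in this log: ovnkube-node-8b45j reports ready at 07:24:40, and the first operator-pod sandboxes start succeeding from 07:24:54. The sketch below is a minimal, hypothetical illustration of the kind of file the runtime expects to find in that directory; the bridge/host-local plugins, the network name, and the 10.88.0.0/16 subnet are placeholders, not the OVN-Kubernetes configuration this cluster actually installs.

# Minimal sketch (assumptions flagged in comments): one CNI network config
# of the kind the runtime looks for in /etc/kubernetes/cni/net.d/.
# The "bridge"/"host-local" plugins, network name, and subnet below are
# illustrative placeholders only; on this cluster, OVN-Kubernetes writes
# its own conf file into this directory once ovnkube-node is running.
import json
import pathlib

# Directory named in the sandbox-creation errors above.
cni_conf_dir = pathlib.Path("/etc/kubernetes/cni/net.d")

conf = {
    "cniVersion": "0.4.0",
    "name": "example-net",   # placeholder network name
    "type": "bridge",        # reference CNI plugin, for illustration only
    "bridge": "cni0",
    "isGateway": True,
    "ipMasq": True,
    "ipam": {
        "type": "host-local",
        "subnet": "10.88.0.0/16",          # placeholder pod subnet
        "routes": [{"dst": "0.0.0.0/0"}],
    },
}

# Until some *.conf/*.conflist file exists in the directory, sandbox
# creation keeps failing and the kubelet re-queues the pods, which is
# exactly the repeating pattern between 07:24:36 and 07:24:41 above.
print(f"expected location: {cni_conf_dir}/10-example.conf")
print(json.dumps(conf, indent=2))

The runtime's CNI loader picks the first valid config file it finds in that directory (sorted by name), so once the network provider drops its file in place the pending pods proceed without any kubelet restart, as the successful ContainerStarted events from 07:24:54 onward show.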
Nov 22 07:25:17 crc kubenswrapper[4853]: I1122 07:25:17.952516 4853 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-7jnds container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.34:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Nov 22 07:25:17 crc kubenswrapper[4853]: I1122 07:25:17.952980 4853 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7jnds" podUID="472b3cc8-386e-4828-a725-263057fb299b" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.34:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Nov 22 07:25:25 crc kubenswrapper[4853]: E1122 07:25:25.873612 4853 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6a0b791929b94aacb61e050ab95ca5ab93b1fecc34c2b897f7820fe77d0f1488 is running failed: container process not found" containerID="6a0b791929b94aacb61e050ab95ca5ab93b1fecc34c2b897f7820fe77d0f1488" cmd=["grpc_health_probe","-addr=:50051"]
Nov 22 07:25:25 crc kubenswrapper[4853]: E1122 07:25:25.875231 4853 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6a0b791929b94aacb61e050ab95ca5ab93b1fecc34c2b897f7820fe77d0f1488 is running failed: container process not found" containerID="6a0b791929b94aacb61e050ab95ca5ab93b1fecc34c2b897f7820fe77d0f1488" cmd=["grpc_health_probe","-addr=:50051"]
Nov 22 07:25:25 crc kubenswrapper[4853]: E1122 07:25:25.875806 4853 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6a0b791929b94aacb61e050ab95ca5ab93b1fecc34c2b897f7820fe77d0f1488 is running failed: container process not found" containerID="6a0b791929b94aacb61e050ab95ca5ab93b1fecc34c2b897f7820fe77d0f1488" cmd=["grpc_health_probe","-addr=:50051"]
Nov 22 07:25:25 crc kubenswrapper[4853]: E1122 07:25:25.875839 4853 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6a0b791929b94aacb61e050ab95ca5ab93b1fecc34c2b897f7820fe77d0f1488 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-d96fl" podUID="0f14b95f-f300-4d66-bbaa-5d92b3ffe1d0" containerName="registry-server"
Nov 22 07:25:28 crc kubenswrapper[4853]: I1122 07:25:28.741902 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-92tvk"]
Nov 22 07:25:28 crc kubenswrapper[4853]: I1122 07:25:28.743886 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-92tvk"
Nov 22 07:25:28 crc kubenswrapper[4853]: I1122 07:25:28.753118 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-92tvk"]
Nov 22 07:25:28 crc kubenswrapper[4853]: I1122 07:25:28.805997 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwhcx\" (UniqueName: \"kubernetes.io/projected/66a448cd-a783-47ee-aeee-080613615f6f-kube-api-access-xwhcx\") pod \"certified-operators-92tvk\" (UID: \"66a448cd-a783-47ee-aeee-080613615f6f\") " pod="openshift-marketplace/certified-operators-92tvk"
Nov 22 07:25:28 crc kubenswrapper[4853]: I1122 07:25:28.806166 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66a448cd-a783-47ee-aeee-080613615f6f-utilities\") pod \"certified-operators-92tvk\" (UID: \"66a448cd-a783-47ee-aeee-080613615f6f\") " pod="openshift-marketplace/certified-operators-92tvk"
Nov 22 07:25:28 crc kubenswrapper[4853]: I1122 07:25:28.806880 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/66a448cd-a783-47ee-aeee-080613615f6f-catalog-content\") pod \"certified-operators-92tvk\" (UID: \"66a448cd-a783-47ee-aeee-080613615f6f\") " pod="openshift-marketplace/certified-operators-92tvk"
Nov 22 07:25:28 crc kubenswrapper[4853]: I1122 07:25:28.908666 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/66a448cd-a783-47ee-aeee-080613615f6f-catalog-content\") pod \"certified-operators-92tvk\" (UID: \"66a448cd-a783-47ee-aeee-080613615f6f\") " pod="openshift-marketplace/certified-operators-92tvk"
Nov 22 07:25:28 crc kubenswrapper[4853]: I1122 07:25:28.908728 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xwhcx\" (UniqueName: \"kubernetes.io/projected/66a448cd-a783-47ee-aeee-080613615f6f-kube-api-access-xwhcx\") pod \"certified-operators-92tvk\" (UID: \"66a448cd-a783-47ee-aeee-080613615f6f\") " pod="openshift-marketplace/certified-operators-92tvk"
Nov 22 07:25:28 crc kubenswrapper[4853]: I1122 07:25:28.908776 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66a448cd-a783-47ee-aeee-080613615f6f-utilities\") pod \"certified-operators-92tvk\" (UID: \"66a448cd-a783-47ee-aeee-080613615f6f\") " pod="openshift-marketplace/certified-operators-92tvk"
Nov 22 07:25:28 crc kubenswrapper[4853]: I1122 07:25:28.909300 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/66a448cd-a783-47ee-aeee-080613615f6f-catalog-content\") pod \"certified-operators-92tvk\" (UID: \"66a448cd-a783-47ee-aeee-080613615f6f\") " pod="openshift-marketplace/certified-operators-92tvk"
Nov 22 07:25:28 crc kubenswrapper[4853]: I1122 07:25:28.909336 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66a448cd-a783-47ee-aeee-080613615f6f-utilities\") pod \"certified-operators-92tvk\" (UID: \"66a448cd-a783-47ee-aeee-080613615f6f\") " pod="openshift-marketplace/certified-operators-92tvk"
Nov 22 07:25:28 crc kubenswrapper[4853]: I1122 07:25:28.934992 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xwhcx\" (UniqueName: \"kubernetes.io/projected/66a448cd-a783-47ee-aeee-080613615f6f-kube-api-access-xwhcx\") pod \"certified-operators-92tvk\" (UID: \"66a448cd-a783-47ee-aeee-080613615f6f\") " pod="openshift-marketplace/certified-operators-92tvk"
Nov 22 07:25:29 crc kubenswrapper[4853]: I1122 07:25:29.077240 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-92tvk"
Nov 22 07:25:31 crc kubenswrapper[4853]: I1122 07:25:31.858679 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-d96fl_0f14b95f-f300-4d66-bbaa-5d92b3ffe1d0/registry-server/0.log"
Nov 22 07:25:31 crc kubenswrapper[4853]: I1122 07:25:31.859305 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-d96fl"
Nov 22 07:25:31 crc kubenswrapper[4853]: I1122 07:25:31.970948 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dksg2\" (UniqueName: \"kubernetes.io/projected/0f14b95f-f300-4d66-bbaa-5d92b3ffe1d0-kube-api-access-dksg2\") pod \"0f14b95f-f300-4d66-bbaa-5d92b3ffe1d0\" (UID: \"0f14b95f-f300-4d66-bbaa-5d92b3ffe1d0\") "
Nov 22 07:25:31 crc kubenswrapper[4853]: I1122 07:25:31.971661 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0f14b95f-f300-4d66-bbaa-5d92b3ffe1d0-utilities\") pod \"0f14b95f-f300-4d66-bbaa-5d92b3ffe1d0\" (UID: \"0f14b95f-f300-4d66-bbaa-5d92b3ffe1d0\") "
Nov 22 07:25:31 crc kubenswrapper[4853]: I1122 07:25:31.971819 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0f14b95f-f300-4d66-bbaa-5d92b3ffe1d0-catalog-content\") pod \"0f14b95f-f300-4d66-bbaa-5d92b3ffe1d0\" (UID: \"0f14b95f-f300-4d66-bbaa-5d92b3ffe1d0\") "
Nov 22 07:25:31 crc kubenswrapper[4853]: I1122 07:25:31.972500 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0f14b95f-f300-4d66-bbaa-5d92b3ffe1d0-utilities" (OuterVolumeSpecName: "utilities") pod "0f14b95f-f300-4d66-bbaa-5d92b3ffe1d0" (UID: "0f14b95f-f300-4d66-bbaa-5d92b3ffe1d0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 22 07:25:31 crc kubenswrapper[4853]: I1122 07:25:31.979334 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f14b95f-f300-4d66-bbaa-5d92b3ffe1d0-kube-api-access-dksg2" (OuterVolumeSpecName: "kube-api-access-dksg2") pod "0f14b95f-f300-4d66-bbaa-5d92b3ffe1d0" (UID: "0f14b95f-f300-4d66-bbaa-5d92b3ffe1d0"). InnerVolumeSpecName "kube-api-access-dksg2". PluginName "kubernetes.io/projected", VolumeGidValue ""
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:25:32 crc kubenswrapper[4853]: I1122 07:25:32.072982 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dksg2\" (UniqueName: \"kubernetes.io/projected/0f14b95f-f300-4d66-bbaa-5d92b3ffe1d0-kube-api-access-dksg2\") on node \"crc\" DevicePath \"\"" Nov 22 07:25:32 crc kubenswrapper[4853]: I1122 07:25:32.073038 4853 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0f14b95f-f300-4d66-bbaa-5d92b3ffe1d0-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 07:25:32 crc kubenswrapper[4853]: I1122 07:25:32.073050 4853 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0f14b95f-f300-4d66-bbaa-5d92b3ffe1d0-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 07:25:32 crc kubenswrapper[4853]: I1122 07:25:32.778701 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-d96fl_0f14b95f-f300-4d66-bbaa-5d92b3ffe1d0/registry-server/0.log" Nov 22 07:25:32 crc kubenswrapper[4853]: I1122 07:25:32.779853 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d96fl" event={"ID":"0f14b95f-f300-4d66-bbaa-5d92b3ffe1d0","Type":"ContainerDied","Data":"1350442caeceafb9568e69c645f376e4fce17f8da2b74619d27db11f4bb77f4a"} Nov 22 07:25:32 crc kubenswrapper[4853]: I1122 07:25:32.779933 4853 scope.go:117] "RemoveContainer" containerID="6a0b791929b94aacb61e050ab95ca5ab93b1fecc34c2b897f7820fe77d0f1488" Nov 22 07:25:32 crc kubenswrapper[4853]: I1122 07:25:32.779965 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-d96fl" Nov 22 07:25:32 crc kubenswrapper[4853]: I1122 07:25:32.813268 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-d96fl"] Nov 22 07:25:32 crc kubenswrapper[4853]: I1122 07:25:32.817310 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-d96fl"] Nov 22 07:25:33 crc kubenswrapper[4853]: E1122 07:25:33.443984 4853 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: writing blob: storing blob to file \"/var/tmp/container_images_storage2812010609/2\": happened during read: context canceled" image="registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-admission-webhook-rhel9@sha256:43d33f0125e6b990f4a972ac4e952a065d7e72dc1690c6c836963b7341734aec" Nov 22 07:25:33 crc kubenswrapper[4853]: E1122 07:25:33.444700 4853 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:prometheus-operator-admission-webhook,Image:registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-admission-webhook-rhel9@sha256:43d33f0125e6b990f4a972ac4e952a065d7e72dc1690c6c836963b7341734aec,Command:[],Args:[--web.enable-tls=true --web.cert-file=/tmp/k8s-webhook-server/serving-certs/tls.crt --web.key-file=/tmp/k8s-webhook-server/serving-certs/tls.key],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_CONDITION_NAME,Value:cluster-observability-operator.v1.3.0,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{209715200 0} {} BinarySI},},Requests:ResourceList{cpu: {{50 -3} {} 
50m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:apiservice-cert,ReadOnly:false,MountPath:/apiserver.local.config/certificates,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/tmp/k8s-webhook-server/serving-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod obo-prometheus-operator-admission-webhook-75686ff6c7-b7jq4_openshift-operators(6204a708-d77f-4350-806f-25ef39e98551): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: writing blob: storing blob to file \"/var/tmp/container_images_storage2812010609/2\": happened during read: context canceled" logger="UnhandledError" Nov 22 07:25:33 crc kubenswrapper[4853]: E1122 07:25:33.445946 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"prometheus-operator-admission-webhook\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: writing blob: storing blob to file \\\"/var/tmp/container_images_storage2812010609/2\\\": happened during read: context canceled\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-75686ff6c7-b7jq4" podUID="6204a708-d77f-4350-806f-25ef39e98551" Nov 22 07:25:33 crc kubenswrapper[4853]: E1122 07:25:33.483151 4853 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-admission-webhook-rhel9@sha256:43d33f0125e6b990f4a972ac4e952a065d7e72dc1690c6c836963b7341734aec" Nov 22 07:25:33 crc kubenswrapper[4853]: E1122 07:25:33.483473 4853 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:prometheus-operator-admission-webhook,Image:registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-admission-webhook-rhel9@sha256:43d33f0125e6b990f4a972ac4e952a065d7e72dc1690c6c836963b7341734aec,Command:[],Args:[--web.enable-tls=true --web.cert-file=/tmp/k8s-webhook-server/serving-certs/tls.crt --web.key-file=/tmp/k8s-webhook-server/serving-certs/tls.key],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_CONDITION_NAME,Value:cluster-observability-operator.v1.3.0,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{209715200 0} {} BinarySI},},Requests:ResourceList{cpu: {{50 -3} {} 50m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:apiservice-cert,ReadOnly:false,MountPath:/apiserver.local.config/certificates,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/tmp/k8s-webhook-server/serving-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod obo-prometheus-operator-admission-webhook-75686ff6c7-dppxx_openshift-operators(988cd804-b3e5-4b0f-aec4-cc7186845189): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 22 07:25:33 crc kubenswrapper[4853]: E1122 07:25:33.484909 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"prometheus-operator-admission-webhook\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-75686ff6c7-dppxx" podUID="988cd804-b3e5-4b0f-aec4-cc7186845189" Nov 22 07:25:33 crc kubenswrapper[4853]: I1122 07:25:33.758361 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0f14b95f-f300-4d66-bbaa-5d92b3ffe1d0" path="/var/lib/kubelet/pods/0f14b95f-f300-4d66-bbaa-5d92b3ffe1d0/volumes" Nov 22 07:25:33 crc kubenswrapper[4853]: E1122 07:25:33.791398 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"prometheus-operator-admission-webhook\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-admission-webhook-rhel9@sha256:43d33f0125e6b990f4a972ac4e952a065d7e72dc1690c6c836963b7341734aec\\\"\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-75686ff6c7-b7jq4" podUID="6204a708-d77f-4350-806f-25ef39e98551" Nov 22 07:25:33 crc kubenswrapper[4853]: E1122 07:25:33.791946 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"prometheus-operator-admission-webhook\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-admission-webhook-rhel9@sha256:43d33f0125e6b990f4a972ac4e952a065d7e72dc1690c6c836963b7341734aec\\\"\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-75686ff6c7-dppxx" podUID="988cd804-b3e5-4b0f-aec4-cc7186845189" Nov 22 07:25:36 crc kubenswrapper[4853]: E1122 07:25:36.090259 4853 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/cluster-observability-operator/cluster-observability-rhel9-operator@sha256:ce7d2904f7b238aa37dfe74a0b76bf73629e7a14fa52bf54b0ecf030ca36f1bb" Nov 22 
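Editor's note: the ErrImagePull / ImagePullBackOff pair above is kubelet retrying a failed pull with exponential back-off. An illustrative Go loop of the pattern, with durations scaled down for the demo; the defaults quoted in the comments (roughly 10s initial, 5m cap) are an assumption about this kubelet's configuration, and the real implementation tracks back-off per image rather than looping inline:

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// pullImage stands in for the CRI ImagePull call failing above; the error
// text is copied from the log for flavor.
func pullImage(ref string) error {
	return errors.New("rpc error: code = Canceled desc = context canceled")
}

func main() {
	ref := "registry.redhat.io/cluster-observability-operator/..."
	backoff := 10 * time.Millisecond       // scaled down from ~10s for the demo
	const maxBackoff = 80 * time.Millisecond // stands in for the ~5m cap
	for attempt := 1; attempt <= 4; attempt++ {
		err := pullImage(ref)
		if err == nil {
			fmt.Println("pulled", ref)
			return
		}
		fmt.Printf("attempt %d: ErrImagePull: %v; ImagePullBackOff for %s\n", attempt, err, backoff)
		time.Sleep(backoff)
		if backoff *= 2; backoff > maxBackoff {
			backoff = maxBackoff
		}
	}
}
```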
Nov 22 07:25:36 crc kubenswrapper[4853]: E1122 07:25:36.091002 4853 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:registry.redhat.io/cluster-observability-operator/cluster-observability-rhel9-operator@sha256:ce7d2904f7b238aa37dfe74a0b76bf73629e7a14fa52bf54b0ecf030ca36f1bb,Command:[],Args:[--namespace=$(NAMESPACE) --images=perses=$(RELATED_IMAGE_PERSES) --images=alertmanager=$(RELATED_IMAGE_ALERTMANAGER) --images=prometheus=$(RELATED_IMAGE_PROMETHEUS) --images=thanos=$(RELATED_IMAGE_THANOS) --images=ui-dashboards=$(RELATED_IMAGE_CONSOLE_DASHBOARDS_PLUGIN) --images=ui-distributed-tracing=$(RELATED_IMAGE_CONSOLE_DISTRIBUTED_TRACING_PLUGIN) --images=ui-distributed-tracing-pf5=$(RELATED_IMAGE_CONSOLE_DISTRIBUTED_TRACING_PLUGIN_PF5) --images=ui-distributed-tracing-pf4=$(RELATED_IMAGE_CONSOLE_DISTRIBUTED_TRACING_PLUGIN_PF4) --images=ui-logging=$(RELATED_IMAGE_CONSOLE_LOGGING_PLUGIN) --images=ui-logging-pf4=$(RELATED_IMAGE_CONSOLE_LOGGING_PLUGIN_PF4) --images=ui-troubleshooting-panel=$(RELATED_IMAGE_CONSOLE_TROUBLESHOOTING_PANEL_PLUGIN) --images=ui-monitoring=$(RELATED_IMAGE_CONSOLE_MONITORING_PLUGIN) --images=ui-monitoring-pf5=$(RELATED_IMAGE_CONSOLE_MONITORING_PLUGIN_PF5) --images=korrel8r=$(RELATED_IMAGE_KORREL8R) --images=health-analyzer=$(RELATED_IMAGE_CLUSTER_HEALTH_ANALYZER) --openshift.enabled=true],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:RELATED_IMAGE_ALERTMANAGER,Value:registry.redhat.io/cluster-observability-operator/alertmanager-rhel9@sha256:e718854a7d6ca8accf0fa72db0eb902e46c44d747ad51dc3f06bba0cefaa3c01,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_PROMETHEUS,Value:registry.redhat.io/cluster-observability-operator/prometheus-rhel9@sha256:17ea20be390a94ab39f5cdd7f0cbc2498046eebcf77fe3dec9aa288d5c2cf46b,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_THANOS,Value:registry.redhat.io/cluster-observability-operator/thanos-rhel9@sha256:d972f4faa5e9c121402d23ed85002f26af48ec36b1b71a7489d677b3913d08b4,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_PERSES,Value:registry.redhat.io/cluster-observability-operator/perses-rhel9@sha256:91531137fc1dcd740e277e0f65e120a0176a16f788c14c27925b61aa0b792ade,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CONSOLE_DASHBOARDS_PLUGIN,Value:registry.redhat.io/cluster-observability-operator/dashboards-console-plugin-rhel9@sha256:a69da8bbca8a28dd2925f864d51cc31cf761b10532c553095ba40b242ef701cb,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CONSOLE_DISTRIBUTED_TRACING_PLUGIN,Value:registry.redhat.io/cluster-observability-operator/distributed-tracing-console-plugin-rhel9@sha256:897e1bfad1187062725b54d87107bd0155972257a50d8335dd29e1999b828a4f,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CONSOLE_DISTRIBUTED_TRACING_PLUGIN_PF5,Value:registry.redhat.io/cluster-observability-operator/distributed-tracing-console-plugin-pf5-rhel9@sha256:95fe5b5746ca8c07ac9217ce2d8ac8e6afad17af210f9d8e0074df1310b209a8,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CONSOLE_DISTRIBUTED_TRACING_PLUGIN_PF4,Value:registry.redhat.io/cluster-observability-operator/distributed-tracing-console-plugin-pf4-rhel9@sha256:e9d9a89e4d8126a62b1852055482258ee528cac6398dd5d43ebad75ace0f33c9,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CONSOLE_LOGGING_PLUGIN,Value:registry.redhat.io/cluster-observability-operator/logging-console-plugin-rhel9@sha256:ec684a0645ceb917b019af7ddba68c3533416e356ab0d0320a30e75ca7ebb31b,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CONSOLE_LOGGING_PLUGIN_PF4,Value:registry.redhat.io/cluster-observability-operator/logging-console-plugin-pf4-rhel9@sha256:3b9693fcde9b3a9494fb04735b1f7cfd0426f10be820fdc3f024175c0d3df1c9,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CONSOLE_TROUBLESHOOTING_PANEL_PLUGIN,Value:registry.redhat.io/cluster-observability-operator/troubleshooting-panel-console-plugin-rhel9@sha256:580606f194180accc8abba099e17a26dca7522ec6d233fa2fdd40312771703e3,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CONSOLE_MONITORING_PLUGIN,Value:registry.redhat.io/cluster-observability-operator/monitoring-console-plugin-rhel9@sha256:e03777be39e71701935059cd877603874a13ac94daa73219d4e5e545599d78a9,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CONSOLE_MONITORING_PLUGIN_PF5,Value:registry.redhat.io/cluster-observability-operator/monitoring-console-plugin-pf5-rhel9@sha256:aa47256193cfd2877853878e1ae97d2ab8b8e5deae62b387cbfad02b284d379c,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_KORREL8R,Value:registry.redhat.io/cluster-observability-operator/korrel8r-rhel9@sha256:c595ff56b2cb85514bf4784db6ddb82e4e657e3e708a7fb695fc4997379a94d4,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CLUSTER_HEALTH_ANALYZER,Value:registry.redhat.io/cluster-observability-operator/cluster-health-analyzer-rhel9@sha256:45a4ec2a519bcec99e886aa91596d5356a2414a2bd103baaef9fa7838c672eb2,ValueFrom:nil,},EnvVar{Name:OPERATOR_CONDITION_NAME,Value:cluster-observability-operator.v1.3.0,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{400 -3} {} 400m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{100 -3} {} 100m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:observability-operator-tls,ReadOnly:true,MountPath:/etc/tls/private,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5pmbb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000350000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod observability-operator-d8bb48f5d-6mnv6_openshift-operators(838479bf-7b77-403c-915a-ed8b62d9c970): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Nov 22 07:25:36 crc kubenswrapper[4853]: E1122 07:25:36.092312 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-operators/observability-operator-d8bb48f5d-6mnv6" podUID="838479bf-7b77-403c-915a-ed8b62d9c970"
Nov 22 07:25:36 crc kubenswrapper[4853]: E1122 07:25:36.812664 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/cluster-observability-rhel9-operator@sha256:ce7d2904f7b238aa37dfe74a0b76bf73629e7a14fa52bf54b0ecf030ca36f1bb\\\"\"" pod="openshift-operators/observability-operator-d8bb48f5d-6mnv6" podUID="838479bf-7b77-403c-915a-ed8b62d9c970"
Nov 22 07:25:36 crc kubenswrapper[4853]: E1122 07:25:36.859409 4853 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/cluster-observability-operator/obo-prometheus-rhel9-operator@sha256:203cf5b9dc1460f09e75f58d8b5cf7df5e57c18c8c6a41c14b5e8977d83263f3"
Nov 22 07:25:36 crc kubenswrapper[4853]: E1122 07:25:36.859630 4853 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:prometheus-operator,Image:registry.redhat.io/cluster-observability-operator/obo-prometheus-rhel9-operator@sha256:203cf5b9dc1460f09e75f58d8b5cf7df5e57c18c8c6a41c14b5e8977d83263f3,Command:[],Args:[--prometheus-config-reloader=$(RELATED_IMAGE_PROMETHEUS_CONFIG_RELOADER) --prometheus-instance-selector=app.kubernetes.io/managed-by=observability-operator --alertmanager-instance-selector=app.kubernetes.io/managed-by=observability-operator --thanos-ruler-instance-selector=app.kubernetes.io/managed-by=observability-operator],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:http,HostPort:0,ContainerPort:8080,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:GOGC,Value:30,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_PROMETHEUS_CONFIG_RELOADER,Value:registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-prometheus-config-reloader-rhel9@sha256:1133c973c7472c665f910a722e19c8e2e27accb34b90fab67f14548627ce9c62,ValueFrom:nil,},EnvVar{Name:OPERATOR_CONDITION_NAME,Value:cluster-observability-operator.v1.3.0,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{100 -3} {} 100m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{157286400 0} {} 150Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qdfnx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod obo-prometheus-operator-668cf9dfbb-p974c_openshift-operators(f95bfaef-313c-4412-a8ce-ab9e8bd2d244): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Nov 22 07:25:36 crc kubenswrapper[4853]: E1122 07:25:36.860845 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"prometheus-operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-p974c" podUID="f95bfaef-313c-4412-a8ce-ab9e8bd2d244"
Nov 22 07:25:37 crc kubenswrapper[4853]: I1122 07:25:37.308452 4853 scope.go:117] "RemoveContainer" containerID="994d606b1b55b275519883806d010b09845d81423fcfba7ed5bc57177994e1d9"
Nov 22 07:25:37 crc kubenswrapper[4853]: I1122 07:25:37.337258 4853 scope.go:117] "RemoveContainer" containerID="d358a219c82998d12c528a6a37e882325de018a32925c0c43fc588b9b5e19963"
Nov 22 07:25:37 crc kubenswrapper[4853]: I1122 07:25:37.548114 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6qzqw"]
Nov 22 07:25:37 crc kubenswrapper[4853]: W1122 07:25:37.553982 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaffbbb61_0428_456b_bbec_259deacac8e4.slice/crio-fbb0df3a557f26bbeb28d0fae300aae9ff6c4fb9319ac466e0294cfc5ac61f37 WatchSource:0}: Error finding container fbb0df3a557f26bbeb28d0fae300aae9ff6c4fb9319ac466e0294cfc5ac61f37: Status 404 returned error can't find the container with id fbb0df3a557f26bbeb28d0fae300aae9ff6c4fb9319ac466e0294cfc5ac61f37
Nov 22 07:25:37 crc kubenswrapper[4853]: I1122 07:25:37.618143 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-92tvk"]
Nov 22 07:25:37 crc kubenswrapper[4853]: I1122 07:25:37.819957 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-92tvk" event={"ID":"66a448cd-a783-47ee-aeee-080613615f6f","Type":"ContainerStarted","Data":"18a5e091271adf1dbb8e042243fb0d8a260c08128fbd39dd211977cfb5551df9"}
Nov 22 07:25:37 crc kubenswrapper[4853]: I1122 07:25:37.821626 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6qzqw" event={"ID":"affbbb61-0428-456b-bbec-259deacac8e4","Type":"ContainerStarted","Data":"fbb0df3a557f26bbeb28d0fae300aae9ff6c4fb9319ac466e0294cfc5ac61f37"}
Nov 22 07:25:37 crc kubenswrapper[4853]: E1122 07:25:37.824260 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"prometheus-operator\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/obo-prometheus-rhel9-operator@sha256:203cf5b9dc1460f09e75f58d8b5cf7df5e57c18c8c6a41c14b5e8977d83263f3\\\"\"" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-p974c" podUID="f95bfaef-313c-4412-a8ce-ab9e8bd2d244"
Nov 22 07:25:38 crc kubenswrapper[4853]: I1122 07:25:38.828926 4853 generic.go:334] "Generic (PLEG): container finished" podID="66a448cd-a783-47ee-aeee-080613615f6f" containerID="0e08cbca864aa99eaf1c0fe2cb94a110b33fa997ed533f02eadf09970960e98a" exitCode=0
Nov 22 07:25:38 crc kubenswrapper[4853]: I1122 07:25:38.828977 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-92tvk" event={"ID":"66a448cd-a783-47ee-aeee-080613615f6f","Type":"ContainerDied","Data":"0e08cbca864aa99eaf1c0fe2cb94a110b33fa997ed533f02eadf09970960e98a"}
Nov 22 07:25:38 crc kubenswrapper[4853]: I1122 07:25:38.831991 4853 generic.go:334] "Generic (PLEG): container finished" podID="affbbb61-0428-456b-bbec-259deacac8e4" containerID="497aa9bef308dc40f54f0960af5d954aa541d8b0abc6c1ce497916895ec70697" exitCode=0
Nov 22 07:25:38 crc kubenswrapper[4853]: I1122 07:25:38.832066 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6qzqw" event={"ID":"affbbb61-0428-456b-bbec-259deacac8e4","Type":"ContainerDied","Data":"497aa9bef308dc40f54f0960af5d954aa541d8b0abc6c1ce497916895ec70697"}
Nov 22 07:25:39 crc kubenswrapper[4853]: I1122 07:25:39.839973 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5446b9c989-56f68" event={"ID":"0bea3315-6c33-4754-95a6-e465983de5b7","Type":"ContainerStarted","Data":"f13c167bebc43d1a22784632919a172557c4217b1d1cd6958e325e05877aa875"}
Nov 22 07:25:39 crc kubenswrapper[4853]: I1122 07:25:39.840660 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5446b9c989-56f68"
Nov 22 07:25:39 crc kubenswrapper[4853]: I1122 07:25:39.865023 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5446b9c989-56f68" podStartSLOduration=21.492629746 podStartE2EDuration="1m3.86499477s" podCreationTimestamp="2025-11-22 07:24:36 +0000 UTC" firstStartedPulling="2025-11-22 07:24:56.364116838 +0000 UTC m=+895.204739464" lastFinishedPulling="2025-11-22 07:25:38.736481822 +0000 UTC m=+937.577104488" observedRunningTime="2025-11-22 07:25:39.862221845 +0000 UTC m=+938.702844481" watchObservedRunningTime="2025-11-22 07:25:39.86499477 +0000 UTC m=+938.705617416"
Nov 22 07:25:41 crc kubenswrapper[4853]: I1122 07:25:41.857930 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6qzqw" event={"ID":"affbbb61-0428-456b-bbec-259deacac8e4","Type":"ContainerDied","Data":"f13a5b42762b8a0245697635d07bdae8ca0cf890317351e82205171a6d1e2950"}
Nov 22 07:25:41 crc kubenswrapper[4853]: I1122 07:25:41.858019 4853 generic.go:334] "Generic (PLEG): container finished" podID="affbbb61-0428-456b-bbec-259deacac8e4" containerID="f13a5b42762b8a0245697635d07bdae8ca0cf890317351e82205171a6d1e2950" exitCode=0
Nov 22 07:25:41 crc kubenswrapper[4853]: I1122 07:25:41.863496 4853 generic.go:334] "Generic (PLEG): container finished" podID="66a448cd-a783-47ee-aeee-080613615f6f" containerID="5f300795df1a3941bbd337b8a84e2cd5769188e05a749bc60121ec1de9c0a344" exitCode=0
Nov 22 07:25:41 crc kubenswrapper[4853]: I1122 07:25:41.863619 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-92tvk" event={"ID":"66a448cd-a783-47ee-aeee-080613615f6f","Type":"ContainerDied","Data":"5f300795df1a3941bbd337b8a84e2cd5769188e05a749bc60121ec1de9c0a344"}
Nov 22 07:25:42 crc kubenswrapper[4853]: I1122 07:25:42.874046 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-92tvk" event={"ID":"66a448cd-a783-47ee-aeee-080613615f6f","Type":"ContainerStarted","Data":"f982390b3ef11c41615b4cef996e3af58bff8762e97e7023b80717161a851a06"}
Nov 22 07:25:42 crc kubenswrapper[4853]: I1122 07:25:42.898817 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-92tvk" podStartSLOduration=11.366080949 podStartE2EDuration="14.898797172s" podCreationTimestamp="2025-11-22 07:25:28 +0000 UTC" firstStartedPulling="2025-11-22 07:25:38.832440334 +0000 UTC m=+937.673062980" lastFinishedPulling="2025-11-22 07:25:42.365156577 +0000 UTC m=+941.205779203" observedRunningTime="2025-11-22 07:25:42.892225965 +0000 UTC m=+941.732848591" watchObservedRunningTime="2025-11-22 07:25:42.898797172 +0000 UTC m=+941.739419798"
Nov 22 07:25:43 crc kubenswrapper[4853]: I1122 07:25:43.884803 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6qzqw" event={"ID":"affbbb61-0428-456b-bbec-259deacac8e4","Type":"ContainerStarted","Data":"53276cfe64870063d2e6e7986c9dc8a80ad523aa39f79680cc4dfca1dc1082ad"}
Nov 22 07:25:44 crc kubenswrapper[4853]: I1122 07:25:44.798328 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-6qzqw" podStartSLOduration=34.842751253 podStartE2EDuration="38.79830029s" podCreationTimestamp="2025-11-22 07:25:06 +0000 UTC" firstStartedPulling="2025-11-22 07:25:38.835571928 +0000 UTC m=+937.676194564" lastFinishedPulling="2025-11-22 07:25:42.791120975 +0000 UTC m=+941.631743601" observedRunningTime="2025-11-22 07:25:43.905162534 +0000 UTC m=+942.745785160" watchObservedRunningTime="2025-11-22 07:25:44.79830029 +0000 UTC m=+943.638922916"
Nov 22 07:25:46 crc kubenswrapper[4853]: I1122 07:25:46.578774 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5446b9c989-56f68"
Nov 22 07:25:47 crc kubenswrapper[4853]: I1122 07:25:47.277532 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-6qzqw"
Nov 22 07:25:47 crc kubenswrapper[4853]: I1122 07:25:47.278005 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-6qzqw"
Nov 22 07:25:47 crc kubenswrapper[4853]: I1122 07:25:47.316928 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-6qzqw"
Nov 22 07:25:47 crc kubenswrapper[4853]: I1122 07:25:47.914919 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-75686ff6c7-dppxx" event={"ID":"988cd804-b3e5-4b0f-aec4-cc7186845189","Type":"ContainerStarted","Data":"e66227f72ba1a1215a103881d85b0ec3f2eb7f3e3d9d6bfca6b60effd2a8e98c"}
Nov 22 07:25:47 crc kubenswrapper[4853]: I1122 07:25:47.916936 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-75686ff6c7-b7jq4" event={"ID":"6204a708-d77f-4350-806f-25ef39e98551","Type":"ContainerStarted","Data":"f9c7404143238aa3850a0fd43009db101da876324b9204f0ba5334480b9769cd"}
Nov 22 07:25:47 crc kubenswrapper[4853]: I1122 07:25:47.938862 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-75686ff6c7-dppxx" podStartSLOduration=-9223371963.915937 podStartE2EDuration="1m12.938839022s" podCreationTimestamp="2025-11-22 07:24:35 +0000 UTC" firstStartedPulling="2025-11-22 07:24:57.296942645 +0000 UTC m=+896.137565271" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:25:47.934330511 +0000 UTC m=+946.774953137" watchObservedRunningTime="2025-11-22 07:25:47.938839022 +0000 UTC m=+946.779461648"
Nov 22 07:25:47 crc kubenswrapper[4853]: I1122 07:25:47.964077 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-75686ff6c7-b7jq4" podStartSLOduration=22.650112989 podStartE2EDuration="1m12.96404815s" podCreationTimestamp="2025-11-22 07:24:35 +0000 UTC" firstStartedPulling="2025-11-22 07:24:56.083967269 +0000 UTC m=+894.924589895" lastFinishedPulling="2025-11-22 07:25:46.39790239 +0000 UTC m=+945.238525056" observedRunningTime="2025-11-22 07:25:47.953587788 +0000 UTC m=+946.794210424" watchObservedRunningTime="2025-11-22 07:25:47.96404815 +0000 UTC m=+946.804670776"
containerID="cri-o://f982390b3ef11c41615b4cef996e3af58bff8762e97e7023b80717161a851a06" gracePeriod=2 Nov 22 07:25:52 crc kubenswrapper[4853]: I1122 07:25:52.954396 4853 generic.go:334] "Generic (PLEG): container finished" podID="affbbb61-0428-456b-bbec-259deacac8e4" containerID="53276cfe64870063d2e6e7986c9dc8a80ad523aa39f79680cc4dfca1dc1082ad" exitCode=0 Nov 22 07:25:52 crc kubenswrapper[4853]: I1122 07:25:52.954444 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6qzqw" event={"ID":"affbbb61-0428-456b-bbec-259deacac8e4","Type":"ContainerDied","Data":"53276cfe64870063d2e6e7986c9dc8a80ad523aa39f79680cc4dfca1dc1082ad"} Nov 22 07:25:55 crc kubenswrapper[4853]: I1122 07:25:55.979766 4853 generic.go:334] "Generic (PLEG): container finished" podID="66a448cd-a783-47ee-aeee-080613615f6f" containerID="f982390b3ef11c41615b4cef996e3af58bff8762e97e7023b80717161a851a06" exitCode=0 Nov 22 07:25:55 crc kubenswrapper[4853]: I1122 07:25:55.979779 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-92tvk" event={"ID":"66a448cd-a783-47ee-aeee-080613615f6f","Type":"ContainerDied","Data":"f982390b3ef11c41615b4cef996e3af58bff8762e97e7023b80717161a851a06"} Nov 22 07:25:57 crc kubenswrapper[4853]: E1122 07:25:57.277637 4853 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 53276cfe64870063d2e6e7986c9dc8a80ad523aa39f79680cc4dfca1dc1082ad is running failed: container process not found" containerID="53276cfe64870063d2e6e7986c9dc8a80ad523aa39f79680cc4dfca1dc1082ad" cmd=["grpc_health_probe","-addr=:50051"] Nov 22 07:25:57 crc kubenswrapper[4853]: E1122 07:25:57.278264 4853 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 53276cfe64870063d2e6e7986c9dc8a80ad523aa39f79680cc4dfca1dc1082ad is running failed: container process not found" containerID="53276cfe64870063d2e6e7986c9dc8a80ad523aa39f79680cc4dfca1dc1082ad" cmd=["grpc_health_probe","-addr=:50051"] Nov 22 07:25:57 crc kubenswrapper[4853]: E1122 07:25:57.278530 4853 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 53276cfe64870063d2e6e7986c9dc8a80ad523aa39f79680cc4dfca1dc1082ad is running failed: container process not found" containerID="53276cfe64870063d2e6e7986c9dc8a80ad523aa39f79680cc4dfca1dc1082ad" cmd=["grpc_health_probe","-addr=:50051"] Nov 22 07:25:57 crc kubenswrapper[4853]: E1122 07:25:57.278572 4853 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 53276cfe64870063d2e6e7986c9dc8a80ad523aa39f79680cc4dfca1dc1082ad is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/community-operators-6qzqw" podUID="affbbb61-0428-456b-bbec-259deacac8e4" containerName="registry-server" Nov 22 07:25:59 crc kubenswrapper[4853]: E1122 07:25:59.079217 4853 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of f982390b3ef11c41615b4cef996e3af58bff8762e97e7023b80717161a851a06 is running failed: container process not found" containerID="f982390b3ef11c41615b4cef996e3af58bff8762e97e7023b80717161a851a06" cmd=["grpc_health_probe","-addr=:50051"] Nov 22 07:25:59 crc 
kubenswrapper[4853]: E1122 07:25:59.079794 4853 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of f982390b3ef11c41615b4cef996e3af58bff8762e97e7023b80717161a851a06 is running failed: container process not found" containerID="f982390b3ef11c41615b4cef996e3af58bff8762e97e7023b80717161a851a06" cmd=["grpc_health_probe","-addr=:50051"] Nov 22 07:25:59 crc kubenswrapper[4853]: E1122 07:25:59.080201 4853 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of f982390b3ef11c41615b4cef996e3af58bff8762e97e7023b80717161a851a06 is running failed: container process not found" containerID="f982390b3ef11c41615b4cef996e3af58bff8762e97e7023b80717161a851a06" cmd=["grpc_health_probe","-addr=:50051"] Nov 22 07:25:59 crc kubenswrapper[4853]: E1122 07:25:59.080307 4853 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of f982390b3ef11c41615b4cef996e3af58bff8762e97e7023b80717161a851a06 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/certified-operators-92tvk" podUID="66a448cd-a783-47ee-aeee-080613615f6f" containerName="registry-server" Nov 22 07:26:04 crc kubenswrapper[4853]: I1122 07:26:04.599726 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6qzqw" Nov 22 07:26:04 crc kubenswrapper[4853]: I1122 07:26:04.640094 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5lpfx\" (UniqueName: \"kubernetes.io/projected/affbbb61-0428-456b-bbec-259deacac8e4-kube-api-access-5lpfx\") pod \"affbbb61-0428-456b-bbec-259deacac8e4\" (UID: \"affbbb61-0428-456b-bbec-259deacac8e4\") " Nov 22 07:26:04 crc kubenswrapper[4853]: I1122 07:26:04.640211 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/affbbb61-0428-456b-bbec-259deacac8e4-catalog-content\") pod \"affbbb61-0428-456b-bbec-259deacac8e4\" (UID: \"affbbb61-0428-456b-bbec-259deacac8e4\") " Nov 22 07:26:04 crc kubenswrapper[4853]: I1122 07:26:04.640245 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/affbbb61-0428-456b-bbec-259deacac8e4-utilities\") pod \"affbbb61-0428-456b-bbec-259deacac8e4\" (UID: \"affbbb61-0428-456b-bbec-259deacac8e4\") " Nov 22 07:26:04 crc kubenswrapper[4853]: I1122 07:26:04.641539 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/affbbb61-0428-456b-bbec-259deacac8e4-utilities" (OuterVolumeSpecName: "utilities") pod "affbbb61-0428-456b-bbec-259deacac8e4" (UID: "affbbb61-0428-456b-bbec-259deacac8e4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:26:04 crc kubenswrapper[4853]: I1122 07:26:04.658328 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/affbbb61-0428-456b-bbec-259deacac8e4-kube-api-access-5lpfx" (OuterVolumeSpecName: "kube-api-access-5lpfx") pod "affbbb61-0428-456b-bbec-259deacac8e4" (UID: "affbbb61-0428-456b-bbec-259deacac8e4"). InnerVolumeSpecName "kube-api-access-5lpfx". 
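Editor's note: the ExecSync failures above are readiness probes still exec'ing grpc_health_probe -addr=:50051 into containers whose process is already gone. Approximately what that probe does, as a Go sketch (the real binary adds TLS options and distinct exit codes; this uses only the standard gRPC health-checking API):

```go
package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	healthpb "google.golang.org/grpc/health/grpc_health_v1"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), time.Second)
	defer cancel()
	// Dial the registry-server's gRPC port and call the standard health
	// service, as grpc_health_probe does.
	conn, err := grpc.NewClient("localhost:50051",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		fmt.Println("dial:", err)
		return
	}
	defer conn.Close()
	resp, err := healthpb.NewHealthClient(conn).Check(ctx, &healthpb.HealthCheckRequest{})
	if err != nil {
		fmt.Println("health check failed:", err)
		return
	}
	fmt.Println("status:", resp.GetStatus()) // SERVING when healthy
}
```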
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:26:04 crc kubenswrapper[4853]: I1122 07:26:04.720552 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/affbbb61-0428-456b-bbec-259deacac8e4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "affbbb61-0428-456b-bbec-259deacac8e4" (UID: "affbbb61-0428-456b-bbec-259deacac8e4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:26:04 crc kubenswrapper[4853]: I1122 07:26:04.742157 4853 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/affbbb61-0428-456b-bbec-259deacac8e4-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 07:26:04 crc kubenswrapper[4853]: I1122 07:26:04.742200 4853 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/affbbb61-0428-456b-bbec-259deacac8e4-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 07:26:04 crc kubenswrapper[4853]: I1122 07:26:04.742211 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5lpfx\" (UniqueName: \"kubernetes.io/projected/affbbb61-0428-456b-bbec-259deacac8e4-kube-api-access-5lpfx\") on node \"crc\" DevicePath \"\"" Nov 22 07:26:04 crc kubenswrapper[4853]: I1122 07:26:04.815595 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-92tvk" Nov 22 07:26:04 crc kubenswrapper[4853]: I1122 07:26:04.843573 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66a448cd-a783-47ee-aeee-080613615f6f-utilities\") pod \"66a448cd-a783-47ee-aeee-080613615f6f\" (UID: \"66a448cd-a783-47ee-aeee-080613615f6f\") " Nov 22 07:26:04 crc kubenswrapper[4853]: I1122 07:26:04.843687 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/66a448cd-a783-47ee-aeee-080613615f6f-catalog-content\") pod \"66a448cd-a783-47ee-aeee-080613615f6f\" (UID: \"66a448cd-a783-47ee-aeee-080613615f6f\") " Nov 22 07:26:04 crc kubenswrapper[4853]: I1122 07:26:04.843853 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xwhcx\" (UniqueName: \"kubernetes.io/projected/66a448cd-a783-47ee-aeee-080613615f6f-kube-api-access-xwhcx\") pod \"66a448cd-a783-47ee-aeee-080613615f6f\" (UID: \"66a448cd-a783-47ee-aeee-080613615f6f\") " Nov 22 07:26:04 crc kubenswrapper[4853]: I1122 07:26:04.845554 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/66a448cd-a783-47ee-aeee-080613615f6f-utilities" (OuterVolumeSpecName: "utilities") pod "66a448cd-a783-47ee-aeee-080613615f6f" (UID: "66a448cd-a783-47ee-aeee-080613615f6f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:26:04 crc kubenswrapper[4853]: I1122 07:26:04.850159 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/66a448cd-a783-47ee-aeee-080613615f6f-kube-api-access-xwhcx" (OuterVolumeSpecName: "kube-api-access-xwhcx") pod "66a448cd-a783-47ee-aeee-080613615f6f" (UID: "66a448cd-a783-47ee-aeee-080613615f6f"). InnerVolumeSpecName "kube-api-access-xwhcx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:26:04 crc kubenswrapper[4853]: I1122 07:26:04.893001 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/66a448cd-a783-47ee-aeee-080613615f6f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "66a448cd-a783-47ee-aeee-080613615f6f" (UID: "66a448cd-a783-47ee-aeee-080613615f6f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:26:04 crc kubenswrapper[4853]: I1122 07:26:04.946054 4853 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/66a448cd-a783-47ee-aeee-080613615f6f-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 07:26:04 crc kubenswrapper[4853]: I1122 07:26:04.946098 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xwhcx\" (UniqueName: \"kubernetes.io/projected/66a448cd-a783-47ee-aeee-080613615f6f-kube-api-access-xwhcx\") on node \"crc\" DevicePath \"\"" Nov 22 07:26:04 crc kubenswrapper[4853]: I1122 07:26:04.946111 4853 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66a448cd-a783-47ee-aeee-080613615f6f-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 07:26:05 crc kubenswrapper[4853]: I1122 07:26:05.062272 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-92tvk" event={"ID":"66a448cd-a783-47ee-aeee-080613615f6f","Type":"ContainerDied","Data":"18a5e091271adf1dbb8e042243fb0d8a260c08128fbd39dd211977cfb5551df9"} Nov 22 07:26:05 crc kubenswrapper[4853]: I1122 07:26:05.062317 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-92tvk" Nov 22 07:26:05 crc kubenswrapper[4853]: I1122 07:26:05.062389 4853 scope.go:117] "RemoveContainer" containerID="f982390b3ef11c41615b4cef996e3af58bff8762e97e7023b80717161a851a06" Nov 22 07:26:05 crc kubenswrapper[4853]: I1122 07:26:05.074456 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6qzqw" event={"ID":"affbbb61-0428-456b-bbec-259deacac8e4","Type":"ContainerDied","Data":"fbb0df3a557f26bbeb28d0fae300aae9ff6c4fb9319ac466e0294cfc5ac61f37"} Nov 22 07:26:05 crc kubenswrapper[4853]: I1122 07:26:05.074574 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-6qzqw" Nov 22 07:26:05 crc kubenswrapper[4853]: I1122 07:26:05.088097 4853 scope.go:117] "RemoveContainer" containerID="5f300795df1a3941bbd337b8a84e2cd5769188e05a749bc60121ec1de9c0a344" Nov 22 07:26:05 crc kubenswrapper[4853]: I1122 07:26:05.098668 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-92tvk"] Nov 22 07:26:05 crc kubenswrapper[4853]: I1122 07:26:05.104549 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-92tvk"] Nov 22 07:26:05 crc kubenswrapper[4853]: I1122 07:26:05.132605 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6qzqw"] Nov 22 07:26:05 crc kubenswrapper[4853]: I1122 07:26:05.145018 4853 scope.go:117] "RemoveContainer" containerID="0e08cbca864aa99eaf1c0fe2cb94a110b33fa997ed533f02eadf09970960e98a" Nov 22 07:26:05 crc kubenswrapper[4853]: I1122 07:26:05.145083 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-6qzqw"] Nov 22 07:26:05 crc kubenswrapper[4853]: I1122 07:26:05.159088 4853 scope.go:117] "RemoveContainer" containerID="53276cfe64870063d2e6e7986c9dc8a80ad523aa39f79680cc4dfca1dc1082ad" Nov 22 07:26:05 crc kubenswrapper[4853]: I1122 07:26:05.181942 4853 scope.go:117] "RemoveContainer" containerID="f13a5b42762b8a0245697635d07bdae8ca0cf890317351e82205171a6d1e2950" Nov 22 07:26:05 crc kubenswrapper[4853]: I1122 07:26:05.198785 4853 scope.go:117] "RemoveContainer" containerID="497aa9bef308dc40f54f0960af5d954aa541d8b0abc6c1ce497916895ec70697" Nov 22 07:26:05 crc kubenswrapper[4853]: I1122 07:26:05.759626 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="66a448cd-a783-47ee-aeee-080613615f6f" path="/var/lib/kubelet/pods/66a448cd-a783-47ee-aeee-080613615f6f/volumes" Nov 22 07:26:05 crc kubenswrapper[4853]: I1122 07:26:05.760536 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="affbbb61-0428-456b-bbec-259deacac8e4" path="/var/lib/kubelet/pods/affbbb61-0428-456b-bbec-259deacac8e4/volumes" Nov 22 07:26:06 crc kubenswrapper[4853]: I1122 07:26:06.086679 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-d8bb48f5d-6mnv6" event={"ID":"838479bf-7b77-403c-915a-ed8b62d9c970","Type":"ContainerStarted","Data":"0d750aae7df472df41a62aaf79c33ff5f75d4459e24fef570d3347ae84be710d"} Nov 22 07:26:06 crc kubenswrapper[4853]: I1122 07:26:06.088652 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-p974c" event={"ID":"f95bfaef-313c-4412-a8ce-ab9e8bd2d244","Type":"ContainerStarted","Data":"73d8101938885f108644ddffd8ae2287df0487aa2ccf13f1b4a9571d5748994e"} Nov 22 07:26:07 crc kubenswrapper[4853]: I1122 07:26:07.096504 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-d8bb48f5d-6mnv6" Nov 22 07:26:07 crc kubenswrapper[4853]: I1122 07:26:07.098205 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-d8bb48f5d-6mnv6" Nov 22 07:26:07 crc kubenswrapper[4853]: I1122 07:26:07.118641 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-d8bb48f5d-6mnv6" podStartSLOduration=21.612476241 podStartE2EDuration="1m32.118616908s" podCreationTimestamp="2025-11-22 07:24:35 
+0000 UTC" firstStartedPulling="2025-11-22 07:24:54.078383452 +0000 UTC m=+892.919006078" lastFinishedPulling="2025-11-22 07:26:04.584524119 +0000 UTC m=+963.425146745" observedRunningTime="2025-11-22 07:26:07.116917862 +0000 UTC m=+965.957540498" watchObservedRunningTime="2025-11-22 07:26:07.118616908 +0000 UTC m=+965.959239534" Nov 22 07:26:07 crc kubenswrapper[4853]: I1122 07:26:07.179947 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-p974c" podStartSLOduration=21.965959991 podStartE2EDuration="1m32.179923647s" podCreationTimestamp="2025-11-22 07:24:35 +0000 UTC" firstStartedPulling="2025-11-22 07:24:54.371264412 +0000 UTC m=+893.211887038" lastFinishedPulling="2025-11-22 07:26:04.585228068 +0000 UTC m=+963.425850694" observedRunningTime="2025-11-22 07:26:07.148648426 +0000 UTC m=+965.989271072" watchObservedRunningTime="2025-11-22 07:26:07.179923647 +0000 UTC m=+966.020546273" Nov 22 07:26:16 crc kubenswrapper[4853]: I1122 07:26:16.478779 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-fzgb6"] Nov 22 07:26:16 crc kubenswrapper[4853]: E1122 07:26:16.479926 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66a448cd-a783-47ee-aeee-080613615f6f" containerName="registry-server" Nov 22 07:26:16 crc kubenswrapper[4853]: I1122 07:26:16.479949 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="66a448cd-a783-47ee-aeee-080613615f6f" containerName="registry-server" Nov 22 07:26:16 crc kubenswrapper[4853]: E1122 07:26:16.479964 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="affbbb61-0428-456b-bbec-259deacac8e4" containerName="registry-server" Nov 22 07:26:16 crc kubenswrapper[4853]: I1122 07:26:16.479973 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="affbbb61-0428-456b-bbec-259deacac8e4" containerName="registry-server" Nov 22 07:26:16 crc kubenswrapper[4853]: E1122 07:26:16.479990 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f14b95f-f300-4d66-bbaa-5d92b3ffe1d0" containerName="extract-utilities" Nov 22 07:26:16 crc kubenswrapper[4853]: I1122 07:26:16.479999 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f14b95f-f300-4d66-bbaa-5d92b3ffe1d0" containerName="extract-utilities" Nov 22 07:26:16 crc kubenswrapper[4853]: E1122 07:26:16.480014 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="affbbb61-0428-456b-bbec-259deacac8e4" containerName="extract-content" Nov 22 07:26:16 crc kubenswrapper[4853]: I1122 07:26:16.480022 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="affbbb61-0428-456b-bbec-259deacac8e4" containerName="extract-content" Nov 22 07:26:16 crc kubenswrapper[4853]: E1122 07:26:16.480040 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="affbbb61-0428-456b-bbec-259deacac8e4" containerName="extract-utilities" Nov 22 07:26:16 crc kubenswrapper[4853]: I1122 07:26:16.480048 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="affbbb61-0428-456b-bbec-259deacac8e4" containerName="extract-utilities" Nov 22 07:26:16 crc kubenswrapper[4853]: E1122 07:26:16.480058 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66a448cd-a783-47ee-aeee-080613615f6f" containerName="extract-content" Nov 22 07:26:16 crc kubenswrapper[4853]: I1122 07:26:16.480065 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="66a448cd-a783-47ee-aeee-080613615f6f" containerName="extract-content" Nov 22 
Nov 22 07:26:16 crc kubenswrapper[4853]: I1122 07:26:16.478779 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-fzgb6"]
Nov 22 07:26:16 crc kubenswrapper[4853]: E1122 07:26:16.479926 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66a448cd-a783-47ee-aeee-080613615f6f" containerName="registry-server"
Nov 22 07:26:16 crc kubenswrapper[4853]: I1122 07:26:16.479949 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="66a448cd-a783-47ee-aeee-080613615f6f" containerName="registry-server"
Nov 22 07:26:16 crc kubenswrapper[4853]: E1122 07:26:16.479964 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="affbbb61-0428-456b-bbec-259deacac8e4" containerName="registry-server"
Nov 22 07:26:16 crc kubenswrapper[4853]: I1122 07:26:16.479973 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="affbbb61-0428-456b-bbec-259deacac8e4" containerName="registry-server"
Nov 22 07:26:16 crc kubenswrapper[4853]: E1122 07:26:16.479990 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f14b95f-f300-4d66-bbaa-5d92b3ffe1d0" containerName="extract-utilities"
Nov 22 07:26:16 crc kubenswrapper[4853]: I1122 07:26:16.479999 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f14b95f-f300-4d66-bbaa-5d92b3ffe1d0" containerName="extract-utilities"
Nov 22 07:26:16 crc kubenswrapper[4853]: E1122 07:26:16.480014 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="affbbb61-0428-456b-bbec-259deacac8e4" containerName="extract-content"
Nov 22 07:26:16 crc kubenswrapper[4853]: I1122 07:26:16.480022 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="affbbb61-0428-456b-bbec-259deacac8e4" containerName="extract-content"
Nov 22 07:26:16 crc kubenswrapper[4853]: E1122 07:26:16.480040 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="affbbb61-0428-456b-bbec-259deacac8e4" containerName="extract-utilities"
Nov 22 07:26:16 crc kubenswrapper[4853]: I1122 07:26:16.480048 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="affbbb61-0428-456b-bbec-259deacac8e4" containerName="extract-utilities"
Nov 22 07:26:16 crc kubenswrapper[4853]: E1122 07:26:16.480058 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66a448cd-a783-47ee-aeee-080613615f6f" containerName="extract-content"
Nov 22 07:26:16 crc kubenswrapper[4853]: I1122 07:26:16.480065 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="66a448cd-a783-47ee-aeee-080613615f6f" containerName="extract-content"
Nov 22 07:26:16 crc kubenswrapper[4853]: E1122 07:26:16.480074 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f14b95f-f300-4d66-bbaa-5d92b3ffe1d0" containerName="extract-content"
Nov 22 07:26:16 crc kubenswrapper[4853]: I1122 07:26:16.480081 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f14b95f-f300-4d66-bbaa-5d92b3ffe1d0" containerName="extract-content"
Nov 22 07:26:16 crc kubenswrapper[4853]: E1122 07:26:16.480092 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f14b95f-f300-4d66-bbaa-5d92b3ffe1d0" containerName="registry-server"
Nov 22 07:26:16 crc kubenswrapper[4853]: I1122 07:26:16.480101 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f14b95f-f300-4d66-bbaa-5d92b3ffe1d0" containerName="registry-server"
Nov 22 07:26:16 crc kubenswrapper[4853]: E1122 07:26:16.480109 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66a448cd-a783-47ee-aeee-080613615f6f" containerName="extract-utilities"
Nov 22 07:26:16 crc kubenswrapper[4853]: I1122 07:26:16.480117 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="66a448cd-a783-47ee-aeee-080613615f6f" containerName="extract-utilities"
Nov 22 07:26:16 crc kubenswrapper[4853]: I1122 07:26:16.480249 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f14b95f-f300-4d66-bbaa-5d92b3ffe1d0" containerName="registry-server"
Nov 22 07:26:16 crc kubenswrapper[4853]: I1122 07:26:16.480265 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="66a448cd-a783-47ee-aeee-080613615f6f" containerName="registry-server"
Nov 22 07:26:16 crc kubenswrapper[4853]: I1122 07:26:16.480275 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="affbbb61-0428-456b-bbec-259deacac8e4" containerName="registry-server"
Nov 22 07:26:16 crc kubenswrapper[4853]: I1122 07:26:16.480891 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7f985d654d-fzgb6"
Nov 22 07:26:16 crc kubenswrapper[4853]: I1122 07:26:16.484052 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt"
Nov 22 07:26:16 crc kubenswrapper[4853]: I1122 07:26:16.484187 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt"
Nov 22 07:26:16 crc kubenswrapper[4853]: I1122 07:26:16.484332 4853 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-6w4kj"
Nov 22 07:26:16 crc kubenswrapper[4853]: I1122 07:26:16.489094 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-fzgb6"]
Nov 22 07:26:16 crc kubenswrapper[4853]: I1122 07:26:16.521672 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-f7gf2"]
Nov 22 07:26:16 crc kubenswrapper[4853]: I1122 07:26:16.529392 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-5b446d88c5-k2kt8"]
Nov 22 07:26:16 crc kubenswrapper[4853]: I1122 07:26:16.529592 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-5655c58dd6-f7gf2"
Nov 22 07:26:16 crc kubenswrapper[4853]: I1122 07:26:16.530778 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-5b446d88c5-k2kt8"
Nov 22 07:26:16 crc kubenswrapper[4853]: I1122 07:26:16.540671 4853 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-zgdr6"
Nov 22 07:26:16 crc kubenswrapper[4853]: W1122 07:26:16.540702 4853 reflector.go:561] object-"cert-manager"/"cert-manager-dockercfg-jlmfc": failed to list *v1.Secret: secrets "cert-manager-dockercfg-jlmfc" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "cert-manager": no relationship found between node 'crc' and this object
Nov 22 07:26:16 crc kubenswrapper[4853]: E1122 07:26:16.540826 4853 reflector.go:158] "Unhandled Error" err="object-\"cert-manager\"/\"cert-manager-dockercfg-jlmfc\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cert-manager-dockercfg-jlmfc\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"cert-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError"
\"accf8a72-f739-4535-b3a9-1303923fe009\") " pod="cert-manager/cert-manager-5b446d88c5-k2kt8" Nov 22 07:26:16 crc kubenswrapper[4853]: I1122 07:26:16.649078 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cpbms\" (UniqueName: \"kubernetes.io/projected/8e6fca36-41ef-436e-a917-ed8f248db72f-kube-api-access-cpbms\") pod \"cert-manager-webhook-5655c58dd6-f7gf2\" (UID: \"8e6fca36-41ef-436e-a917-ed8f248db72f\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-f7gf2" Nov 22 07:26:16 crc kubenswrapper[4853]: I1122 07:26:16.671775 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cpbms\" (UniqueName: \"kubernetes.io/projected/8e6fca36-41ef-436e-a917-ed8f248db72f-kube-api-access-cpbms\") pod \"cert-manager-webhook-5655c58dd6-f7gf2\" (UID: \"8e6fca36-41ef-436e-a917-ed8f248db72f\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-f7gf2" Nov 22 07:26:16 crc kubenswrapper[4853]: I1122 07:26:16.673570 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tzcmk\" (UniqueName: \"kubernetes.io/projected/edcc88ca-0ffa-4e1a-83b2-97df4f92a493-kube-api-access-tzcmk\") pod \"cert-manager-cainjector-7f985d654d-fzgb6\" (UID: \"edcc88ca-0ffa-4e1a-83b2-97df4f92a493\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-fzgb6" Nov 22 07:26:16 crc kubenswrapper[4853]: I1122 07:26:16.677132 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h524p\" (UniqueName: \"kubernetes.io/projected/accf8a72-f739-4535-b3a9-1303923fe009-kube-api-access-h524p\") pod \"cert-manager-5b446d88c5-k2kt8\" (UID: \"accf8a72-f739-4535-b3a9-1303923fe009\") " pod="cert-manager/cert-manager-5b446d88c5-k2kt8" Nov 22 07:26:16 crc kubenswrapper[4853]: I1122 07:26:16.797554 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7f985d654d-fzgb6" Nov 22 07:26:16 crc kubenswrapper[4853]: I1122 07:26:16.876114 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-5655c58dd6-f7gf2" Nov 22 07:26:17 crc kubenswrapper[4853]: I1122 07:26:17.101894 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-fzgb6"] Nov 22 07:26:17 crc kubenswrapper[4853]: I1122 07:26:17.165168 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-fzgb6" event={"ID":"edcc88ca-0ffa-4e1a-83b2-97df4f92a493","Type":"ContainerStarted","Data":"95cab0b7d2c8ebe01047fed11d7609aa2875fedc63f5606c906b3dcc552b9ae0"} Nov 22 07:26:17 crc kubenswrapper[4853]: I1122 07:26:17.376012 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-f7gf2"] Nov 22 07:26:17 crc kubenswrapper[4853]: W1122 07:26:17.381867 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8e6fca36_41ef_436e_a917_ed8f248db72f.slice/crio-7b7390fb6fed8e30b26bdb9a25d7fbfcb82538344cfa39ace262c4f416fc9dc3 WatchSource:0}: Error finding container 7b7390fb6fed8e30b26bdb9a25d7fbfcb82538344cfa39ace262c4f416fc9dc3: Status 404 returned error can't find the container with id 7b7390fb6fed8e30b26bdb9a25d7fbfcb82538344cfa39ace262c4f416fc9dc3 Nov 22 07:26:17 crc kubenswrapper[4853]: I1122 07:26:17.745877 4853 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-jlmfc" Nov 22 07:26:17 crc kubenswrapper[4853]: I1122 07:26:17.754011 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-5b446d88c5-k2kt8" Nov 22 07:26:17 crc kubenswrapper[4853]: I1122 07:26:17.980537 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-5b446d88c5-k2kt8"] Nov 22 07:26:18 crc kubenswrapper[4853]: I1122 07:26:18.172261 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-5655c58dd6-f7gf2" event={"ID":"8e6fca36-41ef-436e-a917-ed8f248db72f","Type":"ContainerStarted","Data":"7b7390fb6fed8e30b26bdb9a25d7fbfcb82538344cfa39ace262c4f416fc9dc3"} Nov 22 07:26:18 crc kubenswrapper[4853]: I1122 07:26:18.173639 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-5b446d88c5-k2kt8" event={"ID":"accf8a72-f739-4535-b3a9-1303923fe009","Type":"ContainerStarted","Data":"ccf60a5890fcf54a655a1bd1dff58dff40a15bfdb3ddfb5da1e61e556d9fd870"} Nov 22 07:26:31 crc kubenswrapper[4853]: I1122 07:26:31.268690 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-fzgb6" event={"ID":"edcc88ca-0ffa-4e1a-83b2-97df4f92a493","Type":"ContainerStarted","Data":"80e8bdce69174bf5ea764999540fe988814132166f9fef383454d3a34f11b99f"} Nov 22 07:26:31 crc kubenswrapper[4853]: I1122 07:26:31.294185 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-7f985d654d-fzgb6" podStartSLOduration=2.259641763 podStartE2EDuration="15.294161161s" podCreationTimestamp="2025-11-22 07:26:16 +0000 UTC" firstStartedPulling="2025-11-22 07:26:17.115325992 +0000 UTC m=+975.955948608" lastFinishedPulling="2025-11-22 07:26:30.14984536 +0000 UTC m=+988.990468006" observedRunningTime="2025-11-22 07:26:31.292356477 +0000 UTC m=+990.132979113" watchObservedRunningTime="2025-11-22 07:26:31.294161161 +0000 UTC m=+990.134783787" Nov 22 07:26:31 crc kubenswrapper[4853]: I1122 07:26:31.297210 4853 patch_prober.go:28] interesting 
pod/machine-config-daemon-fflvd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 07:26:31 crc kubenswrapper[4853]: I1122 07:26:31.297285 4853 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 07:26:33 crc kubenswrapper[4853]: I1122 07:26:33.283091 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-5655c58dd6-f7gf2" event={"ID":"8e6fca36-41ef-436e-a917-ed8f248db72f","Type":"ContainerStarted","Data":"59ad59e63f6692c72ba4ff7a0a7761af898d90b5c17bd11dbbd316893c97d89a"} Nov 22 07:26:33 crc kubenswrapper[4853]: I1122 07:26:33.284092 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-5655c58dd6-f7gf2" Nov 22 07:26:33 crc kubenswrapper[4853]: I1122 07:26:33.302398 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-5655c58dd6-f7gf2" podStartSLOduration=1.750922347 podStartE2EDuration="17.302374962s" podCreationTimestamp="2025-11-22 07:26:16 +0000 UTC" firstStartedPulling="2025-11-22 07:26:17.384689303 +0000 UTC m=+976.225311929" lastFinishedPulling="2025-11-22 07:26:32.936141918 +0000 UTC m=+991.776764544" observedRunningTime="2025-11-22 07:26:33.298129416 +0000 UTC m=+992.138752062" watchObservedRunningTime="2025-11-22 07:26:33.302374962 +0000 UTC m=+992.142997588" Nov 22 07:26:34 crc kubenswrapper[4853]: I1122 07:26:34.292200 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-5b446d88c5-k2kt8" event={"ID":"accf8a72-f739-4535-b3a9-1303923fe009","Type":"ContainerStarted","Data":"6e1de6e7c37f01ddc55f442b75a57e982ef480853c0add7a4d18b8e8e48380f3"} Nov 22 07:26:34 crc kubenswrapper[4853]: I1122 07:26:34.310315 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-5b446d88c5-k2kt8" podStartSLOduration=2.5060045730000002 podStartE2EDuration="18.31028655s" podCreationTimestamp="2025-11-22 07:26:16 +0000 UTC" firstStartedPulling="2025-11-22 07:26:17.989451325 +0000 UTC m=+976.830073971" lastFinishedPulling="2025-11-22 07:26:33.793733322 +0000 UTC m=+992.634355948" observedRunningTime="2025-11-22 07:26:34.305836518 +0000 UTC m=+993.146459154" watchObservedRunningTime="2025-11-22 07:26:34.31028655 +0000 UTC m=+993.150909176" Nov 22 07:26:41 crc kubenswrapper[4853]: I1122 07:26:41.879633 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-5655c58dd6-f7gf2" Nov 22 07:27:01 crc kubenswrapper[4853]: I1122 07:27:01.297242 4853 patch_prober.go:28] interesting pod/machine-config-daemon-fflvd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 07:27:01 crc kubenswrapper[4853]: I1122 07:27:01.297943 4853 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 07:27:11 crc kubenswrapper[4853]: I1122 07:27:11.590613 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fc92vc"] Nov 22 07:27:11 crc kubenswrapper[4853]: I1122 07:27:11.593153 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fc92vc" Nov 22 07:27:11 crc kubenswrapper[4853]: I1122 07:27:11.595446 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Nov 22 07:27:11 crc kubenswrapper[4853]: I1122 07:27:11.604895 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fc92vc"] Nov 22 07:27:11 crc kubenswrapper[4853]: I1122 07:27:11.684256 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2f822997-9c6e-4132-b606-11e336e2f4af-bundle\") pod \"a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fc92vc\" (UID: \"2f822997-9c6e-4132-b606-11e336e2f4af\") " pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fc92vc" Nov 22 07:27:11 crc kubenswrapper[4853]: I1122 07:27:11.684331 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2f822997-9c6e-4132-b606-11e336e2f4af-util\") pod \"a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fc92vc\" (UID: \"2f822997-9c6e-4132-b606-11e336e2f4af\") " pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fc92vc" Nov 22 07:27:11 crc kubenswrapper[4853]: I1122 07:27:11.684390 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8l6fm\" (UniqueName: \"kubernetes.io/projected/2f822997-9c6e-4132-b606-11e336e2f4af-kube-api-access-8l6fm\") pod \"a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fc92vc\" (UID: \"2f822997-9c6e-4132-b606-11e336e2f4af\") " pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fc92vc" Nov 22 07:27:11 crc kubenswrapper[4853]: I1122 07:27:11.786256 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2f822997-9c6e-4132-b606-11e336e2f4af-bundle\") pod \"a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fc92vc\" (UID: \"2f822997-9c6e-4132-b606-11e336e2f4af\") " pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fc92vc" Nov 22 07:27:11 crc kubenswrapper[4853]: I1122 07:27:11.786381 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2f822997-9c6e-4132-b606-11e336e2f4af-util\") pod \"a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fc92vc\" (UID: \"2f822997-9c6e-4132-b606-11e336e2f4af\") " pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fc92vc" Nov 22 07:27:11 crc kubenswrapper[4853]: I1122 07:27:11.786444 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8l6fm\" (UniqueName: 
\"kubernetes.io/projected/2f822997-9c6e-4132-b606-11e336e2f4af-kube-api-access-8l6fm\") pod \"a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fc92vc\" (UID: \"2f822997-9c6e-4132-b606-11e336e2f4af\") " pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fc92vc" Nov 22 07:27:11 crc kubenswrapper[4853]: I1122 07:27:11.787065 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2f822997-9c6e-4132-b606-11e336e2f4af-bundle\") pod \"a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fc92vc\" (UID: \"2f822997-9c6e-4132-b606-11e336e2f4af\") " pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fc92vc" Nov 22 07:27:11 crc kubenswrapper[4853]: I1122 07:27:11.787065 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2f822997-9c6e-4132-b606-11e336e2f4af-util\") pod \"a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fc92vc\" (UID: \"2f822997-9c6e-4132-b606-11e336e2f4af\") " pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fc92vc" Nov 22 07:27:11 crc kubenswrapper[4853]: I1122 07:27:11.814310 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8l6fm\" (UniqueName: \"kubernetes.io/projected/2f822997-9c6e-4132-b606-11e336e2f4af-kube-api-access-8l6fm\") pod \"a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fc92vc\" (UID: \"2f822997-9c6e-4132-b606-11e336e2f4af\") " pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fc92vc" Nov 22 07:27:11 crc kubenswrapper[4853]: I1122 07:27:11.913497 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fc92vc" Nov 22 07:27:12 crc kubenswrapper[4853]: I1122 07:27:12.199414 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fc92vc"] Nov 22 07:27:12 crc kubenswrapper[4853]: I1122 07:27:12.379864 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8npjkh"] Nov 22 07:27:12 crc kubenswrapper[4853]: I1122 07:27:12.381194 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8npjkh" Nov 22 07:27:12 crc kubenswrapper[4853]: I1122 07:27:12.394287 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8npjkh"] Nov 22 07:27:12 crc kubenswrapper[4853]: I1122 07:27:12.498702 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vwksx\" (UniqueName: \"kubernetes.io/projected/1f6ecf03-88ce-4001-8a9f-2cf202d8d6a0-kube-api-access-vwksx\") pod \"4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8npjkh\" (UID: \"1f6ecf03-88ce-4001-8a9f-2cf202d8d6a0\") " pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8npjkh" Nov 22 07:27:12 crc kubenswrapper[4853]: I1122 07:27:12.498809 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1f6ecf03-88ce-4001-8a9f-2cf202d8d6a0-bundle\") pod \"4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8npjkh\" (UID: \"1f6ecf03-88ce-4001-8a9f-2cf202d8d6a0\") " pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8npjkh" Nov 22 07:27:12 crc kubenswrapper[4853]: I1122 07:27:12.498852 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1f6ecf03-88ce-4001-8a9f-2cf202d8d6a0-util\") pod \"4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8npjkh\" (UID: \"1f6ecf03-88ce-4001-8a9f-2cf202d8d6a0\") " pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8npjkh" Nov 22 07:27:12 crc kubenswrapper[4853]: I1122 07:27:12.589560 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fc92vc" event={"ID":"2f822997-9c6e-4132-b606-11e336e2f4af","Type":"ContainerStarted","Data":"9fb5135e33ff696e7b0a51ee46744a031fbb32a942f81561ec1648d256513bf7"} Nov 22 07:27:12 crc kubenswrapper[4853]: I1122 07:27:12.600881 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vwksx\" (UniqueName: \"kubernetes.io/projected/1f6ecf03-88ce-4001-8a9f-2cf202d8d6a0-kube-api-access-vwksx\") pod \"4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8npjkh\" (UID: \"1f6ecf03-88ce-4001-8a9f-2cf202d8d6a0\") " pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8npjkh" Nov 22 07:27:12 crc kubenswrapper[4853]: I1122 07:27:12.600946 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1f6ecf03-88ce-4001-8a9f-2cf202d8d6a0-bundle\") pod \"4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8npjkh\" (UID: \"1f6ecf03-88ce-4001-8a9f-2cf202d8d6a0\") " pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8npjkh" Nov 22 07:27:12 crc kubenswrapper[4853]: I1122 07:27:12.600979 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1f6ecf03-88ce-4001-8a9f-2cf202d8d6a0-util\") pod \"4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8npjkh\" (UID: \"1f6ecf03-88ce-4001-8a9f-2cf202d8d6a0\") " pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8npjkh" Nov 22 07:27:12 crc 
kubenswrapper[4853]: I1122 07:27:12.601504 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1f6ecf03-88ce-4001-8a9f-2cf202d8d6a0-util\") pod \"4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8npjkh\" (UID: \"1f6ecf03-88ce-4001-8a9f-2cf202d8d6a0\") " pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8npjkh" Nov 22 07:27:12 crc kubenswrapper[4853]: I1122 07:27:12.601997 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1f6ecf03-88ce-4001-8a9f-2cf202d8d6a0-bundle\") pod \"4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8npjkh\" (UID: \"1f6ecf03-88ce-4001-8a9f-2cf202d8d6a0\") " pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8npjkh" Nov 22 07:27:12 crc kubenswrapper[4853]: I1122 07:27:12.625273 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vwksx\" (UniqueName: \"kubernetes.io/projected/1f6ecf03-88ce-4001-8a9f-2cf202d8d6a0-kube-api-access-vwksx\") pod \"4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8npjkh\" (UID: \"1f6ecf03-88ce-4001-8a9f-2cf202d8d6a0\") " pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8npjkh" Nov 22 07:27:12 crc kubenswrapper[4853]: I1122 07:27:12.697693 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8npjkh" Nov 22 07:27:13 crc kubenswrapper[4853]: I1122 07:27:13.128265 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8npjkh"] Nov 22 07:27:13 crc kubenswrapper[4853]: W1122 07:27:13.135311 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1f6ecf03_88ce_4001_8a9f_2cf202d8d6a0.slice/crio-02d6ddd2cc592ac15f2a91491d04a2bbaa86b26e6038499b33760b76b937a3a3 WatchSource:0}: Error finding container 02d6ddd2cc592ac15f2a91491d04a2bbaa86b26e6038499b33760b76b937a3a3: Status 404 returned error can't find the container with id 02d6ddd2cc592ac15f2a91491d04a2bbaa86b26e6038499b33760b76b937a3a3 Nov 22 07:27:13 crc kubenswrapper[4853]: I1122 07:27:13.597525 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8npjkh" event={"ID":"1f6ecf03-88ce-4001-8a9f-2cf202d8d6a0","Type":"ContainerStarted","Data":"02d6ddd2cc592ac15f2a91491d04a2bbaa86b26e6038499b33760b76b937a3a3"} Nov 22 07:27:15 crc kubenswrapper[4853]: I1122 07:27:15.617960 4853 generic.go:334] "Generic (PLEG): container finished" podID="1f6ecf03-88ce-4001-8a9f-2cf202d8d6a0" containerID="455559993e9d5869417f5d6aa72a1d8d53e841ee04f45abad58efb33d7a11b76" exitCode=0 Nov 22 07:27:15 crc kubenswrapper[4853]: I1122 07:27:15.618007 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8npjkh" event={"ID":"1f6ecf03-88ce-4001-8a9f-2cf202d8d6a0","Type":"ContainerDied","Data":"455559993e9d5869417f5d6aa72a1d8d53e841ee04f45abad58efb33d7a11b76"} Nov 22 07:27:15 crc kubenswrapper[4853]: I1122 07:27:15.620487 4853 generic.go:334] "Generic (PLEG): container finished" podID="2f822997-9c6e-4132-b606-11e336e2f4af" containerID="8a166b323aba3b9874fc09e5d2bd1d126edb2fb579c8fd4844e41266f61a90e4" 
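Each kubenswrapper record above is a klog line wrapped in a journald prefix; klog's header layout is fixed (Lmmdd hh:mm:ss.uuuuuu threadid file:line] msg, where L is the severity I/W/E/F), which makes entries such as the manager.go:1169 watch-event warning easy to pull apart mechanically. A throwaway parser, assuming one entry per line as printed here; the regex and field names are this sketch's own:

    import re

    # journald prefix + klog header: severity, mmdd, wall time, thread id,
    # source file:line, then the free-form message.
    KLOG = re.compile(
        r"^(?P<jts>\w{3} \d{2} \d{2}:\d{2}:\d{2}) (?P<host>\S+) kubenswrapper\[(?P<pid>\d+)\]: "
        r"(?P<sev>[IWEF])(?P<mmdd>\d{4}) (?P<ts>\d{2}:\d{2}:\d{2}\.\d+)\s+(?P<tid>\d+) "
        r"(?P<src>[\w./]+:\d+)\] (?P<msg>.*)$"
    )

    line = ('Nov 22 07:27:13 crc kubenswrapper[4853]: W1122 07:27:13.135311 4853 '
            'manager.go:1169] Failed to process watch event ...')
    m = KLOG.match(line)
    if m:
        print(m.group("sev"), m.group("src"), "->", m.group("msg"))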
Nov 22 07:27:15 crc kubenswrapper[4853]: I1122 07:27:15.620487 4853 generic.go:334] "Generic (PLEG): container finished" podID="2f822997-9c6e-4132-b606-11e336e2f4af" containerID="8a166b323aba3b9874fc09e5d2bd1d126edb2fb579c8fd4844e41266f61a90e4" exitCode=0
Nov 22 07:27:15 crc kubenswrapper[4853]: I1122 07:27:15.620551 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fc92vc" event={"ID":"2f822997-9c6e-4132-b606-11e336e2f4af","Type":"ContainerDied","Data":"8a166b323aba3b9874fc09e5d2bd1d126edb2fb579c8fd4844e41266f61a90e4"}
Nov 22 07:27:27 crc kubenswrapper[4853]: I1122 07:27:27.714635 4853 generic.go:334] "Generic (PLEG): container finished" podID="1f6ecf03-88ce-4001-8a9f-2cf202d8d6a0" containerID="0eee6680a35f7b0affaa03728d5575009831a05227f45253b1dd0d5d6af1085b" exitCode=0
Nov 22 07:27:27 crc kubenswrapper[4853]: I1122 07:27:27.714764 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8npjkh" event={"ID":"1f6ecf03-88ce-4001-8a9f-2cf202d8d6a0","Type":"ContainerDied","Data":"0eee6680a35f7b0affaa03728d5575009831a05227f45253b1dd0d5d6af1085b"}
Nov 22 07:27:27 crc kubenswrapper[4853]: I1122 07:27:27.719438 4853 generic.go:334] "Generic (PLEG): container finished" podID="2f822997-9c6e-4132-b606-11e336e2f4af" containerID="057f12caff0c6ec0682b5f7dc38bd25c35f628c51b1a9292b474f685e8092a12" exitCode=0
Nov 22 07:27:27 crc kubenswrapper[4853]: I1122 07:27:27.719472 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fc92vc" event={"ID":"2f822997-9c6e-4132-b606-11e336e2f4af","Type":"ContainerDied","Data":"057f12caff0c6ec0682b5f7dc38bd25c35f628c51b1a9292b474f685e8092a12"}
Nov 22 07:27:28 crc kubenswrapper[4853]: I1122 07:27:28.730891 4853 generic.go:334] "Generic (PLEG): container finished" podID="2f822997-9c6e-4132-b606-11e336e2f4af" containerID="b2db8cb4823e46ca3e0d6eff33112cc6b7e9031fddf9eef7715e9a50d01c914b" exitCode=0
Nov 22 07:27:28 crc kubenswrapper[4853]: I1122 07:27:28.730977 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fc92vc" event={"ID":"2f822997-9c6e-4132-b606-11e336e2f4af","Type":"ContainerDied","Data":"b2db8cb4823e46ca3e0d6eff33112cc6b7e9031fddf9eef7715e9a50d01c914b"}
Nov 22 07:27:28 crc kubenswrapper[4853]: I1122 07:27:28.736781 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8npjkh" event={"ID":"1f6ecf03-88ce-4001-8a9f-2cf202d8d6a0","Type":"ContainerDied","Data":"3281a7ca64f2962afd64a97971f8a02da1771f353ad27e6267e682f1e8941c66"}
Nov 22 07:27:28 crc kubenswrapper[4853]: I1122 07:27:28.736730 4853 generic.go:334] "Generic (PLEG): container finished" podID="1f6ecf03-88ce-4001-8a9f-2cf202d8d6a0" containerID="3281a7ca64f2962afd64a97971f8a02da1771f353ad27e6267e682f1e8941c66" exitCode=0
Nov 22 07:27:30 crc kubenswrapper[4853]: I1122 07:27:30.046342 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8npjkh"
Nov 22 07:27:30 crc kubenswrapper[4853]: I1122 07:27:30.051990 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fc92vc"
Nov 22 07:27:30 crc kubenswrapper[4853]: I1122 07:27:30.198273 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1f6ecf03-88ce-4001-8a9f-2cf202d8d6a0-util\") pod \"1f6ecf03-88ce-4001-8a9f-2cf202d8d6a0\" (UID: \"1f6ecf03-88ce-4001-8a9f-2cf202d8d6a0\") "
Nov 22 07:27:30 crc kubenswrapper[4853]: I1122 07:27:30.198412 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vwksx\" (UniqueName: \"kubernetes.io/projected/1f6ecf03-88ce-4001-8a9f-2cf202d8d6a0-kube-api-access-vwksx\") pod \"1f6ecf03-88ce-4001-8a9f-2cf202d8d6a0\" (UID: \"1f6ecf03-88ce-4001-8a9f-2cf202d8d6a0\") "
Nov 22 07:27:30 crc kubenswrapper[4853]: I1122 07:27:30.198451 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2f822997-9c6e-4132-b606-11e336e2f4af-util\") pod \"2f822997-9c6e-4132-b606-11e336e2f4af\" (UID: \"2f822997-9c6e-4132-b606-11e336e2f4af\") "
Nov 22 07:27:30 crc kubenswrapper[4853]: I1122 07:27:30.198488 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1f6ecf03-88ce-4001-8a9f-2cf202d8d6a0-bundle\") pod \"1f6ecf03-88ce-4001-8a9f-2cf202d8d6a0\" (UID: \"1f6ecf03-88ce-4001-8a9f-2cf202d8d6a0\") "
Nov 22 07:27:30 crc kubenswrapper[4853]: I1122 07:27:30.198515 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2f822997-9c6e-4132-b606-11e336e2f4af-bundle\") pod \"2f822997-9c6e-4132-b606-11e336e2f4af\" (UID: \"2f822997-9c6e-4132-b606-11e336e2f4af\") "
Nov 22 07:27:30 crc kubenswrapper[4853]: I1122 07:27:30.198558 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8l6fm\" (UniqueName: \"kubernetes.io/projected/2f822997-9c6e-4132-b606-11e336e2f4af-kube-api-access-8l6fm\") pod \"2f822997-9c6e-4132-b606-11e336e2f4af\" (UID: \"2f822997-9c6e-4132-b606-11e336e2f4af\") "
Nov 22 07:27:30 crc kubenswrapper[4853]: I1122 07:27:30.199707 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f6ecf03-88ce-4001-8a9f-2cf202d8d6a0-bundle" (OuterVolumeSpecName: "bundle") pod "1f6ecf03-88ce-4001-8a9f-2cf202d8d6a0" (UID: "1f6ecf03-88ce-4001-8a9f-2cf202d8d6a0"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 22 07:27:30 crc kubenswrapper[4853]: I1122 07:27:30.199947 4853 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1f6ecf03-88ce-4001-8a9f-2cf202d8d6a0-bundle\") on node \"crc\" DevicePath \"\""
Nov 22 07:27:30 crc kubenswrapper[4853]: I1122 07:27:30.200074 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2f822997-9c6e-4132-b606-11e336e2f4af-bundle" (OuterVolumeSpecName: "bundle") pod "2f822997-9c6e-4132-b606-11e336e2f4af" (UID: "2f822997-9c6e-4132-b606-11e336e2f4af"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 22 07:27:30 crc kubenswrapper[4853]: I1122 07:27:30.205690 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f6ecf03-88ce-4001-8a9f-2cf202d8d6a0-kube-api-access-vwksx" (OuterVolumeSpecName: "kube-api-access-vwksx") pod "1f6ecf03-88ce-4001-8a9f-2cf202d8d6a0" (UID: "1f6ecf03-88ce-4001-8a9f-2cf202d8d6a0"). InnerVolumeSpecName "kube-api-access-vwksx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 22 07:27:30 crc kubenswrapper[4853]: I1122 07:27:30.206004 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f822997-9c6e-4132-b606-11e336e2f4af-kube-api-access-8l6fm" (OuterVolumeSpecName: "kube-api-access-8l6fm") pod "2f822997-9c6e-4132-b606-11e336e2f4af" (UID: "2f822997-9c6e-4132-b606-11e336e2f4af"). InnerVolumeSpecName "kube-api-access-8l6fm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 22 07:27:30 crc kubenswrapper[4853]: I1122 07:27:30.209624 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f6ecf03-88ce-4001-8a9f-2cf202d8d6a0-util" (OuterVolumeSpecName: "util") pod "1f6ecf03-88ce-4001-8a9f-2cf202d8d6a0" (UID: "1f6ecf03-88ce-4001-8a9f-2cf202d8d6a0"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 22 07:27:30 crc kubenswrapper[4853]: I1122 07:27:30.212440 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2f822997-9c6e-4132-b606-11e336e2f4af-util" (OuterVolumeSpecName: "util") pod "2f822997-9c6e-4132-b606-11e336e2f4af" (UID: "2f822997-9c6e-4132-b606-11e336e2f4af"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 22 07:27:30 crc kubenswrapper[4853]: I1122 07:27:30.301109 4853 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1f6ecf03-88ce-4001-8a9f-2cf202d8d6a0-util\") on node \"crc\" DevicePath \"\""
Nov 22 07:27:30 crc kubenswrapper[4853]: I1122 07:27:30.301172 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vwksx\" (UniqueName: \"kubernetes.io/projected/1f6ecf03-88ce-4001-8a9f-2cf202d8d6a0-kube-api-access-vwksx\") on node \"crc\" DevicePath \"\""
Nov 22 07:27:30 crc kubenswrapper[4853]: I1122 07:27:30.301188 4853 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2f822997-9c6e-4132-b606-11e336e2f4af-util\") on node \"crc\" DevicePath \"\""
Nov 22 07:27:30 crc kubenswrapper[4853]: I1122 07:27:30.301202 4853 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2f822997-9c6e-4132-b606-11e336e2f4af-bundle\") on node \"crc\" DevicePath \"\""
Nov 22 07:27:30 crc kubenswrapper[4853]: I1122 07:27:30.301216 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8l6fm\" (UniqueName: \"kubernetes.io/projected/2f822997-9c6e-4132-b606-11e336e2f4af-kube-api-access-8l6fm\") on node \"crc\" DevicePath \"\""
Nov 22 07:27:30 crc kubenswrapper[4853]: I1122 07:27:30.752986 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8npjkh"
Nov 22 07:27:30 crc kubenswrapper[4853]: I1122 07:27:30.753360 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8npjkh" event={"ID":"1f6ecf03-88ce-4001-8a9f-2cf202d8d6a0","Type":"ContainerDied","Data":"02d6ddd2cc592ac15f2a91491d04a2bbaa86b26e6038499b33760b76b937a3a3"}
Nov 22 07:27:30 crc kubenswrapper[4853]: I1122 07:27:30.753422 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="02d6ddd2cc592ac15f2a91491d04a2bbaa86b26e6038499b33760b76b937a3a3"
Nov 22 07:27:30 crc kubenswrapper[4853]: I1122 07:27:30.755541 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fc92vc" event={"ID":"2f822997-9c6e-4132-b606-11e336e2f4af","Type":"ContainerDied","Data":"9fb5135e33ff696e7b0a51ee46744a031fbb32a942f81561ec1648d256513bf7"}
Nov 22 07:27:30 crc kubenswrapper[4853]: I1122 07:27:30.755581 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9fb5135e33ff696e7b0a51ee46744a031fbb32a942f81561ec1648d256513bf7"
Nov 22 07:27:30 crc kubenswrapper[4853]: I1122 07:27:30.755605 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fc92vc"
Nov 22 07:27:31 crc kubenswrapper[4853]: I1122 07:27:31.297221 4853 patch_prober.go:28] interesting pod/machine-config-daemon-fflvd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 22 07:27:31 crc kubenswrapper[4853]: I1122 07:27:31.297321 4853 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 22 07:27:31 crc kubenswrapper[4853]: I1122 07:27:31.297385 4853 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fflvd"
Nov 22 07:27:31 crc kubenswrapper[4853]: I1122 07:27:31.298252 4853 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"453b1ef38ab6b08bb125d45890335ad304d3ef7d9d0a68f91fb10cfac32c00e8"} pod="openshift-machine-config-operator/machine-config-daemon-fflvd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Nov 22 07:27:31 crc kubenswrapper[4853]: I1122 07:27:31.298792 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" containerName="machine-config-daemon" containerID="cri-o://453b1ef38ab6b08bb125d45890335ad304d3ef7d9d0a68f91fb10cfac32c00e8" gracePeriod=600
Nov 22 07:27:31 crc kubenswrapper[4853]: I1122 07:27:31.767357 4853 generic.go:334] "Generic (PLEG): container finished" podID="476c875a-2b87-419a-8042-0ba059620fd8" containerID="453b1ef38ab6b08bb125d45890335ad304d3ef7d9d0a68f91fb10cfac32c00e8" exitCode=0
Nov 22 07:27:31 crc kubenswrapper[4853]: I1122 07:27:31.767422 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" event={"ID":"476c875a-2b87-419a-8042-0ba059620fd8","Type":"ContainerDied","Data":"453b1ef38ab6b08bb125d45890335ad304d3ef7d9d0a68f91fb10cfac32c00e8"}
Nov 22 07:27:31 crc kubenswrapper[4853]: I1122 07:27:31.767480 4853 scope.go:117] "RemoveContainer" containerID="d536b8e86c6cc6b7e2a4743a840157e3f85808df82d57450ab2cf611ca0528d7"
Nov 22 07:27:32 crc kubenswrapper[4853]: I1122 07:27:32.776004 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" event={"ID":"476c875a-2b87-419a-8042-0ba059620fd8","Type":"ContainerStarted","Data":"c00f978e65a6d1e77a568c918905dcabf620ebbd24981dc536007d357d44ae2e"}
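The machine-config-daemon restart above is the liveness-probe path end to end: failures logged at 07:26:31, 07:27:01 and 07:27:31 (30 seconds apart, consistent with a failureThreshold of 3 on a 30-second period), then "Killing container with a grace period" using the pod's termination grace period (600s here), ContainerDied, removal of the previous dead instance, and ContainerStarted for the replacement. The check itself is a plain HTTP GET that passes on any 2xx/3xx response and fails on anything else, including the connection-refused error in the log; a rough Python stand-in against the endpoint shown:

    import urllib.request, urllib.error

    def probe(url="http://127.0.0.1:8798/health", timeout=1.0):
        # Approximates the kubelet's HTTP liveness check: 200-399 passes,
        # anything else (including connection refused, as logged) fails.
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                return 200 <= resp.status < 400
        except urllib.error.HTTPError as e:
            return 200 <= e.code < 400
        except OSError:  # URLError subclasses OSError; covers ECONNREFUSED
            return False

    print("healthy" if probe() else "probe failed")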
07:27:38.136955 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f822997-9c6e-4132-b606-11e336e2f4af" containerName="extract"
Nov 22 07:27:38 crc kubenswrapper[4853]: I1122 07:27:38.137878 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators-redhat/loki-operator-controller-manager-5bb8bb4577-rspn5"
Nov 22 07:27:38 crc kubenswrapper[4853]: I1122 07:27:38.141147 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-controller-manager-service-cert"
Nov 22 07:27:38 crc kubenswrapper[4853]: I1122 07:27:38.141331 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"openshift-service-ca.crt"
Nov 22 07:27:38 crc kubenswrapper[4853]: I1122 07:27:38.142309 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"kube-root-ca.crt"
Nov 22 07:27:38 crc kubenswrapper[4853]: I1122 07:27:38.142355 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-metrics"
Nov 22 07:27:38 crc kubenswrapper[4853]: I1122 07:27:38.143285 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-controller-manager-dockercfg-f2pmx"
Nov 22 07:27:38 crc kubenswrapper[4853]: I1122 07:27:38.143682 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"loki-operator-manager-config"
Nov 22 07:27:38 crc kubenswrapper[4853]: I1122 07:27:38.157820 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-5bb8bb4577-rspn5"]
Nov 22 07:27:38 crc kubenswrapper[4853]: I1122 07:27:38.225876 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/50b94c6e-d5b7-4720-af4c-8922035ca146-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-5bb8bb4577-rspn5\" (UID: \"50b94c6e-d5b7-4720-af4c-8922035ca146\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5bb8bb4577-rspn5"
Nov 22 07:27:38 crc kubenswrapper[4853]: I1122 07:27:38.225950 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/50b94c6e-d5b7-4720-af4c-8922035ca146-manager-config\") pod \"loki-operator-controller-manager-5bb8bb4577-rspn5\" (UID: \"50b94c6e-d5b7-4720-af4c-8922035ca146\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5bb8bb4577-rspn5"
Nov 22 07:27:38 crc kubenswrapper[4853]: I1122 07:27:38.226115 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lfj5d\" (UniqueName: \"kubernetes.io/projected/50b94c6e-d5b7-4720-af4c-8922035ca146-kube-api-access-lfj5d\") pod \"loki-operator-controller-manager-5bb8bb4577-rspn5\" (UID: \"50b94c6e-d5b7-4720-af4c-8922035ca146\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5bb8bb4577-rspn5"
Nov 22 07:27:38 crc kubenswrapper[4853]: I1122 07:27:38.226246 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/50b94c6e-d5b7-4720-af4c-8922035ca146-webhook-cert\") pod \"loki-operator-controller-manager-5bb8bb4577-rspn5\" (UID: \"50b94c6e-d5b7-4720-af4c-8922035ca146\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5bb8bb4577-rspn5"
Nov 22 07:27:38 crc kubenswrapper[4853]: I1122 07:27:38.226311 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/50b94c6e-d5b7-4720-af4c-8922035ca146-apiservice-cert\") pod \"loki-operator-controller-manager-5bb8bb4577-rspn5\" (UID: \"50b94c6e-d5b7-4720-af4c-8922035ca146\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5bb8bb4577-rspn5"
Nov 22 07:27:38 crc kubenswrapper[4853]: I1122 07:27:38.328308 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/50b94c6e-d5b7-4720-af4c-8922035ca146-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-5bb8bb4577-rspn5\" (UID: \"50b94c6e-d5b7-4720-af4c-8922035ca146\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5bb8bb4577-rspn5"
Nov 22 07:27:38 crc kubenswrapper[4853]: I1122 07:27:38.328369 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/50b94c6e-d5b7-4720-af4c-8922035ca146-manager-config\") pod \"loki-operator-controller-manager-5bb8bb4577-rspn5\" (UID: \"50b94c6e-d5b7-4720-af4c-8922035ca146\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5bb8bb4577-rspn5"
Nov 22 07:27:38 crc kubenswrapper[4853]: I1122 07:27:38.328426 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lfj5d\" (UniqueName: \"kubernetes.io/projected/50b94c6e-d5b7-4720-af4c-8922035ca146-kube-api-access-lfj5d\") pod \"loki-operator-controller-manager-5bb8bb4577-rspn5\" (UID: \"50b94c6e-d5b7-4720-af4c-8922035ca146\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5bb8bb4577-rspn5"
Nov 22 07:27:38 crc kubenswrapper[4853]: I1122 07:27:38.328469 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/50b94c6e-d5b7-4720-af4c-8922035ca146-webhook-cert\") pod \"loki-operator-controller-manager-5bb8bb4577-rspn5\" (UID: \"50b94c6e-d5b7-4720-af4c-8922035ca146\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5bb8bb4577-rspn5"
Nov 22 07:27:38 crc kubenswrapper[4853]: I1122 07:27:38.328500 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/50b94c6e-d5b7-4720-af4c-8922035ca146-apiservice-cert\") pod \"loki-operator-controller-manager-5bb8bb4577-rspn5\" (UID: \"50b94c6e-d5b7-4720-af4c-8922035ca146\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5bb8bb4577-rspn5"
Nov 22 07:27:38 crc kubenswrapper[4853]: I1122 07:27:38.329794 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/50b94c6e-d5b7-4720-af4c-8922035ca146-manager-config\") pod \"loki-operator-controller-manager-5bb8bb4577-rspn5\" (UID: \"50b94c6e-d5b7-4720-af4c-8922035ca146\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5bb8bb4577-rspn5"
Nov 22 07:27:38 crc kubenswrapper[4853]: I1122 07:27:38.342545 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/50b94c6e-d5b7-4720-af4c-8922035ca146-apiservice-cert\") pod \"loki-operator-controller-manager-5bb8bb4577-rspn5\" (UID: \"50b94c6e-d5b7-4720-af4c-8922035ca146\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5bb8bb4577-rspn5"
Nov 22 07:27:38 crc kubenswrapper[4853]: I1122 07:27:38.353595 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/50b94c6e-d5b7-4720-af4c-8922035ca146-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-5bb8bb4577-rspn5\" (UID: \"50b94c6e-d5b7-4720-af4c-8922035ca146\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5bb8bb4577-rspn5"
Nov 22 07:27:38 crc kubenswrapper[4853]: I1122 07:27:38.356721 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/50b94c6e-d5b7-4720-af4c-8922035ca146-webhook-cert\") pod \"loki-operator-controller-manager-5bb8bb4577-rspn5\" (UID: \"50b94c6e-d5b7-4720-af4c-8922035ca146\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5bb8bb4577-rspn5"
Nov 22 07:27:38 crc kubenswrapper[4853]: I1122 07:27:38.370817 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lfj5d\" (UniqueName: \"kubernetes.io/projected/50b94c6e-d5b7-4720-af4c-8922035ca146-kube-api-access-lfj5d\") pod \"loki-operator-controller-manager-5bb8bb4577-rspn5\" (UID: \"50b94c6e-d5b7-4720-af4c-8922035ca146\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5bb8bb4577-rspn5"
Nov 22 07:27:38 crc kubenswrapper[4853]: I1122 07:27:38.470846 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators-redhat/loki-operator-controller-manager-5bb8bb4577-rspn5"
Nov 22 07:27:39 crc kubenswrapper[4853]: I1122 07:27:39.074014 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-5bb8bb4577-rspn5"]
Nov 22 07:27:39 crc kubenswrapper[4853]: I1122 07:27:39.827670 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-5bb8bb4577-rspn5" event={"ID":"50b94c6e-d5b7-4720-af4c-8922035ca146","Type":"ContainerStarted","Data":"6da5c47fb33d18568a183c587bab9102d629d74345419b0fc5c75512fffad601"}
Need to start a new one" pod="openshift-logging/cluster-logging-operator-ff9846bd-j4wf4" Nov 22 07:27:43 crc kubenswrapper[4853]: I1122 07:27:43.538634 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"openshift-service-ca.crt" Nov 22 07:27:43 crc kubenswrapper[4853]: I1122 07:27:43.539008 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"kube-root-ca.crt" Nov 22 07:27:43 crc kubenswrapper[4853]: I1122 07:27:43.539174 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"cluster-logging-operator-dockercfg-xml6h" Nov 22 07:27:43 crc kubenswrapper[4853]: I1122 07:27:43.562688 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/cluster-logging-operator-ff9846bd-j4wf4"] Nov 22 07:27:43 crc kubenswrapper[4853]: I1122 07:27:43.725833 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m298h\" (UniqueName: \"kubernetes.io/projected/fca85b6a-849a-4786-baa9-102f9651efb7-kube-api-access-m298h\") pod \"cluster-logging-operator-ff9846bd-j4wf4\" (UID: \"fca85b6a-849a-4786-baa9-102f9651efb7\") " pod="openshift-logging/cluster-logging-operator-ff9846bd-j4wf4" Nov 22 07:27:43 crc kubenswrapper[4853]: I1122 07:27:43.827172 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m298h\" (UniqueName: \"kubernetes.io/projected/fca85b6a-849a-4786-baa9-102f9651efb7-kube-api-access-m298h\") pod \"cluster-logging-operator-ff9846bd-j4wf4\" (UID: \"fca85b6a-849a-4786-baa9-102f9651efb7\") " pod="openshift-logging/cluster-logging-operator-ff9846bd-j4wf4" Nov 22 07:27:43 crc kubenswrapper[4853]: I1122 07:27:43.848215 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m298h\" (UniqueName: \"kubernetes.io/projected/fca85b6a-849a-4786-baa9-102f9651efb7-kube-api-access-m298h\") pod \"cluster-logging-operator-ff9846bd-j4wf4\" (UID: \"fca85b6a-849a-4786-baa9-102f9651efb7\") " pod="openshift-logging/cluster-logging-operator-ff9846bd-j4wf4" Nov 22 07:27:43 crc kubenswrapper[4853]: I1122 07:27:43.860537 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/cluster-logging-operator-ff9846bd-j4wf4" Nov 22 07:27:45 crc kubenswrapper[4853]: I1122 07:27:45.016809 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/cluster-logging-operator-ff9846bd-j4wf4"] Nov 22 07:27:45 crc kubenswrapper[4853]: I1122 07:27:45.876606 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/cluster-logging-operator-ff9846bd-j4wf4" event={"ID":"fca85b6a-849a-4786-baa9-102f9651efb7","Type":"ContainerStarted","Data":"7c07f908129662be9da9b46cc0ae31dfec65ff0fce1fd04684f78a18440212ce"} Nov 22 07:27:45 crc kubenswrapper[4853]: I1122 07:27:45.879182 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-5bb8bb4577-rspn5" event={"ID":"50b94c6e-d5b7-4720-af4c-8922035ca146","Type":"ContainerStarted","Data":"b00343fac87512ce675bb259b8a1f1021e60aaaea9286d4b790e5c63858ee976"} Nov 22 07:27:58 crc kubenswrapper[4853]: I1122 07:27:58.998820 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/cluster-logging-operator-ff9846bd-j4wf4" event={"ID":"fca85b6a-849a-4786-baa9-102f9651efb7","Type":"ContainerStarted","Data":"cc12ed21b6b3fd5664e755523564d9436ca03c9ccc243d37fd4ecbb0f62c052d"} Nov 22 07:28:00 crc kubenswrapper[4853]: I1122 07:28:00.011776 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-5bb8bb4577-rspn5" event={"ID":"50b94c6e-d5b7-4720-af4c-8922035ca146","Type":"ContainerStarted","Data":"0377933112cdad036bdbd6c815d4e4b763d819762434c6660b742835addc5344"} Nov 22 07:28:00 crc kubenswrapper[4853]: I1122 07:28:00.012193 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators-redhat/loki-operator-controller-manager-5bb8bb4577-rspn5" Nov 22 07:28:00 crc kubenswrapper[4853]: I1122 07:28:00.014915 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators-redhat/loki-operator-controller-manager-5bb8bb4577-rspn5" Nov 22 07:28:00 crc kubenswrapper[4853]: I1122 07:28:00.045796 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators-redhat/loki-operator-controller-manager-5bb8bb4577-rspn5" podStartSLOduration=1.796454693 podStartE2EDuration="22.045773963s" podCreationTimestamp="2025-11-22 07:27:38 +0000 UTC" firstStartedPulling="2025-11-22 07:27:39.09093199 +0000 UTC m=+1057.931554616" lastFinishedPulling="2025-11-22 07:27:59.34025126 +0000 UTC m=+1078.180873886" observedRunningTime="2025-11-22 07:28:00.037939153 +0000 UTC m=+1078.878561779" watchObservedRunningTime="2025-11-22 07:28:00.045773963 +0000 UTC m=+1078.886396589" Nov 22 07:28:00 crc kubenswrapper[4853]: I1122 07:28:00.064076 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/cluster-logging-operator-ff9846bd-j4wf4" podStartSLOduration=3.329672645 podStartE2EDuration="17.064041853s" podCreationTimestamp="2025-11-22 07:27:43 +0000 UTC" firstStartedPulling="2025-11-22 07:27:45.046665883 +0000 UTC m=+1063.887288509" lastFinishedPulling="2025-11-22 07:27:58.781035081 +0000 UTC m=+1077.621657717" observedRunningTime="2025-11-22 07:28:00.059266346 +0000 UTC m=+1078.899888982" watchObservedRunningTime="2025-11-22 07:28:00.064041853 +0000 UTC m=+1078.904664479" Nov 22 07:28:06 crc kubenswrapper[4853]: I1122 07:28:06.390898 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["minio-dev/minio"] Nov 22 07:28:06 crc 
Nov 22 07:28:06 crc kubenswrapper[4853]: I1122 07:28:06.390898 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["minio-dev/minio"]
Nov 22 07:28:06 crc kubenswrapper[4853]: I1122 07:28:06.392752 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="minio-dev/minio"
Nov 22 07:28:06 crc kubenswrapper[4853]: I1122 07:28:06.395809 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"minio-dev"/"openshift-service-ca.crt"
Nov 22 07:28:06 crc kubenswrapper[4853]: I1122 07:28:06.396118 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"minio-dev"/"kube-root-ca.crt"
Nov 22 07:28:06 crc kubenswrapper[4853]: I1122 07:28:06.405569 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["minio-dev/minio"]
Nov 22 07:28:06 crc kubenswrapper[4853]: I1122 07:28:06.488527 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9phq7\" (UniqueName: \"kubernetes.io/projected/44765b8f-ce88-47bd-9439-f4759896a617-kube-api-access-9phq7\") pod \"minio\" (UID: \"44765b8f-ce88-47bd-9439-f4759896a617\") " pod="minio-dev/minio"
Nov 22 07:28:06 crc kubenswrapper[4853]: I1122 07:28:06.488679 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-86174e7a-76e8-46ed-9112-753c2a8f0648\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-86174e7a-76e8-46ed-9112-753c2a8f0648\") pod \"minio\" (UID: \"44765b8f-ce88-47bd-9439-f4759896a617\") " pod="minio-dev/minio"
Nov 22 07:28:06 crc kubenswrapper[4853]: I1122 07:28:06.590581 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-86174e7a-76e8-46ed-9112-753c2a8f0648\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-86174e7a-76e8-46ed-9112-753c2a8f0648\") pod \"minio\" (UID: \"44765b8f-ce88-47bd-9439-f4759896a617\") " pod="minio-dev/minio"
Nov 22 07:28:06 crc kubenswrapper[4853]: I1122 07:28:06.590919 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9phq7\" (UniqueName: \"kubernetes.io/projected/44765b8f-ce88-47bd-9439-f4759896a617-kube-api-access-9phq7\") pod \"minio\" (UID: \"44765b8f-ce88-47bd-9439-f4759896a617\") " pod="minio-dev/minio"
Nov 22 07:28:06 crc kubenswrapper[4853]: I1122 07:28:06.595772 4853 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
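The csi_attacher.go line is the kubelet noticing that this CSI node plugin (the kubevirt hostpath provisioner) does not advertise the STAGE_UNSTAGE_VOLUME capability, so it skips the NodeStageVolume step ("MountDevice") and proceeds directly to NodePublishVolume, which is the MountVolume.SetUp that succeeds just below. A sketch of that capability probe against a CSI node plugin; the socket path is hypothetical, while the csi bindings and the capability constant are the real container-storage-interface spec package:

```go
package main

import (
	"context"
	"fmt"

	"github.com/container-storage-interface/spec/lib/go/csi"
	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
)

func main() {
	// Hypothetical socket; real CSI node sockets live under /var/lib/kubelet/plugins/<driver>/.
	conn, err := grpc.Dial("unix:///var/lib/kubelet/plugins/hostpath.csi/csi.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	resp, err := csi.NewNodeClient(conn).NodeGetCapabilities(
		context.Background(), &csi.NodeGetCapabilitiesRequest{})
	if err != nil {
		panic(err)
	}
	staged := false
	for _, c := range resp.GetCapabilities() {
		if rpc := c.GetRpc(); rpc != nil && rpc.GetType() == csi.NodeServiceCapability_RPC_STAGE_UNSTAGE_VOLUME {
			staged = true // plugin wants NodeStageVolume before NodePublishVolume
		}
	}
	fmt.Println("STAGE_UNSTAGE_VOLUME advertised:", staged) // false for this provisioner
}
```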
Nov 22 07:28:06 crc kubenswrapper[4853]: I1122 07:28:06.595864 4853 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-86174e7a-76e8-46ed-9112-753c2a8f0648\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-86174e7a-76e8-46ed-9112-753c2a8f0648\") pod \"minio\" (UID: \"44765b8f-ce88-47bd-9439-f4759896a617\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/6b5b328ef91cea8dce3a07ec4ffc639aa2bc2a6a463cf49f5a2b5b857d3469f6/globalmount\"" pod="minio-dev/minio"
Nov 22 07:28:06 crc kubenswrapper[4853]: I1122 07:28:06.615583 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9phq7\" (UniqueName: \"kubernetes.io/projected/44765b8f-ce88-47bd-9439-f4759896a617-kube-api-access-9phq7\") pod \"minio\" (UID: \"44765b8f-ce88-47bd-9439-f4759896a617\") " pod="minio-dev/minio"
Nov 22 07:28:06 crc kubenswrapper[4853]: I1122 07:28:06.629304 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-86174e7a-76e8-46ed-9112-753c2a8f0648\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-86174e7a-76e8-46ed-9112-753c2a8f0648\") pod \"minio\" (UID: \"44765b8f-ce88-47bd-9439-f4759896a617\") " pod="minio-dev/minio"
Nov 22 07:28:06 crc kubenswrapper[4853]: I1122 07:28:06.714290 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="minio-dev/minio"
Nov 22 07:28:07 crc kubenswrapper[4853]: I1122 07:28:07.160705 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["minio-dev/minio"]
Nov 22 07:28:08 crc kubenswrapper[4853]: I1122 07:28:08.071029 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="minio-dev/minio" event={"ID":"44765b8f-ce88-47bd-9439-f4759896a617","Type":"ContainerStarted","Data":"8e6e54ffe8ee69b76134ffb94873e5b349e167176c0ae4ac1e59118770c83263"}
Nov 22 07:28:11 crc kubenswrapper[4853]: I1122 07:28:11.094522 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="minio-dev/minio" event={"ID":"44765b8f-ce88-47bd-9439-f4759896a617","Type":"ContainerStarted","Data":"21930c45dea7183ac29597d2556763885637f6669bfca6cfb596017ed4fa7753"}
Nov 22 07:28:11 crc kubenswrapper[4853]: I1122 07:28:11.124083 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="minio-dev/minio" podStartSLOduration=4.403532021 podStartE2EDuration="8.124057241s" podCreationTimestamp="2025-11-22 07:28:03 +0000 UTC" firstStartedPulling="2025-11-22 07:28:07.171250011 +0000 UTC m=+1086.011872637" lastFinishedPulling="2025-11-22 07:28:10.891775231 +0000 UTC m=+1089.732397857" observedRunningTime="2025-11-22 07:28:11.119457068 +0000 UTC m=+1089.960079694" watchObservedRunningTime="2025-11-22 07:28:11.124057241 +0000 UTC m=+1089.964679867"
Nov 22 07:28:14 crc kubenswrapper[4853]: I1122 07:28:14.831945 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-distributor-76cc67bf56-bfp4g"]
Nov 22 07:28:14 crc kubenswrapper[4853]: I1122 07:28:14.833793 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-distributor-76cc67bf56-bfp4g"
Nov 22 07:28:14 crc kubenswrapper[4853]: I1122 07:28:14.843140 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-distributor-grpc"
Nov 22 07:28:14 crc kubenswrapper[4853]: I1122 07:28:14.843570 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-dockercfg-ggdzh"
Nov 22 07:28:14 crc kubenswrapper[4853]: I1122 07:28:14.843779 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-config"
Nov 22 07:28:14 crc kubenswrapper[4853]: I1122 07:28:14.844183 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-ca-bundle"
Nov 22 07:28:14 crc kubenswrapper[4853]: I1122 07:28:14.844303 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-distributor-http"
Nov 22 07:28:14 crc kubenswrapper[4853]: I1122 07:28:14.857171 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-distributor-76cc67bf56-bfp4g"]
Nov 22 07:28:14 crc kubenswrapper[4853]: I1122 07:28:14.932994 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2eba451f-bb08-4f70-ad59-aa64a216f265-logging-loki-ca-bundle\") pod \"logging-loki-distributor-76cc67bf56-bfp4g\" (UID: \"2eba451f-bb08-4f70-ad59-aa64a216f265\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-bfp4g"
Nov 22 07:28:14 crc kubenswrapper[4853]: I1122 07:28:14.933063 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2eba451f-bb08-4f70-ad59-aa64a216f265-config\") pod \"logging-loki-distributor-76cc67bf56-bfp4g\" (UID: \"2eba451f-bb08-4f70-ad59-aa64a216f265\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-bfp4g"
Nov 22 07:28:14 crc kubenswrapper[4853]: I1122 07:28:14.933127 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbk22\" (UniqueName: \"kubernetes.io/projected/2eba451f-bb08-4f70-ad59-aa64a216f265-kube-api-access-lbk22\") pod \"logging-loki-distributor-76cc67bf56-bfp4g\" (UID: \"2eba451f-bb08-4f70-ad59-aa64a216f265\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-bfp4g"
Nov 22 07:28:14 crc kubenswrapper[4853]: I1122 07:28:14.933158 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/2eba451f-bb08-4f70-ad59-aa64a216f265-logging-loki-distributor-grpc\") pod \"logging-loki-distributor-76cc67bf56-bfp4g\" (UID: \"2eba451f-bb08-4f70-ad59-aa64a216f265\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-bfp4g"
Nov 22 07:28:14 crc kubenswrapper[4853]: I1122 07:28:14.933411 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-distributor-http\" (UniqueName: \"kubernetes.io/secret/2eba451f-bb08-4f70-ad59-aa64a216f265-logging-loki-distributor-http\") pod \"logging-loki-distributor-76cc67bf56-bfp4g\" (UID: \"2eba451f-bb08-4f70-ad59-aa64a216f265\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-bfp4g"
Nov 22 07:28:15 crc kubenswrapper[4853]: I1122 07:28:15.013517 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-querier-5895d59bb8-rs9nq"]
Nov 22 07:28:15 crc kubenswrapper[4853]: I1122 07:28:15.014488 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-querier-5895d59bb8-rs9nq"
Nov 22 07:28:15 crc kubenswrapper[4853]: I1122 07:28:15.024907 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-querier-grpc"
Nov 22 07:28:15 crc kubenswrapper[4853]: I1122 07:28:15.027210 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-querier-http"
Nov 22 07:28:15 crc kubenswrapper[4853]: I1122 07:28:15.028847 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-s3"
Nov 22 07:28:15 crc kubenswrapper[4853]: I1122 07:28:15.035500 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-querier-grpc\" (UniqueName: \"kubernetes.io/secret/13c77518-7ced-4c03-a300-d00ec52fa068-logging-loki-querier-grpc\") pod \"logging-loki-querier-5895d59bb8-rs9nq\" (UID: \"13c77518-7ced-4c03-a300-d00ec52fa068\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-rs9nq"
Nov 22 07:28:15 crc kubenswrapper[4853]: I1122 07:28:15.035665 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-distributor-http\" (UniqueName: \"kubernetes.io/secret/2eba451f-bb08-4f70-ad59-aa64a216f265-logging-loki-distributor-http\") pod \"logging-loki-distributor-76cc67bf56-bfp4g\" (UID: \"2eba451f-bb08-4f70-ad59-aa64a216f265\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-bfp4g"
Nov 22 07:28:15 crc kubenswrapper[4853]: I1122 07:28:15.035705 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13c77518-7ced-4c03-a300-d00ec52fa068-config\") pod \"logging-loki-querier-5895d59bb8-rs9nq\" (UID: \"13c77518-7ced-4c03-a300-d00ec52fa068\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-rs9nq"
Nov 22 07:28:15 crc kubenswrapper[4853]: I1122 07:28:15.035739 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/13c77518-7ced-4c03-a300-d00ec52fa068-logging-loki-ca-bundle\") pod \"logging-loki-querier-5895d59bb8-rs9nq\" (UID: \"13c77518-7ced-4c03-a300-d00ec52fa068\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-rs9nq"
Nov 22 07:28:15 crc kubenswrapper[4853]: I1122 07:28:15.035794 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/13c77518-7ced-4c03-a300-d00ec52fa068-logging-loki-s3\") pod \"logging-loki-querier-5895d59bb8-rs9nq\" (UID: \"13c77518-7ced-4c03-a300-d00ec52fa068\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-rs9nq"
Nov 22 07:28:15 crc kubenswrapper[4853]: I1122 07:28:15.035840 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2eba451f-bb08-4f70-ad59-aa64a216f265-logging-loki-ca-bundle\") pod \"logging-loki-distributor-76cc67bf56-bfp4g\" (UID: \"2eba451f-bb08-4f70-ad59-aa64a216f265\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-bfp4g"
Nov 22 07:28:15 crc kubenswrapper[4853]: I1122 07:28:15.035887 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-querier-http\" (UniqueName: \"kubernetes.io/secret/13c77518-7ced-4c03-a300-d00ec52fa068-logging-loki-querier-http\") pod \"logging-loki-querier-5895d59bb8-rs9nq\" (UID: \"13c77518-7ced-4c03-a300-d00ec52fa068\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-rs9nq"
Nov 22 07:28:15 crc kubenswrapper[4853]: I1122 07:28:15.035917 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2eba451f-bb08-4f70-ad59-aa64a216f265-config\") pod \"logging-loki-distributor-76cc67bf56-bfp4g\" (UID: \"2eba451f-bb08-4f70-ad59-aa64a216f265\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-bfp4g"
Nov 22 07:28:15 crc kubenswrapper[4853]: I1122 07:28:15.035961 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lbk22\" (UniqueName: \"kubernetes.io/projected/2eba451f-bb08-4f70-ad59-aa64a216f265-kube-api-access-lbk22\") pod \"logging-loki-distributor-76cc67bf56-bfp4g\" (UID: \"2eba451f-bb08-4f70-ad59-aa64a216f265\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-bfp4g"
Nov 22 07:28:15 crc kubenswrapper[4853]: I1122 07:28:15.035983 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/2eba451f-bb08-4f70-ad59-aa64a216f265-logging-loki-distributor-grpc\") pod \"logging-loki-distributor-76cc67bf56-bfp4g\" (UID: \"2eba451f-bb08-4f70-ad59-aa64a216f265\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-bfp4g"
Nov 22 07:28:15 crc kubenswrapper[4853]: I1122 07:28:15.036000 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fr25k\" (UniqueName: \"kubernetes.io/projected/13c77518-7ced-4c03-a300-d00ec52fa068-kube-api-access-fr25k\") pod \"logging-loki-querier-5895d59bb8-rs9nq\" (UID: \"13c77518-7ced-4c03-a300-d00ec52fa068\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-rs9nq"
Nov 22 07:28:15 crc kubenswrapper[4853]: I1122 07:28:15.037036 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2eba451f-bb08-4f70-ad59-aa64a216f265-logging-loki-ca-bundle\") pod \"logging-loki-distributor-76cc67bf56-bfp4g\" (UID: \"2eba451f-bb08-4f70-ad59-aa64a216f265\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-bfp4g"
Nov 22 07:28:15 crc kubenswrapper[4853]: I1122 07:28:15.038346 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2eba451f-bb08-4f70-ad59-aa64a216f265-config\") pod \"logging-loki-distributor-76cc67bf56-bfp4g\" (UID: \"2eba451f-bb08-4f70-ad59-aa64a216f265\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-bfp4g"
Nov 22 07:28:15 crc kubenswrapper[4853]: I1122 07:28:15.048364 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-distributor-http\" (UniqueName: \"kubernetes.io/secret/2eba451f-bb08-4f70-ad59-aa64a216f265-logging-loki-distributor-http\") pod \"logging-loki-distributor-76cc67bf56-bfp4g\" (UID: \"2eba451f-bb08-4f70-ad59-aa64a216f265\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-bfp4g"
Nov 22 07:28:15 crc kubenswrapper[4853]: I1122 07:28:15.053951 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/2eba451f-bb08-4f70-ad59-aa64a216f265-logging-loki-distributor-grpc\") pod \"logging-loki-distributor-76cc67bf56-bfp4g\" (UID: \"2eba451f-bb08-4f70-ad59-aa64a216f265\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-bfp4g"
Nov 22 07:28:15 crc kubenswrapper[4853]: I1122 07:28:15.058682 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-querier-5895d59bb8-rs9nq"]
Nov 22 07:28:15 crc kubenswrapper[4853]: I1122 07:28:15.100841 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lbk22\" (UniqueName: \"kubernetes.io/projected/2eba451f-bb08-4f70-ad59-aa64a216f265-kube-api-access-lbk22\") pod \"logging-loki-distributor-76cc67bf56-bfp4g\" (UID: \"2eba451f-bb08-4f70-ad59-aa64a216f265\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-bfp4g"
Nov 22 07:28:15 crc kubenswrapper[4853]: I1122 07:28:15.113074 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-query-frontend-84558f7c9f-fq9dp"]
Nov 22 07:28:15 crc kubenswrapper[4853]: I1122 07:28:15.114043 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-fq9dp"
Nov 22 07:28:15 crc kubenswrapper[4853]: I1122 07:28:15.118838 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-query-frontend-http"
Nov 22 07:28:15 crc kubenswrapper[4853]: I1122 07:28:15.119005 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-query-frontend-grpc"
Nov 22 07:28:15 crc kubenswrapper[4853]: I1122 07:28:15.127014 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-query-frontend-84558f7c9f-fq9dp"]
Nov 22 07:28:15 crc kubenswrapper[4853]: I1122 07:28:15.137983 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fr25k\" (UniqueName: \"kubernetes.io/projected/13c77518-7ced-4c03-a300-d00ec52fa068-kube-api-access-fr25k\") pod \"logging-loki-querier-5895d59bb8-rs9nq\" (UID: \"13c77518-7ced-4c03-a300-d00ec52fa068\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-rs9nq"
Nov 22 07:28:15 crc kubenswrapper[4853]: I1122 07:28:15.138035 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-querier-grpc\" (UniqueName: \"kubernetes.io/secret/13c77518-7ced-4c03-a300-d00ec52fa068-logging-loki-querier-grpc\") pod \"logging-loki-querier-5895d59bb8-rs9nq\" (UID: \"13c77518-7ced-4c03-a300-d00ec52fa068\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-rs9nq"
Nov 22 07:28:15 crc kubenswrapper[4853]: I1122 07:28:15.138084 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13c77518-7ced-4c03-a300-d00ec52fa068-config\") pod \"logging-loki-querier-5895d59bb8-rs9nq\" (UID: \"13c77518-7ced-4c03-a300-d00ec52fa068\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-rs9nq"
Nov 22 07:28:15 crc kubenswrapper[4853]: I1122 07:28:15.138110 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/13c77518-7ced-4c03-a300-d00ec52fa068-logging-loki-ca-bundle\") pod \"logging-loki-querier-5895d59bb8-rs9nq\" (UID: \"13c77518-7ced-4c03-a300-d00ec52fa068\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-rs9nq"
Nov 22 07:28:15 crc kubenswrapper[4853]: I1122 07:28:15.138135 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/13c77518-7ced-4c03-a300-d00ec52fa068-logging-loki-s3\") pod \"logging-loki-querier-5895d59bb8-rs9nq\" (UID: \"13c77518-7ced-4c03-a300-d00ec52fa068\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-rs9nq"
Nov 22 07:28:15 crc kubenswrapper[4853]: I1122 07:28:15.138167 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-querier-http\" (UniqueName: \"kubernetes.io/secret/13c77518-7ced-4c03-a300-d00ec52fa068-logging-loki-querier-http\") pod \"logging-loki-querier-5895d59bb8-rs9nq\" (UID: \"13c77518-7ced-4c03-a300-d00ec52fa068\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-rs9nq"
Nov 22 07:28:15 crc kubenswrapper[4853]: I1122 07:28:15.141697 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/13c77518-7ced-4c03-a300-d00ec52fa068-logging-loki-ca-bundle\") pod \"logging-loki-querier-5895d59bb8-rs9nq\" (UID: \"13c77518-7ced-4c03-a300-d00ec52fa068\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-rs9nq"
Nov 22 07:28:15 crc kubenswrapper[4853]: I1122 07:28:15.142003 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-querier-http\" (UniqueName: \"kubernetes.io/secret/13c77518-7ced-4c03-a300-d00ec52fa068-logging-loki-querier-http\") pod \"logging-loki-querier-5895d59bb8-rs9nq\" (UID: \"13c77518-7ced-4c03-a300-d00ec52fa068\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-rs9nq"
Nov 22 07:28:15 crc kubenswrapper[4853]: I1122 07:28:15.142388 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13c77518-7ced-4c03-a300-d00ec52fa068-config\") pod \"logging-loki-querier-5895d59bb8-rs9nq\" (UID: \"13c77518-7ced-4c03-a300-d00ec52fa068\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-rs9nq"
Nov 22 07:28:15 crc kubenswrapper[4853]: I1122 07:28:15.165306 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-distributor-76cc67bf56-bfp4g"
Nov 22 07:28:15 crc kubenswrapper[4853]: I1122 07:28:15.165412 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-querier-grpc\" (UniqueName: \"kubernetes.io/secret/13c77518-7ced-4c03-a300-d00ec52fa068-logging-loki-querier-grpc\") pod \"logging-loki-querier-5895d59bb8-rs9nq\" (UID: \"13c77518-7ced-4c03-a300-d00ec52fa068\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-rs9nq"
Nov 22 07:28:15 crc kubenswrapper[4853]: I1122 07:28:15.165444 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/13c77518-7ced-4c03-a300-d00ec52fa068-logging-loki-s3\") pod \"logging-loki-querier-5895d59bb8-rs9nq\" (UID: \"13c77518-7ced-4c03-a300-d00ec52fa068\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-rs9nq"
Nov 22 07:28:15 crc kubenswrapper[4853]: I1122 07:28:15.208892 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fr25k\" (UniqueName: \"kubernetes.io/projected/13c77518-7ced-4c03-a300-d00ec52fa068-kube-api-access-fr25k\") pod \"logging-loki-querier-5895d59bb8-rs9nq\" (UID: \"13c77518-7ced-4c03-a300-d00ec52fa068\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-rs9nq"
Nov 22 07:28:15 crc kubenswrapper[4853]: I1122 07:28:15.235812 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-gateway-76bd965446-l8bwp"]
Nov 22 07:28:15 crc kubenswrapper[4853]: I1122 07:28:15.238234 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-gateway-76bd965446-l8bwp"
Nov 22 07:28:15 crc kubenswrapper[4853]: I1122 07:28:15.241912 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway"
Nov 22 07:28:15 crc kubenswrapper[4853]: I1122 07:28:15.243196 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b5a31aa4-7663-4b33-9a60-9b7bb676419a-config\") pod \"logging-loki-query-frontend-84558f7c9f-fq9dp\" (UID: \"b5a31aa4-7663-4b33-9a60-9b7bb676419a\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-fq9dp"
Nov 22 07:28:15 crc kubenswrapper[4853]: I1122 07:28:15.243242 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tfgxl\" (UniqueName: \"kubernetes.io/projected/b5a31aa4-7663-4b33-9a60-9b7bb676419a-kube-api-access-tfgxl\") pod \"logging-loki-query-frontend-84558f7c9f-fq9dp\" (UID: \"b5a31aa4-7663-4b33-9a60-9b7bb676419a\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-fq9dp"
Nov 22 07:28:15 crc kubenswrapper[4853]: I1122 07:28:15.243290 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b5a31aa4-7663-4b33-9a60-9b7bb676419a-logging-loki-ca-bundle\") pod \"logging-loki-query-frontend-84558f7c9f-fq9dp\" (UID: \"b5a31aa4-7663-4b33-9a60-9b7bb676419a\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-fq9dp"
Nov 22 07:28:15 crc kubenswrapper[4853]: I1122 07:28:15.243356 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/b5a31aa4-7663-4b33-9a60-9b7bb676419a-logging-loki-query-frontend-grpc\") pod \"logging-loki-query-frontend-84558f7c9f-fq9dp\" (UID: \"b5a31aa4-7663-4b33-9a60-9b7bb676419a\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-fq9dp"
Nov 22 07:28:15 crc kubenswrapper[4853]: I1122 07:28:15.243376 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/b5a31aa4-7663-4b33-9a60-9b7bb676419a-logging-loki-query-frontend-http\") pod \"logging-loki-query-frontend-84558f7c9f-fq9dp\" (UID: \"b5a31aa4-7663-4b33-9a60-9b7bb676419a\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-fq9dp"
Nov 22 07:28:15 crc kubenswrapper[4853]: I1122 07:28:15.249702 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-76bd965446-l8bwp"]
Nov 22 07:28:15 crc kubenswrapper[4853]: I1122 07:28:15.253353 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway-dockercfg-k8ch7"
Nov 22 07:28:15 crc kubenswrapper[4853]: I1122 07:28:15.253581 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway-http"
Nov 22 07:28:15 crc kubenswrapper[4853]: I1122 07:28:15.253738 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway-client-http"
Nov 22 07:28:15 crc kubenswrapper[4853]: I1122 07:28:15.253868 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-gateway-ca-bundle"
Nov 22 07:28:15 crc kubenswrapper[4853]: I1122 07:28:15.254017 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-gateway"
Nov 22 07:28:15 crc kubenswrapper[4853]: I1122 07:28:15.265111 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-gateway-76bd965446-ndqqx"]
Nov 22 07:28:15 crc kubenswrapper[4853]: I1122 07:28:15.266600 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-gateway-76bd965446-ndqqx"
Nov 22 07:28:15 crc kubenswrapper[4853]: I1122 07:28:15.288114 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-76bd965446-ndqqx"]
Nov 22 07:28:15 crc kubenswrapper[4853]: I1122 07:28:15.333020 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-querier-5895d59bb8-rs9nq"
Nov 22 07:28:15 crc kubenswrapper[4853]: I1122 07:28:15.344552 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/5729c668-8833-48b4-9e48-bcf753621ff7-tenants\") pod \"logging-loki-gateway-76bd965446-l8bwp\" (UID: \"5729c668-8833-48b4-9e48-bcf753621ff7\") " pod="openshift-logging/logging-loki-gateway-76bd965446-l8bwp"
Nov 22 07:28:15 crc kubenswrapper[4853]: I1122 07:28:15.344651 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/b5a31aa4-7663-4b33-9a60-9b7bb676419a-logging-loki-query-frontend-grpc\") pod \"logging-loki-query-frontend-84558f7c9f-fq9dp\" (UID: \"b5a31aa4-7663-4b33-9a60-9b7bb676419a\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-fq9dp"
Nov 22 07:28:15 crc kubenswrapper[4853]: I1122 07:28:15.344678 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/b5a31aa4-7663-4b33-9a60-9b7bb676419a-logging-loki-query-frontend-http\") pod \"logging-loki-query-frontend-84558f7c9f-fq9dp\" (UID: \"b5a31aa4-7663-4b33-9a60-9b7bb676419a\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-fq9dp"
Nov 22 07:28:15 crc kubenswrapper[4853]: I1122 07:28:15.344730 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/5729c668-8833-48b4-9e48-bcf753621ff7-rbac\") pod \"logging-loki-gateway-76bd965446-l8bwp\" (UID: \"5729c668-8833-48b4-9e48-bcf753621ff7\") " pod="openshift-logging/logging-loki-gateway-76bd965446-l8bwp"
Nov 22 07:28:15 crc kubenswrapper[4853]: I1122 07:28:15.344787 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b5a31aa4-7663-4b33-9a60-9b7bb676419a-config\") pod \"logging-loki-query-frontend-84558f7c9f-fq9dp\" (UID: \"b5a31aa4-7663-4b33-9a60-9b7bb676419a\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-fq9dp"
Nov 22 07:28:15 crc kubenswrapper[4853]: I1122 07:28:15.344817 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tfgxl\" (UniqueName: \"kubernetes.io/projected/b5a31aa4-7663-4b33-9a60-9b7bb676419a-kube-api-access-tfgxl\") pod \"logging-loki-query-frontend-84558f7c9f-fq9dp\" (UID: \"b5a31aa4-7663-4b33-9a60-9b7bb676419a\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-fq9dp"
Nov 22 07:28:15 crc kubenswrapper[4853]: I1122 07:28:15.344859 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/5729c668-8833-48b4-9e48-bcf753621ff7-lokistack-gateway\") pod \"logging-loki-gateway-76bd965446-l8bwp\" (UID: \"5729c668-8833-48b4-9e48-bcf753621ff7\") " pod="openshift-logging/logging-loki-gateway-76bd965446-l8bwp"
Nov 22 07:28:15 crc kubenswrapper[4853]: I1122 07:28:15.344886 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/5729c668-8833-48b4-9e48-bcf753621ff7-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-76bd965446-l8bwp\" (UID: \"5729c668-8833-48b4-9e48-bcf753621ff7\") " pod="openshift-logging/logging-loki-gateway-76bd965446-l8bwp"
Nov 22 07:28:15 crc kubenswrapper[4853]: I1122 07:28:15.344936 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5729c668-8833-48b4-9e48-bcf753621ff7-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-76bd965446-l8bwp\" (UID: \"5729c668-8833-48b4-9e48-bcf753621ff7\") " pod="openshift-logging/logging-loki-gateway-76bd965446-l8bwp"
Nov 22 07:28:15 crc kubenswrapper[4853]: I1122 07:28:15.344961 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jlx68\" (UniqueName: \"kubernetes.io/projected/5729c668-8833-48b4-9e48-bcf753621ff7-kube-api-access-jlx68\") pod \"logging-loki-gateway-76bd965446-l8bwp\" (UID: \"5729c668-8833-48b4-9e48-bcf753621ff7\") " pod="openshift-logging/logging-loki-gateway-76bd965446-l8bwp"
Nov 22 07:28:15 crc kubenswrapper[4853]: I1122 07:28:15.344979 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5729c668-8833-48b4-9e48-bcf753621ff7-logging-loki-ca-bundle\") pod \"logging-loki-gateway-76bd965446-l8bwp\" (UID: \"5729c668-8833-48b4-9e48-bcf753621ff7\") " pod="openshift-logging/logging-loki-gateway-76bd965446-l8bwp"
Nov 22 07:28:15 crc kubenswrapper[4853]: I1122 07:28:15.345039 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b5a31aa4-7663-4b33-9a60-9b7bb676419a-logging-loki-ca-bundle\") pod \"logging-loki-query-frontend-84558f7c9f-fq9dp\" (UID: \"b5a31aa4-7663-4b33-9a60-9b7bb676419a\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-fq9dp"
Nov 22 07:28:15 crc kubenswrapper[4853]: I1122 07:28:15.345059 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/5729c668-8833-48b4-9e48-bcf753621ff7-tls-secret\") pod \"logging-loki-gateway-76bd965446-l8bwp\" (UID: \"5729c668-8833-48b4-9e48-bcf753621ff7\") " pod="openshift-logging/logging-loki-gateway-76bd965446-l8bwp"
Nov 22 07:28:15 crc kubenswrapper[4853]: I1122 07:28:15.348257 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b5a31aa4-7663-4b33-9a60-9b7bb676419a-config\") pod \"logging-loki-query-frontend-84558f7c9f-fq9dp\" (UID: \"b5a31aa4-7663-4b33-9a60-9b7bb676419a\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-fq9dp"
Nov 22 07:28:15 crc kubenswrapper[4853]: I1122 07:28:15.348438 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b5a31aa4-7663-4b33-9a60-9b7bb676419a-logging-loki-ca-bundle\") pod \"logging-loki-query-frontend-84558f7c9f-fq9dp\" (UID: \"b5a31aa4-7663-4b33-9a60-9b7bb676419a\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-fq9dp"
Nov 22 07:28:15 crc kubenswrapper[4853]: I1122 07:28:15.355312 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/b5a31aa4-7663-4b33-9a60-9b7bb676419a-logging-loki-query-frontend-grpc\") pod \"logging-loki-query-frontend-84558f7c9f-fq9dp\" (UID: \"b5a31aa4-7663-4b33-9a60-9b7bb676419a\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-fq9dp"
Nov 22 07:28:15 crc kubenswrapper[4853]: I1122 07:28:15.359430 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/b5a31aa4-7663-4b33-9a60-9b7bb676419a-logging-loki-query-frontend-http\") pod \"logging-loki-query-frontend-84558f7c9f-fq9dp\" (UID: \"b5a31aa4-7663-4b33-9a60-9b7bb676419a\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-fq9dp"
Nov 22 07:28:15 crc kubenswrapper[4853]: I1122 07:28:15.377172 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tfgxl\" (UniqueName: \"kubernetes.io/projected/b5a31aa4-7663-4b33-9a60-9b7bb676419a-kube-api-access-tfgxl\") pod \"logging-loki-query-frontend-84558f7c9f-fq9dp\" (UID: \"b5a31aa4-7663-4b33-9a60-9b7bb676419a\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-fq9dp"
Nov 22 07:28:15 crc kubenswrapper[4853]: I1122 07:28:15.446541 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/f680143e-738e-4726-bd8e-1f14bf3f4eaa-tls-secret\") pod \"logging-loki-gateway-76bd965446-ndqqx\" (UID: \"f680143e-738e-4726-bd8e-1f14bf3f4eaa\") " pod="openshift-logging/logging-loki-gateway-76bd965446-ndqqx"
Nov 22 07:28:15 crc kubenswrapper[4853]: I1122 07:28:15.447016 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f680143e-738e-4726-bd8e-1f14bf3f4eaa-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-76bd965446-ndqqx\" (UID: \"f680143e-738e-4726-bd8e-1f14bf3f4eaa\") " pod="openshift-logging/logging-loki-gateway-76bd965446-ndqqx"
Nov 22 07:28:15 crc kubenswrapper[4853]: I1122 07:28:15.447044 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/f680143e-738e-4726-bd8e-1f14bf3f4eaa-rbac\") pod \"logging-loki-gateway-76bd965446-ndqqx\" (UID: \"f680143e-738e-4726-bd8e-1f14bf3f4eaa\") " pod="openshift-logging/logging-loki-gateway-76bd965446-ndqqx"
Nov 22 07:28:15 crc kubenswrapper[4853]: I1122 07:28:15.447087 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/5729c668-8833-48b4-9e48-bcf753621ff7-tenants\") pod \"logging-loki-gateway-76bd965446-l8bwp\" (UID: \"5729c668-8833-48b4-9e48-bcf753621ff7\") " pod="openshift-logging/logging-loki-gateway-76bd965446-l8bwp"
Nov 22 07:28:15 crc kubenswrapper[4853]: I1122 07:28:15.447129 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/f680143e-738e-4726-bd8e-1f14bf3f4eaa-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-76bd965446-ndqqx\" (UID: \"f680143e-738e-4726-bd8e-1f14bf3f4eaa\") " pod="openshift-logging/logging-loki-gateway-76bd965446-ndqqx"
Nov 22 07:28:15 crc kubenswrapper[4853]: I1122 07:28:15.447164 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6bcr5\" (UniqueName: \"kubernetes.io/projected/f680143e-738e-4726-bd8e-1f14bf3f4eaa-kube-api-access-6bcr5\") pod \"logging-loki-gateway-76bd965446-ndqqx\" (UID: \"f680143e-738e-4726-bd8e-1f14bf3f4eaa\") " pod="openshift-logging/logging-loki-gateway-76bd965446-ndqqx"
Nov 22 07:28:15 crc kubenswrapper[4853]: I1122 07:28:15.447209 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/5729c668-8833-48b4-9e48-bcf753621ff7-rbac\") pod \"logging-loki-gateway-76bd965446-l8bwp\" (UID: \"5729c668-8833-48b4-9e48-bcf753621ff7\") " pod="openshift-logging/logging-loki-gateway-76bd965446-l8bwp"
Nov 22 07:28:15 crc kubenswrapper[4853]: I1122 07:28:15.447381 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/f680143e-738e-4726-bd8e-1f14bf3f4eaa-lokistack-gateway\") pod \"logging-loki-gateway-76bd965446-ndqqx\" (UID: \"f680143e-738e-4726-bd8e-1f14bf3f4eaa\") " pod="openshift-logging/logging-loki-gateway-76bd965446-ndqqx"
Nov 22 07:28:15 crc kubenswrapper[4853]: I1122 07:28:15.447544 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/5729c668-8833-48b4-9e48-bcf753621ff7-lokistack-gateway\") pod \"logging-loki-gateway-76bd965446-l8bwp\" (UID: \"5729c668-8833-48b4-9e48-bcf753621ff7\") " pod="openshift-logging/logging-loki-gateway-76bd965446-l8bwp"
Nov 22 07:28:15 crc kubenswrapper[4853]: I1122 07:28:15.447604 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f680143e-738e-4726-bd8e-1f14bf3f4eaa-logging-loki-ca-bundle\") pod \"logging-loki-gateway-76bd965446-ndqqx\" (UID: \"f680143e-738e-4726-bd8e-1f14bf3f4eaa\") " pod="openshift-logging/logging-loki-gateway-76bd965446-ndqqx"
Nov 22 07:28:15 crc kubenswrapper[4853]: I1122 07:28:15.447684 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/5729c668-8833-48b4-9e48-bcf753621ff7-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-76bd965446-l8bwp\" (UID: \"5729c668-8833-48b4-9e48-bcf753621ff7\") " pod="openshift-logging/logging-loki-gateway-76bd965446-l8bwp"
Nov 22 07:28:15 crc kubenswrapper[4853]: I1122 07:28:15.447795 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/f680143e-738e-4726-bd8e-1f14bf3f4eaa-tenants\") pod \"logging-loki-gateway-76bd965446-ndqqx\" (UID: \"f680143e-738e-4726-bd8e-1f14bf3f4eaa\") " pod="openshift-logging/logging-loki-gateway-76bd965446-ndqqx"
Nov 22 07:28:15 crc kubenswrapper[4853]: I1122 07:28:15.447915 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5729c668-8833-48b4-9e48-bcf753621ff7-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-76bd965446-l8bwp\" (UID: \"5729c668-8833-48b4-9e48-bcf753621ff7\") " pod="openshift-logging/logging-loki-gateway-76bd965446-l8bwp"
Nov 22 07:28:15 crc kubenswrapper[4853]: I1122 07:28:15.447995 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jlx68\" (UniqueName: \"kubernetes.io/projected/5729c668-8833-48b4-9e48-bcf753621ff7-kube-api-access-jlx68\") pod \"logging-loki-gateway-76bd965446-l8bwp\" (UID: \"5729c668-8833-48b4-9e48-bcf753621ff7\") " pod="openshift-logging/logging-loki-gateway-76bd965446-l8bwp"
Nov 22 07:28:15 crc kubenswrapper[4853]: I1122 07:28:15.448032 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5729c668-8833-48b4-9e48-bcf753621ff7-logging-loki-ca-bundle\") pod \"logging-loki-gateway-76bd965446-l8bwp\" (UID: \"5729c668-8833-48b4-9e48-bcf753621ff7\") " pod="openshift-logging/logging-loki-gateway-76bd965446-l8bwp"
Nov 22 07:28:15 crc kubenswrapper[4853]: I1122 07:28:15.448063 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/5729c668-8833-48b4-9e48-bcf753621ff7-tls-secret\") pod \"logging-loki-gateway-76bd965446-l8bwp\" (UID: \"5729c668-8833-48b4-9e48-bcf753621ff7\") " pod="openshift-logging/logging-loki-gateway-76bd965446-l8bwp"
Nov 22 07:28:15 crc kubenswrapper[4853]: E1122 07:28:15.448325 4853 secret.go:188] Couldn't get secret openshift-logging/logging-loki-gateway-http: secret "logging-loki-gateway-http" not found
Nov 22 07:28:15 crc kubenswrapper[4853]: E1122 07:28:15.448394 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5729c668-8833-48b4-9e48-bcf753621ff7-tls-secret podName:5729c668-8833-48b4-9e48-bcf753621ff7 nodeName:}" failed. No retries permitted until 2025-11-22 07:28:15.948371307 +0000 UTC m=+1094.788993933 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-secret" (UniqueName: "kubernetes.io/secret/5729c668-8833-48b4-9e48-bcf753621ff7-tls-secret") pod "logging-loki-gateway-76bd965446-l8bwp" (UID: "5729c668-8833-48b4-9e48-bcf753621ff7") : secret "logging-loki-gateway-http" not found
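Both gateway replicas trip over the same missing object: their tls-secret volume references the secret openshift-logging/logging-loki-gateway-http, which the loki-operator has not created yet, so MountVolume.SetUp fails and nestedpendingoperations schedules a retry (durationBeforeRetry 500ms here; the kubelet doubles it on repeated failures, up to about two minutes). The retried "MountVolume started" for tls-secret at 07:28:15.974018 below picks the mount back up. A one-off poll for the same secret from outside the node, as a sketch (real client-go/apimachinery calls; the two-minute timeout is an arbitrary choice):

```go
package main

import (
	"context"
	"fmt"
	"path/filepath"
	"time"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", filepath.Join(homedir.HomeDir(), ".kube", "config"))
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	// 500ms matches the durationBeforeRetry logged above.
	err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 2*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			_, getErr := cs.CoreV1().Secrets("openshift-logging").Get(ctx, "logging-loki-gateway-http", metav1.GetOptions{})
			if apierrors.IsNotFound(getErr) {
				return false, nil // not there yet; keep polling, like MountVolume.SetUp retries
			}
			return getErr == nil, getErr
		})
	fmt.Println("secret available:", err == nil)
}
```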
\"kubernetes.io/secret/5729c668-8833-48b4-9e48-bcf753621ff7-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-76bd965446-l8bwp\" (UID: \"5729c668-8833-48b4-9e48-bcf753621ff7\") " pod="openshift-logging/logging-loki-gateway-76bd965446-l8bwp" Nov 22 07:28:15 crc kubenswrapper[4853]: I1122 07:28:15.457161 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/5729c668-8833-48b4-9e48-bcf753621ff7-tenants\") pod \"logging-loki-gateway-76bd965446-l8bwp\" (UID: \"5729c668-8833-48b4-9e48-bcf753621ff7\") " pod="openshift-logging/logging-loki-gateway-76bd965446-l8bwp" Nov 22 07:28:15 crc kubenswrapper[4853]: I1122 07:28:15.475821 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jlx68\" (UniqueName: \"kubernetes.io/projected/5729c668-8833-48b4-9e48-bcf753621ff7-kube-api-access-jlx68\") pod \"logging-loki-gateway-76bd965446-l8bwp\" (UID: \"5729c668-8833-48b4-9e48-bcf753621ff7\") " pod="openshift-logging/logging-loki-gateway-76bd965446-l8bwp" Nov 22 07:28:15 crc kubenswrapper[4853]: I1122 07:28:15.517059 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-fq9dp" Nov 22 07:28:15 crc kubenswrapper[4853]: I1122 07:28:15.550164 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f680143e-738e-4726-bd8e-1f14bf3f4eaa-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-76bd965446-ndqqx\" (UID: \"f680143e-738e-4726-bd8e-1f14bf3f4eaa\") " pod="openshift-logging/logging-loki-gateway-76bd965446-ndqqx" Nov 22 07:28:15 crc kubenswrapper[4853]: I1122 07:28:15.550208 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/f680143e-738e-4726-bd8e-1f14bf3f4eaa-rbac\") pod \"logging-loki-gateway-76bd965446-ndqqx\" (UID: \"f680143e-738e-4726-bd8e-1f14bf3f4eaa\") " pod="openshift-logging/logging-loki-gateway-76bd965446-ndqqx" Nov 22 07:28:15 crc kubenswrapper[4853]: I1122 07:28:15.550255 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/f680143e-738e-4726-bd8e-1f14bf3f4eaa-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-76bd965446-ndqqx\" (UID: \"f680143e-738e-4726-bd8e-1f14bf3f4eaa\") " pod="openshift-logging/logging-loki-gateway-76bd965446-ndqqx" Nov 22 07:28:15 crc kubenswrapper[4853]: I1122 07:28:15.550286 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6bcr5\" (UniqueName: \"kubernetes.io/projected/f680143e-738e-4726-bd8e-1f14bf3f4eaa-kube-api-access-6bcr5\") pod \"logging-loki-gateway-76bd965446-ndqqx\" (UID: \"f680143e-738e-4726-bd8e-1f14bf3f4eaa\") " pod="openshift-logging/logging-loki-gateway-76bd965446-ndqqx" Nov 22 07:28:15 crc kubenswrapper[4853]: I1122 07:28:15.550330 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/f680143e-738e-4726-bd8e-1f14bf3f4eaa-lokistack-gateway\") pod \"logging-loki-gateway-76bd965446-ndqqx\" (UID: \"f680143e-738e-4726-bd8e-1f14bf3f4eaa\") " pod="openshift-logging/logging-loki-gateway-76bd965446-ndqqx" Nov 22 07:28:15 crc kubenswrapper[4853]: I1122 07:28:15.550360 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f680143e-738e-4726-bd8e-1f14bf3f4eaa-logging-loki-ca-bundle\") pod \"logging-loki-gateway-76bd965446-ndqqx\" (UID: \"f680143e-738e-4726-bd8e-1f14bf3f4eaa\") " pod="openshift-logging/logging-loki-gateway-76bd965446-ndqqx" Nov 22 07:28:15 crc kubenswrapper[4853]: I1122 07:28:15.550384 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/f680143e-738e-4726-bd8e-1f14bf3f4eaa-tenants\") pod \"logging-loki-gateway-76bd965446-ndqqx\" (UID: \"f680143e-738e-4726-bd8e-1f14bf3f4eaa\") " pod="openshift-logging/logging-loki-gateway-76bd965446-ndqqx" Nov 22 07:28:15 crc kubenswrapper[4853]: I1122 07:28:15.550438 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/f680143e-738e-4726-bd8e-1f14bf3f4eaa-tls-secret\") pod \"logging-loki-gateway-76bd965446-ndqqx\" (UID: \"f680143e-738e-4726-bd8e-1f14bf3f4eaa\") " pod="openshift-logging/logging-loki-gateway-76bd965446-ndqqx" Nov 22 07:28:15 crc kubenswrapper[4853]: E1122 07:28:15.550579 4853 secret.go:188] Couldn't get secret openshift-logging/logging-loki-gateway-http: secret "logging-loki-gateway-http" not found Nov 22 07:28:15 crc kubenswrapper[4853]: E1122 07:28:15.550631 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f680143e-738e-4726-bd8e-1f14bf3f4eaa-tls-secret podName:f680143e-738e-4726-bd8e-1f14bf3f4eaa nodeName:}" failed. No retries permitted until 2025-11-22 07:28:16.05061668 +0000 UTC m=+1094.891239306 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-secret" (UniqueName: "kubernetes.io/secret/f680143e-738e-4726-bd8e-1f14bf3f4eaa-tls-secret") pod "logging-loki-gateway-76bd965446-ndqqx" (UID: "f680143e-738e-4726-bd8e-1f14bf3f4eaa") : secret "logging-loki-gateway-http" not found Nov 22 07:28:15 crc kubenswrapper[4853]: I1122 07:28:15.551580 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f680143e-738e-4726-bd8e-1f14bf3f4eaa-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-76bd965446-ndqqx\" (UID: \"f680143e-738e-4726-bd8e-1f14bf3f4eaa\") " pod="openshift-logging/logging-loki-gateway-76bd965446-ndqqx" Nov 22 07:28:15 crc kubenswrapper[4853]: I1122 07:28:15.552160 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/f680143e-738e-4726-bd8e-1f14bf3f4eaa-rbac\") pod \"logging-loki-gateway-76bd965446-ndqqx\" (UID: \"f680143e-738e-4726-bd8e-1f14bf3f4eaa\") " pod="openshift-logging/logging-loki-gateway-76bd965446-ndqqx" Nov 22 07:28:15 crc kubenswrapper[4853]: I1122 07:28:15.552721 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f680143e-738e-4726-bd8e-1f14bf3f4eaa-logging-loki-ca-bundle\") pod \"logging-loki-gateway-76bd965446-ndqqx\" (UID: \"f680143e-738e-4726-bd8e-1f14bf3f4eaa\") " pod="openshift-logging/logging-loki-gateway-76bd965446-ndqqx" Nov 22 07:28:15 crc kubenswrapper[4853]: I1122 07:28:15.554807 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/f680143e-738e-4726-bd8e-1f14bf3f4eaa-lokistack-gateway\") pod \"logging-loki-gateway-76bd965446-ndqqx\" (UID: \"f680143e-738e-4726-bd8e-1f14bf3f4eaa\") " 
pod="openshift-logging/logging-loki-gateway-76bd965446-ndqqx" Nov 22 07:28:15 crc kubenswrapper[4853]: I1122 07:28:15.556395 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/f680143e-738e-4726-bd8e-1f14bf3f4eaa-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-76bd965446-ndqqx\" (UID: \"f680143e-738e-4726-bd8e-1f14bf3f4eaa\") " pod="openshift-logging/logging-loki-gateway-76bd965446-ndqqx" Nov 22 07:28:15 crc kubenswrapper[4853]: I1122 07:28:15.557740 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/f680143e-738e-4726-bd8e-1f14bf3f4eaa-tenants\") pod \"logging-loki-gateway-76bd965446-ndqqx\" (UID: \"f680143e-738e-4726-bd8e-1f14bf3f4eaa\") " pod="openshift-logging/logging-loki-gateway-76bd965446-ndqqx" Nov 22 07:28:15 crc kubenswrapper[4853]: I1122 07:28:15.575008 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6bcr5\" (UniqueName: \"kubernetes.io/projected/f680143e-738e-4726-bd8e-1f14bf3f4eaa-kube-api-access-6bcr5\") pod \"logging-loki-gateway-76bd965446-ndqqx\" (UID: \"f680143e-738e-4726-bd8e-1f14bf3f4eaa\") " pod="openshift-logging/logging-loki-gateway-76bd965446-ndqqx" Nov 22 07:28:15 crc kubenswrapper[4853]: I1122 07:28:15.786563 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-distributor-76cc67bf56-bfp4g"] Nov 22 07:28:15 crc kubenswrapper[4853]: I1122 07:28:15.942415 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-querier-5895d59bb8-rs9nq"] Nov 22 07:28:15 crc kubenswrapper[4853]: I1122 07:28:15.974018 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/5729c668-8833-48b4-9e48-bcf753621ff7-tls-secret\") pod \"logging-loki-gateway-76bd965446-l8bwp\" (UID: \"5729c668-8833-48b4-9e48-bcf753621ff7\") " pod="openshift-logging/logging-loki-gateway-76bd965446-l8bwp" Nov 22 07:28:15 crc kubenswrapper[4853]: I1122 07:28:15.978969 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-ingester-0"] Nov 22 07:28:15 crc kubenswrapper[4853]: I1122 07:28:15.982829 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-ingester-0" Nov 22 07:28:15 crc kubenswrapper[4853]: I1122 07:28:15.983021 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/5729c668-8833-48b4-9e48-bcf753621ff7-tls-secret\") pod \"logging-loki-gateway-76bd965446-l8bwp\" (UID: \"5729c668-8833-48b4-9e48-bcf753621ff7\") " pod="openshift-logging/logging-loki-gateway-76bd965446-l8bwp" Nov 22 07:28:15 crc kubenswrapper[4853]: I1122 07:28:15.991111 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-ingester-http" Nov 22 07:28:16 crc kubenswrapper[4853]: I1122 07:28:16.004055 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-ingester-grpc" Nov 22 07:28:16 crc kubenswrapper[4853]: I1122 07:28:16.015897 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-ingester-0"] Nov 22 07:28:16 crc kubenswrapper[4853]: I1122 07:28:16.075684 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/f680143e-738e-4726-bd8e-1f14bf3f4eaa-tls-secret\") pod \"logging-loki-gateway-76bd965446-ndqqx\" (UID: \"f680143e-738e-4726-bd8e-1f14bf3f4eaa\") " pod="openshift-logging/logging-loki-gateway-76bd965446-ndqqx" Nov 22 07:28:16 crc kubenswrapper[4853]: I1122 07:28:16.086385 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/f680143e-738e-4726-bd8e-1f14bf3f4eaa-tls-secret\") pod \"logging-loki-gateway-76bd965446-ndqqx\" (UID: \"f680143e-738e-4726-bd8e-1f14bf3f4eaa\") " pod="openshift-logging/logging-loki-gateway-76bd965446-ndqqx" Nov 22 07:28:16 crc kubenswrapper[4853]: I1122 07:28:16.129201 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-compactor-0"] Nov 22 07:28:16 crc kubenswrapper[4853]: I1122 07:28:16.130513 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-compactor-0" Nov 22 07:28:16 crc kubenswrapper[4853]: I1122 07:28:16.133477 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-compactor-grpc" Nov 22 07:28:16 crc kubenswrapper[4853]: I1122 07:28:16.133732 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-compactor-http" Nov 22 07:28:16 crc kubenswrapper[4853]: I1122 07:28:16.137560 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-query-frontend-84558f7c9f-fq9dp"] Nov 22 07:28:16 crc kubenswrapper[4853]: I1122 07:28:16.153499 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-querier-5895d59bb8-rs9nq" event={"ID":"13c77518-7ced-4c03-a300-d00ec52fa068","Type":"ContainerStarted","Data":"517e61e0f0dc458fa926e6cb73b89b1fae46a9fc201acc400e2b67b0bf78e01f"} Nov 22 07:28:16 crc kubenswrapper[4853]: I1122 07:28:16.158598 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-distributor-76cc67bf56-bfp4g" event={"ID":"2eba451f-bb08-4f70-ad59-aa64a216f265","Type":"ContainerStarted","Data":"d7327d11ed0ecd799fbc6cd936ca9b69d97c71fa003f5bfabac72bc56c82af6f"} Nov 22 07:28:16 crc kubenswrapper[4853]: I1122 07:28:16.162513 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-compactor-0"] Nov 22 07:28:16 crc kubenswrapper[4853]: I1122 07:28:16.189132 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/f6d37108-c1bc-4250-ba04-924fe0dabff3-logging-loki-s3\") pod \"logging-loki-ingester-0\" (UID: \"f6d37108-c1bc-4250-ba04-924fe0dabff3\") " pod="openshift-logging/logging-loki-ingester-0" Nov 22 07:28:16 crc kubenswrapper[4853]: I1122 07:28:16.189172 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pw9tz\" (UniqueName: \"kubernetes.io/projected/f6d37108-c1bc-4250-ba04-924fe0dabff3-kube-api-access-pw9tz\") pod \"logging-loki-ingester-0\" (UID: \"f6d37108-c1bc-4250-ba04-924fe0dabff3\") " pod="openshift-logging/logging-loki-ingester-0" Nov 22 07:28:16 crc kubenswrapper[4853]: I1122 07:28:16.189196 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f6d37108-c1bc-4250-ba04-924fe0dabff3-logging-loki-ca-bundle\") pod \"logging-loki-ingester-0\" (UID: \"f6d37108-c1bc-4250-ba04-924fe0dabff3\") " pod="openshift-logging/logging-loki-ingester-0" Nov 22 07:28:16 crc kubenswrapper[4853]: I1122 07:28:16.189230 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-7913ea5f-6d0a-4464-886d-eefde0bb8bf0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7913ea5f-6d0a-4464-886d-eefde0bb8bf0\") pod \"logging-loki-ingester-0\" (UID: \"f6d37108-c1bc-4250-ba04-924fe0dabff3\") " pod="openshift-logging/logging-loki-ingester-0" Nov 22 07:28:16 crc kubenswrapper[4853]: I1122 07:28:16.189263 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ingester-http\" (UniqueName: \"kubernetes.io/secret/f6d37108-c1bc-4250-ba04-924fe0dabff3-logging-loki-ingester-http\") pod \"logging-loki-ingester-0\" (UID: \"f6d37108-c1bc-4250-ba04-924fe0dabff3\") " 
pod="openshift-logging/logging-loki-ingester-0" Nov 22 07:28:16 crc kubenswrapper[4853]: I1122 07:28:16.189290 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/f6d37108-c1bc-4250-ba04-924fe0dabff3-logging-loki-ingester-grpc\") pod \"logging-loki-ingester-0\" (UID: \"f6d37108-c1bc-4250-ba04-924fe0dabff3\") " pod="openshift-logging/logging-loki-ingester-0" Nov 22 07:28:16 crc kubenswrapper[4853]: I1122 07:28:16.189339 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-e251bdab-9363-49af-a1bf-d39071e8b7d0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e251bdab-9363-49af-a1bf-d39071e8b7d0\") pod \"logging-loki-ingester-0\" (UID: \"f6d37108-c1bc-4250-ba04-924fe0dabff3\") " pod="openshift-logging/logging-loki-ingester-0" Nov 22 07:28:16 crc kubenswrapper[4853]: I1122 07:28:16.189357 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6d37108-c1bc-4250-ba04-924fe0dabff3-config\") pod \"logging-loki-ingester-0\" (UID: \"f6d37108-c1bc-4250-ba04-924fe0dabff3\") " pod="openshift-logging/logging-loki-ingester-0" Nov 22 07:28:16 crc kubenswrapper[4853]: I1122 07:28:16.191553 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway-dockercfg-k8ch7" Nov 22 07:28:16 crc kubenswrapper[4853]: I1122 07:28:16.198359 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-gateway-76bd965446-l8bwp" Nov 22 07:28:16 crc kubenswrapper[4853]: I1122 07:28:16.202334 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-gateway-76bd965446-ndqqx" Nov 22 07:28:16 crc kubenswrapper[4853]: I1122 07:28:16.221735 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-index-gateway-0"] Nov 22 07:28:16 crc kubenswrapper[4853]: I1122 07:28:16.223143 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-index-gateway-0" Nov 22 07:28:16 crc kubenswrapper[4853]: I1122 07:28:16.229788 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-index-gateway-http" Nov 22 07:28:16 crc kubenswrapper[4853]: I1122 07:28:16.229993 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-index-gateway-grpc" Nov 22 07:28:16 crc kubenswrapper[4853]: I1122 07:28:16.254559 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-index-gateway-0"] Nov 22 07:28:16 crc kubenswrapper[4853]: I1122 07:28:16.292482 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/e7f66807-e021-4fa5-bf10-9b2107788d3d-logging-loki-index-gateway-http\") pod \"logging-loki-index-gateway-0\" (UID: \"e7f66807-e021-4fa5-bf10-9b2107788d3d\") " pod="openshift-logging/logging-loki-index-gateway-0" Nov 22 07:28:16 crc kubenswrapper[4853]: I1122 07:28:16.292540 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/b876ce6f-c01a-4d2e-813c-abb6fff4a4e2-logging-loki-s3\") pod \"logging-loki-compactor-0\" (UID: \"b876ce6f-c01a-4d2e-813c-abb6fff4a4e2\") " pod="openshift-logging/logging-loki-compactor-0" Nov 22 07:28:16 crc kubenswrapper[4853]: I1122 07:28:16.292572 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/b876ce6f-c01a-4d2e-813c-abb6fff4a4e2-logging-loki-compactor-grpc\") pod \"logging-loki-compactor-0\" (UID: \"b876ce6f-c01a-4d2e-813c-abb6fff4a4e2\") " pod="openshift-logging/logging-loki-compactor-0" Nov 22 07:28:16 crc kubenswrapper[4853]: I1122 07:28:16.292603 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6d37108-c1bc-4250-ba04-924fe0dabff3-config\") pod \"logging-loki-ingester-0\" (UID: \"f6d37108-c1bc-4250-ba04-924fe0dabff3\") " pod="openshift-logging/logging-loki-ingester-0" Nov 22 07:28:16 crc kubenswrapper[4853]: I1122 07:28:16.292628 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-3f070891-b9b7-45d9-9905-8d0e59b12d24\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3f070891-b9b7-45d9-9905-8d0e59b12d24\") pod \"logging-loki-index-gateway-0\" (UID: \"e7f66807-e021-4fa5-bf10-9b2107788d3d\") " pod="openshift-logging/logging-loki-index-gateway-0" Nov 22 07:28:16 crc kubenswrapper[4853]: I1122 07:28:16.292644 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/e7f66807-e021-4fa5-bf10-9b2107788d3d-logging-loki-s3\") pod \"logging-loki-index-gateway-0\" (UID: \"e7f66807-e021-4fa5-bf10-9b2107788d3d\") " pod="openshift-logging/logging-loki-index-gateway-0" Nov 22 07:28:16 crc kubenswrapper[4853]: I1122 07:28:16.292661 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/e7f66807-e021-4fa5-bf10-9b2107788d3d-logging-loki-index-gateway-grpc\") pod \"logging-loki-index-gateway-0\" (UID: \"e7f66807-e021-4fa5-bf10-9b2107788d3d\") " 
pod="openshift-logging/logging-loki-index-gateway-0" Nov 22 07:28:16 crc kubenswrapper[4853]: I1122 07:28:16.292686 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e7f66807-e021-4fa5-bf10-9b2107788d3d-logging-loki-ca-bundle\") pod \"logging-loki-index-gateway-0\" (UID: \"e7f66807-e021-4fa5-bf10-9b2107788d3d\") " pod="openshift-logging/logging-loki-index-gateway-0" Nov 22 07:28:16 crc kubenswrapper[4853]: I1122 07:28:16.292711 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sl92k\" (UniqueName: \"kubernetes.io/projected/b876ce6f-c01a-4d2e-813c-abb6fff4a4e2-kube-api-access-sl92k\") pod \"logging-loki-compactor-0\" (UID: \"b876ce6f-c01a-4d2e-813c-abb6fff4a4e2\") " pod="openshift-logging/logging-loki-compactor-0" Nov 22 07:28:16 crc kubenswrapper[4853]: I1122 07:28:16.292737 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-7913ea5f-6d0a-4464-886d-eefde0bb8bf0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7913ea5f-6d0a-4464-886d-eefde0bb8bf0\") pod \"logging-loki-ingester-0\" (UID: \"f6d37108-c1bc-4250-ba04-924fe0dabff3\") " pod="openshift-logging/logging-loki-ingester-0" Nov 22 07:28:16 crc kubenswrapper[4853]: I1122 07:28:16.292805 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-compactor-http\" (UniqueName: \"kubernetes.io/secret/b876ce6f-c01a-4d2e-813c-abb6fff4a4e2-logging-loki-compactor-http\") pod \"logging-loki-compactor-0\" (UID: \"b876ce6f-c01a-4d2e-813c-abb6fff4a4e2\") " pod="openshift-logging/logging-loki-compactor-0" Nov 22 07:28:16 crc kubenswrapper[4853]: I1122 07:28:16.292856 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-c598f727-d5e1-43b6-b8b8-bd9f49afaa82\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c598f727-d5e1-43b6-b8b8-bd9f49afaa82\") pod \"logging-loki-compactor-0\" (UID: \"b876ce6f-c01a-4d2e-813c-abb6fff4a4e2\") " pod="openshift-logging/logging-loki-compactor-0" Nov 22 07:28:16 crc kubenswrapper[4853]: I1122 07:28:16.292881 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b876ce6f-c01a-4d2e-813c-abb6fff4a4e2-logging-loki-ca-bundle\") pod \"logging-loki-compactor-0\" (UID: \"b876ce6f-c01a-4d2e-813c-abb6fff4a4e2\") " pod="openshift-logging/logging-loki-compactor-0" Nov 22 07:28:16 crc kubenswrapper[4853]: I1122 07:28:16.292903 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-e251bdab-9363-49af-a1bf-d39071e8b7d0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e251bdab-9363-49af-a1bf-d39071e8b7d0\") pod \"logging-loki-ingester-0\" (UID: \"f6d37108-c1bc-4250-ba04-924fe0dabff3\") " pod="openshift-logging/logging-loki-ingester-0" Nov 22 07:28:16 crc kubenswrapper[4853]: I1122 07:28:16.292929 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/f6d37108-c1bc-4250-ba04-924fe0dabff3-logging-loki-s3\") pod \"logging-loki-ingester-0\" (UID: \"f6d37108-c1bc-4250-ba04-924fe0dabff3\") " pod="openshift-logging/logging-loki-ingester-0" Nov 22 07:28:16 crc kubenswrapper[4853]: I1122 07:28:16.292947 4853 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-pw9tz\" (UniqueName: \"kubernetes.io/projected/f6d37108-c1bc-4250-ba04-924fe0dabff3-kube-api-access-pw9tz\") pod \"logging-loki-ingester-0\" (UID: \"f6d37108-c1bc-4250-ba04-924fe0dabff3\") " pod="openshift-logging/logging-loki-ingester-0" Nov 22 07:28:16 crc kubenswrapper[4853]: I1122 07:28:16.292966 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f6d37108-c1bc-4250-ba04-924fe0dabff3-logging-loki-ca-bundle\") pod \"logging-loki-ingester-0\" (UID: \"f6d37108-c1bc-4250-ba04-924fe0dabff3\") " pod="openshift-logging/logging-loki-ingester-0" Nov 22 07:28:16 crc kubenswrapper[4853]: I1122 07:28:16.292996 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ingester-http\" (UniqueName: \"kubernetes.io/secret/f6d37108-c1bc-4250-ba04-924fe0dabff3-logging-loki-ingester-http\") pod \"logging-loki-ingester-0\" (UID: \"f6d37108-c1bc-4250-ba04-924fe0dabff3\") " pod="openshift-logging/logging-loki-ingester-0" Nov 22 07:28:16 crc kubenswrapper[4853]: I1122 07:28:16.293018 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b876ce6f-c01a-4d2e-813c-abb6fff4a4e2-config\") pod \"logging-loki-compactor-0\" (UID: \"b876ce6f-c01a-4d2e-813c-abb6fff4a4e2\") " pod="openshift-logging/logging-loki-compactor-0" Nov 22 07:28:16 crc kubenswrapper[4853]: I1122 07:28:16.293040 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/f6d37108-c1bc-4250-ba04-924fe0dabff3-logging-loki-ingester-grpc\") pod \"logging-loki-ingester-0\" (UID: \"f6d37108-c1bc-4250-ba04-924fe0dabff3\") " pod="openshift-logging/logging-loki-ingester-0" Nov 22 07:28:16 crc kubenswrapper[4853]: I1122 07:28:16.293065 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7f66807-e021-4fa5-bf10-9b2107788d3d-config\") pod \"logging-loki-index-gateway-0\" (UID: \"e7f66807-e021-4fa5-bf10-9b2107788d3d\") " pod="openshift-logging/logging-loki-index-gateway-0" Nov 22 07:28:16 crc kubenswrapper[4853]: I1122 07:28:16.293083 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jh4bf\" (UniqueName: \"kubernetes.io/projected/e7f66807-e021-4fa5-bf10-9b2107788d3d-kube-api-access-jh4bf\") pod \"logging-loki-index-gateway-0\" (UID: \"e7f66807-e021-4fa5-bf10-9b2107788d3d\") " pod="openshift-logging/logging-loki-index-gateway-0" Nov 22 07:28:16 crc kubenswrapper[4853]: I1122 07:28:16.296263 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6d37108-c1bc-4250-ba04-924fe0dabff3-config\") pod \"logging-loki-ingester-0\" (UID: \"f6d37108-c1bc-4250-ba04-924fe0dabff3\") " pod="openshift-logging/logging-loki-ingester-0" Nov 22 07:28:16 crc kubenswrapper[4853]: I1122 07:28:16.298266 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f6d37108-c1bc-4250-ba04-924fe0dabff3-logging-loki-ca-bundle\") pod \"logging-loki-ingester-0\" (UID: \"f6d37108-c1bc-4250-ba04-924fe0dabff3\") " pod="openshift-logging/logging-loki-ingester-0" Nov 22 07:28:16 crc kubenswrapper[4853]: I1122 
Nov 22 07:28:16 crc kubenswrapper[4853]: I1122 07:28:16.305086 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/f6d37108-c1bc-4250-ba04-924fe0dabff3-logging-loki-s3\") pod \"logging-loki-ingester-0\" (UID: \"f6d37108-c1bc-4250-ba04-924fe0dabff3\") " pod="openshift-logging/logging-loki-ingester-0"
Nov 22 07:28:16 crc kubenswrapper[4853]: I1122 07:28:16.307659 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/f6d37108-c1bc-4250-ba04-924fe0dabff3-logging-loki-ingester-grpc\") pod \"logging-loki-ingester-0\" (UID: \"f6d37108-c1bc-4250-ba04-924fe0dabff3\") " pod="openshift-logging/logging-loki-ingester-0"
Nov 22 07:28:16 crc kubenswrapper[4853]: I1122 07:28:16.309802 4853 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Nov 22 07:28:16 crc kubenswrapper[4853]: I1122 07:28:16.309844 4853 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-7913ea5f-6d0a-4464-886d-eefde0bb8bf0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7913ea5f-6d0a-4464-886d-eefde0bb8bf0\") pod \"logging-loki-ingester-0\" (UID: \"f6d37108-c1bc-4250-ba04-924fe0dabff3\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/06c31d2f02ae909178585b1eb748db6968b16ae9f9394a2b825099da113034c5/globalmount\"" pod="openshift-logging/logging-loki-ingester-0"
Nov 22 07:28:16 crc kubenswrapper[4853]: I1122 07:28:16.310216 4853 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Nov 22 07:28:16 crc kubenswrapper[4853]: I1122 07:28:16.310274 4853 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-e251bdab-9363-49af-a1bf-d39071e8b7d0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e251bdab-9363-49af-a1bf-d39071e8b7d0\") pod \"logging-loki-ingester-0\" (UID: \"f6d37108-c1bc-4250-ba04-924fe0dabff3\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/a921b79e8724f7216f35f2ce7921c212d49d74b1c9b0af8a19a326360b0342b0/globalmount\"" pod="openshift-logging/logging-loki-ingester-0"
Nov 22 07:28:16 crc kubenswrapper[4853]: I1122 07:28:16.331916 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pw9tz\" (UniqueName: \"kubernetes.io/projected/f6d37108-c1bc-4250-ba04-924fe0dabff3-kube-api-access-pw9tz\") pod \"logging-loki-ingester-0\" (UID: \"f6d37108-c1bc-4250-ba04-924fe0dabff3\") " pod="openshift-logging/logging-loki-ingester-0"
Nov 22 07:28:16 crc kubenswrapper[4853]: I1122 07:28:16.349952 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-7913ea5f-6d0a-4464-886d-eefde0bb8bf0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7913ea5f-6d0a-4464-886d-eefde0bb8bf0\") pod \"logging-loki-ingester-0\" (UID: \"f6d37108-c1bc-4250-ba04-924fe0dabff3\") " pod="openshift-logging/logging-loki-ingester-0"
Nov 22 07:28:16 crc kubenswrapper[4853]: I1122 07:28:16.351676 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-e251bdab-9363-49af-a1bf-d39071e8b7d0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e251bdab-9363-49af-a1bf-d39071e8b7d0\") pod \"logging-loki-ingester-0\" (UID: \"f6d37108-c1bc-4250-ba04-924fe0dabff3\") " pod="openshift-logging/logging-loki-ingester-0"
Nov 22 07:28:16 crc kubenswrapper[4853]: I1122 07:28:16.409312 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sl92k\" (UniqueName: \"kubernetes.io/projected/b876ce6f-c01a-4d2e-813c-abb6fff4a4e2-kube-api-access-sl92k\") pod \"logging-loki-compactor-0\" (UID: \"b876ce6f-c01a-4d2e-813c-abb6fff4a4e2\") " pod="openshift-logging/logging-loki-compactor-0"
Nov 22 07:28:16 crc kubenswrapper[4853]: I1122 07:28:16.409412 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-compactor-http\" (UniqueName: \"kubernetes.io/secret/b876ce6f-c01a-4d2e-813c-abb6fff4a4e2-logging-loki-compactor-http\") pod \"logging-loki-compactor-0\" (UID: \"b876ce6f-c01a-4d2e-813c-abb6fff4a4e2\") " pod="openshift-logging/logging-loki-compactor-0"
Nov 22 07:28:16 crc kubenswrapper[4853]: I1122 07:28:16.409447 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-c598f727-d5e1-43b6-b8b8-bd9f49afaa82\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c598f727-d5e1-43b6-b8b8-bd9f49afaa82\") pod \"logging-loki-compactor-0\" (UID: \"b876ce6f-c01a-4d2e-813c-abb6fff4a4e2\") " pod="openshift-logging/logging-loki-compactor-0"
Nov 22 07:28:16 crc kubenswrapper[4853]: I1122 07:28:16.409480 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b876ce6f-c01a-4d2e-813c-abb6fff4a4e2-logging-loki-ca-bundle\") pod \"logging-loki-compactor-0\" (UID: \"b876ce6f-c01a-4d2e-813c-abb6fff4a4e2\") " pod="openshift-logging/logging-loki-compactor-0"
Nov 22 07:28:16 crc kubenswrapper[4853]: I1122 07:28:16.409556 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b876ce6f-c01a-4d2e-813c-abb6fff4a4e2-config\") pod \"logging-loki-compactor-0\" (UID: \"b876ce6f-c01a-4d2e-813c-abb6fff4a4e2\") " pod="openshift-logging/logging-loki-compactor-0"
Nov 22 07:28:16 crc kubenswrapper[4853]: I1122 07:28:16.409596 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7f66807-e021-4fa5-bf10-9b2107788d3d-config\") pod \"logging-loki-index-gateway-0\" (UID: \"e7f66807-e021-4fa5-bf10-9b2107788d3d\") " pod="openshift-logging/logging-loki-index-gateway-0"
Nov 22 07:28:16 crc kubenswrapper[4853]: I1122 07:28:16.409621 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jh4bf\" (UniqueName: \"kubernetes.io/projected/e7f66807-e021-4fa5-bf10-9b2107788d3d-kube-api-access-jh4bf\") pod \"logging-loki-index-gateway-0\" (UID: \"e7f66807-e021-4fa5-bf10-9b2107788d3d\") " pod="openshift-logging/logging-loki-index-gateway-0"
Nov 22 07:28:16 crc kubenswrapper[4853]: I1122 07:28:16.409647 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/e7f66807-e021-4fa5-bf10-9b2107788d3d-logging-loki-index-gateway-http\") pod \"logging-loki-index-gateway-0\" (UID: \"e7f66807-e021-4fa5-bf10-9b2107788d3d\") " pod="openshift-logging/logging-loki-index-gateway-0"
Nov 22 07:28:16 crc kubenswrapper[4853]: I1122 07:28:16.409674 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/b876ce6f-c01a-4d2e-813c-abb6fff4a4e2-logging-loki-s3\") pod \"logging-loki-compactor-0\" (UID: \"b876ce6f-c01a-4d2e-813c-abb6fff4a4e2\") " pod="openshift-logging/logging-loki-compactor-0"
Nov 22 07:28:16 crc kubenswrapper[4853]: I1122 07:28:16.409699 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/b876ce6f-c01a-4d2e-813c-abb6fff4a4e2-logging-loki-compactor-grpc\") pod \"logging-loki-compactor-0\" (UID: \"b876ce6f-c01a-4d2e-813c-abb6fff4a4e2\") " pod="openshift-logging/logging-loki-compactor-0"
Nov 22 07:28:16 crc kubenswrapper[4853]: I1122 07:28:16.409729 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-3f070891-b9b7-45d9-9905-8d0e59b12d24\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3f070891-b9b7-45d9-9905-8d0e59b12d24\") pod \"logging-loki-index-gateway-0\" (UID: \"e7f66807-e021-4fa5-bf10-9b2107788d3d\") " pod="openshift-logging/logging-loki-index-gateway-0"
Nov 22 07:28:16 crc kubenswrapper[4853]: I1122 07:28:16.409766 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/e7f66807-e021-4fa5-bf10-9b2107788d3d-logging-loki-s3\") pod \"logging-loki-index-gateway-0\" (UID: \"e7f66807-e021-4fa5-bf10-9b2107788d3d\") " pod="openshift-logging/logging-loki-index-gateway-0"
Nov 22 07:28:16 crc kubenswrapper[4853]: I1122 07:28:16.409787 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/e7f66807-e021-4fa5-bf10-9b2107788d3d-logging-loki-index-gateway-grpc\") pod \"logging-loki-index-gateway-0\" (UID: \"e7f66807-e021-4fa5-bf10-9b2107788d3d\") " pod="openshift-logging/logging-loki-index-gateway-0"
Nov 22 07:28:16 crc kubenswrapper[4853]: I1122 07:28:16.409821 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e7f66807-e021-4fa5-bf10-9b2107788d3d-logging-loki-ca-bundle\") pod \"logging-loki-index-gateway-0\" (UID: \"e7f66807-e021-4fa5-bf10-9b2107788d3d\") " pod="openshift-logging/logging-loki-index-gateway-0"
Nov 22 07:28:16 crc kubenswrapper[4853]: I1122 07:28:16.410898 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e7f66807-e021-4fa5-bf10-9b2107788d3d-logging-loki-ca-bundle\") pod \"logging-loki-index-gateway-0\" (UID: \"e7f66807-e021-4fa5-bf10-9b2107788d3d\") " pod="openshift-logging/logging-loki-index-gateway-0"
Nov 22 07:28:16 crc kubenswrapper[4853]: I1122 07:28:16.412961 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b876ce6f-c01a-4d2e-813c-abb6fff4a4e2-config\") pod \"logging-loki-compactor-0\" (UID: \"b876ce6f-c01a-4d2e-813c-abb6fff4a4e2\") " pod="openshift-logging/logging-loki-compactor-0"
Nov 22 07:28:16 crc kubenswrapper[4853]: I1122 07:28:16.413265 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7f66807-e021-4fa5-bf10-9b2107788d3d-config\") pod \"logging-loki-index-gateway-0\" (UID: \"e7f66807-e021-4fa5-bf10-9b2107788d3d\") " pod="openshift-logging/logging-loki-index-gateway-0"
Nov 22 07:28:16 crc kubenswrapper[4853]: I1122 07:28:16.417051 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b876ce6f-c01a-4d2e-813c-abb6fff4a4e2-logging-loki-ca-bundle\") pod \"logging-loki-compactor-0\" (UID: \"b876ce6f-c01a-4d2e-813c-abb6fff4a4e2\") " pod="openshift-logging/logging-loki-compactor-0"
Nov 22 07:28:16 crc kubenswrapper[4853]: I1122 07:28:16.418832 4853 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Nov 22 07:28:16 crc kubenswrapper[4853]: I1122 07:28:16.418870 4853 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-3f070891-b9b7-45d9-9905-8d0e59b12d24\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3f070891-b9b7-45d9-9905-8d0e59b12d24\") pod \"logging-loki-index-gateway-0\" (UID: \"e7f66807-e021-4fa5-bf10-9b2107788d3d\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/d26226a1d1c96981b47f2d7d852569d84b23d098854bb59dcfb5b750b89f527f/globalmount\"" pod="openshift-logging/logging-loki-index-gateway-0"
Nov 22 07:28:16 crc kubenswrapper[4853]: I1122 07:28:16.419021 4853 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Nov 22 07:28:16 crc kubenswrapper[4853]: I1122 07:28:16.419044 4853 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-c598f727-d5e1-43b6-b8b8-bd9f49afaa82\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c598f727-d5e1-43b6-b8b8-bd9f49afaa82\") pod \"logging-loki-compactor-0\" (UID: \"b876ce6f-c01a-4d2e-813c-abb6fff4a4e2\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/6638472f1b33c773e10e0f021f109fbb40348de3292d4fb91dc13406071019d6/globalmount\"" pod="openshift-logging/logging-loki-compactor-0"
Nov 22 07:28:16 crc kubenswrapper[4853]: I1122 07:28:16.419848 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/e7f66807-e021-4fa5-bf10-9b2107788d3d-logging-loki-index-gateway-http\") pod \"logging-loki-index-gateway-0\" (UID: \"e7f66807-e021-4fa5-bf10-9b2107788d3d\") " pod="openshift-logging/logging-loki-index-gateway-0"
Nov 22 07:28:16 crc kubenswrapper[4853]: I1122 07:28:16.420134 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/b876ce6f-c01a-4d2e-813c-abb6fff4a4e2-logging-loki-compactor-grpc\") pod \"logging-loki-compactor-0\" (UID: \"b876ce6f-c01a-4d2e-813c-abb6fff4a4e2\") " pod="openshift-logging/logging-loki-compactor-0"
Nov 22 07:28:16 crc kubenswrapper[4853]: I1122 07:28:16.420334 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/e7f66807-e021-4fa5-bf10-9b2107788d3d-logging-loki-s3\") pod \"logging-loki-index-gateway-0\" (UID: \"e7f66807-e021-4fa5-bf10-9b2107788d3d\") " pod="openshift-logging/logging-loki-index-gateway-0"
Nov 22 07:28:16 crc kubenswrapper[4853]: I1122 07:28:16.422515 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/e7f66807-e021-4fa5-bf10-9b2107788d3d-logging-loki-index-gateway-grpc\") pod \"logging-loki-index-gateway-0\" (UID: \"e7f66807-e021-4fa5-bf10-9b2107788d3d\") " pod="openshift-logging/logging-loki-index-gateway-0"
Nov 22 07:28:16 crc kubenswrapper[4853]: I1122 07:28:16.422653 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/b876ce6f-c01a-4d2e-813c-abb6fff4a4e2-logging-loki-s3\") pod \"logging-loki-compactor-0\" (UID: \"b876ce6f-c01a-4d2e-813c-abb6fff4a4e2\") " pod="openshift-logging/logging-loki-compactor-0"
Nov 22 07:28:16 crc kubenswrapper[4853]: I1122 07:28:16.431799 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jh4bf\" (UniqueName: \"kubernetes.io/projected/e7f66807-e021-4fa5-bf10-9b2107788d3d-kube-api-access-jh4bf\") pod \"logging-loki-index-gateway-0\" (UID: \"e7f66807-e021-4fa5-bf10-9b2107788d3d\") " pod="openshift-logging/logging-loki-index-gateway-0"
Nov 22 07:28:16 crc kubenswrapper[4853]: I1122 07:28:16.434694 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sl92k\" (UniqueName: \"kubernetes.io/projected/b876ce6f-c01a-4d2e-813c-abb6fff4a4e2-kube-api-access-sl92k\") pod \"logging-loki-compactor-0\" (UID: \"b876ce6f-c01a-4d2e-813c-abb6fff4a4e2\") " pod="openshift-logging/logging-loki-compactor-0"
Nov 22 07:28:16 crc kubenswrapper[4853]: I1122 07:28:16.441066 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-compactor-http\" (UniqueName: \"kubernetes.io/secret/b876ce6f-c01a-4d2e-813c-abb6fff4a4e2-logging-loki-compactor-http\") pod \"logging-loki-compactor-0\" (UID: \"b876ce6f-c01a-4d2e-813c-abb6fff4a4e2\") " pod="openshift-logging/logging-loki-compactor-0"
Nov 22 07:28:16 crc kubenswrapper[4853]: I1122 07:28:16.458840 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-c598f727-d5e1-43b6-b8b8-bd9f49afaa82\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c598f727-d5e1-43b6-b8b8-bd9f49afaa82\") pod \"logging-loki-compactor-0\" (UID: \"b876ce6f-c01a-4d2e-813c-abb6fff4a4e2\") " pod="openshift-logging/logging-loki-compactor-0"
Nov 22 07:28:16 crc kubenswrapper[4853]: I1122 07:28:16.469264 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-3f070891-b9b7-45d9-9905-8d0e59b12d24\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3f070891-b9b7-45d9-9905-8d0e59b12d24\") pod \"logging-loki-index-gateway-0\" (UID: \"e7f66807-e021-4fa5-bf10-9b2107788d3d\") " pod="openshift-logging/logging-loki-index-gateway-0"
Nov 22 07:28:16 crc kubenswrapper[4853]: I1122 07:28:16.499994 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-compactor-0"
Nov 22 07:28:16 crc kubenswrapper[4853]: I1122 07:28:16.516854 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-76bd965446-ndqqx"]
Nov 22 07:28:16 crc kubenswrapper[4853]: W1122 07:28:16.530206 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf680143e_738e_4726_bd8e_1f14bf3f4eaa.slice/crio-0e6bfb5944c08a3a8ae93055aafb35ba1e7b521fd36c8f621fc3825b148c2e92 WatchSource:0}: Error finding container 0e6bfb5944c08a3a8ae93055aafb35ba1e7b521fd36c8f621fc3825b148c2e92: Status 404 returned error can't find the container with id 0e6bfb5944c08a3a8ae93055aafb35ba1e7b521fd36c8f621fc3825b148c2e92
Nov 22 07:28:16 crc kubenswrapper[4853]: I1122 07:28:16.561996 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-76bd965446-l8bwp"]
Nov 22 07:28:16 crc kubenswrapper[4853]: W1122 07:28:16.568400 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5729c668_8833_48b4_9e48_bcf753621ff7.slice/crio-dc0be782730c020532edab39c2a93fbe81f217e23772190a30b19b559ad4f7df WatchSource:0}: Error finding container dc0be782730c020532edab39c2a93fbe81f217e23772190a30b19b559ad4f7df: Status 404 returned error can't find the container with id dc0be782730c020532edab39c2a93fbe81f217e23772190a30b19b559ad4f7df
Nov 22 07:28:16 crc kubenswrapper[4853]: I1122 07:28:16.590210 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-index-gateway-0"
Nov 22 07:28:16 crc kubenswrapper[4853]: I1122 07:28:16.635740 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-ingester-0"
Need to start a new one" pod="openshift-logging/logging-loki-ingester-0" Nov 22 07:28:16 crc kubenswrapper[4853]: I1122 07:28:16.749918 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-compactor-0"] Nov 22 07:28:16 crc kubenswrapper[4853]: I1122 07:28:16.906156 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-ingester-0"] Nov 22 07:28:16 crc kubenswrapper[4853]: W1122 07:28:16.910466 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf6d37108_c1bc_4250_ba04_924fe0dabff3.slice/crio-a82684b512d14a69e8a8eab1645c6232c765a3611b4b172f19c687c1920ffbd3 WatchSource:0}: Error finding container a82684b512d14a69e8a8eab1645c6232c765a3611b4b172f19c687c1920ffbd3: Status 404 returned error can't find the container with id a82684b512d14a69e8a8eab1645c6232c765a3611b4b172f19c687c1920ffbd3 Nov 22 07:28:17 crc kubenswrapper[4853]: I1122 07:28:17.057402 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-index-gateway-0"] Nov 22 07:28:17 crc kubenswrapper[4853]: W1122 07:28:17.065410 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode7f66807_e021_4fa5_bf10_9b2107788d3d.slice/crio-85dd255693182063968757cc56bb65ed3994585a5c4dd92597b544f5a65d5980 WatchSource:0}: Error finding container 85dd255693182063968757cc56bb65ed3994585a5c4dd92597b544f5a65d5980: Status 404 returned error can't find the container with id 85dd255693182063968757cc56bb65ed3994585a5c4dd92597b544f5a65d5980 Nov 22 07:28:17 crc kubenswrapper[4853]: I1122 07:28:17.172831 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-ingester-0" event={"ID":"f6d37108-c1bc-4250-ba04-924fe0dabff3","Type":"ContainerStarted","Data":"a82684b512d14a69e8a8eab1645c6232c765a3611b4b172f19c687c1920ffbd3"} Nov 22 07:28:17 crc kubenswrapper[4853]: I1122 07:28:17.175104 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-compactor-0" event={"ID":"b876ce6f-c01a-4d2e-813c-abb6fff4a4e2","Type":"ContainerStarted","Data":"8feee723679b9366678c248dc15430ba49925e7758683f08e883a53a1e3bc4bd"} Nov 22 07:28:17 crc kubenswrapper[4853]: I1122 07:28:17.176658 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-76bd965446-l8bwp" event={"ID":"5729c668-8833-48b4-9e48-bcf753621ff7","Type":"ContainerStarted","Data":"dc0be782730c020532edab39c2a93fbe81f217e23772190a30b19b559ad4f7df"} Nov 22 07:28:17 crc kubenswrapper[4853]: I1122 07:28:17.177703 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-index-gateway-0" event={"ID":"e7f66807-e021-4fa5-bf10-9b2107788d3d","Type":"ContainerStarted","Data":"85dd255693182063968757cc56bb65ed3994585a5c4dd92597b544f5a65d5980"} Nov 22 07:28:17 crc kubenswrapper[4853]: I1122 07:28:17.180995 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-76bd965446-ndqqx" event={"ID":"f680143e-738e-4726-bd8e-1f14bf3f4eaa","Type":"ContainerStarted","Data":"0e6bfb5944c08a3a8ae93055aafb35ba1e7b521fd36c8f621fc3825b148c2e92"} Nov 22 07:28:17 crc kubenswrapper[4853]: I1122 07:28:17.183847 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-fq9dp" 
event={"ID":"b5a31aa4-7663-4b33-9a60-9b7bb676419a","Type":"ContainerStarted","Data":"4b5ba468e868e147c29b0ec8b41cee48261d281b808a2774e4cfe14a89821d31"} Nov 22 07:28:31 crc kubenswrapper[4853]: I1122 07:28:31.307262 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-querier-5895d59bb8-rs9nq" event={"ID":"13c77518-7ced-4c03-a300-d00ec52fa068","Type":"ContainerStarted","Data":"793953e3a1a97ce8ece95d63ee5807dcd21cb0e54f420bf490996f4630b27cf8"} Nov 22 07:28:32 crc kubenswrapper[4853]: I1122 07:28:32.320297 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-76bd965446-ndqqx" event={"ID":"f680143e-738e-4726-bd8e-1f14bf3f4eaa","Type":"ContainerStarted","Data":"9ccb8e10bcda47dda9cbe12fdd2e2384aaad12722a22004a6e766522600317a7"} Nov 22 07:28:32 crc kubenswrapper[4853]: I1122 07:28:32.322512 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-fq9dp" event={"ID":"b5a31aa4-7663-4b33-9a60-9b7bb676419a","Type":"ContainerStarted","Data":"87167094ab205867f67c7602101abdbfa8566bff43ad5a7467b82143cf416489"} Nov 22 07:28:32 crc kubenswrapper[4853]: I1122 07:28:32.322652 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-fq9dp" Nov 22 07:28:32 crc kubenswrapper[4853]: I1122 07:28:32.324360 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-ingester-0" event={"ID":"f6d37108-c1bc-4250-ba04-924fe0dabff3","Type":"ContainerStarted","Data":"f4507a7b0d7a0f557370bcf99963787f67f9dd5cae992c28886ca568aaaeddf8"} Nov 22 07:28:32 crc kubenswrapper[4853]: I1122 07:28:32.324429 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-ingester-0" Nov 22 07:28:32 crc kubenswrapper[4853]: I1122 07:28:32.327288 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-compactor-0" event={"ID":"b876ce6f-c01a-4d2e-813c-abb6fff4a4e2","Type":"ContainerStarted","Data":"8c3648ca26004268fc301d387cd6639060287c615fd51128450f649a95d9a651"} Nov 22 07:28:32 crc kubenswrapper[4853]: I1122 07:28:32.328012 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-compactor-0" Nov 22 07:28:32 crc kubenswrapper[4853]: I1122 07:28:32.329668 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-distributor-76cc67bf56-bfp4g" event={"ID":"2eba451f-bb08-4f70-ad59-aa64a216f265","Type":"ContainerStarted","Data":"c19ac6ea537f88a0d236314ed9156649cead945468b65a95bb30792e14f76e0b"} Nov 22 07:28:32 crc kubenswrapper[4853]: I1122 07:28:32.331375 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-76bd965446-l8bwp" event={"ID":"5729c668-8833-48b4-9e48-bcf753621ff7","Type":"ContainerStarted","Data":"a17f76cd91accaa346f9d52dc97ed4290c3baa24df977f2b3bc908d32ee74bf0"} Nov 22 07:28:32 crc kubenswrapper[4853]: I1122 07:28:32.334197 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-index-gateway-0" event={"ID":"e7f66807-e021-4fa5-bf10-9b2107788d3d","Type":"ContainerStarted","Data":"2159d944b65eea21f0f0e93952d4bd9755c2466330f355215ac97ec6e3817c34"} Nov 22 07:28:32 crc kubenswrapper[4853]: I1122 07:28:32.334231 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-querier-5895d59bb8-rs9nq" Nov 22 07:28:32 
Nov 22 07:28:32 crc kubenswrapper[4853]: I1122 07:28:32.346078 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-fq9dp" podStartSLOduration=3.996939065 podStartE2EDuration="17.346052116s" podCreationTimestamp="2025-11-22 07:28:15 +0000 UTC" firstStartedPulling="2025-11-22 07:28:16.156208362 +0000 UTC m=+1094.996830988" lastFinishedPulling="2025-11-22 07:28:29.505321413 +0000 UTC m=+1108.345944039" observedRunningTime="2025-11-22 07:28:32.344816262 +0000 UTC m=+1111.185438898" watchObservedRunningTime="2025-11-22 07:28:32.346052116 +0000 UTC m=+1111.186674762"
Nov 22 07:28:32 crc kubenswrapper[4853]: I1122 07:28:32.362804 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-querier-5895d59bb8-rs9nq" podStartSLOduration=8.110222814 podStartE2EDuration="18.36278009s" podCreationTimestamp="2025-11-22 07:28:14 +0000 UTC" firstStartedPulling="2025-11-22 07:28:15.947454833 +0000 UTC m=+1094.788077459" lastFinishedPulling="2025-11-22 07:28:26.200012109 +0000 UTC m=+1105.040634735" observedRunningTime="2025-11-22 07:28:32.359950213 +0000 UTC m=+1111.200572859" watchObservedRunningTime="2025-11-22 07:28:32.36278009 +0000 UTC m=+1111.203402706"
Nov 22 07:28:32 crc kubenswrapper[4853]: I1122 07:28:32.382759 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-compactor-0" podStartSLOduration=7.511782508 podStartE2EDuration="17.382722112s" podCreationTimestamp="2025-11-22 07:28:15 +0000 UTC" firstStartedPulling="2025-11-22 07:28:16.766770428 +0000 UTC m=+1095.607393054" lastFinishedPulling="2025-11-22 07:28:26.637710032 +0000 UTC m=+1105.478332658" observedRunningTime="2025-11-22 07:28:32.379683978 +0000 UTC m=+1111.220306604" watchObservedRunningTime="2025-11-22 07:28:32.382722112 +0000 UTC m=+1111.223344738"
Nov 22 07:28:32 crc kubenswrapper[4853]: I1122 07:28:32.404508 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-index-gateway-0" podStartSLOduration=5.378525505 podStartE2EDuration="17.404477381s" podCreationTimestamp="2025-11-22 07:28:15 +0000 UTC" firstStartedPulling="2025-11-22 07:28:17.068349018 +0000 UTC m=+1095.908971684" lastFinishedPulling="2025-11-22 07:28:29.094300934 +0000 UTC m=+1107.934923560" observedRunningTime="2025-11-22 07:28:32.399147866 +0000 UTC m=+1111.239770502" watchObservedRunningTime="2025-11-22 07:28:32.404477381 +0000 UTC m=+1111.245100007"
Nov 22 07:28:32 crc kubenswrapper[4853]: I1122 07:28:32.421497 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-distributor-76cc67bf56-bfp4g" podStartSLOduration=5.149578487 podStartE2EDuration="18.421472652s" podCreationTimestamp="2025-11-22 07:28:14 +0000 UTC" firstStartedPulling="2025-11-22 07:28:15.823052176 +0000 UTC m=+1094.663674802" lastFinishedPulling="2025-11-22 07:28:29.094946341 +0000 UTC m=+1107.935568967" observedRunningTime="2025-11-22 07:28:32.416856957 +0000 UTC m=+1111.257479593" watchObservedRunningTime="2025-11-22 07:28:32.421472652 +0000 UTC m=+1111.262095278"
Nov 22 07:28:33 crc kubenswrapper[4853]: I1122 07:28:33.343397 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-distributor-76cc67bf56-bfp4g"
Nov 22 07:28:35 crc kubenswrapper[4853]: I1122 07:28:35.359892 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-76bd965446-ndqqx" event={"ID":"f680143e-738e-4726-bd8e-1f14bf3f4eaa","Type":"ContainerStarted","Data":"d809c0cb2dc8492fbbd3ff2fc644fd4d349271b8276e4ff88f04b1b1407757f8"}
Nov 22 07:28:35 crc kubenswrapper[4853]: I1122 07:28:35.360302 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-76bd965446-ndqqx"
Nov 22 07:28:35 crc kubenswrapper[4853]: I1122 07:28:35.360506 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-76bd965446-ndqqx"
Nov 22 07:28:35 crc kubenswrapper[4853]: I1122 07:28:35.363692 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-76bd965446-l8bwp" event={"ID":"5729c668-8833-48b4-9e48-bcf753621ff7","Type":"ContainerStarted","Data":"70f7c356adfff129aaae591676ae3b407ca7c969ce34031860d484f76f7ee0a5"}
Nov 22 07:28:35 crc kubenswrapper[4853]: I1122 07:28:35.364041 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-76bd965446-l8bwp"
Nov 22 07:28:35 crc kubenswrapper[4853]: I1122 07:28:35.388817 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-ingester-0" podStartSLOduration=9.207421495 podStartE2EDuration="21.388794519s" podCreationTimestamp="2025-11-22 07:28:14 +0000 UTC" firstStartedPulling="2025-11-22 07:28:16.912980491 +0000 UTC m=+1095.753603117" lastFinishedPulling="2025-11-22 07:28:29.094353505 +0000 UTC m=+1107.934976141" observedRunningTime="2025-11-22 07:28:32.450237363 +0000 UTC m=+1111.290860009" watchObservedRunningTime="2025-11-22 07:28:35.388794519 +0000 UTC m=+1114.229417145"
Nov 22 07:28:35 crc kubenswrapper[4853]: I1122 07:28:35.393392 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-gateway-76bd965446-ndqqx" podStartSLOduration=2.298188307 podStartE2EDuration="20.393374454s" podCreationTimestamp="2025-11-22 07:28:15 +0000 UTC" firstStartedPulling="2025-11-22 07:28:16.53305138 +0000 UTC m=+1095.373674006" lastFinishedPulling="2025-11-22 07:28:34.628237507 +0000 UTC m=+1113.468860153" observedRunningTime="2025-11-22 07:28:35.387613047 +0000 UTC m=+1114.228235663" watchObservedRunningTime="2025-11-22 07:28:35.393374454 +0000 UTC m=+1114.233997080"
Nov 22 07:28:35 crc kubenswrapper[4853]: I1122 07:28:35.396545 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-76bd965446-ndqqx"
Nov 22 07:28:35 crc kubenswrapper[4853]: I1122 07:28:35.398110 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-76bd965446-l8bwp"
Nov 22 07:28:35 crc kubenswrapper[4853]: I1122 07:28:35.400388 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-76bd965446-ndqqx"
Nov 22 07:28:35 crc kubenswrapper[4853]: I1122 07:28:35.415851 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-gateway-76bd965446-l8bwp" podStartSLOduration=2.367644819 podStartE2EDuration="20.415826873s" podCreationTimestamp="2025-11-22 07:28:15 +0000 UTC" firstStartedPulling="2025-11-22 07:28:16.573898586 +0000 UTC m=+1095.414521202" lastFinishedPulling="2025-11-22 07:28:34.62208062 +0000 UTC m=+1113.462703256" observedRunningTime="2025-11-22 07:28:35.41128679 +0000 UTC m=+1114.251909436" watchObservedRunningTime="2025-11-22 07:28:35.415826873 +0000 UTC m=+1114.256449499"
+0000 UTC m=+1095.414521202" lastFinishedPulling="2025-11-22 07:28:34.62208062 +0000 UTC m=+1113.462703256" observedRunningTime="2025-11-22 07:28:35.41128679 +0000 UTC m=+1114.251909436" watchObservedRunningTime="2025-11-22 07:28:35.415826873 +0000 UTC m=+1114.256449499" Nov 22 07:28:36 crc kubenswrapper[4853]: I1122 07:28:36.199480 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-76bd965446-l8bwp" Nov 22 07:28:36 crc kubenswrapper[4853]: I1122 07:28:36.213195 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-76bd965446-l8bwp" Nov 22 07:28:46 crc kubenswrapper[4853]: I1122 07:28:46.512884 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-compactor-0" Nov 22 07:28:46 crc kubenswrapper[4853]: I1122 07:28:46.600208 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-index-gateway-0" Nov 22 07:28:46 crc kubenswrapper[4853]: I1122 07:28:46.657903 4853 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: this instance owns no tokens Nov 22 07:28:46 crc kubenswrapper[4853]: I1122 07:28:46.657967 4853 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="f6d37108-c1bc-4250-ba04-924fe0dabff3" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Nov 22 07:28:55 crc kubenswrapper[4853]: I1122 07:28:55.170403 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-distributor-76cc67bf56-bfp4g" Nov 22 07:28:55 crc kubenswrapper[4853]: I1122 07:28:55.341924 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-querier-5895d59bb8-rs9nq" Nov 22 07:28:55 crc kubenswrapper[4853]: I1122 07:28:55.524940 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-fq9dp" Nov 22 07:28:56 crc kubenswrapper[4853]: I1122 07:28:56.644465 4853 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: this instance owns no tokens Nov 22 07:28:56 crc kubenswrapper[4853]: I1122 07:28:56.645043 4853 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="f6d37108-c1bc-4250-ba04-924fe0dabff3" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Nov 22 07:29:06 crc kubenswrapper[4853]: I1122 07:29:06.643247 4853 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: waiting for 15s after being ready Nov 22 07:29:06 crc kubenswrapper[4853]: I1122 07:29:06.644337 4853 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="f6d37108-c1bc-4250-ba04-924fe0dabff3" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with 
statuscode: 503" Nov 22 07:29:16 crc kubenswrapper[4853]: I1122 07:29:16.642556 4853 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: waiting for 15s after being ready Nov 22 07:29:16 crc kubenswrapper[4853]: I1122 07:29:16.643297 4853 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="f6d37108-c1bc-4250-ba04-924fe0dabff3" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Nov 22 07:29:26 crc kubenswrapper[4853]: I1122 07:29:26.644643 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-ingester-0" Nov 22 07:29:36 crc kubenswrapper[4853]: I1122 07:29:36.611316 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/collector-sf4h6"] Nov 22 07:29:36 crc kubenswrapper[4853]: I1122 07:29:36.613576 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-sf4h6" Nov 22 07:29:36 crc kubenswrapper[4853]: I1122 07:29:36.617938 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-syslog-receiver" Nov 22 07:29:36 crc kubenswrapper[4853]: I1122 07:29:36.621438 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-metrics" Nov 22 07:29:36 crc kubenswrapper[4853]: I1122 07:29:36.621835 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-config" Nov 22 07:29:36 crc kubenswrapper[4853]: I1122 07:29:36.622616 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-token" Nov 22 07:29:36 crc kubenswrapper[4853]: I1122 07:29:36.622738 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-dockercfg-f4f6z" Nov 22 07:29:36 crc kubenswrapper[4853]: I1122 07:29:36.630292 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/collector-sf4h6"] Nov 22 07:29:36 crc kubenswrapper[4853]: I1122 07:29:36.640443 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-trustbundle" Nov 22 07:29:36 crc kubenswrapper[4853]: I1122 07:29:36.709738 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/d3fc9986-1b90-49c8-aa42-c36074f34dac-collector-token\") pod \"collector-sf4h6\" (UID: \"d3fc9986-1b90-49c8-aa42-c36074f34dac\") " pod="openshift-logging/collector-sf4h6" Nov 22 07:29:36 crc kubenswrapper[4853]: I1122 07:29:36.709880 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/d3fc9986-1b90-49c8-aa42-c36074f34dac-collector-syslog-receiver\") pod \"collector-sf4h6\" (UID: \"d3fc9986-1b90-49c8-aa42-c36074f34dac\") " pod="openshift-logging/collector-sf4h6" Nov 22 07:29:36 crc kubenswrapper[4853]: I1122 07:29:36.709906 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/d3fc9986-1b90-49c8-aa42-c36074f34dac-entrypoint\") pod \"collector-sf4h6\" (UID: \"d3fc9986-1b90-49c8-aa42-c36074f34dac\") " 
pod="openshift-logging/collector-sf4h6" Nov 22 07:29:36 crc kubenswrapper[4853]: I1122 07:29:36.709921 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/d3fc9986-1b90-49c8-aa42-c36074f34dac-datadir\") pod \"collector-sf4h6\" (UID: \"d3fc9986-1b90-49c8-aa42-c36074f34dac\") " pod="openshift-logging/collector-sf4h6" Nov 22 07:29:36 crc kubenswrapper[4853]: I1122 07:29:36.709964 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d3fc9986-1b90-49c8-aa42-c36074f34dac-tmp\") pod \"collector-sf4h6\" (UID: \"d3fc9986-1b90-49c8-aa42-c36074f34dac\") " pod="openshift-logging/collector-sf4h6" Nov 22 07:29:36 crc kubenswrapper[4853]: I1122 07:29:36.710359 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/d3fc9986-1b90-49c8-aa42-c36074f34dac-metrics\") pod \"collector-sf4h6\" (UID: \"d3fc9986-1b90-49c8-aa42-c36074f34dac\") " pod="openshift-logging/collector-sf4h6" Nov 22 07:29:36 crc kubenswrapper[4853]: I1122 07:29:36.710488 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d3fc9986-1b90-49c8-aa42-c36074f34dac-trusted-ca\") pod \"collector-sf4h6\" (UID: \"d3fc9986-1b90-49c8-aa42-c36074f34dac\") " pod="openshift-logging/collector-sf4h6" Nov 22 07:29:36 crc kubenswrapper[4853]: I1122 07:29:36.710640 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/d3fc9986-1b90-49c8-aa42-c36074f34dac-config-openshift-service-cacrt\") pod \"collector-sf4h6\" (UID: \"d3fc9986-1b90-49c8-aa42-c36074f34dac\") " pod="openshift-logging/collector-sf4h6" Nov 22 07:29:36 crc kubenswrapper[4853]: I1122 07:29:36.710863 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/d3fc9986-1b90-49c8-aa42-c36074f34dac-sa-token\") pod \"collector-sf4h6\" (UID: \"d3fc9986-1b90-49c8-aa42-c36074f34dac\") " pod="openshift-logging/collector-sf4h6" Nov 22 07:29:36 crc kubenswrapper[4853]: I1122 07:29:36.710942 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x2c4k\" (UniqueName: \"kubernetes.io/projected/d3fc9986-1b90-49c8-aa42-c36074f34dac-kube-api-access-x2c4k\") pod \"collector-sf4h6\" (UID: \"d3fc9986-1b90-49c8-aa42-c36074f34dac\") " pod="openshift-logging/collector-sf4h6" Nov 22 07:29:36 crc kubenswrapper[4853]: I1122 07:29:36.711074 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d3fc9986-1b90-49c8-aa42-c36074f34dac-config\") pod \"collector-sf4h6\" (UID: \"d3fc9986-1b90-49c8-aa42-c36074f34dac\") " pod="openshift-logging/collector-sf4h6" Nov 22 07:29:36 crc kubenswrapper[4853]: I1122 07:29:36.778474 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-logging/collector-sf4h6"] Nov 22 07:29:36 crc kubenswrapper[4853]: E1122 07:29:36.779224 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[collector-syslog-receiver collector-token config config-openshift-service-cacrt datadir entrypoint kube-api-access-x2c4k metrics 
sa-token tmp trusted-ca], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-logging/collector-sf4h6" podUID="d3fc9986-1b90-49c8-aa42-c36074f34dac" Nov 22 07:29:36 crc kubenswrapper[4853]: I1122 07:29:36.812511 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/d3fc9986-1b90-49c8-aa42-c36074f34dac-collector-token\") pod \"collector-sf4h6\" (UID: \"d3fc9986-1b90-49c8-aa42-c36074f34dac\") " pod="openshift-logging/collector-sf4h6" Nov 22 07:29:36 crc kubenswrapper[4853]: I1122 07:29:36.812590 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/d3fc9986-1b90-49c8-aa42-c36074f34dac-collector-syslog-receiver\") pod \"collector-sf4h6\" (UID: \"d3fc9986-1b90-49c8-aa42-c36074f34dac\") " pod="openshift-logging/collector-sf4h6" Nov 22 07:29:36 crc kubenswrapper[4853]: I1122 07:29:36.812612 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/d3fc9986-1b90-49c8-aa42-c36074f34dac-entrypoint\") pod \"collector-sf4h6\" (UID: \"d3fc9986-1b90-49c8-aa42-c36074f34dac\") " pod="openshift-logging/collector-sf4h6" Nov 22 07:29:36 crc kubenswrapper[4853]: I1122 07:29:36.812633 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/d3fc9986-1b90-49c8-aa42-c36074f34dac-datadir\") pod \"collector-sf4h6\" (UID: \"d3fc9986-1b90-49c8-aa42-c36074f34dac\") " pod="openshift-logging/collector-sf4h6" Nov 22 07:29:36 crc kubenswrapper[4853]: I1122 07:29:36.812676 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d3fc9986-1b90-49c8-aa42-c36074f34dac-tmp\") pod \"collector-sf4h6\" (UID: \"d3fc9986-1b90-49c8-aa42-c36074f34dac\") " pod="openshift-logging/collector-sf4h6" Nov 22 07:29:36 crc kubenswrapper[4853]: I1122 07:29:36.812709 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/d3fc9986-1b90-49c8-aa42-c36074f34dac-metrics\") pod \"collector-sf4h6\" (UID: \"d3fc9986-1b90-49c8-aa42-c36074f34dac\") " pod="openshift-logging/collector-sf4h6" Nov 22 07:29:36 crc kubenswrapper[4853]: I1122 07:29:36.812734 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d3fc9986-1b90-49c8-aa42-c36074f34dac-trusted-ca\") pod \"collector-sf4h6\" (UID: \"d3fc9986-1b90-49c8-aa42-c36074f34dac\") " pod="openshift-logging/collector-sf4h6" Nov 22 07:29:36 crc kubenswrapper[4853]: I1122 07:29:36.812806 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/d3fc9986-1b90-49c8-aa42-c36074f34dac-config-openshift-service-cacrt\") pod \"collector-sf4h6\" (UID: \"d3fc9986-1b90-49c8-aa42-c36074f34dac\") " pod="openshift-logging/collector-sf4h6" Nov 22 07:29:36 crc kubenswrapper[4853]: E1122 07:29:36.812808 4853 secret.go:188] Couldn't get secret openshift-logging/collector-syslog-receiver: secret "collector-syslog-receiver" not found Nov 22 07:29:36 crc kubenswrapper[4853]: I1122 07:29:36.812847 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sa-token\" (UniqueName: 
\"kubernetes.io/projected/d3fc9986-1b90-49c8-aa42-c36074f34dac-sa-token\") pod \"collector-sf4h6\" (UID: \"d3fc9986-1b90-49c8-aa42-c36074f34dac\") " pod="openshift-logging/collector-sf4h6" Nov 22 07:29:36 crc kubenswrapper[4853]: I1122 07:29:36.812877 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x2c4k\" (UniqueName: \"kubernetes.io/projected/d3fc9986-1b90-49c8-aa42-c36074f34dac-kube-api-access-x2c4k\") pod \"collector-sf4h6\" (UID: \"d3fc9986-1b90-49c8-aa42-c36074f34dac\") " pod="openshift-logging/collector-sf4h6" Nov 22 07:29:36 crc kubenswrapper[4853]: I1122 07:29:36.812876 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/d3fc9986-1b90-49c8-aa42-c36074f34dac-datadir\") pod \"collector-sf4h6\" (UID: \"d3fc9986-1b90-49c8-aa42-c36074f34dac\") " pod="openshift-logging/collector-sf4h6" Nov 22 07:29:36 crc kubenswrapper[4853]: E1122 07:29:36.812905 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d3fc9986-1b90-49c8-aa42-c36074f34dac-collector-syslog-receiver podName:d3fc9986-1b90-49c8-aa42-c36074f34dac nodeName:}" failed. No retries permitted until 2025-11-22 07:29:37.312881948 +0000 UTC m=+1176.153504574 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "collector-syslog-receiver" (UniqueName: "kubernetes.io/secret/d3fc9986-1b90-49c8-aa42-c36074f34dac-collector-syslog-receiver") pod "collector-sf4h6" (UID: "d3fc9986-1b90-49c8-aa42-c36074f34dac") : secret "collector-syslog-receiver" not found Nov 22 07:29:36 crc kubenswrapper[4853]: I1122 07:29:36.813015 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d3fc9986-1b90-49c8-aa42-c36074f34dac-config\") pod \"collector-sf4h6\" (UID: \"d3fc9986-1b90-49c8-aa42-c36074f34dac\") " pod="openshift-logging/collector-sf4h6" Nov 22 07:29:36 crc kubenswrapper[4853]: I1122 07:29:36.813671 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/d3fc9986-1b90-49c8-aa42-c36074f34dac-entrypoint\") pod \"collector-sf4h6\" (UID: \"d3fc9986-1b90-49c8-aa42-c36074f34dac\") " pod="openshift-logging/collector-sf4h6" Nov 22 07:29:36 crc kubenswrapper[4853]: E1122 07:29:36.813978 4853 secret.go:188] Couldn't get secret openshift-logging/collector-metrics: secret "collector-metrics" not found Nov 22 07:29:36 crc kubenswrapper[4853]: E1122 07:29:36.814118 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d3fc9986-1b90-49c8-aa42-c36074f34dac-metrics podName:d3fc9986-1b90-49c8-aa42-c36074f34dac nodeName:}" failed. No retries permitted until 2025-11-22 07:29:37.314081721 +0000 UTC m=+1176.154704517 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics" (UniqueName: "kubernetes.io/secret/d3fc9986-1b90-49c8-aa42-c36074f34dac-metrics") pod "collector-sf4h6" (UID: "d3fc9986-1b90-49c8-aa42-c36074f34dac") : secret "collector-metrics" not found Nov 22 07:29:36 crc kubenswrapper[4853]: I1122 07:29:36.814158 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d3fc9986-1b90-49c8-aa42-c36074f34dac-config\") pod \"collector-sf4h6\" (UID: \"d3fc9986-1b90-49c8-aa42-c36074f34dac\") " pod="openshift-logging/collector-sf4h6" Nov 22 07:29:36 crc kubenswrapper[4853]: I1122 07:29:36.814545 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d3fc9986-1b90-49c8-aa42-c36074f34dac-trusted-ca\") pod \"collector-sf4h6\" (UID: \"d3fc9986-1b90-49c8-aa42-c36074f34dac\") " pod="openshift-logging/collector-sf4h6" Nov 22 07:29:36 crc kubenswrapper[4853]: I1122 07:29:36.814661 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/d3fc9986-1b90-49c8-aa42-c36074f34dac-config-openshift-service-cacrt\") pod \"collector-sf4h6\" (UID: \"d3fc9986-1b90-49c8-aa42-c36074f34dac\") " pod="openshift-logging/collector-sf4h6" Nov 22 07:29:36 crc kubenswrapper[4853]: I1122 07:29:36.819622 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d3fc9986-1b90-49c8-aa42-c36074f34dac-tmp\") pod \"collector-sf4h6\" (UID: \"d3fc9986-1b90-49c8-aa42-c36074f34dac\") " pod="openshift-logging/collector-sf4h6" Nov 22 07:29:36 crc kubenswrapper[4853]: I1122 07:29:36.822554 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/d3fc9986-1b90-49c8-aa42-c36074f34dac-collector-token\") pod \"collector-sf4h6\" (UID: \"d3fc9986-1b90-49c8-aa42-c36074f34dac\") " pod="openshift-logging/collector-sf4h6" Nov 22 07:29:36 crc kubenswrapper[4853]: I1122 07:29:36.844795 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x2c4k\" (UniqueName: \"kubernetes.io/projected/d3fc9986-1b90-49c8-aa42-c36074f34dac-kube-api-access-x2c4k\") pod \"collector-sf4h6\" (UID: \"d3fc9986-1b90-49c8-aa42-c36074f34dac\") " pod="openshift-logging/collector-sf4h6" Nov 22 07:29:36 crc kubenswrapper[4853]: I1122 07:29:36.844874 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/d3fc9986-1b90-49c8-aa42-c36074f34dac-sa-token\") pod \"collector-sf4h6\" (UID: \"d3fc9986-1b90-49c8-aa42-c36074f34dac\") " pod="openshift-logging/collector-sf4h6" Nov 22 07:29:36 crc kubenswrapper[4853]: I1122 07:29:36.868205 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-sf4h6" Nov 22 07:29:36 crc kubenswrapper[4853]: I1122 07:29:36.891461 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/collector-sf4h6" Nov 22 07:29:37 crc kubenswrapper[4853]: I1122 07:29:37.016663 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/d3fc9986-1b90-49c8-aa42-c36074f34dac-sa-token\") pod \"d3fc9986-1b90-49c8-aa42-c36074f34dac\" (UID: \"d3fc9986-1b90-49c8-aa42-c36074f34dac\") " Nov 22 07:29:37 crc kubenswrapper[4853]: I1122 07:29:37.016803 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d3fc9986-1b90-49c8-aa42-c36074f34dac-trusted-ca\") pod \"d3fc9986-1b90-49c8-aa42-c36074f34dac\" (UID: \"d3fc9986-1b90-49c8-aa42-c36074f34dac\") " Nov 22 07:29:37 crc kubenswrapper[4853]: I1122 07:29:37.016847 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/d3fc9986-1b90-49c8-aa42-c36074f34dac-datadir\") pod \"d3fc9986-1b90-49c8-aa42-c36074f34dac\" (UID: \"d3fc9986-1b90-49c8-aa42-c36074f34dac\") " Nov 22 07:29:37 crc kubenswrapper[4853]: I1122 07:29:37.016890 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/d3fc9986-1b90-49c8-aa42-c36074f34dac-entrypoint\") pod \"d3fc9986-1b90-49c8-aa42-c36074f34dac\" (UID: \"d3fc9986-1b90-49c8-aa42-c36074f34dac\") " Nov 22 07:29:37 crc kubenswrapper[4853]: I1122 07:29:37.016931 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d3fc9986-1b90-49c8-aa42-c36074f34dac-config\") pod \"d3fc9986-1b90-49c8-aa42-c36074f34dac\" (UID: \"d3fc9986-1b90-49c8-aa42-c36074f34dac\") " Nov 22 07:29:37 crc kubenswrapper[4853]: I1122 07:29:37.017004 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/d3fc9986-1b90-49c8-aa42-c36074f34dac-config-openshift-service-cacrt\") pod \"d3fc9986-1b90-49c8-aa42-c36074f34dac\" (UID: \"d3fc9986-1b90-49c8-aa42-c36074f34dac\") " Nov 22 07:29:37 crc kubenswrapper[4853]: I1122 07:29:37.017002 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d3fc9986-1b90-49c8-aa42-c36074f34dac-datadir" (OuterVolumeSpecName: "datadir") pod "d3fc9986-1b90-49c8-aa42-c36074f34dac" (UID: "d3fc9986-1b90-49c8-aa42-c36074f34dac"). InnerVolumeSpecName "datadir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 22 07:29:37 crc kubenswrapper[4853]: I1122 07:29:37.017083 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/d3fc9986-1b90-49c8-aa42-c36074f34dac-collector-token\") pod \"d3fc9986-1b90-49c8-aa42-c36074f34dac\" (UID: \"d3fc9986-1b90-49c8-aa42-c36074f34dac\") " Nov 22 07:29:37 crc kubenswrapper[4853]: I1122 07:29:37.017102 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2c4k\" (UniqueName: \"kubernetes.io/projected/d3fc9986-1b90-49c8-aa42-c36074f34dac-kube-api-access-x2c4k\") pod \"d3fc9986-1b90-49c8-aa42-c36074f34dac\" (UID: \"d3fc9986-1b90-49c8-aa42-c36074f34dac\") " Nov 22 07:29:37 crc kubenswrapper[4853]: I1122 07:29:37.017165 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d3fc9986-1b90-49c8-aa42-c36074f34dac-tmp\") pod \"d3fc9986-1b90-49c8-aa42-c36074f34dac\" (UID: \"d3fc9986-1b90-49c8-aa42-c36074f34dac\") " Nov 22 07:29:37 crc kubenswrapper[4853]: I1122 07:29:37.017691 4853 reconciler_common.go:293] "Volume detached for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/d3fc9986-1b90-49c8-aa42-c36074f34dac-datadir\") on node \"crc\" DevicePath \"\"" Nov 22 07:29:37 crc kubenswrapper[4853]: I1122 07:29:37.017883 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d3fc9986-1b90-49c8-aa42-c36074f34dac-config-openshift-service-cacrt" (OuterVolumeSpecName: "config-openshift-service-cacrt") pod "d3fc9986-1b90-49c8-aa42-c36074f34dac" (UID: "d3fc9986-1b90-49c8-aa42-c36074f34dac"). InnerVolumeSpecName "config-openshift-service-cacrt". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:29:37 crc kubenswrapper[4853]: I1122 07:29:37.017905 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d3fc9986-1b90-49c8-aa42-c36074f34dac-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "d3fc9986-1b90-49c8-aa42-c36074f34dac" (UID: "d3fc9986-1b90-49c8-aa42-c36074f34dac"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:29:37 crc kubenswrapper[4853]: I1122 07:29:37.017953 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d3fc9986-1b90-49c8-aa42-c36074f34dac-config" (OuterVolumeSpecName: "config") pod "d3fc9986-1b90-49c8-aa42-c36074f34dac" (UID: "d3fc9986-1b90-49c8-aa42-c36074f34dac"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:29:37 crc kubenswrapper[4853]: I1122 07:29:37.018301 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d3fc9986-1b90-49c8-aa42-c36074f34dac-entrypoint" (OuterVolumeSpecName: "entrypoint") pod "d3fc9986-1b90-49c8-aa42-c36074f34dac" (UID: "d3fc9986-1b90-49c8-aa42-c36074f34dac"). InnerVolumeSpecName "entrypoint". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:29:37 crc kubenswrapper[4853]: I1122 07:29:37.020991 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3fc9986-1b90-49c8-aa42-c36074f34dac-collector-token" (OuterVolumeSpecName: "collector-token") pod "d3fc9986-1b90-49c8-aa42-c36074f34dac" (UID: "d3fc9986-1b90-49c8-aa42-c36074f34dac"). InnerVolumeSpecName "collector-token". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:29:37 crc kubenswrapper[4853]: I1122 07:29:37.020992 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d3fc9986-1b90-49c8-aa42-c36074f34dac-tmp" (OuterVolumeSpecName: "tmp") pod "d3fc9986-1b90-49c8-aa42-c36074f34dac" (UID: "d3fc9986-1b90-49c8-aa42-c36074f34dac"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:29:37 crc kubenswrapper[4853]: I1122 07:29:37.023511 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3fc9986-1b90-49c8-aa42-c36074f34dac-sa-token" (OuterVolumeSpecName: "sa-token") pod "d3fc9986-1b90-49c8-aa42-c36074f34dac" (UID: "d3fc9986-1b90-49c8-aa42-c36074f34dac"). InnerVolumeSpecName "sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:29:37 crc kubenswrapper[4853]: I1122 07:29:37.033376 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3fc9986-1b90-49c8-aa42-c36074f34dac-kube-api-access-x2c4k" (OuterVolumeSpecName: "kube-api-access-x2c4k") pod "d3fc9986-1b90-49c8-aa42-c36074f34dac" (UID: "d3fc9986-1b90-49c8-aa42-c36074f34dac"). InnerVolumeSpecName "kube-api-access-x2c4k". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:29:37 crc kubenswrapper[4853]: I1122 07:29:37.119931 4853 reconciler_common.go:293] "Volume detached for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/d3fc9986-1b90-49c8-aa42-c36074f34dac-sa-token\") on node \"crc\" DevicePath \"\"" Nov 22 07:29:37 crc kubenswrapper[4853]: I1122 07:29:37.120176 4853 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d3fc9986-1b90-49c8-aa42-c36074f34dac-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 22 07:29:37 crc kubenswrapper[4853]: I1122 07:29:37.120189 4853 reconciler_common.go:293] "Volume detached for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/d3fc9986-1b90-49c8-aa42-c36074f34dac-entrypoint\") on node \"crc\" DevicePath \"\"" Nov 22 07:29:37 crc kubenswrapper[4853]: I1122 07:29:37.120200 4853 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d3fc9986-1b90-49c8-aa42-c36074f34dac-config\") on node \"crc\" DevicePath \"\"" Nov 22 07:29:37 crc kubenswrapper[4853]: I1122 07:29:37.120214 4853 reconciler_common.go:293] "Volume detached for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/d3fc9986-1b90-49c8-aa42-c36074f34dac-config-openshift-service-cacrt\") on node \"crc\" DevicePath \"\"" Nov 22 07:29:37 crc kubenswrapper[4853]: I1122 07:29:37.120229 4853 reconciler_common.go:293] "Volume detached for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/d3fc9986-1b90-49c8-aa42-c36074f34dac-collector-token\") on node \"crc\" DevicePath \"\"" Nov 22 07:29:37 crc kubenswrapper[4853]: I1122 07:29:37.120243 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2c4k\" (UniqueName: \"kubernetes.io/projected/d3fc9986-1b90-49c8-aa42-c36074f34dac-kube-api-access-x2c4k\") on node \"crc\" DevicePath \"\"" Nov 22 07:29:37 crc kubenswrapper[4853]: I1122 07:29:37.120257 4853 reconciler_common.go:293] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d3fc9986-1b90-49c8-aa42-c36074f34dac-tmp\") on node \"crc\" DevicePath \"\"" Nov 22 07:29:37 crc kubenswrapper[4853]: I1122 07:29:37.323373 4853 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/d3fc9986-1b90-49c8-aa42-c36074f34dac-collector-syslog-receiver\") pod \"collector-sf4h6\" (UID: \"d3fc9986-1b90-49c8-aa42-c36074f34dac\") " pod="openshift-logging/collector-sf4h6" Nov 22 07:29:37 crc kubenswrapper[4853]: I1122 07:29:37.323470 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/d3fc9986-1b90-49c8-aa42-c36074f34dac-metrics\") pod \"collector-sf4h6\" (UID: \"d3fc9986-1b90-49c8-aa42-c36074f34dac\") " pod="openshift-logging/collector-sf4h6" Nov 22 07:29:37 crc kubenswrapper[4853]: I1122 07:29:37.328969 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/d3fc9986-1b90-49c8-aa42-c36074f34dac-metrics\") pod \"collector-sf4h6\" (UID: \"d3fc9986-1b90-49c8-aa42-c36074f34dac\") " pod="openshift-logging/collector-sf4h6" Nov 22 07:29:37 crc kubenswrapper[4853]: I1122 07:29:37.329339 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/d3fc9986-1b90-49c8-aa42-c36074f34dac-collector-syslog-receiver\") pod \"collector-sf4h6\" (UID: \"d3fc9986-1b90-49c8-aa42-c36074f34dac\") " pod="openshift-logging/collector-sf4h6" Nov 22 07:29:37 crc kubenswrapper[4853]: I1122 07:29:37.424732 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/d3fc9986-1b90-49c8-aa42-c36074f34dac-collector-syslog-receiver\") pod \"d3fc9986-1b90-49c8-aa42-c36074f34dac\" (UID: \"d3fc9986-1b90-49c8-aa42-c36074f34dac\") " Nov 22 07:29:37 crc kubenswrapper[4853]: I1122 07:29:37.425314 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/d3fc9986-1b90-49c8-aa42-c36074f34dac-metrics\") pod \"d3fc9986-1b90-49c8-aa42-c36074f34dac\" (UID: \"d3fc9986-1b90-49c8-aa42-c36074f34dac\") " Nov 22 07:29:37 crc kubenswrapper[4853]: I1122 07:29:37.429091 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3fc9986-1b90-49c8-aa42-c36074f34dac-collector-syslog-receiver" (OuterVolumeSpecName: "collector-syslog-receiver") pod "d3fc9986-1b90-49c8-aa42-c36074f34dac" (UID: "d3fc9986-1b90-49c8-aa42-c36074f34dac"). InnerVolumeSpecName "collector-syslog-receiver". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:29:37 crc kubenswrapper[4853]: I1122 07:29:37.430678 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3fc9986-1b90-49c8-aa42-c36074f34dac-metrics" (OuterVolumeSpecName: "metrics") pod "d3fc9986-1b90-49c8-aa42-c36074f34dac" (UID: "d3fc9986-1b90-49c8-aa42-c36074f34dac"). InnerVolumeSpecName "metrics". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:29:37 crc kubenswrapper[4853]: I1122 07:29:37.528264 4853 reconciler_common.go:293] "Volume detached for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/d3fc9986-1b90-49c8-aa42-c36074f34dac-collector-syslog-receiver\") on node \"crc\" DevicePath \"\"" Nov 22 07:29:37 crc kubenswrapper[4853]: I1122 07:29:37.528674 4853 reconciler_common.go:293] "Volume detached for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/d3fc9986-1b90-49c8-aa42-c36074f34dac-metrics\") on node \"crc\" DevicePath \"\"" Nov 22 07:29:37 crc kubenswrapper[4853]: I1122 07:29:37.877017 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-sf4h6" Nov 22 07:29:37 crc kubenswrapper[4853]: I1122 07:29:37.937785 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/collector-ls955"] Nov 22 07:29:37 crc kubenswrapper[4853]: I1122 07:29:37.939024 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-ls955" Nov 22 07:29:37 crc kubenswrapper[4853]: I1122 07:29:37.942806 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-config" Nov 22 07:29:37 crc kubenswrapper[4853]: I1122 07:29:37.943099 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-metrics" Nov 22 07:29:37 crc kubenswrapper[4853]: I1122 07:29:37.943587 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-dockercfg-f4f6z" Nov 22 07:29:37 crc kubenswrapper[4853]: I1122 07:29:37.943943 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-syslog-receiver" Nov 22 07:29:37 crc kubenswrapper[4853]: I1122 07:29:37.946356 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-token" Nov 22 07:29:37 crc kubenswrapper[4853]: I1122 07:29:37.951180 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-logging/collector-sf4h6"] Nov 22 07:29:37 crc kubenswrapper[4853]: I1122 07:29:37.953495 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-trustbundle" Nov 22 07:29:37 crc kubenswrapper[4853]: I1122 07:29:37.961619 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-logging/collector-sf4h6"] Nov 22 07:29:37 crc kubenswrapper[4853]: I1122 07:29:37.997197 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/collector-ls955"] Nov 22 07:29:38 crc kubenswrapper[4853]: I1122 07:29:38.038500 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/69d2b2f1-e056-4d20-b7a1-8d9ae5492cfd-entrypoint\") pod \"collector-ls955\" (UID: \"69d2b2f1-e056-4d20-b7a1-8d9ae5492cfd\") " pod="openshift-logging/collector-ls955" Nov 22 07:29:38 crc kubenswrapper[4853]: I1122 07:29:38.038584 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/69d2b2f1-e056-4d20-b7a1-8d9ae5492cfd-collector-token\") pod \"collector-ls955\" (UID: \"69d2b2f1-e056-4d20-b7a1-8d9ae5492cfd\") " pod="openshift-logging/collector-ls955" Nov 22 07:29:38 crc kubenswrapper[4853]: I1122 07:29:38.038606 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-ptlhv\" (UniqueName: \"kubernetes.io/projected/69d2b2f1-e056-4d20-b7a1-8d9ae5492cfd-kube-api-access-ptlhv\") pod \"collector-ls955\" (UID: \"69d2b2f1-e056-4d20-b7a1-8d9ae5492cfd\") " pod="openshift-logging/collector-ls955" Nov 22 07:29:38 crc kubenswrapper[4853]: I1122 07:29:38.038671 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/69d2b2f1-e056-4d20-b7a1-8d9ae5492cfd-tmp\") pod \"collector-ls955\" (UID: \"69d2b2f1-e056-4d20-b7a1-8d9ae5492cfd\") " pod="openshift-logging/collector-ls955" Nov 22 07:29:38 crc kubenswrapper[4853]: I1122 07:29:38.038700 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/69d2b2f1-e056-4d20-b7a1-8d9ae5492cfd-metrics\") pod \"collector-ls955\" (UID: \"69d2b2f1-e056-4d20-b7a1-8d9ae5492cfd\") " pod="openshift-logging/collector-ls955" Nov 22 07:29:38 crc kubenswrapper[4853]: I1122 07:29:38.038722 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/69d2b2f1-e056-4d20-b7a1-8d9ae5492cfd-config-openshift-service-cacrt\") pod \"collector-ls955\" (UID: \"69d2b2f1-e056-4d20-b7a1-8d9ae5492cfd\") " pod="openshift-logging/collector-ls955" Nov 22 07:29:38 crc kubenswrapper[4853]: I1122 07:29:38.038761 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/69d2b2f1-e056-4d20-b7a1-8d9ae5492cfd-datadir\") pod \"collector-ls955\" (UID: \"69d2b2f1-e056-4d20-b7a1-8d9ae5492cfd\") " pod="openshift-logging/collector-ls955" Nov 22 07:29:38 crc kubenswrapper[4853]: I1122 07:29:38.038779 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/69d2b2f1-e056-4d20-b7a1-8d9ae5492cfd-sa-token\") pod \"collector-ls955\" (UID: \"69d2b2f1-e056-4d20-b7a1-8d9ae5492cfd\") " pod="openshift-logging/collector-ls955" Nov 22 07:29:38 crc kubenswrapper[4853]: I1122 07:29:38.038812 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/69d2b2f1-e056-4d20-b7a1-8d9ae5492cfd-trusted-ca\") pod \"collector-ls955\" (UID: \"69d2b2f1-e056-4d20-b7a1-8d9ae5492cfd\") " pod="openshift-logging/collector-ls955" Nov 22 07:29:38 crc kubenswrapper[4853]: I1122 07:29:38.038836 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/69d2b2f1-e056-4d20-b7a1-8d9ae5492cfd-config\") pod \"collector-ls955\" (UID: \"69d2b2f1-e056-4d20-b7a1-8d9ae5492cfd\") " pod="openshift-logging/collector-ls955" Nov 22 07:29:38 crc kubenswrapper[4853]: I1122 07:29:38.038865 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/69d2b2f1-e056-4d20-b7a1-8d9ae5492cfd-collector-syslog-receiver\") pod \"collector-ls955\" (UID: \"69d2b2f1-e056-4d20-b7a1-8d9ae5492cfd\") " pod="openshift-logging/collector-ls955" Nov 22 07:29:38 crc kubenswrapper[4853]: I1122 07:29:38.140890 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: 
\"kubernetes.io/secret/69d2b2f1-e056-4d20-b7a1-8d9ae5492cfd-metrics\") pod \"collector-ls955\" (UID: \"69d2b2f1-e056-4d20-b7a1-8d9ae5492cfd\") " pod="openshift-logging/collector-ls955" Nov 22 07:29:38 crc kubenswrapper[4853]: I1122 07:29:38.140984 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/69d2b2f1-e056-4d20-b7a1-8d9ae5492cfd-config-openshift-service-cacrt\") pod \"collector-ls955\" (UID: \"69d2b2f1-e056-4d20-b7a1-8d9ae5492cfd\") " pod="openshift-logging/collector-ls955" Nov 22 07:29:38 crc kubenswrapper[4853]: I1122 07:29:38.141025 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/69d2b2f1-e056-4d20-b7a1-8d9ae5492cfd-datadir\") pod \"collector-ls955\" (UID: \"69d2b2f1-e056-4d20-b7a1-8d9ae5492cfd\") " pod="openshift-logging/collector-ls955" Nov 22 07:29:38 crc kubenswrapper[4853]: I1122 07:29:38.141048 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/69d2b2f1-e056-4d20-b7a1-8d9ae5492cfd-sa-token\") pod \"collector-ls955\" (UID: \"69d2b2f1-e056-4d20-b7a1-8d9ae5492cfd\") " pod="openshift-logging/collector-ls955" Nov 22 07:29:38 crc kubenswrapper[4853]: I1122 07:29:38.141095 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/69d2b2f1-e056-4d20-b7a1-8d9ae5492cfd-trusted-ca\") pod \"collector-ls955\" (UID: \"69d2b2f1-e056-4d20-b7a1-8d9ae5492cfd\") " pod="openshift-logging/collector-ls955" Nov 22 07:29:38 crc kubenswrapper[4853]: I1122 07:29:38.141129 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/69d2b2f1-e056-4d20-b7a1-8d9ae5492cfd-config\") pod \"collector-ls955\" (UID: \"69d2b2f1-e056-4d20-b7a1-8d9ae5492cfd\") " pod="openshift-logging/collector-ls955" Nov 22 07:29:38 crc kubenswrapper[4853]: I1122 07:29:38.141168 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/69d2b2f1-e056-4d20-b7a1-8d9ae5492cfd-collector-syslog-receiver\") pod \"collector-ls955\" (UID: \"69d2b2f1-e056-4d20-b7a1-8d9ae5492cfd\") " pod="openshift-logging/collector-ls955" Nov 22 07:29:38 crc kubenswrapper[4853]: I1122 07:29:38.141203 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/69d2b2f1-e056-4d20-b7a1-8d9ae5492cfd-entrypoint\") pod \"collector-ls955\" (UID: \"69d2b2f1-e056-4d20-b7a1-8d9ae5492cfd\") " pod="openshift-logging/collector-ls955" Nov 22 07:29:38 crc kubenswrapper[4853]: I1122 07:29:38.141209 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/69d2b2f1-e056-4d20-b7a1-8d9ae5492cfd-datadir\") pod \"collector-ls955\" (UID: \"69d2b2f1-e056-4d20-b7a1-8d9ae5492cfd\") " pod="openshift-logging/collector-ls955" Nov 22 07:29:38 crc kubenswrapper[4853]: I1122 07:29:38.141231 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ptlhv\" (UniqueName: \"kubernetes.io/projected/69d2b2f1-e056-4d20-b7a1-8d9ae5492cfd-kube-api-access-ptlhv\") pod \"collector-ls955\" (UID: \"69d2b2f1-e056-4d20-b7a1-8d9ae5492cfd\") " pod="openshift-logging/collector-ls955" Nov 22 07:29:38 crc kubenswrapper[4853]: 
I1122 07:29:38.141349 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/69d2b2f1-e056-4d20-b7a1-8d9ae5492cfd-collector-token\") pod \"collector-ls955\" (UID: \"69d2b2f1-e056-4d20-b7a1-8d9ae5492cfd\") " pod="openshift-logging/collector-ls955" Nov 22 07:29:38 crc kubenswrapper[4853]: I1122 07:29:38.141612 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/69d2b2f1-e056-4d20-b7a1-8d9ae5492cfd-tmp\") pod \"collector-ls955\" (UID: \"69d2b2f1-e056-4d20-b7a1-8d9ae5492cfd\") " pod="openshift-logging/collector-ls955" Nov 22 07:29:38 crc kubenswrapper[4853]: I1122 07:29:38.142360 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/69d2b2f1-e056-4d20-b7a1-8d9ae5492cfd-config-openshift-service-cacrt\") pod \"collector-ls955\" (UID: \"69d2b2f1-e056-4d20-b7a1-8d9ae5492cfd\") " pod="openshift-logging/collector-ls955" Nov 22 07:29:38 crc kubenswrapper[4853]: I1122 07:29:38.142579 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/69d2b2f1-e056-4d20-b7a1-8d9ae5492cfd-config\") pod \"collector-ls955\" (UID: \"69d2b2f1-e056-4d20-b7a1-8d9ae5492cfd\") " pod="openshift-logging/collector-ls955" Nov 22 07:29:38 crc kubenswrapper[4853]: I1122 07:29:38.143710 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/69d2b2f1-e056-4d20-b7a1-8d9ae5492cfd-entrypoint\") pod \"collector-ls955\" (UID: \"69d2b2f1-e056-4d20-b7a1-8d9ae5492cfd\") " pod="openshift-logging/collector-ls955" Nov 22 07:29:38 crc kubenswrapper[4853]: I1122 07:29:38.144095 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/69d2b2f1-e056-4d20-b7a1-8d9ae5492cfd-trusted-ca\") pod \"collector-ls955\" (UID: \"69d2b2f1-e056-4d20-b7a1-8d9ae5492cfd\") " pod="openshift-logging/collector-ls955" Nov 22 07:29:38 crc kubenswrapper[4853]: I1122 07:29:38.150877 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/69d2b2f1-e056-4d20-b7a1-8d9ae5492cfd-collector-syslog-receiver\") pod \"collector-ls955\" (UID: \"69d2b2f1-e056-4d20-b7a1-8d9ae5492cfd\") " pod="openshift-logging/collector-ls955" Nov 22 07:29:38 crc kubenswrapper[4853]: I1122 07:29:38.160448 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/69d2b2f1-e056-4d20-b7a1-8d9ae5492cfd-tmp\") pod \"collector-ls955\" (UID: \"69d2b2f1-e056-4d20-b7a1-8d9ae5492cfd\") " pod="openshift-logging/collector-ls955" Nov 22 07:29:38 crc kubenswrapper[4853]: I1122 07:29:38.162629 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/69d2b2f1-e056-4d20-b7a1-8d9ae5492cfd-collector-token\") pod \"collector-ls955\" (UID: \"69d2b2f1-e056-4d20-b7a1-8d9ae5492cfd\") " pod="openshift-logging/collector-ls955" Nov 22 07:29:38 crc kubenswrapper[4853]: I1122 07:29:38.163165 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/69d2b2f1-e056-4d20-b7a1-8d9ae5492cfd-metrics\") pod \"collector-ls955\" (UID: \"69d2b2f1-e056-4d20-b7a1-8d9ae5492cfd\") " 
pod="openshift-logging/collector-ls955" Nov 22 07:29:38 crc kubenswrapper[4853]: I1122 07:29:38.166168 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/69d2b2f1-e056-4d20-b7a1-8d9ae5492cfd-sa-token\") pod \"collector-ls955\" (UID: \"69d2b2f1-e056-4d20-b7a1-8d9ae5492cfd\") " pod="openshift-logging/collector-ls955" Nov 22 07:29:38 crc kubenswrapper[4853]: I1122 07:29:38.172520 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ptlhv\" (UniqueName: \"kubernetes.io/projected/69d2b2f1-e056-4d20-b7a1-8d9ae5492cfd-kube-api-access-ptlhv\") pod \"collector-ls955\" (UID: \"69d2b2f1-e056-4d20-b7a1-8d9ae5492cfd\") " pod="openshift-logging/collector-ls955" Nov 22 07:29:38 crc kubenswrapper[4853]: I1122 07:29:38.277933 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-ls955" Nov 22 07:29:38 crc kubenswrapper[4853]: I1122 07:29:38.636589 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/collector-ls955"] Nov 22 07:29:38 crc kubenswrapper[4853]: I1122 07:29:38.653004 4853 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 22 07:29:38 crc kubenswrapper[4853]: I1122 07:29:38.889350 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/collector-ls955" event={"ID":"69d2b2f1-e056-4d20-b7a1-8d9ae5492cfd","Type":"ContainerStarted","Data":"7f232763bd969d8de959939ba8c556f1b0536530d1a36b059ebf7c8089db822e"} Nov 22 07:29:39 crc kubenswrapper[4853]: I1122 07:29:39.759212 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d3fc9986-1b90-49c8-aa42-c36074f34dac" path="/var/lib/kubelet/pods/d3fc9986-1b90-49c8-aa42-c36074f34dac/volumes" Nov 22 07:29:54 crc kubenswrapper[4853]: I1122 07:29:54.032333 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/collector-ls955" event={"ID":"69d2b2f1-e056-4d20-b7a1-8d9ae5492cfd","Type":"ContainerStarted","Data":"7f00f0228cc05937aacda264892fb91a5bde05c815e3fff3a6f6f2f0bb7697c1"} Nov 22 07:29:54 crc kubenswrapper[4853]: I1122 07:29:54.057861 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/collector-ls955" podStartSLOduration=2.637150305 podStartE2EDuration="17.057838235s" podCreationTimestamp="2025-11-22 07:29:37 +0000 UTC" firstStartedPulling="2025-11-22 07:29:38.652716249 +0000 UTC m=+1177.493338875" lastFinishedPulling="2025-11-22 07:29:53.073404179 +0000 UTC m=+1191.914026805" observedRunningTime="2025-11-22 07:29:54.057600528 +0000 UTC m=+1192.898223164" watchObservedRunningTime="2025-11-22 07:29:54.057838235 +0000 UTC m=+1192.898460861" Nov 22 07:30:00 crc kubenswrapper[4853]: I1122 07:30:00.154511 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396610-99mxm"] Nov 22 07:30:00 crc kubenswrapper[4853]: I1122 07:30:00.156723 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396610-99mxm" Nov 22 07:30:00 crc kubenswrapper[4853]: I1122 07:30:00.166045 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 22 07:30:00 crc kubenswrapper[4853]: I1122 07:30:00.166107 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 22 07:30:00 crc kubenswrapper[4853]: I1122 07:30:00.169285 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396610-99mxm"] Nov 22 07:30:00 crc kubenswrapper[4853]: I1122 07:30:00.304610 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/837e951f-77bd-402e-b8b0-3cb6bc2f2e03-config-volume\") pod \"collect-profiles-29396610-99mxm\" (UID: \"837e951f-77bd-402e-b8b0-3cb6bc2f2e03\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396610-99mxm" Nov 22 07:30:00 crc kubenswrapper[4853]: I1122 07:30:00.304704 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96rbc\" (UniqueName: \"kubernetes.io/projected/837e951f-77bd-402e-b8b0-3cb6bc2f2e03-kube-api-access-96rbc\") pod \"collect-profiles-29396610-99mxm\" (UID: \"837e951f-77bd-402e-b8b0-3cb6bc2f2e03\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396610-99mxm" Nov 22 07:30:00 crc kubenswrapper[4853]: I1122 07:30:00.304907 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/837e951f-77bd-402e-b8b0-3cb6bc2f2e03-secret-volume\") pod \"collect-profiles-29396610-99mxm\" (UID: \"837e951f-77bd-402e-b8b0-3cb6bc2f2e03\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396610-99mxm" Nov 22 07:30:00 crc kubenswrapper[4853]: I1122 07:30:00.407326 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/837e951f-77bd-402e-b8b0-3cb6bc2f2e03-secret-volume\") pod \"collect-profiles-29396610-99mxm\" (UID: \"837e951f-77bd-402e-b8b0-3cb6bc2f2e03\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396610-99mxm" Nov 22 07:30:00 crc kubenswrapper[4853]: I1122 07:30:00.407471 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/837e951f-77bd-402e-b8b0-3cb6bc2f2e03-config-volume\") pod \"collect-profiles-29396610-99mxm\" (UID: \"837e951f-77bd-402e-b8b0-3cb6bc2f2e03\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396610-99mxm" Nov 22 07:30:00 crc kubenswrapper[4853]: I1122 07:30:00.407556 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-96rbc\" (UniqueName: \"kubernetes.io/projected/837e951f-77bd-402e-b8b0-3cb6bc2f2e03-kube-api-access-96rbc\") pod \"collect-profiles-29396610-99mxm\" (UID: \"837e951f-77bd-402e-b8b0-3cb6bc2f2e03\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396610-99mxm" Nov 22 07:30:00 crc kubenswrapper[4853]: I1122 07:30:00.408828 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/837e951f-77bd-402e-b8b0-3cb6bc2f2e03-config-volume\") pod 
\"collect-profiles-29396610-99mxm\" (UID: \"837e951f-77bd-402e-b8b0-3cb6bc2f2e03\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396610-99mxm" Nov 22 07:30:00 crc kubenswrapper[4853]: I1122 07:30:00.413566 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/837e951f-77bd-402e-b8b0-3cb6bc2f2e03-secret-volume\") pod \"collect-profiles-29396610-99mxm\" (UID: \"837e951f-77bd-402e-b8b0-3cb6bc2f2e03\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396610-99mxm" Nov 22 07:30:00 crc kubenswrapper[4853]: I1122 07:30:00.439081 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-96rbc\" (UniqueName: \"kubernetes.io/projected/837e951f-77bd-402e-b8b0-3cb6bc2f2e03-kube-api-access-96rbc\") pod \"collect-profiles-29396610-99mxm\" (UID: \"837e951f-77bd-402e-b8b0-3cb6bc2f2e03\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396610-99mxm" Nov 22 07:30:00 crc kubenswrapper[4853]: I1122 07:30:00.479470 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396610-99mxm" Nov 22 07:30:00 crc kubenswrapper[4853]: I1122 07:30:00.945378 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396610-99mxm"] Nov 22 07:30:01 crc kubenswrapper[4853]: I1122 07:30:01.084701 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29396610-99mxm" event={"ID":"837e951f-77bd-402e-b8b0-3cb6bc2f2e03","Type":"ContainerStarted","Data":"d2aff6f1ce99dd824802b05373fe76f8645d7d62255e0d2907f8e4212f5cfe3c"} Nov 22 07:30:01 crc kubenswrapper[4853]: I1122 07:30:01.298167 4853 patch_prober.go:28] interesting pod/machine-config-daemon-fflvd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 07:30:01 crc kubenswrapper[4853]: I1122 07:30:01.298263 4853 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 07:30:02 crc kubenswrapper[4853]: I1122 07:30:02.096022 4853 generic.go:334] "Generic (PLEG): container finished" podID="837e951f-77bd-402e-b8b0-3cb6bc2f2e03" containerID="72db45160eeff40ee43a2752e399b2b8a4f122a26ddd84be7500a32483f6ae15" exitCode=0 Nov 22 07:30:02 crc kubenswrapper[4853]: I1122 07:30:02.096111 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29396610-99mxm" event={"ID":"837e951f-77bd-402e-b8b0-3cb6bc2f2e03","Type":"ContainerDied","Data":"72db45160eeff40ee43a2752e399b2b8a4f122a26ddd84be7500a32483f6ae15"} Nov 22 07:30:03 crc kubenswrapper[4853]: I1122 07:30:03.399034 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396610-99mxm" Nov 22 07:30:03 crc kubenswrapper[4853]: I1122 07:30:03.572388 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/837e951f-77bd-402e-b8b0-3cb6bc2f2e03-secret-volume\") pod \"837e951f-77bd-402e-b8b0-3cb6bc2f2e03\" (UID: \"837e951f-77bd-402e-b8b0-3cb6bc2f2e03\") " Nov 22 07:30:03 crc kubenswrapper[4853]: I1122 07:30:03.572573 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-96rbc\" (UniqueName: \"kubernetes.io/projected/837e951f-77bd-402e-b8b0-3cb6bc2f2e03-kube-api-access-96rbc\") pod \"837e951f-77bd-402e-b8b0-3cb6bc2f2e03\" (UID: \"837e951f-77bd-402e-b8b0-3cb6bc2f2e03\") " Nov 22 07:30:03 crc kubenswrapper[4853]: I1122 07:30:03.572603 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/837e951f-77bd-402e-b8b0-3cb6bc2f2e03-config-volume\") pod \"837e951f-77bd-402e-b8b0-3cb6bc2f2e03\" (UID: \"837e951f-77bd-402e-b8b0-3cb6bc2f2e03\") " Nov 22 07:30:03 crc kubenswrapper[4853]: I1122 07:30:03.573547 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/837e951f-77bd-402e-b8b0-3cb6bc2f2e03-config-volume" (OuterVolumeSpecName: "config-volume") pod "837e951f-77bd-402e-b8b0-3cb6bc2f2e03" (UID: "837e951f-77bd-402e-b8b0-3cb6bc2f2e03"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:30:03 crc kubenswrapper[4853]: I1122 07:30:03.578206 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/837e951f-77bd-402e-b8b0-3cb6bc2f2e03-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "837e951f-77bd-402e-b8b0-3cb6bc2f2e03" (UID: "837e951f-77bd-402e-b8b0-3cb6bc2f2e03"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:30:03 crc kubenswrapper[4853]: I1122 07:30:03.578620 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/837e951f-77bd-402e-b8b0-3cb6bc2f2e03-kube-api-access-96rbc" (OuterVolumeSpecName: "kube-api-access-96rbc") pod "837e951f-77bd-402e-b8b0-3cb6bc2f2e03" (UID: "837e951f-77bd-402e-b8b0-3cb6bc2f2e03"). InnerVolumeSpecName "kube-api-access-96rbc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:30:03 crc kubenswrapper[4853]: I1122 07:30:03.675384 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-96rbc\" (UniqueName: \"kubernetes.io/projected/837e951f-77bd-402e-b8b0-3cb6bc2f2e03-kube-api-access-96rbc\") on node \"crc\" DevicePath \"\"" Nov 22 07:30:03 crc kubenswrapper[4853]: I1122 07:30:03.675426 4853 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/837e951f-77bd-402e-b8b0-3cb6bc2f2e03-config-volume\") on node \"crc\" DevicePath \"\"" Nov 22 07:30:03 crc kubenswrapper[4853]: I1122 07:30:03.675437 4853 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/837e951f-77bd-402e-b8b0-3cb6bc2f2e03-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 22 07:30:04 crc kubenswrapper[4853]: I1122 07:30:04.115803 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29396610-99mxm" event={"ID":"837e951f-77bd-402e-b8b0-3cb6bc2f2e03","Type":"ContainerDied","Data":"d2aff6f1ce99dd824802b05373fe76f8645d7d62255e0d2907f8e4212f5cfe3c"} Nov 22 07:30:04 crc kubenswrapper[4853]: I1122 07:30:04.115895 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d2aff6f1ce99dd824802b05373fe76f8645d7d62255e0d2907f8e4212f5cfe3c" Nov 22 07:30:04 crc kubenswrapper[4853]: I1122 07:30:04.115925 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396610-99mxm" Nov 22 07:30:21 crc kubenswrapper[4853]: I1122 07:30:21.879772 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ehg4v5"] Nov 22 07:30:21 crc kubenswrapper[4853]: E1122 07:30:21.880637 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="837e951f-77bd-402e-b8b0-3cb6bc2f2e03" containerName="collect-profiles" Nov 22 07:30:21 crc kubenswrapper[4853]: I1122 07:30:21.880650 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="837e951f-77bd-402e-b8b0-3cb6bc2f2e03" containerName="collect-profiles" Nov 22 07:30:21 crc kubenswrapper[4853]: I1122 07:30:21.880838 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="837e951f-77bd-402e-b8b0-3cb6bc2f2e03" containerName="collect-profiles" Nov 22 07:30:21 crc kubenswrapper[4853]: I1122 07:30:21.882120 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ehg4v5" Nov 22 07:30:21 crc kubenswrapper[4853]: I1122 07:30:21.885662 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Nov 22 07:30:21 crc kubenswrapper[4853]: I1122 07:30:21.892574 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ehg4v5"] Nov 22 07:30:21 crc kubenswrapper[4853]: I1122 07:30:21.933857 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9cvb\" (UniqueName: \"kubernetes.io/projected/73d5ee3a-0ffe-4c7d-abe8-28a6d07aad64-kube-api-access-h9cvb\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ehg4v5\" (UID: \"73d5ee3a-0ffe-4c7d-abe8-28a6d07aad64\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ehg4v5" Nov 22 07:30:21 crc kubenswrapper[4853]: I1122 07:30:21.933931 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/73d5ee3a-0ffe-4c7d-abe8-28a6d07aad64-util\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ehg4v5\" (UID: \"73d5ee3a-0ffe-4c7d-abe8-28a6d07aad64\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ehg4v5" Nov 22 07:30:21 crc kubenswrapper[4853]: I1122 07:30:21.933957 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/73d5ee3a-0ffe-4c7d-abe8-28a6d07aad64-bundle\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ehg4v5\" (UID: \"73d5ee3a-0ffe-4c7d-abe8-28a6d07aad64\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ehg4v5" Nov 22 07:30:22 crc kubenswrapper[4853]: I1122 07:30:22.036201 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h9cvb\" (UniqueName: \"kubernetes.io/projected/73d5ee3a-0ffe-4c7d-abe8-28a6d07aad64-kube-api-access-h9cvb\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ehg4v5\" (UID: \"73d5ee3a-0ffe-4c7d-abe8-28a6d07aad64\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ehg4v5" Nov 22 07:30:22 crc kubenswrapper[4853]: I1122 07:30:22.036380 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/73d5ee3a-0ffe-4c7d-abe8-28a6d07aad64-util\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ehg4v5\" (UID: \"73d5ee3a-0ffe-4c7d-abe8-28a6d07aad64\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ehg4v5" Nov 22 07:30:22 crc kubenswrapper[4853]: I1122 07:30:22.036429 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/73d5ee3a-0ffe-4c7d-abe8-28a6d07aad64-bundle\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ehg4v5\" (UID: \"73d5ee3a-0ffe-4c7d-abe8-28a6d07aad64\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ehg4v5" Nov 22 07:30:22 crc kubenswrapper[4853]: I1122 07:30:22.037347 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/73d5ee3a-0ffe-4c7d-abe8-28a6d07aad64-bundle\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ehg4v5\" (UID: \"73d5ee3a-0ffe-4c7d-abe8-28a6d07aad64\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ehg4v5" Nov 22 07:30:22 crc kubenswrapper[4853]: I1122 07:30:22.037352 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/73d5ee3a-0ffe-4c7d-abe8-28a6d07aad64-util\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ehg4v5\" (UID: \"73d5ee3a-0ffe-4c7d-abe8-28a6d07aad64\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ehg4v5" Nov 22 07:30:22 crc kubenswrapper[4853]: I1122 07:30:22.062416 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h9cvb\" (UniqueName: \"kubernetes.io/projected/73d5ee3a-0ffe-4c7d-abe8-28a6d07aad64-kube-api-access-h9cvb\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ehg4v5\" (UID: \"73d5ee3a-0ffe-4c7d-abe8-28a6d07aad64\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ehg4v5" Nov 22 07:30:22 crc kubenswrapper[4853]: I1122 07:30:22.203002 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ehg4v5" Nov 22 07:30:22 crc kubenswrapper[4853]: I1122 07:30:22.690610 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ehg4v5"] Nov 22 07:30:22 crc kubenswrapper[4853]: W1122 07:30:22.698980 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod73d5ee3a_0ffe_4c7d_abe8_28a6d07aad64.slice/crio-be4b218cb69c4abaee90b32cc0e8b72f138ce8f17f435da6a7b940d4a5ef3b54 WatchSource:0}: Error finding container be4b218cb69c4abaee90b32cc0e8b72f138ce8f17f435da6a7b940d4a5ef3b54: Status 404 returned error can't find the container with id be4b218cb69c4abaee90b32cc0e8b72f138ce8f17f435da6a7b940d4a5ef3b54 Nov 22 07:30:23 crc kubenswrapper[4853]: I1122 07:30:23.279837 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ehg4v5" event={"ID":"73d5ee3a-0ffe-4c7d-abe8-28a6d07aad64","Type":"ContainerStarted","Data":"be4b218cb69c4abaee90b32cc0e8b72f138ce8f17f435da6a7b940d4a5ef3b54"} Nov 22 07:30:25 crc kubenswrapper[4853]: I1122 07:30:25.295013 4853 generic.go:334] "Generic (PLEG): container finished" podID="73d5ee3a-0ffe-4c7d-abe8-28a6d07aad64" containerID="2933e20691aee5445bbb8fdc4b111477ff5e27d046e6c705f6421568f3c93f33" exitCode=0 Nov 22 07:30:25 crc kubenswrapper[4853]: I1122 07:30:25.295078 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ehg4v5" event={"ID":"73d5ee3a-0ffe-4c7d-abe8-28a6d07aad64","Type":"ContainerDied","Data":"2933e20691aee5445bbb8fdc4b111477ff5e27d046e6c705f6421568f3c93f33"} Nov 22 07:30:28 crc kubenswrapper[4853]: I1122 07:30:28.320095 4853 generic.go:334] "Generic (PLEG): container finished" podID="73d5ee3a-0ffe-4c7d-abe8-28a6d07aad64" containerID="a8d10c78eca0df9c2106b9b8990d70da7734173b44bfe39c499872cf590633b1" exitCode=0 Nov 22 07:30:28 crc kubenswrapper[4853]: I1122 07:30:28.320213 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ehg4v5" event={"ID":"73d5ee3a-0ffe-4c7d-abe8-28a6d07aad64","Type":"ContainerDied","Data":"a8d10c78eca0df9c2106b9b8990d70da7734173b44bfe39c499872cf590633b1"} Nov 22 07:30:29 crc kubenswrapper[4853]: I1122 07:30:29.335696 4853 generic.go:334] "Generic (PLEG): container finished" podID="73d5ee3a-0ffe-4c7d-abe8-28a6d07aad64" containerID="c903b83e5dbf74c51927d3ff49936b8e6979d6bcf8ce84f6fce21b27dff8d2fc" exitCode=0 Nov 22 07:30:29 crc kubenswrapper[4853]: I1122 07:30:29.336311 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ehg4v5" event={"ID":"73d5ee3a-0ffe-4c7d-abe8-28a6d07aad64","Type":"ContainerDied","Data":"c903b83e5dbf74c51927d3ff49936b8e6979d6bcf8ce84f6fce21b27dff8d2fc"} Nov 22 07:30:30 crc kubenswrapper[4853]: I1122 07:30:30.661808 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ehg4v5" Nov 22 07:30:30 crc kubenswrapper[4853]: I1122 07:30:30.707690 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/73d5ee3a-0ffe-4c7d-abe8-28a6d07aad64-bundle\") pod \"73d5ee3a-0ffe-4c7d-abe8-28a6d07aad64\" (UID: \"73d5ee3a-0ffe-4c7d-abe8-28a6d07aad64\") " Nov 22 07:30:30 crc kubenswrapper[4853]: I1122 07:30:30.708090 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h9cvb\" (UniqueName: \"kubernetes.io/projected/73d5ee3a-0ffe-4c7d-abe8-28a6d07aad64-kube-api-access-h9cvb\") pod \"73d5ee3a-0ffe-4c7d-abe8-28a6d07aad64\" (UID: \"73d5ee3a-0ffe-4c7d-abe8-28a6d07aad64\") " Nov 22 07:30:30 crc kubenswrapper[4853]: I1122 07:30:30.708197 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/73d5ee3a-0ffe-4c7d-abe8-28a6d07aad64-util\") pod \"73d5ee3a-0ffe-4c7d-abe8-28a6d07aad64\" (UID: \"73d5ee3a-0ffe-4c7d-abe8-28a6d07aad64\") " Nov 22 07:30:30 crc kubenswrapper[4853]: I1122 07:30:30.713515 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/73d5ee3a-0ffe-4c7d-abe8-28a6d07aad64-bundle" (OuterVolumeSpecName: "bundle") pod "73d5ee3a-0ffe-4c7d-abe8-28a6d07aad64" (UID: "73d5ee3a-0ffe-4c7d-abe8-28a6d07aad64"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:30:30 crc kubenswrapper[4853]: I1122 07:30:30.718470 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/73d5ee3a-0ffe-4c7d-abe8-28a6d07aad64-util" (OuterVolumeSpecName: "util") pod "73d5ee3a-0ffe-4c7d-abe8-28a6d07aad64" (UID: "73d5ee3a-0ffe-4c7d-abe8-28a6d07aad64"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:30:30 crc kubenswrapper[4853]: I1122 07:30:30.725176 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73d5ee3a-0ffe-4c7d-abe8-28a6d07aad64-kube-api-access-h9cvb" (OuterVolumeSpecName: "kube-api-access-h9cvb") pod "73d5ee3a-0ffe-4c7d-abe8-28a6d07aad64" (UID: "73d5ee3a-0ffe-4c7d-abe8-28a6d07aad64"). InnerVolumeSpecName "kube-api-access-h9cvb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:30:30 crc kubenswrapper[4853]: I1122 07:30:30.810411 4853 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/73d5ee3a-0ffe-4c7d-abe8-28a6d07aad64-util\") on node \"crc\" DevicePath \"\"" Nov 22 07:30:30 crc kubenswrapper[4853]: I1122 07:30:30.810854 4853 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/73d5ee3a-0ffe-4c7d-abe8-28a6d07aad64-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:30:30 crc kubenswrapper[4853]: I1122 07:30:30.810963 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h9cvb\" (UniqueName: \"kubernetes.io/projected/73d5ee3a-0ffe-4c7d-abe8-28a6d07aad64-kube-api-access-h9cvb\") on node \"crc\" DevicePath \"\"" Nov 22 07:30:31 crc kubenswrapper[4853]: I1122 07:30:31.297902 4853 patch_prober.go:28] interesting pod/machine-config-daemon-fflvd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 07:30:31 crc kubenswrapper[4853]: I1122 07:30:31.298488 4853 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 07:30:31 crc kubenswrapper[4853]: I1122 07:30:31.356250 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ehg4v5" event={"ID":"73d5ee3a-0ffe-4c7d-abe8-28a6d07aad64","Type":"ContainerDied","Data":"be4b218cb69c4abaee90b32cc0e8b72f138ce8f17f435da6a7b940d4a5ef3b54"} Nov 22 07:30:31 crc kubenswrapper[4853]: I1122 07:30:31.356305 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="be4b218cb69c4abaee90b32cc0e8b72f138ce8f17f435da6a7b940d4a5ef3b54" Nov 22 07:30:31 crc kubenswrapper[4853]: I1122 07:30:31.356370 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ehg4v5" Nov 22 07:30:33 crc kubenswrapper[4853]: I1122 07:30:33.789621 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-557fdffb88-6xs5t"] Nov 22 07:30:33 crc kubenswrapper[4853]: E1122 07:30:33.791086 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73d5ee3a-0ffe-4c7d-abe8-28a6d07aad64" containerName="extract" Nov 22 07:30:33 crc kubenswrapper[4853]: I1122 07:30:33.791115 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="73d5ee3a-0ffe-4c7d-abe8-28a6d07aad64" containerName="extract" Nov 22 07:30:33 crc kubenswrapper[4853]: E1122 07:30:33.791142 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73d5ee3a-0ffe-4c7d-abe8-28a6d07aad64" containerName="util" Nov 22 07:30:33 crc kubenswrapper[4853]: I1122 07:30:33.791151 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="73d5ee3a-0ffe-4c7d-abe8-28a6d07aad64" containerName="util" Nov 22 07:30:33 crc kubenswrapper[4853]: E1122 07:30:33.791185 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73d5ee3a-0ffe-4c7d-abe8-28a6d07aad64" containerName="pull" Nov 22 07:30:33 crc kubenswrapper[4853]: I1122 07:30:33.791198 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="73d5ee3a-0ffe-4c7d-abe8-28a6d07aad64" containerName="pull" Nov 22 07:30:33 crc kubenswrapper[4853]: I1122 07:30:33.792939 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="73d5ee3a-0ffe-4c7d-abe8-28a6d07aad64" containerName="extract" Nov 22 07:30:33 crc kubenswrapper[4853]: I1122 07:30:33.794281 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-557fdffb88-6xs5t" Nov 22 07:30:33 crc kubenswrapper[4853]: I1122 07:30:33.797766 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Nov 22 07:30:33 crc kubenswrapper[4853]: I1122 07:30:33.799027 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Nov 22 07:30:33 crc kubenswrapper[4853]: I1122 07:30:33.799301 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-2d69n" Nov 22 07:30:33 crc kubenswrapper[4853]: I1122 07:30:33.811831 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-557fdffb88-6xs5t"] Nov 22 07:30:33 crc kubenswrapper[4853]: I1122 07:30:33.891391 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w9g78\" (UniqueName: \"kubernetes.io/projected/e7bd6d34-77b5-4daf-b00f-6f7101d0ebbe-kube-api-access-w9g78\") pod \"nmstate-operator-557fdffb88-6xs5t\" (UID: \"e7bd6d34-77b5-4daf-b00f-6f7101d0ebbe\") " pod="openshift-nmstate/nmstate-operator-557fdffb88-6xs5t" Nov 22 07:30:33 crc kubenswrapper[4853]: I1122 07:30:33.993483 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w9g78\" (UniqueName: \"kubernetes.io/projected/e7bd6d34-77b5-4daf-b00f-6f7101d0ebbe-kube-api-access-w9g78\") pod \"nmstate-operator-557fdffb88-6xs5t\" (UID: \"e7bd6d34-77b5-4daf-b00f-6f7101d0ebbe\") " pod="openshift-nmstate/nmstate-operator-557fdffb88-6xs5t" Nov 22 07:30:34 crc kubenswrapper[4853]: I1122 07:30:34.020131 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w9g78\" 
(UniqueName: \"kubernetes.io/projected/e7bd6d34-77b5-4daf-b00f-6f7101d0ebbe-kube-api-access-w9g78\") pod \"nmstate-operator-557fdffb88-6xs5t\" (UID: \"e7bd6d34-77b5-4daf-b00f-6f7101d0ebbe\") " pod="openshift-nmstate/nmstate-operator-557fdffb88-6xs5t" Nov 22 07:30:34 crc kubenswrapper[4853]: I1122 07:30:34.123338 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-557fdffb88-6xs5t" Nov 22 07:30:34 crc kubenswrapper[4853]: I1122 07:30:34.460575 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-557fdffb88-6xs5t"] Nov 22 07:30:35 crc kubenswrapper[4853]: I1122 07:30:35.395338 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-557fdffb88-6xs5t" event={"ID":"e7bd6d34-77b5-4daf-b00f-6f7101d0ebbe","Type":"ContainerStarted","Data":"2ea4c613fa96cd364f265e42ed72c0ec0681feacdc9748d3791f92c8af739f4b"} Nov 22 07:30:47 crc kubenswrapper[4853]: I1122 07:30:47.518855 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-557fdffb88-6xs5t" event={"ID":"e7bd6d34-77b5-4daf-b00f-6f7101d0ebbe","Type":"ContainerStarted","Data":"588f891853eb39abf5bf6135990af38784842fc1a287a4795c8224877065ae4d"} Nov 22 07:30:47 crc kubenswrapper[4853]: I1122 07:30:47.544554 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-557fdffb88-6xs5t" podStartSLOduration=2.445978221 podStartE2EDuration="14.544533082s" podCreationTimestamp="2025-11-22 07:30:33 +0000 UTC" firstStartedPulling="2025-11-22 07:30:34.479841997 +0000 UTC m=+1233.320464623" lastFinishedPulling="2025-11-22 07:30:46.578396858 +0000 UTC m=+1245.419019484" observedRunningTime="2025-11-22 07:30:47.536507662 +0000 UTC m=+1246.377130308" watchObservedRunningTime="2025-11-22 07:30:47.544533082 +0000 UTC m=+1246.385155718" Nov 22 07:30:48 crc kubenswrapper[4853]: I1122 07:30:48.566218 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-5dcf9c57c5-tbl28"] Nov 22 07:30:48 crc kubenswrapper[4853]: I1122 07:30:48.567989 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-tbl28" Nov 22 07:30:48 crc kubenswrapper[4853]: I1122 07:30:48.573394 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-hg7v5" Nov 22 07:30:48 crc kubenswrapper[4853]: I1122 07:30:48.591728 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-5dcf9c57c5-tbl28"] Nov 22 07:30:48 crc kubenswrapper[4853]: I1122 07:30:48.611921 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-6b89b748d8-h59d2"] Nov 22 07:30:48 crc kubenswrapper[4853]: I1122 07:30:48.613255 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-h59d2" Nov 22 07:30:48 crc kubenswrapper[4853]: I1122 07:30:48.622242 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Nov 22 07:30:48 crc kubenswrapper[4853]: I1122 07:30:48.631289 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-6b89b748d8-h59d2"] Nov 22 07:30:48 crc kubenswrapper[4853]: I1122 07:30:48.647000 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-fm7n6"] Nov 22 07:30:48 crc kubenswrapper[4853]: I1122 07:30:48.648076 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-fm7n6" Nov 22 07:30:48 crc kubenswrapper[4853]: I1122 07:30:48.710347 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qph9w\" (UniqueName: \"kubernetes.io/projected/5aed800e-6ddc-444e-b9d9-2440106297c3-kube-api-access-qph9w\") pod \"nmstate-metrics-5dcf9c57c5-tbl28\" (UID: \"5aed800e-6ddc-444e-b9d9-2440106297c3\") " pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-tbl28" Nov 22 07:30:48 crc kubenswrapper[4853]: I1122 07:30:48.727040 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8lpqd\" (UniqueName: \"kubernetes.io/projected/325ff591-591f-4b66-adbb-fdc7e20a553d-kube-api-access-8lpqd\") pod \"nmstate-webhook-6b89b748d8-h59d2\" (UID: \"325ff591-591f-4b66-adbb-fdc7e20a553d\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-h59d2" Nov 22 07:30:48 crc kubenswrapper[4853]: I1122 07:30:48.728519 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/325ff591-591f-4b66-adbb-fdc7e20a553d-tls-key-pair\") pod \"nmstate-webhook-6b89b748d8-h59d2\" (UID: \"325ff591-591f-4b66-adbb-fdc7e20a553d\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-h59d2" Nov 22 07:30:48 crc kubenswrapper[4853]: I1122 07:30:48.830529 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/325ff591-591f-4b66-adbb-fdc7e20a553d-tls-key-pair\") pod \"nmstate-webhook-6b89b748d8-h59d2\" (UID: \"325ff591-591f-4b66-adbb-fdc7e20a553d\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-h59d2" Nov 22 07:30:48 crc kubenswrapper[4853]: I1122 07:30:48.830652 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qph9w\" (UniqueName: \"kubernetes.io/projected/5aed800e-6ddc-444e-b9d9-2440106297c3-kube-api-access-qph9w\") pod \"nmstate-metrics-5dcf9c57c5-tbl28\" (UID: \"5aed800e-6ddc-444e-b9d9-2440106297c3\") " pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-tbl28" Nov 22 07:30:48 crc kubenswrapper[4853]: I1122 07:30:48.830907 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-llfd2\" (UniqueName: \"kubernetes.io/projected/8c632bb5-f4e5-43b2-b6ce-6a7bede629f8-kube-api-access-llfd2\") pod \"nmstate-handler-fm7n6\" (UID: \"8c632bb5-f4e5-43b2-b6ce-6a7bede629f8\") " pod="openshift-nmstate/nmstate-handler-fm7n6" Nov 22 07:30:48 crc kubenswrapper[4853]: I1122 07:30:48.830952 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8lpqd\" (UniqueName: 
\"kubernetes.io/projected/325ff591-591f-4b66-adbb-fdc7e20a553d-kube-api-access-8lpqd\") pod \"nmstate-webhook-6b89b748d8-h59d2\" (UID: \"325ff591-591f-4b66-adbb-fdc7e20a553d\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-h59d2" Nov 22 07:30:48 crc kubenswrapper[4853]: I1122 07:30:48.831019 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/8c632bb5-f4e5-43b2-b6ce-6a7bede629f8-ovs-socket\") pod \"nmstate-handler-fm7n6\" (UID: \"8c632bb5-f4e5-43b2-b6ce-6a7bede629f8\") " pod="openshift-nmstate/nmstate-handler-fm7n6" Nov 22 07:30:48 crc kubenswrapper[4853]: I1122 07:30:48.831080 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/8c632bb5-f4e5-43b2-b6ce-6a7bede629f8-nmstate-lock\") pod \"nmstate-handler-fm7n6\" (UID: \"8c632bb5-f4e5-43b2-b6ce-6a7bede629f8\") " pod="openshift-nmstate/nmstate-handler-fm7n6" Nov 22 07:30:48 crc kubenswrapper[4853]: I1122 07:30:48.831150 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/8c632bb5-f4e5-43b2-b6ce-6a7bede629f8-dbus-socket\") pod \"nmstate-handler-fm7n6\" (UID: \"8c632bb5-f4e5-43b2-b6ce-6a7bede629f8\") " pod="openshift-nmstate/nmstate-handler-fm7n6" Nov 22 07:30:48 crc kubenswrapper[4853]: E1122 07:30:48.831376 4853 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Nov 22 07:30:48 crc kubenswrapper[4853]: E1122 07:30:48.831509 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/325ff591-591f-4b66-adbb-fdc7e20a553d-tls-key-pair podName:325ff591-591f-4b66-adbb-fdc7e20a553d nodeName:}" failed. No retries permitted until 2025-11-22 07:30:49.331476324 +0000 UTC m=+1248.172098950 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/325ff591-591f-4b66-adbb-fdc7e20a553d-tls-key-pair") pod "nmstate-webhook-6b89b748d8-h59d2" (UID: "325ff591-591f-4b66-adbb-fdc7e20a553d") : secret "openshift-nmstate-webhook" not found Nov 22 07:30:48 crc kubenswrapper[4853]: I1122 07:30:48.894115 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8lpqd\" (UniqueName: \"kubernetes.io/projected/325ff591-591f-4b66-adbb-fdc7e20a553d-kube-api-access-8lpqd\") pod \"nmstate-webhook-6b89b748d8-h59d2\" (UID: \"325ff591-591f-4b66-adbb-fdc7e20a553d\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-h59d2" Nov 22 07:30:48 crc kubenswrapper[4853]: I1122 07:30:48.907214 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qph9w\" (UniqueName: \"kubernetes.io/projected/5aed800e-6ddc-444e-b9d9-2440106297c3-kube-api-access-qph9w\") pod \"nmstate-metrics-5dcf9c57c5-tbl28\" (UID: \"5aed800e-6ddc-444e-b9d9-2440106297c3\") " pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-tbl28" Nov 22 07:30:48 crc kubenswrapper[4853]: I1122 07:30:48.934277 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/8c632bb5-f4e5-43b2-b6ce-6a7bede629f8-dbus-socket\") pod \"nmstate-handler-fm7n6\" (UID: \"8c632bb5-f4e5-43b2-b6ce-6a7bede629f8\") " pod="openshift-nmstate/nmstate-handler-fm7n6" Nov 22 07:30:48 crc kubenswrapper[4853]: I1122 07:30:48.934406 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-llfd2\" (UniqueName: \"kubernetes.io/projected/8c632bb5-f4e5-43b2-b6ce-6a7bede629f8-kube-api-access-llfd2\") pod \"nmstate-handler-fm7n6\" (UID: \"8c632bb5-f4e5-43b2-b6ce-6a7bede629f8\") " pod="openshift-nmstate/nmstate-handler-fm7n6" Nov 22 07:30:48 crc kubenswrapper[4853]: I1122 07:30:48.934495 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/8c632bb5-f4e5-43b2-b6ce-6a7bede629f8-ovs-socket\") pod \"nmstate-handler-fm7n6\" (UID: \"8c632bb5-f4e5-43b2-b6ce-6a7bede629f8\") " pod="openshift-nmstate/nmstate-handler-fm7n6" Nov 22 07:30:48 crc kubenswrapper[4853]: I1122 07:30:48.934564 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/8c632bb5-f4e5-43b2-b6ce-6a7bede629f8-nmstate-lock\") pod \"nmstate-handler-fm7n6\" (UID: \"8c632bb5-f4e5-43b2-b6ce-6a7bede629f8\") " pod="openshift-nmstate/nmstate-handler-fm7n6" Nov 22 07:30:48 crc kubenswrapper[4853]: I1122 07:30:48.936527 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/8c632bb5-f4e5-43b2-b6ce-6a7bede629f8-dbus-socket\") pod \"nmstate-handler-fm7n6\" (UID: \"8c632bb5-f4e5-43b2-b6ce-6a7bede629f8\") " pod="openshift-nmstate/nmstate-handler-fm7n6" Nov 22 07:30:48 crc kubenswrapper[4853]: I1122 07:30:48.937028 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/8c632bb5-f4e5-43b2-b6ce-6a7bede629f8-ovs-socket\") pod \"nmstate-handler-fm7n6\" (UID: \"8c632bb5-f4e5-43b2-b6ce-6a7bede629f8\") " pod="openshift-nmstate/nmstate-handler-fm7n6" Nov 22 07:30:48 crc kubenswrapper[4853]: I1122 07:30:48.937146 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: 
\"kubernetes.io/host-path/8c632bb5-f4e5-43b2-b6ce-6a7bede629f8-nmstate-lock\") pod \"nmstate-handler-fm7n6\" (UID: \"8c632bb5-f4e5-43b2-b6ce-6a7bede629f8\") " pod="openshift-nmstate/nmstate-handler-fm7n6" Nov 22 07:30:48 crc kubenswrapper[4853]: I1122 07:30:48.959953 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5874bd7bc5-ng6rq"] Nov 22 07:30:48 crc kubenswrapper[4853]: I1122 07:30:48.960932 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-ng6rq" Nov 22 07:30:48 crc kubenswrapper[4853]: I1122 07:30:48.965736 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Nov 22 07:30:48 crc kubenswrapper[4853]: I1122 07:30:48.965934 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-z95mn" Nov 22 07:30:48 crc kubenswrapper[4853]: I1122 07:30:48.966047 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Nov 22 07:30:48 crc kubenswrapper[4853]: I1122 07:30:48.992275 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5874bd7bc5-ng6rq"] Nov 22 07:30:49 crc kubenswrapper[4853]: I1122 07:30:49.000288 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-llfd2\" (UniqueName: \"kubernetes.io/projected/8c632bb5-f4e5-43b2-b6ce-6a7bede629f8-kube-api-access-llfd2\") pod \"nmstate-handler-fm7n6\" (UID: \"8c632bb5-f4e5-43b2-b6ce-6a7bede629f8\") " pod="openshift-nmstate/nmstate-handler-fm7n6" Nov 22 07:30:49 crc kubenswrapper[4853]: I1122 07:30:49.010352 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-fm7n6" Nov 22 07:30:49 crc kubenswrapper[4853]: I1122 07:30:49.139007 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/cd806fb8-97b3-4c27-95d5-0366665151db-nginx-conf\") pod \"nmstate-console-plugin-5874bd7bc5-ng6rq\" (UID: \"cd806fb8-97b3-4c27-95d5-0366665151db\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-ng6rq" Nov 22 07:30:49 crc kubenswrapper[4853]: I1122 07:30:49.139326 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/cd806fb8-97b3-4c27-95d5-0366665151db-plugin-serving-cert\") pod \"nmstate-console-plugin-5874bd7bc5-ng6rq\" (UID: \"cd806fb8-97b3-4c27-95d5-0366665151db\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-ng6rq" Nov 22 07:30:49 crc kubenswrapper[4853]: I1122 07:30:49.139470 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gkztr\" (UniqueName: \"kubernetes.io/projected/cd806fb8-97b3-4c27-95d5-0366665151db-kube-api-access-gkztr\") pod \"nmstate-console-plugin-5874bd7bc5-ng6rq\" (UID: \"cd806fb8-97b3-4c27-95d5-0366665151db\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-ng6rq" Nov 22 07:30:49 crc kubenswrapper[4853]: I1122 07:30:49.188667 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-tbl28" Nov 22 07:30:49 crc kubenswrapper[4853]: I1122 07:30:49.202393 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-c5f4cf575-47s4q"] Nov 22 07:30:49 crc kubenswrapper[4853]: I1122 07:30:49.203617 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-c5f4cf575-47s4q" Nov 22 07:30:49 crc kubenswrapper[4853]: I1122 07:30:49.227816 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-c5f4cf575-47s4q"] Nov 22 07:30:49 crc kubenswrapper[4853]: I1122 07:30:49.241011 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/cd806fb8-97b3-4c27-95d5-0366665151db-nginx-conf\") pod \"nmstate-console-plugin-5874bd7bc5-ng6rq\" (UID: \"cd806fb8-97b3-4c27-95d5-0366665151db\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-ng6rq" Nov 22 07:30:49 crc kubenswrapper[4853]: I1122 07:30:49.241131 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/cd806fb8-97b3-4c27-95d5-0366665151db-plugin-serving-cert\") pod \"nmstate-console-plugin-5874bd7bc5-ng6rq\" (UID: \"cd806fb8-97b3-4c27-95d5-0366665151db\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-ng6rq" Nov 22 07:30:49 crc kubenswrapper[4853]: I1122 07:30:49.241197 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gkztr\" (UniqueName: \"kubernetes.io/projected/cd806fb8-97b3-4c27-95d5-0366665151db-kube-api-access-gkztr\") pod \"nmstate-console-plugin-5874bd7bc5-ng6rq\" (UID: \"cd806fb8-97b3-4c27-95d5-0366665151db\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-ng6rq" Nov 22 07:30:49 crc kubenswrapper[4853]: I1122 07:30:49.242548 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/cd806fb8-97b3-4c27-95d5-0366665151db-nginx-conf\") pod \"nmstate-console-plugin-5874bd7bc5-ng6rq\" (UID: \"cd806fb8-97b3-4c27-95d5-0366665151db\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-ng6rq" Nov 22 07:30:49 crc kubenswrapper[4853]: I1122 07:30:49.245594 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/cd806fb8-97b3-4c27-95d5-0366665151db-plugin-serving-cert\") pod \"nmstate-console-plugin-5874bd7bc5-ng6rq\" (UID: \"cd806fb8-97b3-4c27-95d5-0366665151db\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-ng6rq" Nov 22 07:30:49 crc kubenswrapper[4853]: I1122 07:30:49.268703 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gkztr\" (UniqueName: \"kubernetes.io/projected/cd806fb8-97b3-4c27-95d5-0366665151db-kube-api-access-gkztr\") pod \"nmstate-console-plugin-5874bd7bc5-ng6rq\" (UID: \"cd806fb8-97b3-4c27-95d5-0366665151db\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-ng6rq" Nov 22 07:30:49 crc kubenswrapper[4853]: I1122 07:30:49.300504 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-ng6rq" Nov 22 07:30:49 crc kubenswrapper[4853]: I1122 07:30:49.343068 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f6hxj\" (UniqueName: \"kubernetes.io/projected/a0659bb8-90a4-4018-b1a5-64d307a50dcd-kube-api-access-f6hxj\") pod \"console-c5f4cf575-47s4q\" (UID: \"a0659bb8-90a4-4018-b1a5-64d307a50dcd\") " pod="openshift-console/console-c5f4cf575-47s4q" Nov 22 07:30:49 crc kubenswrapper[4853]: I1122 07:30:49.343486 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a0659bb8-90a4-4018-b1a5-64d307a50dcd-console-config\") pod \"console-c5f4cf575-47s4q\" (UID: \"a0659bb8-90a4-4018-b1a5-64d307a50dcd\") " pod="openshift-console/console-c5f4cf575-47s4q" Nov 22 07:30:49 crc kubenswrapper[4853]: I1122 07:30:49.343524 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a0659bb8-90a4-4018-b1a5-64d307a50dcd-console-oauth-config\") pod \"console-c5f4cf575-47s4q\" (UID: \"a0659bb8-90a4-4018-b1a5-64d307a50dcd\") " pod="openshift-console/console-c5f4cf575-47s4q" Nov 22 07:30:49 crc kubenswrapper[4853]: I1122 07:30:49.343573 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a0659bb8-90a4-4018-b1a5-64d307a50dcd-service-ca\") pod \"console-c5f4cf575-47s4q\" (UID: \"a0659bb8-90a4-4018-b1a5-64d307a50dcd\") " pod="openshift-console/console-c5f4cf575-47s4q" Nov 22 07:30:49 crc kubenswrapper[4853]: I1122 07:30:49.343599 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a0659bb8-90a4-4018-b1a5-64d307a50dcd-trusted-ca-bundle\") pod \"console-c5f4cf575-47s4q\" (UID: \"a0659bb8-90a4-4018-b1a5-64d307a50dcd\") " pod="openshift-console/console-c5f4cf575-47s4q" Nov 22 07:30:49 crc kubenswrapper[4853]: I1122 07:30:49.343651 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/325ff591-591f-4b66-adbb-fdc7e20a553d-tls-key-pair\") pod \"nmstate-webhook-6b89b748d8-h59d2\" (UID: \"325ff591-591f-4b66-adbb-fdc7e20a553d\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-h59d2" Nov 22 07:30:49 crc kubenswrapper[4853]: I1122 07:30:49.343686 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a0659bb8-90a4-4018-b1a5-64d307a50dcd-console-serving-cert\") pod \"console-c5f4cf575-47s4q\" (UID: \"a0659bb8-90a4-4018-b1a5-64d307a50dcd\") " pod="openshift-console/console-c5f4cf575-47s4q" Nov 22 07:30:49 crc kubenswrapper[4853]: I1122 07:30:49.343752 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a0659bb8-90a4-4018-b1a5-64d307a50dcd-oauth-serving-cert\") pod \"console-c5f4cf575-47s4q\" (UID: \"a0659bb8-90a4-4018-b1a5-64d307a50dcd\") " pod="openshift-console/console-c5f4cf575-47s4q" Nov 22 07:30:49 crc kubenswrapper[4853]: I1122 07:30:49.362675 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: 
\"kubernetes.io/secret/325ff591-591f-4b66-adbb-fdc7e20a553d-tls-key-pair\") pod \"nmstate-webhook-6b89b748d8-h59d2\" (UID: \"325ff591-591f-4b66-adbb-fdc7e20a553d\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-h59d2" Nov 22 07:30:49 crc kubenswrapper[4853]: I1122 07:30:49.446137 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a0659bb8-90a4-4018-b1a5-64d307a50dcd-console-config\") pod \"console-c5f4cf575-47s4q\" (UID: \"a0659bb8-90a4-4018-b1a5-64d307a50dcd\") " pod="openshift-console/console-c5f4cf575-47s4q" Nov 22 07:30:49 crc kubenswrapper[4853]: I1122 07:30:49.446907 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a0659bb8-90a4-4018-b1a5-64d307a50dcd-console-oauth-config\") pod \"console-c5f4cf575-47s4q\" (UID: \"a0659bb8-90a4-4018-b1a5-64d307a50dcd\") " pod="openshift-console/console-c5f4cf575-47s4q" Nov 22 07:30:49 crc kubenswrapper[4853]: I1122 07:30:49.446978 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a0659bb8-90a4-4018-b1a5-64d307a50dcd-service-ca\") pod \"console-c5f4cf575-47s4q\" (UID: \"a0659bb8-90a4-4018-b1a5-64d307a50dcd\") " pod="openshift-console/console-c5f4cf575-47s4q" Nov 22 07:30:49 crc kubenswrapper[4853]: I1122 07:30:49.447015 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a0659bb8-90a4-4018-b1a5-64d307a50dcd-trusted-ca-bundle\") pod \"console-c5f4cf575-47s4q\" (UID: \"a0659bb8-90a4-4018-b1a5-64d307a50dcd\") " pod="openshift-console/console-c5f4cf575-47s4q" Nov 22 07:30:49 crc kubenswrapper[4853]: I1122 07:30:49.447066 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a0659bb8-90a4-4018-b1a5-64d307a50dcd-console-serving-cert\") pod \"console-c5f4cf575-47s4q\" (UID: \"a0659bb8-90a4-4018-b1a5-64d307a50dcd\") " pod="openshift-console/console-c5f4cf575-47s4q" Nov 22 07:30:49 crc kubenswrapper[4853]: I1122 07:30:49.447146 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a0659bb8-90a4-4018-b1a5-64d307a50dcd-oauth-serving-cert\") pod \"console-c5f4cf575-47s4q\" (UID: \"a0659bb8-90a4-4018-b1a5-64d307a50dcd\") " pod="openshift-console/console-c5f4cf575-47s4q" Nov 22 07:30:49 crc kubenswrapper[4853]: I1122 07:30:49.447192 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f6hxj\" (UniqueName: \"kubernetes.io/projected/a0659bb8-90a4-4018-b1a5-64d307a50dcd-kube-api-access-f6hxj\") pod \"console-c5f4cf575-47s4q\" (UID: \"a0659bb8-90a4-4018-b1a5-64d307a50dcd\") " pod="openshift-console/console-c5f4cf575-47s4q" Nov 22 07:30:49 crc kubenswrapper[4853]: I1122 07:30:49.449548 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a0659bb8-90a4-4018-b1a5-64d307a50dcd-oauth-serving-cert\") pod \"console-c5f4cf575-47s4q\" (UID: \"a0659bb8-90a4-4018-b1a5-64d307a50dcd\") " pod="openshift-console/console-c5f4cf575-47s4q" Nov 22 07:30:49 crc kubenswrapper[4853]: I1122 07:30:49.449851 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/a0659bb8-90a4-4018-b1a5-64d307a50dcd-service-ca\") pod \"console-c5f4cf575-47s4q\" (UID: \"a0659bb8-90a4-4018-b1a5-64d307a50dcd\") " pod="openshift-console/console-c5f4cf575-47s4q" Nov 22 07:30:49 crc kubenswrapper[4853]: I1122 07:30:49.451451 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a0659bb8-90a4-4018-b1a5-64d307a50dcd-trusted-ca-bundle\") pod \"console-c5f4cf575-47s4q\" (UID: \"a0659bb8-90a4-4018-b1a5-64d307a50dcd\") " pod="openshift-console/console-c5f4cf575-47s4q" Nov 22 07:30:49 crc kubenswrapper[4853]: I1122 07:30:49.456205 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a0659bb8-90a4-4018-b1a5-64d307a50dcd-console-serving-cert\") pod \"console-c5f4cf575-47s4q\" (UID: \"a0659bb8-90a4-4018-b1a5-64d307a50dcd\") " pod="openshift-console/console-c5f4cf575-47s4q" Nov 22 07:30:49 crc kubenswrapper[4853]: I1122 07:30:49.459636 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a0659bb8-90a4-4018-b1a5-64d307a50dcd-console-config\") pod \"console-c5f4cf575-47s4q\" (UID: \"a0659bb8-90a4-4018-b1a5-64d307a50dcd\") " pod="openshift-console/console-c5f4cf575-47s4q" Nov 22 07:30:49 crc kubenswrapper[4853]: I1122 07:30:49.472010 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a0659bb8-90a4-4018-b1a5-64d307a50dcd-console-oauth-config\") pod \"console-c5f4cf575-47s4q\" (UID: \"a0659bb8-90a4-4018-b1a5-64d307a50dcd\") " pod="openshift-console/console-c5f4cf575-47s4q" Nov 22 07:30:49 crc kubenswrapper[4853]: I1122 07:30:49.476470 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f6hxj\" (UniqueName: \"kubernetes.io/projected/a0659bb8-90a4-4018-b1a5-64d307a50dcd-kube-api-access-f6hxj\") pod \"console-c5f4cf575-47s4q\" (UID: \"a0659bb8-90a4-4018-b1a5-64d307a50dcd\") " pod="openshift-console/console-c5f4cf575-47s4q" Nov 22 07:30:49 crc kubenswrapper[4853]: I1122 07:30:49.538258 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-h59d2" Nov 22 07:30:49 crc kubenswrapper[4853]: I1122 07:30:49.560020 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-fm7n6" event={"ID":"8c632bb5-f4e5-43b2-b6ce-6a7bede629f8","Type":"ContainerStarted","Data":"13bdda401ee569bc0414fd7dc3b080cd1fb42d3614591104082ced7a92b9d62d"} Nov 22 07:30:49 crc kubenswrapper[4853]: I1122 07:30:49.591445 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-c5f4cf575-47s4q" Nov 22 07:30:49 crc kubenswrapper[4853]: I1122 07:30:49.597716 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-5dcf9c57c5-tbl28"] Nov 22 07:30:50 crc kubenswrapper[4853]: I1122 07:30:50.015848 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5874bd7bc5-ng6rq"] Nov 22 07:30:50 crc kubenswrapper[4853]: I1122 07:30:50.093180 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-c5f4cf575-47s4q"] Nov 22 07:30:50 crc kubenswrapper[4853]: I1122 07:30:50.156977 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-6b89b748d8-h59d2"] Nov 22 07:30:50 crc kubenswrapper[4853]: W1122 07:30:50.167105 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod325ff591_591f_4b66_adbb_fdc7e20a553d.slice/crio-fccee09c5443dc519d29794cbabb16b6c327b1be1f4695303acac80fc86c4975 WatchSource:0}: Error finding container fccee09c5443dc519d29794cbabb16b6c327b1be1f4695303acac80fc86c4975: Status 404 returned error can't find the container with id fccee09c5443dc519d29794cbabb16b6c327b1be1f4695303acac80fc86c4975 Nov 22 07:30:50 crc kubenswrapper[4853]: I1122 07:30:50.571546 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-ng6rq" event={"ID":"cd806fb8-97b3-4c27-95d5-0366665151db","Type":"ContainerStarted","Data":"04106fcc626453435119731a89063f99dec7ea17c1ae9492b7c37e6efcb0cf2e"} Nov 22 07:30:50 crc kubenswrapper[4853]: I1122 07:30:50.573235 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-tbl28" event={"ID":"5aed800e-6ddc-444e-b9d9-2440106297c3","Type":"ContainerStarted","Data":"5b3ae190eee6b18a2fe157e1b80865fc83f8f1f86ecfb71dc2160abdc567a60e"} Nov 22 07:30:50 crc kubenswrapper[4853]: I1122 07:30:50.575935 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-c5f4cf575-47s4q" event={"ID":"a0659bb8-90a4-4018-b1a5-64d307a50dcd","Type":"ContainerStarted","Data":"ce34071f8bfdc0c83adea546339db47ef1dc168ff80bdb05db3fb5acc9181e0a"} Nov 22 07:30:50 crc kubenswrapper[4853]: I1122 07:30:50.576012 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-c5f4cf575-47s4q" event={"ID":"a0659bb8-90a4-4018-b1a5-64d307a50dcd","Type":"ContainerStarted","Data":"a62ba506cb39568c8942aa6f83a260f00e9432bb6ae351a0af99194f760212d5"} Nov 22 07:30:50 crc kubenswrapper[4853]: I1122 07:30:50.578374 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-h59d2" event={"ID":"325ff591-591f-4b66-adbb-fdc7e20a553d","Type":"ContainerStarted","Data":"fccee09c5443dc519d29794cbabb16b6c327b1be1f4695303acac80fc86c4975"} Nov 22 07:30:50 crc kubenswrapper[4853]: I1122 07:30:50.601107 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-c5f4cf575-47s4q" podStartSLOduration=1.601077912 podStartE2EDuration="1.601077912s" podCreationTimestamp="2025-11-22 07:30:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:30:50.594623875 +0000 UTC m=+1249.435246501" watchObservedRunningTime="2025-11-22 07:30:50.601077912 +0000 UTC m=+1249.441700548" Nov 22 07:30:55 crc kubenswrapper[4853]: 
I1122 07:30:55.632500 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-fm7n6" event={"ID":"8c632bb5-f4e5-43b2-b6ce-6a7bede629f8","Type":"ContainerStarted","Data":"c52ead3d1683243097b8d4ac2ba1cfa171da8f129abb3df2ea922e54425caf66"} Nov 22 07:30:55 crc kubenswrapper[4853]: I1122 07:30:55.633338 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-fm7n6" Nov 22 07:30:55 crc kubenswrapper[4853]: I1122 07:30:55.634752 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-tbl28" event={"ID":"5aed800e-6ddc-444e-b9d9-2440106297c3","Type":"ContainerStarted","Data":"9b99506962b42baf53a8d59ecfa8a72cbfc1cca6271aee81bd180ae497968fb7"} Nov 22 07:30:55 crc kubenswrapper[4853]: I1122 07:30:55.638055 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-h59d2" event={"ID":"325ff591-591f-4b66-adbb-fdc7e20a553d","Type":"ContainerStarted","Data":"b7ecdcd05c63230677e0f911e81eca49b7c1628fb8997f82eb45343ada0427f5"} Nov 22 07:30:55 crc kubenswrapper[4853]: I1122 07:30:55.638264 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-h59d2" Nov 22 07:30:55 crc kubenswrapper[4853]: I1122 07:30:55.654008 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-fm7n6" podStartSLOduration=2.166030595 podStartE2EDuration="7.653632893s" podCreationTimestamp="2025-11-22 07:30:48 +0000 UTC" firstStartedPulling="2025-11-22 07:30:49.065528747 +0000 UTC m=+1247.906151373" lastFinishedPulling="2025-11-22 07:30:54.553131045 +0000 UTC m=+1253.393753671" observedRunningTime="2025-11-22 07:30:55.651199646 +0000 UTC m=+1254.491822272" watchObservedRunningTime="2025-11-22 07:30:55.653632893 +0000 UTC m=+1254.494255519" Nov 22 07:30:55 crc kubenswrapper[4853]: I1122 07:30:55.682017 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-h59d2" podStartSLOduration=3.296492015 podStartE2EDuration="7.681981482s" podCreationTimestamp="2025-11-22 07:30:48 +0000 UTC" firstStartedPulling="2025-11-22 07:30:50.169541521 +0000 UTC m=+1249.010164147" lastFinishedPulling="2025-11-22 07:30:54.555030988 +0000 UTC m=+1253.395653614" observedRunningTime="2025-11-22 07:30:55.672405649 +0000 UTC m=+1254.513028275" watchObservedRunningTime="2025-11-22 07:30:55.681981482 +0000 UTC m=+1254.522604108" Nov 22 07:30:59 crc kubenswrapper[4853]: I1122 07:30:59.047663 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-fm7n6" Nov 22 07:30:59 crc kubenswrapper[4853]: I1122 07:30:59.592506 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-c5f4cf575-47s4q" Nov 22 07:30:59 crc kubenswrapper[4853]: I1122 07:30:59.593232 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-c5f4cf575-47s4q" Nov 22 07:30:59 crc kubenswrapper[4853]: I1122 07:30:59.598955 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-c5f4cf575-47s4q" Nov 22 07:30:59 crc kubenswrapper[4853]: I1122 07:30:59.675674 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-ng6rq" 
event={"ID":"cd806fb8-97b3-4c27-95d5-0366665151db","Type":"ContainerStarted","Data":"e7dd35f83cb1c6acdceea87587dc5ad29449a9b62998e862e2d7967f68a08a2e"} Nov 22 07:30:59 crc kubenswrapper[4853]: I1122 07:30:59.680878 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-c5f4cf575-47s4q" Nov 22 07:30:59 crc kubenswrapper[4853]: I1122 07:30:59.696247 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-ng6rq" podStartSLOduration=2.9096751640000003 podStartE2EDuration="11.696229985s" podCreationTimestamp="2025-11-22 07:30:48 +0000 UTC" firstStartedPulling="2025-11-22 07:30:50.034311264 +0000 UTC m=+1248.874933890" lastFinishedPulling="2025-11-22 07:30:58.820866085 +0000 UTC m=+1257.661488711" observedRunningTime="2025-11-22 07:30:59.694479527 +0000 UTC m=+1258.535102173" watchObservedRunningTime="2025-11-22 07:30:59.696229985 +0000 UTC m=+1258.536852611" Nov 22 07:30:59 crc kubenswrapper[4853]: I1122 07:30:59.783606 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-fd7cb74df-54pkh"] Nov 22 07:31:01 crc kubenswrapper[4853]: I1122 07:31:01.297803 4853 patch_prober.go:28] interesting pod/machine-config-daemon-fflvd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 07:31:01 crc kubenswrapper[4853]: I1122 07:31:01.298443 4853 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 07:31:01 crc kubenswrapper[4853]: I1122 07:31:01.298527 4853 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" Nov 22 07:31:01 crc kubenswrapper[4853]: I1122 07:31:01.299703 4853 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c00f978e65a6d1e77a568c918905dcabf620ebbd24981dc536007d357d44ae2e"} pod="openshift-machine-config-operator/machine-config-daemon-fflvd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 22 07:31:01 crc kubenswrapper[4853]: I1122 07:31:01.299802 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" containerName="machine-config-daemon" containerID="cri-o://c00f978e65a6d1e77a568c918905dcabf620ebbd24981dc536007d357d44ae2e" gracePeriod=600 Nov 22 07:31:01 crc kubenswrapper[4853]: I1122 07:31:01.697147 4853 generic.go:334] "Generic (PLEG): container finished" podID="476c875a-2b87-419a-8042-0ba059620fd8" containerID="c00f978e65a6d1e77a568c918905dcabf620ebbd24981dc536007d357d44ae2e" exitCode=0 Nov 22 07:31:01 crc kubenswrapper[4853]: I1122 07:31:01.697648 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" event={"ID":"476c875a-2b87-419a-8042-0ba059620fd8","Type":"ContainerDied","Data":"c00f978e65a6d1e77a568c918905dcabf620ebbd24981dc536007d357d44ae2e"} Nov 22 07:31:01 crc 
kubenswrapper[4853]: I1122 07:31:01.697786 4853 scope.go:117] "RemoveContainer" containerID="453b1ef38ab6b08bb125d45890335ad304d3ef7d9d0a68f91fb10cfac32c00e8" Nov 22 07:31:02 crc kubenswrapper[4853]: I1122 07:31:02.711338 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" event={"ID":"476c875a-2b87-419a-8042-0ba059620fd8","Type":"ContainerStarted","Data":"a94379b7240c320a54475e30e875758eec0fc5f02dfe1040038fbc1ac77b62e7"} Nov 22 07:31:02 crc kubenswrapper[4853]: I1122 07:31:02.714299 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-tbl28" event={"ID":"5aed800e-6ddc-444e-b9d9-2440106297c3","Type":"ContainerStarted","Data":"70522e9c81ce0a5a60b70892e9793af2ebf312d11c16b86ba95bfe180ffc2ab2"} Nov 22 07:31:03 crc kubenswrapper[4853]: I1122 07:31:03.781116 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-tbl28" podStartSLOduration=3.045847356 podStartE2EDuration="15.781074258s" podCreationTimestamp="2025-11-22 07:30:48 +0000 UTC" firstStartedPulling="2025-11-22 07:30:49.653776335 +0000 UTC m=+1248.494398961" lastFinishedPulling="2025-11-22 07:31:02.389003227 +0000 UTC m=+1261.229625863" observedRunningTime="2025-11-22 07:31:03.7720408 +0000 UTC m=+1262.612663466" watchObservedRunningTime="2025-11-22 07:31:03.781074258 +0000 UTC m=+1262.621696924" Nov 22 07:31:09 crc kubenswrapper[4853]: I1122 07:31:09.546355 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-h59d2" Nov 22 07:31:24 crc kubenswrapper[4853]: I1122 07:31:24.845728 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-fd7cb74df-54pkh" podUID="770673d6-8086-419e-82fd-275359586fc8" containerName="console" containerID="cri-o://6510898f759c3dee620e420553b32941ae50f3647a9f47adc1766ba81216a480" gracePeriod=15 Nov 22 07:31:25 crc kubenswrapper[4853]: I1122 07:31:25.357278 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-fd7cb74df-54pkh_770673d6-8086-419e-82fd-275359586fc8/console/0.log" Nov 22 07:31:25 crc kubenswrapper[4853]: I1122 07:31:25.358124 4853 util.go:48] "No ready sandbox for pod can be found. 
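
The machine-config-daemon sequence just above is the standard liveness remediation path: patch_prober GETs http://127.0.0.1:8798/health, the TCP connect is refused, the probe is reported as a failure, and kuberuntime kills the container with its grace period (600s here) so it can be restarted under a fresh container ID at 07:31:02. A stripped-down HTTP probe in the same spirit (a sketch; kubelet's real prober in pkg/kubelet/prober adds headers, redirect handling, and result caching):

    package main

    import (
        "fmt"
        "net/http"
        "time"
    )

    // probe mirrors the pass/fail shape of the entries above.
    func probe(url string) (status, output string) {
        client := &http.Client{Timeout: time.Second}
        resp, err := client.Get(url)
        if err != nil {
            // e.g. Get "http://127.0.0.1:8798/health": dial tcp 127.0.0.1:8798: connect: connection refused
            return "failure", err.Error()
        }
        defer resp.Body.Close()
        if resp.StatusCode >= 200 && resp.StatusCode < 400 {
            return "success", ""
        }
        return "failure", fmt.Sprintf("HTTP probe failed with statuscode: %d", resp.StatusCode)
    }

    func main() {
        fmt.Println(probe("http://127.0.0.1:8798/health"))
    }
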
Need to start a new one" pod="openshift-console/console-fd7cb74df-54pkh" Nov 22 07:31:25 crc kubenswrapper[4853]: I1122 07:31:25.493838 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cnsmn\" (UniqueName: \"kubernetes.io/projected/770673d6-8086-419e-82fd-275359586fc8-kube-api-access-cnsmn\") pod \"770673d6-8086-419e-82fd-275359586fc8\" (UID: \"770673d6-8086-419e-82fd-275359586fc8\") " Nov 22 07:31:25 crc kubenswrapper[4853]: I1122 07:31:25.494281 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/770673d6-8086-419e-82fd-275359586fc8-service-ca\") pod \"770673d6-8086-419e-82fd-275359586fc8\" (UID: \"770673d6-8086-419e-82fd-275359586fc8\") " Nov 22 07:31:25 crc kubenswrapper[4853]: I1122 07:31:25.494394 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/770673d6-8086-419e-82fd-275359586fc8-console-config\") pod \"770673d6-8086-419e-82fd-275359586fc8\" (UID: \"770673d6-8086-419e-82fd-275359586fc8\") " Nov 22 07:31:25 crc kubenswrapper[4853]: I1122 07:31:25.494518 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/770673d6-8086-419e-82fd-275359586fc8-trusted-ca-bundle\") pod \"770673d6-8086-419e-82fd-275359586fc8\" (UID: \"770673d6-8086-419e-82fd-275359586fc8\") " Nov 22 07:31:25 crc kubenswrapper[4853]: I1122 07:31:25.494686 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/770673d6-8086-419e-82fd-275359586fc8-console-serving-cert\") pod \"770673d6-8086-419e-82fd-275359586fc8\" (UID: \"770673d6-8086-419e-82fd-275359586fc8\") " Nov 22 07:31:25 crc kubenswrapper[4853]: I1122 07:31:25.494831 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/770673d6-8086-419e-82fd-275359586fc8-oauth-serving-cert\") pod \"770673d6-8086-419e-82fd-275359586fc8\" (UID: \"770673d6-8086-419e-82fd-275359586fc8\") " Nov 22 07:31:25 crc kubenswrapper[4853]: I1122 07:31:25.495075 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/770673d6-8086-419e-82fd-275359586fc8-console-oauth-config\") pod \"770673d6-8086-419e-82fd-275359586fc8\" (UID: \"770673d6-8086-419e-82fd-275359586fc8\") " Nov 22 07:31:25 crc kubenswrapper[4853]: I1122 07:31:25.498697 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/770673d6-8086-419e-82fd-275359586fc8-service-ca" (OuterVolumeSpecName: "service-ca") pod "770673d6-8086-419e-82fd-275359586fc8" (UID: "770673d6-8086-419e-82fd-275359586fc8"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:31:25 crc kubenswrapper[4853]: I1122 07:31:25.498720 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/770673d6-8086-419e-82fd-275359586fc8-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "770673d6-8086-419e-82fd-275359586fc8" (UID: "770673d6-8086-419e-82fd-275359586fc8"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:31:25 crc kubenswrapper[4853]: I1122 07:31:25.499137 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/770673d6-8086-419e-82fd-275359586fc8-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "770673d6-8086-419e-82fd-275359586fc8" (UID: "770673d6-8086-419e-82fd-275359586fc8"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:31:25 crc kubenswrapper[4853]: I1122 07:31:25.499762 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/770673d6-8086-419e-82fd-275359586fc8-console-config" (OuterVolumeSpecName: "console-config") pod "770673d6-8086-419e-82fd-275359586fc8" (UID: "770673d6-8086-419e-82fd-275359586fc8"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:31:25 crc kubenswrapper[4853]: I1122 07:31:25.505895 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/770673d6-8086-419e-82fd-275359586fc8-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "770673d6-8086-419e-82fd-275359586fc8" (UID: "770673d6-8086-419e-82fd-275359586fc8"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:31:25 crc kubenswrapper[4853]: I1122 07:31:25.514986 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/770673d6-8086-419e-82fd-275359586fc8-kube-api-access-cnsmn" (OuterVolumeSpecName: "kube-api-access-cnsmn") pod "770673d6-8086-419e-82fd-275359586fc8" (UID: "770673d6-8086-419e-82fd-275359586fc8"). InnerVolumeSpecName "kube-api-access-cnsmn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:31:25 crc kubenswrapper[4853]: I1122 07:31:25.521027 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/770673d6-8086-419e-82fd-275359586fc8-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "770673d6-8086-419e-82fd-275359586fc8" (UID: "770673d6-8086-419e-82fd-275359586fc8"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:31:25 crc kubenswrapper[4853]: I1122 07:31:25.597071 4853 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/770673d6-8086-419e-82fd-275359586fc8-console-oauth-config\") on node \"crc\" DevicePath \"\"" Nov 22 07:31:25 crc kubenswrapper[4853]: I1122 07:31:25.597130 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cnsmn\" (UniqueName: \"kubernetes.io/projected/770673d6-8086-419e-82fd-275359586fc8-kube-api-access-cnsmn\") on node \"crc\" DevicePath \"\"" Nov 22 07:31:25 crc kubenswrapper[4853]: I1122 07:31:25.597148 4853 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/770673d6-8086-419e-82fd-275359586fc8-service-ca\") on node \"crc\" DevicePath \"\"" Nov 22 07:31:25 crc kubenswrapper[4853]: I1122 07:31:25.597159 4853 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/770673d6-8086-419e-82fd-275359586fc8-console-config\") on node \"crc\" DevicePath \"\"" Nov 22 07:31:25 crc kubenswrapper[4853]: I1122 07:31:25.597169 4853 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/770673d6-8086-419e-82fd-275359586fc8-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:31:25 crc kubenswrapper[4853]: I1122 07:31:25.597177 4853 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/770673d6-8086-419e-82fd-275359586fc8-console-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 22 07:31:25 crc kubenswrapper[4853]: I1122 07:31:25.597185 4853 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/770673d6-8086-419e-82fd-275359586fc8-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 22 07:31:25 crc kubenswrapper[4853]: I1122 07:31:25.933306 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-fd7cb74df-54pkh_770673d6-8086-419e-82fd-275359586fc8/console/0.log" Nov 22 07:31:25 crc kubenswrapper[4853]: I1122 07:31:25.933793 4853 generic.go:334] "Generic (PLEG): container finished" podID="770673d6-8086-419e-82fd-275359586fc8" containerID="6510898f759c3dee620e420553b32941ae50f3647a9f47adc1766ba81216a480" exitCode=2 Nov 22 07:31:25 crc kubenswrapper[4853]: I1122 07:31:25.933836 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-fd7cb74df-54pkh" event={"ID":"770673d6-8086-419e-82fd-275359586fc8","Type":"ContainerDied","Data":"6510898f759c3dee620e420553b32941ae50f3647a9f47adc1766ba81216a480"} Nov 22 07:31:25 crc kubenswrapper[4853]: I1122 07:31:25.933872 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-fd7cb74df-54pkh" event={"ID":"770673d6-8086-419e-82fd-275359586fc8","Type":"ContainerDied","Data":"fe3d4ce43dad4080bcac442739372a6a224a4289ef219acaa37b68ca39755831"} Nov 22 07:31:25 crc kubenswrapper[4853]: I1122 07:31:25.933896 4853 scope.go:117] "RemoveContainer" containerID="6510898f759c3dee620e420553b32941ae50f3647a9f47adc1766ba81216a480" Nov 22 07:31:25 crc kubenswrapper[4853]: I1122 07:31:25.934079 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-fd7cb74df-54pkh" Nov 22 07:31:25 crc kubenswrapper[4853]: I1122 07:31:25.965551 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-fd7cb74df-54pkh"] Nov 22 07:31:25 crc kubenswrapper[4853]: I1122 07:31:25.971327 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-fd7cb74df-54pkh"] Nov 22 07:31:26 crc kubenswrapper[4853]: I1122 07:31:26.027198 4853 scope.go:117] "RemoveContainer" containerID="6510898f759c3dee620e420553b32941ae50f3647a9f47adc1766ba81216a480" Nov 22 07:31:26 crc kubenswrapper[4853]: E1122 07:31:26.027724 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6510898f759c3dee620e420553b32941ae50f3647a9f47adc1766ba81216a480\": container with ID starting with 6510898f759c3dee620e420553b32941ae50f3647a9f47adc1766ba81216a480 not found: ID does not exist" containerID="6510898f759c3dee620e420553b32941ae50f3647a9f47adc1766ba81216a480" Nov 22 07:31:26 crc kubenswrapper[4853]: I1122 07:31:26.027780 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6510898f759c3dee620e420553b32941ae50f3647a9f47adc1766ba81216a480"} err="failed to get container status \"6510898f759c3dee620e420553b32941ae50f3647a9f47adc1766ba81216a480\": rpc error: code = NotFound desc = could not find container \"6510898f759c3dee620e420553b32941ae50f3647a9f47adc1766ba81216a480\": container with ID starting with 6510898f759c3dee620e420553b32941ae50f3647a9f47adc1766ba81216a480 not found: ID does not exist" Nov 22 07:31:27 crc kubenswrapper[4853]: I1122 07:31:27.760700 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="770673d6-8086-419e-82fd-275359586fc8" path="/var/lib/kubelet/pods/770673d6-8086-419e-82fd-275359586fc8/volumes" Nov 22 07:31:31 crc kubenswrapper[4853]: I1122 07:31:31.705179 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6st98x"] Nov 22 07:31:31 crc kubenswrapper[4853]: E1122 07:31:31.706129 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="770673d6-8086-419e-82fd-275359586fc8" containerName="console" Nov 22 07:31:31 crc kubenswrapper[4853]: I1122 07:31:31.706144 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="770673d6-8086-419e-82fd-275359586fc8" containerName="console" Nov 22 07:31:31 crc kubenswrapper[4853]: I1122 07:31:31.706291 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="770673d6-8086-419e-82fd-275359586fc8" containerName="console" Nov 22 07:31:31 crc kubenswrapper[4853]: I1122 07:31:31.707361 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6st98x" Nov 22 07:31:31 crc kubenswrapper[4853]: I1122 07:31:31.710062 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Nov 22 07:31:31 crc kubenswrapper[4853]: I1122 07:31:31.736573 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6st98x"] Nov 22 07:31:31 crc kubenswrapper[4853]: I1122 07:31:31.815986 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3f7e0026-3c37-470d-b2b7-cf742c742854-bundle\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6st98x\" (UID: \"3f7e0026-3c37-470d-b2b7-cf742c742854\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6st98x" Nov 22 07:31:31 crc kubenswrapper[4853]: I1122 07:31:31.816656 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dc2tw\" (UniqueName: \"kubernetes.io/projected/3f7e0026-3c37-470d-b2b7-cf742c742854-kube-api-access-dc2tw\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6st98x\" (UID: \"3f7e0026-3c37-470d-b2b7-cf742c742854\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6st98x" Nov 22 07:31:31 crc kubenswrapper[4853]: I1122 07:31:31.816909 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3f7e0026-3c37-470d-b2b7-cf742c742854-util\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6st98x\" (UID: \"3f7e0026-3c37-470d-b2b7-cf742c742854\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6st98x" Nov 22 07:31:31 crc kubenswrapper[4853]: I1122 07:31:31.918686 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3f7e0026-3c37-470d-b2b7-cf742c742854-bundle\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6st98x\" (UID: \"3f7e0026-3c37-470d-b2b7-cf742c742854\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6st98x" Nov 22 07:31:31 crc kubenswrapper[4853]: I1122 07:31:31.918742 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dc2tw\" (UniqueName: \"kubernetes.io/projected/3f7e0026-3c37-470d-b2b7-cf742c742854-kube-api-access-dc2tw\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6st98x\" (UID: \"3f7e0026-3c37-470d-b2b7-cf742c742854\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6st98x" Nov 22 07:31:31 crc kubenswrapper[4853]: I1122 07:31:31.918813 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3f7e0026-3c37-470d-b2b7-cf742c742854-util\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6st98x\" (UID: \"3f7e0026-3c37-470d-b2b7-cf742c742854\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6st98x" Nov 22 07:31:31 crc kubenswrapper[4853]: I1122 07:31:31.919320 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/3f7e0026-3c37-470d-b2b7-cf742c742854-util\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6st98x\" (UID: \"3f7e0026-3c37-470d-b2b7-cf742c742854\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6st98x" Nov 22 07:31:31 crc kubenswrapper[4853]: I1122 07:31:31.919317 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3f7e0026-3c37-470d-b2b7-cf742c742854-bundle\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6st98x\" (UID: \"3f7e0026-3c37-470d-b2b7-cf742c742854\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6st98x" Nov 22 07:31:31 crc kubenswrapper[4853]: I1122 07:31:31.946690 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dc2tw\" (UniqueName: \"kubernetes.io/projected/3f7e0026-3c37-470d-b2b7-cf742c742854-kube-api-access-dc2tw\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6st98x\" (UID: \"3f7e0026-3c37-470d-b2b7-cf742c742854\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6st98x" Nov 22 07:31:32 crc kubenswrapper[4853]: I1122 07:31:32.048245 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6st98x" Nov 22 07:31:32 crc kubenswrapper[4853]: I1122 07:31:32.521333 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6st98x"] Nov 22 07:31:32 crc kubenswrapper[4853]: I1122 07:31:32.998137 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6st98x" event={"ID":"3f7e0026-3c37-470d-b2b7-cf742c742854","Type":"ContainerStarted","Data":"99fab87fe80d26540805b4e01b9fc9f648a9fbf8070590ed24cd27423c583308"} Nov 22 07:31:33 crc kubenswrapper[4853]: I1122 07:31:33.000334 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6st98x" event={"ID":"3f7e0026-3c37-470d-b2b7-cf742c742854","Type":"ContainerStarted","Data":"20eef55d28aa0088b567e21d122ddb2b206cbee19a5b6a2b48de7ebf7a92a7fd"} Nov 22 07:31:34 crc kubenswrapper[4853]: I1122 07:31:34.010325 4853 generic.go:334] "Generic (PLEG): container finished" podID="3f7e0026-3c37-470d-b2b7-cf742c742854" containerID="99fab87fe80d26540805b4e01b9fc9f648a9fbf8070590ed24cd27423c583308" exitCode=0 Nov 22 07:31:34 crc kubenswrapper[4853]: I1122 07:31:34.010441 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6st98x" event={"ID":"3f7e0026-3c37-470d-b2b7-cf742c742854","Type":"ContainerDied","Data":"99fab87fe80d26540805b4e01b9fc9f648a9fbf8070590ed24cd27423c583308"} Nov 22 07:31:37 crc kubenswrapper[4853]: I1122 07:31:37.034824 4853 generic.go:334] "Generic (PLEG): container finished" podID="3f7e0026-3c37-470d-b2b7-cf742c742854" containerID="57815b69d1ebbd70f3d04c8b6b2656aee8a073dbfdb50389d7ddb3eadadba22d" exitCode=0 Nov 22 07:31:37 crc kubenswrapper[4853]: I1122 07:31:37.034935 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6st98x" 
event={"ID":"3f7e0026-3c37-470d-b2b7-cf742c742854","Type":"ContainerDied","Data":"57815b69d1ebbd70f3d04c8b6b2656aee8a073dbfdb50389d7ddb3eadadba22d"} Nov 22 07:31:38 crc kubenswrapper[4853]: I1122 07:31:38.046457 4853 generic.go:334] "Generic (PLEG): container finished" podID="3f7e0026-3c37-470d-b2b7-cf742c742854" containerID="01f72c61420c71ba78e939052ac786210c06a9fc2eb582670531cb150abdf8be" exitCode=0 Nov 22 07:31:38 crc kubenswrapper[4853]: I1122 07:31:38.046587 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6st98x" event={"ID":"3f7e0026-3c37-470d-b2b7-cf742c742854","Type":"ContainerDied","Data":"01f72c61420c71ba78e939052ac786210c06a9fc2eb582670531cb150abdf8be"} Nov 22 07:31:39 crc kubenswrapper[4853]: I1122 07:31:39.347912 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6st98x" Nov 22 07:31:39 crc kubenswrapper[4853]: I1122 07:31:39.472824 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dc2tw\" (UniqueName: \"kubernetes.io/projected/3f7e0026-3c37-470d-b2b7-cf742c742854-kube-api-access-dc2tw\") pod \"3f7e0026-3c37-470d-b2b7-cf742c742854\" (UID: \"3f7e0026-3c37-470d-b2b7-cf742c742854\") " Nov 22 07:31:39 crc kubenswrapper[4853]: I1122 07:31:39.472964 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3f7e0026-3c37-470d-b2b7-cf742c742854-util\") pod \"3f7e0026-3c37-470d-b2b7-cf742c742854\" (UID: \"3f7e0026-3c37-470d-b2b7-cf742c742854\") " Nov 22 07:31:39 crc kubenswrapper[4853]: I1122 07:31:39.473103 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3f7e0026-3c37-470d-b2b7-cf742c742854-bundle\") pod \"3f7e0026-3c37-470d-b2b7-cf742c742854\" (UID: \"3f7e0026-3c37-470d-b2b7-cf742c742854\") " Nov 22 07:31:39 crc kubenswrapper[4853]: I1122 07:31:39.474107 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3f7e0026-3c37-470d-b2b7-cf742c742854-bundle" (OuterVolumeSpecName: "bundle") pod "3f7e0026-3c37-470d-b2b7-cf742c742854" (UID: "3f7e0026-3c37-470d-b2b7-cf742c742854"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:31:39 crc kubenswrapper[4853]: I1122 07:31:39.478488 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f7e0026-3c37-470d-b2b7-cf742c742854-kube-api-access-dc2tw" (OuterVolumeSpecName: "kube-api-access-dc2tw") pod "3f7e0026-3c37-470d-b2b7-cf742c742854" (UID: "3f7e0026-3c37-470d-b2b7-cf742c742854"). InnerVolumeSpecName "kube-api-access-dc2tw". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:31:39 crc kubenswrapper[4853]: I1122 07:31:39.486799 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3f7e0026-3c37-470d-b2b7-cf742c742854-util" (OuterVolumeSpecName: "util") pod "3f7e0026-3c37-470d-b2b7-cf742c742854" (UID: "3f7e0026-3c37-470d-b2b7-cf742c742854"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:31:39 crc kubenswrapper[4853]: I1122 07:31:39.575358 4853 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3f7e0026-3c37-470d-b2b7-cf742c742854-util\") on node \"crc\" DevicePath \"\"" Nov 22 07:31:39 crc kubenswrapper[4853]: I1122 07:31:39.575654 4853 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3f7e0026-3c37-470d-b2b7-cf742c742854-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:31:39 crc kubenswrapper[4853]: I1122 07:31:39.575713 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dc2tw\" (UniqueName: \"kubernetes.io/projected/3f7e0026-3c37-470d-b2b7-cf742c742854-kube-api-access-dc2tw\") on node \"crc\" DevicePath \"\"" Nov 22 07:31:40 crc kubenswrapper[4853]: I1122 07:31:40.065626 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6st98x" event={"ID":"3f7e0026-3c37-470d-b2b7-cf742c742854","Type":"ContainerDied","Data":"20eef55d28aa0088b567e21d122ddb2b206cbee19a5b6a2b48de7ebf7a92a7fd"} Nov 22 07:31:40 crc kubenswrapper[4853]: I1122 07:31:40.065675 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="20eef55d28aa0088b567e21d122ddb2b206cbee19a5b6a2b48de7ebf7a92a7fd" Nov 22 07:31:40 crc kubenswrapper[4853]: I1122 07:31:40.065681 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6st98x" Nov 22 07:31:50 crc kubenswrapper[4853]: I1122 07:31:50.497095 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-559f7d85b8-xtjfd"] Nov 22 07:31:50 crc kubenswrapper[4853]: E1122 07:31:50.497827 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f7e0026-3c37-470d-b2b7-cf742c742854" containerName="util" Nov 22 07:31:50 crc kubenswrapper[4853]: I1122 07:31:50.497838 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f7e0026-3c37-470d-b2b7-cf742c742854" containerName="util" Nov 22 07:31:50 crc kubenswrapper[4853]: E1122 07:31:50.497854 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f7e0026-3c37-470d-b2b7-cf742c742854" containerName="pull" Nov 22 07:31:50 crc kubenswrapper[4853]: I1122 07:31:50.497860 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f7e0026-3c37-470d-b2b7-cf742c742854" containerName="pull" Nov 22 07:31:50 crc kubenswrapper[4853]: E1122 07:31:50.497876 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f7e0026-3c37-470d-b2b7-cf742c742854" containerName="extract" Nov 22 07:31:50 crc kubenswrapper[4853]: I1122 07:31:50.497882 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f7e0026-3c37-470d-b2b7-cf742c742854" containerName="extract" Nov 22 07:31:50 crc kubenswrapper[4853]: I1122 07:31:50.498025 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f7e0026-3c37-470d-b2b7-cf742c742854" containerName="extract" Nov 22 07:31:50 crc kubenswrapper[4853]: I1122 07:31:50.498576 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-559f7d85b8-xtjfd" Nov 22 07:31:50 crc kubenswrapper[4853]: I1122 07:31:50.501441 4853 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Nov 22 07:31:50 crc kubenswrapper[4853]: I1122 07:31:50.501598 4853 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-qwglz" Nov 22 07:31:50 crc kubenswrapper[4853]: I1122 07:31:50.501640 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Nov 22 07:31:50 crc kubenswrapper[4853]: I1122 07:31:50.504116 4853 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Nov 22 07:31:50 crc kubenswrapper[4853]: I1122 07:31:50.504281 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Nov 22 07:31:50 crc kubenswrapper[4853]: I1122 07:31:50.521371 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-559f7d85b8-xtjfd"] Nov 22 07:31:50 crc kubenswrapper[4853]: I1122 07:31:50.602013 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b7cfa3a7-05d9-4822-9fda-8316c75ee9a4-webhook-cert\") pod \"metallb-operator-controller-manager-559f7d85b8-xtjfd\" (UID: \"b7cfa3a7-05d9-4822-9fda-8316c75ee9a4\") " pod="metallb-system/metallb-operator-controller-manager-559f7d85b8-xtjfd" Nov 22 07:31:50 crc kubenswrapper[4853]: I1122 07:31:50.602100 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8gbf8\" (UniqueName: \"kubernetes.io/projected/b7cfa3a7-05d9-4822-9fda-8316c75ee9a4-kube-api-access-8gbf8\") pod \"metallb-operator-controller-manager-559f7d85b8-xtjfd\" (UID: \"b7cfa3a7-05d9-4822-9fda-8316c75ee9a4\") " pod="metallb-system/metallb-operator-controller-manager-559f7d85b8-xtjfd" Nov 22 07:31:50 crc kubenswrapper[4853]: I1122 07:31:50.602147 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b7cfa3a7-05d9-4822-9fda-8316c75ee9a4-apiservice-cert\") pod \"metallb-operator-controller-manager-559f7d85b8-xtjfd\" (UID: \"b7cfa3a7-05d9-4822-9fda-8316c75ee9a4\") " pod="metallb-system/metallb-operator-controller-manager-559f7d85b8-xtjfd" Nov 22 07:31:50 crc kubenswrapper[4853]: I1122 07:31:50.703434 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b7cfa3a7-05d9-4822-9fda-8316c75ee9a4-webhook-cert\") pod \"metallb-operator-controller-manager-559f7d85b8-xtjfd\" (UID: \"b7cfa3a7-05d9-4822-9fda-8316c75ee9a4\") " pod="metallb-system/metallb-operator-controller-manager-559f7d85b8-xtjfd" Nov 22 07:31:50 crc kubenswrapper[4853]: I1122 07:31:50.703524 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8gbf8\" (UniqueName: \"kubernetes.io/projected/b7cfa3a7-05d9-4822-9fda-8316c75ee9a4-kube-api-access-8gbf8\") pod \"metallb-operator-controller-manager-559f7d85b8-xtjfd\" (UID: \"b7cfa3a7-05d9-4822-9fda-8316c75ee9a4\") " pod="metallb-system/metallb-operator-controller-manager-559f7d85b8-xtjfd" Nov 22 07:31:50 crc kubenswrapper[4853]: I1122 07:31:50.703583 4853 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b7cfa3a7-05d9-4822-9fda-8316c75ee9a4-apiservice-cert\") pod \"metallb-operator-controller-manager-559f7d85b8-xtjfd\" (UID: \"b7cfa3a7-05d9-4822-9fda-8316c75ee9a4\") " pod="metallb-system/metallb-operator-controller-manager-559f7d85b8-xtjfd" Nov 22 07:31:50 crc kubenswrapper[4853]: I1122 07:31:50.712137 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b7cfa3a7-05d9-4822-9fda-8316c75ee9a4-apiservice-cert\") pod \"metallb-operator-controller-manager-559f7d85b8-xtjfd\" (UID: \"b7cfa3a7-05d9-4822-9fda-8316c75ee9a4\") " pod="metallb-system/metallb-operator-controller-manager-559f7d85b8-xtjfd" Nov 22 07:31:50 crc kubenswrapper[4853]: I1122 07:31:50.718602 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b7cfa3a7-05d9-4822-9fda-8316c75ee9a4-webhook-cert\") pod \"metallb-operator-controller-manager-559f7d85b8-xtjfd\" (UID: \"b7cfa3a7-05d9-4822-9fda-8316c75ee9a4\") " pod="metallb-system/metallb-operator-controller-manager-559f7d85b8-xtjfd" Nov 22 07:31:50 crc kubenswrapper[4853]: I1122 07:31:50.730289 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8gbf8\" (UniqueName: \"kubernetes.io/projected/b7cfa3a7-05d9-4822-9fda-8316c75ee9a4-kube-api-access-8gbf8\") pod \"metallb-operator-controller-manager-559f7d85b8-xtjfd\" (UID: \"b7cfa3a7-05d9-4822-9fda-8316c75ee9a4\") " pod="metallb-system/metallb-operator-controller-manager-559f7d85b8-xtjfd" Nov 22 07:31:50 crc kubenswrapper[4853]: I1122 07:31:50.823189 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-559f7d85b8-xtjfd" Nov 22 07:31:50 crc kubenswrapper[4853]: I1122 07:31:50.844482 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-6c89cb79d4-kkj69"] Nov 22 07:31:50 crc kubenswrapper[4853]: I1122 07:31:50.845955 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-6c89cb79d4-kkj69" Nov 22 07:31:50 crc kubenswrapper[4853]: I1122 07:31:50.858273 4853 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Nov 22 07:31:50 crc kubenswrapper[4853]: I1122 07:31:50.858367 4853 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Nov 22 07:31:50 crc kubenswrapper[4853]: I1122 07:31:50.858447 4853 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-tnf99" Nov 22 07:31:50 crc kubenswrapper[4853]: I1122 07:31:50.867164 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-6c89cb79d4-kkj69"] Nov 22 07:31:50 crc kubenswrapper[4853]: I1122 07:31:50.908578 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/03b1c43b-94f9-4df8-9e17-12b1fbc5a544-apiservice-cert\") pod \"metallb-operator-webhook-server-6c89cb79d4-kkj69\" (UID: \"03b1c43b-94f9-4df8-9e17-12b1fbc5a544\") " pod="metallb-system/metallb-operator-webhook-server-6c89cb79d4-kkj69" Nov 22 07:31:50 crc kubenswrapper[4853]: I1122 07:31:50.908673 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/03b1c43b-94f9-4df8-9e17-12b1fbc5a544-webhook-cert\") pod \"metallb-operator-webhook-server-6c89cb79d4-kkj69\" (UID: \"03b1c43b-94f9-4df8-9e17-12b1fbc5a544\") " pod="metallb-system/metallb-operator-webhook-server-6c89cb79d4-kkj69" Nov 22 07:31:50 crc kubenswrapper[4853]: I1122 07:31:50.908703 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4nlcj\" (UniqueName: \"kubernetes.io/projected/03b1c43b-94f9-4df8-9e17-12b1fbc5a544-kube-api-access-4nlcj\") pod \"metallb-operator-webhook-server-6c89cb79d4-kkj69\" (UID: \"03b1c43b-94f9-4df8-9e17-12b1fbc5a544\") " pod="metallb-system/metallb-operator-webhook-server-6c89cb79d4-kkj69" Nov 22 07:31:51 crc kubenswrapper[4853]: I1122 07:31:51.010662 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/03b1c43b-94f9-4df8-9e17-12b1fbc5a544-apiservice-cert\") pod \"metallb-operator-webhook-server-6c89cb79d4-kkj69\" (UID: \"03b1c43b-94f9-4df8-9e17-12b1fbc5a544\") " pod="metallb-system/metallb-operator-webhook-server-6c89cb79d4-kkj69" Nov 22 07:31:51 crc kubenswrapper[4853]: I1122 07:31:51.010724 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/03b1c43b-94f9-4df8-9e17-12b1fbc5a544-webhook-cert\") pod \"metallb-operator-webhook-server-6c89cb79d4-kkj69\" (UID: \"03b1c43b-94f9-4df8-9e17-12b1fbc5a544\") " pod="metallb-system/metallb-operator-webhook-server-6c89cb79d4-kkj69" Nov 22 07:31:51 crc kubenswrapper[4853]: I1122 07:31:51.010783 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4nlcj\" (UniqueName: \"kubernetes.io/projected/03b1c43b-94f9-4df8-9e17-12b1fbc5a544-kube-api-access-4nlcj\") pod \"metallb-operator-webhook-server-6c89cb79d4-kkj69\" (UID: \"03b1c43b-94f9-4df8-9e17-12b1fbc5a544\") " pod="metallb-system/metallb-operator-webhook-server-6c89cb79d4-kkj69" Nov 22 07:31:51 crc kubenswrapper[4853]: I1122 
07:31:51.022661 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/03b1c43b-94f9-4df8-9e17-12b1fbc5a544-webhook-cert\") pod \"metallb-operator-webhook-server-6c89cb79d4-kkj69\" (UID: \"03b1c43b-94f9-4df8-9e17-12b1fbc5a544\") " pod="metallb-system/metallb-operator-webhook-server-6c89cb79d4-kkj69" Nov 22 07:31:51 crc kubenswrapper[4853]: I1122 07:31:51.022531 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/03b1c43b-94f9-4df8-9e17-12b1fbc5a544-apiservice-cert\") pod \"metallb-operator-webhook-server-6c89cb79d4-kkj69\" (UID: \"03b1c43b-94f9-4df8-9e17-12b1fbc5a544\") " pod="metallb-system/metallb-operator-webhook-server-6c89cb79d4-kkj69" Nov 22 07:31:51 crc kubenswrapper[4853]: I1122 07:31:51.036656 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4nlcj\" (UniqueName: \"kubernetes.io/projected/03b1c43b-94f9-4df8-9e17-12b1fbc5a544-kube-api-access-4nlcj\") pod \"metallb-operator-webhook-server-6c89cb79d4-kkj69\" (UID: \"03b1c43b-94f9-4df8-9e17-12b1fbc5a544\") " pod="metallb-system/metallb-operator-webhook-server-6c89cb79d4-kkj69" Nov 22 07:31:51 crc kubenswrapper[4853]: I1122 07:31:51.171936 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-559f7d85b8-xtjfd"] Nov 22 07:31:51 crc kubenswrapper[4853]: W1122 07:31:51.176160 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb7cfa3a7_05d9_4822_9fda_8316c75ee9a4.slice/crio-1f5854348f3107ce0927e765e4c6733efd94f97049c83f8bbe5106fda0ea88f9 WatchSource:0}: Error finding container 1f5854348f3107ce0927e765e4c6733efd94f97049c83f8bbe5106fda0ea88f9: Status 404 returned error can't find the container with id 1f5854348f3107ce0927e765e4c6733efd94f97049c83f8bbe5106fda0ea88f9 Nov 22 07:31:51 crc kubenswrapper[4853]: I1122 07:31:51.238834 4853 util.go:30] "No sandbox for pod can be found. 
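
The 07:31:51 manager.go warning above is cadvisor's cgroup watcher racing CRI-O: the crio-1f5854… cgroup for the new metallb sandbox appears before the runtime can describe the container, so the lookup 404s; the event is logged and skipped, and the container is generally picked up once the runtime registers it. A tolerate-and-continue sketch with illustrative shapes, not cadvisor's:

    package main

    import (
        "errors"
        "fmt"
    )

    var errNoSuchContainer = errors.New("Status 404 returned error can't find the container")

    // describe stands in for asking the runtime about a cgroup that
    // just appeared; during the race it has nothing to report yet.
    func describe(id string) error { return errNoSuchContainer }

    func main() {
        for _, id := range []string{"crio-1f5854348f3107ce"} { // truncated ID from the event above
            if err := describe(id); err != nil {
                // Non-fatal: a later housekeeping pass will find it.
                fmt.Printf("W Failed to process watch event %s: %v\n", id, err)
                continue
            }
        }
    }
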
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-6c89cb79d4-kkj69" Nov 22 07:31:51 crc kubenswrapper[4853]: I1122 07:31:51.720285 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-6c89cb79d4-kkj69"] Nov 22 07:31:52 crc kubenswrapper[4853]: I1122 07:31:52.159314 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-559f7d85b8-xtjfd" event={"ID":"b7cfa3a7-05d9-4822-9fda-8316c75ee9a4","Type":"ContainerStarted","Data":"1f5854348f3107ce0927e765e4c6733efd94f97049c83f8bbe5106fda0ea88f9"} Nov 22 07:31:52 crc kubenswrapper[4853]: I1122 07:31:52.162288 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-6c89cb79d4-kkj69" event={"ID":"03b1c43b-94f9-4df8-9e17-12b1fbc5a544","Type":"ContainerStarted","Data":"89cf8c3a804caf86323bc82c750dc6c1cfbe9c9765b9cbb53e7286b9502b64f5"} Nov 22 07:32:00 crc kubenswrapper[4853]: I1122 07:32:00.241447 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-559f7d85b8-xtjfd" event={"ID":"b7cfa3a7-05d9-4822-9fda-8316c75ee9a4","Type":"ContainerStarted","Data":"8b72c9339106c5b3a5bc464e597d60e4d67cc93a99edeb25176c4b2bf7c2c646"} Nov 22 07:32:00 crc kubenswrapper[4853]: I1122 07:32:00.241736 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-559f7d85b8-xtjfd" Nov 22 07:32:00 crc kubenswrapper[4853]: I1122 07:32:00.244899 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-6c89cb79d4-kkj69" event={"ID":"03b1c43b-94f9-4df8-9e17-12b1fbc5a544","Type":"ContainerStarted","Data":"3f32fb98c7c23a8b1ea2f6511405f629f955235ba37d8bae0609de23073cd3c9"} Nov 22 07:32:00 crc kubenswrapper[4853]: I1122 07:32:00.245552 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-6c89cb79d4-kkj69" Nov 22 07:32:00 crc kubenswrapper[4853]: I1122 07:32:00.269600 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-559f7d85b8-xtjfd" podStartSLOduration=2.555042444 podStartE2EDuration="10.269580782s" podCreationTimestamp="2025-11-22 07:31:50 +0000 UTC" firstStartedPulling="2025-11-22 07:31:51.182050736 +0000 UTC m=+1310.022673352" lastFinishedPulling="2025-11-22 07:31:58.896589064 +0000 UTC m=+1317.737211690" observedRunningTime="2025-11-22 07:32:00.267788693 +0000 UTC m=+1319.108411329" watchObservedRunningTime="2025-11-22 07:32:00.269580782 +0000 UTC m=+1319.110203408" Nov 22 07:32:00 crc kubenswrapper[4853]: I1122 07:32:00.297990 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-6c89cb79d4-kkj69" podStartSLOduration=3.115396779 podStartE2EDuration="10.297968935s" podCreationTimestamp="2025-11-22 07:31:50 +0000 UTC" firstStartedPulling="2025-11-22 07:31:51.734561607 +0000 UTC m=+1310.575184233" lastFinishedPulling="2025-11-22 07:31:58.917133763 +0000 UTC m=+1317.757756389" observedRunningTime="2025-11-22 07:32:00.295030765 +0000 UTC m=+1319.135653381" watchObservedRunningTime="2025-11-22 07:32:00.297968935 +0000 UTC m=+1319.138591561" Nov 22 07:32:11 crc kubenswrapper[4853]: I1122 07:32:11.266692 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="metallb-system/metallb-operator-webhook-server-6c89cb79d4-kkj69" Nov 22 07:32:30 crc kubenswrapper[4853]: I1122 07:32:30.827196 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-559f7d85b8-xtjfd" Nov 22 07:32:31 crc kubenswrapper[4853]: I1122 07:32:31.567585 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-bt2b5"] Nov 22 07:32:31 crc kubenswrapper[4853]: I1122 07:32:31.570860 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-bt2b5" Nov 22 07:32:31 crc kubenswrapper[4853]: I1122 07:32:31.572951 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-6998585d5-n8cs5"] Nov 22 07:32:31 crc kubenswrapper[4853]: I1122 07:32:31.574509 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ht6mc\" (UniqueName: \"kubernetes.io/projected/a5bf1d2e-4694-4ec6-a2de-e35821a73625-kube-api-access-ht6mc\") pod \"frr-k8s-bt2b5\" (UID: \"a5bf1d2e-4694-4ec6-a2de-e35821a73625\") " pod="metallb-system/frr-k8s-bt2b5" Nov 22 07:32:31 crc kubenswrapper[4853]: I1122 07:32:31.574552 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/a5bf1d2e-4694-4ec6-a2de-e35821a73625-frr-sockets\") pod \"frr-k8s-bt2b5\" (UID: \"a5bf1d2e-4694-4ec6-a2de-e35821a73625\") " pod="metallb-system/frr-k8s-bt2b5" Nov 22 07:32:31 crc kubenswrapper[4853]: I1122 07:32:31.574568 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/a5bf1d2e-4694-4ec6-a2de-e35821a73625-reloader\") pod \"frr-k8s-bt2b5\" (UID: \"a5bf1d2e-4694-4ec6-a2de-e35821a73625\") " pod="metallb-system/frr-k8s-bt2b5" Nov 22 07:32:31 crc kubenswrapper[4853]: I1122 07:32:31.574594 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/a5bf1d2e-4694-4ec6-a2de-e35821a73625-frr-startup\") pod \"frr-k8s-bt2b5\" (UID: \"a5bf1d2e-4694-4ec6-a2de-e35821a73625\") " pod="metallb-system/frr-k8s-bt2b5" Nov 22 07:32:31 crc kubenswrapper[4853]: I1122 07:32:31.574617 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a5bf1d2e-4694-4ec6-a2de-e35821a73625-metrics-certs\") pod \"frr-k8s-bt2b5\" (UID: \"a5bf1d2e-4694-4ec6-a2de-e35821a73625\") " pod="metallb-system/frr-k8s-bt2b5" Nov 22 07:32:31 crc kubenswrapper[4853]: I1122 07:32:31.574769 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/a5bf1d2e-4694-4ec6-a2de-e35821a73625-frr-conf\") pod \"frr-k8s-bt2b5\" (UID: \"a5bf1d2e-4694-4ec6-a2de-e35821a73625\") " pod="metallb-system/frr-k8s-bt2b5" Nov 22 07:32:31 crc kubenswrapper[4853]: I1122 07:32:31.574827 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/a5bf1d2e-4694-4ec6-a2de-e35821a73625-metrics\") pod \"frr-k8s-bt2b5\" (UID: \"a5bf1d2e-4694-4ec6-a2de-e35821a73625\") " pod="metallb-system/frr-k8s-bt2b5" Nov 22 07:32:31 crc kubenswrapper[4853]: I1122 07:32:31.578096 4853 reflector.go:368] Caches populated for *v1.Secret from 
object-"metallb-system"/"frr-k8s-daemon-dockercfg-zjxmv" Nov 22 07:32:31 crc kubenswrapper[4853]: I1122 07:32:31.578182 4853 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Nov 22 07:32:31 crc kubenswrapper[4853]: I1122 07:32:31.579526 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-6998585d5-n8cs5" Nov 22 07:32:31 crc kubenswrapper[4853]: I1122 07:32:31.580213 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Nov 22 07:32:31 crc kubenswrapper[4853]: I1122 07:32:31.580406 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-6998585d5-n8cs5"] Nov 22 07:32:31 crc kubenswrapper[4853]: I1122 07:32:31.581945 4853 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Nov 22 07:32:31 crc kubenswrapper[4853]: I1122 07:32:31.659183 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-w8tbx"] Nov 22 07:32:31 crc kubenswrapper[4853]: I1122 07:32:31.660649 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-w8tbx" Nov 22 07:32:31 crc kubenswrapper[4853]: I1122 07:32:31.666503 4853 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Nov 22 07:32:31 crc kubenswrapper[4853]: I1122 07:32:31.667018 4853 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Nov 22 07:32:31 crc kubenswrapper[4853]: I1122 07:32:31.667245 4853 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-phkhw" Nov 22 07:32:31 crc kubenswrapper[4853]: I1122 07:32:31.667732 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Nov 22 07:32:31 crc kubenswrapper[4853]: I1122 07:32:31.675634 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/a5bf1d2e-4694-4ec6-a2de-e35821a73625-frr-sockets\") pod \"frr-k8s-bt2b5\" (UID: \"a5bf1d2e-4694-4ec6-a2de-e35821a73625\") " pod="metallb-system/frr-k8s-bt2b5" Nov 22 07:32:31 crc kubenswrapper[4853]: I1122 07:32:31.675673 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/a5bf1d2e-4694-4ec6-a2de-e35821a73625-reloader\") pod \"frr-k8s-bt2b5\" (UID: \"a5bf1d2e-4694-4ec6-a2de-e35821a73625\") " pod="metallb-system/frr-k8s-bt2b5" Nov 22 07:32:31 crc kubenswrapper[4853]: I1122 07:32:31.675715 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zj99w\" (UniqueName: \"kubernetes.io/projected/f1c2a1c6-4546-4933-8569-ca5e7180cd85-kube-api-access-zj99w\") pod \"frr-k8s-webhook-server-6998585d5-n8cs5\" (UID: \"f1c2a1c6-4546-4933-8569-ca5e7180cd85\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-n8cs5" Nov 22 07:32:31 crc kubenswrapper[4853]: I1122 07:32:31.675737 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/a5bf1d2e-4694-4ec6-a2de-e35821a73625-frr-startup\") pod \"frr-k8s-bt2b5\" (UID: \"a5bf1d2e-4694-4ec6-a2de-e35821a73625\") " pod="metallb-system/frr-k8s-bt2b5" Nov 22 07:32:31 crc kubenswrapper[4853]: I1122 07:32:31.675788 4853 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47m7j\" (UniqueName: \"kubernetes.io/projected/a0e4024e-8048-4b57-becd-2866e3409a4b-kube-api-access-47m7j\") pod \"speaker-w8tbx\" (UID: \"a0e4024e-8048-4b57-becd-2866e3409a4b\") " pod="metallb-system/speaker-w8tbx" Nov 22 07:32:31 crc kubenswrapper[4853]: I1122 07:32:31.675811 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a5bf1d2e-4694-4ec6-a2de-e35821a73625-metrics-certs\") pod \"frr-k8s-bt2b5\" (UID: \"a5bf1d2e-4694-4ec6-a2de-e35821a73625\") " pod="metallb-system/frr-k8s-bt2b5" Nov 22 07:32:31 crc kubenswrapper[4853]: I1122 07:32:31.675826 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a0e4024e-8048-4b57-becd-2866e3409a4b-metrics-certs\") pod \"speaker-w8tbx\" (UID: \"a0e4024e-8048-4b57-becd-2866e3409a4b\") " pod="metallb-system/speaker-w8tbx" Nov 22 07:32:31 crc kubenswrapper[4853]: I1122 07:32:31.675897 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/a0e4024e-8048-4b57-becd-2866e3409a4b-memberlist\") pod \"speaker-w8tbx\" (UID: \"a0e4024e-8048-4b57-becd-2866e3409a4b\") " pod="metallb-system/speaker-w8tbx" Nov 22 07:32:31 crc kubenswrapper[4853]: I1122 07:32:31.675949 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/a5bf1d2e-4694-4ec6-a2de-e35821a73625-frr-conf\") pod \"frr-k8s-bt2b5\" (UID: \"a5bf1d2e-4694-4ec6-a2de-e35821a73625\") " pod="metallb-system/frr-k8s-bt2b5" Nov 22 07:32:31 crc kubenswrapper[4853]: I1122 07:32:31.675973 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f1c2a1c6-4546-4933-8569-ca5e7180cd85-cert\") pod \"frr-k8s-webhook-server-6998585d5-n8cs5\" (UID: \"f1c2a1c6-4546-4933-8569-ca5e7180cd85\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-n8cs5" Nov 22 07:32:31 crc kubenswrapper[4853]: I1122 07:32:31.676052 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/a5bf1d2e-4694-4ec6-a2de-e35821a73625-metrics\") pod \"frr-k8s-bt2b5\" (UID: \"a5bf1d2e-4694-4ec6-a2de-e35821a73625\") " pod="metallb-system/frr-k8s-bt2b5" Nov 22 07:32:31 crc kubenswrapper[4853]: I1122 07:32:31.676090 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/a0e4024e-8048-4b57-becd-2866e3409a4b-metallb-excludel2\") pod \"speaker-w8tbx\" (UID: \"a0e4024e-8048-4b57-becd-2866e3409a4b\") " pod="metallb-system/speaker-w8tbx" Nov 22 07:32:31 crc kubenswrapper[4853]: I1122 07:32:31.676111 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ht6mc\" (UniqueName: \"kubernetes.io/projected/a5bf1d2e-4694-4ec6-a2de-e35821a73625-kube-api-access-ht6mc\") pod \"frr-k8s-bt2b5\" (UID: \"a5bf1d2e-4694-4ec6-a2de-e35821a73625\") " pod="metallb-system/frr-k8s-bt2b5" Nov 22 07:32:31 crc kubenswrapper[4853]: I1122 07:32:31.677024 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: 
\"kubernetes.io/empty-dir/a5bf1d2e-4694-4ec6-a2de-e35821a73625-frr-sockets\") pod \"frr-k8s-bt2b5\" (UID: \"a5bf1d2e-4694-4ec6-a2de-e35821a73625\") " pod="metallb-system/frr-k8s-bt2b5" Nov 22 07:32:31 crc kubenswrapper[4853]: I1122 07:32:31.677148 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/a5bf1d2e-4694-4ec6-a2de-e35821a73625-frr-conf\") pod \"frr-k8s-bt2b5\" (UID: \"a5bf1d2e-4694-4ec6-a2de-e35821a73625\") " pod="metallb-system/frr-k8s-bt2b5" Nov 22 07:32:31 crc kubenswrapper[4853]: I1122 07:32:31.677256 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/a5bf1d2e-4694-4ec6-a2de-e35821a73625-metrics\") pod \"frr-k8s-bt2b5\" (UID: \"a5bf1d2e-4694-4ec6-a2de-e35821a73625\") " pod="metallb-system/frr-k8s-bt2b5" Nov 22 07:32:31 crc kubenswrapper[4853]: I1122 07:32:31.677821 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/a5bf1d2e-4694-4ec6-a2de-e35821a73625-reloader\") pod \"frr-k8s-bt2b5\" (UID: \"a5bf1d2e-4694-4ec6-a2de-e35821a73625\") " pod="metallb-system/frr-k8s-bt2b5" Nov 22 07:32:31 crc kubenswrapper[4853]: I1122 07:32:31.679292 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/a5bf1d2e-4694-4ec6-a2de-e35821a73625-frr-startup\") pod \"frr-k8s-bt2b5\" (UID: \"a5bf1d2e-4694-4ec6-a2de-e35821a73625\") " pod="metallb-system/frr-k8s-bt2b5" Nov 22 07:32:31 crc kubenswrapper[4853]: I1122 07:32:31.686393 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a5bf1d2e-4694-4ec6-a2de-e35821a73625-metrics-certs\") pod \"frr-k8s-bt2b5\" (UID: \"a5bf1d2e-4694-4ec6-a2de-e35821a73625\") " pod="metallb-system/frr-k8s-bt2b5" Nov 22 07:32:31 crc kubenswrapper[4853]: I1122 07:32:31.686825 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6c7b4b5f48-lnt7f"] Nov 22 07:32:31 crc kubenswrapper[4853]: I1122 07:32:31.691581 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6c7b4b5f48-lnt7f" Nov 22 07:32:31 crc kubenswrapper[4853]: I1122 07:32:31.694372 4853 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Nov 22 07:32:31 crc kubenswrapper[4853]: I1122 07:32:31.709870 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ht6mc\" (UniqueName: \"kubernetes.io/projected/a5bf1d2e-4694-4ec6-a2de-e35821a73625-kube-api-access-ht6mc\") pod \"frr-k8s-bt2b5\" (UID: \"a5bf1d2e-4694-4ec6-a2de-e35821a73625\") " pod="metallb-system/frr-k8s-bt2b5" Nov 22 07:32:31 crc kubenswrapper[4853]: I1122 07:32:31.720521 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6c7b4b5f48-lnt7f"] Nov 22 07:32:31 crc kubenswrapper[4853]: I1122 07:32:31.778971 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f1c2a1c6-4546-4933-8569-ca5e7180cd85-cert\") pod \"frr-k8s-webhook-server-6998585d5-n8cs5\" (UID: \"f1c2a1c6-4546-4933-8569-ca5e7180cd85\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-n8cs5" Nov 22 07:32:31 crc kubenswrapper[4853]: I1122 07:32:31.779093 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/a0e4024e-8048-4b57-becd-2866e3409a4b-metallb-excludel2\") pod \"speaker-w8tbx\" (UID: \"a0e4024e-8048-4b57-becd-2866e3409a4b\") " pod="metallb-system/speaker-w8tbx" Nov 22 07:32:31 crc kubenswrapper[4853]: I1122 07:32:31.779133 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zj99w\" (UniqueName: \"kubernetes.io/projected/f1c2a1c6-4546-4933-8569-ca5e7180cd85-kube-api-access-zj99w\") pod \"frr-k8s-webhook-server-6998585d5-n8cs5\" (UID: \"f1c2a1c6-4546-4933-8569-ca5e7180cd85\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-n8cs5" Nov 22 07:32:31 crc kubenswrapper[4853]: I1122 07:32:31.779172 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-47m7j\" (UniqueName: \"kubernetes.io/projected/a0e4024e-8048-4b57-becd-2866e3409a4b-kube-api-access-47m7j\") pod \"speaker-w8tbx\" (UID: \"a0e4024e-8048-4b57-becd-2866e3409a4b\") " pod="metallb-system/speaker-w8tbx" Nov 22 07:32:31 crc kubenswrapper[4853]: I1122 07:32:31.779195 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a0e4024e-8048-4b57-becd-2866e3409a4b-metrics-certs\") pod \"speaker-w8tbx\" (UID: \"a0e4024e-8048-4b57-becd-2866e3409a4b\") " pod="metallb-system/speaker-w8tbx" Nov 22 07:32:31 crc kubenswrapper[4853]: I1122 07:32:31.779283 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/a0e4024e-8048-4b57-becd-2866e3409a4b-memberlist\") pod \"speaker-w8tbx\" (UID: \"a0e4024e-8048-4b57-becd-2866e3409a4b\") " pod="metallb-system/speaker-w8tbx" Nov 22 07:32:31 crc kubenswrapper[4853]: E1122 07:32:31.779419 4853 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Nov 22 07:32:31 crc kubenswrapper[4853]: E1122 07:32:31.779474 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a0e4024e-8048-4b57-becd-2866e3409a4b-memberlist podName:a0e4024e-8048-4b57-becd-2866e3409a4b nodeName:}" failed. 
No retries permitted until 2025-11-22 07:32:32.279460086 +0000 UTC m=+1351.120082712 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/a0e4024e-8048-4b57-becd-2866e3409a4b-memberlist") pod "speaker-w8tbx" (UID: "a0e4024e-8048-4b57-becd-2866e3409a4b") : secret "metallb-memberlist" not found Nov 22 07:32:31 crc kubenswrapper[4853]: I1122 07:32:31.782033 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/a0e4024e-8048-4b57-becd-2866e3409a4b-metallb-excludel2\") pod \"speaker-w8tbx\" (UID: \"a0e4024e-8048-4b57-becd-2866e3409a4b\") " pod="metallb-system/speaker-w8tbx" Nov 22 07:32:31 crc kubenswrapper[4853]: I1122 07:32:31.785336 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a0e4024e-8048-4b57-becd-2866e3409a4b-metrics-certs\") pod \"speaker-w8tbx\" (UID: \"a0e4024e-8048-4b57-becd-2866e3409a4b\") " pod="metallb-system/speaker-w8tbx" Nov 22 07:32:31 crc kubenswrapper[4853]: I1122 07:32:31.805206 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f1c2a1c6-4546-4933-8569-ca5e7180cd85-cert\") pod \"frr-k8s-webhook-server-6998585d5-n8cs5\" (UID: \"f1c2a1c6-4546-4933-8569-ca5e7180cd85\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-n8cs5" Nov 22 07:32:31 crc kubenswrapper[4853]: I1122 07:32:31.806063 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-47m7j\" (UniqueName: \"kubernetes.io/projected/a0e4024e-8048-4b57-becd-2866e3409a4b-kube-api-access-47m7j\") pod \"speaker-w8tbx\" (UID: \"a0e4024e-8048-4b57-becd-2866e3409a4b\") " pod="metallb-system/speaker-w8tbx" Nov 22 07:32:31 crc kubenswrapper[4853]: I1122 07:32:31.816808 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zj99w\" (UniqueName: \"kubernetes.io/projected/f1c2a1c6-4546-4933-8569-ca5e7180cd85-kube-api-access-zj99w\") pod \"frr-k8s-webhook-server-6998585d5-n8cs5\" (UID: \"f1c2a1c6-4546-4933-8569-ca5e7180cd85\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-n8cs5" Nov 22 07:32:31 crc kubenswrapper[4853]: I1122 07:32:31.881217 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5e6933c1-fd3f-45a0-819f-1794ed7fc6b4-cert\") pod \"controller-6c7b4b5f48-lnt7f\" (UID: \"5e6933c1-fd3f-45a0-819f-1794ed7fc6b4\") " pod="metallb-system/controller-6c7b4b5f48-lnt7f" Nov 22 07:32:31 crc kubenswrapper[4853]: I1122 07:32:31.881288 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p8zw6\" (UniqueName: \"kubernetes.io/projected/5e6933c1-fd3f-45a0-819f-1794ed7fc6b4-kube-api-access-p8zw6\") pod \"controller-6c7b4b5f48-lnt7f\" (UID: \"5e6933c1-fd3f-45a0-819f-1794ed7fc6b4\") " pod="metallb-system/controller-6c7b4b5f48-lnt7f" Nov 22 07:32:31 crc kubenswrapper[4853]: I1122 07:32:31.881331 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5e6933c1-fd3f-45a0-819f-1794ed7fc6b4-metrics-certs\") pod \"controller-6c7b4b5f48-lnt7f\" (UID: \"5e6933c1-fd3f-45a0-819f-1794ed7fc6b4\") " pod="metallb-system/controller-6c7b4b5f48-lnt7f" Nov 22 07:32:31 crc kubenswrapper[4853]: I1122 07:32:31.896381 4853 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-bt2b5" Nov 22 07:32:31 crc kubenswrapper[4853]: I1122 07:32:31.906188 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-6998585d5-n8cs5" Nov 22 07:32:31 crc kubenswrapper[4853]: I1122 07:32:31.982638 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5e6933c1-fd3f-45a0-819f-1794ed7fc6b4-cert\") pod \"controller-6c7b4b5f48-lnt7f\" (UID: \"5e6933c1-fd3f-45a0-819f-1794ed7fc6b4\") " pod="metallb-system/controller-6c7b4b5f48-lnt7f" Nov 22 07:32:31 crc kubenswrapper[4853]: I1122 07:32:31.982708 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p8zw6\" (UniqueName: \"kubernetes.io/projected/5e6933c1-fd3f-45a0-819f-1794ed7fc6b4-kube-api-access-p8zw6\") pod \"controller-6c7b4b5f48-lnt7f\" (UID: \"5e6933c1-fd3f-45a0-819f-1794ed7fc6b4\") " pod="metallb-system/controller-6c7b4b5f48-lnt7f" Nov 22 07:32:31 crc kubenswrapper[4853]: I1122 07:32:31.982742 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5e6933c1-fd3f-45a0-819f-1794ed7fc6b4-metrics-certs\") pod \"controller-6c7b4b5f48-lnt7f\" (UID: \"5e6933c1-fd3f-45a0-819f-1794ed7fc6b4\") " pod="metallb-system/controller-6c7b4b5f48-lnt7f" Nov 22 07:32:31 crc kubenswrapper[4853]: I1122 07:32:31.986068 4853 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Nov 22 07:32:31 crc kubenswrapper[4853]: I1122 07:32:31.986342 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5e6933c1-fd3f-45a0-819f-1794ed7fc6b4-metrics-certs\") pod \"controller-6c7b4b5f48-lnt7f\" (UID: \"5e6933c1-fd3f-45a0-819f-1794ed7fc6b4\") " pod="metallb-system/controller-6c7b4b5f48-lnt7f" Nov 22 07:32:32 crc kubenswrapper[4853]: I1122 07:32:32.000798 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5e6933c1-fd3f-45a0-819f-1794ed7fc6b4-cert\") pod \"controller-6c7b4b5f48-lnt7f\" (UID: \"5e6933c1-fd3f-45a0-819f-1794ed7fc6b4\") " pod="metallb-system/controller-6c7b4b5f48-lnt7f" Nov 22 07:32:32 crc kubenswrapper[4853]: I1122 07:32:32.003108 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p8zw6\" (UniqueName: \"kubernetes.io/projected/5e6933c1-fd3f-45a0-819f-1794ed7fc6b4-kube-api-access-p8zw6\") pod \"controller-6c7b4b5f48-lnt7f\" (UID: \"5e6933c1-fd3f-45a0-819f-1794ed7fc6b4\") " pod="metallb-system/controller-6c7b4b5f48-lnt7f" Nov 22 07:32:32 crc kubenswrapper[4853]: I1122 07:32:32.062093 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6c7b4b5f48-lnt7f" Nov 22 07:32:32 crc kubenswrapper[4853]: I1122 07:32:32.289263 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/a0e4024e-8048-4b57-becd-2866e3409a4b-memberlist\") pod \"speaker-w8tbx\" (UID: \"a0e4024e-8048-4b57-becd-2866e3409a4b\") " pod="metallb-system/speaker-w8tbx" Nov 22 07:32:32 crc kubenswrapper[4853]: E1122 07:32:32.289465 4853 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Nov 22 07:32:32 crc kubenswrapper[4853]: E1122 07:32:32.289923 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a0e4024e-8048-4b57-becd-2866e3409a4b-memberlist podName:a0e4024e-8048-4b57-becd-2866e3409a4b nodeName:}" failed. No retries permitted until 2025-11-22 07:32:33.289900742 +0000 UTC m=+1352.130523368 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/a0e4024e-8048-4b57-becd-2866e3409a4b-memberlist") pod "speaker-w8tbx" (UID: "a0e4024e-8048-4b57-becd-2866e3409a4b") : secret "metallb-memberlist" not found Nov 22 07:32:32 crc kubenswrapper[4853]: I1122 07:32:32.470233 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-6998585d5-n8cs5"] Nov 22 07:32:32 crc kubenswrapper[4853]: W1122 07:32:32.478690 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf1c2a1c6_4546_4933_8569_ca5e7180cd85.slice/crio-05a644c05fc0ec785ec8a28189a8d60c4a034b5b331665ca985d3c64e828559c WatchSource:0}: Error finding container 05a644c05fc0ec785ec8a28189a8d60c4a034b5b331665ca985d3c64e828559c: Status 404 returned error can't find the container with id 05a644c05fc0ec785ec8a28189a8d60c4a034b5b331665ca985d3c64e828559c Nov 22 07:32:32 crc kubenswrapper[4853]: I1122 07:32:32.496848 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-6998585d5-n8cs5" event={"ID":"f1c2a1c6-4546-4933-8569-ca5e7180cd85","Type":"ContainerStarted","Data":"05a644c05fc0ec785ec8a28189a8d60c4a034b5b331665ca985d3c64e828559c"} Nov 22 07:32:32 crc kubenswrapper[4853]: I1122 07:32:32.498498 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-bt2b5" event={"ID":"a5bf1d2e-4694-4ec6-a2de-e35821a73625","Type":"ContainerStarted","Data":"23225b07c6e28a8090bb9d63849fcd3f669efee6793716b546fbaa94167eae32"} Nov 22 07:32:32 crc kubenswrapper[4853]: I1122 07:32:32.545391 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6c7b4b5f48-lnt7f"] Nov 22 07:32:33 crc kubenswrapper[4853]: I1122 07:32:33.309937 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/a0e4024e-8048-4b57-becd-2866e3409a4b-memberlist\") pod \"speaker-w8tbx\" (UID: \"a0e4024e-8048-4b57-becd-2866e3409a4b\") " pod="metallb-system/speaker-w8tbx" Nov 22 07:32:33 crc kubenswrapper[4853]: I1122 07:32:33.320699 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/a0e4024e-8048-4b57-becd-2866e3409a4b-memberlist\") pod \"speaker-w8tbx\" (UID: \"a0e4024e-8048-4b57-becd-2866e3409a4b\") " pod="metallb-system/speaker-w8tbx"
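
The E-level entries just above show kubelet's mount backoff at work: MountVolume.SetUp for the speaker-w8tbx "memberlist" volume fails because the metallb-memberlist Secret does not exist yet (it is created only after the MetalLB control plane comes up), and nestedpendingoperations doubles durationBeforeRetry on each failure (500ms on the first attempt, 1s here) until the mount finally succeeds at 07:32:33.320699 once the Secret appears. Below is a minimal client-go sketch of the same wait-for-secret pattern; this is not kubelet's actual code path, and the kubeconfig location and step cap of 8 are assumptions.

    package main

    import (
        "context"
        "fmt"
        "time"

        apierrors "k8s.io/apimachinery/pkg/api/errors"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Assumes a kubeconfig at the default location (~/.kube/config).
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)

        // Doubling delay mirrors the durationBeforeRetry progression in the
        // log (500ms, then 1s); the step cap of 8 is an assumption.
        backoff := wait.Backoff{Duration: 500 * time.Millisecond, Factor: 2.0, Steps: 8}
        err = wait.ExponentialBackoff(backoff, func() (bool, error) {
            _, getErr := cs.CoreV1().Secrets("metallb-system").Get(
                context.TODO(), "metallb-memberlist", metav1.GetOptions{})
            if apierrors.IsNotFound(getErr) {
                return false, nil // not there yet; retry, as kubelet's reconciler does
            }
            return getErr == nil, getErr
        })
        fmt.Println("memberlist secret mountable:", err == nil)
    }
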
Nov 22 07:32:33 crc kubenswrapper[4853]: I1122 07:32:33.485442 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-w8tbx" Nov 22 07:32:33 crc kubenswrapper[4853]: W1122 07:32:33.505785 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda0e4024e_8048_4b57_becd_2866e3409a4b.slice/crio-961482f362c03878c843964e0f40b6493166c93f99fa07a1be3126bde131c3e1 WatchSource:0}: Error finding container 961482f362c03878c843964e0f40b6493166c93f99fa07a1be3126bde131c3e1: Status 404 returned error can't find the container with id 961482f362c03878c843964e0f40b6493166c93f99fa07a1be3126bde131c3e1 Nov 22 07:32:33 crc kubenswrapper[4853]: I1122 07:32:33.506398 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6c7b4b5f48-lnt7f" event={"ID":"5e6933c1-fd3f-45a0-819f-1794ed7fc6b4","Type":"ContainerStarted","Data":"43432a1037119b51cf1cd298441f5af2b3b6fcab53171e5b13beee1ac2a5a137"} Nov 22 07:32:33 crc kubenswrapper[4853]: I1122 07:32:33.508617 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6c7b4b5f48-lnt7f" event={"ID":"5e6933c1-fd3f-45a0-819f-1794ed7fc6b4","Type":"ContainerStarted","Data":"7a1704efc6991a4a98004233ee4054d98ac7861bc9b0b5a07de6bd9ddba7444e"} Nov 22 07:32:33 crc kubenswrapper[4853]: I1122 07:32:33.508918 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6c7b4b5f48-lnt7f" event={"ID":"5e6933c1-fd3f-45a0-819f-1794ed7fc6b4","Type":"ContainerStarted","Data":"41ae5ac081aae0b078553cdcda7bddddce1673577da4371c885574f0a2a63ff6"} Nov 22 07:32:33 crc kubenswrapper[4853]: I1122 07:32:33.509118 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6c7b4b5f48-lnt7f" Nov 22 07:32:33 crc kubenswrapper[4853]: I1122 07:32:33.544950 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6c7b4b5f48-lnt7f" podStartSLOduration=2.544923138 podStartE2EDuration="2.544923138s" podCreationTimestamp="2025-11-22 07:32:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:32:33.533434556 +0000 UTC m=+1352.374057212" watchObservedRunningTime="2025-11-22 07:32:33.544923138 +0000 UTC m=+1352.385545804" Nov 22 07:32:34 crc kubenswrapper[4853]: I1122 07:32:34.522849 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-w8tbx" event={"ID":"a0e4024e-8048-4b57-becd-2866e3409a4b","Type":"ContainerStarted","Data":"745e33c12b75852fb4ba0812d2b599ca6f153a2ae191a5bf28f3b386c54e8ce2"} Nov 22 07:32:34 crc kubenswrapper[4853]: I1122 07:32:34.523392 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-w8tbx" event={"ID":"a0e4024e-8048-4b57-becd-2866e3409a4b","Type":"ContainerStarted","Data":"961482f362c03878c843964e0f40b6493166c93f99fa07a1be3126bde131c3e1"} Nov 22 07:32:35 crc kubenswrapper[4853]: I1122 07:32:35.557804 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-w8tbx" event={"ID":"a0e4024e-8048-4b57-becd-2866e3409a4b","Type":"ContainerStarted","Data":"00d55341605daa3cc5b76ff9ebbb5e7454f0b391d61cf8773bce9e3b25950381"} Nov 22 07:32:35 crc kubenswrapper[4853]: I1122 07:32:35.558186 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-w8tbx"
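
The pod_startup_latency_tracker entries in this stretch log four timestamps plus two derived durations. Comparing the values suggests (an inference from the logged numbers, not taken from kubelet source) that podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration is that figure minus the image-pull window; pods that pulled no images, such as controller-6c7b4b5f48-lnt7f above (firstStartedPulling is the zero time), report identical values for both. A small sketch that reproduces the frr-k8s-webhook-server numbers logged below:

    package main

    import (
        "fmt"
        "time"
    )

    // podStartSLO reproduces the arithmetic these entries appear to use:
    // SLO duration = end-to-end start duration minus the image-pull window.
    func podStartSLO(created, firstPull, lastPull, observed time.Time) time.Duration {
        e2e := observed.Sub(created)
        if firstPull.IsZero() {
            return e2e // no pull happened; SLO == E2E (the controller pod case)
        }
        return e2e - lastPull.Sub(firstPull)
    }

    func main() {
        // Values from the frr-k8s-webhook-server-6998585d5-n8cs5 entry below.
        created := time.Date(2025, 11, 22, 7, 32, 31, 0, time.UTC)
        firstPull := time.Date(2025, 11, 22, 7, 32, 32, 481098868, time.UTC)
        lastPull := time.Date(2025, 11, 22, 7, 32, 43, 63280573, time.UTC)
        observed := time.Date(2025, 11, 22, 7, 32, 44, 666468015, time.UTC)
        fmt.Println(podStartSLO(created, firstPull, lastPull, observed)) // 3.08428631s
    }
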
pod="metallb-system/speaker-w8tbx" podStartSLOduration=4.586521229 podStartE2EDuration="4.586521229s" podCreationTimestamp="2025-11-22 07:32:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:32:35.58219612 +0000 UTC m=+1354.422818756" watchObservedRunningTime="2025-11-22 07:32:35.586521229 +0000 UTC m=+1354.427143855" Nov 22 07:32:42 crc kubenswrapper[4853]: I1122 07:32:42.067512 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6c7b4b5f48-lnt7f" Nov 22 07:32:43 crc kubenswrapper[4853]: I1122 07:32:43.499672 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-w8tbx" Nov 22 07:32:43 crc kubenswrapper[4853]: I1122 07:32:43.620664 4853 generic.go:334] "Generic (PLEG): container finished" podID="a5bf1d2e-4694-4ec6-a2de-e35821a73625" containerID="108450ef83bc7056156c4f8b6223a528b0341137379a74727f41088dcd8a5cc4" exitCode=0 Nov 22 07:32:43 crc kubenswrapper[4853]: I1122 07:32:43.620736 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-bt2b5" event={"ID":"a5bf1d2e-4694-4ec6-a2de-e35821a73625","Type":"ContainerDied","Data":"108450ef83bc7056156c4f8b6223a528b0341137379a74727f41088dcd8a5cc4"} Nov 22 07:32:43 crc kubenswrapper[4853]: I1122 07:32:43.622900 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-6998585d5-n8cs5" event={"ID":"f1c2a1c6-4546-4933-8569-ca5e7180cd85","Type":"ContainerStarted","Data":"df474f5b3e3ff6f5eab20ca9102f2576480ece4f06b58e08b86a641fd435127c"} Nov 22 07:32:43 crc kubenswrapper[4853]: I1122 07:32:43.623254 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-6998585d5-n8cs5" Nov 22 07:32:44 crc kubenswrapper[4853]: I1122 07:32:44.636917 4853 generic.go:334] "Generic (PLEG): container finished" podID="a5bf1d2e-4694-4ec6-a2de-e35821a73625" containerID="a447cd31b7c8f49f6080dba566781c13de480443faf15f35fda77926ee373d51" exitCode=0 Nov 22 07:32:44 crc kubenswrapper[4853]: I1122 07:32:44.637010 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-bt2b5" event={"ID":"a5bf1d2e-4694-4ec6-a2de-e35821a73625","Type":"ContainerDied","Data":"a447cd31b7c8f49f6080dba566781c13de480443faf15f35fda77926ee373d51"} Nov 22 07:32:44 crc kubenswrapper[4853]: I1122 07:32:44.666507 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-6998585d5-n8cs5" podStartSLOduration=3.08428631 podStartE2EDuration="13.666468015s" podCreationTimestamp="2025-11-22 07:32:31 +0000 UTC" firstStartedPulling="2025-11-22 07:32:32.481098868 +0000 UTC m=+1351.321721494" lastFinishedPulling="2025-11-22 07:32:43.063280573 +0000 UTC m=+1361.903903199" observedRunningTime="2025-11-22 07:32:43.702333278 +0000 UTC m=+1362.542955924" watchObservedRunningTime="2025-11-22 07:32:44.666468015 +0000 UTC m=+1363.507090641" Nov 22 07:32:45 crc kubenswrapper[4853]: I1122 07:32:45.649261 4853 generic.go:334] "Generic (PLEG): container finished" podID="a5bf1d2e-4694-4ec6-a2de-e35821a73625" containerID="705095e5e13330e6f64f85285607c3c15015eb0c11fdfde685b20f883292a24f" exitCode=0 Nov 22 07:32:45 crc kubenswrapper[4853]: I1122 07:32:45.649351 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-bt2b5" 
event={"ID":"a5bf1d2e-4694-4ec6-a2de-e35821a73625","Type":"ContainerDied","Data":"705095e5e13330e6f64f85285607c3c15015eb0c11fdfde685b20f883292a24f"} Nov 22 07:32:46 crc kubenswrapper[4853]: I1122 07:32:46.661568 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-bt2b5" event={"ID":"a5bf1d2e-4694-4ec6-a2de-e35821a73625","Type":"ContainerStarted","Data":"35a171119981ed9ef3dbb814ba5d4527edc45330874e57e5c74e56797cd0d4c9"} Nov 22 07:32:46 crc kubenswrapper[4853]: I1122 07:32:46.662009 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-bt2b5" event={"ID":"a5bf1d2e-4694-4ec6-a2de-e35821a73625","Type":"ContainerStarted","Data":"6dd4242288090cc3f4357f25e7087af57a6752369fd3a9311a5602d8cca2a3e8"} Nov 22 07:32:46 crc kubenswrapper[4853]: I1122 07:32:46.662026 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-bt2b5" event={"ID":"a5bf1d2e-4694-4ec6-a2de-e35821a73625","Type":"ContainerStarted","Data":"3e34c442434a34470e950a85c43dc9c5b5c52c65b6d842e8bf7dc5317b018fa6"} Nov 22 07:32:47 crc kubenswrapper[4853]: I1122 07:32:47.677715 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-bt2b5" event={"ID":"a5bf1d2e-4694-4ec6-a2de-e35821a73625","Type":"ContainerStarted","Data":"1f6be27720baf371e7a36e2d709ce640d5063ba62bb392bb314e86581eb847af"} Nov 22 07:32:47 crc kubenswrapper[4853]: I1122 07:32:47.678314 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-bt2b5" event={"ID":"a5bf1d2e-4694-4ec6-a2de-e35821a73625","Type":"ContainerStarted","Data":"e2fc6e792a8bebc696ef040239749ee6eff6ee472013d9906d0e97af2b059d27"} Nov 22 07:32:47 crc kubenswrapper[4853]: I1122 07:32:47.678329 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-bt2b5" event={"ID":"a5bf1d2e-4694-4ec6-a2de-e35821a73625","Type":"ContainerStarted","Data":"7fcf30d8521d7a370c5e8978f8444202d4fab3b710bd869f37714846f7179562"} Nov 22 07:32:47 crc kubenswrapper[4853]: I1122 07:32:47.678372 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-bt2b5" Nov 22 07:32:47 crc kubenswrapper[4853]: I1122 07:32:47.714611 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-bt2b5" podStartSLOduration=5.853978891 podStartE2EDuration="16.714568236s" podCreationTimestamp="2025-11-22 07:32:31 +0000 UTC" firstStartedPulling="2025-11-22 07:32:32.241730891 +0000 UTC m=+1351.082353517" lastFinishedPulling="2025-11-22 07:32:43.102320236 +0000 UTC m=+1361.942942862" observedRunningTime="2025-11-22 07:32:47.709199351 +0000 UTC m=+1366.549821977" watchObservedRunningTime="2025-11-22 07:32:47.714568236 +0000 UTC m=+1366.555190872" Nov 22 07:32:48 crc kubenswrapper[4853]: I1122 07:32:48.981203 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-5zz9t"] Nov 22 07:32:48 crc kubenswrapper[4853]: I1122 07:32:48.983036 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-5zz9t" Nov 22 07:32:48 crc kubenswrapper[4853]: I1122 07:32:48.985939 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Nov 22 07:32:48 crc kubenswrapper[4853]: I1122 07:32:48.986832 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Nov 22 07:32:48 crc kubenswrapper[4853]: I1122 07:32:48.991001 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-5zz9t"] Nov 22 07:32:48 crc kubenswrapper[4853]: I1122 07:32:48.991936 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-jptr5" Nov 22 07:32:49 crc kubenswrapper[4853]: I1122 07:32:49.124941 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8542n\" (UniqueName: \"kubernetes.io/projected/6f38e035-c5c5-49a3-a3a3-b592747e7948-kube-api-access-8542n\") pod \"openstack-operator-index-5zz9t\" (UID: \"6f38e035-c5c5-49a3-a3a3-b592747e7948\") " pod="openstack-operators/openstack-operator-index-5zz9t" Nov 22 07:32:49 crc kubenswrapper[4853]: I1122 07:32:49.227699 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8542n\" (UniqueName: \"kubernetes.io/projected/6f38e035-c5c5-49a3-a3a3-b592747e7948-kube-api-access-8542n\") pod \"openstack-operator-index-5zz9t\" (UID: \"6f38e035-c5c5-49a3-a3a3-b592747e7948\") " pod="openstack-operators/openstack-operator-index-5zz9t" Nov 22 07:32:49 crc kubenswrapper[4853]: I1122 07:32:49.270061 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8542n\" (UniqueName: \"kubernetes.io/projected/6f38e035-c5c5-49a3-a3a3-b592747e7948-kube-api-access-8542n\") pod \"openstack-operator-index-5zz9t\" (UID: \"6f38e035-c5c5-49a3-a3a3-b592747e7948\") " pod="openstack-operators/openstack-operator-index-5zz9t" Nov 22 07:32:49 crc kubenswrapper[4853]: I1122 07:32:49.308776 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-5zz9t" Nov 22 07:32:49 crc kubenswrapper[4853]: I1122 07:32:49.789183 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-5zz9t"] Nov 22 07:32:49 crc kubenswrapper[4853]: W1122 07:32:49.799159 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6f38e035_c5c5_49a3_a3a3_b592747e7948.slice/crio-6ee55d488137ab0aacc75676786ec3f7556cf86b50774b4c7c8f90b7ade817c0 WatchSource:0}: Error finding container 6ee55d488137ab0aacc75676786ec3f7556cf86b50774b4c7c8f90b7ade817c0: Status 404 returned error can't find the container with id 6ee55d488137ab0aacc75676786ec3f7556cf86b50774b4c7c8f90b7ade817c0 Nov 22 07:32:50 crc kubenswrapper[4853]: I1122 07:32:50.706212 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-5zz9t" event={"ID":"6f38e035-c5c5-49a3-a3a3-b592747e7948","Type":"ContainerStarted","Data":"6ee55d488137ab0aacc75676786ec3f7556cf86b50774b4c7c8f90b7ade817c0"} Nov 22 07:32:51 crc kubenswrapper[4853]: I1122 07:32:51.896769 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-bt2b5" Nov 22 07:32:51 crc kubenswrapper[4853]: I1122 07:32:51.946707 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-bt2b5" Nov 22 07:32:56 crc kubenswrapper[4853]: I1122 07:32:56.775431 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-5zz9t" event={"ID":"6f38e035-c5c5-49a3-a3a3-b592747e7948","Type":"ContainerStarted","Data":"2f6348cfca83c4b49c9b3129db4afae6bb2ab4af8645b4e0dc23f58abf34f121"} Nov 22 07:32:56 crc kubenswrapper[4853]: I1122 07:32:56.802170 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-5zz9t" podStartSLOduration=2.391720647 podStartE2EDuration="8.802143199s" podCreationTimestamp="2025-11-22 07:32:48 +0000 UTC" firstStartedPulling="2025-11-22 07:32:49.802008334 +0000 UTC m=+1368.642630950" lastFinishedPulling="2025-11-22 07:32:56.212430876 +0000 UTC m=+1375.053053502" observedRunningTime="2025-11-22 07:32:56.79362335 +0000 UTC m=+1375.634245996" watchObservedRunningTime="2025-11-22 07:32:56.802143199 +0000 UTC m=+1375.642765825" Nov 22 07:32:59 crc kubenswrapper[4853]: I1122 07:32:59.309666 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-5zz9t" Nov 22 07:32:59 crc kubenswrapper[4853]: I1122 07:32:59.310171 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-5zz9t" Nov 22 07:32:59 crc kubenswrapper[4853]: I1122 07:32:59.344045 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-5zz9t" Nov 22 07:33:01 crc kubenswrapper[4853]: I1122 07:33:01.903508 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-bt2b5" Nov 22 07:33:01 crc kubenswrapper[4853]: I1122 07:33:01.913150 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-6998585d5-n8cs5" Nov 22 07:33:09 crc kubenswrapper[4853]: I1122 07:33:09.342966 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/openstack-operator-index-5zz9t" Nov 22 07:33:19 crc kubenswrapper[4853]: I1122 07:33:19.689729 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/973bb835abccb9fbfc9859ca03efea68630f1af489fab59cfe5ef270d9hnfdh"] Nov 22 07:33:19 crc kubenswrapper[4853]: I1122 07:33:19.693539 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/973bb835abccb9fbfc9859ca03efea68630f1af489fab59cfe5ef270d9hnfdh" Nov 22 07:33:19 crc kubenswrapper[4853]: I1122 07:33:19.695722 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c34b4fe8-797b-4fa9-8fb0-b17426dbcb1a-util\") pod \"973bb835abccb9fbfc9859ca03efea68630f1af489fab59cfe5ef270d9hnfdh\" (UID: \"c34b4fe8-797b-4fa9-8fb0-b17426dbcb1a\") " pod="openstack-operators/973bb835abccb9fbfc9859ca03efea68630f1af489fab59cfe5ef270d9hnfdh" Nov 22 07:33:19 crc kubenswrapper[4853]: I1122 07:33:19.696003 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c34b4fe8-797b-4fa9-8fb0-b17426dbcb1a-bundle\") pod \"973bb835abccb9fbfc9859ca03efea68630f1af489fab59cfe5ef270d9hnfdh\" (UID: \"c34b4fe8-797b-4fa9-8fb0-b17426dbcb1a\") " pod="openstack-operators/973bb835abccb9fbfc9859ca03efea68630f1af489fab59cfe5ef270d9hnfdh" Nov 22 07:33:19 crc kubenswrapper[4853]: I1122 07:33:19.696129 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-twwg9\" (UniqueName: \"kubernetes.io/projected/c34b4fe8-797b-4fa9-8fb0-b17426dbcb1a-kube-api-access-twwg9\") pod \"973bb835abccb9fbfc9859ca03efea68630f1af489fab59cfe5ef270d9hnfdh\" (UID: \"c34b4fe8-797b-4fa9-8fb0-b17426dbcb1a\") " pod="openstack-operators/973bb835abccb9fbfc9859ca03efea68630f1af489fab59cfe5ef270d9hnfdh" Nov 22 07:33:19 crc kubenswrapper[4853]: I1122 07:33:19.696429 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-5vc4x" Nov 22 07:33:19 crc kubenswrapper[4853]: I1122 07:33:19.702781 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/973bb835abccb9fbfc9859ca03efea68630f1af489fab59cfe5ef270d9hnfdh"] Nov 22 07:33:19 crc kubenswrapper[4853]: I1122 07:33:19.797919 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c34b4fe8-797b-4fa9-8fb0-b17426dbcb1a-bundle\") pod \"973bb835abccb9fbfc9859ca03efea68630f1af489fab59cfe5ef270d9hnfdh\" (UID: \"c34b4fe8-797b-4fa9-8fb0-b17426dbcb1a\") " pod="openstack-operators/973bb835abccb9fbfc9859ca03efea68630f1af489fab59cfe5ef270d9hnfdh" Nov 22 07:33:19 crc kubenswrapper[4853]: I1122 07:33:19.798011 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-twwg9\" (UniqueName: \"kubernetes.io/projected/c34b4fe8-797b-4fa9-8fb0-b17426dbcb1a-kube-api-access-twwg9\") pod \"973bb835abccb9fbfc9859ca03efea68630f1af489fab59cfe5ef270d9hnfdh\" (UID: \"c34b4fe8-797b-4fa9-8fb0-b17426dbcb1a\") " pod="openstack-operators/973bb835abccb9fbfc9859ca03efea68630f1af489fab59cfe5ef270d9hnfdh" Nov 22 07:33:19 crc kubenswrapper[4853]: I1122 07:33:19.798535 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c34b4fe8-797b-4fa9-8fb0-b17426dbcb1a-util\") pod 
\"973bb835abccb9fbfc9859ca03efea68630f1af489fab59cfe5ef270d9hnfdh\" (UID: \"c34b4fe8-797b-4fa9-8fb0-b17426dbcb1a\") " pod="openstack-operators/973bb835abccb9fbfc9859ca03efea68630f1af489fab59cfe5ef270d9hnfdh" Nov 22 07:33:19 crc kubenswrapper[4853]: I1122 07:33:19.799444 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c34b4fe8-797b-4fa9-8fb0-b17426dbcb1a-bundle\") pod \"973bb835abccb9fbfc9859ca03efea68630f1af489fab59cfe5ef270d9hnfdh\" (UID: \"c34b4fe8-797b-4fa9-8fb0-b17426dbcb1a\") " pod="openstack-operators/973bb835abccb9fbfc9859ca03efea68630f1af489fab59cfe5ef270d9hnfdh" Nov 22 07:33:19 crc kubenswrapper[4853]: I1122 07:33:19.799491 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c34b4fe8-797b-4fa9-8fb0-b17426dbcb1a-util\") pod \"973bb835abccb9fbfc9859ca03efea68630f1af489fab59cfe5ef270d9hnfdh\" (UID: \"c34b4fe8-797b-4fa9-8fb0-b17426dbcb1a\") " pod="openstack-operators/973bb835abccb9fbfc9859ca03efea68630f1af489fab59cfe5ef270d9hnfdh" Nov 22 07:33:19 crc kubenswrapper[4853]: I1122 07:33:19.824535 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-twwg9\" (UniqueName: \"kubernetes.io/projected/c34b4fe8-797b-4fa9-8fb0-b17426dbcb1a-kube-api-access-twwg9\") pod \"973bb835abccb9fbfc9859ca03efea68630f1af489fab59cfe5ef270d9hnfdh\" (UID: \"c34b4fe8-797b-4fa9-8fb0-b17426dbcb1a\") " pod="openstack-operators/973bb835abccb9fbfc9859ca03efea68630f1af489fab59cfe5ef270d9hnfdh" Nov 22 07:33:20 crc kubenswrapper[4853]: I1122 07:33:20.023153 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/973bb835abccb9fbfc9859ca03efea68630f1af489fab59cfe5ef270d9hnfdh" Nov 22 07:33:20 crc kubenswrapper[4853]: I1122 07:33:20.498053 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/973bb835abccb9fbfc9859ca03efea68630f1af489fab59cfe5ef270d9hnfdh"] Nov 22 07:33:21 crc kubenswrapper[4853]: I1122 07:33:21.013405 4853 generic.go:334] "Generic (PLEG): container finished" podID="c34b4fe8-797b-4fa9-8fb0-b17426dbcb1a" containerID="08fe2110424056521de2473c5015109b8095d47020a0cff253b1c196cab2b636" exitCode=0 Nov 22 07:33:21 crc kubenswrapper[4853]: I1122 07:33:21.013472 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/973bb835abccb9fbfc9859ca03efea68630f1af489fab59cfe5ef270d9hnfdh" event={"ID":"c34b4fe8-797b-4fa9-8fb0-b17426dbcb1a","Type":"ContainerDied","Data":"08fe2110424056521de2473c5015109b8095d47020a0cff253b1c196cab2b636"} Nov 22 07:33:21 crc kubenswrapper[4853]: I1122 07:33:21.013516 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/973bb835abccb9fbfc9859ca03efea68630f1af489fab59cfe5ef270d9hnfdh" event={"ID":"c34b4fe8-797b-4fa9-8fb0-b17426dbcb1a","Type":"ContainerStarted","Data":"ba8861e7e5431ab057bf0ab41c96f98bbabbe4ee66178dd1a841369843c55f94"} Nov 22 07:33:22 crc kubenswrapper[4853]: I1122 07:33:22.028801 4853 generic.go:334] "Generic (PLEG): container finished" podID="c34b4fe8-797b-4fa9-8fb0-b17426dbcb1a" containerID="68345b968205fda36e62bc80316e746d06d42ef60eec8149aa193b669432be4f" exitCode=0 Nov 22 07:33:22 crc kubenswrapper[4853]: I1122 07:33:22.028914 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/973bb835abccb9fbfc9859ca03efea68630f1af489fab59cfe5ef270d9hnfdh" 
event={"ID":"c34b4fe8-797b-4fa9-8fb0-b17426dbcb1a","Type":"ContainerDied","Data":"68345b968205fda36e62bc80316e746d06d42ef60eec8149aa193b669432be4f"} Nov 22 07:33:23 crc kubenswrapper[4853]: I1122 07:33:23.041495 4853 generic.go:334] "Generic (PLEG): container finished" podID="c34b4fe8-797b-4fa9-8fb0-b17426dbcb1a" containerID="893029d40b2d8acf290981004282ca10a05e16ac157c5c9e8796efacbdd3c9fa" exitCode=0 Nov 22 07:33:23 crc kubenswrapper[4853]: I1122 07:33:23.041725 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/973bb835abccb9fbfc9859ca03efea68630f1af489fab59cfe5ef270d9hnfdh" event={"ID":"c34b4fe8-797b-4fa9-8fb0-b17426dbcb1a","Type":"ContainerDied","Data":"893029d40b2d8acf290981004282ca10a05e16ac157c5c9e8796efacbdd3c9fa"} Nov 22 07:33:24 crc kubenswrapper[4853]: I1122 07:33:24.401493 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/973bb835abccb9fbfc9859ca03efea68630f1af489fab59cfe5ef270d9hnfdh" Nov 22 07:33:24 crc kubenswrapper[4853]: I1122 07:33:24.410114 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c34b4fe8-797b-4fa9-8fb0-b17426dbcb1a-bundle\") pod \"c34b4fe8-797b-4fa9-8fb0-b17426dbcb1a\" (UID: \"c34b4fe8-797b-4fa9-8fb0-b17426dbcb1a\") " Nov 22 07:33:24 crc kubenswrapper[4853]: I1122 07:33:24.410343 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c34b4fe8-797b-4fa9-8fb0-b17426dbcb1a-util\") pod \"c34b4fe8-797b-4fa9-8fb0-b17426dbcb1a\" (UID: \"c34b4fe8-797b-4fa9-8fb0-b17426dbcb1a\") " Nov 22 07:33:24 crc kubenswrapper[4853]: I1122 07:33:24.410385 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-twwg9\" (UniqueName: \"kubernetes.io/projected/c34b4fe8-797b-4fa9-8fb0-b17426dbcb1a-kube-api-access-twwg9\") pod \"c34b4fe8-797b-4fa9-8fb0-b17426dbcb1a\" (UID: \"c34b4fe8-797b-4fa9-8fb0-b17426dbcb1a\") " Nov 22 07:33:24 crc kubenswrapper[4853]: I1122 07:33:24.411844 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c34b4fe8-797b-4fa9-8fb0-b17426dbcb1a-bundle" (OuterVolumeSpecName: "bundle") pod "c34b4fe8-797b-4fa9-8fb0-b17426dbcb1a" (UID: "c34b4fe8-797b-4fa9-8fb0-b17426dbcb1a"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:33:24 crc kubenswrapper[4853]: I1122 07:33:24.419957 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c34b4fe8-797b-4fa9-8fb0-b17426dbcb1a-kube-api-access-twwg9" (OuterVolumeSpecName: "kube-api-access-twwg9") pod "c34b4fe8-797b-4fa9-8fb0-b17426dbcb1a" (UID: "c34b4fe8-797b-4fa9-8fb0-b17426dbcb1a"). InnerVolumeSpecName "kube-api-access-twwg9". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:33:24 crc kubenswrapper[4853]: I1122 07:33:24.434674 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c34b4fe8-797b-4fa9-8fb0-b17426dbcb1a-util" (OuterVolumeSpecName: "util") pod "c34b4fe8-797b-4fa9-8fb0-b17426dbcb1a" (UID: "c34b4fe8-797b-4fa9-8fb0-b17426dbcb1a"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:33:24 crc kubenswrapper[4853]: I1122 07:33:24.513738 4853 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c34b4fe8-797b-4fa9-8fb0-b17426dbcb1a-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:33:24 crc kubenswrapper[4853]: I1122 07:33:24.513833 4853 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c34b4fe8-797b-4fa9-8fb0-b17426dbcb1a-util\") on node \"crc\" DevicePath \"\"" Nov 22 07:33:24 crc kubenswrapper[4853]: I1122 07:33:24.513847 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-twwg9\" (UniqueName: \"kubernetes.io/projected/c34b4fe8-797b-4fa9-8fb0-b17426dbcb1a-kube-api-access-twwg9\") on node \"crc\" DevicePath \"\"" Nov 22 07:33:25 crc kubenswrapper[4853]: I1122 07:33:25.060560 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/973bb835abccb9fbfc9859ca03efea68630f1af489fab59cfe5ef270d9hnfdh" event={"ID":"c34b4fe8-797b-4fa9-8fb0-b17426dbcb1a","Type":"ContainerDied","Data":"ba8861e7e5431ab057bf0ab41c96f98bbabbe4ee66178dd1a841369843c55f94"} Nov 22 07:33:25 crc kubenswrapper[4853]: I1122 07:33:25.060623 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ba8861e7e5431ab057bf0ab41c96f98bbabbe4ee66178dd1a841369843c55f94" Nov 22 07:33:25 crc kubenswrapper[4853]: I1122 07:33:25.061150 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/973bb835abccb9fbfc9859ca03efea68630f1af489fab59cfe5ef270d9hnfdh" Nov 22 07:33:31 crc kubenswrapper[4853]: I1122 07:33:31.297996 4853 patch_prober.go:28] interesting pod/machine-config-daemon-fflvd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 07:33:31 crc kubenswrapper[4853]: I1122 07:33:31.298360 4853 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 07:33:32 crc kubenswrapper[4853]: I1122 07:33:32.469566 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-operator-5b84778f4-fdshc"] Nov 22 07:33:32 crc kubenswrapper[4853]: E1122 07:33:32.470637 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c34b4fe8-797b-4fa9-8fb0-b17426dbcb1a" containerName="pull" Nov 22 07:33:32 crc kubenswrapper[4853]: I1122 07:33:32.470659 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="c34b4fe8-797b-4fa9-8fb0-b17426dbcb1a" containerName="pull" Nov 22 07:33:32 crc kubenswrapper[4853]: E1122 07:33:32.470695 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c34b4fe8-797b-4fa9-8fb0-b17426dbcb1a" containerName="util" Nov 22 07:33:32 crc kubenswrapper[4853]: I1122 07:33:32.470703 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="c34b4fe8-797b-4fa9-8fb0-b17426dbcb1a" containerName="util" Nov 22 07:33:32 crc kubenswrapper[4853]: E1122 07:33:32.470735 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c34b4fe8-797b-4fa9-8fb0-b17426dbcb1a" containerName="extract" Nov 22 07:33:32 
crc kubenswrapper[4853]: I1122 07:33:32.470769 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="c34b4fe8-797b-4fa9-8fb0-b17426dbcb1a" containerName="extract" Nov 22 07:33:32 crc kubenswrapper[4853]: I1122 07:33:32.470943 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="c34b4fe8-797b-4fa9-8fb0-b17426dbcb1a" containerName="extract" Nov 22 07:33:32 crc kubenswrapper[4853]: I1122 07:33:32.472261 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-5b84778f4-fdshc" Nov 22 07:33:32 crc kubenswrapper[4853]: I1122 07:33:32.475306 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-operator-dockercfg-wkzk7" Nov 22 07:33:32 crc kubenswrapper[4853]: I1122 07:33:32.550507 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-5b84778f4-fdshc"] Nov 22 07:33:32 crc kubenswrapper[4853]: I1122 07:33:32.677235 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g2c57\" (UniqueName: \"kubernetes.io/projected/f0d8fc3e-45fa-4672-8641-d88a56c44708-kube-api-access-g2c57\") pod \"openstack-operator-controller-operator-5b84778f4-fdshc\" (UID: \"f0d8fc3e-45fa-4672-8641-d88a56c44708\") " pod="openstack-operators/openstack-operator-controller-operator-5b84778f4-fdshc" Nov 22 07:33:32 crc kubenswrapper[4853]: I1122 07:33:32.779282 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g2c57\" (UniqueName: \"kubernetes.io/projected/f0d8fc3e-45fa-4672-8641-d88a56c44708-kube-api-access-g2c57\") pod \"openstack-operator-controller-operator-5b84778f4-fdshc\" (UID: \"f0d8fc3e-45fa-4672-8641-d88a56c44708\") " pod="openstack-operators/openstack-operator-controller-operator-5b84778f4-fdshc" Nov 22 07:33:32 crc kubenswrapper[4853]: I1122 07:33:32.805477 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g2c57\" (UniqueName: \"kubernetes.io/projected/f0d8fc3e-45fa-4672-8641-d88a56c44708-kube-api-access-g2c57\") pod \"openstack-operator-controller-operator-5b84778f4-fdshc\" (UID: \"f0d8fc3e-45fa-4672-8641-d88a56c44708\") " pod="openstack-operators/openstack-operator-controller-operator-5b84778f4-fdshc" Nov 22 07:33:33 crc kubenswrapper[4853]: I1122 07:33:33.101335 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-5b84778f4-fdshc" Nov 22 07:33:33 crc kubenswrapper[4853]: I1122 07:33:33.848775 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-5b84778f4-fdshc"] Nov 22 07:33:33 crc kubenswrapper[4853]: W1122 07:33:33.852926 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf0d8fc3e_45fa_4672_8641_d88a56c44708.slice/crio-c4f9c38813ea88bd778d2f096865bfe6b739d128e4ac63c25c707b7ae2654c97 WatchSource:0}: Error finding container c4f9c38813ea88bd778d2f096865bfe6b739d128e4ac63c25c707b7ae2654c97: Status 404 returned error can't find the container with id c4f9c38813ea88bd778d2f096865bfe6b739d128e4ac63c25c707b7ae2654c97 Nov 22 07:33:34 crc kubenswrapper[4853]: I1122 07:33:34.162468 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-5b84778f4-fdshc" event={"ID":"f0d8fc3e-45fa-4672-8641-d88a56c44708","Type":"ContainerStarted","Data":"c4f9c38813ea88bd778d2f096865bfe6b739d128e4ac63c25c707b7ae2654c97"} Nov 22 07:33:40 crc kubenswrapper[4853]: I1122 07:33:40.235416 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-5b84778f4-fdshc" event={"ID":"f0d8fc3e-45fa-4672-8641-d88a56c44708","Type":"ContainerStarted","Data":"2483c2a0b0587c423b0d3b9b033ed31b1da224e9d62d76e8b364841645909c4c"} Nov 22 07:33:44 crc kubenswrapper[4853]: I1122 07:33:44.292507 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-5b84778f4-fdshc" event={"ID":"f0d8fc3e-45fa-4672-8641-d88a56c44708","Type":"ContainerStarted","Data":"9ace74cadfe7105dcff72e407417d4e30444265ef92cebcc47765b9f9e83531b"} Nov 22 07:33:44 crc kubenswrapper[4853]: I1122 07:33:44.294812 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-operator-5b84778f4-fdshc" Nov 22 07:33:44 crc kubenswrapper[4853]: I1122 07:33:44.297098 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-operator-5b84778f4-fdshc" Nov 22 07:33:44 crc kubenswrapper[4853]: I1122 07:33:44.372766 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-operator-5b84778f4-fdshc" podStartSLOduration=2.143475888 podStartE2EDuration="12.372728986s" podCreationTimestamp="2025-11-22 07:33:32 +0000 UTC" firstStartedPulling="2025-11-22 07:33:33.857000598 +0000 UTC m=+1412.697623224" lastFinishedPulling="2025-11-22 07:33:44.086253706 +0000 UTC m=+1422.926876322" observedRunningTime="2025-11-22 07:33:44.367148115 +0000 UTC m=+1423.207770741" watchObservedRunningTime="2025-11-22 07:33:44.372728986 +0000 UTC m=+1423.213351612" Nov 22 07:34:01 crc kubenswrapper[4853]: I1122 07:34:01.298917 4853 patch_prober.go:28] interesting pod/machine-config-daemon-fflvd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 07:34:01 crc kubenswrapper[4853]: I1122 07:34:01.299976 4853 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" 
podUID="476c875a-2b87-419a-8042-0ba059620fd8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 07:34:07 crc kubenswrapper[4853]: I1122 07:34:07.385442 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-75fb479bcc-nf2bz"] Nov 22 07:34:07 crc kubenswrapper[4853]: I1122 07:34:07.389877 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-75fb479bcc-nf2bz" Nov 22 07:34:07 crc kubenswrapper[4853]: I1122 07:34:07.392233 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-brdb4" Nov 22 07:34:07 crc kubenswrapper[4853]: I1122 07:34:07.402702 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-75fb479bcc-nf2bz"] Nov 22 07:34:07 crc kubenswrapper[4853]: I1122 07:34:07.416458 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-6498cbf48f-cjqxx"] Nov 22 07:34:07 crc kubenswrapper[4853]: I1122 07:34:07.424498 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-6498cbf48f-cjqxx" Nov 22 07:34:07 crc kubenswrapper[4853]: I1122 07:34:07.430302 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-xmrw2" Nov 22 07:34:07 crc kubenswrapper[4853]: I1122 07:34:07.478873 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-6498cbf48f-cjqxx"] Nov 22 07:34:07 crc kubenswrapper[4853]: I1122 07:34:07.492616 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qpbfl\" (UniqueName: \"kubernetes.io/projected/f0fa8b73-0604-41c5-9dfd-ea2f3ca36c43-kube-api-access-qpbfl\") pod \"barbican-operator-controller-manager-75fb479bcc-nf2bz\" (UID: \"f0fa8b73-0604-41c5-9dfd-ea2f3ca36c43\") " pod="openstack-operators/barbican-operator-controller-manager-75fb479bcc-nf2bz" Nov 22 07:34:07 crc kubenswrapper[4853]: I1122 07:34:07.548886 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-767ccfd65f-pfmkd"] Nov 22 07:34:07 crc kubenswrapper[4853]: I1122 07:34:07.551670 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-767ccfd65f-pfmkd" Nov 22 07:34:07 crc kubenswrapper[4853]: I1122 07:34:07.561988 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-gk7zt" Nov 22 07:34:07 crc kubenswrapper[4853]: I1122 07:34:07.575977 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-7969689c84-8kl4r"] Nov 22 07:34:07 crc kubenswrapper[4853]: I1122 07:34:07.577982 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-7969689c84-8kl4r" Nov 22 07:34:07 crc kubenswrapper[4853]: I1122 07:34:07.590680 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-v84w4" Nov 22 07:34:07 crc kubenswrapper[4853]: I1122 07:34:07.596160 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qpbfl\" (UniqueName: \"kubernetes.io/projected/f0fa8b73-0604-41c5-9dfd-ea2f3ca36c43-kube-api-access-qpbfl\") pod \"barbican-operator-controller-manager-75fb479bcc-nf2bz\" (UID: \"f0fa8b73-0604-41c5-9dfd-ea2f3ca36c43\") " pod="openstack-operators/barbican-operator-controller-manager-75fb479bcc-nf2bz" Nov 22 07:34:07 crc kubenswrapper[4853]: I1122 07:34:07.596250 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rb9p4\" (UniqueName: \"kubernetes.io/projected/74c3e58c-6a8f-462f-a595-28db25f9e2c5-kube-api-access-rb9p4\") pod \"cinder-operator-controller-manager-6498cbf48f-cjqxx\" (UID: \"74c3e58c-6a8f-462f-a595-28db25f9e2c5\") " pod="openstack-operators/cinder-operator-controller-manager-6498cbf48f-cjqxx" Nov 22 07:34:07 crc kubenswrapper[4853]: I1122 07:34:07.653830 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-767ccfd65f-pfmkd"] Nov 22 07:34:07 crc kubenswrapper[4853]: I1122 07:34:07.656712 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qpbfl\" (UniqueName: \"kubernetes.io/projected/f0fa8b73-0604-41c5-9dfd-ea2f3ca36c43-kube-api-access-qpbfl\") pod \"barbican-operator-controller-manager-75fb479bcc-nf2bz\" (UID: \"f0fa8b73-0604-41c5-9dfd-ea2f3ca36c43\") " pod="openstack-operators/barbican-operator-controller-manager-75fb479bcc-nf2bz" Nov 22 07:34:07 crc kubenswrapper[4853]: I1122 07:34:07.698143 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rb9p4\" (UniqueName: \"kubernetes.io/projected/74c3e58c-6a8f-462f-a595-28db25f9e2c5-kube-api-access-rb9p4\") pod \"cinder-operator-controller-manager-6498cbf48f-cjqxx\" (UID: \"74c3e58c-6a8f-462f-a595-28db25f9e2c5\") " pod="openstack-operators/cinder-operator-controller-manager-6498cbf48f-cjqxx" Nov 22 07:34:07 crc kubenswrapper[4853]: I1122 07:34:07.698256 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8k6pr\" (UniqueName: \"kubernetes.io/projected/7ed40441-44d2-497f-93e7-d85116790d61-kube-api-access-8k6pr\") pod \"designate-operator-controller-manager-767ccfd65f-pfmkd\" (UID: \"7ed40441-44d2-497f-93e7-d85116790d61\") " pod="openstack-operators/designate-operator-controller-manager-767ccfd65f-pfmkd" Nov 22 07:34:07 crc kubenswrapper[4853]: I1122 07:34:07.698312 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzxlm\" (UniqueName: \"kubernetes.io/projected/9a6ac321-fea5-4011-9112-60695ec2d996-kube-api-access-fzxlm\") pod \"glance-operator-controller-manager-7969689c84-8kl4r\" (UID: \"9a6ac321-fea5-4011-9112-60695ec2d996\") " pod="openstack-operators/glance-operator-controller-manager-7969689c84-8kl4r" Nov 22 07:34:07 crc kubenswrapper[4853]: I1122 07:34:07.705529 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-7969689c84-8kl4r"] Nov 22 07:34:07 crc 
kubenswrapper[4853]: I1122 07:34:07.721416 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-598f69df5d-l654j"]
Nov 22 07:34:07 crc kubenswrapper[4853]: I1122 07:34:07.723304 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-598f69df5d-l654j"
Nov 22 07:34:07 crc kubenswrapper[4853]: I1122 07:34:07.727562 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-f89pd"
Nov 22 07:34:07 crc kubenswrapper[4853]: I1122 07:34:07.735766 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-75fb479bcc-nf2bz"
Nov 22 07:34:07 crc kubenswrapper[4853]: I1122 07:34:07.743105 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rb9p4\" (UniqueName: \"kubernetes.io/projected/74c3e58c-6a8f-462f-a595-28db25f9e2c5-kube-api-access-rb9p4\") pod \"cinder-operator-controller-manager-6498cbf48f-cjqxx\" (UID: \"74c3e58c-6a8f-462f-a595-28db25f9e2c5\") " pod="openstack-operators/cinder-operator-controller-manager-6498cbf48f-cjqxx"
Nov 22 07:34:07 crc kubenswrapper[4853]: I1122 07:34:07.757339 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-6498cbf48f-cjqxx"
Nov 22 07:34:07 crc kubenswrapper[4853]: I1122 07:34:07.792204 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-56f54d6746-ww42j"]
Nov 22 07:34:07 crc kubenswrapper[4853]: I1122 07:34:07.794683 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-56f54d6746-ww42j"
Nov 22 07:34:07 crc kubenswrapper[4853]: I1122 07:34:07.802108 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8k6pr\" (UniqueName: \"kubernetes.io/projected/7ed40441-44d2-497f-93e7-d85116790d61-kube-api-access-8k6pr\") pod \"designate-operator-controller-manager-767ccfd65f-pfmkd\" (UID: \"7ed40441-44d2-497f-93e7-d85116790d61\") " pod="openstack-operators/designate-operator-controller-manager-767ccfd65f-pfmkd"
Nov 22 07:34:07 crc kubenswrapper[4853]: I1122 07:34:07.802202 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fzxlm\" (UniqueName: \"kubernetes.io/projected/9a6ac321-fea5-4011-9112-60695ec2d996-kube-api-access-fzxlm\") pod \"glance-operator-controller-manager-7969689c84-8kl4r\" (UID: \"9a6ac321-fea5-4011-9112-60695ec2d996\") " pod="openstack-operators/glance-operator-controller-manager-7969689c84-8kl4r"
Nov 22 07:34:07 crc kubenswrapper[4853]: I1122 07:34:07.802398 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9m2ch\" (UniqueName: \"kubernetes.io/projected/511dcee7-13c9-45ca-b12f-3330fb1b14bc-kube-api-access-9m2ch\") pod \"horizon-operator-controller-manager-598f69df5d-l654j\" (UID: \"511dcee7-13c9-45ca-b12f-3330fb1b14bc\") " pod="openstack-operators/horizon-operator-controller-manager-598f69df5d-l654j"
Nov 22 07:34:07 crc kubenswrapper[4853]: I1122 07:34:07.813591 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-snj84"
Nov 22 07:34:07 crc kubenswrapper[4853]: I1122 07:34:07.814120 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-6dd8864d7c-4wjmn"]
Nov 22 07:34:07 crc kubenswrapper[4853]: I1122 07:34:07.815522 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-6dd8864d7c-4wjmn"
Nov 22 07:34:07 crc kubenswrapper[4853]: I1122 07:34:07.818812 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-598f69df5d-l654j"]
Nov 22 07:34:07 crc kubenswrapper[4853]: I1122 07:34:07.826456 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert"
Nov 22 07:34:07 crc kubenswrapper[4853]: I1122 07:34:07.826980 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-6km7l"
Nov 22 07:34:07 crc kubenswrapper[4853]: I1122 07:34:07.839870 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-99b499f4-km4bs"]
Nov 22 07:34:07 crc kubenswrapper[4853]: I1122 07:34:07.841649 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-99b499f4-km4bs"
Nov 22 07:34:07 crc kubenswrapper[4853]: I1122 07:34:07.851490 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8k6pr\" (UniqueName: \"kubernetes.io/projected/7ed40441-44d2-497f-93e7-d85116790d61-kube-api-access-8k6pr\") pod \"designate-operator-controller-manager-767ccfd65f-pfmkd\" (UID: \"7ed40441-44d2-497f-93e7-d85116790d61\") " pod="openstack-operators/designate-operator-controller-manager-767ccfd65f-pfmkd"
Nov 22 07:34:07 crc kubenswrapper[4853]: I1122 07:34:07.853596 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-kblsg"
Nov 22 07:34:07 crc kubenswrapper[4853]: I1122 07:34:07.874825 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-6dd8864d7c-4wjmn"]
Nov 22 07:34:07 crc kubenswrapper[4853]: I1122 07:34:07.882620 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-56f54d6746-ww42j"]
Nov 22 07:34:07 crc kubenswrapper[4853]: I1122 07:34:07.904588 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9m2ch\" (UniqueName: \"kubernetes.io/projected/511dcee7-13c9-45ca-b12f-3330fb1b14bc-kube-api-access-9m2ch\") pod \"horizon-operator-controller-manager-598f69df5d-l654j\" (UID: \"511dcee7-13c9-45ca-b12f-3330fb1b14bc\") " pod="openstack-operators/horizon-operator-controller-manager-598f69df5d-l654j"
Nov 22 07:34:07 crc kubenswrapper[4853]: I1122 07:34:07.904669 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxqkm\" (UniqueName: \"kubernetes.io/projected/8a902288-c5fa-4106-89dc-dad1ed8fff47-kube-api-access-lxqkm\") pod \"ironic-operator-controller-manager-99b499f4-km4bs\" (UID: \"8a902288-c5fa-4106-89dc-dad1ed8fff47\") " pod="openstack-operators/ironic-operator-controller-manager-99b499f4-km4bs"
Nov 22 07:34:07 crc kubenswrapper[4853]: I1122 07:34:07.904703 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/674f240d-b9b1-488a-b6bf-d6231529cf4d-cert\") pod \"infra-operator-controller-manager-6dd8864d7c-4wjmn\" (UID: \"674f240d-b9b1-488a-b6bf-d6231529cf4d\") " pod="openstack-operators/infra-operator-controller-manager-6dd8864d7c-4wjmn"
Nov 22 07:34:07 crc kubenswrapper[4853]: I1122 07:34:07.904731 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96rhj\" (UniqueName: \"kubernetes.io/projected/59095c24-fa32-4f44-b7d0-593b1291cf56-kube-api-access-96rhj\") pod \"heat-operator-controller-manager-56f54d6746-ww42j\" (UID: \"59095c24-fa32-4f44-b7d0-593b1291cf56\") " pod="openstack-operators/heat-operator-controller-manager-56f54d6746-ww42j"
Nov 22 07:34:07 crc kubenswrapper[4853]: I1122 07:34:07.904784 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jghnq\" (UniqueName: \"kubernetes.io/projected/674f240d-b9b1-488a-b6bf-d6231529cf4d-kube-api-access-jghnq\") pod \"infra-operator-controller-manager-6dd8864d7c-4wjmn\" (UID: \"674f240d-b9b1-488a-b6bf-d6231529cf4d\") " pod="openstack-operators/infra-operator-controller-manager-6dd8864d7c-4wjmn"
Nov 22 07:34:07 crc kubenswrapper[4853]: I1122 07:34:07.908937 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fzxlm\" (UniqueName: \"kubernetes.io/projected/9a6ac321-fea5-4011-9112-60695ec2d996-kube-api-access-fzxlm\") pod \"glance-operator-controller-manager-7969689c84-8kl4r\" (UID: \"9a6ac321-fea5-4011-9112-60695ec2d996\") " pod="openstack-operators/glance-operator-controller-manager-7969689c84-8kl4r"
Nov 22 07:34:07 crc kubenswrapper[4853]: I1122 07:34:07.913903 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-767ccfd65f-pfmkd"
Nov 22 07:34:07 crc kubenswrapper[4853]: I1122 07:34:07.936435 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-7969689c84-8kl4r"
Nov 22 07:34:07 crc kubenswrapper[4853]: I1122 07:34:07.954343 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9m2ch\" (UniqueName: \"kubernetes.io/projected/511dcee7-13c9-45ca-b12f-3330fb1b14bc-kube-api-access-9m2ch\") pod \"horizon-operator-controller-manager-598f69df5d-l654j\" (UID: \"511dcee7-13c9-45ca-b12f-3330fb1b14bc\") " pod="openstack-operators/horizon-operator-controller-manager-598f69df5d-l654j"
Nov 22 07:34:07 crc kubenswrapper[4853]: I1122 07:34:07.960737 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7454b96578-h5674"]
Nov 22 07:34:07 crc kubenswrapper[4853]: I1122 07:34:07.962577 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-7454b96578-h5674"
Nov 22 07:34:07 crc kubenswrapper[4853]: I1122 07:34:07.975572 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-cr6cv"
Nov 22 07:34:08 crc kubenswrapper[4853]: I1122 07:34:08.004837 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7454b96578-h5674"]
Nov 22 07:34:08 crc kubenswrapper[4853]: I1122 07:34:08.012735 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t48dj\" (UniqueName: \"kubernetes.io/projected/524f1308-44b0-4603-b612-eb02450cd46d-kube-api-access-t48dj\") pod \"keystone-operator-controller-manager-7454b96578-h5674\" (UID: \"524f1308-44b0-4603-b612-eb02450cd46d\") " pod="openstack-operators/keystone-operator-controller-manager-7454b96578-h5674"
Nov 22 07:34:08 crc kubenswrapper[4853]: I1122 07:34:08.012830 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lxqkm\" (UniqueName: \"kubernetes.io/projected/8a902288-c5fa-4106-89dc-dad1ed8fff47-kube-api-access-lxqkm\") pod \"ironic-operator-controller-manager-99b499f4-km4bs\" (UID: \"8a902288-c5fa-4106-89dc-dad1ed8fff47\") " pod="openstack-operators/ironic-operator-controller-manager-99b499f4-km4bs"
Nov 22 07:34:08 crc kubenswrapper[4853]: I1122 07:34:08.012906 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/674f240d-b9b1-488a-b6bf-d6231529cf4d-cert\") pod \"infra-operator-controller-manager-6dd8864d7c-4wjmn\" (UID: \"674f240d-b9b1-488a-b6bf-d6231529cf4d\") " pod="openstack-operators/infra-operator-controller-manager-6dd8864d7c-4wjmn"
Nov 22 07:34:08 crc kubenswrapper[4853]: I1122 07:34:08.012964 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-96rhj\" (UniqueName: \"kubernetes.io/projected/59095c24-fa32-4f44-b7d0-593b1291cf56-kube-api-access-96rhj\") pod \"heat-operator-controller-manager-56f54d6746-ww42j\" (UID: \"59095c24-fa32-4f44-b7d0-593b1291cf56\") " pod="openstack-operators/heat-operator-controller-manager-56f54d6746-ww42j"
Nov 22 07:34:08 crc kubenswrapper[4853]: I1122 07:34:08.013050 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jghnq\" (UniqueName: \"kubernetes.io/projected/674f240d-b9b1-488a-b6bf-d6231529cf4d-kube-api-access-jghnq\") pod \"infra-operator-controller-manager-6dd8864d7c-4wjmn\" (UID: \"674f240d-b9b1-488a-b6bf-d6231529cf4d\") " pod="openstack-operators/infra-operator-controller-manager-6dd8864d7c-4wjmn"
Nov 22 07:34:08 crc kubenswrapper[4853]: E1122 07:34:08.014607 4853 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Nov 22 07:34:08 crc kubenswrapper[4853]: E1122 07:34:08.014671 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/674f240d-b9b1-488a-b6bf-d6231529cf4d-cert podName:674f240d-b9b1-488a-b6bf-d6231529cf4d nodeName:}" failed. No retries permitted until 2025-11-22 07:34:08.514645727 +0000 UTC m=+1447.355268353 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/674f240d-b9b1-488a-b6bf-d6231529cf4d-cert") pod "infra-operator-controller-manager-6dd8864d7c-4wjmn" (UID: "674f240d-b9b1-488a-b6bf-d6231529cf4d") : secret "infra-operator-webhook-server-cert" not found
Nov 22 07:34:08 crc kubenswrapper[4853]: I1122 07:34:08.016972 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-99b499f4-km4bs"]
Nov 22 07:34:08 crc kubenswrapper[4853]: I1122 07:34:08.050376 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-58f887965d-f4bvx"]
Nov 22 07:34:08 crc kubenswrapper[4853]: I1122 07:34:08.052363 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-58f887965d-f4bvx"
Nov 22 07:34:08 crc kubenswrapper[4853]: I1122 07:34:08.053257 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-96rhj\" (UniqueName: \"kubernetes.io/projected/59095c24-fa32-4f44-b7d0-593b1291cf56-kube-api-access-96rhj\") pod \"heat-operator-controller-manager-56f54d6746-ww42j\" (UID: \"59095c24-fa32-4f44-b7d0-593b1291cf56\") " pod="openstack-operators/heat-operator-controller-manager-56f54d6746-ww42j"
Nov 22 07:34:08 crc kubenswrapper[4853]: I1122 07:34:08.054715 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lxqkm\" (UniqueName: \"kubernetes.io/projected/8a902288-c5fa-4106-89dc-dad1ed8fff47-kube-api-access-lxqkm\") pod \"ironic-operator-controller-manager-99b499f4-km4bs\" (UID: \"8a902288-c5fa-4106-89dc-dad1ed8fff47\") " pod="openstack-operators/ironic-operator-controller-manager-99b499f4-km4bs"
Nov 22 07:34:08 crc kubenswrapper[4853]: I1122 07:34:08.055703 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-99b499f4-km4bs"
Nov 22 07:34:08 crc kubenswrapper[4853]: I1122 07:34:08.060182 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-x27b5"
Nov 22 07:34:08 crc kubenswrapper[4853]: I1122 07:34:08.071293 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-54b5986bb8-hbqk8"]
Nov 22 07:34:08 crc kubenswrapper[4853]: I1122 07:34:08.073461 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jghnq\" (UniqueName: \"kubernetes.io/projected/674f240d-b9b1-488a-b6bf-d6231529cf4d-kube-api-access-jghnq\") pod \"infra-operator-controller-manager-6dd8864d7c-4wjmn\" (UID: \"674f240d-b9b1-488a-b6bf-d6231529cf4d\") " pod="openstack-operators/infra-operator-controller-manager-6dd8864d7c-4wjmn"
Nov 22 07:34:08 crc kubenswrapper[4853]: I1122 07:34:08.074277 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-54b5986bb8-hbqk8"
Nov 22 07:34:08 crc kubenswrapper[4853]: I1122 07:34:08.086618 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-z2c6k"
Nov 22 07:34:08 crc kubenswrapper[4853]: I1122 07:34:08.110575 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-78bd47f458-mxwrm"]
Nov 22 07:34:08 crc kubenswrapper[4853]: I1122 07:34:08.112951 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-78bd47f458-mxwrm"
Nov 22 07:34:08 crc kubenswrapper[4853]: I1122 07:34:08.117675 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t48dj\" (UniqueName: \"kubernetes.io/projected/524f1308-44b0-4603-b612-eb02450cd46d-kube-api-access-t48dj\") pod \"keystone-operator-controller-manager-7454b96578-h5674\" (UID: \"524f1308-44b0-4603-b612-eb02450cd46d\") " pod="openstack-operators/keystone-operator-controller-manager-7454b96578-h5674"
Nov 22 07:34:08 crc kubenswrapper[4853]: I1122 07:34:08.117811 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dhxxn\" (UniqueName: \"kubernetes.io/projected/798dacb1-9a2f-4f77-a55e-1f005447a5ec-kube-api-access-dhxxn\") pod \"manila-operator-controller-manager-58f887965d-f4bvx\" (UID: \"798dacb1-9a2f-4f77-a55e-1f005447a5ec\") " pod="openstack-operators/manila-operator-controller-manager-58f887965d-f4bvx"
Nov 22 07:34:08 crc kubenswrapper[4853]: I1122 07:34:08.117893 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vgchb\" (UniqueName: \"kubernetes.io/projected/d1a5f3b8-6d7d-4955-9973-c743f0b16dc5-kube-api-access-vgchb\") pod \"mariadb-operator-controller-manager-54b5986bb8-hbqk8\" (UID: \"d1a5f3b8-6d7d-4955-9973-c743f0b16dc5\") " pod="openstack-operators/mariadb-operator-controller-manager-54b5986bb8-hbqk8"
Nov 22 07:34:08 crc kubenswrapper[4853]: I1122 07:34:08.120783 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-t98gh"
Nov 22 07:34:08 crc kubenswrapper[4853]: I1122 07:34:08.133367 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-58f887965d-f4bvx"]
Nov 22 07:34:08 crc kubenswrapper[4853]: I1122 07:34:08.140287 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t48dj\" (UniqueName: \"kubernetes.io/projected/524f1308-44b0-4603-b612-eb02450cd46d-kube-api-access-t48dj\") pod \"keystone-operator-controller-manager-7454b96578-h5674\" (UID: \"524f1308-44b0-4603-b612-eb02450cd46d\") " pod="openstack-operators/keystone-operator-controller-manager-7454b96578-h5674"
Nov 22 07:34:08 crc kubenswrapper[4853]: I1122 07:34:08.145266 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-cfbb9c588-gwqp2"]
Nov 22 07:34:08 crc kubenswrapper[4853]: I1122 07:34:08.147010 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-cfbb9c588-gwqp2"
Nov 22 07:34:08 crc kubenswrapper[4853]: I1122 07:34:08.157201 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-fjvxz"
Nov 22 07:34:08 crc kubenswrapper[4853]: I1122 07:34:08.157879 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-54b5986bb8-hbqk8"]
Nov 22 07:34:08 crc kubenswrapper[4853]: I1122 07:34:08.201497 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-78bd47f458-mxwrm"]
Nov 22 07:34:08 crc kubenswrapper[4853]: I1122 07:34:08.238179 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dhxxn\" (UniqueName: \"kubernetes.io/projected/798dacb1-9a2f-4f77-a55e-1f005447a5ec-kube-api-access-dhxxn\") pod \"manila-operator-controller-manager-58f887965d-f4bvx\" (UID: \"798dacb1-9a2f-4f77-a55e-1f005447a5ec\") " pod="openstack-operators/manila-operator-controller-manager-58f887965d-f4bvx"
Nov 22 07:34:08 crc kubenswrapper[4853]: I1122 07:34:08.249447 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7mt7b\" (UniqueName: \"kubernetes.io/projected/51d7517d-674b-4d91-bb05-89e11ce77ee8-kube-api-access-7mt7b\") pod \"nova-operator-controller-manager-cfbb9c588-gwqp2\" (UID: \"51d7517d-674b-4d91-bb05-89e11ce77ee8\") " pod="openstack-operators/nova-operator-controller-manager-cfbb9c588-gwqp2"
Nov 22 07:34:08 crc kubenswrapper[4853]: I1122 07:34:08.247813 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-54cfbf4c7d-vsftr"]
Nov 22 07:34:08 crc kubenswrapper[4853]: I1122 07:34:08.252471 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-598f69df5d-l654j"
Nov 22 07:34:08 crc kubenswrapper[4853]: I1122 07:34:08.277932 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vgchb\" (UniqueName: \"kubernetes.io/projected/d1a5f3b8-6d7d-4955-9973-c743f0b16dc5-kube-api-access-vgchb\") pod \"mariadb-operator-controller-manager-54b5986bb8-hbqk8\" (UID: \"d1a5f3b8-6d7d-4955-9973-c743f0b16dc5\") " pod="openstack-operators/mariadb-operator-controller-manager-54b5986bb8-hbqk8"
Nov 22 07:34:08 crc kubenswrapper[4853]: I1122 07:34:08.278058 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7vd4f\" (UniqueName: \"kubernetes.io/projected/05971821-7368-4352-8955-bd9432958c9b-kube-api-access-7vd4f\") pod \"neutron-operator-controller-manager-78bd47f458-mxwrm\" (UID: \"05971821-7368-4352-8955-bd9432958c9b\") " pod="openstack-operators/neutron-operator-controller-manager-78bd47f458-mxwrm"
Nov 22 07:34:08 crc kubenswrapper[4853]: I1122 07:34:08.291326 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dhxxn\" (UniqueName: \"kubernetes.io/projected/798dacb1-9a2f-4f77-a55e-1f005447a5ec-kube-api-access-dhxxn\") pod \"manila-operator-controller-manager-58f887965d-f4bvx\" (UID: \"798dacb1-9a2f-4f77-a55e-1f005447a5ec\") " pod="openstack-operators/manila-operator-controller-manager-58f887965d-f4bvx"
Nov 22 07:34:08 crc kubenswrapper[4853]: I1122 07:34:08.307078 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-56f54d6746-ww42j"
Nov 22 07:34:08 crc kubenswrapper[4853]: I1122 07:34:08.362219 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vgchb\" (UniqueName: \"kubernetes.io/projected/d1a5f3b8-6d7d-4955-9973-c743f0b16dc5-kube-api-access-vgchb\") pod \"mariadb-operator-controller-manager-54b5986bb8-hbqk8\" (UID: \"d1a5f3b8-6d7d-4955-9973-c743f0b16dc5\") " pod="openstack-operators/mariadb-operator-controller-manager-54b5986bb8-hbqk8"
Nov 22 07:34:08 crc kubenswrapper[4853]: I1122 07:34:08.371458 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-7454b96578-h5674"
Nov 22 07:34:08 crc kubenswrapper[4853]: I1122 07:34:08.382726 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-cfbb9c588-gwqp2"]
Nov 22 07:34:08 crc kubenswrapper[4853]: I1122 07:34:08.382790 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-54cfbf4c7d-vsftr"]
Nov 22 07:34:08 crc kubenswrapper[4853]: I1122 07:34:08.382898 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-54cfbf4c7d-vsftr"
Nov 22 07:34:08 crc kubenswrapper[4853]: I1122 07:34:08.386321 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-fnf7c"
Nov 22 07:34:08 crc kubenswrapper[4853]: I1122 07:34:08.409171 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-58f887965d-f4bvx"
Nov 22 07:34:08 crc kubenswrapper[4853]: I1122 07:34:08.415988 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7vd4f\" (UniqueName: \"kubernetes.io/projected/05971821-7368-4352-8955-bd9432958c9b-kube-api-access-7vd4f\") pod \"neutron-operator-controller-manager-78bd47f458-mxwrm\" (UID: \"05971821-7368-4352-8955-bd9432958c9b\") " pod="openstack-operators/neutron-operator-controller-manager-78bd47f458-mxwrm"
Nov 22 07:34:08 crc kubenswrapper[4853]: I1122 07:34:08.416697 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7mt7b\" (UniqueName: \"kubernetes.io/projected/51d7517d-674b-4d91-bb05-89e11ce77ee8-kube-api-access-7mt7b\") pod \"nova-operator-controller-manager-cfbb9c588-gwqp2\" (UID: \"51d7517d-674b-4d91-bb05-89e11ce77ee8\") " pod="openstack-operators/nova-operator-controller-manager-cfbb9c588-gwqp2"
Nov 22 07:34:08 crc kubenswrapper[4853]: I1122 07:34:08.446260 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7vd4f\" (UniqueName: \"kubernetes.io/projected/05971821-7368-4352-8955-bd9432958c9b-kube-api-access-7vd4f\") pod \"neutron-operator-controller-manager-78bd47f458-mxwrm\" (UID: \"05971821-7368-4352-8955-bd9432958c9b\") " pod="openstack-operators/neutron-operator-controller-manager-78bd47f458-mxwrm"
Nov 22 07:34:08 crc kubenswrapper[4853]: I1122 07:34:08.453611 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7mt7b\" (UniqueName: \"kubernetes.io/projected/51d7517d-674b-4d91-bb05-89e11ce77ee8-kube-api-access-7mt7b\") pod \"nova-operator-controller-manager-cfbb9c588-gwqp2\" (UID: \"51d7517d-674b-4d91-bb05-89e11ce77ee8\") " pod="openstack-operators/nova-operator-controller-manager-cfbb9c588-gwqp2"
Nov 22 07:34:08 crc kubenswrapper[4853]: I1122 07:34:08.482716 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-54b5986bb8-hbqk8"
Nov 22 07:34:08 crc kubenswrapper[4853]: I1122 07:34:08.502070 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-54fc5f65b7-cm2jj"]
Nov 22 07:34:08 crc kubenswrapper[4853]: I1122 07:34:08.506526 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-54fc5f65b7-cm2jj"
Nov 22 07:34:08 crc kubenswrapper[4853]: I1122 07:34:08.511448 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-lnfp8"
Nov 22 07:34:08 crc kubenswrapper[4853]: I1122 07:34:08.520415 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-8c7444f48-dgc8d"]
Nov 22 07:34:08 crc kubenswrapper[4853]: I1122 07:34:08.528602 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-8c7444f48-dgc8d"
Nov 22 07:34:08 crc kubenswrapper[4853]: I1122 07:34:08.528925 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/674f240d-b9b1-488a-b6bf-d6231529cf4d-cert\") pod \"infra-operator-controller-manager-6dd8864d7c-4wjmn\" (UID: \"674f240d-b9b1-488a-b6bf-d6231529cf4d\") " pod="openstack-operators/infra-operator-controller-manager-6dd8864d7c-4wjmn"
Nov 22 07:34:08 crc kubenswrapper[4853]: I1122 07:34:08.529075 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-65f7z\" (UniqueName: \"kubernetes.io/projected/6e35d4a2-bb72-4396-83e0-4a9ba4d9274b-kube-api-access-65f7z\") pod \"octavia-operator-controller-manager-54cfbf4c7d-vsftr\" (UID: \"6e35d4a2-bb72-4396-83e0-4a9ba4d9274b\") " pod="openstack-operators/octavia-operator-controller-manager-54cfbf4c7d-vsftr"
Nov 22 07:34:08 crc kubenswrapper[4853]: I1122 07:34:08.538023 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-78bd47f458-mxwrm"
Nov 22 07:34:08 crc kubenswrapper[4853]: I1122 07:34:08.538796 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/674f240d-b9b1-488a-b6bf-d6231529cf4d-cert\") pod \"infra-operator-controller-manager-6dd8864d7c-4wjmn\" (UID: \"674f240d-b9b1-488a-b6bf-d6231529cf4d\") " pod="openstack-operators/infra-operator-controller-manager-6dd8864d7c-4wjmn"
Nov 22 07:34:08 crc kubenswrapper[4853]: I1122 07:34:08.540576 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert"
Nov 22 07:34:08 crc kubenswrapper[4853]: I1122 07:34:08.541202 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-c49dl"
Nov 22 07:34:08 crc kubenswrapper[4853]: I1122 07:34:08.570559 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-54fc5f65b7-cm2jj"]
Nov 22 07:34:08 crc kubenswrapper[4853]: I1122 07:34:08.581947 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-8c7444f48-dgc8d"]
Nov 22 07:34:08 crc kubenswrapper[4853]: I1122 07:34:08.590971 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-cfbb9c588-gwqp2"
Nov 22 07:34:08 crc kubenswrapper[4853]: I1122 07:34:08.607620 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-d656998f4-fgtmg"]
Nov 22 07:34:08 crc kubenswrapper[4853]: I1122 07:34:08.611053 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-d656998f4-fgtmg"
Nov 22 07:34:08 crc kubenswrapper[4853]: I1122 07:34:08.613927 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-k9kpw"
Nov 22 07:34:08 crc kubenswrapper[4853]: I1122 07:34:08.634023 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b797b8dff-5lw59"]
Nov 22 07:34:08 crc kubenswrapper[4853]: I1122 07:34:08.634799 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-65f7z\" (UniqueName: \"kubernetes.io/projected/6e35d4a2-bb72-4396-83e0-4a9ba4d9274b-kube-api-access-65f7z\") pod \"octavia-operator-controller-manager-54cfbf4c7d-vsftr\" (UID: \"6e35d4a2-bb72-4396-83e0-4a9ba4d9274b\") " pod="openstack-operators/octavia-operator-controller-manager-54cfbf4c7d-vsftr"
Nov 22 07:34:08 crc kubenswrapper[4853]: I1122 07:34:08.634848 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wqnq4\" (UniqueName: \"kubernetes.io/projected/ac1b2ef9-7ff0-4a11-b8c6-89f6ed7c0dd4-kube-api-access-wqnq4\") pod \"ovn-operator-controller-manager-54fc5f65b7-cm2jj\" (UID: \"ac1b2ef9-7ff0-4a11-b8c6-89f6ed7c0dd4\") " pod="openstack-operators/ovn-operator-controller-manager-54fc5f65b7-cm2jj"
Nov 22 07:34:08 crc kubenswrapper[4853]: I1122 07:34:08.634898 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58s45\" (UniqueName: \"kubernetes.io/projected/8774b599-7d20-4c58-9441-821beca48884-kube-api-access-58s45\") pod \"openstack-baremetal-operator-controller-manager-8c7444f48-dgc8d\" (UID: \"8774b599-7d20-4c58-9441-821beca48884\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-8c7444f48-dgc8d"
Nov 22 07:34:08 crc kubenswrapper[4853]: I1122 07:34:08.634950 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8774b599-7d20-4c58-9441-821beca48884-cert\") pod \"openstack-baremetal-operator-controller-manager-8c7444f48-dgc8d\" (UID: \"8774b599-7d20-4c58-9441-821beca48884\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-8c7444f48-dgc8d"
Nov 22 07:34:08 crc kubenswrapper[4853]: I1122 07:34:08.635695 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5b797b8dff-5lw59"
Nov 22 07:34:08 crc kubenswrapper[4853]: I1122 07:34:08.640995 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-74tv2"
Nov 22 07:34:08 crc kubenswrapper[4853]: I1122 07:34:08.641585 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-6dd8864d7c-4wjmn"
Nov 22 07:34:08 crc kubenswrapper[4853]: I1122 07:34:08.646903 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-d656998f4-fgtmg"]
Nov 22 07:34:08 crc kubenswrapper[4853]: I1122 07:34:08.658690 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-b477b5977-7gkdk"]
Nov 22 07:34:08 crc kubenswrapper[4853]: I1122 07:34:08.662729 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-b477b5977-7gkdk"
Nov 22 07:34:08 crc kubenswrapper[4853]: I1122 07:34:08.662804 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-65f7z\" (UniqueName: \"kubernetes.io/projected/6e35d4a2-bb72-4396-83e0-4a9ba4d9274b-kube-api-access-65f7z\") pod \"octavia-operator-controller-manager-54cfbf4c7d-vsftr\" (UID: \"6e35d4a2-bb72-4396-83e0-4a9ba4d9274b\") " pod="openstack-operators/octavia-operator-controller-manager-54cfbf4c7d-vsftr"
Nov 22 07:34:08 crc kubenswrapper[4853]: I1122 07:34:08.666122 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-d9clt"
Nov 22 07:34:08 crc kubenswrapper[4853]: I1122 07:34:08.670218 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b797b8dff-5lw59"]
Nov 22 07:34:08 crc kubenswrapper[4853]: I1122 07:34:08.679025 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-b477b5977-7gkdk"]
Nov 22 07:34:08 crc kubenswrapper[4853]: I1122 07:34:08.710582 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-b4c496f69-wmm95"]
Nov 22 07:34:08 crc kubenswrapper[4853]: I1122 07:34:08.712193 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-b4c496f69-wmm95"
Nov 22 07:34:08 crc kubenswrapper[4853]: I1122 07:34:08.716658 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-d6p7m"
Nov 22 07:34:08 crc kubenswrapper[4853]: I1122 07:34:08.736659 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wqnq4\" (UniqueName: \"kubernetes.io/projected/ac1b2ef9-7ff0-4a11-b8c6-89f6ed7c0dd4-kube-api-access-wqnq4\") pod \"ovn-operator-controller-manager-54fc5f65b7-cm2jj\" (UID: \"ac1b2ef9-7ff0-4a11-b8c6-89f6ed7c0dd4\") " pod="openstack-operators/ovn-operator-controller-manager-54fc5f65b7-cm2jj"
Nov 22 07:34:08 crc kubenswrapper[4853]: I1122 07:34:08.736740 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lkdns\" (UniqueName: \"kubernetes.io/projected/242375a1-78b5-4540-9e93-ad4ef21b67c8-kube-api-access-lkdns\") pod \"placement-operator-controller-manager-5b797b8dff-5lw59\" (UID: \"242375a1-78b5-4540-9e93-ad4ef21b67c8\") " pod="openstack-operators/placement-operator-controller-manager-5b797b8dff-5lw59"
Nov 22 07:34:08 crc kubenswrapper[4853]: I1122 07:34:08.736791 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-58s45\" (UniqueName: \"kubernetes.io/projected/8774b599-7d20-4c58-9441-821beca48884-kube-api-access-58s45\") pod \"openstack-baremetal-operator-controller-manager-8c7444f48-dgc8d\" (UID: \"8774b599-7d20-4c58-9441-821beca48884\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-8c7444f48-dgc8d"
Nov 22 07:34:08 crc kubenswrapper[4853]: I1122 07:34:08.736815 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kzc4z\" (UniqueName: \"kubernetes.io/projected/e1bbcb38-bfc4-4b92-9fa2-5bb3cebfcd5e-kube-api-access-kzc4z\") pod \"telemetry-operator-controller-manager-b477b5977-7gkdk\" (UID: \"e1bbcb38-bfc4-4b92-9fa2-5bb3cebfcd5e\") " pod="openstack-operators/telemetry-operator-controller-manager-b477b5977-7gkdk"
Nov 22 07:34:08 crc kubenswrapper[4853]: I1122 07:34:08.736868 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8774b599-7d20-4c58-9441-821beca48884-cert\") pod \"openstack-baremetal-operator-controller-manager-8c7444f48-dgc8d\" (UID: \"8774b599-7d20-4c58-9441-821beca48884\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-8c7444f48-dgc8d"
Nov 22 07:34:08 crc kubenswrapper[4853]: I1122 07:34:08.736899 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bkxw9\" (UniqueName: \"kubernetes.io/projected/c82379b6-72f2-4474-8714-64f9e6ea7bf7-kube-api-access-bkxw9\") pod \"swift-operator-controller-manager-d656998f4-fgtmg\" (UID: \"c82379b6-72f2-4474-8714-64f9e6ea7bf7\") " pod="openstack-operators/swift-operator-controller-manager-d656998f4-fgtmg"
Nov 22 07:34:08 crc kubenswrapper[4853]: I1122 07:34:08.746091 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8774b599-7d20-4c58-9441-821beca48884-cert\") pod \"openstack-baremetal-operator-controller-manager-8c7444f48-dgc8d\" (UID: \"8774b599-7d20-4c58-9441-821beca48884\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-8c7444f48-dgc8d"
Nov 22 07:34:08 crc kubenswrapper[4853]: I1122 07:34:08.748557 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-8c6448b9f-fcl7j"]
Nov 22 07:34:08 crc kubenswrapper[4853]: I1122 07:34:08.750426 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-8c6448b9f-fcl7j"
Nov 22 07:34:08 crc kubenswrapper[4853]: I1122 07:34:08.760114 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-vq9gz"
Nov 22 07:34:08 crc kubenswrapper[4853]: I1122 07:34:08.761944 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-54cfbf4c7d-vsftr"
Nov 22 07:34:08 crc kubenswrapper[4853]: I1122 07:34:08.771813 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wqnq4\" (UniqueName: \"kubernetes.io/projected/ac1b2ef9-7ff0-4a11-b8c6-89f6ed7c0dd4-kube-api-access-wqnq4\") pod \"ovn-operator-controller-manager-54fc5f65b7-cm2jj\" (UID: \"ac1b2ef9-7ff0-4a11-b8c6-89f6ed7c0dd4\") " pod="openstack-operators/ovn-operator-controller-manager-54fc5f65b7-cm2jj"
Nov 22 07:34:08 crc kubenswrapper[4853]: I1122 07:34:08.776707 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-58s45\" (UniqueName: \"kubernetes.io/projected/8774b599-7d20-4c58-9441-821beca48884-kube-api-access-58s45\") pod \"openstack-baremetal-operator-controller-manager-8c7444f48-dgc8d\" (UID: \"8774b599-7d20-4c58-9441-821beca48884\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-8c7444f48-dgc8d"
Nov 22 07:34:08 crc kubenswrapper[4853]: I1122 07:34:08.796301 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-8c6448b9f-fcl7j"]
Nov 22 07:34:08 crc kubenswrapper[4853]: I1122 07:34:08.824718 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-b4c496f69-wmm95"]
Nov 22 07:34:08 crc kubenswrapper[4853]: I1122 07:34:08.838412 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hp2rd\" (UniqueName: \"kubernetes.io/projected/67499981-fc7e-4b6d-ab2b-46b528a165a5-kube-api-access-hp2rd\") pod \"watcher-operator-controller-manager-8c6448b9f-fcl7j\" (UID: \"67499981-fc7e-4b6d-ab2b-46b528a165a5\") " pod="openstack-operators/watcher-operator-controller-manager-8c6448b9f-fcl7j"
Nov 22 07:34:08 crc kubenswrapper[4853]: I1122 07:34:08.838470 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ctns\" (UniqueName: \"kubernetes.io/projected/46e379f1-feb8-460a-8448-066bb8f54330-kube-api-access-7ctns\") pod \"test-operator-controller-manager-b4c496f69-wmm95\" (UID: \"46e379f1-feb8-460a-8448-066bb8f54330\") " pod="openstack-operators/test-operator-controller-manager-b4c496f69-wmm95"
Nov 22 07:34:08 crc kubenswrapper[4853]: I1122 07:34:08.838496 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lkdns\" (UniqueName: \"kubernetes.io/projected/242375a1-78b5-4540-9e93-ad4ef21b67c8-kube-api-access-lkdns\") pod \"placement-operator-controller-manager-5b797b8dff-5lw59\" (UID: \"242375a1-78b5-4540-9e93-ad4ef21b67c8\") " pod="openstack-operators/placement-operator-controller-manager-5b797b8dff-5lw59"
Nov 22 07:34:08 crc kubenswrapper[4853]: I1122 07:34:08.838532 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kzc4z\" (UniqueName: \"kubernetes.io/projected/e1bbcb38-bfc4-4b92-9fa2-5bb3cebfcd5e-kube-api-access-kzc4z\") pod \"telemetry-operator-controller-manager-b477b5977-7gkdk\" (UID: \"e1bbcb38-bfc4-4b92-9fa2-5bb3cebfcd5e\") " pod="openstack-operators/telemetry-operator-controller-manager-b477b5977-7gkdk"
Nov 22 07:34:08 crc kubenswrapper[4853]: I1122 07:34:08.838605 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bkxw9\" (UniqueName: \"kubernetes.io/projected/c82379b6-72f2-4474-8714-64f9e6ea7bf7-kube-api-access-bkxw9\") pod \"swift-operator-controller-manager-d656998f4-fgtmg\" (UID: \"c82379b6-72f2-4474-8714-64f9e6ea7bf7\") " pod="openstack-operators/swift-operator-controller-manager-d656998f4-fgtmg"
Nov 22 07:34:08 crc kubenswrapper[4853]: I1122 07:34:08.862170 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kzc4z\" (UniqueName: \"kubernetes.io/projected/e1bbcb38-bfc4-4b92-9fa2-5bb3cebfcd5e-kube-api-access-kzc4z\") pod \"telemetry-operator-controller-manager-b477b5977-7gkdk\" (UID: \"e1bbcb38-bfc4-4b92-9fa2-5bb3cebfcd5e\") " pod="openstack-operators/telemetry-operator-controller-manager-b477b5977-7gkdk"
Nov 22 07:34:08 crc kubenswrapper[4853]: I1122 07:34:08.862176 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lkdns\" (UniqueName: \"kubernetes.io/projected/242375a1-78b5-4540-9e93-ad4ef21b67c8-kube-api-access-lkdns\") pod \"placement-operator-controller-manager-5b797b8dff-5lw59\" (UID: \"242375a1-78b5-4540-9e93-ad4ef21b67c8\") " pod="openstack-operators/placement-operator-controller-manager-5b797b8dff-5lw59"
Nov 22 07:34:08 crc kubenswrapper[4853]: I1122 07:34:08.863116 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-54fc5f65b7-cm2jj"
Nov 22 07:34:08 crc kubenswrapper[4853]: I1122 07:34:08.867997 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bkxw9\" (UniqueName: \"kubernetes.io/projected/c82379b6-72f2-4474-8714-64f9e6ea7bf7-kube-api-access-bkxw9\") pod \"swift-operator-controller-manager-d656998f4-fgtmg\" (UID: \"c82379b6-72f2-4474-8714-64f9e6ea7bf7\") " pod="openstack-operators/swift-operator-controller-manager-d656998f4-fgtmg"
Nov 22 07:34:08 crc kubenswrapper[4853]: I1122 07:34:08.911811 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-8c7444f48-dgc8d"
Nov 22 07:34:08 crc kubenswrapper[4853]: I1122 07:34:08.926376 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-88b7b5d44-zjv7m"]
Nov 22 07:34:08 crc kubenswrapper[4853]: I1122 07:34:08.931907 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-88b7b5d44-zjv7m"
Nov 22 07:34:08 crc kubenswrapper[4853]: W1122 07:34:08.938317 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf0fa8b73_0604_41c5_9dfd_ea2f3ca36c43.slice/crio-c26828b7d982728da21a6e6cac1b1323839a280334fc3983eb0960c3c818f701 WatchSource:0}: Error finding container c26828b7d982728da21a6e6cac1b1323839a280334fc3983eb0960c3c818f701: Status 404 returned error can't find the container with id c26828b7d982728da21a6e6cac1b1323839a280334fc3983eb0960c3c818f701
Nov 22 07:34:08 crc kubenswrapper[4853]: I1122 07:34:08.938546 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert"
Nov 22 07:34:08 crc kubenswrapper[4853]: I1122 07:34:08.938587 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-vcps5"
Nov 22 07:34:08 crc kubenswrapper[4853]: I1122 07:34:08.940303 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hp2rd\" (UniqueName: \"kubernetes.io/projected/67499981-fc7e-4b6d-ab2b-46b528a165a5-kube-api-access-hp2rd\") pod \"watcher-operator-controller-manager-8c6448b9f-fcl7j\" (UID: \"67499981-fc7e-4b6d-ab2b-46b528a165a5\") " pod="openstack-operators/watcher-operator-controller-manager-8c6448b9f-fcl7j"
Nov 22 07:34:08 crc kubenswrapper[4853]: I1122 07:34:08.940353 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7ctns\" (UniqueName: \"kubernetes.io/projected/46e379f1-feb8-460a-8448-066bb8f54330-kube-api-access-7ctns\") pod \"test-operator-controller-manager-b4c496f69-wmm95\" (UID: \"46e379f1-feb8-460a-8448-066bb8f54330\") " pod="openstack-operators/test-operator-controller-manager-b4c496f69-wmm95"
Nov 22 07:34:08 crc kubenswrapper[4853]: I1122 07:34:08.947223 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-88b7b5d44-zjv7m"]
Nov 22 07:34:08 crc kubenswrapper[4853]: I1122 07:34:08.966673 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7ctns\" (UniqueName: \"kubernetes.io/projected/46e379f1-feb8-460a-8448-066bb8f54330-kube-api-access-7ctns\") pod \"test-operator-controller-manager-b4c496f69-wmm95\" (UID: \"46e379f1-feb8-460a-8448-066bb8f54330\") " pod="openstack-operators/test-operator-controller-manager-b4c496f69-wmm95"
Nov 22 07:34:08 crc kubenswrapper[4853]: I1122 07:34:08.966739 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hp2rd\" (UniqueName: \"kubernetes.io/projected/67499981-fc7e-4b6d-ab2b-46b528a165a5-kube-api-access-hp2rd\") pod \"watcher-operator-controller-manager-8c6448b9f-fcl7j\" (UID: \"67499981-fc7e-4b6d-ab2b-46b528a165a5\") " pod="openstack-operators/watcher-operator-controller-manager-8c6448b9f-fcl7j"
Nov 22 07:34:08 crc kubenswrapper[4853]: I1122 07:34:08.990616 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-fdt65"]
Nov 22 07:34:08 crc kubenswrapper[4853]: I1122 07:34:08.992400 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-fdt65"
Nov 22 07:34:08 crc kubenswrapper[4853]: I1122 07:34:08.996793 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-mhhpx"
Nov 22 07:34:09 crc kubenswrapper[4853]: I1122 07:34:09.001995 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-fdt65"]
Nov 22 07:34:09 crc kubenswrapper[4853]: I1122 07:34:09.041926 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b41bf5e6-516e-40b8-9628-bb2f056af5ad-cert\") pod \"openstack-operator-controller-manager-88b7b5d44-zjv7m\" (UID: \"b41bf5e6-516e-40b8-9628-bb2f056af5ad\") " pod="openstack-operators/openstack-operator-controller-manager-88b7b5d44-zjv7m"
Nov 22 07:34:09 crc kubenswrapper[4853]: I1122 07:34:09.041986 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvxpg\" (UniqueName: \"kubernetes.io/projected/131c2522-8c48-4c18-9a39-99a66b87b9ed-kube-api-access-wvxpg\") pod \"rabbitmq-cluster-operator-manager-5f97d8c699-fdt65\" (UID: \"131c2522-8c48-4c18-9a39-99a66b87b9ed\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-fdt65"
Nov 22 07:34:09 crc kubenswrapper[4853]: I1122 07:34:09.042071 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rnwwb\" (UniqueName: \"kubernetes.io/projected/b41bf5e6-516e-40b8-9628-bb2f056af5ad-kube-api-access-rnwwb\") pod \"openstack-operator-controller-manager-88b7b5d44-zjv7m\" (UID: \"b41bf5e6-516e-40b8-9628-bb2f056af5ad\") " pod="openstack-operators/openstack-operator-controller-manager-88b7b5d44-zjv7m"
Nov 22 07:34:09 crc kubenswrapper[4853]: I1122 07:34:09.042506 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-d656998f4-fgtmg"
Nov 22 07:34:09 crc kubenswrapper[4853]: I1122 07:34:09.069532 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5b797b8dff-5lw59"
Nov 22 07:34:09 crc kubenswrapper[4853]: I1122 07:34:09.138359 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-b477b5977-7gkdk"
Nov 22 07:34:09 crc kubenswrapper[4853]: I1122 07:34:09.148183 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b41bf5e6-516e-40b8-9628-bb2f056af5ad-cert\") pod \"openstack-operator-controller-manager-88b7b5d44-zjv7m\" (UID: \"b41bf5e6-516e-40b8-9628-bb2f056af5ad\") " pod="openstack-operators/openstack-operator-controller-manager-88b7b5d44-zjv7m"
Nov 22 07:34:09 crc kubenswrapper[4853]: I1122 07:34:09.148284 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wvxpg\" (UniqueName: \"kubernetes.io/projected/131c2522-8c48-4c18-9a39-99a66b87b9ed-kube-api-access-wvxpg\") pod \"rabbitmq-cluster-operator-manager-5f97d8c699-fdt65\" (UID: \"131c2522-8c48-4c18-9a39-99a66b87b9ed\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-fdt65"
Nov 22 07:34:09 crc kubenswrapper[4853]: I1122 07:34:09.148494 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rnwwb\" (UniqueName: \"kubernetes.io/projected/b41bf5e6-516e-40b8-9628-bb2f056af5ad-kube-api-access-rnwwb\") pod \"openstack-operator-controller-manager-88b7b5d44-zjv7m\" (UID: \"b41bf5e6-516e-40b8-9628-bb2f056af5ad\") " pod="openstack-operators/openstack-operator-controller-manager-88b7b5d44-zjv7m"
Nov 22 07:34:09 crc kubenswrapper[4853]: E1122 07:34:09.150264 4853 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found
Nov 22 07:34:09 crc kubenswrapper[4853]: E1122 07:34:09.150324 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b41bf5e6-516e-40b8-9628-bb2f056af5ad-cert podName:b41bf5e6-516e-40b8-9628-bb2f056af5ad nodeName:}" failed. No retries permitted until 2025-11-22 07:34:09.65029804 +0000 UTC m=+1448.490920666 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/b41bf5e6-516e-40b8-9628-bb2f056af5ad-cert") pod "openstack-operator-controller-manager-88b7b5d44-zjv7m" (UID: "b41bf5e6-516e-40b8-9628-bb2f056af5ad") : secret "webhook-server-cert" not found
Nov 22 07:34:09 crc kubenswrapper[4853]: I1122 07:34:09.166757 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-b4c496f69-wmm95"
Nov 22 07:34:09 crc kubenswrapper[4853]: I1122 07:34:09.180301 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-6498cbf48f-cjqxx"]
Nov 22 07:34:09 crc kubenswrapper[4853]: I1122 07:34:09.186477 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-75fb479bcc-nf2bz"]
Nov 22 07:34:09 crc kubenswrapper[4853]: I1122 07:34:09.188869 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-8c6448b9f-fcl7j"
Nov 22 07:34:09 crc kubenswrapper[4853]: I1122 07:34:09.189087 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wvxpg\" (UniqueName: \"kubernetes.io/projected/131c2522-8c48-4c18-9a39-99a66b87b9ed-kube-api-access-wvxpg\") pod \"rabbitmq-cluster-operator-manager-5f97d8c699-fdt65\" (UID: \"131c2522-8c48-4c18-9a39-99a66b87b9ed\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-fdt65"
Nov 22 07:34:09 crc kubenswrapper[4853]: I1122 07:34:09.196191 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rnwwb\" (UniqueName: \"kubernetes.io/projected/b41bf5e6-516e-40b8-9628-bb2f056af5ad-kube-api-access-rnwwb\") pod \"openstack-operator-controller-manager-88b7b5d44-zjv7m\" (UID: \"b41bf5e6-516e-40b8-9628-bb2f056af5ad\") " pod="openstack-operators/openstack-operator-controller-manager-88b7b5d44-zjv7m"
Nov 22 07:34:09 crc kubenswrapper[4853]: I1122 07:34:09.244966 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-fdt65"
Nov 22 07:34:09 crc kubenswrapper[4853]: I1122 07:34:09.340357 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-7969689c84-8kl4r"]
Nov 22 07:34:09 crc kubenswrapper[4853]: I1122 07:34:09.465651 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-767ccfd65f-pfmkd"]
Nov 22 07:34:09 crc kubenswrapper[4853]: W1122 07:34:09.635158 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7ed40441_44d2_497f_93e7_d85116790d61.slice/crio-74dea6d58cf6ab5127f15f7b43490e36276af6e6c1b6140305b7e643a1065d57 WatchSource:0}: Error finding container 74dea6d58cf6ab5127f15f7b43490e36276af6e6c1b6140305b7e643a1065d57: Status 404 returned error can't find the container with id 74dea6d58cf6ab5127f15f7b43490e36276af6e6c1b6140305b7e643a1065d57
Nov 22 07:34:09 crc kubenswrapper[4853]: I1122 07:34:09.655180 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-99b499f4-km4bs"]
Nov 22 07:34:09 crc kubenswrapper[4853]: I1122 07:34:09.669021 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-75fb479bcc-nf2bz" event={"ID":"f0fa8b73-0604-41c5-9dfd-ea2f3ca36c43","Type":"ContainerStarted","Data":"c26828b7d982728da21a6e6cac1b1323839a280334fc3983eb0960c3c818f701"}
Nov 22 07:34:09 crc kubenswrapper[4853]: I1122 07:34:09.675115 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-6498cbf48f-cjqxx" event={"ID":"74c3e58c-6a8f-462f-a595-28db25f9e2c5","Type":"ContainerStarted","Data":"49258664a16470dac4a33cf9e2e08cf16f60d1c0a5e62aacab2e63f99753cd81"}
Nov 22 07:34:09 crc kubenswrapper[4853]: I1122 07:34:09.678303 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-7969689c84-8kl4r" event={"ID":"9a6ac321-fea5-4011-9112-60695ec2d996","Type":"ContainerStarted","Data":"3794e6cc3bb5fff71cb4fc9cf9c03f9ed24da349a661e604dfa088e9b1671391"}
Nov 22 07:34:09 crc kubenswrapper[4853]: I1122 07:34:09.679543 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b41bf5e6-516e-40b8-9628-bb2f056af5ad-cert\") pod \"openstack-operator-controller-manager-88b7b5d44-zjv7m\" (UID: \"b41bf5e6-516e-40b8-9628-bb2f056af5ad\") " pod="openstack-operators/openstack-operator-controller-manager-88b7b5d44-zjv7m"
Nov 22 07:34:09 crc kubenswrapper[4853]: I1122 07:34:09.698794 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b41bf5e6-516e-40b8-9628-bb2f056af5ad-cert\") pod \"openstack-operator-controller-manager-88b7b5d44-zjv7m\" (UID: \"b41bf5e6-516e-40b8-9628-bb2f056af5ad\") " pod="openstack-operators/openstack-operator-controller-manager-88b7b5d44-zjv7m"
Nov 22 07:34:09 crc kubenswrapper[4853]: I1122 07:34:09.747261 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-56f54d6746-ww42j"]
Nov 22 07:34:09 crc kubenswrapper[4853]: W1122 07:34:09.758899 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod59095c24_fa32_4f44_b7d0_593b1291cf56.slice/crio-b008a0182677d63605543ddc06499b2b73caf5731644664ce79ef4253b2cf6de WatchSource:0}: Error finding container b008a0182677d63605543ddc06499b2b73caf5731644664ce79ef4253b2cf6de: Status 404 returned error can't find the container with id b008a0182677d63605543ddc06499b2b73caf5731644664ce79ef4253b2cf6de
Nov 22 07:34:09 crc kubenswrapper[4853]: I1122 07:34:09.774461 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7454b96578-h5674"]
Nov 22 07:34:09 crc kubenswrapper[4853]: I1122 07:34:09.802642 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-88b7b5d44-zjv7m"
Nov 22 07:34:10 crc kubenswrapper[4853]: I1122 07:34:10.129665 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-598f69df5d-l654j"]
Nov 22 07:34:10 crc kubenswrapper[4853]: I1122 07:34:10.142481 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-58f887965d-f4bvx"]
Nov 22 07:34:10 crc kubenswrapper[4853]: I1122 07:34:10.170174 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-54b5986bb8-hbqk8"]
Nov 22 07:34:10 crc kubenswrapper[4853]: I1122 07:34:10.192975 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-78bd47f458-mxwrm"]
Nov 22 07:34:10 crc kubenswrapper[4853]: I1122 07:34:10.209043 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-6dd8864d7c-4wjmn"]
Nov 22 07:34:10 crc kubenswrapper[4853]: W1122 07:34:10.252799 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod674f240d_b9b1_488a_b6bf_d6231529cf4d.slice/crio-8aaea90bd71885c87fe79a439c944e84b7c43953254cd528a9c7abb4e4f03e1c WatchSource:0}: Error finding container 8aaea90bd71885c87fe79a439c944e84b7c43953254cd528a9c7abb4e4f03e1c: Status 404 returned error can't find the container with id 8aaea90bd71885c87fe79a439c944e84b7c43953254cd528a9c7abb4e4f03e1c
Nov 22 07:34:10 crc kubenswrapper[4853]: I1122 07:34:10.460501 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-54cfbf4c7d-vsftr"]
Nov 22 07:34:10 crc kubenswrapper[4853]: I1122 07:34:10.487289 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-8c7444f48-dgc8d"]
Nov 22 07:34:10 crc kubenswrapper[4853]: I1122 07:34:10.512670 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-cfbb9c588-gwqp2"]
Nov 22 07:34:10 crc kubenswrapper[4853]: I1122 07:34:10.691418 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7454b96578-h5674" event={"ID":"524f1308-44b0-4603-b612-eb02450cd46d","Type":"ContainerStarted","Data":"204a1147e3e11c098166b95d04bd5f0416d4a60a3002543c64fadc3a74009523"}
Nov 22 07:34:10 crc kubenswrapper[4853]: I1122 07:34:10.696167 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-54b5986bb8-hbqk8" event={"ID":"d1a5f3b8-6d7d-4955-9973-c743f0b16dc5","Type":"ContainerStarted","Data":"9b8b98d9258abf14672ac501d5e082c6d42549752c734a9242ed70d99ad6604a"}
Nov 22 07:34:10 crc kubenswrapper[4853]: I1122 07:34:10.699760 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-767ccfd65f-pfmkd" event={"ID":"7ed40441-44d2-497f-93e7-d85116790d61","Type":"ContainerStarted","Data":"74dea6d58cf6ab5127f15f7b43490e36276af6e6c1b6140305b7e643a1065d57"}
Nov 22 07:34:10 crc kubenswrapper[4853]: I1122 07:34:10.701263 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-58f887965d-f4bvx" event={"ID":"798dacb1-9a2f-4f77-a55e-1f005447a5ec","Type":"ContainerStarted","Data":"a12b5d1fbaa531d8c1b3938177a737ea8e28eea56a2719a255eb0090d88a469e"}
Nov 22 07:34:10 crc kubenswrapper[4853]: I1122 07:34:10.702717 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-8c7444f48-dgc8d" event={"ID":"8774b599-7d20-4c58-9441-821beca48884","Type":"ContainerStarted","Data":"cfdb98465514a86a804cd3ee9e385be34f2eb8ce6bac78f4624ccd61ec627e8d"}
Nov 22 07:34:10 crc kubenswrapper[4853]: I1122 07:34:10.703932 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-598f69df5d-l654j" event={"ID":"511dcee7-13c9-45ca-b12f-3330fb1b14bc","Type":"ContainerStarted","Data":"07add134cd584f6302a584f9fcfe8b117708fbee0a1a1dbc28db7fcde665698f"}
Nov 22 07:34:10 crc kubenswrapper[4853]: I1122 07:34:10.706196 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-cfbb9c588-gwqp2" event={"ID":"51d7517d-674b-4d91-bb05-89e11ce77ee8","Type":"ContainerStarted","Data":"f965d6297a8670009e89fd64c999d40eb90ab1db43a8f736a824eacdbb19819f"}
Nov 22 07:34:10 crc kubenswrapper[4853]: I1122 07:34:10.707805 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-56f54d6746-ww42j" event={"ID":"59095c24-fa32-4f44-b7d0-593b1291cf56","Type":"ContainerStarted","Data":"b008a0182677d63605543ddc06499b2b73caf5731644664ce79ef4253b2cf6de"}
Nov 22 07:34:10 crc kubenswrapper[4853]: I1122 07:34:10.709452 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-78bd47f458-mxwrm" event={"ID":"05971821-7368-4352-8955-bd9432958c9b","Type":"ContainerStarted","Data":"7d717e201fefe98e96cf9a200c85cc5961cbc4f6fc87f30c557346d592dda0c8"}
Nov 22 07:34:10 crc kubenswrapper[4853]: I1122 07:34:10.710940 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-6dd8864d7c-4wjmn" event={"ID":"674f240d-b9b1-488a-b6bf-d6231529cf4d","Type":"ContainerStarted","Data":"8aaea90bd71885c87fe79a439c944e84b7c43953254cd528a9c7abb4e4f03e1c"}
Nov 22 07:34:10 crc kubenswrapper[4853]: I1122 07:34:10.712244 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-54cfbf4c7d-vsftr" event={"ID":"6e35d4a2-bb72-4396-83e0-4a9ba4d9274b","Type":"ContainerStarted","Data":"bf083f5b68e2134b5662bf93ad3ae90c62b6083eb35b0f56e0456a09883a741b"}
Nov 22 07:34:10 crc kubenswrapper[4853]: I1122 07:34:10.717772 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-99b499f4-km4bs" event={"ID":"8a902288-c5fa-4106-89dc-dad1ed8fff47","Type":"ContainerStarted","Data":"bb6ba677c829246f7545837991170fb8ba02c58101028e45224df329e1572f01"}
Nov 22 07:34:11 crc kubenswrapper[4853]: I1122 07:34:11.045944 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-b4c496f69-wmm95"]
Nov 22 07:34:11 crc kubenswrapper[4853]: I1122 07:34:11.060851 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-8c6448b9f-fcl7j"]
Nov 22 07:34:11 crc kubenswrapper[4853]: W1122 07:34:11.063671 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod67499981_fc7e_4b6d_ab2b_46b528a165a5.slice/crio-1eef5e7cbe4b85c4539c2688dddb137ed28b6b4a2054671ae224d3c95797818a WatchSource:0}: Error finding container 1eef5e7cbe4b85c4539c2688dddb137ed28b6b4a2054671ae224d3c95797818a: Status 404 returned error can't find the container with id 1eef5e7cbe4b85c4539c2688dddb137ed28b6b4a2054671ae224d3c95797818a
Nov 22 07:34:11 crc kubenswrapper[4853]: I1122 07:34:11.090377 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-fdt65"]
Nov 22 07:34:11 crc kubenswrapper[4853]: E1122 07:34:11.108155 4853 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:38.102.83.222:5001/openstack-k8s-operators/telemetry-operator:ebbc73a2023d23fddfc72f63cda3380471803e12,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kzc4z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-b477b5977-7gkdk_openstack-operators(e1bbcb38-bfc4-4b92-9fa2-5bb3cebfcd5e): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 22 07:34:11 crc kubenswrapper[4853]: I1122 07:34:11.112367 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-88b7b5d44-zjv7m"] Nov 22 07:34:11 crc kubenswrapper[4853]: E1122 07:34:11.117801 4853 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:5d49d4594c66eda7b151746cc6e1d3c67c0129b4503eeb043a64ae8ec2da6a1b,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wqnq4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-54fc5f65b7-cm2jj_openstack-operators(ac1b2ef9-7ff0-4a11-b8c6-89f6ed7c0dd4): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 22 07:34:11 crc kubenswrapper[4853]: E1122 07:34:11.118720 4853 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:4094e7fc11a33e8e2b6768a053cafaf5b122446d23f9113d43d520cb64e9776c,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-lkdns,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-5b797b8dff-5lw59_openstack-operators(242375a1-78b5-4540-9e93-ad4ef21b67c8): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 22 07:34:11 crc kubenswrapper[4853]: I1122 07:34:11.145129 4853 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-d656998f4-fgtmg"] Nov 22 07:34:11 crc kubenswrapper[4853]: I1122 07:34:11.168729 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-b477b5977-7gkdk"] Nov 22 07:34:11 crc kubenswrapper[4853]: I1122 07:34:11.181806 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-54fc5f65b7-cm2jj"] Nov 22 07:34:11 crc kubenswrapper[4853]: I1122 07:34:11.189425 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b797b8dff-5lw59"] Nov 22 07:34:11 crc kubenswrapper[4853]: E1122 07:34:11.620469 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/ovn-operator-controller-manager-54fc5f65b7-cm2jj" podUID="ac1b2ef9-7ff0-4a11-b8c6-89f6ed7c0dd4" Nov 22 07:34:11 crc kubenswrapper[4853]: E1122 07:34:11.733167 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/telemetry-operator-controller-manager-b477b5977-7gkdk" podUID="e1bbcb38-bfc4-4b92-9fa2-5bb3cebfcd5e" Nov 22 07:34:11 crc kubenswrapper[4853]: I1122 07:34:11.736356 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5b797b8dff-5lw59" event={"ID":"242375a1-78b5-4540-9e93-ad4ef21b67c8","Type":"ContainerStarted","Data":"2f52a795a63fceb9fc5894ccd8aa218db7d5c61183ad99226e436c702fa6e606"} Nov 22 07:34:11 crc kubenswrapper[4853]: I1122 07:34:11.736526 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5b797b8dff-5lw59" event={"ID":"242375a1-78b5-4540-9e93-ad4ef21b67c8","Type":"ContainerStarted","Data":"8b58ea1d086b661faeef5cc115882be5a50a8dc37c4c631c04ef431cd1078a97"} Nov 22 07:34:11 crc kubenswrapper[4853]: I1122 07:34:11.738828 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-d656998f4-fgtmg" event={"ID":"c82379b6-72f2-4474-8714-64f9e6ea7bf7","Type":"ContainerStarted","Data":"8feb59d40cc89126e7346272add6ba4926ccc8425804c72363dba6b2ebb3e90d"} Nov 22 07:34:11 crc kubenswrapper[4853]: I1122 07:34:11.741866 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-88b7b5d44-zjv7m" event={"ID":"b41bf5e6-516e-40b8-9628-bb2f056af5ad","Type":"ContainerStarted","Data":"dfa0d02a062b9d9390340eb987f0924d99cee40c45f163bdf45bb1739d951a64"} Nov 22 07:34:11 crc kubenswrapper[4853]: I1122 07:34:11.741930 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-88b7b5d44-zjv7m" event={"ID":"b41bf5e6-516e-40b8-9628-bb2f056af5ad","Type":"ContainerStarted","Data":"1975011471f58bb1e86bd8726358a1bccbca732f238ee05cbfd9fed6d28c0856"} Nov 22 07:34:11 crc kubenswrapper[4853]: I1122 07:34:11.744208 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-fdt65" event={"ID":"131c2522-8c48-4c18-9a39-99a66b87b9ed","Type":"ContainerStarted","Data":"f6545cd368d531f0b42746995b889ed4543adbfc5696c6aad8a9ffad5f9fdb14"} Nov 22 07:34:11 crc kubenswrapper[4853]: E1122 07:34:11.753567 4853 
Nov 22 07:34:11 crc kubenswrapper[4853]: E1122 07:34:11.753567 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:5d49d4594c66eda7b151746cc6e1d3c67c0129b4503eeb043a64ae8ec2da6a1b\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-54fc5f65b7-cm2jj" podUID="ac1b2ef9-7ff0-4a11-b8c6-89f6ed7c0dd4" Nov 22 07:34:11 crc kubenswrapper[4853]: I1122 07:34:11.763733 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-8c6448b9f-fcl7j" event={"ID":"67499981-fc7e-4b6d-ab2b-46b528a165a5","Type":"ContainerStarted","Data":"1eef5e7cbe4b85c4539c2688dddb137ed28b6b4a2054671ae224d3c95797818a"} Nov 22 07:34:11 crc kubenswrapper[4853]: I1122 07:34:11.763910 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-54fc5f65b7-cm2jj" event={"ID":"ac1b2ef9-7ff0-4a11-b8c6-89f6ed7c0dd4","Type":"ContainerStarted","Data":"8c5b17ceffd499dce20dc3db98e2fd3c4217f28dfe7761ebddea7ec901d20aee"} Nov 22 07:34:11 crc kubenswrapper[4853]: I1122 07:34:11.764055 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-54fc5f65b7-cm2jj" event={"ID":"ac1b2ef9-7ff0-4a11-b8c6-89f6ed7c0dd4","Type":"ContainerStarted","Data":"38d90591577b9638c42ed11e34bdc3dd450b3d096c1bd9cd8cbed3c8c98f93c6"} Nov 22 07:34:11 crc kubenswrapper[4853]: I1122 07:34:11.764173 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-b477b5977-7gkdk" event={"ID":"e1bbcb38-bfc4-4b92-9fa2-5bb3cebfcd5e","Type":"ContainerStarted","Data":"6c2a3f645c4eb6a316aad42932bc5c625bcf4a70a4337f7b53e49a5994808fe4"} Nov 22 07:34:11 crc kubenswrapper[4853]: I1122 07:34:11.764244 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-b477b5977-7gkdk" event={"ID":"e1bbcb38-bfc4-4b92-9fa2-5bb3cebfcd5e","Type":"ContainerStarted","Data":"bf3aff7def5b74aed3bb1cf973dd3142fde33761a426b7f8b484a58371c2a30a"} Nov 22 07:34:11 crc kubenswrapper[4853]: E1122 07:34:11.764828 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.222:5001/openstack-k8s-operators/telemetry-operator:ebbc73a2023d23fddfc72f63cda3380471803e12\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-b477b5977-7gkdk" podUID="e1bbcb38-bfc4-4b92-9fa2-5bb3cebfcd5e" Nov 22 07:34:11 crc kubenswrapper[4853]: I1122 07:34:11.765891 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-b4c496f69-wmm95" event={"ID":"46e379f1-feb8-460a-8448-066bb8f54330","Type":"ContainerStarted","Data":"4dc4bf1f839d0803f034cbf34e08a35528d216d42add637d875e5a7e16b516a0"} Nov 22 07:34:12 crc kubenswrapper[4853]: E1122 07:34:12.188560 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/placement-operator-controller-manager-5b797b8dff-5lw59" podUID="242375a1-78b5-4540-9e93-ad4ef21b67c8"
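The ErrImagePull "pull QPS exceeded" failures above are not registry errors: the kubelet rate-limits image pulls with a token bucket (the registryPullQPS/registryBurst kubelet settings, which default to 5 pulls per second with a burst of 10), and a pull that arrives while the bucket is empty is rejected immediately rather than queued. A minimal sketch of that gate using the same client-go flowcontrol package; pullImage and the loop are illustrative stand-ins for the CRI call:

    // Sketch of the kubelet-style image-pull QPS gate (defaults: qps=5, burst=10).
    package main

    import (
        "errors"
        "fmt"

        "k8s.io/client-go/util/flowcontrol"
    )

    var errQPSExceeded = errors.New("pull QPS exceeded")

    // pullImage is a hypothetical stand-in for the CRI ImageService call.
    func pullImage(limiter flowcontrol.RateLimiter, image string) error {
        // TryAccept never blocks: if no token is available the pull fails
        // right away, which the kubelet surfaces as ErrImagePull.
        if !limiter.TryAccept() {
            return errQPSExceeded
        }
        fmt.Println("pulling", image)
        return nil
    }

    func main() {
        limiter := flowcontrol.NewTokenBucketRateLimiter(5, 10)
        // Starting a few dozen operator pods at once drains the burst quickly.
        for i := 0; i < 12; i++ {
            if err := pullImage(limiter, "quay.io/openstack-k8s-operators/ovn-operator"); err != nil {
                fmt.Println("pull", i, "failed:", err)
            }
        }
    }

After a failed pull the kubelet also does not retry immediately: it records a per-pod/image backoff that starts around 10s and doubles up to a 5m cap, and while the entry is still in backoff the sync loop fails fast with ImagePullBackOff ("Back-off pulling image ..."), which is the alternation visible in the surrounding entries. A rough sketch of that bookkeeping with client-go's Backoff helper (the key string here is illustrative; the kubelet keys on pod and image):

    // Sketch of ImagePullBackOff bookkeeping (10s initial, doubling, 5m cap).
    package main

    import (
        "fmt"
        "time"

        "k8s.io/client-go/util/flowcontrol"
    )

    func main() {
        backoff := flowcontrol.NewBackOff(10*time.Second, 300*time.Second)
        key := "ac1b2ef9-7ff0-4a11-b8c6-89f6ed7c0dd4/ovn-operator" // illustrative

        for attempt := 1; attempt <= 3; attempt++ {
            now := time.Now()
            if backoff.IsInBackOffSinceUpdate(key, now) {
                // Reported as: "Back-off pulling image ..." / ImagePullBackOff.
                fmt.Println("attempt", attempt, "-> ImagePullBackOff")
                continue
            }
            fmt.Println("attempt", attempt, "-> pull attempted (and fails)")
            backoff.Next(key, now) // record the failure; the next window doubles
        }
    }
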
Nov 22 07:34:12 crc kubenswrapper[4853]: I1122 07:34:12.779539 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-88b7b5d44-zjv7m" event={"ID":"b41bf5e6-516e-40b8-9628-bb2f056af5ad","Type":"ContainerStarted","Data":"78dbf4bc7ad715deae5e5d99b7de02dd6d630125575125d12f563860be272a67"} Nov 22 07:34:12 crc kubenswrapper[4853]: E1122 07:34:12.782723 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:5d49d4594c66eda7b151746cc6e1d3c67c0129b4503eeb043a64ae8ec2da6a1b\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-54fc5f65b7-cm2jj" podUID="ac1b2ef9-7ff0-4a11-b8c6-89f6ed7c0dd4" Nov 22 07:34:12 crc kubenswrapper[4853]: E1122 07:34:12.783738 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:4094e7fc11a33e8e2b6768a053cafaf5b122446d23f9113d43d520cb64e9776c\\\"\"" pod="openstack-operators/placement-operator-controller-manager-5b797b8dff-5lw59" podUID="242375a1-78b5-4540-9e93-ad4ef21b67c8" Nov 22 07:34:12 crc kubenswrapper[4853]: E1122 07:34:12.783848 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.222:5001/openstack-k8s-operators/telemetry-operator:ebbc73a2023d23fddfc72f63cda3380471803e12\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-b477b5977-7gkdk" podUID="e1bbcb38-bfc4-4b92-9fa2-5bb3cebfcd5e" Nov 22 07:34:13 crc kubenswrapper[4853]: I1122 07:34:13.787465 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-88b7b5d44-zjv7m" Nov 22 07:34:13 crc kubenswrapper[4853]: I1122 07:34:13.825829 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-88b7b5d44-zjv7m" podStartSLOduration=5.825787477 podStartE2EDuration="5.825787477s" podCreationTimestamp="2025-11-22 07:34:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:34:13.814006347 +0000 UTC m=+1452.654628973" watchObservedRunningTime="2025-11-22 07:34:13.825787477 +0000 UTC m=+1452.666410113" Nov 22 07:34:19 crc kubenswrapper[4853]: I1122 07:34:19.810539 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-88b7b5d44-zjv7m" Nov 22 07:34:31 crc kubenswrapper[4853]: I1122 07:34:31.298101 4853 patch_prober.go:28] interesting pod/machine-config-daemon-fflvd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 07:34:31 crc kubenswrapper[4853]: I1122 07:34:31.299148 4853 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 07:34:31 crc kubenswrapper[4853]: I1122 07:34:31.299227 4853 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" Nov 22
07:34:31 crc kubenswrapper[4853]: I1122 07:34:31.300563 4853 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a94379b7240c320a54475e30e875758eec0fc5f02dfe1040038fbc1ac77b62e7"} pod="openshift-machine-config-operator/machine-config-daemon-fflvd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 22 07:34:31 crc kubenswrapper[4853]: I1122 07:34:31.300645 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" containerName="machine-config-daemon" containerID="cri-o://a94379b7240c320a54475e30e875758eec0fc5f02dfe1040038fbc1ac77b62e7" gracePeriod=600 Nov 22 07:34:37 crc kubenswrapper[4853]: I1122 07:34:33.027250 4853 generic.go:334] "Generic (PLEG): container finished" podID="476c875a-2b87-419a-8042-0ba059620fd8" containerID="a94379b7240c320a54475e30e875758eec0fc5f02dfe1040038fbc1ac77b62e7" exitCode=0 Nov 22 07:34:37 crc kubenswrapper[4853]: I1122 07:34:33.027336 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" event={"ID":"476c875a-2b87-419a-8042-0ba059620fd8","Type":"ContainerDied","Data":"a94379b7240c320a54475e30e875758eec0fc5f02dfe1040038fbc1ac77b62e7"} Nov 22 07:34:37 crc kubenswrapper[4853]: I1122 07:34:33.027665 4853 scope.go:117] "RemoveContainer" containerID="c00f978e65a6d1e77a568c918905dcabf620ebbd24981dc536007d357d44ae2e"
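The machine-config-daemon restart above is ordinary liveness-probe machinery: an HTTP GET against the configured endpoint failed with connection refused, and after the failure threshold (3 consecutive failures by default) the kubelet kills the container, here with the pod's 600s grace period, and starts a replacement. The check itself amounts to roughly the following; the endpoint is taken from the log and the 1s timeout mirrors the probe's TimeoutSeconds:

    // Sketch of the failing HTTP liveness check logged above.
    package main

    import (
        "fmt"
        "net/http"
        "time"
    )

    func probeOnce(url string) error {
        client := &http.Client{Timeout: 1 * time.Second}
        resp, err := client.Get(url)
        if err != nil {
            return err // here: dial tcp 127.0.0.1:8798: connect: connection refused
        }
        defer resp.Body.Close()
        if resp.StatusCode < 200 || resp.StatusCode >= 400 {
            return fmt.Errorf("unhealthy status %d", resp.StatusCode)
        }
        return nil
    }

    func main() {
        if err := probeOnce("http://127.0.0.1:8798/health"); err != nil {
            fmt.Println("Liveness probe failed:", err)
        }
    }
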
Nov 22 07:34:42 crc kubenswrapper[4853]: E1122 07:34:42.896836 4853 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/horizon-operator@sha256:848f4c43c6bdd4e33e3ce1d147a85b9b6a6124a150bd5155dce421ef539259e9" Nov 22 07:34:42 crc kubenswrapper[4853]: E1122 07:34:42.897979 4853 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/horizon-operator@sha256:848f4c43c6bdd4e33e3ce1d147a85b9b6a6124a150bd5155dce421ef539259e9,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9m2ch,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-operator-controller-manager-598f69df5d-l654j_openstack-operators(511dcee7-13c9-45ca-b12f-3330fb1b14bc): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
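For readability: each &Container{...} blob in these errors is simply the kubelet printing the pod's container spec as a Go struct. Decoded into the usual corev1 types, the manager container amounts to the sketch below; the values are copied from the dumps (the resource quantities 536870912 and 268435456 bytes decode to 512Mi and 256Mi), while the volume mounts and env are omitted for brevity:

    // Illustrative reconstruction of the "manager" container from the dumps.
    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/api/resource"
        "k8s.io/apimachinery/pkg/util/intstr"
    )

    func managerContainer(image string) corev1.Container {
        return corev1.Container{
            Name:    "manager",
            Image:   image,
            Command: []string{"/manager"},
            Args: []string{
                "--health-probe-bind-address=:8081",
                "--metrics-bind-address=127.0.0.1:8080",
                "--leader-elect",
            },
            Resources: corev1.ResourceRequirements{
                Limits: corev1.ResourceList{
                    corev1.ResourceCPU:    resource.MustParse("500m"),
                    corev1.ResourceMemory: resource.MustParse("512Mi"),
                },
                Requests: corev1.ResourceList{
                    corev1.ResourceCPU:    resource.MustParse("10m"),
                    corev1.ResourceMemory: resource.MustParse("256Mi"),
                },
            },
            LivenessProbe: &corev1.Probe{
                ProbeHandler: corev1.ProbeHandler{
                    HTTPGet: &corev1.HTTPGetAction{Path: "/healthz", Port: intstr.FromInt(8081)},
                },
                InitialDelaySeconds: 15,
                PeriodSeconds:       20,
                TimeoutSeconds:      1,
                FailureThreshold:    3,
            },
            ReadinessProbe: &corev1.Probe{
                ProbeHandler: corev1.ProbeHandler{
                    HTTPGet: &corev1.HTTPGetAction{Path: "/readyz", Port: intstr.FromInt(8081)},
                },
                InitialDelaySeconds: 5,
                PeriodSeconds:       10,
                TimeoutSeconds:      1,
                FailureThreshold:    3,
            },
        }
    }

    func main() {
        c := managerContainer("quay.io/openstack-k8s-operators/horizon-operator@sha256:848f4c43c6bdd4e33e3ce1d147a85b9b6a6124a150bd5155dce421ef539259e9")
        fmt.Println(c.Name, "->", c.Image)
    }
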
Nov 22 07:34:45 crc kubenswrapper[4853]: E1122 07:34:45.456437 4853 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:3ef72bbd7cce89ff54d850ff44ca6d7b2360834a502da3d561aeb6fd3d9af50a" Nov 22 07:34:45 crc kubenswrapper[4853]: E1122 07:34:45.457212 4853 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:3ef72bbd7cce89ff54d850ff44ca6d7b2360834a502da3d561aeb6fd3d9af50a,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-t48dj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-7454b96578-h5674_openstack-operators(524f1308-44b0-4603-b612-eb02450cd46d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 22 07:34:50 crc kubenswrapper[4853]: I1122 07:34:50.671317 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-vrxhq"] Nov 22 07:34:50 crc kubenswrapper[4853]: I1122 07:34:50.677175 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vrxhq" Nov 22 07:34:50 crc kubenswrapper[4853]: I1122 07:34:50.690728 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vrxhq"] Nov 22 07:34:50 crc kubenswrapper[4853]: I1122 07:34:50.778290 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e5684cfc-746c-4c02-b8b8-39f0b57d62b6-catalog-content\") pod \"redhat-marketplace-vrxhq\" (UID: \"e5684cfc-746c-4c02-b8b8-39f0b57d62b6\") " pod="openshift-marketplace/redhat-marketplace-vrxhq" Nov 22 07:34:50 crc kubenswrapper[4853]: I1122 07:34:50.778445 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mx4h8\" (UniqueName: \"kubernetes.io/projected/e5684cfc-746c-4c02-b8b8-39f0b57d62b6-kube-api-access-mx4h8\") pod \"redhat-marketplace-vrxhq\" (UID: \"e5684cfc-746c-4c02-b8b8-39f0b57d62b6\") " pod="openshift-marketplace/redhat-marketplace-vrxhq" Nov 22 07:34:50 crc kubenswrapper[4853]: I1122 07:34:50.778495 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e5684cfc-746c-4c02-b8b8-39f0b57d62b6-utilities\") pod \"redhat-marketplace-vrxhq\" (UID: \"e5684cfc-746c-4c02-b8b8-39f0b57d62b6\") " pod="openshift-marketplace/redhat-marketplace-vrxhq" Nov 22 07:34:50 crc kubenswrapper[4853]: I1122 07:34:50.880583 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mx4h8\" (UniqueName: \"kubernetes.io/projected/e5684cfc-746c-4c02-b8b8-39f0b57d62b6-kube-api-access-mx4h8\") pod \"redhat-marketplace-vrxhq\" (UID: \"e5684cfc-746c-4c02-b8b8-39f0b57d62b6\") " pod="openshift-marketplace/redhat-marketplace-vrxhq" Nov 22 07:34:50 crc kubenswrapper[4853]: I1122 07:34:50.880669 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e5684cfc-746c-4c02-b8b8-39f0b57d62b6-utilities\") pod \"redhat-marketplace-vrxhq\" (UID: \"e5684cfc-746c-4c02-b8b8-39f0b57d62b6\") "
pod="openshift-marketplace/redhat-marketplace-vrxhq" Nov 22 07:34:50 crc kubenswrapper[4853]: I1122 07:34:50.880811 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e5684cfc-746c-4c02-b8b8-39f0b57d62b6-catalog-content\") pod \"redhat-marketplace-vrxhq\" (UID: \"e5684cfc-746c-4c02-b8b8-39f0b57d62b6\") " pod="openshift-marketplace/redhat-marketplace-vrxhq" Nov 22 07:34:50 crc kubenswrapper[4853]: I1122 07:34:50.881493 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e5684cfc-746c-4c02-b8b8-39f0b57d62b6-catalog-content\") pod \"redhat-marketplace-vrxhq\" (UID: \"e5684cfc-746c-4c02-b8b8-39f0b57d62b6\") " pod="openshift-marketplace/redhat-marketplace-vrxhq" Nov 22 07:34:50 crc kubenswrapper[4853]: I1122 07:34:50.881559 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e5684cfc-746c-4c02-b8b8-39f0b57d62b6-utilities\") pod \"redhat-marketplace-vrxhq\" (UID: \"e5684cfc-746c-4c02-b8b8-39f0b57d62b6\") " pod="openshift-marketplace/redhat-marketplace-vrxhq" Nov 22 07:34:50 crc kubenswrapper[4853]: I1122 07:34:50.900520 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mx4h8\" (UniqueName: \"kubernetes.io/projected/e5684cfc-746c-4c02-b8b8-39f0b57d62b6-kube-api-access-mx4h8\") pod \"redhat-marketplace-vrxhq\" (UID: \"e5684cfc-746c-4c02-b8b8-39f0b57d62b6\") " pod="openshift-marketplace/redhat-marketplace-vrxhq" Nov 22 07:34:51 crc kubenswrapper[4853]: I1122 07:34:51.038676 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vrxhq" Nov 22 07:34:51 crc kubenswrapper[4853]: E1122 07:34:51.573744 4853 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/octavia-operator@sha256:442c269d79163f8da75505019c02e9f0815837aaadcaddacb8e6c12df297ca13" Nov 22 07:34:51 crc kubenswrapper[4853]: E1122 07:34:51.574321 4853 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:442c269d79163f8da75505019c02e9f0815837aaadcaddacb8e6c12df297ca13,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-65f7z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-54cfbf4c7d-vsftr_openstack-operators(6e35d4a2-bb72-4396-83e0-4a9ba4d9274b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 22 07:34:52 crc kubenswrapper[4853]: E1122 07:34:52.605838 4853 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/designate-operator@sha256:c6405d94e56b40ef669729216ab4b9c441f34bb280902efa2940038c076b560f" Nov 22 07:34:52 crc kubenswrapper[4853]: E1122 07:34:52.606146 4853 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/designate-operator@sha256:c6405d94e56b40ef669729216ab4b9c441f34bb280902efa2940038c076b560f,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8k6pr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod designate-operator-controller-manager-767ccfd65f-pfmkd_openstack-operators(7ed40441-44d2-497f-93e7-d85116790d61): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 22 07:34:56 crc kubenswrapper[4853]: E1122 07:34:56.189502 4853 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/manila-operator@sha256:b749a5dd8bc718875c3f5e81b38d54d003be77ab92de4a3e9f9595566496a58a" Nov 22 07:34:56 crc kubenswrapper[4853]: E1122 07:34:56.190707 4853 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/manila-operator@sha256:b749a5dd8bc718875c3f5e81b38d54d003be77ab92de4a3e9f9595566496a58a,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dhxxn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-58f887965d-f4bvx_openstack-operators(798dacb1-9a2f-4f77-a55e-1f005447a5ec): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 22 07:34:56 crc kubenswrapper[4853]: E1122 07:34:56.703240 4853 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/test-operator@sha256:82207e753574d4be246f86c4b074500d66cf20214aa80f0a8525cf3287a35e6d" Nov 22 07:34:56 crc kubenswrapper[4853]: E1122 07:34:56.703426 4853 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:82207e753574d4be246f86c4b074500d66cf20214aa80f0a8525cf3287a35e6d,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7ctns,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-b4c496f69-wmm95_openstack-operators(46e379f1-feb8-460a-8448-066bb8f54330): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 22 07:34:57 crc kubenswrapper[4853]: E1122 07:34:57.124405 4853 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/neutron-operator@sha256:207578cb433471cc1a79c21a808c8a15489d1d3c9fa77e29f3f697c33917fec6" Nov 22 07:34:57 crc kubenswrapper[4853]: E1122 07:34:57.124612 4853 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:207578cb433471cc1a79c21a808c8a15489d1d3c9fa77e29f3f697c33917fec6,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7vd4f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-78bd47f458-mxwrm_openstack-operators(05971821-7368-4352-8955-bd9432958c9b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 22 07:34:57 crc kubenswrapper[4853]: E1122 07:34:57.700843 4853 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ironic-operator@sha256:b582189b55fddc180a6d468c9dba7078009a693db37b4093d4ba0c99ec675377" Nov 22 07:34:57 crc kubenswrapper[4853]: E1122 07:34:57.701626 4853 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ironic-operator@sha256:b582189b55fddc180a6d468c9dba7078009a693db37b4093d4ba0c99ec675377,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-lxqkm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ironic-operator-controller-manager-99b499f4-km4bs_openstack-operators(8a902288-c5fa-4106-89dc-dad1ed8fff47): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 22 07:34:59 crc kubenswrapper[4853]: E1122 07:34:59.297953 4853 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" Nov 22 07:34:59 crc kubenswrapper[4853]: E1122 07:34:59.298652 4853 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wvxpg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-5f97d8c699-fdt65_openstack-operators(131c2522-8c48-4c18-9a39-99a66b87b9ed): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 22 07:34:59 crc 
kubenswrapper[4853]: E1122 07:34:59.301881 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-fdt65" podUID="131c2522-8c48-4c18-9a39-99a66b87b9ed" Nov 22 07:34:59 crc kubenswrapper[4853]: E1122 07:34:59.900423 4853 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ovn-operator@sha256:5d49d4594c66eda7b151746cc6e1d3c67c0129b4503eeb043a64ae8ec2da6a1b" Nov 22 07:34:59 crc kubenswrapper[4853]: E1122 07:34:59.900838 4853 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:5d49d4594c66eda7b151746cc6e1d3c67c0129b4503eeb043a64ae8ec2da6a1b,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wqnq4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-54fc5f65b7-cm2jj_openstack-operators(ac1b2ef9-7ff0-4a11-b8c6-89f6ed7c0dd4): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
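The "rpc error: code = Canceled desc = copying config: context canceled" strings show these pull failures are gRPC status errors surfaced from the CRI image service: the pull runs as a gRPC call into CRI-O, and when its context is cancelled mid-copy the caller sees codes.Canceled. An illustrative sketch with grpc-go's status package:

    // Sketch: a cancelled context surfaces as a gRPC Canceled status error.
    package main

    import (
        "context"
        "fmt"

        "google.golang.org/grpc/codes"
        "google.golang.org/grpc/status"
    )

    func main() {
        ctx, cancel := context.WithCancel(context.Background())
        cancel() // e.g. the runtime abandons the pull partway through

        err := status.FromContextError(ctx.Err()).Err()
        fmt.Println(err)                                // rpc error: code = Canceled desc = context canceled
        fmt.Println(status.Code(err) == codes.Canceled) // true
    }
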
podUID="ac1b2ef9-7ff0-4a11-b8c6-89f6ed7c0dd4" Nov 22 07:35:00 crc kubenswrapper[4853]: E1122 07:35:00.311522 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-fdt65" podUID="131c2522-8c48-4c18-9a39-99a66b87b9ed" Nov 22 07:35:03 crc kubenswrapper[4853]: E1122 07:35:03.263292 4853 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/watcher-operator@sha256:4838402d41d42c56613d43dc5041aae475a2b18e6172491d6c4d4a78a580697f" Nov 22 07:35:03 crc kubenswrapper[4853]: E1122 07:35:03.263885 4853 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:4838402d41d42c56613d43dc5041aae475a2b18e6172491d6c4d4a78a580697f,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hp2rd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-8c6448b9f-fcl7j_openstack-operators(67499981-fc7e-4b6d-ab2b-46b528a165a5): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 22 07:35:03 crc kubenswrapper[4853]: E1122 07:35:03.919892 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/horizon-operator-controller-manager-598f69df5d-l654j" podUID="511dcee7-13c9-45ca-b12f-3330fb1b14bc" Nov 22 07:35:04 crc kubenswrapper[4853]: E1122 07:35:04.172266 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/octavia-operator-controller-manager-54cfbf4c7d-vsftr" podUID="6e35d4a2-bb72-4396-83e0-4a9ba4d9274b" Nov 22 07:35:04 crc kubenswrapper[4853]: I1122 07:35:04.306703 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vrxhq"] Nov 22 07:35:04 crc kubenswrapper[4853]: I1122 07:35:04.360971 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-6498cbf48f-cjqxx" event={"ID":"74c3e58c-6a8f-462f-a595-28db25f9e2c5","Type":"ContainerStarted","Data":"147e24d88f93ef323b4ce744743b6bd569be04e92d0a43da42721e90a719abbb"} Nov 22 07:35:04 crc kubenswrapper[4853]: I1122 07:35:04.375250 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-598f69df5d-l654j" event={"ID":"511dcee7-13c9-45ca-b12f-3330fb1b14bc","Type":"ContainerStarted","Data":"3e3406f87053a66f60578d7c7d5b0271936e785c83dca2770943691b850cb107"} Nov 22 07:35:04 crc kubenswrapper[4853]: I1122 07:35:04.386238 4853 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 22 07:35:04 crc kubenswrapper[4853]: E1122 07:35:04.405572 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-7454b96578-h5674" podUID="524f1308-44b0-4603-b612-eb02450cd46d" Nov 22 07:35:04 crc kubenswrapper[4853]: I1122 07:35:04.406013 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7454b96578-h5674" event={"ID":"524f1308-44b0-4603-b612-eb02450cd46d","Type":"ContainerStarted","Data":"332baf9b9840bfdb9e9253dfe52ad38aae4f8142cb3978a59f9c92d3b5b585c1"} Nov 22 07:35:04 crc kubenswrapper[4853]: I1122 07:35:04.435093 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-54cfbf4c7d-vsftr" event={"ID":"6e35d4a2-bb72-4396-83e0-4a9ba4d9274b","Type":"ContainerStarted","Data":"bceadc6794852bb58705bb55658b30ac680b7e93178d0e39b4c4f4b22ea129e5"} Nov 22 07:35:04 crc kubenswrapper[4853]: W1122 07:35:04.532627 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode5684cfc_746c_4c02_b8b8_39f0b57d62b6.slice/crio-04382c3a4f626f3e2d2933670300e6a9d21b6049799ab8e468e2acf299a8c3c6 WatchSource:0}: Error finding container 04382c3a4f626f3e2d2933670300e6a9d21b6049799ab8e468e2acf299a8c3c6: Status 404 returned error can't find the container with id 04382c3a4f626f3e2d2933670300e6a9d21b6049799ab8e468e2acf299a8c3c6 Nov 22 07:35:04 crc kubenswrapper[4853]: E1122 07:35:04.623403 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/designate-operator-controller-manager-767ccfd65f-pfmkd" podUID="7ed40441-44d2-497f-93e7-d85116790d61" Nov 22 07:35:04 crc kubenswrapper[4853]: E1122 07:35:04.650207 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ironic-operator-controller-manager-99b499f4-km4bs" podUID="8a902288-c5fa-4106-89dc-dad1ed8fff47" Nov 22 07:35:04 crc kubenswrapper[4853]: E1122 07:35:04.666475 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/manila-operator-controller-manager-58f887965d-f4bvx" podUID="798dacb1-9a2f-4f77-a55e-1f005447a5ec" Nov 22 07:35:05 crc kubenswrapper[4853]: E1122 07:35:05.046336 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/neutron-operator-controller-manager-78bd47f458-mxwrm" podUID="05971821-7368-4352-8955-bd9432958c9b" Nov 22 07:35:05 crc kubenswrapper[4853]: E1122 07:35:05.066090 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/test-operator-controller-manager-b4c496f69-wmm95" podUID="46e379f1-feb8-460a-8448-066bb8f54330" Nov 22 07:35:05 crc kubenswrapper[4853]: I1122 07:35:05.447899 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-75fb479bcc-nf2bz" event={"ID":"f0fa8b73-0604-41c5-9dfd-ea2f3ca36c43","Type":"ContainerStarted","Data":"ae967c597dbf8dd4bceed2c9b1c79fb553b7ee7a22cd3fa7b1d99a03fe1584b6"} Nov 22 07:35:05 crc kubenswrapper[4853]: I1122 07:35:05.450977 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-78bd47f458-mxwrm" event={"ID":"05971821-7368-4352-8955-bd9432958c9b","Type":"ContainerStarted","Data":"b220340a33708b221876b2c88898892fedd8f93a785077a379fa5cf74f68d012"} Nov 22 07:35:05 crc kubenswrapper[4853]: I1122 07:35:05.457542 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" event={"ID":"476c875a-2b87-419a-8042-0ba059620fd8","Type":"ContainerStarted","Data":"1e441baa32afe9d9e98e192afda6a47d949c68ceb8edb814ef148dd3d84f45b1"} Nov 22 07:35:05 crc kubenswrapper[4853]: I1122 07:35:05.463238 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-54b5986bb8-hbqk8" event={"ID":"d1a5f3b8-6d7d-4955-9973-c743f0b16dc5","Type":"ContainerStarted","Data":"25716fc2a3bce98131334d4727dbc8cf2be4cb1d28979e77c78e9e482dec98fb"} Nov 22 07:35:05 crc kubenswrapper[4853]: I1122 07:35:05.469607 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-767ccfd65f-pfmkd" event={"ID":"7ed40441-44d2-497f-93e7-d85116790d61","Type":"ContainerStarted","Data":"62de5fe745db73561bb20c354a9f27c70cc2835f1a0861fc8ef4cacf0a9d1d28"} Nov 22 07:35:05 crc kubenswrapper[4853]: I1122 07:35:05.475883 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/swift-operator-controller-manager-d656998f4-fgtmg" event={"ID":"c82379b6-72f2-4474-8714-64f9e6ea7bf7","Type":"ContainerStarted","Data":"501ea3b279d9ab42281b7ebd9c2ac4c23dd28124366d97b7d5c6822b4fa4b68b"} Nov 22 07:35:05 crc kubenswrapper[4853]: I1122 07:35:05.478546 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-58f887965d-f4bvx" event={"ID":"798dacb1-9a2f-4f77-a55e-1f005447a5ec","Type":"ContainerStarted","Data":"2fd90d177c5d12be5e813cfff400b0476a029691730f92592afb66cf74412780"} Nov 22 07:35:05 crc kubenswrapper[4853]: I1122 07:35:05.482423 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-99b499f4-km4bs" event={"ID":"8a902288-c5fa-4106-89dc-dad1ed8fff47","Type":"ContainerStarted","Data":"0416042a834bb58668780e81f623e9a5533b17dc40f482b0329d27bfae209670"} Nov 22 07:35:05 crc kubenswrapper[4853]: I1122 07:35:05.488240 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-cfbb9c588-gwqp2" event={"ID":"51d7517d-674b-4d91-bb05-89e11ce77ee8","Type":"ContainerStarted","Data":"7cd29fef59fc2a7befcb0200ae6add6a206d497320df74c69bd3562c3e82386e"} Nov 22 07:35:05 crc kubenswrapper[4853]: I1122 07:35:05.494426 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vrxhq" event={"ID":"e5684cfc-746c-4c02-b8b8-39f0b57d62b6","Type":"ContainerStarted","Data":"04382c3a4f626f3e2d2933670300e6a9d21b6049799ab8e468e2acf299a8c3c6"} Nov 22 07:35:05 crc kubenswrapper[4853]: I1122 07:35:05.499501 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-b4c496f69-wmm95" event={"ID":"46e379f1-feb8-460a-8448-066bb8f54330","Type":"ContainerStarted","Data":"9db66e5c6f8621fa928c60b15e5fe1f83267bbad3a265e7d8fb9886b0f0595ea"} Nov 22 07:35:05 crc kubenswrapper[4853]: I1122 07:35:05.507771 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-7969689c84-8kl4r" event={"ID":"9a6ac321-fea5-4011-9112-60695ec2d996","Type":"ContainerStarted","Data":"fc169793173b5ce117134d161562671c23e8d2711f0d2ac8ff78231008d344e7"} Nov 22 07:35:06 crc kubenswrapper[4853]: I1122 07:35:06.520413 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-6498cbf48f-cjqxx" event={"ID":"74c3e58c-6a8f-462f-a595-28db25f9e2c5","Type":"ContainerStarted","Data":"dbb63153bbd0794809248b0da9ad1b0592b3f17105bd20c758fa51c32272e41e"} Nov 22 07:35:06 crc kubenswrapper[4853]: I1122 07:35:06.522397 4853 generic.go:334] "Generic (PLEG): container finished" podID="e5684cfc-746c-4c02-b8b8-39f0b57d62b6" containerID="6cf1c2a16fb56b218b4476c00a9e612ad06f52489138cda1f74194f9d7422db3" exitCode=0 Nov 22 07:35:06 crc kubenswrapper[4853]: I1122 07:35:06.522468 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vrxhq" event={"ID":"e5684cfc-746c-4c02-b8b8-39f0b57d62b6","Type":"ContainerDied","Data":"6cf1c2a16fb56b218b4476c00a9e612ad06f52489138cda1f74194f9d7422db3"} Nov 22 07:35:06 crc kubenswrapper[4853]: I1122 07:35:06.525008 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-54b5986bb8-hbqk8" 
event={"ID":"d1a5f3b8-6d7d-4955-9973-c743f0b16dc5","Type":"ContainerStarted","Data":"2624c341860e9658dc032b50371a46a9a4fa53d3d054031bf401e2fd15f22108"} Nov 22 07:35:06 crc kubenswrapper[4853]: I1122 07:35:06.525135 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-54b5986bb8-hbqk8" Nov 22 07:35:06 crc kubenswrapper[4853]: I1122 07:35:06.527378 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-6dd8864d7c-4wjmn" event={"ID":"674f240d-b9b1-488a-b6bf-d6231529cf4d","Type":"ContainerStarted","Data":"f3ea3759dfca12a973c49724cd0076546b395401ac27358d68d51b8f3d1e0543"} Nov 22 07:35:06 crc kubenswrapper[4853]: I1122 07:35:06.578184 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-54b5986bb8-hbqk8" podStartSLOduration=12.667453154 podStartE2EDuration="59.578161314s" podCreationTimestamp="2025-11-22 07:34:07 +0000 UTC" firstStartedPulling="2025-11-22 07:34:10.195521214 +0000 UTC m=+1449.036143840" lastFinishedPulling="2025-11-22 07:34:57.106229374 +0000 UTC m=+1495.946852000" observedRunningTime="2025-11-22 07:35:06.56920662 +0000 UTC m=+1505.409829246" watchObservedRunningTime="2025-11-22 07:35:06.578161314 +0000 UTC m=+1505.418783940" Nov 22 07:35:08 crc kubenswrapper[4853]: I1122 07:35:08.544761 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5b797b8dff-5lw59" event={"ID":"242375a1-78b5-4540-9e93-ad4ef21b67c8","Type":"ContainerStarted","Data":"3729dd93a0cee0afcc123903a6c82abcefe432f3788435715a6b070b2c63b43a"} Nov 22 07:35:08 crc kubenswrapper[4853]: I1122 07:35:08.548526 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-d656998f4-fgtmg" event={"ID":"c82379b6-72f2-4474-8714-64f9e6ea7bf7","Type":"ContainerStarted","Data":"7c99dd04de4b52679b5a5ea1cb73dd7278f4f4bc65a88df73cd69461923e071d"} Nov 22 07:35:08 crc kubenswrapper[4853]: I1122 07:35:08.548793 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-6498cbf48f-cjqxx" Nov 22 07:35:08 crc kubenswrapper[4853]: I1122 07:35:08.550597 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-6498cbf48f-cjqxx" Nov 22 07:35:08 crc kubenswrapper[4853]: I1122 07:35:08.566536 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-6498cbf48f-cjqxx" podStartSLOduration=51.370077494 podStartE2EDuration="1m1.566520131s" podCreationTimestamp="2025-11-22 07:34:07 +0000 UTC" firstStartedPulling="2025-11-22 07:34:08.963595403 +0000 UTC m=+1447.804218029" lastFinishedPulling="2025-11-22 07:34:19.16003804 +0000 UTC m=+1458.000660666" observedRunningTime="2025-11-22 07:35:08.565648067 +0000 UTC m=+1507.406270713" watchObservedRunningTime="2025-11-22 07:35:08.566520131 +0000 UTC m=+1507.407142757" Nov 22 07:35:09 crc kubenswrapper[4853]: I1122 07:35:09.020402 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-9ss44"] Nov 22 07:35:09 crc kubenswrapper[4853]: I1122 07:35:09.024033 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9ss44" Nov 22 07:35:09 crc kubenswrapper[4853]: I1122 07:35:09.038768 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9ss44"] Nov 22 07:35:09 crc kubenswrapper[4853]: E1122 07:35:09.084247 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/watcher-operator-controller-manager-8c6448b9f-fcl7j" podUID="67499981-fc7e-4b6d-ab2b-46b528a165a5" Nov 22 07:35:09 crc kubenswrapper[4853]: I1122 07:35:09.121849 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bfe4a5bb-7e11-46eb-8696-078e64db6a90-utilities\") pod \"redhat-operators-9ss44\" (UID: \"bfe4a5bb-7e11-46eb-8696-078e64db6a90\") " pod="openshift-marketplace/redhat-operators-9ss44" Nov 22 07:35:09 crc kubenswrapper[4853]: I1122 07:35:09.121991 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bfe4a5bb-7e11-46eb-8696-078e64db6a90-catalog-content\") pod \"redhat-operators-9ss44\" (UID: \"bfe4a5bb-7e11-46eb-8696-078e64db6a90\") " pod="openshift-marketplace/redhat-operators-9ss44" Nov 22 07:35:09 crc kubenswrapper[4853]: I1122 07:35:09.122043 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xjl6\" (UniqueName: \"kubernetes.io/projected/bfe4a5bb-7e11-46eb-8696-078e64db6a90-kube-api-access-9xjl6\") pod \"redhat-operators-9ss44\" (UID: \"bfe4a5bb-7e11-46eb-8696-078e64db6a90\") " pod="openshift-marketplace/redhat-operators-9ss44" Nov 22 07:35:09 crc kubenswrapper[4853]: I1122 07:35:09.223325 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bfe4a5bb-7e11-46eb-8696-078e64db6a90-catalog-content\") pod \"redhat-operators-9ss44\" (UID: \"bfe4a5bb-7e11-46eb-8696-078e64db6a90\") " pod="openshift-marketplace/redhat-operators-9ss44" Nov 22 07:35:09 crc kubenswrapper[4853]: I1122 07:35:09.223416 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9xjl6\" (UniqueName: \"kubernetes.io/projected/bfe4a5bb-7e11-46eb-8696-078e64db6a90-kube-api-access-9xjl6\") pod \"redhat-operators-9ss44\" (UID: \"bfe4a5bb-7e11-46eb-8696-078e64db6a90\") " pod="openshift-marketplace/redhat-operators-9ss44" Nov 22 07:35:09 crc kubenswrapper[4853]: I1122 07:35:09.223493 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bfe4a5bb-7e11-46eb-8696-078e64db6a90-utilities\") pod \"redhat-operators-9ss44\" (UID: \"bfe4a5bb-7e11-46eb-8696-078e64db6a90\") " pod="openshift-marketplace/redhat-operators-9ss44" Nov 22 07:35:09 crc kubenswrapper[4853]: I1122 07:35:09.224097 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bfe4a5bb-7e11-46eb-8696-078e64db6a90-catalog-content\") pod \"redhat-operators-9ss44\" (UID: \"bfe4a5bb-7e11-46eb-8696-078e64db6a90\") " pod="openshift-marketplace/redhat-operators-9ss44" Nov 22 07:35:09 crc kubenswrapper[4853]: I1122 07:35:09.224197 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bfe4a5bb-7e11-46eb-8696-078e64db6a90-utilities\") pod \"redhat-operators-9ss44\" (UID: \"bfe4a5bb-7e11-46eb-8696-078e64db6a90\") " pod="openshift-marketplace/redhat-operators-9ss44" Nov 22 07:35:09 crc kubenswrapper[4853]: I1122 07:35:09.260010 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9xjl6\" (UniqueName: \"kubernetes.io/projected/bfe4a5bb-7e11-46eb-8696-078e64db6a90-kube-api-access-9xjl6\") pod \"redhat-operators-9ss44\" (UID: \"bfe4a5bb-7e11-46eb-8696-078e64db6a90\") " pod="openshift-marketplace/redhat-operators-9ss44" Nov 22 07:35:09 crc kubenswrapper[4853]: I1122 07:35:09.352355 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9ss44" Nov 22 07:35:09 crc kubenswrapper[4853]: I1122 07:35:09.584013 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-75fb479bcc-nf2bz" event={"ID":"f0fa8b73-0604-41c5-9dfd-ea2f3ca36c43","Type":"ContainerStarted","Data":"ee0d51b3167a5f4645ece6fadabcb97c3e7f27695077a6e400c36de11c6521b6"} Nov 22 07:35:09 crc kubenswrapper[4853]: I1122 07:35:09.584096 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-75fb479bcc-nf2bz" Nov 22 07:35:09 crc kubenswrapper[4853]: I1122 07:35:09.586723 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-75fb479bcc-nf2bz" Nov 22 07:35:09 crc kubenswrapper[4853]: I1122 07:35:09.587482 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-8c6448b9f-fcl7j" event={"ID":"67499981-fc7e-4b6d-ab2b-46b528a165a5","Type":"ContainerStarted","Data":"f6866bcce50d6ce01be85e2abce6d0d33cd646710e97c08636ef7d7d86e2cd17"} Nov 22 07:35:09 crc kubenswrapper[4853]: I1122 07:35:09.590724 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-8c7444f48-dgc8d" event={"ID":"8774b599-7d20-4c58-9441-821beca48884","Type":"ContainerStarted","Data":"507ab95e33251c6a907f7339dbbf43cd54dc4f0bedd9f4ee97ad9668bb08c23d"} Nov 22 07:35:09 crc kubenswrapper[4853]: I1122 07:35:09.593809 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-b477b5977-7gkdk" event={"ID":"e1bbcb38-bfc4-4b92-9fa2-5bb3cebfcd5e","Type":"ContainerStarted","Data":"b49ae8d7b58b47795b98647656a3484dd784d33262b465ab234e81196b1a561c"} Nov 22 07:35:09 crc kubenswrapper[4853]: I1122 07:35:09.594103 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-b477b5977-7gkdk" Nov 22 07:35:09 crc kubenswrapper[4853]: I1122 07:35:09.597503 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-cfbb9c588-gwqp2" event={"ID":"51d7517d-674b-4d91-bb05-89e11ce77ee8","Type":"ContainerStarted","Data":"2ca75c1c1783c556f2241b25316eb1d86188edca0982248f46ca5f66ca795abe"} Nov 22 07:35:09 crc kubenswrapper[4853]: I1122 07:35:09.597739 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-cfbb9c588-gwqp2" Nov 22 07:35:09 crc kubenswrapper[4853]: I1122 07:35:09.600244 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/heat-operator-controller-manager-56f54d6746-ww42j" event={"ID":"59095c24-fa32-4f44-b7d0-593b1291cf56","Type":"ContainerStarted","Data":"553deed8396440ad994df19e8d4a28fd7b41cb15351323a9792ac45671a24e3c"} Nov 22 07:35:09 crc kubenswrapper[4853]: I1122 07:35:09.600614 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-d656998f4-fgtmg" Nov 22 07:35:09 crc kubenswrapper[4853]: I1122 07:35:09.600637 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-5b797b8dff-5lw59" Nov 22 07:35:09 crc kubenswrapper[4853]: I1122 07:35:09.601256 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-cfbb9c588-gwqp2" Nov 22 07:35:09 crc kubenswrapper[4853]: I1122 07:35:09.602714 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-d656998f4-fgtmg" Nov 22 07:35:09 crc kubenswrapper[4853]: I1122 07:35:09.609684 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-75fb479bcc-nf2bz" podStartSLOduration=26.646869588 podStartE2EDuration="1m2.609655049s" podCreationTimestamp="2025-11-22 07:34:07 +0000 UTC" firstStartedPulling="2025-11-22 07:34:08.976270768 +0000 UTC m=+1447.816893394" lastFinishedPulling="2025-11-22 07:34:44.939056219 +0000 UTC m=+1483.779678855" observedRunningTime="2025-11-22 07:35:09.603728857 +0000 UTC m=+1508.444351483" watchObservedRunningTime="2025-11-22 07:35:09.609655049 +0000 UTC m=+1508.450277685" Nov 22 07:35:09 crc kubenswrapper[4853]: I1122 07:35:09.634475 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-5b797b8dff-5lw59" podStartSLOduration=10.031940687 podStartE2EDuration="1m2.634441901s" podCreationTimestamp="2025-11-22 07:34:07 +0000 UTC" firstStartedPulling="2025-11-22 07:34:11.118159495 +0000 UTC m=+1449.958782121" lastFinishedPulling="2025-11-22 07:35:03.720660709 +0000 UTC m=+1502.561283335" observedRunningTime="2025-11-22 07:35:09.626225179 +0000 UTC m=+1508.466847825" watchObservedRunningTime="2025-11-22 07:35:09.634441901 +0000 UTC m=+1508.475064547" Nov 22 07:35:09 crc kubenswrapper[4853]: I1122 07:35:09.675297 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-cfbb9c588-gwqp2" podStartSLOduration=13.918632777 podStartE2EDuration="1m2.67527299s" podCreationTimestamp="2025-11-22 07:34:07 +0000 UTC" firstStartedPulling="2025-11-22 07:34:10.52116608 +0000 UTC m=+1449.361788706" lastFinishedPulling="2025-11-22 07:34:59.277806293 +0000 UTC m=+1498.118428919" observedRunningTime="2025-11-22 07:35:09.669841292 +0000 UTC m=+1508.510463918" watchObservedRunningTime="2025-11-22 07:35:09.67527299 +0000 UTC m=+1508.515895616" Nov 22 07:35:09 crc kubenswrapper[4853]: I1122 07:35:09.710354 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-d656998f4-fgtmg" podStartSLOduration=16.695590729 podStartE2EDuration="1m2.710320311s" podCreationTimestamp="2025-11-22 07:34:07 +0000 UTC" firstStartedPulling="2025-11-22 07:34:11.093201397 +0000 UTC m=+1449.933824023" lastFinishedPulling="2025-11-22 07:34:57.107930979 +0000 UTC m=+1495.948553605" 
observedRunningTime="2025-11-22 07:35:09.693482284 +0000 UTC m=+1508.534104910" watchObservedRunningTime="2025-11-22 07:35:09.710320311 +0000 UTC m=+1508.550942937" Nov 22 07:35:09 crc kubenswrapper[4853]: I1122 07:35:09.733463 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-b477b5977-7gkdk" podStartSLOduration=9.815325618 podStartE2EDuration="1m2.733442938s" podCreationTimestamp="2025-11-22 07:34:07 +0000 UTC" firstStartedPulling="2025-11-22 07:34:11.107838604 +0000 UTC m=+1449.948461230" lastFinishedPulling="2025-11-22 07:35:04.025955934 +0000 UTC m=+1502.866578550" observedRunningTime="2025-11-22 07:35:09.72427554 +0000 UTC m=+1508.564898166" watchObservedRunningTime="2025-11-22 07:35:09.733442938 +0000 UTC m=+1508.574065564" Nov 22 07:35:09 crc kubenswrapper[4853]: E1122 07:35:09.915599 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:4838402d41d42c56613d43dc5041aae475a2b18e6172491d6c4d4a78a580697f\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-8c6448b9f-fcl7j" podUID="67499981-fc7e-4b6d-ab2b-46b528a165a5" Nov 22 07:35:10 crc kubenswrapper[4853]: I1122 07:35:10.625812 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-6dd8864d7c-4wjmn" event={"ID":"674f240d-b9b1-488a-b6bf-d6231529cf4d","Type":"ContainerStarted","Data":"6c13db0b6bd182d744f54d784c3d54f1856d9844553b978b2872ab20501b6501"} Nov 22 07:35:10 crc kubenswrapper[4853]: I1122 07:35:10.626522 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-6dd8864d7c-4wjmn" Nov 22 07:35:10 crc kubenswrapper[4853]: I1122 07:35:10.633646 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-6dd8864d7c-4wjmn" Nov 22 07:35:10 crc kubenswrapper[4853]: I1122 07:35:10.636643 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-54cfbf4c7d-vsftr" event={"ID":"6e35d4a2-bb72-4396-83e0-4a9ba4d9274b","Type":"ContainerStarted","Data":"ac436ae92b16a8fc893d5b476edc4ec1a604dc0808ff61638b267f4cd9bee2c4"} Nov 22 07:35:10 crc kubenswrapper[4853]: I1122 07:35:10.637571 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-54cfbf4c7d-vsftr" Nov 22 07:35:10 crc kubenswrapper[4853]: I1122 07:35:10.640550 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-8c7444f48-dgc8d" event={"ID":"8774b599-7d20-4c58-9441-821beca48884","Type":"ContainerStarted","Data":"0cb955a0946d0aeeaede036c896d594bfc221a35c803bbb1a345a52782e77cab"} Nov 22 07:35:10 crc kubenswrapper[4853]: I1122 07:35:10.640919 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-8c7444f48-dgc8d" Nov 22 07:35:10 crc kubenswrapper[4853]: I1122 07:35:10.646621 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-7969689c84-8kl4r" 
event={"ID":"9a6ac321-fea5-4011-9112-60695ec2d996","Type":"ContainerStarted","Data":"3bd8cec2cd5a209aad071c968020c037c16031ab1cb5de4937e89ee199385788"} Nov 22 07:35:10 crc kubenswrapper[4853]: I1122 07:35:10.647045 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-7969689c84-8kl4r" Nov 22 07:35:10 crc kubenswrapper[4853]: I1122 07:35:10.650036 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-7969689c84-8kl4r" Nov 22 07:35:10 crc kubenswrapper[4853]: I1122 07:35:10.654805 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-598f69df5d-l654j" event={"ID":"511dcee7-13c9-45ca-b12f-3330fb1b14bc","Type":"ContainerStarted","Data":"b6868516dbd50a21375d752afd7329f7e91ba7484f0e59e65ae29ed0388b89ed"} Nov 22 07:35:10 crc kubenswrapper[4853]: I1122 07:35:10.656019 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-598f69df5d-l654j" Nov 22 07:35:10 crc kubenswrapper[4853]: I1122 07:35:10.668238 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9ss44"] Nov 22 07:35:10 crc kubenswrapper[4853]: I1122 07:35:10.669106 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-56f54d6746-ww42j" event={"ID":"59095c24-fa32-4f44-b7d0-593b1291cf56","Type":"ContainerStarted","Data":"96eeeb71321e2d5ff1f814f1f2fc42974969cceed4d6b288121f488d23cb8360"} Nov 22 07:35:10 crc kubenswrapper[4853]: I1122 07:35:10.669674 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-56f54d6746-ww42j" Nov 22 07:35:10 crc kubenswrapper[4853]: I1122 07:35:10.681725 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-6dd8864d7c-4wjmn" podStartSLOduration=14.661964214 podStartE2EDuration="1m3.681694s" podCreationTimestamp="2025-11-22 07:34:07 +0000 UTC" firstStartedPulling="2025-11-22 07:34:10.260656146 +0000 UTC m=+1449.101278772" lastFinishedPulling="2025-11-22 07:34:59.280385932 +0000 UTC m=+1498.121008558" observedRunningTime="2025-11-22 07:35:10.67540151 +0000 UTC m=+1509.516024166" watchObservedRunningTime="2025-11-22 07:35:10.681694 +0000 UTC m=+1509.522316626" Nov 22 07:35:10 crc kubenswrapper[4853]: I1122 07:35:10.688522 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-b4c496f69-wmm95" event={"ID":"46e379f1-feb8-460a-8448-066bb8f54330","Type":"ContainerStarted","Data":"0a292c9eccd7022138d107686722de81758a3a32de57538c0332400e949078a6"} Nov 22 07:35:10 crc kubenswrapper[4853]: I1122 07:35:10.688568 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-b4c496f69-wmm95" Nov 22 07:35:10 crc kubenswrapper[4853]: E1122 07:35:10.692988 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:4838402d41d42c56613d43dc5041aae475a2b18e6172491d6c4d4a78a580697f\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-8c6448b9f-fcl7j" podUID="67499981-fc7e-4b6d-ab2b-46b528a165a5" Nov 22 
07:35:10 crc kubenswrapper[4853]: I1122 07:35:10.742130 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-598f69df5d-l654j" podStartSLOduration=3.901222742 podStartE2EDuration="1m3.7421022s" podCreationTimestamp="2025-11-22 07:34:07 +0000 UTC" firstStartedPulling="2025-11-22 07:34:10.140392765 +0000 UTC m=+1448.981015391" lastFinishedPulling="2025-11-22 07:35:09.981272223 +0000 UTC m=+1508.821894849" observedRunningTime="2025-11-22 07:35:10.71779476 +0000 UTC m=+1509.558417396" watchObservedRunningTime="2025-11-22 07:35:10.7421022 +0000 UTC m=+1509.582724826" Nov 22 07:35:10 crc kubenswrapper[4853]: I1122 07:35:10.849426 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-56f54d6746-ww42j" podStartSLOduration=12.963165033 podStartE2EDuration="1m3.849392281s" podCreationTimestamp="2025-11-22 07:34:07 +0000 UTC" firstStartedPulling="2025-11-22 07:34:09.766502207 +0000 UTC m=+1448.607124833" lastFinishedPulling="2025-11-22 07:35:00.652729455 +0000 UTC m=+1499.493352081" observedRunningTime="2025-11-22 07:35:10.756031427 +0000 UTC m=+1509.596654063" watchObservedRunningTime="2025-11-22 07:35:10.849392281 +0000 UTC m=+1509.690014907" Nov 22 07:35:10 crc kubenswrapper[4853]: I1122 07:35:10.850334 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-54cfbf4c7d-vsftr" podStartSLOduration=4.348986341 podStartE2EDuration="1m3.850325157s" podCreationTimestamp="2025-11-22 07:34:07 +0000 UTC" firstStartedPulling="2025-11-22 07:34:10.469743951 +0000 UTC m=+1449.310366577" lastFinishedPulling="2025-11-22 07:35:09.971082767 +0000 UTC m=+1508.811705393" observedRunningTime="2025-11-22 07:35:10.816082407 +0000 UTC m=+1509.656705033" watchObservedRunningTime="2025-11-22 07:35:10.850325157 +0000 UTC m=+1509.690947773" Nov 22 07:35:10 crc kubenswrapper[4853]: I1122 07:35:10.967005 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-8c7444f48-dgc8d" podStartSLOduration=13.18662674 podStartE2EDuration="1m3.966960682s" podCreationTimestamp="2025-11-22 07:34:07 +0000 UTC" firstStartedPulling="2025-11-22 07:34:10.480400312 +0000 UTC m=+1449.321022938" lastFinishedPulling="2025-11-22 07:35:01.260734254 +0000 UTC m=+1500.101356880" observedRunningTime="2025-11-22 07:35:10.871863571 +0000 UTC m=+1509.712486207" watchObservedRunningTime="2025-11-22 07:35:10.966960682 +0000 UTC m=+1509.807583308" Nov 22 07:35:11 crc kubenswrapper[4853]: I1122 07:35:10.996895 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-7969689c84-8kl4r" podStartSLOduration=12.334147195 podStartE2EDuration="1m3.996847493s" podCreationTimestamp="2025-11-22 07:34:07 +0000 UTC" firstStartedPulling="2025-11-22 07:34:09.597990465 +0000 UTC m=+1448.438613091" lastFinishedPulling="2025-11-22 07:35:01.260690763 +0000 UTC m=+1500.101313389" observedRunningTime="2025-11-22 07:35:10.952480539 +0000 UTC m=+1509.793103165" watchObservedRunningTime="2025-11-22 07:35:10.996847493 +0000 UTC m=+1509.837470119" Nov 22 07:35:11 crc kubenswrapper[4853]: I1122 07:35:11.171095 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-b4c496f69-wmm95" podStartSLOduration=5.196505395 
podStartE2EDuration="1m4.171075381s" podCreationTimestamp="2025-11-22 07:34:07 +0000 UTC" firstStartedPulling="2025-11-22 07:34:11.057229488 +0000 UTC m=+1449.897852114" lastFinishedPulling="2025-11-22 07:35:10.031799474 +0000 UTC m=+1508.872422100" observedRunningTime="2025-11-22 07:35:11.17102427 +0000 UTC m=+1510.011646896" watchObservedRunningTime="2025-11-22 07:35:11.171075381 +0000 UTC m=+1510.011698007" Nov 22 07:35:11 crc kubenswrapper[4853]: I1122 07:35:11.700598 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-99b499f4-km4bs" event={"ID":"8a902288-c5fa-4106-89dc-dad1ed8fff47","Type":"ContainerStarted","Data":"8284f93abab90a75292246bcd4394cfdde44042ef601ba32195acb3929d43258"} Nov 22 07:35:11 crc kubenswrapper[4853]: I1122 07:35:11.702165 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-99b499f4-km4bs" Nov 22 07:35:11 crc kubenswrapper[4853]: I1122 07:35:11.705174 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vrxhq" event={"ID":"e5684cfc-746c-4c02-b8b8-39f0b57d62b6","Type":"ContainerStarted","Data":"11dac6e02ff854d530898b164dad1d9c580c592f1e829414512071c4f63afe66"} Nov 22 07:35:11 crc kubenswrapper[4853]: I1122 07:35:11.707414 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7454b96578-h5674" event={"ID":"524f1308-44b0-4603-b612-eb02450cd46d","Type":"ContainerStarted","Data":"a0dbf3521977821924017867a580f9f239f22e14a5bae538e9c26f8cd2f95d2a"} Nov 22 07:35:11 crc kubenswrapper[4853]: I1122 07:35:11.707915 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-7454b96578-h5674" Nov 22 07:35:11 crc kubenswrapper[4853]: I1122 07:35:11.709115 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9ss44" event={"ID":"bfe4a5bb-7e11-46eb-8696-078e64db6a90","Type":"ContainerStarted","Data":"68dbd8842067872f5e3e15abbcadc8260d1db90a6ee6ad17709b5827c48683c2"} Nov 22 07:35:11 crc kubenswrapper[4853]: I1122 07:35:11.711469 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-767ccfd65f-pfmkd" event={"ID":"7ed40441-44d2-497f-93e7-d85116790d61","Type":"ContainerStarted","Data":"5a19ff15ef2ce03d52b55bec450bd737c40b857eb28ce1233024584a8c871ae4"} Nov 22 07:35:11 crc kubenswrapper[4853]: I1122 07:35:11.711955 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-767ccfd65f-pfmkd" Nov 22 07:35:11 crc kubenswrapper[4853]: I1122 07:35:11.715429 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-58f887965d-f4bvx" event={"ID":"798dacb1-9a2f-4f77-a55e-1f005447a5ec","Type":"ContainerStarted","Data":"6e47c7ad76d69cd386ad1ae1c6959144732995491d41789684a05ee40fc24d43"} Nov 22 07:35:11 crc kubenswrapper[4853]: I1122 07:35:11.716068 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-58f887965d-f4bvx" Nov 22 07:35:11 crc kubenswrapper[4853]: I1122 07:35:11.719258 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-78bd47f458-mxwrm" 
event={"ID":"05971821-7368-4352-8955-bd9432958c9b","Type":"ContainerStarted","Data":"eeda5be7dbe410acedd4480101f6d8e542ace3b6c9f3ca328520d620c5756074"} Nov 22 07:35:11 crc kubenswrapper[4853]: I1122 07:35:11.719286 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-78bd47f458-mxwrm" Nov 22 07:35:11 crc kubenswrapper[4853]: I1122 07:35:11.744096 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-99b499f4-km4bs" podStartSLOduration=3.884483952 podStartE2EDuration="1m4.74407655s" podCreationTimestamp="2025-11-22 07:34:07 +0000 UTC" firstStartedPulling="2025-11-22 07:34:09.717635318 +0000 UTC m=+1448.558257944" lastFinishedPulling="2025-11-22 07:35:10.577227916 +0000 UTC m=+1509.417850542" observedRunningTime="2025-11-22 07:35:11.735506298 +0000 UTC m=+1510.576128924" watchObservedRunningTime="2025-11-22 07:35:11.74407655 +0000 UTC m=+1510.584699176" Nov 22 07:35:11 crc kubenswrapper[4853]: E1122 07:35:11.756536 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:5d49d4594c66eda7b151746cc6e1d3c67c0129b4503eeb043a64ae8ec2da6a1b\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-54fc5f65b7-cm2jj" podUID="ac1b2ef9-7ff0-4a11-b8c6-89f6ed7c0dd4" Nov 22 07:35:11 crc kubenswrapper[4853]: I1122 07:35:11.790576 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-7454b96578-h5674" podStartSLOduration=4.020615151 podStartE2EDuration="1m4.790557302s" podCreationTimestamp="2025-11-22 07:34:07 +0000 UTC" firstStartedPulling="2025-11-22 07:34:09.797394747 +0000 UTC m=+1448.638017373" lastFinishedPulling="2025-11-22 07:35:10.567336898 +0000 UTC m=+1509.407959524" observedRunningTime="2025-11-22 07:35:11.786411719 +0000 UTC m=+1510.627034345" watchObservedRunningTime="2025-11-22 07:35:11.790557302 +0000 UTC m=+1510.631179928" Nov 22 07:35:11 crc kubenswrapper[4853]: I1122 07:35:11.872813 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-78bd47f458-mxwrm" podStartSLOduration=4.496607262 podStartE2EDuration="1m4.872789643s" podCreationTimestamp="2025-11-22 07:34:07 +0000 UTC" firstStartedPulling="2025-11-22 07:34:10.223477834 +0000 UTC m=+1449.064100450" lastFinishedPulling="2025-11-22 07:35:10.599660205 +0000 UTC m=+1509.440282831" observedRunningTime="2025-11-22 07:35:11.863029949 +0000 UTC m=+1510.703652575" watchObservedRunningTime="2025-11-22 07:35:11.872789643 +0000 UTC m=+1510.713412269" Nov 22 07:35:11 crc kubenswrapper[4853]: I1122 07:35:11.926323 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-58f887965d-f4bvx" podStartSLOduration=4.469570685 podStartE2EDuration="1m4.926285675s" podCreationTimestamp="2025-11-22 07:34:07 +0000 UTC" firstStartedPulling="2025-11-22 07:34:10.158342483 +0000 UTC m=+1448.998965109" lastFinishedPulling="2025-11-22 07:35:10.615057473 +0000 UTC m=+1509.455680099" observedRunningTime="2025-11-22 07:35:11.897505515 +0000 UTC m=+1510.738128141" watchObservedRunningTime="2025-11-22 07:35:11.926285675 +0000 UTC m=+1510.766908301" Nov 22 07:35:11 crc kubenswrapper[4853]: I1122 07:35:11.941605 4853 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-767ccfd65f-pfmkd" podStartSLOduration=3.968690075 podStartE2EDuration="1m4.9415823s" podCreationTimestamp="2025-11-22 07:34:07 +0000 UTC" firstStartedPulling="2025-11-22 07:34:09.664210926 +0000 UTC m=+1448.504833552" lastFinishedPulling="2025-11-22 07:35:10.637103151 +0000 UTC m=+1509.477725777" observedRunningTime="2025-11-22 07:35:11.935617409 +0000 UTC m=+1510.776240035" watchObservedRunningTime="2025-11-22 07:35:11.9415823 +0000 UTC m=+1510.782204936" Nov 22 07:35:12 crc kubenswrapper[4853]: I1122 07:35:12.730743 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vrxhq" event={"ID":"e5684cfc-746c-4c02-b8b8-39f0b57d62b6","Type":"ContainerDied","Data":"11dac6e02ff854d530898b164dad1d9c580c592f1e829414512071c4f63afe66"} Nov 22 07:35:12 crc kubenswrapper[4853]: I1122 07:35:12.730610 4853 generic.go:334] "Generic (PLEG): container finished" podID="e5684cfc-746c-4c02-b8b8-39f0b57d62b6" containerID="11dac6e02ff854d530898b164dad1d9c580c592f1e829414512071c4f63afe66" exitCode=0 Nov 22 07:35:12 crc kubenswrapper[4853]: I1122 07:35:12.736604 4853 generic.go:334] "Generic (PLEG): container finished" podID="bfe4a5bb-7e11-46eb-8696-078e64db6a90" containerID="57ebe24c123513ff2c1cc09915d27b6752d896d167a9cc17e00db521ad67d384" exitCode=0 Nov 22 07:35:12 crc kubenswrapper[4853]: I1122 07:35:12.740634 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9ss44" event={"ID":"bfe4a5bb-7e11-46eb-8696-078e64db6a90","Type":"ContainerDied","Data":"57ebe24c123513ff2c1cc09915d27b6752d896d167a9cc17e00db521ad67d384"} Nov 22 07:35:14 crc kubenswrapper[4853]: I1122 07:35:14.778522 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vrxhq" event={"ID":"e5684cfc-746c-4c02-b8b8-39f0b57d62b6","Type":"ContainerStarted","Data":"0b206cfa6fd067e71d5ae14053853e03b8d19965d548a4140bff517e36cfc52a"} Nov 22 07:35:14 crc kubenswrapper[4853]: I1122 07:35:14.789137 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9ss44" event={"ID":"bfe4a5bb-7e11-46eb-8696-078e64db6a90","Type":"ContainerStarted","Data":"c0e9954559b75a1efc0f2a4d3807fb120d6187f4d81ddc79a397b5d356eac879"} Nov 22 07:35:14 crc kubenswrapper[4853]: I1122 07:35:14.806326 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-vrxhq" podStartSLOduration=21.006797632 podStartE2EDuration="24.80630278s" podCreationTimestamp="2025-11-22 07:34:50 +0000 UTC" firstStartedPulling="2025-11-22 07:35:09.915985581 +0000 UTC m=+1508.756608197" lastFinishedPulling="2025-11-22 07:35:13.715490719 +0000 UTC m=+1512.556113345" observedRunningTime="2025-11-22 07:35:14.803693619 +0000 UTC m=+1513.644316255" watchObservedRunningTime="2025-11-22 07:35:14.80630278 +0000 UTC m=+1513.646925406" Nov 22 07:35:16 crc kubenswrapper[4853]: I1122 07:35:16.819439 4853 generic.go:334] "Generic (PLEG): container finished" podID="bfe4a5bb-7e11-46eb-8696-078e64db6a90" containerID="c0e9954559b75a1efc0f2a4d3807fb120d6187f4d81ddc79a397b5d356eac879" exitCode=0 Nov 22 07:35:16 crc kubenswrapper[4853]: I1122 07:35:16.819526 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9ss44" 
event={"ID":"bfe4a5bb-7e11-46eb-8696-078e64db6a90","Type":"ContainerDied","Data":"c0e9954559b75a1efc0f2a4d3807fb120d6187f4d81ddc79a397b5d356eac879"} Nov 22 07:35:17 crc kubenswrapper[4853]: I1122 07:35:17.830304 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-fdt65" event={"ID":"131c2522-8c48-4c18-9a39-99a66b87b9ed","Type":"ContainerStarted","Data":"af424557ab63ae7067b6eee457ca09f0915e8387b39ab2cc21611876473a4da5"} Nov 22 07:35:17 crc kubenswrapper[4853]: I1122 07:35:17.833307 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9ss44" event={"ID":"bfe4a5bb-7e11-46eb-8696-078e64db6a90","Type":"ContainerStarted","Data":"21d8f9bda86db63db21c8e2856b2e30f471edfd08fd737e3e4ba20d9447641bb"} Nov 22 07:35:17 crc kubenswrapper[4853]: I1122 07:35:17.865391 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-fdt65" podStartSLOduration=3.827583775 podStartE2EDuration="1m9.865374605s" podCreationTimestamp="2025-11-22 07:34:08 +0000 UTC" firstStartedPulling="2025-11-22 07:34:11.069137222 +0000 UTC m=+1449.909759848" lastFinishedPulling="2025-11-22 07:35:17.106928052 +0000 UTC m=+1515.947550678" observedRunningTime="2025-11-22 07:35:17.863477763 +0000 UTC m=+1516.704100389" watchObservedRunningTime="2025-11-22 07:35:17.865374605 +0000 UTC m=+1516.705997231" Nov 22 07:35:17 crc kubenswrapper[4853]: I1122 07:35:17.904937 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-9ss44" podStartSLOduration=5.361996575 podStartE2EDuration="9.904912857s" podCreationTimestamp="2025-11-22 07:35:08 +0000 UTC" firstStartedPulling="2025-11-22 07:35:12.745001583 +0000 UTC m=+1511.585624229" lastFinishedPulling="2025-11-22 07:35:17.287917885 +0000 UTC m=+1516.128540511" observedRunningTime="2025-11-22 07:35:17.899147941 +0000 UTC m=+1516.739770577" watchObservedRunningTime="2025-11-22 07:35:17.904912857 +0000 UTC m=+1516.745535483" Nov 22 07:35:17 crc kubenswrapper[4853]: I1122 07:35:17.918216 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-767ccfd65f-pfmkd" Nov 22 07:35:18 crc kubenswrapper[4853]: I1122 07:35:18.059647 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-99b499f4-km4bs" Nov 22 07:35:18 crc kubenswrapper[4853]: I1122 07:35:18.257306 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-598f69df5d-l654j" Nov 22 07:35:18 crc kubenswrapper[4853]: I1122 07:35:18.313096 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-56f54d6746-ww42j" Nov 22 07:35:18 crc kubenswrapper[4853]: I1122 07:35:18.381517 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-7454b96578-h5674" Nov 22 07:35:18 crc kubenswrapper[4853]: I1122 07:35:18.419888 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-58f887965d-f4bvx" Nov 22 07:35:18 crc kubenswrapper[4853]: I1122 07:35:18.486318 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/mariadb-operator-controller-manager-54b5986bb8-hbqk8" Nov 22 07:35:18 crc kubenswrapper[4853]: I1122 07:35:18.545928 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-78bd47f458-mxwrm" Nov 22 07:35:18 crc kubenswrapper[4853]: I1122 07:35:18.766927 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-54cfbf4c7d-vsftr" Nov 22 07:35:18 crc kubenswrapper[4853]: I1122 07:35:18.924069 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-8c7444f48-dgc8d" Nov 22 07:35:19 crc kubenswrapper[4853]: I1122 07:35:19.073480 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-5b797b8dff-5lw59" Nov 22 07:35:19 crc kubenswrapper[4853]: I1122 07:35:19.141434 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-b477b5977-7gkdk" Nov 22 07:35:19 crc kubenswrapper[4853]: I1122 07:35:19.172397 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-b4c496f69-wmm95" Nov 22 07:35:19 crc kubenswrapper[4853]: I1122 07:35:19.353234 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-9ss44" Nov 22 07:35:19 crc kubenswrapper[4853]: I1122 07:35:19.353989 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-9ss44" Nov 22 07:35:20 crc kubenswrapper[4853]: I1122 07:35:20.401118 4853 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-9ss44" podUID="bfe4a5bb-7e11-46eb-8696-078e64db6a90" containerName="registry-server" probeResult="failure" output=< Nov 22 07:35:20 crc kubenswrapper[4853]: timeout: failed to connect service ":50051" within 1s Nov 22 07:35:20 crc kubenswrapper[4853]: > Nov 22 07:35:21 crc kubenswrapper[4853]: I1122 07:35:21.039075 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-vrxhq" Nov 22 07:35:21 crc kubenswrapper[4853]: I1122 07:35:21.039152 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-vrxhq" Nov 22 07:35:22 crc kubenswrapper[4853]: I1122 07:35:22.100948 4853 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-vrxhq" podUID="e5684cfc-746c-4c02-b8b8-39f0b57d62b6" containerName="registry-server" probeResult="failure" output=< Nov 22 07:35:22 crc kubenswrapper[4853]: timeout: failed to connect service ":50051" within 1s Nov 22 07:35:22 crc kubenswrapper[4853]: > Nov 22 07:35:25 crc kubenswrapper[4853]: I1122 07:35:25.913412 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-54fc5f65b7-cm2jj" event={"ID":"ac1b2ef9-7ff0-4a11-b8c6-89f6ed7c0dd4","Type":"ContainerStarted","Data":"adfd3717d8d86c910456db171cbd1857ec8b224193c6974bf1218f0516ee7466"} Nov 22 07:35:25 crc kubenswrapper[4853]: I1122 07:35:25.915478 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-54fc5f65b7-cm2jj" Nov 22 07:35:25 crc 
kubenswrapper[4853]: I1122 07:35:25.935088 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-54fc5f65b7-cm2jj" podStartSLOduration=4.830990418 podStartE2EDuration="1m18.935062152s" podCreationTimestamp="2025-11-22 07:34:07 +0000 UTC" firstStartedPulling="2025-11-22 07:34:11.117504707 +0000 UTC m=+1449.958127333" lastFinishedPulling="2025-11-22 07:35:25.221576441 +0000 UTC m=+1524.062199067" observedRunningTime="2025-11-22 07:35:25.931708131 +0000 UTC m=+1524.772330757" watchObservedRunningTime="2025-11-22 07:35:25.935062152 +0000 UTC m=+1524.775684788" Nov 22 07:35:26 crc kubenswrapper[4853]: I1122 07:35:26.928524 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-8c6448b9f-fcl7j" event={"ID":"67499981-fc7e-4b6d-ab2b-46b528a165a5","Type":"ContainerStarted","Data":"13f49ef6829083f8550cfbe27e65b5576813e7057720e3c95beeec2f91328935"} Nov 22 07:35:26 crc kubenswrapper[4853]: I1122 07:35:26.929461 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-8c6448b9f-fcl7j" Nov 22 07:35:26 crc kubenswrapper[4853]: I1122 07:35:26.959067 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-8c6448b9f-fcl7j" podStartSLOduration=4.757913624 podStartE2EDuration="1m19.959044011s" podCreationTimestamp="2025-11-22 07:34:07 +0000 UTC" firstStartedPulling="2025-11-22 07:34:11.070360555 +0000 UTC m=+1449.910983181" lastFinishedPulling="2025-11-22 07:35:26.271490942 +0000 UTC m=+1525.112113568" observedRunningTime="2025-11-22 07:35:26.958279819 +0000 UTC m=+1525.798902455" watchObservedRunningTime="2025-11-22 07:35:26.959044011 +0000 UTC m=+1525.799666627" Nov 22 07:35:29 crc kubenswrapper[4853]: I1122 07:35:29.402645 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-9ss44" Nov 22 07:35:29 crc kubenswrapper[4853]: I1122 07:35:29.459681 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-9ss44" Nov 22 07:35:29 crc kubenswrapper[4853]: I1122 07:35:29.647028 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9ss44"] Nov 22 07:35:30 crc kubenswrapper[4853]: I1122 07:35:30.968555 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-9ss44" podUID="bfe4a5bb-7e11-46eb-8696-078e64db6a90" containerName="registry-server" containerID="cri-o://21d8f9bda86db63db21c8e2856b2e30f471edfd08fd737e3e4ba20d9447641bb" gracePeriod=2 Nov 22 07:35:31 crc kubenswrapper[4853]: I1122 07:35:31.097726 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-vrxhq" Nov 22 07:35:31 crc kubenswrapper[4853]: I1122 07:35:31.213149 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-vrxhq" Nov 22 07:35:31 crc kubenswrapper[4853]: I1122 07:35:31.607427 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9ss44" Nov 22 07:35:31 crc kubenswrapper[4853]: I1122 07:35:31.711053 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bfe4a5bb-7e11-46eb-8696-078e64db6a90-utilities\") pod \"bfe4a5bb-7e11-46eb-8696-078e64db6a90\" (UID: \"bfe4a5bb-7e11-46eb-8696-078e64db6a90\") " Nov 22 07:35:31 crc kubenswrapper[4853]: I1122 07:35:31.711249 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bfe4a5bb-7e11-46eb-8696-078e64db6a90-catalog-content\") pod \"bfe4a5bb-7e11-46eb-8696-078e64db6a90\" (UID: \"bfe4a5bb-7e11-46eb-8696-078e64db6a90\") " Nov 22 07:35:31 crc kubenswrapper[4853]: I1122 07:35:31.711420 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xjl6\" (UniqueName: \"kubernetes.io/projected/bfe4a5bb-7e11-46eb-8696-078e64db6a90-kube-api-access-9xjl6\") pod \"bfe4a5bb-7e11-46eb-8696-078e64db6a90\" (UID: \"bfe4a5bb-7e11-46eb-8696-078e64db6a90\") " Nov 22 07:35:31 crc kubenswrapper[4853]: I1122 07:35:31.711988 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bfe4a5bb-7e11-46eb-8696-078e64db6a90-utilities" (OuterVolumeSpecName: "utilities") pod "bfe4a5bb-7e11-46eb-8696-078e64db6a90" (UID: "bfe4a5bb-7e11-46eb-8696-078e64db6a90"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:35:31 crc kubenswrapper[4853]: I1122 07:35:31.712341 4853 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bfe4a5bb-7e11-46eb-8696-078e64db6a90-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 07:35:31 crc kubenswrapper[4853]: I1122 07:35:31.719063 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bfe4a5bb-7e11-46eb-8696-078e64db6a90-kube-api-access-9xjl6" (OuterVolumeSpecName: "kube-api-access-9xjl6") pod "bfe4a5bb-7e11-46eb-8696-078e64db6a90" (UID: "bfe4a5bb-7e11-46eb-8696-078e64db6a90"). InnerVolumeSpecName "kube-api-access-9xjl6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:35:31 crc kubenswrapper[4853]: I1122 07:35:31.803705 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bfe4a5bb-7e11-46eb-8696-078e64db6a90-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bfe4a5bb-7e11-46eb-8696-078e64db6a90" (UID: "bfe4a5bb-7e11-46eb-8696-078e64db6a90"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:35:31 crc kubenswrapper[4853]: I1122 07:35:31.815236 4853 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bfe4a5bb-7e11-46eb-8696-078e64db6a90-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 07:35:31 crc kubenswrapper[4853]: I1122 07:35:31.815286 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xjl6\" (UniqueName: \"kubernetes.io/projected/bfe4a5bb-7e11-46eb-8696-078e64db6a90-kube-api-access-9xjl6\") on node \"crc\" DevicePath \"\"" Nov 22 07:35:31 crc kubenswrapper[4853]: I1122 07:35:31.996568 4853 generic.go:334] "Generic (PLEG): container finished" podID="bfe4a5bb-7e11-46eb-8696-078e64db6a90" containerID="21d8f9bda86db63db21c8e2856b2e30f471edfd08fd737e3e4ba20d9447641bb" exitCode=0 Nov 22 07:35:32 crc kubenswrapper[4853]: I1122 07:35:31.996684 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9ss44" Nov 22 07:35:32 crc kubenswrapper[4853]: I1122 07:35:31.996704 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9ss44" event={"ID":"bfe4a5bb-7e11-46eb-8696-078e64db6a90","Type":"ContainerDied","Data":"21d8f9bda86db63db21c8e2856b2e30f471edfd08fd737e3e4ba20d9447641bb"} Nov 22 07:35:32 crc kubenswrapper[4853]: I1122 07:35:31.996796 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9ss44" event={"ID":"bfe4a5bb-7e11-46eb-8696-078e64db6a90","Type":"ContainerDied","Data":"68dbd8842067872f5e3e15abbcadc8260d1db90a6ee6ad17709b5827c48683c2"} Nov 22 07:35:32 crc kubenswrapper[4853]: I1122 07:35:31.996822 4853 scope.go:117] "RemoveContainer" containerID="21d8f9bda86db63db21c8e2856b2e30f471edfd08fd737e3e4ba20d9447641bb" Nov 22 07:35:32 crc kubenswrapper[4853]: I1122 07:35:32.033299 4853 scope.go:117] "RemoveContainer" containerID="c0e9954559b75a1efc0f2a4d3807fb120d6187f4d81ddc79a397b5d356eac879" Nov 22 07:35:32 crc kubenswrapper[4853]: I1122 07:35:32.037730 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9ss44"] Nov 22 07:35:32 crc kubenswrapper[4853]: I1122 07:35:32.049494 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-9ss44"] Nov 22 07:35:32 crc kubenswrapper[4853]: I1122 07:35:32.056069 4853 scope.go:117] "RemoveContainer" containerID="57ebe24c123513ff2c1cc09915d27b6752d896d167a9cc17e00db521ad67d384" Nov 22 07:35:32 crc kubenswrapper[4853]: I1122 07:35:32.095361 4853 scope.go:117] "RemoveContainer" containerID="21d8f9bda86db63db21c8e2856b2e30f471edfd08fd737e3e4ba20d9447641bb" Nov 22 07:35:32 crc kubenswrapper[4853]: E1122 07:35:32.096078 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"21d8f9bda86db63db21c8e2856b2e30f471edfd08fd737e3e4ba20d9447641bb\": container with ID starting with 21d8f9bda86db63db21c8e2856b2e30f471edfd08fd737e3e4ba20d9447641bb not found: ID does not exist" containerID="21d8f9bda86db63db21c8e2856b2e30f471edfd08fd737e3e4ba20d9447641bb" Nov 22 07:35:32 crc kubenswrapper[4853]: I1122 07:35:32.096135 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"21d8f9bda86db63db21c8e2856b2e30f471edfd08fd737e3e4ba20d9447641bb"} err="failed to get container status \"21d8f9bda86db63db21c8e2856b2e30f471edfd08fd737e3e4ba20d9447641bb\": 
rpc error: code = NotFound desc = could not find container \"21d8f9bda86db63db21c8e2856b2e30f471edfd08fd737e3e4ba20d9447641bb\": container with ID starting with 21d8f9bda86db63db21c8e2856b2e30f471edfd08fd737e3e4ba20d9447641bb not found: ID does not exist" Nov 22 07:35:32 crc kubenswrapper[4853]: I1122 07:35:32.096195 4853 scope.go:117] "RemoveContainer" containerID="c0e9954559b75a1efc0f2a4d3807fb120d6187f4d81ddc79a397b5d356eac879" Nov 22 07:35:32 crc kubenswrapper[4853]: E1122 07:35:32.096679 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c0e9954559b75a1efc0f2a4d3807fb120d6187f4d81ddc79a397b5d356eac879\": container with ID starting with c0e9954559b75a1efc0f2a4d3807fb120d6187f4d81ddc79a397b5d356eac879 not found: ID does not exist" containerID="c0e9954559b75a1efc0f2a4d3807fb120d6187f4d81ddc79a397b5d356eac879" Nov 22 07:35:32 crc kubenswrapper[4853]: I1122 07:35:32.096704 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c0e9954559b75a1efc0f2a4d3807fb120d6187f4d81ddc79a397b5d356eac879"} err="failed to get container status \"c0e9954559b75a1efc0f2a4d3807fb120d6187f4d81ddc79a397b5d356eac879\": rpc error: code = NotFound desc = could not find container \"c0e9954559b75a1efc0f2a4d3807fb120d6187f4d81ddc79a397b5d356eac879\": container with ID starting with c0e9954559b75a1efc0f2a4d3807fb120d6187f4d81ddc79a397b5d356eac879 not found: ID does not exist" Nov 22 07:35:32 crc kubenswrapper[4853]: I1122 07:35:32.096726 4853 scope.go:117] "RemoveContainer" containerID="57ebe24c123513ff2c1cc09915d27b6752d896d167a9cc17e00db521ad67d384" Nov 22 07:35:32 crc kubenswrapper[4853]: E1122 07:35:32.097365 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"57ebe24c123513ff2c1cc09915d27b6752d896d167a9cc17e00db521ad67d384\": container with ID starting with 57ebe24c123513ff2c1cc09915d27b6752d896d167a9cc17e00db521ad67d384 not found: ID does not exist" containerID="57ebe24c123513ff2c1cc09915d27b6752d896d167a9cc17e00db521ad67d384" Nov 22 07:35:32 crc kubenswrapper[4853]: I1122 07:35:32.097391 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"57ebe24c123513ff2c1cc09915d27b6752d896d167a9cc17e00db521ad67d384"} err="failed to get container status \"57ebe24c123513ff2c1cc09915d27b6752d896d167a9cc17e00db521ad67d384\": rpc error: code = NotFound desc = could not find container \"57ebe24c123513ff2c1cc09915d27b6752d896d167a9cc17e00db521ad67d384\": container with ID starting with 57ebe24c123513ff2c1cc09915d27b6752d896d167a9cc17e00db521ad67d384 not found: ID does not exist" Nov 22 07:35:32 crc kubenswrapper[4853]: I1122 07:35:32.447243 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vrxhq"] Nov 22 07:35:33 crc kubenswrapper[4853]: I1122 07:35:33.007051 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-vrxhq" podUID="e5684cfc-746c-4c02-b8b8-39f0b57d62b6" containerName="registry-server" containerID="cri-o://0b206cfa6fd067e71d5ae14053853e03b8d19965d548a4140bff517e36cfc52a" gracePeriod=2 Nov 22 07:35:33 crc kubenswrapper[4853]: I1122 07:35:33.474938 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vrxhq" Nov 22 07:35:33 crc kubenswrapper[4853]: I1122 07:35:33.557612 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e5684cfc-746c-4c02-b8b8-39f0b57d62b6-catalog-content\") pod \"e5684cfc-746c-4c02-b8b8-39f0b57d62b6\" (UID: \"e5684cfc-746c-4c02-b8b8-39f0b57d62b6\") " Nov 22 07:35:33 crc kubenswrapper[4853]: I1122 07:35:33.557820 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e5684cfc-746c-4c02-b8b8-39f0b57d62b6-utilities\") pod \"e5684cfc-746c-4c02-b8b8-39f0b57d62b6\" (UID: \"e5684cfc-746c-4c02-b8b8-39f0b57d62b6\") " Nov 22 07:35:33 crc kubenswrapper[4853]: I1122 07:35:33.557960 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mx4h8\" (UniqueName: \"kubernetes.io/projected/e5684cfc-746c-4c02-b8b8-39f0b57d62b6-kube-api-access-mx4h8\") pod \"e5684cfc-746c-4c02-b8b8-39f0b57d62b6\" (UID: \"e5684cfc-746c-4c02-b8b8-39f0b57d62b6\") " Nov 22 07:35:33 crc kubenswrapper[4853]: I1122 07:35:33.558779 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e5684cfc-746c-4c02-b8b8-39f0b57d62b6-utilities" (OuterVolumeSpecName: "utilities") pod "e5684cfc-746c-4c02-b8b8-39f0b57d62b6" (UID: "e5684cfc-746c-4c02-b8b8-39f0b57d62b6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:35:33 crc kubenswrapper[4853]: I1122 07:35:33.566653 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5684cfc-746c-4c02-b8b8-39f0b57d62b6-kube-api-access-mx4h8" (OuterVolumeSpecName: "kube-api-access-mx4h8") pod "e5684cfc-746c-4c02-b8b8-39f0b57d62b6" (UID: "e5684cfc-746c-4c02-b8b8-39f0b57d62b6"). InnerVolumeSpecName "kube-api-access-mx4h8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:35:33 crc kubenswrapper[4853]: I1122 07:35:33.578305 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e5684cfc-746c-4c02-b8b8-39f0b57d62b6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e5684cfc-746c-4c02-b8b8-39f0b57d62b6" (UID: "e5684cfc-746c-4c02-b8b8-39f0b57d62b6"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:35:33 crc kubenswrapper[4853]: I1122 07:35:33.660182 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mx4h8\" (UniqueName: \"kubernetes.io/projected/e5684cfc-746c-4c02-b8b8-39f0b57d62b6-kube-api-access-mx4h8\") on node \"crc\" DevicePath \"\"" Nov 22 07:35:33 crc kubenswrapper[4853]: I1122 07:35:33.660245 4853 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e5684cfc-746c-4c02-b8b8-39f0b57d62b6-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 07:35:33 crc kubenswrapper[4853]: I1122 07:35:33.660256 4853 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e5684cfc-746c-4c02-b8b8-39f0b57d62b6-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 07:35:33 crc kubenswrapper[4853]: I1122 07:35:33.764728 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bfe4a5bb-7e11-46eb-8696-078e64db6a90" path="/var/lib/kubelet/pods/bfe4a5bb-7e11-46eb-8696-078e64db6a90/volumes" Nov 22 07:35:34 crc kubenswrapper[4853]: I1122 07:35:34.026596 4853 generic.go:334] "Generic (PLEG): container finished" podID="e5684cfc-746c-4c02-b8b8-39f0b57d62b6" containerID="0b206cfa6fd067e71d5ae14053853e03b8d19965d548a4140bff517e36cfc52a" exitCode=0 Nov 22 07:35:34 crc kubenswrapper[4853]: I1122 07:35:34.026657 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vrxhq" event={"ID":"e5684cfc-746c-4c02-b8b8-39f0b57d62b6","Type":"ContainerDied","Data":"0b206cfa6fd067e71d5ae14053853e03b8d19965d548a4140bff517e36cfc52a"} Nov 22 07:35:34 crc kubenswrapper[4853]: I1122 07:35:34.026726 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vrxhq" event={"ID":"e5684cfc-746c-4c02-b8b8-39f0b57d62b6","Type":"ContainerDied","Data":"04382c3a4f626f3e2d2933670300e6a9d21b6049799ab8e468e2acf299a8c3c6"} Nov 22 07:35:34 crc kubenswrapper[4853]: I1122 07:35:34.026731 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vrxhq" Nov 22 07:35:34 crc kubenswrapper[4853]: I1122 07:35:34.026790 4853 scope.go:117] "RemoveContainer" containerID="0b206cfa6fd067e71d5ae14053853e03b8d19965d548a4140bff517e36cfc52a" Nov 22 07:35:34 crc kubenswrapper[4853]: I1122 07:35:34.058904 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vrxhq"] Nov 22 07:35:34 crc kubenswrapper[4853]: I1122 07:35:34.062586 4853 scope.go:117] "RemoveContainer" containerID="11dac6e02ff854d530898b164dad1d9c580c592f1e829414512071c4f63afe66" Nov 22 07:35:34 crc kubenswrapper[4853]: I1122 07:35:34.071796 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-vrxhq"] Nov 22 07:35:34 crc kubenswrapper[4853]: I1122 07:35:34.090629 4853 scope.go:117] "RemoveContainer" containerID="6cf1c2a16fb56b218b4476c00a9e612ad06f52489138cda1f74194f9d7422db3" Nov 22 07:35:34 crc kubenswrapper[4853]: I1122 07:35:34.127354 4853 scope.go:117] "RemoveContainer" containerID="0b206cfa6fd067e71d5ae14053853e03b8d19965d548a4140bff517e36cfc52a" Nov 22 07:35:34 crc kubenswrapper[4853]: E1122 07:35:34.127985 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0b206cfa6fd067e71d5ae14053853e03b8d19965d548a4140bff517e36cfc52a\": container with ID starting with 0b206cfa6fd067e71d5ae14053853e03b8d19965d548a4140bff517e36cfc52a not found: ID does not exist" containerID="0b206cfa6fd067e71d5ae14053853e03b8d19965d548a4140bff517e36cfc52a" Nov 22 07:35:34 crc kubenswrapper[4853]: I1122 07:35:34.128024 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0b206cfa6fd067e71d5ae14053853e03b8d19965d548a4140bff517e36cfc52a"} err="failed to get container status \"0b206cfa6fd067e71d5ae14053853e03b8d19965d548a4140bff517e36cfc52a\": rpc error: code = NotFound desc = could not find container \"0b206cfa6fd067e71d5ae14053853e03b8d19965d548a4140bff517e36cfc52a\": container with ID starting with 0b206cfa6fd067e71d5ae14053853e03b8d19965d548a4140bff517e36cfc52a not found: ID does not exist" Nov 22 07:35:34 crc kubenswrapper[4853]: I1122 07:35:34.128050 4853 scope.go:117] "RemoveContainer" containerID="11dac6e02ff854d530898b164dad1d9c580c592f1e829414512071c4f63afe66" Nov 22 07:35:34 crc kubenswrapper[4853]: E1122 07:35:34.128563 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"11dac6e02ff854d530898b164dad1d9c580c592f1e829414512071c4f63afe66\": container with ID starting with 11dac6e02ff854d530898b164dad1d9c580c592f1e829414512071c4f63afe66 not found: ID does not exist" containerID="11dac6e02ff854d530898b164dad1d9c580c592f1e829414512071c4f63afe66" Nov 22 07:35:34 crc kubenswrapper[4853]: I1122 07:35:34.128609 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"11dac6e02ff854d530898b164dad1d9c580c592f1e829414512071c4f63afe66"} err="failed to get container status \"11dac6e02ff854d530898b164dad1d9c580c592f1e829414512071c4f63afe66\": rpc error: code = NotFound desc = could not find container \"11dac6e02ff854d530898b164dad1d9c580c592f1e829414512071c4f63afe66\": container with ID starting with 11dac6e02ff854d530898b164dad1d9c580c592f1e829414512071c4f63afe66 not found: ID does not exist" Nov 22 07:35:34 crc kubenswrapper[4853]: I1122 07:35:34.128623 4853 scope.go:117] "RemoveContainer" 
containerID="6cf1c2a16fb56b218b4476c00a9e612ad06f52489138cda1f74194f9d7422db3" Nov 22 07:35:34 crc kubenswrapper[4853]: E1122 07:35:34.129103 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6cf1c2a16fb56b218b4476c00a9e612ad06f52489138cda1f74194f9d7422db3\": container with ID starting with 6cf1c2a16fb56b218b4476c00a9e612ad06f52489138cda1f74194f9d7422db3 not found: ID does not exist" containerID="6cf1c2a16fb56b218b4476c00a9e612ad06f52489138cda1f74194f9d7422db3" Nov 22 07:35:34 crc kubenswrapper[4853]: I1122 07:35:34.129132 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6cf1c2a16fb56b218b4476c00a9e612ad06f52489138cda1f74194f9d7422db3"} err="failed to get container status \"6cf1c2a16fb56b218b4476c00a9e612ad06f52489138cda1f74194f9d7422db3\": rpc error: code = NotFound desc = could not find container \"6cf1c2a16fb56b218b4476c00a9e612ad06f52489138cda1f74194f9d7422db3\": container with ID starting with 6cf1c2a16fb56b218b4476c00a9e612ad06f52489138cda1f74194f9d7422db3 not found: ID does not exist" Nov 22 07:35:35 crc kubenswrapper[4853]: I1122 07:35:35.776051 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e5684cfc-746c-4c02-b8b8-39f0b57d62b6" path="/var/lib/kubelet/pods/e5684cfc-746c-4c02-b8b8-39f0b57d62b6/volumes" Nov 22 07:35:38 crc kubenswrapper[4853]: I1122 07:35:38.867360 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-54fc5f65b7-cm2jj" Nov 22 07:35:39 crc kubenswrapper[4853]: I1122 07:35:39.191633 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-8c6448b9f-fcl7j" Nov 22 07:35:54 crc kubenswrapper[4853]: I1122 07:35:54.091657 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-kcbgt"] Nov 22 07:35:54 crc kubenswrapper[4853]: E1122 07:35:54.097723 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5684cfc-746c-4c02-b8b8-39f0b57d62b6" containerName="extract-content" Nov 22 07:35:54 crc kubenswrapper[4853]: I1122 07:35:54.098027 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5684cfc-746c-4c02-b8b8-39f0b57d62b6" containerName="extract-content" Nov 22 07:35:54 crc kubenswrapper[4853]: E1122 07:35:54.098069 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bfe4a5bb-7e11-46eb-8696-078e64db6a90" containerName="extract-utilities" Nov 22 07:35:54 crc kubenswrapper[4853]: I1122 07:35:54.098078 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="bfe4a5bb-7e11-46eb-8696-078e64db6a90" containerName="extract-utilities" Nov 22 07:35:54 crc kubenswrapper[4853]: E1122 07:35:54.098108 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5684cfc-746c-4c02-b8b8-39f0b57d62b6" containerName="extract-utilities" Nov 22 07:35:54 crc kubenswrapper[4853]: I1122 07:35:54.098117 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5684cfc-746c-4c02-b8b8-39f0b57d62b6" containerName="extract-utilities" Nov 22 07:35:54 crc kubenswrapper[4853]: E1122 07:35:54.098133 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bfe4a5bb-7e11-46eb-8696-078e64db6a90" containerName="extract-content" Nov 22 07:35:54 crc kubenswrapper[4853]: I1122 07:35:54.098143 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="bfe4a5bb-7e11-46eb-8696-078e64db6a90" 
containerName="extract-content" Nov 22 07:35:54 crc kubenswrapper[4853]: E1122 07:35:54.098169 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bfe4a5bb-7e11-46eb-8696-078e64db6a90" containerName="registry-server" Nov 22 07:35:54 crc kubenswrapper[4853]: I1122 07:35:54.098176 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="bfe4a5bb-7e11-46eb-8696-078e64db6a90" containerName="registry-server" Nov 22 07:35:54 crc kubenswrapper[4853]: E1122 07:35:54.098193 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5684cfc-746c-4c02-b8b8-39f0b57d62b6" containerName="registry-server" Nov 22 07:35:54 crc kubenswrapper[4853]: I1122 07:35:54.098201 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5684cfc-746c-4c02-b8b8-39f0b57d62b6" containerName="registry-server" Nov 22 07:35:54 crc kubenswrapper[4853]: I1122 07:35:54.098443 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="bfe4a5bb-7e11-46eb-8696-078e64db6a90" containerName="registry-server" Nov 22 07:35:54 crc kubenswrapper[4853]: I1122 07:35:54.098491 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5684cfc-746c-4c02-b8b8-39f0b57d62b6" containerName="registry-server" Nov 22 07:35:54 crc kubenswrapper[4853]: I1122 07:35:54.107638 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-kcbgt" Nov 22 07:35:54 crc kubenswrapper[4853]: I1122 07:35:54.114496 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-lb5tz" Nov 22 07:35:54 crc kubenswrapper[4853]: I1122 07:35:54.114867 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Nov 22 07:35:54 crc kubenswrapper[4853]: I1122 07:35:54.115009 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Nov 22 07:35:54 crc kubenswrapper[4853]: I1122 07:35:54.115089 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-kcbgt"] Nov 22 07:35:54 crc kubenswrapper[4853]: I1122 07:35:54.115143 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Nov 22 07:35:54 crc kubenswrapper[4853]: I1122 07:35:54.166185 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-j62dc"] Nov 22 07:35:54 crc kubenswrapper[4853]: I1122 07:35:54.175771 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-j62dc" Nov 22 07:35:54 crc kubenswrapper[4853]: I1122 07:35:54.178245 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rphn6\" (UniqueName: \"kubernetes.io/projected/f4fdb834-8a6e-4b2c-8bda-99753119f475-kube-api-access-rphn6\") pod \"dnsmasq-dns-675f4bcbfc-kcbgt\" (UID: \"f4fdb834-8a6e-4b2c-8bda-99753119f475\") " pod="openstack/dnsmasq-dns-675f4bcbfc-kcbgt" Nov 22 07:35:54 crc kubenswrapper[4853]: I1122 07:35:54.178351 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8n59s\" (UniqueName: \"kubernetes.io/projected/718da1d0-2bf3-40ca-87a5-5e7085c281cd-kube-api-access-8n59s\") pod \"dnsmasq-dns-78dd6ddcc-j62dc\" (UID: \"718da1d0-2bf3-40ca-87a5-5e7085c281cd\") " pod="openstack/dnsmasq-dns-78dd6ddcc-j62dc" Nov 22 07:35:54 crc kubenswrapper[4853]: I1122 07:35:54.178375 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/718da1d0-2bf3-40ca-87a5-5e7085c281cd-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-j62dc\" (UID: \"718da1d0-2bf3-40ca-87a5-5e7085c281cd\") " pod="openstack/dnsmasq-dns-78dd6ddcc-j62dc" Nov 22 07:35:54 crc kubenswrapper[4853]: I1122 07:35:54.178408 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/718da1d0-2bf3-40ca-87a5-5e7085c281cd-config\") pod \"dnsmasq-dns-78dd6ddcc-j62dc\" (UID: \"718da1d0-2bf3-40ca-87a5-5e7085c281cd\") " pod="openstack/dnsmasq-dns-78dd6ddcc-j62dc" Nov 22 07:35:54 crc kubenswrapper[4853]: I1122 07:35:54.178425 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4fdb834-8a6e-4b2c-8bda-99753119f475-config\") pod \"dnsmasq-dns-675f4bcbfc-kcbgt\" (UID: \"f4fdb834-8a6e-4b2c-8bda-99753119f475\") " pod="openstack/dnsmasq-dns-675f4bcbfc-kcbgt" Nov 22 07:35:54 crc kubenswrapper[4853]: I1122 07:35:54.178496 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-j62dc"] Nov 22 07:35:54 crc kubenswrapper[4853]: I1122 07:35:54.179905 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Nov 22 07:35:54 crc kubenswrapper[4853]: I1122 07:35:54.281408 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rphn6\" (UniqueName: \"kubernetes.io/projected/f4fdb834-8a6e-4b2c-8bda-99753119f475-kube-api-access-rphn6\") pod \"dnsmasq-dns-675f4bcbfc-kcbgt\" (UID: \"f4fdb834-8a6e-4b2c-8bda-99753119f475\") " pod="openstack/dnsmasq-dns-675f4bcbfc-kcbgt" Nov 22 07:35:54 crc kubenswrapper[4853]: I1122 07:35:54.281608 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8n59s\" (UniqueName: \"kubernetes.io/projected/718da1d0-2bf3-40ca-87a5-5e7085c281cd-kube-api-access-8n59s\") pod \"dnsmasq-dns-78dd6ddcc-j62dc\" (UID: \"718da1d0-2bf3-40ca-87a5-5e7085c281cd\") " pod="openstack/dnsmasq-dns-78dd6ddcc-j62dc" Nov 22 07:35:54 crc kubenswrapper[4853]: I1122 07:35:54.281646 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/718da1d0-2bf3-40ca-87a5-5e7085c281cd-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-j62dc\" (UID: \"718da1d0-2bf3-40ca-87a5-5e7085c281cd\") " 
pod="openstack/dnsmasq-dns-78dd6ddcc-j62dc" Nov 22 07:35:54 crc kubenswrapper[4853]: I1122 07:35:54.281692 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/718da1d0-2bf3-40ca-87a5-5e7085c281cd-config\") pod \"dnsmasq-dns-78dd6ddcc-j62dc\" (UID: \"718da1d0-2bf3-40ca-87a5-5e7085c281cd\") " pod="openstack/dnsmasq-dns-78dd6ddcc-j62dc" Nov 22 07:35:54 crc kubenswrapper[4853]: I1122 07:35:54.281722 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4fdb834-8a6e-4b2c-8bda-99753119f475-config\") pod \"dnsmasq-dns-675f4bcbfc-kcbgt\" (UID: \"f4fdb834-8a6e-4b2c-8bda-99753119f475\") " pod="openstack/dnsmasq-dns-675f4bcbfc-kcbgt" Nov 22 07:35:54 crc kubenswrapper[4853]: I1122 07:35:54.283420 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4fdb834-8a6e-4b2c-8bda-99753119f475-config\") pod \"dnsmasq-dns-675f4bcbfc-kcbgt\" (UID: \"f4fdb834-8a6e-4b2c-8bda-99753119f475\") " pod="openstack/dnsmasq-dns-675f4bcbfc-kcbgt" Nov 22 07:35:54 crc kubenswrapper[4853]: I1122 07:35:54.284677 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/718da1d0-2bf3-40ca-87a5-5e7085c281cd-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-j62dc\" (UID: \"718da1d0-2bf3-40ca-87a5-5e7085c281cd\") " pod="openstack/dnsmasq-dns-78dd6ddcc-j62dc" Nov 22 07:35:54 crc kubenswrapper[4853]: I1122 07:35:54.285369 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/718da1d0-2bf3-40ca-87a5-5e7085c281cd-config\") pod \"dnsmasq-dns-78dd6ddcc-j62dc\" (UID: \"718da1d0-2bf3-40ca-87a5-5e7085c281cd\") " pod="openstack/dnsmasq-dns-78dd6ddcc-j62dc" Nov 22 07:35:54 crc kubenswrapper[4853]: I1122 07:35:54.305996 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rphn6\" (UniqueName: \"kubernetes.io/projected/f4fdb834-8a6e-4b2c-8bda-99753119f475-kube-api-access-rphn6\") pod \"dnsmasq-dns-675f4bcbfc-kcbgt\" (UID: \"f4fdb834-8a6e-4b2c-8bda-99753119f475\") " pod="openstack/dnsmasq-dns-675f4bcbfc-kcbgt" Nov 22 07:35:54 crc kubenswrapper[4853]: I1122 07:35:54.307712 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8n59s\" (UniqueName: \"kubernetes.io/projected/718da1d0-2bf3-40ca-87a5-5e7085c281cd-kube-api-access-8n59s\") pod \"dnsmasq-dns-78dd6ddcc-j62dc\" (UID: \"718da1d0-2bf3-40ca-87a5-5e7085c281cd\") " pod="openstack/dnsmasq-dns-78dd6ddcc-j62dc" Nov 22 07:35:54 crc kubenswrapper[4853]: I1122 07:35:54.448374 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-kcbgt" Nov 22 07:35:54 crc kubenswrapper[4853]: I1122 07:35:54.502503 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-j62dc" Nov 22 07:35:54 crc kubenswrapper[4853]: I1122 07:35:54.973256 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-kcbgt"] Nov 22 07:35:55 crc kubenswrapper[4853]: I1122 07:35:55.104773 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-j62dc"] Nov 22 07:35:55 crc kubenswrapper[4853]: W1122 07:35:55.110881 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod718da1d0_2bf3_40ca_87a5_5e7085c281cd.slice/crio-c5b5b460d4d88ff43a97465e433b8e8616115db9c612cb23ef136e303bd4a42b WatchSource:0}: Error finding container c5b5b460d4d88ff43a97465e433b8e8616115db9c612cb23ef136e303bd4a42b: Status 404 returned error can't find the container with id c5b5b460d4d88ff43a97465e433b8e8616115db9c612cb23ef136e303bd4a42b Nov 22 07:35:55 crc kubenswrapper[4853]: I1122 07:35:55.232583 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-j62dc" event={"ID":"718da1d0-2bf3-40ca-87a5-5e7085c281cd","Type":"ContainerStarted","Data":"c5b5b460d4d88ff43a97465e433b8e8616115db9c612cb23ef136e303bd4a42b"} Nov 22 07:35:55 crc kubenswrapper[4853]: I1122 07:35:55.234382 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-kcbgt" event={"ID":"f4fdb834-8a6e-4b2c-8bda-99753119f475","Type":"ContainerStarted","Data":"3d434d6ae2586f59775c280ba0c333592ad7d0434bf2e8586b4607e727ed884f"} Nov 22 07:35:57 crc kubenswrapper[4853]: I1122 07:35:57.353898 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-kcbgt"] Nov 22 07:35:57 crc kubenswrapper[4853]: I1122 07:35:57.388935 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-xsw8l"] Nov 22 07:35:57 crc kubenswrapper[4853]: I1122 07:35:57.391687 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-xsw8l" Nov 22 07:35:57 crc kubenswrapper[4853]: I1122 07:35:57.408131 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-xsw8l"] Nov 22 07:35:57 crc kubenswrapper[4853]: I1122 07:35:57.475841 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/89c6d393-491f-477d-8d77-5a14ae67ed3b-dns-svc\") pod \"dnsmasq-dns-666b6646f7-xsw8l\" (UID: \"89c6d393-491f-477d-8d77-5a14ae67ed3b\") " pod="openstack/dnsmasq-dns-666b6646f7-xsw8l" Nov 22 07:35:57 crc kubenswrapper[4853]: I1122 07:35:57.476562 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4h8m4\" (UniqueName: \"kubernetes.io/projected/89c6d393-491f-477d-8d77-5a14ae67ed3b-kube-api-access-4h8m4\") pod \"dnsmasq-dns-666b6646f7-xsw8l\" (UID: \"89c6d393-491f-477d-8d77-5a14ae67ed3b\") " pod="openstack/dnsmasq-dns-666b6646f7-xsw8l" Nov 22 07:35:57 crc kubenswrapper[4853]: I1122 07:35:57.476631 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89c6d393-491f-477d-8d77-5a14ae67ed3b-config\") pod \"dnsmasq-dns-666b6646f7-xsw8l\" (UID: \"89c6d393-491f-477d-8d77-5a14ae67ed3b\") " pod="openstack/dnsmasq-dns-666b6646f7-xsw8l" Nov 22 07:35:57 crc kubenswrapper[4853]: I1122 07:35:57.583179 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4h8m4\" (UniqueName: \"kubernetes.io/projected/89c6d393-491f-477d-8d77-5a14ae67ed3b-kube-api-access-4h8m4\") pod \"dnsmasq-dns-666b6646f7-xsw8l\" (UID: \"89c6d393-491f-477d-8d77-5a14ae67ed3b\") " pod="openstack/dnsmasq-dns-666b6646f7-xsw8l" Nov 22 07:35:57 crc kubenswrapper[4853]: I1122 07:35:57.583319 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89c6d393-491f-477d-8d77-5a14ae67ed3b-config\") pod \"dnsmasq-dns-666b6646f7-xsw8l\" (UID: \"89c6d393-491f-477d-8d77-5a14ae67ed3b\") " pod="openstack/dnsmasq-dns-666b6646f7-xsw8l" Nov 22 07:35:57 crc kubenswrapper[4853]: I1122 07:35:57.583572 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/89c6d393-491f-477d-8d77-5a14ae67ed3b-dns-svc\") pod \"dnsmasq-dns-666b6646f7-xsw8l\" (UID: \"89c6d393-491f-477d-8d77-5a14ae67ed3b\") " pod="openstack/dnsmasq-dns-666b6646f7-xsw8l" Nov 22 07:35:57 crc kubenswrapper[4853]: I1122 07:35:57.584634 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89c6d393-491f-477d-8d77-5a14ae67ed3b-config\") pod \"dnsmasq-dns-666b6646f7-xsw8l\" (UID: \"89c6d393-491f-477d-8d77-5a14ae67ed3b\") " pod="openstack/dnsmasq-dns-666b6646f7-xsw8l" Nov 22 07:35:57 crc kubenswrapper[4853]: I1122 07:35:57.584646 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/89c6d393-491f-477d-8d77-5a14ae67ed3b-dns-svc\") pod \"dnsmasq-dns-666b6646f7-xsw8l\" (UID: \"89c6d393-491f-477d-8d77-5a14ae67ed3b\") " pod="openstack/dnsmasq-dns-666b6646f7-xsw8l" Nov 22 07:35:57 crc kubenswrapper[4853]: I1122 07:35:57.608423 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4h8m4\" (UniqueName: 
\"kubernetes.io/projected/89c6d393-491f-477d-8d77-5a14ae67ed3b-kube-api-access-4h8m4\") pod \"dnsmasq-dns-666b6646f7-xsw8l\" (UID: \"89c6d393-491f-477d-8d77-5a14ae67ed3b\") " pod="openstack/dnsmasq-dns-666b6646f7-xsw8l" Nov 22 07:35:57 crc kubenswrapper[4853]: I1122 07:35:57.737560 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-xsw8l" Nov 22 07:35:57 crc kubenswrapper[4853]: I1122 07:35:57.793231 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-j62dc"] Nov 22 07:35:57 crc kubenswrapper[4853]: I1122 07:35:57.821426 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-fwph9"] Nov 22 07:35:57 crc kubenswrapper[4853]: I1122 07:35:57.836709 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-fwph9" Nov 22 07:35:57 crc kubenswrapper[4853]: I1122 07:35:57.839862 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-fwph9"] Nov 22 07:35:57 crc kubenswrapper[4853]: I1122 07:35:57.894722 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3cb42f37-eef6-4874-acad-7bcf2dd29078-config\") pod \"dnsmasq-dns-57d769cc4f-fwph9\" (UID: \"3cb42f37-eef6-4874-acad-7bcf2dd29078\") " pod="openstack/dnsmasq-dns-57d769cc4f-fwph9" Nov 22 07:35:57 crc kubenswrapper[4853]: I1122 07:35:57.894826 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3cb42f37-eef6-4874-acad-7bcf2dd29078-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-fwph9\" (UID: \"3cb42f37-eef6-4874-acad-7bcf2dd29078\") " pod="openstack/dnsmasq-dns-57d769cc4f-fwph9" Nov 22 07:35:57 crc kubenswrapper[4853]: I1122 07:35:57.894867 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9gff7\" (UniqueName: \"kubernetes.io/projected/3cb42f37-eef6-4874-acad-7bcf2dd29078-kube-api-access-9gff7\") pod \"dnsmasq-dns-57d769cc4f-fwph9\" (UID: \"3cb42f37-eef6-4874-acad-7bcf2dd29078\") " pod="openstack/dnsmasq-dns-57d769cc4f-fwph9" Nov 22 07:35:58 crc kubenswrapper[4853]: I1122 07:35:58.000945 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3cb42f37-eef6-4874-acad-7bcf2dd29078-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-fwph9\" (UID: \"3cb42f37-eef6-4874-acad-7bcf2dd29078\") " pod="openstack/dnsmasq-dns-57d769cc4f-fwph9" Nov 22 07:35:58 crc kubenswrapper[4853]: I1122 07:35:58.001086 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9gff7\" (UniqueName: \"kubernetes.io/projected/3cb42f37-eef6-4874-acad-7bcf2dd29078-kube-api-access-9gff7\") pod \"dnsmasq-dns-57d769cc4f-fwph9\" (UID: \"3cb42f37-eef6-4874-acad-7bcf2dd29078\") " pod="openstack/dnsmasq-dns-57d769cc4f-fwph9" Nov 22 07:35:58 crc kubenswrapper[4853]: I1122 07:35:58.001575 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3cb42f37-eef6-4874-acad-7bcf2dd29078-config\") pod \"dnsmasq-dns-57d769cc4f-fwph9\" (UID: \"3cb42f37-eef6-4874-acad-7bcf2dd29078\") " pod="openstack/dnsmasq-dns-57d769cc4f-fwph9" Nov 22 07:35:58 crc kubenswrapper[4853]: I1122 07:35:58.002377 4853 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3cb42f37-eef6-4874-acad-7bcf2dd29078-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-fwph9\" (UID: \"3cb42f37-eef6-4874-acad-7bcf2dd29078\") " pod="openstack/dnsmasq-dns-57d769cc4f-fwph9" Nov 22 07:35:58 crc kubenswrapper[4853]: I1122 07:35:58.002878 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3cb42f37-eef6-4874-acad-7bcf2dd29078-config\") pod \"dnsmasq-dns-57d769cc4f-fwph9\" (UID: \"3cb42f37-eef6-4874-acad-7bcf2dd29078\") " pod="openstack/dnsmasq-dns-57d769cc4f-fwph9" Nov 22 07:35:58 crc kubenswrapper[4853]: I1122 07:35:58.077165 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9gff7\" (UniqueName: \"kubernetes.io/projected/3cb42f37-eef6-4874-acad-7bcf2dd29078-kube-api-access-9gff7\") pod \"dnsmasq-dns-57d769cc4f-fwph9\" (UID: \"3cb42f37-eef6-4874-acad-7bcf2dd29078\") " pod="openstack/dnsmasq-dns-57d769cc4f-fwph9" Nov 22 07:35:58 crc kubenswrapper[4853]: I1122 07:35:58.283537 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-fwph9" Nov 22 07:35:58 crc kubenswrapper[4853]: I1122 07:35:58.497268 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-xsw8l"] Nov 22 07:35:58 crc kubenswrapper[4853]: I1122 07:35:58.559279 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Nov 22 07:35:58 crc kubenswrapper[4853]: I1122 07:35:58.562013 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 22 07:35:58 crc kubenswrapper[4853]: I1122 07:35:58.571674 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Nov 22 07:35:58 crc kubenswrapper[4853]: I1122 07:35:58.571996 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-tjmbv" Nov 22 07:35:58 crc kubenswrapper[4853]: I1122 07:35:58.572555 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Nov 22 07:35:58 crc kubenswrapper[4853]: I1122 07:35:58.572708 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Nov 22 07:35:58 crc kubenswrapper[4853]: I1122 07:35:58.574599 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Nov 22 07:35:58 crc kubenswrapper[4853]: I1122 07:35:58.574799 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Nov 22 07:35:58 crc kubenswrapper[4853]: I1122 07:35:58.574890 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Nov 22 07:35:58 crc kubenswrapper[4853]: I1122 07:35:58.599013 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 22 07:35:58 crc kubenswrapper[4853]: I1122 07:35:58.729648 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6qmpc\" (UniqueName: \"kubernetes.io/projected/2eadd806-7143-46ba-9e49-f19ac0bd52bd-kube-api-access-6qmpc\") pod \"rabbitmq-server-0\" (UID: \"2eadd806-7143-46ba-9e49-f19ac0bd52bd\") " pod="openstack/rabbitmq-server-0" Nov 22 07:35:58 crc kubenswrapper[4853]: I1122 07:35:58.729760 4853 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2eadd806-7143-46ba-9e49-f19ac0bd52bd-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"2eadd806-7143-46ba-9e49-f19ac0bd52bd\") " pod="openstack/rabbitmq-server-0" Nov 22 07:35:58 crc kubenswrapper[4853]: I1122 07:35:58.729809 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-server-0\" (UID: \"2eadd806-7143-46ba-9e49-f19ac0bd52bd\") " pod="openstack/rabbitmq-server-0" Nov 22 07:35:58 crc kubenswrapper[4853]: I1122 07:35:58.729980 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2eadd806-7143-46ba-9e49-f19ac0bd52bd-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"2eadd806-7143-46ba-9e49-f19ac0bd52bd\") " pod="openstack/rabbitmq-server-0" Nov 22 07:35:58 crc kubenswrapper[4853]: I1122 07:35:58.730004 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2eadd806-7143-46ba-9e49-f19ac0bd52bd-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"2eadd806-7143-46ba-9e49-f19ac0bd52bd\") " pod="openstack/rabbitmq-server-0" Nov 22 07:35:58 crc kubenswrapper[4853]: I1122 07:35:58.730049 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2eadd806-7143-46ba-9e49-f19ac0bd52bd-server-conf\") pod \"rabbitmq-server-0\" (UID: \"2eadd806-7143-46ba-9e49-f19ac0bd52bd\") " pod="openstack/rabbitmq-server-0" Nov 22 07:35:58 crc kubenswrapper[4853]: I1122 07:35:58.730068 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2eadd806-7143-46ba-9e49-f19ac0bd52bd-config-data\") pod \"rabbitmq-server-0\" (UID: \"2eadd806-7143-46ba-9e49-f19ac0bd52bd\") " pod="openstack/rabbitmq-server-0" Nov 22 07:35:58 crc kubenswrapper[4853]: I1122 07:35:58.730090 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2eadd806-7143-46ba-9e49-f19ac0bd52bd-pod-info\") pod \"rabbitmq-server-0\" (UID: \"2eadd806-7143-46ba-9e49-f19ac0bd52bd\") " pod="openstack/rabbitmq-server-0" Nov 22 07:35:58 crc kubenswrapper[4853]: I1122 07:35:58.730114 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2eadd806-7143-46ba-9e49-f19ac0bd52bd-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"2eadd806-7143-46ba-9e49-f19ac0bd52bd\") " pod="openstack/rabbitmq-server-0" Nov 22 07:35:58 crc kubenswrapper[4853]: I1122 07:35:58.730131 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/2eadd806-7143-46ba-9e49-f19ac0bd52bd-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"2eadd806-7143-46ba-9e49-f19ac0bd52bd\") " pod="openstack/rabbitmq-server-0" Nov 22 07:35:58 crc kubenswrapper[4853]: I1122 07:35:58.730152 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: 
\"kubernetes.io/projected/2eadd806-7143-46ba-9e49-f19ac0bd52bd-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"2eadd806-7143-46ba-9e49-f19ac0bd52bd\") " pod="openstack/rabbitmq-server-0" Nov 22 07:35:58 crc kubenswrapper[4853]: I1122 07:35:58.840116 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2eadd806-7143-46ba-9e49-f19ac0bd52bd-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"2eadd806-7143-46ba-9e49-f19ac0bd52bd\") " pod="openstack/rabbitmq-server-0" Nov 22 07:35:58 crc kubenswrapper[4853]: I1122 07:35:58.840271 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-server-0\" (UID: \"2eadd806-7143-46ba-9e49-f19ac0bd52bd\") " pod="openstack/rabbitmq-server-0" Nov 22 07:35:58 crc kubenswrapper[4853]: I1122 07:35:58.840404 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2eadd806-7143-46ba-9e49-f19ac0bd52bd-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"2eadd806-7143-46ba-9e49-f19ac0bd52bd\") " pod="openstack/rabbitmq-server-0" Nov 22 07:35:58 crc kubenswrapper[4853]: I1122 07:35:58.840452 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2eadd806-7143-46ba-9e49-f19ac0bd52bd-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"2eadd806-7143-46ba-9e49-f19ac0bd52bd\") " pod="openstack/rabbitmq-server-0" Nov 22 07:35:58 crc kubenswrapper[4853]: I1122 07:35:58.840637 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2eadd806-7143-46ba-9e49-f19ac0bd52bd-server-conf\") pod \"rabbitmq-server-0\" (UID: \"2eadd806-7143-46ba-9e49-f19ac0bd52bd\") " pod="openstack/rabbitmq-server-0" Nov 22 07:35:58 crc kubenswrapper[4853]: I1122 07:35:58.840675 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2eadd806-7143-46ba-9e49-f19ac0bd52bd-config-data\") pod \"rabbitmq-server-0\" (UID: \"2eadd806-7143-46ba-9e49-f19ac0bd52bd\") " pod="openstack/rabbitmq-server-0" Nov 22 07:35:58 crc kubenswrapper[4853]: I1122 07:35:58.840718 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2eadd806-7143-46ba-9e49-f19ac0bd52bd-pod-info\") pod \"rabbitmq-server-0\" (UID: \"2eadd806-7143-46ba-9e49-f19ac0bd52bd\") " pod="openstack/rabbitmq-server-0" Nov 22 07:35:58 crc kubenswrapper[4853]: I1122 07:35:58.840796 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2eadd806-7143-46ba-9e49-f19ac0bd52bd-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"2eadd806-7143-46ba-9e49-f19ac0bd52bd\") " pod="openstack/rabbitmq-server-0" Nov 22 07:35:58 crc kubenswrapper[4853]: I1122 07:35:58.840822 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/2eadd806-7143-46ba-9e49-f19ac0bd52bd-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"2eadd806-7143-46ba-9e49-f19ac0bd52bd\") " pod="openstack/rabbitmq-server-0" Nov 22 07:35:58 crc kubenswrapper[4853]: I1122 07:35:58.840857 4853 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2eadd806-7143-46ba-9e49-f19ac0bd52bd-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"2eadd806-7143-46ba-9e49-f19ac0bd52bd\") " pod="openstack/rabbitmq-server-0" Nov 22 07:35:58 crc kubenswrapper[4853]: I1122 07:35:58.841036 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6qmpc\" (UniqueName: \"kubernetes.io/projected/2eadd806-7143-46ba-9e49-f19ac0bd52bd-kube-api-access-6qmpc\") pod \"rabbitmq-server-0\" (UID: \"2eadd806-7143-46ba-9e49-f19ac0bd52bd\") " pod="openstack/rabbitmq-server-0" Nov 22 07:35:58 crc kubenswrapper[4853]: I1122 07:35:58.841255 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2eadd806-7143-46ba-9e49-f19ac0bd52bd-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"2eadd806-7143-46ba-9e49-f19ac0bd52bd\") " pod="openstack/rabbitmq-server-0" Nov 22 07:35:58 crc kubenswrapper[4853]: I1122 07:35:58.842419 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2eadd806-7143-46ba-9e49-f19ac0bd52bd-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"2eadd806-7143-46ba-9e49-f19ac0bd52bd\") " pod="openstack/rabbitmq-server-0" Nov 22 07:35:58 crc kubenswrapper[4853]: I1122 07:35:58.843210 4853 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-server-0\" (UID: \"2eadd806-7143-46ba-9e49-f19ac0bd52bd\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/rabbitmq-server-0" Nov 22 07:35:58 crc kubenswrapper[4853]: I1122 07:35:58.843940 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2eadd806-7143-46ba-9e49-f19ac0bd52bd-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"2eadd806-7143-46ba-9e49-f19ac0bd52bd\") " pod="openstack/rabbitmq-server-0" Nov 22 07:35:58 crc kubenswrapper[4853]: I1122 07:35:58.845736 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2eadd806-7143-46ba-9e49-f19ac0bd52bd-config-data\") pod \"rabbitmq-server-0\" (UID: \"2eadd806-7143-46ba-9e49-f19ac0bd52bd\") " pod="openstack/rabbitmq-server-0" Nov 22 07:35:58 crc kubenswrapper[4853]: I1122 07:35:58.847687 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-fwph9"] Nov 22 07:35:58 crc kubenswrapper[4853]: I1122 07:35:58.847717 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2eadd806-7143-46ba-9e49-f19ac0bd52bd-server-conf\") pod \"rabbitmq-server-0\" (UID: \"2eadd806-7143-46ba-9e49-f19ac0bd52bd\") " pod="openstack/rabbitmq-server-0" Nov 22 07:35:58 crc kubenswrapper[4853]: I1122 07:35:58.856150 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2eadd806-7143-46ba-9e49-f19ac0bd52bd-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"2eadd806-7143-46ba-9e49-f19ac0bd52bd\") " pod="openstack/rabbitmq-server-0" Nov 22 07:35:58 crc kubenswrapper[4853]: I1122 07:35:58.857576 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: 
\"kubernetes.io/downward-api/2eadd806-7143-46ba-9e49-f19ac0bd52bd-pod-info\") pod \"rabbitmq-server-0\" (UID: \"2eadd806-7143-46ba-9e49-f19ac0bd52bd\") " pod="openstack/rabbitmq-server-0" Nov 22 07:35:58 crc kubenswrapper[4853]: I1122 07:35:58.873111 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/2eadd806-7143-46ba-9e49-f19ac0bd52bd-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"2eadd806-7143-46ba-9e49-f19ac0bd52bd\") " pod="openstack/rabbitmq-server-0" Nov 22 07:35:58 crc kubenswrapper[4853]: I1122 07:35:58.913064 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6qmpc\" (UniqueName: \"kubernetes.io/projected/2eadd806-7143-46ba-9e49-f19ac0bd52bd-kube-api-access-6qmpc\") pod \"rabbitmq-server-0\" (UID: \"2eadd806-7143-46ba-9e49-f19ac0bd52bd\") " pod="openstack/rabbitmq-server-0" Nov 22 07:35:58 crc kubenswrapper[4853]: I1122 07:35:58.954044 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2eadd806-7143-46ba-9e49-f19ac0bd52bd-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"2eadd806-7143-46ba-9e49-f19ac0bd52bd\") " pod="openstack/rabbitmq-server-0" Nov 22 07:35:58 crc kubenswrapper[4853]: I1122 07:35:58.969852 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 22 07:35:58 crc kubenswrapper[4853]: I1122 07:35:58.974087 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 22 07:35:58 crc kubenswrapper[4853]: I1122 07:35:58.981326 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-gz4cf" Nov 22 07:35:58 crc kubenswrapper[4853]: I1122 07:35:58.981451 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Nov 22 07:35:58 crc kubenswrapper[4853]: I1122 07:35:58.981581 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Nov 22 07:35:58 crc kubenswrapper[4853]: I1122 07:35:58.981728 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Nov 22 07:35:58 crc kubenswrapper[4853]: I1122 07:35:58.981835 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Nov 22 07:35:58 crc kubenswrapper[4853]: I1122 07:35:58.981988 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Nov 22 07:35:58 crc kubenswrapper[4853]: I1122 07:35:58.982035 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Nov 22 07:35:58 crc kubenswrapper[4853]: I1122 07:35:58.991820 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 22 07:35:58 crc kubenswrapper[4853]: I1122 07:35:58.992868 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-server-0\" (UID: \"2eadd806-7143-46ba-9e49-f19ac0bd52bd\") " pod="openstack/rabbitmq-server-0" Nov 22 07:35:59 crc kubenswrapper[4853]: I1122 07:35:59.149200 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: 
\"kubernetes.io/projected/d0e9072b-3e2a-4283-a697-8411049c5161-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"d0e9072b-3e2a-4283-a697-8411049c5161\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 07:35:59 crc kubenswrapper[4853]: I1122 07:35:59.149299 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"d0e9072b-3e2a-4283-a697-8411049c5161\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 07:35:59 crc kubenswrapper[4853]: I1122 07:35:59.149323 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rrrz9\" (UniqueName: \"kubernetes.io/projected/d0e9072b-3e2a-4283-a697-8411049c5161-kube-api-access-rrrz9\") pod \"rabbitmq-cell1-server-0\" (UID: \"d0e9072b-3e2a-4283-a697-8411049c5161\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 07:35:59 crc kubenswrapper[4853]: I1122 07:35:59.149346 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d0e9072b-3e2a-4283-a697-8411049c5161-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"d0e9072b-3e2a-4283-a697-8411049c5161\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 07:35:59 crc kubenswrapper[4853]: I1122 07:35:59.149372 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/d0e9072b-3e2a-4283-a697-8411049c5161-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"d0e9072b-3e2a-4283-a697-8411049c5161\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 07:35:59 crc kubenswrapper[4853]: I1122 07:35:59.149395 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d0e9072b-3e2a-4283-a697-8411049c5161-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"d0e9072b-3e2a-4283-a697-8411049c5161\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 07:35:59 crc kubenswrapper[4853]: I1122 07:35:59.149414 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d0e9072b-3e2a-4283-a697-8411049c5161-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"d0e9072b-3e2a-4283-a697-8411049c5161\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 07:35:59 crc kubenswrapper[4853]: I1122 07:35:59.149444 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/d0e9072b-3e2a-4283-a697-8411049c5161-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"d0e9072b-3e2a-4283-a697-8411049c5161\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 07:35:59 crc kubenswrapper[4853]: I1122 07:35:59.149482 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d0e9072b-3e2a-4283-a697-8411049c5161-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"d0e9072b-3e2a-4283-a697-8411049c5161\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 07:35:59 crc kubenswrapper[4853]: I1122 07:35:59.149502 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"pod-info\" (UniqueName: \"kubernetes.io/downward-api/d0e9072b-3e2a-4283-a697-8411049c5161-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"d0e9072b-3e2a-4283-a697-8411049c5161\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 07:35:59 crc kubenswrapper[4853]: I1122 07:35:59.149571 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d0e9072b-3e2a-4283-a697-8411049c5161-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"d0e9072b-3e2a-4283-a697-8411049c5161\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 07:35:59 crc kubenswrapper[4853]: I1122 07:35:59.222207 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 22 07:35:59 crc kubenswrapper[4853]: I1122 07:35:59.255417 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d0e9072b-3e2a-4283-a697-8411049c5161-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"d0e9072b-3e2a-4283-a697-8411049c5161\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 07:35:59 crc kubenswrapper[4853]: I1122 07:35:59.255525 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/d0e9072b-3e2a-4283-a697-8411049c5161-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"d0e9072b-3e2a-4283-a697-8411049c5161\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 07:35:59 crc kubenswrapper[4853]: I1122 07:35:59.255579 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d0e9072b-3e2a-4283-a697-8411049c5161-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"d0e9072b-3e2a-4283-a697-8411049c5161\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 07:35:59 crc kubenswrapper[4853]: I1122 07:35:59.255645 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/d0e9072b-3e2a-4283-a697-8411049c5161-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"d0e9072b-3e2a-4283-a697-8411049c5161\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 07:35:59 crc kubenswrapper[4853]: I1122 07:35:59.255721 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"d0e9072b-3e2a-4283-a697-8411049c5161\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 07:35:59 crc kubenswrapper[4853]: I1122 07:35:59.255764 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rrrz9\" (UniqueName: \"kubernetes.io/projected/d0e9072b-3e2a-4283-a697-8411049c5161-kube-api-access-rrrz9\") pod \"rabbitmq-cell1-server-0\" (UID: \"d0e9072b-3e2a-4283-a697-8411049c5161\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 07:35:59 crc kubenswrapper[4853]: I1122 07:35:59.255794 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d0e9072b-3e2a-4283-a697-8411049c5161-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"d0e9072b-3e2a-4283-a697-8411049c5161\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 07:35:59 crc kubenswrapper[4853]: I1122 07:35:59.255822 4853 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/d0e9072b-3e2a-4283-a697-8411049c5161-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"d0e9072b-3e2a-4283-a697-8411049c5161\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 07:35:59 crc kubenswrapper[4853]: I1122 07:35:59.255849 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d0e9072b-3e2a-4283-a697-8411049c5161-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"d0e9072b-3e2a-4283-a697-8411049c5161\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 07:35:59 crc kubenswrapper[4853]: I1122 07:35:59.255868 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d0e9072b-3e2a-4283-a697-8411049c5161-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"d0e9072b-3e2a-4283-a697-8411049c5161\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 07:35:59 crc kubenswrapper[4853]: I1122 07:35:59.255898 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/d0e9072b-3e2a-4283-a697-8411049c5161-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"d0e9072b-3e2a-4283-a697-8411049c5161\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 07:35:59 crc kubenswrapper[4853]: I1122 07:35:59.256082 4853 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"d0e9072b-3e2a-4283-a697-8411049c5161\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/rabbitmq-cell1-server-0" Nov 22 07:35:59 crc kubenswrapper[4853]: I1122 07:35:59.281037 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/d0e9072b-3e2a-4283-a697-8411049c5161-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"d0e9072b-3e2a-4283-a697-8411049c5161\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 07:35:59 crc kubenswrapper[4853]: I1122 07:35:59.292206 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/d0e9072b-3e2a-4283-a697-8411049c5161-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"d0e9072b-3e2a-4283-a697-8411049c5161\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 07:35:59 crc kubenswrapper[4853]: I1122 07:35:59.292566 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d0e9072b-3e2a-4283-a697-8411049c5161-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"d0e9072b-3e2a-4283-a697-8411049c5161\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 07:35:59 crc kubenswrapper[4853]: I1122 07:35:59.310973 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d0e9072b-3e2a-4283-a697-8411049c5161-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"d0e9072b-3e2a-4283-a697-8411049c5161\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 07:35:59 crc kubenswrapper[4853]: I1122 07:35:59.312691 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: 
\"kubernetes.io/downward-api/d0e9072b-3e2a-4283-a697-8411049c5161-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"d0e9072b-3e2a-4283-a697-8411049c5161\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 07:35:59 crc kubenswrapper[4853]: I1122 07:35:59.341968 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d0e9072b-3e2a-4283-a697-8411049c5161-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"d0e9072b-3e2a-4283-a697-8411049c5161\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 07:35:59 crc kubenswrapper[4853]: I1122 07:35:59.368685 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-xsw8l" event={"ID":"89c6d393-491f-477d-8d77-5a14ae67ed3b","Type":"ContainerStarted","Data":"868b1f2e310d7df86023f3c7b7fae5e58b3cfb278fec1ff333a668a684f45713"} Nov 22 07:35:59 crc kubenswrapper[4853]: I1122 07:35:59.390091 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-fwph9" event={"ID":"3cb42f37-eef6-4874-acad-7bcf2dd29078","Type":"ContainerStarted","Data":"d988e33ace2e18f599b985de5d600df9a3c34c4807553304bef591ce18763b59"} Nov 22 07:35:59 crc kubenswrapper[4853]: I1122 07:35:59.392696 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d0e9072b-3e2a-4283-a697-8411049c5161-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"d0e9072b-3e2a-4283-a697-8411049c5161\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 07:35:59 crc kubenswrapper[4853]: I1122 07:35:59.398540 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d0e9072b-3e2a-4283-a697-8411049c5161-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"d0e9072b-3e2a-4283-a697-8411049c5161\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 07:35:59 crc kubenswrapper[4853]: I1122 07:35:59.398693 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rrrz9\" (UniqueName: \"kubernetes.io/projected/d0e9072b-3e2a-4283-a697-8411049c5161-kube-api-access-rrrz9\") pod \"rabbitmq-cell1-server-0\" (UID: \"d0e9072b-3e2a-4283-a697-8411049c5161\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 07:35:59 crc kubenswrapper[4853]: I1122 07:35:59.402654 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/d0e9072b-3e2a-4283-a697-8411049c5161-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"d0e9072b-3e2a-4283-a697-8411049c5161\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 07:35:59 crc kubenswrapper[4853]: I1122 07:35:59.414263 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"d0e9072b-3e2a-4283-a697-8411049c5161\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 07:35:59 crc kubenswrapper[4853]: I1122 07:35:59.581267 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 22 07:35:59 crc kubenswrapper[4853]: I1122 07:35:59.962912 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-tv8h9"] Nov 22 07:35:59 crc kubenswrapper[4853]: I1122 07:35:59.968429 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-tv8h9" Nov 22 07:35:59 crc kubenswrapper[4853]: I1122 07:35:59.981898 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cae818e5-34d5-43c7-95af-e82e21309758-utilities\") pod \"community-operators-tv8h9\" (UID: \"cae818e5-34d5-43c7-95af-e82e21309758\") " pod="openshift-marketplace/community-operators-tv8h9" Nov 22 07:35:59 crc kubenswrapper[4853]: I1122 07:35:59.982088 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cae818e5-34d5-43c7-95af-e82e21309758-catalog-content\") pod \"community-operators-tv8h9\" (UID: \"cae818e5-34d5-43c7-95af-e82e21309758\") " pod="openshift-marketplace/community-operators-tv8h9" Nov 22 07:35:59 crc kubenswrapper[4853]: I1122 07:35:59.982171 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qbhtj\" (UniqueName: \"kubernetes.io/projected/cae818e5-34d5-43c7-95af-e82e21309758-kube-api-access-qbhtj\") pod \"community-operators-tv8h9\" (UID: \"cae818e5-34d5-43c7-95af-e82e21309758\") " pod="openshift-marketplace/community-operators-tv8h9" Nov 22 07:36:00 crc kubenswrapper[4853]: I1122 07:36:00.012875 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tv8h9"] Nov 22 07:36:00 crc kubenswrapper[4853]: I1122 07:36:00.083991 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 22 07:36:00 crc kubenswrapper[4853]: I1122 07:36:00.085535 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cae818e5-34d5-43c7-95af-e82e21309758-utilities\") pod \"community-operators-tv8h9\" (UID: \"cae818e5-34d5-43c7-95af-e82e21309758\") " pod="openshift-marketplace/community-operators-tv8h9" Nov 22 07:36:00 crc kubenswrapper[4853]: I1122 07:36:00.085812 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cae818e5-34d5-43c7-95af-e82e21309758-catalog-content\") pod \"community-operators-tv8h9\" (UID: \"cae818e5-34d5-43c7-95af-e82e21309758\") " pod="openshift-marketplace/community-operators-tv8h9" Nov 22 07:36:00 crc kubenswrapper[4853]: I1122 07:36:00.085923 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qbhtj\" (UniqueName: \"kubernetes.io/projected/cae818e5-34d5-43c7-95af-e82e21309758-kube-api-access-qbhtj\") pod \"community-operators-tv8h9\" (UID: \"cae818e5-34d5-43c7-95af-e82e21309758\") " pod="openshift-marketplace/community-operators-tv8h9" Nov 22 07:36:00 crc kubenswrapper[4853]: I1122 07:36:00.086331 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cae818e5-34d5-43c7-95af-e82e21309758-utilities\") pod \"community-operators-tv8h9\" (UID: \"cae818e5-34d5-43c7-95af-e82e21309758\") " pod="openshift-marketplace/community-operators-tv8h9" Nov 22 07:36:00 crc kubenswrapper[4853]: I1122 07:36:00.086612 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cae818e5-34d5-43c7-95af-e82e21309758-catalog-content\") pod \"community-operators-tv8h9\" (UID: \"cae818e5-34d5-43c7-95af-e82e21309758\") 
" pod="openshift-marketplace/community-operators-tv8h9" Nov 22 07:36:00 crc kubenswrapper[4853]: I1122 07:36:00.131836 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qbhtj\" (UniqueName: \"kubernetes.io/projected/cae818e5-34d5-43c7-95af-e82e21309758-kube-api-access-qbhtj\") pod \"community-operators-tv8h9\" (UID: \"cae818e5-34d5-43c7-95af-e82e21309758\") " pod="openshift-marketplace/community-operators-tv8h9" Nov 22 07:36:00 crc kubenswrapper[4853]: W1122 07:36:00.258913 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd0e9072b_3e2a_4283_a697_8411049c5161.slice/crio-c893e7dea54c22ee2e3e927dddb2d5c817aa9a13cfb1fe06046ef8056969f7c3 WatchSource:0}: Error finding container c893e7dea54c22ee2e3e927dddb2d5c817aa9a13cfb1fe06046ef8056969f7c3: Status 404 returned error can't find the container with id c893e7dea54c22ee2e3e927dddb2d5c817aa9a13cfb1fe06046ef8056969f7c3 Nov 22 07:36:00 crc kubenswrapper[4853]: I1122 07:36:00.261591 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 22 07:36:00 crc kubenswrapper[4853]: I1122 07:36:00.313483 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tv8h9" Nov 22 07:36:00 crc kubenswrapper[4853]: I1122 07:36:00.404467 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"2eadd806-7143-46ba-9e49-f19ac0bd52bd","Type":"ContainerStarted","Data":"42d1f780e47b048f344df7fb59498fd07ad6fa0b397ff050b0167cc142292cd1"} Nov 22 07:36:00 crc kubenswrapper[4853]: I1122 07:36:00.406439 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"d0e9072b-3e2a-4283-a697-8411049c5161","Type":"ContainerStarted","Data":"c893e7dea54c22ee2e3e927dddb2d5c817aa9a13cfb1fe06046ef8056969f7c3"} Nov 22 07:36:00 crc kubenswrapper[4853]: I1122 07:36:00.513546 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Nov 22 07:36:00 crc kubenswrapper[4853]: I1122 07:36:00.517376 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Nov 22 07:36:00 crc kubenswrapper[4853]: I1122 07:36:00.530005 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Nov 22 07:36:00 crc kubenswrapper[4853]: I1122 07:36:00.531292 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Nov 22 07:36:00 crc kubenswrapper[4853]: I1122 07:36:00.531734 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Nov 22 07:36:00 crc kubenswrapper[4853]: I1122 07:36:00.531895 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-5ktqd" Nov 22 07:36:00 crc kubenswrapper[4853]: I1122 07:36:00.544250 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Nov 22 07:36:00 crc kubenswrapper[4853]: I1122 07:36:00.570213 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Nov 22 07:36:00 crc kubenswrapper[4853]: I1122 07:36:00.707613 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5hldd\" (UniqueName: \"kubernetes.io/projected/410e418b-aee9-40c9-96ed-0f8c5c882148-kube-api-access-5hldd\") pod \"openstack-galera-0\" (UID: \"410e418b-aee9-40c9-96ed-0f8c5c882148\") " pod="openstack/openstack-galera-0" Nov 22 07:36:00 crc kubenswrapper[4853]: I1122 07:36:00.707852 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/410e418b-aee9-40c9-96ed-0f8c5c882148-kolla-config\") pod \"openstack-galera-0\" (UID: \"410e418b-aee9-40c9-96ed-0f8c5c882148\") " pod="openstack/openstack-galera-0" Nov 22 07:36:00 crc kubenswrapper[4853]: I1122 07:36:00.707895 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/410e418b-aee9-40c9-96ed-0f8c5c882148-operator-scripts\") pod \"openstack-galera-0\" (UID: \"410e418b-aee9-40c9-96ed-0f8c5c882148\") " pod="openstack/openstack-galera-0" Nov 22 07:36:00 crc kubenswrapper[4853]: I1122 07:36:00.707923 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/410e418b-aee9-40c9-96ed-0f8c5c882148-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"410e418b-aee9-40c9-96ed-0f8c5c882148\") " pod="openstack/openstack-galera-0" Nov 22 07:36:00 crc kubenswrapper[4853]: I1122 07:36:00.707952 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-galera-0\" (UID: \"410e418b-aee9-40c9-96ed-0f8c5c882148\") " pod="openstack/openstack-galera-0" Nov 22 07:36:00 crc kubenswrapper[4853]: I1122 07:36:00.707972 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/410e418b-aee9-40c9-96ed-0f8c5c882148-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"410e418b-aee9-40c9-96ed-0f8c5c882148\") " pod="openstack/openstack-galera-0" Nov 22 07:36:00 crc kubenswrapper[4853]: I1122 07:36:00.707998 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/410e418b-aee9-40c9-96ed-0f8c5c882148-config-data-generated\") pod \"openstack-galera-0\" (UID: \"410e418b-aee9-40c9-96ed-0f8c5c882148\") " pod="openstack/openstack-galera-0" Nov 22 07:36:00 crc kubenswrapper[4853]: I1122 07:36:00.708202 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/410e418b-aee9-40c9-96ed-0f8c5c882148-config-data-default\") pod \"openstack-galera-0\" (UID: \"410e418b-aee9-40c9-96ed-0f8c5c882148\") " pod="openstack/openstack-galera-0" Nov 22 07:36:00 crc kubenswrapper[4853]: I1122 07:36:00.811169 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/410e418b-aee9-40c9-96ed-0f8c5c882148-config-data-default\") pod \"openstack-galera-0\" (UID: \"410e418b-aee9-40c9-96ed-0f8c5c882148\") " pod="openstack/openstack-galera-0" Nov 22 07:36:00 crc kubenswrapper[4853]: I1122 07:36:00.811239 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5hldd\" (UniqueName: \"kubernetes.io/projected/410e418b-aee9-40c9-96ed-0f8c5c882148-kube-api-access-5hldd\") pod \"openstack-galera-0\" (UID: \"410e418b-aee9-40c9-96ed-0f8c5c882148\") " pod="openstack/openstack-galera-0" Nov 22 07:36:00 crc kubenswrapper[4853]: I1122 07:36:00.812451 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/410e418b-aee9-40c9-96ed-0f8c5c882148-kolla-config\") pod \"openstack-galera-0\" (UID: \"410e418b-aee9-40c9-96ed-0f8c5c882148\") " pod="openstack/openstack-galera-0" Nov 22 07:36:00 crc kubenswrapper[4853]: I1122 07:36:00.812498 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/410e418b-aee9-40c9-96ed-0f8c5c882148-operator-scripts\") pod \"openstack-galera-0\" (UID: \"410e418b-aee9-40c9-96ed-0f8c5c882148\") " pod="openstack/openstack-galera-0" Nov 22 07:36:00 crc kubenswrapper[4853]: I1122 07:36:00.812530 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/410e418b-aee9-40c9-96ed-0f8c5c882148-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"410e418b-aee9-40c9-96ed-0f8c5c882148\") " pod="openstack/openstack-galera-0" Nov 22 07:36:00 crc kubenswrapper[4853]: I1122 07:36:00.812554 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-galera-0\" (UID: \"410e418b-aee9-40c9-96ed-0f8c5c882148\") " pod="openstack/openstack-galera-0" Nov 22 07:36:00 crc kubenswrapper[4853]: I1122 07:36:00.812572 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/410e418b-aee9-40c9-96ed-0f8c5c882148-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"410e418b-aee9-40c9-96ed-0f8c5c882148\") " pod="openstack/openstack-galera-0" Nov 22 07:36:00 crc kubenswrapper[4853]: I1122 07:36:00.812612 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/410e418b-aee9-40c9-96ed-0f8c5c882148-config-data-generated\") pod \"openstack-galera-0\" (UID: 
\"410e418b-aee9-40c9-96ed-0f8c5c882148\") " pod="openstack/openstack-galera-0" Nov 22 07:36:00 crc kubenswrapper[4853]: I1122 07:36:00.815487 4853 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-galera-0\" (UID: \"410e418b-aee9-40c9-96ed-0f8c5c882148\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/openstack-galera-0" Nov 22 07:36:00 crc kubenswrapper[4853]: I1122 07:36:00.816579 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/410e418b-aee9-40c9-96ed-0f8c5c882148-operator-scripts\") pod \"openstack-galera-0\" (UID: \"410e418b-aee9-40c9-96ed-0f8c5c882148\") " pod="openstack/openstack-galera-0" Nov 22 07:36:00 crc kubenswrapper[4853]: I1122 07:36:00.817829 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/410e418b-aee9-40c9-96ed-0f8c5c882148-config-data-generated\") pod \"openstack-galera-0\" (UID: \"410e418b-aee9-40c9-96ed-0f8c5c882148\") " pod="openstack/openstack-galera-0" Nov 22 07:36:00 crc kubenswrapper[4853]: I1122 07:36:00.818871 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/410e418b-aee9-40c9-96ed-0f8c5c882148-config-data-default\") pod \"openstack-galera-0\" (UID: \"410e418b-aee9-40c9-96ed-0f8c5c882148\") " pod="openstack/openstack-galera-0" Nov 22 07:36:00 crc kubenswrapper[4853]: I1122 07:36:00.818877 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/410e418b-aee9-40c9-96ed-0f8c5c882148-kolla-config\") pod \"openstack-galera-0\" (UID: \"410e418b-aee9-40c9-96ed-0f8c5c882148\") " pod="openstack/openstack-galera-0" Nov 22 07:36:00 crc kubenswrapper[4853]: I1122 07:36:00.826828 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/410e418b-aee9-40c9-96ed-0f8c5c882148-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"410e418b-aee9-40c9-96ed-0f8c5c882148\") " pod="openstack/openstack-galera-0" Nov 22 07:36:00 crc kubenswrapper[4853]: I1122 07:36:00.852845 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5hldd\" (UniqueName: \"kubernetes.io/projected/410e418b-aee9-40c9-96ed-0f8c5c882148-kube-api-access-5hldd\") pod \"openstack-galera-0\" (UID: \"410e418b-aee9-40c9-96ed-0f8c5c882148\") " pod="openstack/openstack-galera-0" Nov 22 07:36:00 crc kubenswrapper[4853]: I1122 07:36:00.856483 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/410e418b-aee9-40c9-96ed-0f8c5c882148-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"410e418b-aee9-40c9-96ed-0f8c5c882148\") " pod="openstack/openstack-galera-0" Nov 22 07:36:00 crc kubenswrapper[4853]: I1122 07:36:00.910880 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-galera-0\" (UID: \"410e418b-aee9-40c9-96ed-0f8c5c882148\") " pod="openstack/openstack-galera-0" Nov 22 07:36:01 crc kubenswrapper[4853]: I1122 07:36:01.159332 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Nov 22 07:36:01 crc kubenswrapper[4853]: I1122 07:36:01.664279 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Nov 22 07:36:01 crc kubenswrapper[4853]: I1122 07:36:01.668862 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Nov 22 07:36:01 crc kubenswrapper[4853]: I1122 07:36:01.672000 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-ttqlz" Nov 22 07:36:01 crc kubenswrapper[4853]: I1122 07:36:01.673920 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Nov 22 07:36:01 crc kubenswrapper[4853]: I1122 07:36:01.674124 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Nov 22 07:36:01 crc kubenswrapper[4853]: I1122 07:36:01.674254 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Nov 22 07:36:01 crc kubenswrapper[4853]: I1122 07:36:01.690024 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Nov 22 07:36:01 crc kubenswrapper[4853]: I1122 07:36:01.837713 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/fd5f90cd-e8e9-489e-b7fd-fde9fd9c342d-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"fd5f90cd-e8e9-489e-b7fd-fde9fd9c342d\") " pod="openstack/openstack-cell1-galera-0" Nov 22 07:36:01 crc kubenswrapper[4853]: I1122 07:36:01.837817 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd5f90cd-e8e9-489e-b7fd-fde9fd9c342d-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"fd5f90cd-e8e9-489e-b7fd-fde9fd9c342d\") " pod="openstack/openstack-cell1-galera-0" Nov 22 07:36:01 crc kubenswrapper[4853]: I1122 07:36:01.837843 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vj4lr\" (UniqueName: \"kubernetes.io/projected/fd5f90cd-e8e9-489e-b7fd-fde9fd9c342d-kube-api-access-vj4lr\") pod \"openstack-cell1-galera-0\" (UID: \"fd5f90cd-e8e9-489e-b7fd-fde9fd9c342d\") " pod="openstack/openstack-cell1-galera-0" Nov 22 07:36:01 crc kubenswrapper[4853]: I1122 07:36:01.837873 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-cell1-galera-0\" (UID: \"fd5f90cd-e8e9-489e-b7fd-fde9fd9c342d\") " pod="openstack/openstack-cell1-galera-0" Nov 22 07:36:01 crc kubenswrapper[4853]: I1122 07:36:01.837899 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/fd5f90cd-e8e9-489e-b7fd-fde9fd9c342d-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"fd5f90cd-e8e9-489e-b7fd-fde9fd9c342d\") " pod="openstack/openstack-cell1-galera-0" Nov 22 07:36:01 crc kubenswrapper[4853]: I1122 07:36:01.838020 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/fd5f90cd-e8e9-489e-b7fd-fde9fd9c342d-kolla-config\") pod 
\"openstack-cell1-galera-0\" (UID: \"fd5f90cd-e8e9-489e-b7fd-fde9fd9c342d\") " pod="openstack/openstack-cell1-galera-0" Nov 22 07:36:01 crc kubenswrapper[4853]: I1122 07:36:01.838086 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fd5f90cd-e8e9-489e-b7fd-fde9fd9c342d-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"fd5f90cd-e8e9-489e-b7fd-fde9fd9c342d\") " pod="openstack/openstack-cell1-galera-0" Nov 22 07:36:01 crc kubenswrapper[4853]: I1122 07:36:01.838113 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd5f90cd-e8e9-489e-b7fd-fde9fd9c342d-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"fd5f90cd-e8e9-489e-b7fd-fde9fd9c342d\") " pod="openstack/openstack-cell1-galera-0" Nov 22 07:36:01 crc kubenswrapper[4853]: I1122 07:36:01.876107 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Nov 22 07:36:01 crc kubenswrapper[4853]: I1122 07:36:01.887959 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Nov 22 07:36:01 crc kubenswrapper[4853]: I1122 07:36:01.895246 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-7dgcp" Nov 22 07:36:01 crc kubenswrapper[4853]: I1122 07:36:01.895719 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Nov 22 07:36:01 crc kubenswrapper[4853]: I1122 07:36:01.895905 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Nov 22 07:36:01 crc kubenswrapper[4853]: I1122 07:36:01.934911 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Nov 22 07:36:01 crc kubenswrapper[4853]: I1122 07:36:01.940730 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd5f90cd-e8e9-489e-b7fd-fde9fd9c342d-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"fd5f90cd-e8e9-489e-b7fd-fde9fd9c342d\") " pod="openstack/openstack-cell1-galera-0" Nov 22 07:36:01 crc kubenswrapper[4853]: I1122 07:36:01.940815 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vj4lr\" (UniqueName: \"kubernetes.io/projected/fd5f90cd-e8e9-489e-b7fd-fde9fd9c342d-kube-api-access-vj4lr\") pod \"openstack-cell1-galera-0\" (UID: \"fd5f90cd-e8e9-489e-b7fd-fde9fd9c342d\") " pod="openstack/openstack-cell1-galera-0" Nov 22 07:36:01 crc kubenswrapper[4853]: I1122 07:36:01.940869 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-cell1-galera-0\" (UID: \"fd5f90cd-e8e9-489e-b7fd-fde9fd9c342d\") " pod="openstack/openstack-cell1-galera-0" Nov 22 07:36:01 crc kubenswrapper[4853]: I1122 07:36:01.940908 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/fd5f90cd-e8e9-489e-b7fd-fde9fd9c342d-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"fd5f90cd-e8e9-489e-b7fd-fde9fd9c342d\") " pod="openstack/openstack-cell1-galera-0" Nov 22 07:36:01 crc kubenswrapper[4853]: I1122 07:36:01.941016 4853 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/fd5f90cd-e8e9-489e-b7fd-fde9fd9c342d-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"fd5f90cd-e8e9-489e-b7fd-fde9fd9c342d\") " pod="openstack/openstack-cell1-galera-0" Nov 22 07:36:01 crc kubenswrapper[4853]: I1122 07:36:01.941087 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fd5f90cd-e8e9-489e-b7fd-fde9fd9c342d-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"fd5f90cd-e8e9-489e-b7fd-fde9fd9c342d\") " pod="openstack/openstack-cell1-galera-0" Nov 22 07:36:01 crc kubenswrapper[4853]: I1122 07:36:01.941125 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd5f90cd-e8e9-489e-b7fd-fde9fd9c342d-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"fd5f90cd-e8e9-489e-b7fd-fde9fd9c342d\") " pod="openstack/openstack-cell1-galera-0" Nov 22 07:36:01 crc kubenswrapper[4853]: I1122 07:36:01.941200 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/fd5f90cd-e8e9-489e-b7fd-fde9fd9c342d-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"fd5f90cd-e8e9-489e-b7fd-fde9fd9c342d\") " pod="openstack/openstack-cell1-galera-0" Nov 22 07:36:01 crc kubenswrapper[4853]: I1122 07:36:01.942220 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/fd5f90cd-e8e9-489e-b7fd-fde9fd9c342d-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"fd5f90cd-e8e9-489e-b7fd-fde9fd9c342d\") " pod="openstack/openstack-cell1-galera-0" Nov 22 07:36:01 crc kubenswrapper[4853]: I1122 07:36:01.943009 4853 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-cell1-galera-0\" (UID: \"fd5f90cd-e8e9-489e-b7fd-fde9fd9c342d\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/openstack-cell1-galera-0" Nov 22 07:36:01 crc kubenswrapper[4853]: I1122 07:36:01.943471 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/fd5f90cd-e8e9-489e-b7fd-fde9fd9c342d-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"fd5f90cd-e8e9-489e-b7fd-fde9fd9c342d\") " pod="openstack/openstack-cell1-galera-0" Nov 22 07:36:01 crc kubenswrapper[4853]: I1122 07:36:01.944203 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/fd5f90cd-e8e9-489e-b7fd-fde9fd9c342d-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"fd5f90cd-e8e9-489e-b7fd-fde9fd9c342d\") " pod="openstack/openstack-cell1-galera-0" Nov 22 07:36:01 crc kubenswrapper[4853]: I1122 07:36:01.948843 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fd5f90cd-e8e9-489e-b7fd-fde9fd9c342d-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"fd5f90cd-e8e9-489e-b7fd-fde9fd9c342d\") " pod="openstack/openstack-cell1-galera-0" Nov 22 07:36:01 crc kubenswrapper[4853]: I1122 07:36:01.957553 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/fd5f90cd-e8e9-489e-b7fd-fde9fd9c342d-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"fd5f90cd-e8e9-489e-b7fd-fde9fd9c342d\") " pod="openstack/openstack-cell1-galera-0" Nov 22 07:36:01 crc kubenswrapper[4853]: I1122 07:36:01.978719 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd5f90cd-e8e9-489e-b7fd-fde9fd9c342d-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"fd5f90cd-e8e9-489e-b7fd-fde9fd9c342d\") " pod="openstack/openstack-cell1-galera-0" Nov 22 07:36:02 crc kubenswrapper[4853]: I1122 07:36:02.016932 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vj4lr\" (UniqueName: \"kubernetes.io/projected/fd5f90cd-e8e9-489e-b7fd-fde9fd9c342d-kube-api-access-vj4lr\") pod \"openstack-cell1-galera-0\" (UID: \"fd5f90cd-e8e9-489e-b7fd-fde9fd9c342d\") " pod="openstack/openstack-cell1-galera-0" Nov 22 07:36:02 crc kubenswrapper[4853]: I1122 07:36:02.044871 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/b64e6703-1b51-477a-8898-3646dbf7b00c-memcached-tls-certs\") pod \"memcached-0\" (UID: \"b64e6703-1b51-477a-8898-3646dbf7b00c\") " pod="openstack/memcached-0" Nov 22 07:36:02 crc kubenswrapper[4853]: I1122 07:36:02.045066 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b64e6703-1b51-477a-8898-3646dbf7b00c-combined-ca-bundle\") pod \"memcached-0\" (UID: \"b64e6703-1b51-477a-8898-3646dbf7b00c\") " pod="openstack/memcached-0" Nov 22 07:36:02 crc kubenswrapper[4853]: I1122 07:36:02.045109 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b64e6703-1b51-477a-8898-3646dbf7b00c-config-data\") pod \"memcached-0\" (UID: \"b64e6703-1b51-477a-8898-3646dbf7b00c\") " pod="openstack/memcached-0" Nov 22 07:36:02 crc kubenswrapper[4853]: I1122 07:36:02.045152 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lvhcn\" (UniqueName: \"kubernetes.io/projected/b64e6703-1b51-477a-8898-3646dbf7b00c-kube-api-access-lvhcn\") pod \"memcached-0\" (UID: \"b64e6703-1b51-477a-8898-3646dbf7b00c\") " pod="openstack/memcached-0" Nov 22 07:36:02 crc kubenswrapper[4853]: I1122 07:36:02.045312 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/b64e6703-1b51-477a-8898-3646dbf7b00c-kolla-config\") pod \"memcached-0\" (UID: \"b64e6703-1b51-477a-8898-3646dbf7b00c\") " pod="openstack/memcached-0" Nov 22 07:36:02 crc kubenswrapper[4853]: I1122 07:36:02.066080 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-cell1-galera-0\" (UID: \"fd5f90cd-e8e9-489e-b7fd-fde9fd9c342d\") " pod="openstack/openstack-cell1-galera-0" Nov 22 07:36:02 crc kubenswrapper[4853]: I1122 07:36:02.147888 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/b64e6703-1b51-477a-8898-3646dbf7b00c-memcached-tls-certs\") pod \"memcached-0\" (UID: \"b64e6703-1b51-477a-8898-3646dbf7b00c\") " 
pod="openstack/memcached-0" Nov 22 07:36:02 crc kubenswrapper[4853]: I1122 07:36:02.148014 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b64e6703-1b51-477a-8898-3646dbf7b00c-combined-ca-bundle\") pod \"memcached-0\" (UID: \"b64e6703-1b51-477a-8898-3646dbf7b00c\") " pod="openstack/memcached-0" Nov 22 07:36:02 crc kubenswrapper[4853]: I1122 07:36:02.148054 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b64e6703-1b51-477a-8898-3646dbf7b00c-config-data\") pod \"memcached-0\" (UID: \"b64e6703-1b51-477a-8898-3646dbf7b00c\") " pod="openstack/memcached-0" Nov 22 07:36:02 crc kubenswrapper[4853]: I1122 07:36:02.148099 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lvhcn\" (UniqueName: \"kubernetes.io/projected/b64e6703-1b51-477a-8898-3646dbf7b00c-kube-api-access-lvhcn\") pod \"memcached-0\" (UID: \"b64e6703-1b51-477a-8898-3646dbf7b00c\") " pod="openstack/memcached-0" Nov 22 07:36:02 crc kubenswrapper[4853]: I1122 07:36:02.148201 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/b64e6703-1b51-477a-8898-3646dbf7b00c-kolla-config\") pod \"memcached-0\" (UID: \"b64e6703-1b51-477a-8898-3646dbf7b00c\") " pod="openstack/memcached-0" Nov 22 07:36:02 crc kubenswrapper[4853]: I1122 07:36:02.152155 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b64e6703-1b51-477a-8898-3646dbf7b00c-config-data\") pod \"memcached-0\" (UID: \"b64e6703-1b51-477a-8898-3646dbf7b00c\") " pod="openstack/memcached-0" Nov 22 07:36:02 crc kubenswrapper[4853]: I1122 07:36:02.153540 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/b64e6703-1b51-477a-8898-3646dbf7b00c-kolla-config\") pod \"memcached-0\" (UID: \"b64e6703-1b51-477a-8898-3646dbf7b00c\") " pod="openstack/memcached-0" Nov 22 07:36:02 crc kubenswrapper[4853]: I1122 07:36:02.155101 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/b64e6703-1b51-477a-8898-3646dbf7b00c-memcached-tls-certs\") pod \"memcached-0\" (UID: \"b64e6703-1b51-477a-8898-3646dbf7b00c\") " pod="openstack/memcached-0" Nov 22 07:36:02 crc kubenswrapper[4853]: I1122 07:36:02.171879 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lvhcn\" (UniqueName: \"kubernetes.io/projected/b64e6703-1b51-477a-8898-3646dbf7b00c-kube-api-access-lvhcn\") pod \"memcached-0\" (UID: \"b64e6703-1b51-477a-8898-3646dbf7b00c\") " pod="openstack/memcached-0" Nov 22 07:36:02 crc kubenswrapper[4853]: I1122 07:36:02.172741 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b64e6703-1b51-477a-8898-3646dbf7b00c-combined-ca-bundle\") pod \"memcached-0\" (UID: \"b64e6703-1b51-477a-8898-3646dbf7b00c\") " pod="openstack/memcached-0" Nov 22 07:36:02 crc kubenswrapper[4853]: I1122 07:36:02.235510 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Nov 22 07:36:02 crc kubenswrapper[4853]: I1122 07:36:02.339877 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Nov 22 07:36:02 crc kubenswrapper[4853]: I1122 07:36:02.427780 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Nov 22 07:36:02 crc kubenswrapper[4853]: I1122 07:36:02.549148 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tv8h9"] Nov 22 07:36:02 crc kubenswrapper[4853]: I1122 07:36:02.782316 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Nov 22 07:36:02 crc kubenswrapper[4853]: W1122 07:36:02.791368 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb64e6703_1b51_477a_8898_3646dbf7b00c.slice/crio-48d3eef902e9bb08e985886157b26c038cd4dd0bf152853ade8dc751fe41b486 WatchSource:0}: Error finding container 48d3eef902e9bb08e985886157b26c038cd4dd0bf152853ade8dc751fe41b486: Status 404 returned error can't find the container with id 48d3eef902e9bb08e985886157b26c038cd4dd0bf152853ade8dc751fe41b486 Nov 22 07:36:02 crc kubenswrapper[4853]: I1122 07:36:02.919548 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Nov 22 07:36:02 crc kubenswrapper[4853]: W1122 07:36:02.924393 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfd5f90cd_e8e9_489e_b7fd_fde9fd9c342d.slice/crio-0ff61312c84e7119fc1aed4f44b48b0f45bb760b3677a2cae0af26d55c85842d WatchSource:0}: Error finding container 0ff61312c84e7119fc1aed4f44b48b0f45bb760b3677a2cae0af26d55c85842d: Status 404 returned error can't find the container with id 0ff61312c84e7119fc1aed4f44b48b0f45bb760b3677a2cae0af26d55c85842d Nov 22 07:36:03 crc kubenswrapper[4853]: I1122 07:36:03.445855 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"b64e6703-1b51-477a-8898-3646dbf7b00c","Type":"ContainerStarted","Data":"48d3eef902e9bb08e985886157b26c038cd4dd0bf152853ade8dc751fe41b486"} Nov 22 07:36:03 crc kubenswrapper[4853]: I1122 07:36:03.448022 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"fd5f90cd-e8e9-489e-b7fd-fde9fd9c342d","Type":"ContainerStarted","Data":"0ff61312c84e7119fc1aed4f44b48b0f45bb760b3677a2cae0af26d55c85842d"} Nov 22 07:36:03 crc kubenswrapper[4853]: I1122 07:36:03.452082 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"410e418b-aee9-40c9-96ed-0f8c5c882148","Type":"ContainerStarted","Data":"8dc6a56adc591399dedb3f76fdc6f508bf6252548f008737a8dfb676b3c67395"} Nov 22 07:36:03 crc kubenswrapper[4853]: I1122 07:36:03.463078 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tv8h9" event={"ID":"cae818e5-34d5-43c7-95af-e82e21309758","Type":"ContainerStarted","Data":"d96174f0af97c777153708f89e2e7ee28452517be600c34b914a3cb8a7b3be93"} Nov 22 07:36:04 crc kubenswrapper[4853]: I1122 07:36:04.001439 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Nov 22 07:36:04 crc kubenswrapper[4853]: I1122 07:36:04.006510 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 22 07:36:04 crc kubenswrapper[4853]: I1122 07:36:04.025105 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-hq2xq" Nov 22 07:36:04 crc kubenswrapper[4853]: I1122 07:36:04.060088 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 22 07:36:04 crc kubenswrapper[4853]: I1122 07:36:04.108243 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lmlcz\" (UniqueName: \"kubernetes.io/projected/573160d1-5593-42ee-906a-44b4fbc5abe4-kube-api-access-lmlcz\") pod \"kube-state-metrics-0\" (UID: \"573160d1-5593-42ee-906a-44b4fbc5abe4\") " pod="openstack/kube-state-metrics-0" Nov 22 07:36:04 crc kubenswrapper[4853]: I1122 07:36:04.211536 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lmlcz\" (UniqueName: \"kubernetes.io/projected/573160d1-5593-42ee-906a-44b4fbc5abe4-kube-api-access-lmlcz\") pod \"kube-state-metrics-0\" (UID: \"573160d1-5593-42ee-906a-44b4fbc5abe4\") " pod="openstack/kube-state-metrics-0" Nov 22 07:36:04 crc kubenswrapper[4853]: I1122 07:36:04.265510 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lmlcz\" (UniqueName: \"kubernetes.io/projected/573160d1-5593-42ee-906a-44b4fbc5abe4-kube-api-access-lmlcz\") pod \"kube-state-metrics-0\" (UID: \"573160d1-5593-42ee-906a-44b4fbc5abe4\") " pod="openstack/kube-state-metrics-0" Nov 22 07:36:04 crc kubenswrapper[4853]: I1122 07:36:04.383234 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 22 07:36:04 crc kubenswrapper[4853]: I1122 07:36:04.852032 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-ui-dashboards-7d5fb4cbfb-jlcqj"] Nov 22 07:36:04 crc kubenswrapper[4853]: I1122 07:36:04.856578 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-ui-dashboards-7d5fb4cbfb-jlcqj" Nov 22 07:36:04 crc kubenswrapper[4853]: I1122 07:36:04.866690 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-ui-dashboards-sa-dockercfg-tz5sl" Nov 22 07:36:04 crc kubenswrapper[4853]: I1122 07:36:04.866970 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-ui-dashboards" Nov 22 07:36:04 crc kubenswrapper[4853]: I1122 07:36:04.887929 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-ui-dashboards-7d5fb4cbfb-jlcqj"] Nov 22 07:36:04 crc kubenswrapper[4853]: I1122 07:36:04.958661 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1dcce692-834e-48e2-bcfd-7c0f05480fb4-serving-cert\") pod \"observability-ui-dashboards-7d5fb4cbfb-jlcqj\" (UID: \"1dcce692-834e-48e2-bcfd-7c0f05480fb4\") " pod="openshift-operators/observability-ui-dashboards-7d5fb4cbfb-jlcqj" Nov 22 07:36:04 crc kubenswrapper[4853]: I1122 07:36:04.958800 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5tdz2\" (UniqueName: \"kubernetes.io/projected/1dcce692-834e-48e2-bcfd-7c0f05480fb4-kube-api-access-5tdz2\") pod \"observability-ui-dashboards-7d5fb4cbfb-jlcqj\" (UID: \"1dcce692-834e-48e2-bcfd-7c0f05480fb4\") " pod="openshift-operators/observability-ui-dashboards-7d5fb4cbfb-jlcqj" Nov 22 07:36:05 crc kubenswrapper[4853]: I1122 07:36:05.062897 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1dcce692-834e-48e2-bcfd-7c0f05480fb4-serving-cert\") pod \"observability-ui-dashboards-7d5fb4cbfb-jlcqj\" (UID: \"1dcce692-834e-48e2-bcfd-7c0f05480fb4\") " pod="openshift-operators/observability-ui-dashboards-7d5fb4cbfb-jlcqj" Nov 22 07:36:05 crc kubenswrapper[4853]: I1122 07:36:05.063015 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5tdz2\" (UniqueName: \"kubernetes.io/projected/1dcce692-834e-48e2-bcfd-7c0f05480fb4-kube-api-access-5tdz2\") pod \"observability-ui-dashboards-7d5fb4cbfb-jlcqj\" (UID: \"1dcce692-834e-48e2-bcfd-7c0f05480fb4\") " pod="openshift-operators/observability-ui-dashboards-7d5fb4cbfb-jlcqj" Nov 22 07:36:05 crc kubenswrapper[4853]: E1122 07:36:05.063591 4853 secret.go:188] Couldn't get secret openshift-operators/observability-ui-dashboards: secret "observability-ui-dashboards" not found Nov 22 07:36:05 crc kubenswrapper[4853]: E1122 07:36:05.063659 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1dcce692-834e-48e2-bcfd-7c0f05480fb4-serving-cert podName:1dcce692-834e-48e2-bcfd-7c0f05480fb4 nodeName:}" failed. No retries permitted until 2025-11-22 07:36:05.563634806 +0000 UTC m=+1564.404257432 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/1dcce692-834e-48e2-bcfd-7c0f05480fb4-serving-cert") pod "observability-ui-dashboards-7d5fb4cbfb-jlcqj" (UID: "1dcce692-834e-48e2-bcfd-7c0f05480fb4") : secret "observability-ui-dashboards" not found Nov 22 07:36:05 crc kubenswrapper[4853]: I1122 07:36:05.121145 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5tdz2\" (UniqueName: \"kubernetes.io/projected/1dcce692-834e-48e2-bcfd-7c0f05480fb4-kube-api-access-5tdz2\") pod \"observability-ui-dashboards-7d5fb4cbfb-jlcqj\" (UID: \"1dcce692-834e-48e2-bcfd-7c0f05480fb4\") " pod="openshift-operators/observability-ui-dashboards-7d5fb4cbfb-jlcqj" Nov 22 07:36:05 crc kubenswrapper[4853]: I1122 07:36:05.266502 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 22 07:36:05 crc kubenswrapper[4853]: I1122 07:36:05.284960 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Nov 22 07:36:05 crc kubenswrapper[4853]: I1122 07:36:05.291350 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-xlqg2" Nov 22 07:36:05 crc kubenswrapper[4853]: I1122 07:36:05.291613 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Nov 22 07:36:05 crc kubenswrapper[4853]: I1122 07:36:05.300243 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Nov 22 07:36:05 crc kubenswrapper[4853]: I1122 07:36:05.300712 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Nov 22 07:36:05 crc kubenswrapper[4853]: I1122 07:36:05.301811 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Nov 22 07:36:05 crc kubenswrapper[4853]: I1122 07:36:05.330102 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Nov 22 07:36:05 crc kubenswrapper[4853]: I1122 07:36:05.355389 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 22 07:36:05 crc kubenswrapper[4853]: I1122 07:36:05.371389 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/21a745c3-d66b-447a-bf7e-386ac88bb05f-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"21a745c3-d66b-447a-bf7e-386ac88bb05f\") " pod="openstack/prometheus-metric-storage-0" Nov 22 07:36:05 crc kubenswrapper[4853]: I1122 07:36:05.371458 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/21a745c3-d66b-447a-bf7e-386ac88bb05f-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"21a745c3-d66b-447a-bf7e-386ac88bb05f\") " pod="openstack/prometheus-metric-storage-0" Nov 22 07:36:05 crc kubenswrapper[4853]: I1122 07:36:05.371513 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/21a745c3-d66b-447a-bf7e-386ac88bb05f-config\") pod \"prometheus-metric-storage-0\" (UID: \"21a745c3-d66b-447a-bf7e-386ac88bb05f\") " 
pod="openstack/prometheus-metric-storage-0" Nov 22 07:36:05 crc kubenswrapper[4853]: I1122 07:36:05.371559 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/21a745c3-d66b-447a-bf7e-386ac88bb05f-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"21a745c3-d66b-447a-bf7e-386ac88bb05f\") " pod="openstack/prometheus-metric-storage-0" Nov 22 07:36:05 crc kubenswrapper[4853]: I1122 07:36:05.371588 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-clglx\" (UniqueName: \"kubernetes.io/projected/21a745c3-d66b-447a-bf7e-386ac88bb05f-kube-api-access-clglx\") pod \"prometheus-metric-storage-0\" (UID: \"21a745c3-d66b-447a-bf7e-386ac88bb05f\") " pod="openstack/prometheus-metric-storage-0" Nov 22 07:36:05 crc kubenswrapper[4853]: I1122 07:36:05.371626 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-73f142d7-70c2-4362-8972-074d65aa68e0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-73f142d7-70c2-4362-8972-074d65aa68e0\") pod \"prometheus-metric-storage-0\" (UID: \"21a745c3-d66b-447a-bf7e-386ac88bb05f\") " pod="openstack/prometheus-metric-storage-0" Nov 22 07:36:05 crc kubenswrapper[4853]: I1122 07:36:05.371676 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/21a745c3-d66b-447a-bf7e-386ac88bb05f-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"21a745c3-d66b-447a-bf7e-386ac88bb05f\") " pod="openstack/prometheus-metric-storage-0" Nov 22 07:36:05 crc kubenswrapper[4853]: I1122 07:36:05.371757 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/21a745c3-d66b-447a-bf7e-386ac88bb05f-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"21a745c3-d66b-447a-bf7e-386ac88bb05f\") " pod="openstack/prometheus-metric-storage-0" Nov 22 07:36:05 crc kubenswrapper[4853]: I1122 07:36:05.407660 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 22 07:36:05 crc kubenswrapper[4853]: I1122 07:36:05.460708 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-859d4ccd9f-mfkwx"] Nov 22 07:36:05 crc kubenswrapper[4853]: I1122 07:36:05.463375 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-859d4ccd9f-mfkwx" Nov 22 07:36:05 crc kubenswrapper[4853]: I1122 07:36:05.485177 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/21a745c3-d66b-447a-bf7e-386ac88bb05f-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"21a745c3-d66b-447a-bf7e-386ac88bb05f\") " pod="openstack/prometheus-metric-storage-0" Nov 22 07:36:05 crc kubenswrapper[4853]: I1122 07:36:05.485269 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/21a745c3-d66b-447a-bf7e-386ac88bb05f-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"21a745c3-d66b-447a-bf7e-386ac88bb05f\") " pod="openstack/prometheus-metric-storage-0" Nov 22 07:36:05 crc kubenswrapper[4853]: I1122 07:36:05.485401 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/21a745c3-d66b-447a-bf7e-386ac88bb05f-config\") pod \"prometheus-metric-storage-0\" (UID: \"21a745c3-d66b-447a-bf7e-386ac88bb05f\") " pod="openstack/prometheus-metric-storage-0" Nov 22 07:36:05 crc kubenswrapper[4853]: I1122 07:36:05.485521 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/21a745c3-d66b-447a-bf7e-386ac88bb05f-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"21a745c3-d66b-447a-bf7e-386ac88bb05f\") " pod="openstack/prometheus-metric-storage-0" Nov 22 07:36:05 crc kubenswrapper[4853]: I1122 07:36:05.485601 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-clglx\" (UniqueName: \"kubernetes.io/projected/21a745c3-d66b-447a-bf7e-386ac88bb05f-kube-api-access-clglx\") pod \"prometheus-metric-storage-0\" (UID: \"21a745c3-d66b-447a-bf7e-386ac88bb05f\") " pod="openstack/prometheus-metric-storage-0" Nov 22 07:36:05 crc kubenswrapper[4853]: I1122 07:36:05.485662 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-73f142d7-70c2-4362-8972-074d65aa68e0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-73f142d7-70c2-4362-8972-074d65aa68e0\") pod \"prometheus-metric-storage-0\" (UID: \"21a745c3-d66b-447a-bf7e-386ac88bb05f\") " pod="openstack/prometheus-metric-storage-0" Nov 22 07:36:05 crc kubenswrapper[4853]: I1122 07:36:05.485879 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/21a745c3-d66b-447a-bf7e-386ac88bb05f-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"21a745c3-d66b-447a-bf7e-386ac88bb05f\") " pod="openstack/prometheus-metric-storage-0" Nov 22 07:36:05 crc kubenswrapper[4853]: I1122 07:36:05.486022 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/21a745c3-d66b-447a-bf7e-386ac88bb05f-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"21a745c3-d66b-447a-bf7e-386ac88bb05f\") " pod="openstack/prometheus-metric-storage-0" Nov 22 07:36:05 crc kubenswrapper[4853]: E1122 07:36:05.486818 4853 configmap.go:193] Couldn't get configMap openstack/prometheus-metric-storage-rulefiles-0: configmap "prometheus-metric-storage-rulefiles-0" not found Nov 22 07:36:05 crc kubenswrapper[4853]: E1122 07:36:05.486996 4853 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/21a745c3-d66b-447a-bf7e-386ac88bb05f-prometheus-metric-storage-rulefiles-0 podName:21a745c3-d66b-447a-bf7e-386ac88bb05f nodeName:}" failed. No retries permitted until 2025-11-22 07:36:05.98690354 +0000 UTC m=+1564.827526156 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "prometheus-metric-storage-rulefiles-0" (UniqueName: "kubernetes.io/configmap/21a745c3-d66b-447a-bf7e-386ac88bb05f-prometheus-metric-storage-rulefiles-0") pod "prometheus-metric-storage-0" (UID: "21a745c3-d66b-447a-bf7e-386ac88bb05f") : configmap "prometheus-metric-storage-rulefiles-0" not found Nov 22 07:36:05 crc kubenswrapper[4853]: I1122 07:36:05.503081 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/21a745c3-d66b-447a-bf7e-386ac88bb05f-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"21a745c3-d66b-447a-bf7e-386ac88bb05f\") " pod="openstack/prometheus-metric-storage-0" Nov 22 07:36:05 crc kubenswrapper[4853]: I1122 07:36:05.504446 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/21a745c3-d66b-447a-bf7e-386ac88bb05f-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"21a745c3-d66b-447a-bf7e-386ac88bb05f\") " pod="openstack/prometheus-metric-storage-0" Nov 22 07:36:05 crc kubenswrapper[4853]: I1122 07:36:05.519328 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/21a745c3-d66b-447a-bf7e-386ac88bb05f-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"21a745c3-d66b-447a-bf7e-386ac88bb05f\") " pod="openstack/prometheus-metric-storage-0" Nov 22 07:36:05 crc kubenswrapper[4853]: I1122 07:36:05.523689 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-859d4ccd9f-mfkwx"] Nov 22 07:36:05 crc kubenswrapper[4853]: I1122 07:36:05.529961 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/21a745c3-d66b-447a-bf7e-386ac88bb05f-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"21a745c3-d66b-447a-bf7e-386ac88bb05f\") " pod="openstack/prometheus-metric-storage-0" Nov 22 07:36:05 crc kubenswrapper[4853]: I1122 07:36:05.530392 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/21a745c3-d66b-447a-bf7e-386ac88bb05f-config\") pod \"prometheus-metric-storage-0\" (UID: \"21a745c3-d66b-447a-bf7e-386ac88bb05f\") " pod="openstack/prometheus-metric-storage-0" Nov 22 07:36:05 crc kubenswrapper[4853]: I1122 07:36:05.534330 4853 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
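
The two MountVolume.SetUp failures in this window (secret "observability-ui-dashboards" at 07:36:05.063, configmap "prometheus-metric-storage-rulefiles-0" at 07:36:05.486, both "not found") are the usual create-ordering race: the pods were scheduled before their operators finished writing the referenced objects. The kubelet parks each failed mount in nestedpendingoperations and refuses retries until the logged deadline; the wait starts at 500ms and doubles on each further failure. Both mounts succeed on a later pass (07:36:05.602 for serving-cert, 07:36:06.013 for the rulefiles). A minimal sketch of that backoff rule, assuming the constants from the kubelet's exponentialbackoff package (500ms initial, doubling, capped at 2m2s):

// Sketch of the backoff visible in "No retries permitted until ...
// (durationBeforeRetry 500ms)". The constants mirror the kubelet's
// exponentialbackoff package as I understand it; treat them as assumptions.
package main

import (
	"fmt"
	"time"
)

const (
	initialDurationBeforeRetry = 500 * time.Millisecond
	maxDurationBeforeRetry     = 2*time.Minute + 2*time.Second
)

type backoff struct {
	lastError           error
	lastErrorTime       time.Time
	durationBeforeRetry time.Duration
}

// update runs after each failed MountVolume attempt: the wait doubles
// until it hits the cap, and no retry is permitted before
// lastErrorTime + durationBeforeRetry.
func (b *backoff) update(err error) {
	if b.durationBeforeRetry == 0 {
		b.durationBeforeRetry = initialDurationBeforeRetry
	} else {
		b.durationBeforeRetry *= 2
		if b.durationBeforeRetry > maxDurationBeforeRetry {
			b.durationBeforeRetry = maxDurationBeforeRetry
		}
	}
	b.lastError = err
	b.lastErrorTime = time.Now()
}

func main() {
	var b backoff
	for i := 0; i < 5; i++ {
		b.update(fmt.Errorf(`secret "observability-ui-dashboards" not found`))
		fmt.Printf("no retries permitted until %s (durationBeforeRetry %s)\n",
			b.lastErrorTime.Add(b.durationBeforeRetry).Format(time.RFC3339Nano),
			b.durationBeforeRetry)
	}
}

Running it prints a deadline series matching the bookkeeping above: 500ms, then 1s, 2s, and so on, up to the cap.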
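The csi_attacher.go entry just above ("STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...") is the kubelet probing the CSI node plugin for volume pvc-73f142d7-70c2-4362-8972-074d65aa68e0: kubevirt.io.hostpath-provisioner does not advertise the STAGE_UNSTAGE_VOLUME capability, so NodeStageVolume (MountDevice) is skipped, the "MountVolume.MountDevice succeeded ... device mount path ..." line that follows merely records the global mount path, and the real bind happens in NodePublishVolume (the MountVolume.SetUp success at 07:36:05.641). A sketch of that capability probe against the CSI spec's Go bindings; the helper is hypothetical, not the kubelet's actual code:

// nodeSupportsStageUnstage asks the CSI node plugin which RPCs it
// implements. Drivers that do all their work in NodePublishVolume,
// like kubevirt.io.hostpath-provisioner here, simply omit
// STAGE_UNSTAGE_VOLUME, and the kubelet skips MountDevice.
package csisketch

import (
	"context"

	"github.com/container-storage-interface/spec/lib/go/csi"
)

func nodeSupportsStageUnstage(ctx context.Context, node csi.NodeClient) (bool, error) {
	resp, err := node.NodeGetCapabilities(ctx, &csi.NodeGetCapabilitiesRequest{})
	if err != nil {
		return false, err
	}
	for _, c := range resp.GetCapabilities() {
		if c.GetRpc().GetType() == csi.NodeServiceCapability_RPC_STAGE_UNSTAGE_VOLUME {
			return true, nil // kubelet will call NodeStageVolume first
		}
	}
	return false, nil // kubelet skips MountDevice and publishes directly
}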
Nov 22 07:36:05 crc kubenswrapper[4853]: I1122 07:36:05.534380 4853 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-73f142d7-70c2-4362-8972-074d65aa68e0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-73f142d7-70c2-4362-8972-074d65aa68e0\") pod \"prometheus-metric-storage-0\" (UID: \"21a745c3-d66b-447a-bf7e-386ac88bb05f\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/2b9452f72c82fc383fb7f41be861bef3909a820d64fcd2aeadb4aba00c38cb08/globalmount\"" pod="openstack/prometheus-metric-storage-0" Nov 22 07:36:05 crc kubenswrapper[4853]: I1122 07:36:05.546852 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-clglx\" (UniqueName: \"kubernetes.io/projected/21a745c3-d66b-447a-bf7e-386ac88bb05f-kube-api-access-clglx\") pod \"prometheus-metric-storage-0\" (UID: \"21a745c3-d66b-447a-bf7e-386ac88bb05f\") " pod="openstack/prometheus-metric-storage-0" Nov 22 07:36:05 crc kubenswrapper[4853]: I1122 07:36:05.595160 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/9ef15139-fdad-4e4c-a3bf-e1050c5bf716-console-oauth-config\") pod \"console-859d4ccd9f-mfkwx\" (UID: \"9ef15139-fdad-4e4c-a3bf-e1050c5bf716\") " pod="openshift-console/console-859d4ccd9f-mfkwx" Nov 22 07:36:05 crc kubenswrapper[4853]: I1122 07:36:05.595467 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9ef15139-fdad-4e4c-a3bf-e1050c5bf716-trusted-ca-bundle\") pod \"console-859d4ccd9f-mfkwx\" (UID: \"9ef15139-fdad-4e4c-a3bf-e1050c5bf716\") " pod="openshift-console/console-859d4ccd9f-mfkwx" Nov 22 07:36:05 crc kubenswrapper[4853]: I1122 07:36:05.595594 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/9ef15139-fdad-4e4c-a3bf-e1050c5bf716-console-serving-cert\") pod \"console-859d4ccd9f-mfkwx\" (UID: \"9ef15139-fdad-4e4c-a3bf-e1050c5bf716\") " pod="openshift-console/console-859d4ccd9f-mfkwx" Nov 22 07:36:05 crc kubenswrapper[4853]: I1122 07:36:05.595760 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2qhxq\" (UniqueName: \"kubernetes.io/projected/9ef15139-fdad-4e4c-a3bf-e1050c5bf716-kube-api-access-2qhxq\") pod \"console-859d4ccd9f-mfkwx\" (UID: \"9ef15139-fdad-4e4c-a3bf-e1050c5bf716\") " pod="openshift-console/console-859d4ccd9f-mfkwx" Nov 22 07:36:05 crc kubenswrapper[4853]: I1122 07:36:05.595840 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9ef15139-fdad-4e4c-a3bf-e1050c5bf716-service-ca\") pod \"console-859d4ccd9f-mfkwx\" (UID: \"9ef15139-fdad-4e4c-a3bf-e1050c5bf716\") " pod="openshift-console/console-859d4ccd9f-mfkwx" Nov 22 07:36:05 crc kubenswrapper[4853]: I1122 07:36:05.596247 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/9ef15139-fdad-4e4c-a3bf-e1050c5bf716-console-config\") pod \"console-859d4ccd9f-mfkwx\" (UID: \"9ef15139-fdad-4e4c-a3bf-e1050c5bf716\") " pod="openshift-console/console-859d4ccd9f-mfkwx" Nov 22 07:36:05 crc kubenswrapper[4853]: I1122 07:36:05.596361 4853 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/9ef15139-fdad-4e4c-a3bf-e1050c5bf716-oauth-serving-cert\") pod \"console-859d4ccd9f-mfkwx\" (UID: \"9ef15139-fdad-4e4c-a3bf-e1050c5bf716\") " pod="openshift-console/console-859d4ccd9f-mfkwx" Nov 22 07:36:05 crc kubenswrapper[4853]: I1122 07:36:05.596520 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1dcce692-834e-48e2-bcfd-7c0f05480fb4-serving-cert\") pod \"observability-ui-dashboards-7d5fb4cbfb-jlcqj\" (UID: \"1dcce692-834e-48e2-bcfd-7c0f05480fb4\") " pod="openshift-operators/observability-ui-dashboards-7d5fb4cbfb-jlcqj" Nov 22 07:36:05 crc kubenswrapper[4853]: I1122 07:36:05.602387 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1dcce692-834e-48e2-bcfd-7c0f05480fb4-serving-cert\") pod \"observability-ui-dashboards-7d5fb4cbfb-jlcqj\" (UID: \"1dcce692-834e-48e2-bcfd-7c0f05480fb4\") " pod="openshift-operators/observability-ui-dashboards-7d5fb4cbfb-jlcqj" Nov 22 07:36:05 crc kubenswrapper[4853]: I1122 07:36:05.641680 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-73f142d7-70c2-4362-8972-074d65aa68e0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-73f142d7-70c2-4362-8972-074d65aa68e0\") pod \"prometheus-metric-storage-0\" (UID: \"21a745c3-d66b-447a-bf7e-386ac88bb05f\") " pod="openstack/prometheus-metric-storage-0" Nov 22 07:36:05 crc kubenswrapper[4853]: I1122 07:36:05.699125 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9ef15139-fdad-4e4c-a3bf-e1050c5bf716-trusted-ca-bundle\") pod \"console-859d4ccd9f-mfkwx\" (UID: \"9ef15139-fdad-4e4c-a3bf-e1050c5bf716\") " pod="openshift-console/console-859d4ccd9f-mfkwx" Nov 22 07:36:05 crc kubenswrapper[4853]: I1122 07:36:05.699200 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/9ef15139-fdad-4e4c-a3bf-e1050c5bf716-console-serving-cert\") pod \"console-859d4ccd9f-mfkwx\" (UID: \"9ef15139-fdad-4e4c-a3bf-e1050c5bf716\") " pod="openshift-console/console-859d4ccd9f-mfkwx" Nov 22 07:36:05 crc kubenswrapper[4853]: I1122 07:36:05.699245 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2qhxq\" (UniqueName: \"kubernetes.io/projected/9ef15139-fdad-4e4c-a3bf-e1050c5bf716-kube-api-access-2qhxq\") pod \"console-859d4ccd9f-mfkwx\" (UID: \"9ef15139-fdad-4e4c-a3bf-e1050c5bf716\") " pod="openshift-console/console-859d4ccd9f-mfkwx" Nov 22 07:36:05 crc kubenswrapper[4853]: I1122 07:36:05.699272 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9ef15139-fdad-4e4c-a3bf-e1050c5bf716-service-ca\") pod \"console-859d4ccd9f-mfkwx\" (UID: \"9ef15139-fdad-4e4c-a3bf-e1050c5bf716\") " pod="openshift-console/console-859d4ccd9f-mfkwx" Nov 22 07:36:05 crc kubenswrapper[4853]: I1122 07:36:05.699326 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/9ef15139-fdad-4e4c-a3bf-e1050c5bf716-console-config\") pod \"console-859d4ccd9f-mfkwx\" (UID: \"9ef15139-fdad-4e4c-a3bf-e1050c5bf716\") " 
pod="openshift-console/console-859d4ccd9f-mfkwx" Nov 22 07:36:05 crc kubenswrapper[4853]: I1122 07:36:05.699353 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/9ef15139-fdad-4e4c-a3bf-e1050c5bf716-oauth-serving-cert\") pod \"console-859d4ccd9f-mfkwx\" (UID: \"9ef15139-fdad-4e4c-a3bf-e1050c5bf716\") " pod="openshift-console/console-859d4ccd9f-mfkwx" Nov 22 07:36:05 crc kubenswrapper[4853]: I1122 07:36:05.699416 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/9ef15139-fdad-4e4c-a3bf-e1050c5bf716-console-oauth-config\") pod \"console-859d4ccd9f-mfkwx\" (UID: \"9ef15139-fdad-4e4c-a3bf-e1050c5bf716\") " pod="openshift-console/console-859d4ccd9f-mfkwx" Nov 22 07:36:05 crc kubenswrapper[4853]: I1122 07:36:05.702925 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/9ef15139-fdad-4e4c-a3bf-e1050c5bf716-oauth-serving-cert\") pod \"console-859d4ccd9f-mfkwx\" (UID: \"9ef15139-fdad-4e4c-a3bf-e1050c5bf716\") " pod="openshift-console/console-859d4ccd9f-mfkwx" Nov 22 07:36:05 crc kubenswrapper[4853]: I1122 07:36:05.703727 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/9ef15139-fdad-4e4c-a3bf-e1050c5bf716-console-config\") pod \"console-859d4ccd9f-mfkwx\" (UID: \"9ef15139-fdad-4e4c-a3bf-e1050c5bf716\") " pod="openshift-console/console-859d4ccd9f-mfkwx" Nov 22 07:36:05 crc kubenswrapper[4853]: I1122 07:36:05.703808 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9ef15139-fdad-4e4c-a3bf-e1050c5bf716-trusted-ca-bundle\") pod \"console-859d4ccd9f-mfkwx\" (UID: \"9ef15139-fdad-4e4c-a3bf-e1050c5bf716\") " pod="openshift-console/console-859d4ccd9f-mfkwx" Nov 22 07:36:05 crc kubenswrapper[4853]: I1122 07:36:05.704721 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9ef15139-fdad-4e4c-a3bf-e1050c5bf716-service-ca\") pod \"console-859d4ccd9f-mfkwx\" (UID: \"9ef15139-fdad-4e4c-a3bf-e1050c5bf716\") " pod="openshift-console/console-859d4ccd9f-mfkwx" Nov 22 07:36:05 crc kubenswrapper[4853]: I1122 07:36:05.705074 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/9ef15139-fdad-4e4c-a3bf-e1050c5bf716-console-serving-cert\") pod \"console-859d4ccd9f-mfkwx\" (UID: \"9ef15139-fdad-4e4c-a3bf-e1050c5bf716\") " pod="openshift-console/console-859d4ccd9f-mfkwx" Nov 22 07:36:05 crc kubenswrapper[4853]: I1122 07:36:05.706308 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/9ef15139-fdad-4e4c-a3bf-e1050c5bf716-console-oauth-config\") pod \"console-859d4ccd9f-mfkwx\" (UID: \"9ef15139-fdad-4e4c-a3bf-e1050c5bf716\") " pod="openshift-console/console-859d4ccd9f-mfkwx" Nov 22 07:36:05 crc kubenswrapper[4853]: I1122 07:36:05.725617 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2qhxq\" (UniqueName: \"kubernetes.io/projected/9ef15139-fdad-4e4c-a3bf-e1050c5bf716-kube-api-access-2qhxq\") pod \"console-859d4ccd9f-mfkwx\" (UID: \"9ef15139-fdad-4e4c-a3bf-e1050c5bf716\") " pod="openshift-console/console-859d4ccd9f-mfkwx" Nov 
22 07:36:05 crc kubenswrapper[4853]: I1122 07:36:05.840406 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-ui-dashboards-7d5fb4cbfb-jlcqj" Nov 22 07:36:05 crc kubenswrapper[4853]: I1122 07:36:05.958130 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-859d4ccd9f-mfkwx" Nov 22 07:36:06 crc kubenswrapper[4853]: I1122 07:36:06.010254 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/21a745c3-d66b-447a-bf7e-386ac88bb05f-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"21a745c3-d66b-447a-bf7e-386ac88bb05f\") " pod="openstack/prometheus-metric-storage-0" Nov 22 07:36:06 crc kubenswrapper[4853]: I1122 07:36:06.013254 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/21a745c3-d66b-447a-bf7e-386ac88bb05f-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"21a745c3-d66b-447a-bf7e-386ac88bb05f\") " pod="openstack/prometheus-metric-storage-0" Nov 22 07:36:06 crc kubenswrapper[4853]: I1122 07:36:06.262067 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Nov 22 07:36:06 crc kubenswrapper[4853]: I1122 07:36:06.560789 4853 generic.go:334] "Generic (PLEG): container finished" podID="cae818e5-34d5-43c7-95af-e82e21309758" containerID="2dab1c284dc4f04d22d206f5a309f6e67d04e3a3ab55a50f66d5329496cf1c04" exitCode=0 Nov 22 07:36:06 crc kubenswrapper[4853]: I1122 07:36:06.560866 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tv8h9" event={"ID":"cae818e5-34d5-43c7-95af-e82e21309758","Type":"ContainerDied","Data":"2dab1c284dc4f04d22d206f5a309f6e67d04e3a3ab55a50f66d5329496cf1c04"} Nov 22 07:36:08 crc kubenswrapper[4853]: I1122 07:36:08.424451 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Nov 22 07:36:08 crc kubenswrapper[4853]: I1122 07:36:08.427118 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0"
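
Each "No sandbox for pod can be found. Need to start a new one" entry is the first sync of a newly scheduled pod: the kubelet asks the runtime (CRI-O here) for an existing pod sandbox, finds none, and creates one before any init or app container can start. Roughly, against the CRI API; hypothetical wiring rather than kubelet code, and the pod-UID label key is the one the kubelet uses for sandboxes to the best of my knowledge:

// ensureSandbox sketches the "No sandbox for pod can be found" path:
// list CRI sandboxes tagged with the pod's UID; if none exist, ask the
// runtime to start a new one.
package sandboxsketch

import (
	"context"

	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func ensureSandbox(ctx context.Context, rs runtimeapi.RuntimeServiceClient,
	podUID string, cfg *runtimeapi.PodSandboxConfig) (string, error) {
	resp, err := rs.ListPodSandbox(ctx, &runtimeapi.ListPodSandboxRequest{
		Filter: &runtimeapi.PodSandboxFilter{
			// Sandboxes carry the pod UID as a label.
			LabelSelector: map[string]string{"io.kubernetes.pod.uid": podUID},
		},
	})
	if err != nil {
		return "", err
	}
	if len(resp.Items) > 0 {
		return resp.Items[0].Id, nil // reuse the existing sandbox
	}
	// No sandbox for pod can be found. Need to start a new one.
	created, err := rs.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: cfg})
	if err != nil {
		return "", err
	}
	return created.PodSandboxId, nil
}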
Nov 22 07:36:08 crc kubenswrapper[4853]: I1122 07:36:08.441490 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Nov 22 07:36:08 crc kubenswrapper[4853]: I1122 07:36:08.442839 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Nov 22 07:36:08 crc kubenswrapper[4853]: I1122 07:36:08.443018 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Nov 22 07:36:08 crc kubenswrapper[4853]: I1122 07:36:08.443114 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-64vqm" Nov 22 07:36:08 crc kubenswrapper[4853]: I1122 07:36:08.443123 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Nov 22 07:36:08 crc kubenswrapper[4853]: I1122 07:36:08.443029 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Nov 22 07:36:08 crc kubenswrapper[4853]: I1122 07:36:08.539793 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-nhs2x"] Nov 22 07:36:08 crc kubenswrapper[4853]: I1122 07:36:08.543276 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-nhs2x" Nov 22 07:36:08 crc kubenswrapper[4853]: I1122 07:36:08.547029 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Nov 22 07:36:08 crc kubenswrapper[4853]: I1122 07:36:08.547651 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-jzb7p" Nov 22 07:36:08 crc kubenswrapper[4853]: I1122 07:36:08.549060 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Nov 22 07:36:08 crc kubenswrapper[4853]: I1122 07:36:08.556478 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-k99wz"] Nov 22 07:36:08 crc kubenswrapper[4853]: I1122 07:36:08.560190 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-k99wz" Nov 22 07:36:08 crc kubenswrapper[4853]: I1122 07:36:08.567831 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-nhs2x"] Nov 22 07:36:08 crc kubenswrapper[4853]: I1122 07:36:08.599525 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef57a60a-7a73-45c6-8760-7e215eedd374-config\") pod \"ovsdbserver-nb-0\" (UID: \"ef57a60a-7a73-45c6-8760-7e215eedd374\") " pod="openstack/ovsdbserver-nb-0" Nov 22 07:36:08 crc kubenswrapper[4853]: I1122 07:36:08.599614 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef57a60a-7a73-45c6-8760-7e215eedd374-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"ef57a60a-7a73-45c6-8760-7e215eedd374\") " pod="openstack/ovsdbserver-nb-0" Nov 22 07:36:08 crc kubenswrapper[4853]: I1122 07:36:08.599664 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ef57a60a-7a73-45c6-8760-7e215eedd374-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"ef57a60a-7a73-45c6-8760-7e215eedd374\") " pod="openstack/ovsdbserver-nb-0" Nov 22 07:36:08 crc kubenswrapper[4853]: I1122 07:36:08.599737 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-245m2\" (UniqueName: \"kubernetes.io/projected/ef57a60a-7a73-45c6-8760-7e215eedd374-kube-api-access-245m2\") pod \"ovsdbserver-nb-0\" (UID: \"ef57a60a-7a73-45c6-8760-7e215eedd374\") " pod="openstack/ovsdbserver-nb-0" Nov 22 07:36:08 crc kubenswrapper[4853]: I1122 07:36:08.599832 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ef57a60a-7a73-45c6-8760-7e215eedd374-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"ef57a60a-7a73-45c6-8760-7e215eedd374\") " pod="openstack/ovsdbserver-nb-0" Nov 22 07:36:08 crc kubenswrapper[4853]: I1122 07:36:08.600315 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/ef57a60a-7a73-45c6-8760-7e215eedd374-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"ef57a60a-7a73-45c6-8760-7e215eedd374\") " pod="openstack/ovsdbserver-nb-0" Nov 22 07:36:08 crc kubenswrapper[4853]: I1122 07:36:08.600470 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/ef57a60a-7a73-45c6-8760-7e215eedd374-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"ef57a60a-7a73-45c6-8760-7e215eedd374\") " pod="openstack/ovsdbserver-nb-0" Nov 22 07:36:08 crc kubenswrapper[4853]: I1122 07:36:08.600552 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-nb-0\" (UID: \"ef57a60a-7a73-45c6-8760-7e215eedd374\") " pod="openstack/ovsdbserver-nb-0" Nov 22 07:36:08 crc kubenswrapper[4853]: I1122 07:36:08.616446 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-k99wz"] Nov 22 07:36:08 crc kubenswrapper[4853]: I1122 07:36:08.703541 4853 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/e573b0f6-8f5e-45a9-b00e-410826a9a36d-var-lib\") pod \"ovn-controller-ovs-k99wz\" (UID: \"e573b0f6-8f5e-45a9-b00e-410826a9a36d\") " pod="openstack/ovn-controller-ovs-k99wz" Nov 22 07:36:08 crc kubenswrapper[4853]: I1122 07:36:08.703608 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l99gf\" (UniqueName: \"kubernetes.io/projected/e573b0f6-8f5e-45a9-b00e-410826a9a36d-kube-api-access-l99gf\") pod \"ovn-controller-ovs-k99wz\" (UID: \"e573b0f6-8f5e-45a9-b00e-410826a9a36d\") " pod="openstack/ovn-controller-ovs-k99wz" Nov 22 07:36:08 crc kubenswrapper[4853]: I1122 07:36:08.703646 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/ef57a60a-7a73-45c6-8760-7e215eedd374-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"ef57a60a-7a73-45c6-8760-7e215eedd374\") " pod="openstack/ovsdbserver-nb-0" Nov 22 07:36:08 crc kubenswrapper[4853]: I1122 07:36:08.703693 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/e573b0f6-8f5e-45a9-b00e-410826a9a36d-var-log\") pod \"ovn-controller-ovs-k99wz\" (UID: \"e573b0f6-8f5e-45a9-b00e-410826a9a36d\") " pod="openstack/ovn-controller-ovs-k99wz" Nov 22 07:36:08 crc kubenswrapper[4853]: I1122 07:36:08.703722 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/ef57a60a-7a73-45c6-8760-7e215eedd374-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"ef57a60a-7a73-45c6-8760-7e215eedd374\") " pod="openstack/ovsdbserver-nb-0" Nov 22 07:36:08 crc kubenswrapper[4853]: I1122 07:36:08.703740 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d6z67\" (UniqueName: \"kubernetes.io/projected/05c9113f-59ff-46cc-b704-eb9c8553ad37-kube-api-access-d6z67\") pod \"ovn-controller-nhs2x\" (UID: \"05c9113f-59ff-46cc-b704-eb9c8553ad37\") " pod="openstack/ovn-controller-nhs2x" Nov 22 07:36:08 crc kubenswrapper[4853]: I1122 07:36:08.703788 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/05c9113f-59ff-46cc-b704-eb9c8553ad37-ovn-controller-tls-certs\") pod \"ovn-controller-nhs2x\" (UID: \"05c9113f-59ff-46cc-b704-eb9c8553ad37\") " pod="openstack/ovn-controller-nhs2x" Nov 22 07:36:08 crc kubenswrapper[4853]: I1122 07:36:08.703809 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-nb-0\" (UID: \"ef57a60a-7a73-45c6-8760-7e215eedd374\") " pod="openstack/ovsdbserver-nb-0" Nov 22 07:36:08 crc kubenswrapper[4853]: I1122 07:36:08.703833 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/e573b0f6-8f5e-45a9-b00e-410826a9a36d-var-run\") pod \"ovn-controller-ovs-k99wz\" (UID: \"e573b0f6-8f5e-45a9-b00e-410826a9a36d\") " pod="openstack/ovn-controller-ovs-k99wz" Nov 22 07:36:08 crc kubenswrapper[4853]: I1122 07:36:08.703872 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/ef57a60a-7a73-45c6-8760-7e215eedd374-config\") pod \"ovsdbserver-nb-0\" (UID: \"ef57a60a-7a73-45c6-8760-7e215eedd374\") " pod="openstack/ovsdbserver-nb-0" Nov 22 07:36:08 crc kubenswrapper[4853]: I1122 07:36:08.703899 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef57a60a-7a73-45c6-8760-7e215eedd374-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"ef57a60a-7a73-45c6-8760-7e215eedd374\") " pod="openstack/ovsdbserver-nb-0" Nov 22 07:36:08 crc kubenswrapper[4853]: I1122 07:36:08.703931 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ef57a60a-7a73-45c6-8760-7e215eedd374-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"ef57a60a-7a73-45c6-8760-7e215eedd374\") " pod="openstack/ovsdbserver-nb-0" Nov 22 07:36:08 crc kubenswrapper[4853]: I1122 07:36:08.703977 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-245m2\" (UniqueName: \"kubernetes.io/projected/ef57a60a-7a73-45c6-8760-7e215eedd374-kube-api-access-245m2\") pod \"ovsdbserver-nb-0\" (UID: \"ef57a60a-7a73-45c6-8760-7e215eedd374\") " pod="openstack/ovsdbserver-nb-0" Nov 22 07:36:08 crc kubenswrapper[4853]: I1122 07:36:08.704018 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/05c9113f-59ff-46cc-b704-eb9c8553ad37-var-run-ovn\") pod \"ovn-controller-nhs2x\" (UID: \"05c9113f-59ff-46cc-b704-eb9c8553ad37\") " pod="openstack/ovn-controller-nhs2x" Nov 22 07:36:08 crc kubenswrapper[4853]: I1122 07:36:08.704047 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ef57a60a-7a73-45c6-8760-7e215eedd374-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"ef57a60a-7a73-45c6-8760-7e215eedd374\") " pod="openstack/ovsdbserver-nb-0" Nov 22 07:36:08 crc kubenswrapper[4853]: I1122 07:36:08.704093 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e573b0f6-8f5e-45a9-b00e-410826a9a36d-scripts\") pod \"ovn-controller-ovs-k99wz\" (UID: \"e573b0f6-8f5e-45a9-b00e-410826a9a36d\") " pod="openstack/ovn-controller-ovs-k99wz" Nov 22 07:36:08 crc kubenswrapper[4853]: I1122 07:36:08.704129 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/e573b0f6-8f5e-45a9-b00e-410826a9a36d-etc-ovs\") pod \"ovn-controller-ovs-k99wz\" (UID: \"e573b0f6-8f5e-45a9-b00e-410826a9a36d\") " pod="openstack/ovn-controller-ovs-k99wz" Nov 22 07:36:08 crc kubenswrapper[4853]: I1122 07:36:08.704152 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05c9113f-59ff-46cc-b704-eb9c8553ad37-combined-ca-bundle\") pod \"ovn-controller-nhs2x\" (UID: \"05c9113f-59ff-46cc-b704-eb9c8553ad37\") " pod="openstack/ovn-controller-nhs2x" Nov 22 07:36:08 crc kubenswrapper[4853]: I1122 07:36:08.704230 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/05c9113f-59ff-46cc-b704-eb9c8553ad37-var-run\") pod \"ovn-controller-nhs2x\" (UID: 
\"05c9113f-59ff-46cc-b704-eb9c8553ad37\") " pod="openstack/ovn-controller-nhs2x" Nov 22 07:36:08 crc kubenswrapper[4853]: I1122 07:36:08.704255 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/05c9113f-59ff-46cc-b704-eb9c8553ad37-var-log-ovn\") pod \"ovn-controller-nhs2x\" (UID: \"05c9113f-59ff-46cc-b704-eb9c8553ad37\") " pod="openstack/ovn-controller-nhs2x" Nov 22 07:36:08 crc kubenswrapper[4853]: I1122 07:36:08.704282 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/05c9113f-59ff-46cc-b704-eb9c8553ad37-scripts\") pod \"ovn-controller-nhs2x\" (UID: \"05c9113f-59ff-46cc-b704-eb9c8553ad37\") " pod="openstack/ovn-controller-nhs2x" Nov 22 07:36:08 crc kubenswrapper[4853]: I1122 07:36:08.706076 4853 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-nb-0\" (UID: \"ef57a60a-7a73-45c6-8760-7e215eedd374\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/ovsdbserver-nb-0" Nov 22 07:36:08 crc kubenswrapper[4853]: I1122 07:36:08.706257 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/ef57a60a-7a73-45c6-8760-7e215eedd374-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"ef57a60a-7a73-45c6-8760-7e215eedd374\") " pod="openstack/ovsdbserver-nb-0" Nov 22 07:36:08 crc kubenswrapper[4853]: I1122 07:36:08.707500 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef57a60a-7a73-45c6-8760-7e215eedd374-config\") pod \"ovsdbserver-nb-0\" (UID: \"ef57a60a-7a73-45c6-8760-7e215eedd374\") " pod="openstack/ovsdbserver-nb-0" Nov 22 07:36:08 crc kubenswrapper[4853]: I1122 07:36:08.709237 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ef57a60a-7a73-45c6-8760-7e215eedd374-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"ef57a60a-7a73-45c6-8760-7e215eedd374\") " pod="openstack/ovsdbserver-nb-0" Nov 22 07:36:08 crc kubenswrapper[4853]: I1122 07:36:08.713139 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/ef57a60a-7a73-45c6-8760-7e215eedd374-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"ef57a60a-7a73-45c6-8760-7e215eedd374\") " pod="openstack/ovsdbserver-nb-0" Nov 22 07:36:08 crc kubenswrapper[4853]: I1122 07:36:08.713868 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef57a60a-7a73-45c6-8760-7e215eedd374-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"ef57a60a-7a73-45c6-8760-7e215eedd374\") " pod="openstack/ovsdbserver-nb-0" Nov 22 07:36:08 crc kubenswrapper[4853]: I1122 07:36:08.726108 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ef57a60a-7a73-45c6-8760-7e215eedd374-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"ef57a60a-7a73-45c6-8760-7e215eedd374\") " pod="openstack/ovsdbserver-nb-0" Nov 22 07:36:08 crc kubenswrapper[4853]: I1122 07:36:08.726619 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-245m2\" (UniqueName: 
\"kubernetes.io/projected/ef57a60a-7a73-45c6-8760-7e215eedd374-kube-api-access-245m2\") pod \"ovsdbserver-nb-0\" (UID: \"ef57a60a-7a73-45c6-8760-7e215eedd374\") " pod="openstack/ovsdbserver-nb-0" Nov 22 07:36:08 crc kubenswrapper[4853]: I1122 07:36:08.743261 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-nb-0\" (UID: \"ef57a60a-7a73-45c6-8760-7e215eedd374\") " pod="openstack/ovsdbserver-nb-0" Nov 22 07:36:08 crc kubenswrapper[4853]: I1122 07:36:08.758757 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Nov 22 07:36:08 crc kubenswrapper[4853]: I1122 07:36:08.806996 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/05c9113f-59ff-46cc-b704-eb9c8553ad37-var-run\") pod \"ovn-controller-nhs2x\" (UID: \"05c9113f-59ff-46cc-b704-eb9c8553ad37\") " pod="openstack/ovn-controller-nhs2x" Nov 22 07:36:08 crc kubenswrapper[4853]: I1122 07:36:08.807522 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/05c9113f-59ff-46cc-b704-eb9c8553ad37-var-log-ovn\") pod \"ovn-controller-nhs2x\" (UID: \"05c9113f-59ff-46cc-b704-eb9c8553ad37\") " pod="openstack/ovn-controller-nhs2x" Nov 22 07:36:08 crc kubenswrapper[4853]: I1122 07:36:08.807554 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/05c9113f-59ff-46cc-b704-eb9c8553ad37-scripts\") pod \"ovn-controller-nhs2x\" (UID: \"05c9113f-59ff-46cc-b704-eb9c8553ad37\") " pod="openstack/ovn-controller-nhs2x" Nov 22 07:36:08 crc kubenswrapper[4853]: I1122 07:36:08.807614 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/e573b0f6-8f5e-45a9-b00e-410826a9a36d-var-lib\") pod \"ovn-controller-ovs-k99wz\" (UID: \"e573b0f6-8f5e-45a9-b00e-410826a9a36d\") " pod="openstack/ovn-controller-ovs-k99wz" Nov 22 07:36:08 crc kubenswrapper[4853]: I1122 07:36:08.807661 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l99gf\" (UniqueName: \"kubernetes.io/projected/e573b0f6-8f5e-45a9-b00e-410826a9a36d-kube-api-access-l99gf\") pod \"ovn-controller-ovs-k99wz\" (UID: \"e573b0f6-8f5e-45a9-b00e-410826a9a36d\") " pod="openstack/ovn-controller-ovs-k99wz" Nov 22 07:36:08 crc kubenswrapper[4853]: I1122 07:36:08.807710 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/e573b0f6-8f5e-45a9-b00e-410826a9a36d-var-log\") pod \"ovn-controller-ovs-k99wz\" (UID: \"e573b0f6-8f5e-45a9-b00e-410826a9a36d\") " pod="openstack/ovn-controller-ovs-k99wz" Nov 22 07:36:08 crc kubenswrapper[4853]: I1122 07:36:08.807735 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d6z67\" (UniqueName: \"kubernetes.io/projected/05c9113f-59ff-46cc-b704-eb9c8553ad37-kube-api-access-d6z67\") pod \"ovn-controller-nhs2x\" (UID: \"05c9113f-59ff-46cc-b704-eb9c8553ad37\") " pod="openstack/ovn-controller-nhs2x" Nov 22 07:36:08 crc kubenswrapper[4853]: I1122 07:36:08.807791 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/05c9113f-59ff-46cc-b704-eb9c8553ad37-ovn-controller-tls-certs\") pod \"ovn-controller-nhs2x\" (UID: \"05c9113f-59ff-46cc-b704-eb9c8553ad37\") " pod="openstack/ovn-controller-nhs2x" Nov 22 07:36:08 crc kubenswrapper[4853]: I1122 07:36:08.807839 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/e573b0f6-8f5e-45a9-b00e-410826a9a36d-var-run\") pod \"ovn-controller-ovs-k99wz\" (UID: \"e573b0f6-8f5e-45a9-b00e-410826a9a36d\") " pod="openstack/ovn-controller-ovs-k99wz" Nov 22 07:36:08 crc kubenswrapper[4853]: I1122 07:36:08.807990 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/05c9113f-59ff-46cc-b704-eb9c8553ad37-var-run-ovn\") pod \"ovn-controller-nhs2x\" (UID: \"05c9113f-59ff-46cc-b704-eb9c8553ad37\") " pod="openstack/ovn-controller-nhs2x" Nov 22 07:36:08 crc kubenswrapper[4853]: I1122 07:36:08.808052 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e573b0f6-8f5e-45a9-b00e-410826a9a36d-scripts\") pod \"ovn-controller-ovs-k99wz\" (UID: \"e573b0f6-8f5e-45a9-b00e-410826a9a36d\") " pod="openstack/ovn-controller-ovs-k99wz" Nov 22 07:36:08 crc kubenswrapper[4853]: I1122 07:36:08.808089 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/e573b0f6-8f5e-45a9-b00e-410826a9a36d-etc-ovs\") pod \"ovn-controller-ovs-k99wz\" (UID: \"e573b0f6-8f5e-45a9-b00e-410826a9a36d\") " pod="openstack/ovn-controller-ovs-k99wz" Nov 22 07:36:08 crc kubenswrapper[4853]: I1122 07:36:08.808120 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05c9113f-59ff-46cc-b704-eb9c8553ad37-combined-ca-bundle\") pod \"ovn-controller-nhs2x\" (UID: \"05c9113f-59ff-46cc-b704-eb9c8553ad37\") " pod="openstack/ovn-controller-nhs2x" Nov 22 07:36:08 crc kubenswrapper[4853]: I1122 07:36:08.811451 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/e573b0f6-8f5e-45a9-b00e-410826a9a36d-var-log\") pod \"ovn-controller-ovs-k99wz\" (UID: \"e573b0f6-8f5e-45a9-b00e-410826a9a36d\") " pod="openstack/ovn-controller-ovs-k99wz" Nov 22 07:36:08 crc kubenswrapper[4853]: I1122 07:36:08.811532 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/e573b0f6-8f5e-45a9-b00e-410826a9a36d-var-lib\") pod \"ovn-controller-ovs-k99wz\" (UID: \"e573b0f6-8f5e-45a9-b00e-410826a9a36d\") " pod="openstack/ovn-controller-ovs-k99wz" Nov 22 07:36:08 crc kubenswrapper[4853]: I1122 07:36:08.811716 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/05c9113f-59ff-46cc-b704-eb9c8553ad37-var-log-ovn\") pod \"ovn-controller-nhs2x\" (UID: \"05c9113f-59ff-46cc-b704-eb9c8553ad37\") " pod="openstack/ovn-controller-nhs2x" Nov 22 07:36:08 crc kubenswrapper[4853]: I1122 07:36:08.811938 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/05c9113f-59ff-46cc-b704-eb9c8553ad37-var-run\") pod \"ovn-controller-nhs2x\" (UID: \"05c9113f-59ff-46cc-b704-eb9c8553ad37\") " pod="openstack/ovn-controller-nhs2x" Nov 22 07:36:08 crc kubenswrapper[4853]: I1122 07:36:08.812434 4853 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/05c9113f-59ff-46cc-b704-eb9c8553ad37-var-run-ovn\") pod \"ovn-controller-nhs2x\" (UID: \"05c9113f-59ff-46cc-b704-eb9c8553ad37\") " pod="openstack/ovn-controller-nhs2x" Nov 22 07:36:08 crc kubenswrapper[4853]: I1122 07:36:08.812489 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/e573b0f6-8f5e-45a9-b00e-410826a9a36d-var-run\") pod \"ovn-controller-ovs-k99wz\" (UID: \"e573b0f6-8f5e-45a9-b00e-410826a9a36d\") " pod="openstack/ovn-controller-ovs-k99wz" Nov 22 07:36:08 crc kubenswrapper[4853]: I1122 07:36:08.812619 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/e573b0f6-8f5e-45a9-b00e-410826a9a36d-etc-ovs\") pod \"ovn-controller-ovs-k99wz\" (UID: \"e573b0f6-8f5e-45a9-b00e-410826a9a36d\") " pod="openstack/ovn-controller-ovs-k99wz" Nov 22 07:36:08 crc kubenswrapper[4853]: I1122 07:36:08.815708 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e573b0f6-8f5e-45a9-b00e-410826a9a36d-scripts\") pod \"ovn-controller-ovs-k99wz\" (UID: \"e573b0f6-8f5e-45a9-b00e-410826a9a36d\") " pod="openstack/ovn-controller-ovs-k99wz" Nov 22 07:36:08 crc kubenswrapper[4853]: I1122 07:36:08.815714 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/05c9113f-59ff-46cc-b704-eb9c8553ad37-scripts\") pod \"ovn-controller-nhs2x\" (UID: \"05c9113f-59ff-46cc-b704-eb9c8553ad37\") " pod="openstack/ovn-controller-nhs2x" Nov 22 07:36:08 crc kubenswrapper[4853]: I1122 07:36:08.817484 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05c9113f-59ff-46cc-b704-eb9c8553ad37-combined-ca-bundle\") pod \"ovn-controller-nhs2x\" (UID: \"05c9113f-59ff-46cc-b704-eb9c8553ad37\") " pod="openstack/ovn-controller-nhs2x" Nov 22 07:36:08 crc kubenswrapper[4853]: I1122 07:36:08.819639 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/05c9113f-59ff-46cc-b704-eb9c8553ad37-ovn-controller-tls-certs\") pod \"ovn-controller-nhs2x\" (UID: \"05c9113f-59ff-46cc-b704-eb9c8553ad37\") " pod="openstack/ovn-controller-nhs2x" Nov 22 07:36:08 crc kubenswrapper[4853]: I1122 07:36:08.830692 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d6z67\" (UniqueName: \"kubernetes.io/projected/05c9113f-59ff-46cc-b704-eb9c8553ad37-kube-api-access-d6z67\") pod \"ovn-controller-nhs2x\" (UID: \"05c9113f-59ff-46cc-b704-eb9c8553ad37\") " pod="openstack/ovn-controller-nhs2x" Nov 22 07:36:08 crc kubenswrapper[4853]: I1122 07:36:08.830757 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l99gf\" (UniqueName: \"kubernetes.io/projected/e573b0f6-8f5e-45a9-b00e-410826a9a36d-kube-api-access-l99gf\") pod \"ovn-controller-ovs-k99wz\" (UID: \"e573b0f6-8f5e-45a9-b00e-410826a9a36d\") " pod="openstack/ovn-controller-ovs-k99wz" Nov 22 07:36:08 crc kubenswrapper[4853]: I1122 07:36:08.911058 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-nhs2x" Nov 22 07:36:08 crc kubenswrapper[4853]: I1122 07:36:08.924359 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-k99wz" Nov 22 07:36:11 crc kubenswrapper[4853]: I1122 07:36:11.470425 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Nov 22 07:36:11 crc kubenswrapper[4853]: I1122 07:36:11.473265 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Nov 22 07:36:11 crc kubenswrapper[4853]: I1122 07:36:11.478298 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Nov 22 07:36:11 crc kubenswrapper[4853]: I1122 07:36:11.483400 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Nov 22 07:36:11 crc kubenswrapper[4853]: I1122 07:36:11.483437 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Nov 22 07:36:11 crc kubenswrapper[4853]: I1122 07:36:11.483587 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-9pkqb" Nov 22 07:36:11 crc kubenswrapper[4853]: I1122 07:36:11.494405 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Nov 22 07:36:11 crc kubenswrapper[4853]: I1122 07:36:11.582535 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9dc9521-7d6a-4622-9a63-9c761ff0721c-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"a9dc9521-7d6a-4622-9a63-9c761ff0721c\") " pod="openstack/ovsdbserver-sb-0" Nov 22 07:36:11 crc kubenswrapper[4853]: I1122 07:36:11.582729 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a9dc9521-7d6a-4622-9a63-9c761ff0721c-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"a9dc9521-7d6a-4622-9a63-9c761ff0721c\") " pod="openstack/ovsdbserver-sb-0" Nov 22 07:36:11 crc kubenswrapper[4853]: I1122 07:36:11.582757 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a9dc9521-7d6a-4622-9a63-9c761ff0721c-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"a9dc9521-7d6a-4622-9a63-9c761ff0721c\") " pod="openstack/ovsdbserver-sb-0" Nov 22 07:36:11 crc kubenswrapper[4853]: I1122 07:36:11.582909 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x4f9t\" (UniqueName: \"kubernetes.io/projected/a9dc9521-7d6a-4622-9a63-9c761ff0721c-kube-api-access-x4f9t\") pod \"ovsdbserver-sb-0\" (UID: \"a9dc9521-7d6a-4622-9a63-9c761ff0721c\") " pod="openstack/ovsdbserver-sb-0" Nov 22 07:36:11 crc kubenswrapper[4853]: I1122 07:36:11.582959 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a9dc9521-7d6a-4622-9a63-9c761ff0721c-config\") pod \"ovsdbserver-sb-0\" (UID: \"a9dc9521-7d6a-4622-9a63-9c761ff0721c\") " pod="openstack/ovsdbserver-sb-0" Nov 22 07:36:11 crc kubenswrapper[4853]: I1122 07:36:11.582995 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/a9dc9521-7d6a-4622-9a63-9c761ff0721c-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"a9dc9521-7d6a-4622-9a63-9c761ff0721c\") " 
pod="openstack/ovsdbserver-sb-0" Nov 22 07:36:11 crc kubenswrapper[4853]: I1122 07:36:11.583021 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a9dc9521-7d6a-4622-9a63-9c761ff0721c-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"a9dc9521-7d6a-4622-9a63-9c761ff0721c\") " pod="openstack/ovsdbserver-sb-0" Nov 22 07:36:11 crc kubenswrapper[4853]: I1122 07:36:11.583066 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"ovsdbserver-sb-0\" (UID: \"a9dc9521-7d6a-4622-9a63-9c761ff0721c\") " pod="openstack/ovsdbserver-sb-0" Nov 22 07:36:11 crc kubenswrapper[4853]: I1122 07:36:11.686274 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a9dc9521-7d6a-4622-9a63-9c761ff0721c-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"a9dc9521-7d6a-4622-9a63-9c761ff0721c\") " pod="openstack/ovsdbserver-sb-0" Nov 22 07:36:11 crc kubenswrapper[4853]: I1122 07:36:11.686339 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a9dc9521-7d6a-4622-9a63-9c761ff0721c-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"a9dc9521-7d6a-4622-9a63-9c761ff0721c\") " pod="openstack/ovsdbserver-sb-0" Nov 22 07:36:11 crc kubenswrapper[4853]: I1122 07:36:11.686397 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x4f9t\" (UniqueName: \"kubernetes.io/projected/a9dc9521-7d6a-4622-9a63-9c761ff0721c-kube-api-access-x4f9t\") pod \"ovsdbserver-sb-0\" (UID: \"a9dc9521-7d6a-4622-9a63-9c761ff0721c\") " pod="openstack/ovsdbserver-sb-0" Nov 22 07:36:11 crc kubenswrapper[4853]: I1122 07:36:11.686437 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a9dc9521-7d6a-4622-9a63-9c761ff0721c-config\") pod \"ovsdbserver-sb-0\" (UID: \"a9dc9521-7d6a-4622-9a63-9c761ff0721c\") " pod="openstack/ovsdbserver-sb-0" Nov 22 07:36:11 crc kubenswrapper[4853]: I1122 07:36:11.686483 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/a9dc9521-7d6a-4622-9a63-9c761ff0721c-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"a9dc9521-7d6a-4622-9a63-9c761ff0721c\") " pod="openstack/ovsdbserver-sb-0" Nov 22 07:36:11 crc kubenswrapper[4853]: I1122 07:36:11.686513 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a9dc9521-7d6a-4622-9a63-9c761ff0721c-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"a9dc9521-7d6a-4622-9a63-9c761ff0721c\") " pod="openstack/ovsdbserver-sb-0" Nov 22 07:36:11 crc kubenswrapper[4853]: I1122 07:36:11.686564 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"ovsdbserver-sb-0\" (UID: \"a9dc9521-7d6a-4622-9a63-9c761ff0721c\") " pod="openstack/ovsdbserver-sb-0" Nov 22 07:36:11 crc kubenswrapper[4853]: I1122 07:36:11.686647 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/a9dc9521-7d6a-4622-9a63-9c761ff0721c-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"a9dc9521-7d6a-4622-9a63-9c761ff0721c\") " pod="openstack/ovsdbserver-sb-0" Nov 22 07:36:11 crc kubenswrapper[4853]: I1122 07:36:11.688108 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a9dc9521-7d6a-4622-9a63-9c761ff0721c-config\") pod \"ovsdbserver-sb-0\" (UID: \"a9dc9521-7d6a-4622-9a63-9c761ff0721c\") " pod="openstack/ovsdbserver-sb-0" Nov 22 07:36:11 crc kubenswrapper[4853]: I1122 07:36:11.688323 4853 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"ovsdbserver-sb-0\" (UID: \"a9dc9521-7d6a-4622-9a63-9c761ff0721c\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/ovsdbserver-sb-0" Nov 22 07:36:11 crc kubenswrapper[4853]: I1122 07:36:11.688435 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a9dc9521-7d6a-4622-9a63-9c761ff0721c-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"a9dc9521-7d6a-4622-9a63-9c761ff0721c\") " pod="openstack/ovsdbserver-sb-0" Nov 22 07:36:11 crc kubenswrapper[4853]: I1122 07:36:11.688813 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/a9dc9521-7d6a-4622-9a63-9c761ff0721c-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"a9dc9521-7d6a-4622-9a63-9c761ff0721c\") " pod="openstack/ovsdbserver-sb-0" Nov 22 07:36:11 crc kubenswrapper[4853]: I1122 07:36:11.695082 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9dc9521-7d6a-4622-9a63-9c761ff0721c-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"a9dc9521-7d6a-4622-9a63-9c761ff0721c\") " pod="openstack/ovsdbserver-sb-0" Nov 22 07:36:11 crc kubenswrapper[4853]: I1122 07:36:11.695429 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a9dc9521-7d6a-4622-9a63-9c761ff0721c-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"a9dc9521-7d6a-4622-9a63-9c761ff0721c\") " pod="openstack/ovsdbserver-sb-0" Nov 22 07:36:11 crc kubenswrapper[4853]: I1122 07:36:11.704962 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a9dc9521-7d6a-4622-9a63-9c761ff0721c-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"a9dc9521-7d6a-4622-9a63-9c761ff0721c\") " pod="openstack/ovsdbserver-sb-0" Nov 22 07:36:11 crc kubenswrapper[4853]: I1122 07:36:11.707712 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x4f9t\" (UniqueName: \"kubernetes.io/projected/a9dc9521-7d6a-4622-9a63-9c761ff0721c-kube-api-access-x4f9t\") pod \"ovsdbserver-sb-0\" (UID: \"a9dc9521-7d6a-4622-9a63-9c761ff0721c\") " pod="openstack/ovsdbserver-sb-0" Nov 22 07:36:11 crc kubenswrapper[4853]: I1122 07:36:11.718713 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"ovsdbserver-sb-0\" (UID: \"a9dc9521-7d6a-4622-9a63-9c761ff0721c\") " pod="openstack/ovsdbserver-sb-0" Nov 22 07:36:11 crc kubenswrapper[4853]: I1122 07:36:11.796728 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Nov 22 07:36:12 crc kubenswrapper[4853]: W1122 07:36:12.466458 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod573160d1_5593_42ee_906a_44b4fbc5abe4.slice/crio-3f71e9075bc0138e28238a9ddb1f2c7ce635150ee8887685056c0c5440b33ef7 WatchSource:0}: Error finding container 3f71e9075bc0138e28238a9ddb1f2c7ce635150ee8887685056c0c5440b33ef7: Status 404 returned error can't find the container with id 3f71e9075bc0138e28238a9ddb1f2c7ce635150ee8887685056c0c5440b33ef7 Nov 22 07:36:12 crc kubenswrapper[4853]: I1122 07:36:12.647783 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"573160d1-5593-42ee-906a-44b4fbc5abe4","Type":"ContainerStarted","Data":"3f71e9075bc0138e28238a9ddb1f2c7ce635150ee8887685056c0c5440b33ef7"} Nov 22 07:36:43 crc kubenswrapper[4853]: E1122 07:36:43.356132 4853 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Nov 22 07:36:43 crc kubenswrapper[4853]: E1122 07:36:43.357346 4853 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9gff7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-57d769cc4f-fwph9_openstack(3cb42f37-eef6-4874-acad-7bcf2dd29078): ErrImagePull: rpc error: code = Canceled desc = copying config: 
context canceled" logger="UnhandledError" Nov 22 07:36:43 crc kubenswrapper[4853]: E1122 07:36:43.359516 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-57d769cc4f-fwph9" podUID="3cb42f37-eef6-4874-acad-7bcf2dd29078" Nov 22 07:36:43 crc kubenswrapper[4853]: E1122 07:36:43.396946 4853 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Nov 22 07:36:43 crc kubenswrapper[4853]: E1122 07:36:43.397160 4853 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4h8m4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-666b6646f7-xsw8l_openstack(89c6d393-491f-477d-8d77-5a14ae67ed3b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 22 07:36:43 crc kubenswrapper[4853]: E1122 07:36:43.399199 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-666b6646f7-xsw8l" podUID="89c6d393-491f-477d-8d77-5a14ae67ed3b" Nov 22 07:36:44 crc kubenswrapper[4853]: E1122 07:36:44.026039 4853 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-57d769cc4f-fwph9" podUID="3cb42f37-eef6-4874-acad-7bcf2dd29078" Nov 22 07:36:44 crc kubenswrapper[4853]: E1122 07:36:44.026142 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-666b6646f7-xsw8l" podUID="89c6d393-491f-477d-8d77-5a14ae67ed3b" Nov 22 07:36:45 crc kubenswrapper[4853]: E1122 07:36:45.818636 4853 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Nov 22 07:36:45 crc kubenswrapper[4853]: E1122 07:36:45.819475 4853 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rphn6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-kcbgt_openstack(f4fdb834-8a6e-4b2c-8bda-99753119f475): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 22 07:36:45 crc kubenswrapper[4853]: E1122 07:36:45.821013 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-675f4bcbfc-kcbgt" podUID="f4fdb834-8a6e-4b2c-8bda-99753119f475" Nov 
22 07:36:47 crc kubenswrapper[4853]: E1122 07:36:47.222473 4853 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Nov 22 07:36:47 crc kubenswrapper[4853]: E1122 07:36:47.223146 4853 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6qmpc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-server-0_openstack(2eadd806-7143-46ba-9e49-f19ac0bd52bd): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 22 07:36:47 crc kubenswrapper[4853]: E1122 07:36:47.224442 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-server-0" podUID="2eadd806-7143-46ba-9e49-f19ac0bd52bd" Nov 22 07:36:47 
crc kubenswrapper[4853]: E1122 07:36:47.260533 4853 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Nov 22 07:36:47 crc kubenswrapper[4853]: E1122 07:36:47.260782 4853 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8n59s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-j62dc_openstack(718da1d0-2bf3-40ca-87a5-5e7085c281cd): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 22 07:36:47 crc kubenswrapper[4853]: E1122 07:36:47.261965 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-j62dc" podUID="718da1d0-2bf3-40ca-87a5-5e7085c281cd" Nov 22 07:36:47 crc kubenswrapper[4853]: E1122 07:36:47.938517 4853 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-memcached:current-podified" Nov 22 07:36:47 crc kubenswrapper[4853]: E1122 07:36:47.938734 4853 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:memcached,Image:quay.io/podified-antelope-centos9/openstack-memcached:current-podified,Command:[/usr/bin/dumb-init 
-- /usr/local/bin/kolla_start],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:memcached,HostPort:0,ContainerPort:11211,Protocol:TCP,HostIP:,},ContainerPort{Name:memcached-tls,HostPort:0,ContainerPort:11212,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:POD_IPS,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIPs,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:CONFIG_HASH,Value:n549h659h654hf6h589h5c4h675h646hddhcchd9h596h574h7bh569hf9hcch66hfdh97h5b9h6bh657h649h574h6hbfh55fh665hc4h64bh5c5q,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/src,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/certs/memcached.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/private/memcached.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lvhcn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 },Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42457,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42457,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod memcached-0_openstack(b64e6703-1b51-477a-8898-3646dbf7b00c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 22 07:36:47 crc kubenswrapper[4853]: E1122 07:36:47.939879 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/memcached-0" podUID="b64e6703-1b51-477a-8898-3646dbf7b00c" Nov 22 07:36:48 crc kubenswrapper[4853]: E1122 07:36:48.068533 4853 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-memcached:current-podified\\\"\"" pod="openstack/memcached-0" podUID="b64e6703-1b51-477a-8898-3646dbf7b00c" Nov 22 07:36:48 crc kubenswrapper[4853]: E1122 07:36:48.069536 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-server-0" podUID="2eadd806-7143-46ba-9e49-f19ac0bd52bd" Nov 22 07:36:49 crc kubenswrapper[4853]: E1122 07:36:49.813609 4853 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Nov 22 07:36:49 crc kubenswrapper[4853]: E1122 07:36:49.814623 4853 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rrrz9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cell1-server-0_openstack(d0e9072b-3e2a-4283-a697-8411049c5161): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 22 07:36:49 crc kubenswrapper[4853]: E1122 07:36:49.815792 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-cell1-server-0" podUID="d0e9072b-3e2a-4283-a697-8411049c5161" Nov 22 07:36:49 crc kubenswrapper[4853]: E1122 07:36:49.822069 4853 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" Nov 22 07:36:49 crc kubenswrapper[4853]: E1122 07:36:49.823936 4853 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[bash 
/var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5hldd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-galera-0_openstack(410e418b-aee9-40c9-96ed-0f8c5c882148): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 22 07:36:49 crc kubenswrapper[4853]: E1122 07:36:49.825187 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-galera-0" podUID="410e418b-aee9-40c9-96ed-0f8c5c882148" Nov 22 07:36:49 crc kubenswrapper[4853]: E1122 07:36:49.826039 4853 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" Nov 22 07:36:49 crc kubenswrapper[4853]: E1122 07:36:49.826212 4853 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[bash 
/var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vj4lr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-cell1-galera-0_openstack(fd5f90cd-e8e9-489e-b7fd-fde9fd9c342d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 22 07:36:49 crc kubenswrapper[4853]: E1122 07:36:49.827599 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-cell1-galera-0" podUID="fd5f90cd-e8e9-489e-b7fd-fde9fd9c342d" Nov 22 07:36:49 crc kubenswrapper[4853]: I1122 07:36:49.965358 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-kcbgt" Nov 22 07:36:50 crc kubenswrapper[4853]: I1122 07:36:50.011429 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rphn6\" (UniqueName: \"kubernetes.io/projected/f4fdb834-8a6e-4b2c-8bda-99753119f475-kube-api-access-rphn6\") pod \"f4fdb834-8a6e-4b2c-8bda-99753119f475\" (UID: \"f4fdb834-8a6e-4b2c-8bda-99753119f475\") " Nov 22 07:36:50 crc kubenswrapper[4853]: I1122 07:36:50.011518 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4fdb834-8a6e-4b2c-8bda-99753119f475-config\") pod \"f4fdb834-8a6e-4b2c-8bda-99753119f475\" (UID: \"f4fdb834-8a6e-4b2c-8bda-99753119f475\") " Nov 22 07:36:50 crc kubenswrapper[4853]: I1122 07:36:50.025912 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f4fdb834-8a6e-4b2c-8bda-99753119f475-config" (OuterVolumeSpecName: "config") pod "f4fdb834-8a6e-4b2c-8bda-99753119f475" (UID: "f4fdb834-8a6e-4b2c-8bda-99753119f475"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:36:50 crc kubenswrapper[4853]: I1122 07:36:50.043963 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4fdb834-8a6e-4b2c-8bda-99753119f475-kube-api-access-rphn6" (OuterVolumeSpecName: "kube-api-access-rphn6") pod "f4fdb834-8a6e-4b2c-8bda-99753119f475" (UID: "f4fdb834-8a6e-4b2c-8bda-99753119f475"). InnerVolumeSpecName "kube-api-access-rphn6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:36:50 crc kubenswrapper[4853]: I1122 07:36:50.094172 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-kcbgt" event={"ID":"f4fdb834-8a6e-4b2c-8bda-99753119f475","Type":"ContainerDied","Data":"3d434d6ae2586f59775c280ba0c333592ad7d0434bf2e8586b4607e727ed884f"} Nov 22 07:36:50 crc kubenswrapper[4853]: I1122 07:36:50.094338 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-kcbgt" Nov 22 07:36:50 crc kubenswrapper[4853]: E1122 07:36:50.099216 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-mariadb:current-podified\\\"\"" pod="openstack/openstack-galera-0" podUID="410e418b-aee9-40c9-96ed-0f8c5c882148" Nov 22 07:36:50 crc kubenswrapper[4853]: E1122 07:36:50.103463 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-mariadb:current-podified\\\"\"" pod="openstack/openstack-cell1-galera-0" podUID="fd5f90cd-e8e9-489e-b7fd-fde9fd9c342d" Nov 22 07:36:50 crc kubenswrapper[4853]: E1122 07:36:50.103597 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-cell1-server-0" podUID="d0e9072b-3e2a-4283-a697-8411049c5161" Nov 22 07:36:50 crc kubenswrapper[4853]: I1122 07:36:50.115681 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rphn6\" (UniqueName: \"kubernetes.io/projected/f4fdb834-8a6e-4b2c-8bda-99753119f475-kube-api-access-rphn6\") on node \"crc\" DevicePath \"\"" Nov 22 07:36:50 crc kubenswrapper[4853]: I1122 07:36:50.115765 4853 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4fdb834-8a6e-4b2c-8bda-99753119f475-config\") on node \"crc\" DevicePath \"\"" Nov 22 07:36:50 crc kubenswrapper[4853]: I1122 07:36:50.228056 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-kcbgt"] Nov 22 07:36:50 crc kubenswrapper[4853]: I1122 07:36:50.238988 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-kcbgt"] Nov 22 07:36:50 crc kubenswrapper[4853]: I1122 07:36:50.346907 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-ui-dashboards-7d5fb4cbfb-jlcqj"] Nov 22 07:36:50 crc kubenswrapper[4853]: I1122 07:36:50.373861 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 22 07:36:50 crc kubenswrapper[4853]: I1122 07:36:50.558464 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-nhs2x"] Nov 22 07:36:51 crc kubenswrapper[4853]: I1122 07:36:51.762213 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4fdb834-8a6e-4b2c-8bda-99753119f475" path="/var/lib/kubelet/pods/f4fdb834-8a6e-4b2c-8bda-99753119f475/volumes" Nov 22 07:36:52 crc kubenswrapper[4853]: I1122 07:36:52.126265 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-j62dc" event={"ID":"718da1d0-2bf3-40ca-87a5-5e7085c281cd","Type":"ContainerDied","Data":"c5b5b460d4d88ff43a97465e433b8e8616115db9c612cb23ef136e303bd4a42b"} Nov 22 07:36:52 crc kubenswrapper[4853]: I1122 07:36:52.126326 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c5b5b460d4d88ff43a97465e433b8e8616115db9c612cb23ef136e303bd4a42b" Nov 22 07:36:52 crc kubenswrapper[4853]: W1122 07:36:52.150108 4853 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod05c9113f_59ff_46cc_b704_eb9c8553ad37.slice/crio-ef75a014a1470b3d287b346f6997166251cb03ab0db5df670927b9263282143d WatchSource:0}: Error finding container ef75a014a1470b3d287b346f6997166251cb03ab0db5df670927b9263282143d: Status 404 returned error can't find the container with id ef75a014a1470b3d287b346f6997166251cb03ab0db5df670927b9263282143d Nov 22 07:36:52 crc kubenswrapper[4853]: I1122 07:36:52.246810 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-j62dc" Nov 22 07:36:52 crc kubenswrapper[4853]: I1122 07:36:52.378364 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8n59s\" (UniqueName: \"kubernetes.io/projected/718da1d0-2bf3-40ca-87a5-5e7085c281cd-kube-api-access-8n59s\") pod \"718da1d0-2bf3-40ca-87a5-5e7085c281cd\" (UID: \"718da1d0-2bf3-40ca-87a5-5e7085c281cd\") " Nov 22 07:36:52 crc kubenswrapper[4853]: I1122 07:36:52.378584 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/718da1d0-2bf3-40ca-87a5-5e7085c281cd-config\") pod \"718da1d0-2bf3-40ca-87a5-5e7085c281cd\" (UID: \"718da1d0-2bf3-40ca-87a5-5e7085c281cd\") " Nov 22 07:36:52 crc kubenswrapper[4853]: I1122 07:36:52.379150 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/718da1d0-2bf3-40ca-87a5-5e7085c281cd-config" (OuterVolumeSpecName: "config") pod "718da1d0-2bf3-40ca-87a5-5e7085c281cd" (UID: "718da1d0-2bf3-40ca-87a5-5e7085c281cd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:36:52 crc kubenswrapper[4853]: I1122 07:36:52.379222 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/718da1d0-2bf3-40ca-87a5-5e7085c281cd-dns-svc\") pod \"718da1d0-2bf3-40ca-87a5-5e7085c281cd\" (UID: \"718da1d0-2bf3-40ca-87a5-5e7085c281cd\") " Nov 22 07:36:52 crc kubenswrapper[4853]: I1122 07:36:52.379617 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/718da1d0-2bf3-40ca-87a5-5e7085c281cd-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "718da1d0-2bf3-40ca-87a5-5e7085c281cd" (UID: "718da1d0-2bf3-40ca-87a5-5e7085c281cd"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:36:52 crc kubenswrapper[4853]: I1122 07:36:52.380299 4853 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/718da1d0-2bf3-40ca-87a5-5e7085c281cd-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 22 07:36:52 crc kubenswrapper[4853]: I1122 07:36:52.380321 4853 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/718da1d0-2bf3-40ca-87a5-5e7085c281cd-config\") on node \"crc\" DevicePath \"\"" Nov 22 07:36:52 crc kubenswrapper[4853]: I1122 07:36:52.384698 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/718da1d0-2bf3-40ca-87a5-5e7085c281cd-kube-api-access-8n59s" (OuterVolumeSpecName: "kube-api-access-8n59s") pod "718da1d0-2bf3-40ca-87a5-5e7085c281cd" (UID: "718da1d0-2bf3-40ca-87a5-5e7085c281cd"). InnerVolumeSpecName "kube-api-access-8n59s". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:36:52 crc kubenswrapper[4853]: I1122 07:36:52.483350 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8n59s\" (UniqueName: \"kubernetes.io/projected/718da1d0-2bf3-40ca-87a5-5e7085c281cd-kube-api-access-8n59s\") on node \"crc\" DevicePath \"\"" Nov 22 07:36:52 crc kubenswrapper[4853]: I1122 07:36:52.625462 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-859d4ccd9f-mfkwx"] Nov 22 07:36:53 crc kubenswrapper[4853]: I1122 07:36:53.136506 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"21a745c3-d66b-447a-bf7e-386ac88bb05f","Type":"ContainerStarted","Data":"12bb5a88e209af6ba4cdd62a5959708ea9e0b6d437c0df3aeb8f4fa8ae1c3898"} Nov 22 07:36:53 crc kubenswrapper[4853]: I1122 07:36:53.137465 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-ui-dashboards-7d5fb4cbfb-jlcqj" event={"ID":"1dcce692-834e-48e2-bcfd-7c0f05480fb4","Type":"ContainerStarted","Data":"6d9b9fe46614d84d6d02ba0adeb757a5b9d6dc103b813eaa219f8d3a6195ae70"} Nov 22 07:36:53 crc kubenswrapper[4853]: I1122 07:36:53.138492 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-nhs2x" event={"ID":"05c9113f-59ff-46cc-b704-eb9c8553ad37","Type":"ContainerStarted","Data":"ef75a014a1470b3d287b346f6997166251cb03ab0db5df670927b9263282143d"} Nov 22 07:36:53 crc kubenswrapper[4853]: I1122 07:36:53.138515 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-j62dc" Nov 22 07:36:53 crc kubenswrapper[4853]: I1122 07:36:53.200640 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-j62dc"] Nov 22 07:36:53 crc kubenswrapper[4853]: I1122 07:36:53.210538 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-j62dc"] Nov 22 07:36:53 crc kubenswrapper[4853]: I1122 07:36:53.791150 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="718da1d0-2bf3-40ca-87a5-5e7085c281cd" path="/var/lib/kubelet/pods/718da1d0-2bf3-40ca-87a5-5e7085c281cd/volumes" Nov 22 07:36:53 crc kubenswrapper[4853]: I1122 07:36:53.837406 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Nov 22 07:36:54 crc kubenswrapper[4853]: W1122 07:36:54.113250 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda9dc9521_7d6a_4622_9a63_9c761ff0721c.slice/crio-9af6304dce797015d3c18d0443237288e4e923037b2ac8cf7b94b0fd4f010cf2 WatchSource:0}: Error finding container 9af6304dce797015d3c18d0443237288e4e923037b2ac8cf7b94b0fd4f010cf2: Status 404 returned error can't find the container with id 9af6304dce797015d3c18d0443237288e4e923037b2ac8cf7b94b0fd4f010cf2 Nov 22 07:36:54 crc kubenswrapper[4853]: I1122 07:36:54.164105 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-859d4ccd9f-mfkwx" event={"ID":"9ef15139-fdad-4e4c-a3bf-e1050c5bf716","Type":"ContainerStarted","Data":"9b2c092f6b1fedea7de9d1368b8a42f871ddc22d2ff6411f37b3d2fdec539721"} Nov 22 07:36:54 crc kubenswrapper[4853]: I1122 07:36:54.168194 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"a9dc9521-7d6a-4622-9a63-9c761ff0721c","Type":"ContainerStarted","Data":"9af6304dce797015d3c18d0443237288e4e923037b2ac8cf7b94b0fd4f010cf2"} Nov 22 
07:36:54 crc kubenswrapper[4853]: I1122 07:36:54.438432 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-k99wz"] Nov 22 07:36:54 crc kubenswrapper[4853]: I1122 07:36:54.624243 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Nov 22 07:36:54 crc kubenswrapper[4853]: W1122 07:36:54.820222 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode573b0f6_8f5e_45a9_b00e_410826a9a36d.slice/crio-783e0a3211d8815b3c2181f6ba66a69be7dca66ba0c81f3da1c591bc4fec3cf1 WatchSource:0}: Error finding container 783e0a3211d8815b3c2181f6ba66a69be7dca66ba0c81f3da1c591bc4fec3cf1: Status 404 returned error can't find the container with id 783e0a3211d8815b3c2181f6ba66a69be7dca66ba0c81f3da1c591bc4fec3cf1 Nov 22 07:36:55 crc kubenswrapper[4853]: I1122 07:36:55.180608 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-k99wz" event={"ID":"e573b0f6-8f5e-45a9-b00e-410826a9a36d","Type":"ContainerStarted","Data":"783e0a3211d8815b3c2181f6ba66a69be7dca66ba0c81f3da1c591bc4fec3cf1"} Nov 22 07:36:55 crc kubenswrapper[4853]: I1122 07:36:55.184435 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-859d4ccd9f-mfkwx" event={"ID":"9ef15139-fdad-4e4c-a3bf-e1050c5bf716","Type":"ContainerStarted","Data":"5175cf9c836352f97332a2c0c6db07457d64d6ceafeca1b01793f0c6de4f5982"} Nov 22 07:36:55 crc kubenswrapper[4853]: I1122 07:36:55.211892 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-859d4ccd9f-mfkwx" podStartSLOduration=50.211863838 podStartE2EDuration="50.211863838s" podCreationTimestamp="2025-11-22 07:36:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:36:55.207333627 +0000 UTC m=+1614.047956263" watchObservedRunningTime="2025-11-22 07:36:55.211863838 +0000 UTC m=+1614.052486464" Nov 22 07:36:55 crc kubenswrapper[4853]: I1122 07:36:55.958780 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-859d4ccd9f-mfkwx" Nov 22 07:36:55 crc kubenswrapper[4853]: I1122 07:36:55.958885 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-859d4ccd9f-mfkwx" Nov 22 07:36:55 crc kubenswrapper[4853]: I1122 07:36:55.974652 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-859d4ccd9f-mfkwx" Nov 22 07:36:56 crc kubenswrapper[4853]: I1122 07:36:56.198311 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"ef57a60a-7a73-45c6-8760-7e215eedd374","Type":"ContainerStarted","Data":"701c0e44a2db33f7cbd7b5456b23f5ea456b866372e36a2fd2f3bbe38752cb7b"} Nov 22 07:36:56 crc kubenswrapper[4853]: I1122 07:36:56.204187 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-859d4ccd9f-mfkwx" Nov 22 07:36:56 crc kubenswrapper[4853]: I1122 07:36:56.285138 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-c5f4cf575-47s4q"] Nov 22 07:37:05 crc kubenswrapper[4853]: E1122 07:37:05.028714 4853 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified" Nov 22 07:37:05 
crc kubenswrapper[4853]: E1122 07:37:05.029781 4853 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ovn-controller,Image:quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified,Command:[ovn-controller --pidfile unix:/run/openvswitch/db.sock --certificate=/etc/pki/tls/certs/ovndb.crt --private-key=/etc/pki/tls/private/ovndb.key --ca-cert=/etc/pki/tls/certs/ovndbca.crt],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n644hdfh556h69h666h59ch5cfh97h5h54dh65fh5d9h59h5c4h7dh55fh65dhbhc7h5dh59fh5cch5c9h56bh68chbhc8h75hc6h565h566h654q,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:var-run,ReadOnly:false,MountPath:/var/run/openvswitch,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-run-ovn,ReadOnly:false,MountPath:/var/run/ovn,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-log-ovn,ReadOnly:false,MountPath:/var/log/ovn,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-controller-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndb.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-controller-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/private/ovndb.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-controller-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndbca.crt,SubPath:ca.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-d6z67,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/local/bin/container-scripts/ovn_controller_liveness.sh],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/local/bin/container-scripts/ovn_controller_readiness.sh],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:&Lifecycle{PostStart:nil,PreStop:&LifecycleHandler{Exec:&ExecAction{Command:[/usr/share/ovn/scripts/ovn-ctl stop_controller],},HTTPGet:nil,TCPSocket:nil,Sleep:nil,},},TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[NET_ADMIN SYS_ADMIN 
SYS_NICE],Drop:[],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-controller-nhs2x_openstack(05c9113f-59ff-46cc-b704-eb9c8553ad37): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 22 07:37:05 crc kubenswrapper[4853]: E1122 07:37:05.030984 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovn-controller\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ovn-controller-nhs2x" podUID="05c9113f-59ff-46cc-b704-eb9c8553ad37" Nov 22 07:37:05 crc kubenswrapper[4853]: E1122 07:37:05.291866 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovn-controller\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified\\\"\"" pod="openstack/ovn-controller-nhs2x" podUID="05c9113f-59ff-46cc-b704-eb9c8553ad37" Nov 22 07:37:16 crc kubenswrapper[4853]: E1122 07:37:16.832966 4853 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Nov 22 07:37:16 crc kubenswrapper[4853]: E1122 07:37:16.834236 4853 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6qmpc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-server-0_openstack(2eadd806-7143-46ba-9e49-f19ac0bd52bd): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 22 07:37:16 crc kubenswrapper[4853]: E1122 07:37:16.836029 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-server-0" podUID="2eadd806-7143-46ba-9e49-f19ac0bd52bd" Nov 22 07:37:19 crc kubenswrapper[4853]: E1122 07:37:19.681661 4853 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0" Nov 22 07:37:19 crc kubenswrapper[4853]: E1122 07:37:19.682256 4853 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0" Nov 22 07:37:19 crc kubenswrapper[4853]: E1122 07:37:19.682477 4853 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-state-metrics,Image:registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0,Command:[],Args:[--resources=pods 
--namespaces=openstack],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:http-metrics,HostPort:0,ContainerPort:8080,Protocol:TCP,HostIP:,},ContainerPort{Name:telemetry,HostPort:0,ContainerPort:8081,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-lmlcz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{0 8080 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-state-metrics-0_openstack(573160d1-5593-42ee-906a-44b4fbc5abe4): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 22 07:37:19 crc kubenswrapper[4853]: E1122 07:37:19.683796 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openstack/kube-state-metrics-0" podUID="573160d1-5593-42ee-906a-44b4fbc5abe4" Nov 22 07:37:20 crc kubenswrapper[4853]: I1122 07:37:20.452155 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tv8h9" event={"ID":"cae818e5-34d5-43c7-95af-e82e21309758","Type":"ContainerStarted","Data":"ad8af03f275a3e1cd5730d3368af19fee07a74b7b0641ae99544eefd7b3f9fc2"} Nov 22 07:37:21 crc kubenswrapper[4853]: I1122 07:37:21.363239 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-c5f4cf575-47s4q" podUID="a0659bb8-90a4-4018-b1a5-64d307a50dcd" containerName="console" containerID="cri-o://ce34071f8bfdc0c83adea546339db47ef1dc168ff80bdb05db3fb5acc9181e0a" gracePeriod=15 Nov 22 07:37:21 crc kubenswrapper[4853]: I1122 07:37:21.465005 4853 generic.go:334] "Generic (PLEG): container finished" podID="cae818e5-34d5-43c7-95af-e82e21309758" containerID="ad8af03f275a3e1cd5730d3368af19fee07a74b7b0641ae99544eefd7b3f9fc2" exitCode=0 Nov 22 07:37:21 crc kubenswrapper[4853]: I1122 07:37:21.465161 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tv8h9" 
event={"ID":"cae818e5-34d5-43c7-95af-e82e21309758","Type":"ContainerDied","Data":"ad8af03f275a3e1cd5730d3368af19fee07a74b7b0641ae99544eefd7b3f9fc2"} Nov 22 07:37:21 crc kubenswrapper[4853]: E1122 07:37:21.832581 4853 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ovn-sb-db-server:current-podified" Nov 22 07:37:21 crc kubenswrapper[4853]: E1122 07:37:21.832836 4853 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ovsdbserver-sb,Image:quay.io/podified-antelope-centos9/openstack-ovn-sb-db-server:current-podified,Command:[/usr/bin/dumb-init],Args:[/usr/local/bin/container-scripts/setup.sh],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ncbhd5h574h5bch658hbh65h6dh67fh55bh666h84h5fchf5h65fh56dh67bh6h666h597h64h5d8h55chbdh586h4h5ddh9ch5dfhdbh579h66dq,ValueFrom:nil,},EnvVar{Name:OVN_LOGDIR,Value:/tmp,ValueFrom:nil,},EnvVar{Name:OVN_RUNDIR,Value:/tmp,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovndbcluster-sb-etc-ovn,ReadOnly:false,MountPath:/etc/ovn,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdb-rundir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-sb-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndb.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-sb-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/private/ovndb.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-sb-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndbca.crt,SubPath:ca.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x4f9t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pidof ovsdb-server],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pidof 
ovsdb-server],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:&Lifecycle{PostStart:nil,PreStop:&LifecycleHandler{Exec:&ExecAction{Command:[/usr/local/bin/container-scripts/cleanup.sh],},HTTPGet:nil,TCPSocket:nil,Sleep:nil,},},TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pidof ovsdb-server],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:20,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovsdbserver-sb-0_openstack(a9dc9521-7d6a-4622-9a63-9c761ff0721c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 22 07:37:22 crc kubenswrapper[4853]: I1122 07:37:22.479630 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-c5f4cf575-47s4q_a0659bb8-90a4-4018-b1a5-64d307a50dcd/console/0.log" Nov 22 07:37:22 crc kubenswrapper[4853]: I1122 07:37:22.479697 4853 generic.go:334] "Generic (PLEG): container finished" podID="a0659bb8-90a4-4018-b1a5-64d307a50dcd" containerID="ce34071f8bfdc0c83adea546339db47ef1dc168ff80bdb05db3fb5acc9181e0a" exitCode=2 Nov 22 07:37:22 crc kubenswrapper[4853]: I1122 07:37:22.479802 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-c5f4cf575-47s4q" event={"ID":"a0659bb8-90a4-4018-b1a5-64d307a50dcd","Type":"ContainerDied","Data":"ce34071f8bfdc0c83adea546339db47ef1dc168ff80bdb05db3fb5acc9181e0a"} Nov 22 07:37:24 crc kubenswrapper[4853]: E1122 07:37:24.050716 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0\\\"\"" pod="openstack/kube-state-metrics-0" podUID="573160d1-5593-42ee-906a-44b4fbc5abe4" Nov 22 07:37:24 crc kubenswrapper[4853]: E1122 07:37:24.054092 4853 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Nov 22 07:37:24 crc kubenswrapper[4853]: E1122 07:37:24.054282 4853 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 
30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rrrz9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cell1-server-0_openstack(d0e9072b-3e2a-4283-a697-8411049c5161): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 22 07:37:24 crc kubenswrapper[4853]: E1122 07:37:24.055445 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-cell1-server-0" podUID="d0e9072b-3e2a-4283-a697-8411049c5161" Nov 22 07:37:29 crc kubenswrapper[4853]: I1122 07:37:29.593832 4853 patch_prober.go:28] interesting pod/console-c5f4cf575-47s4q container/console namespace/openshift-console: Readiness probe status=failure output="Get \"https://10.217.0.90:8443/health\": dial tcp 10.217.0.90:8443: connect: connection refused" start-of-body= Nov 22 07:37:29 crc kubenswrapper[4853]: I1122 07:37:29.594918 4853 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/console-c5f4cf575-47s4q" podUID="a0659bb8-90a4-4018-b1a5-64d307a50dcd" containerName="console" probeResult="failure" output="Get \"https://10.217.0.90:8443/health\": dial tcp 10.217.0.90:8443: connect: connection refused" Nov 22 07:37:31 crc kubenswrapper[4853]: I1122 07:37:31.297788 4853 patch_prober.go:28] interesting pod/machine-config-daemon-fflvd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 07:37:31 crc kubenswrapper[4853]: I1122 07:37:31.297878 4853 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 07:37:37 crc kubenswrapper[4853]: E1122 07:37:37.095173 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-cell1-server-0" podUID="d0e9072b-3e2a-4283-a697-8411049c5161" Nov 22 07:37:40 crc kubenswrapper[4853]: I1122 07:37:40.593045 4853 patch_prober.go:28] interesting pod/console-c5f4cf575-47s4q container/console namespace/openshift-console: Readiness probe status=failure output="Get \"https://10.217.0.90:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Nov 22 07:37:40 crc kubenswrapper[4853]: I1122 07:37:40.594936 4853 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/console-c5f4cf575-47s4q" podUID="a0659bb8-90a4-4018-b1a5-64d307a50dcd" containerName="console" probeResult="failure" output="Get \"https://10.217.0.90:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Nov 22 07:37:44 crc kubenswrapper[4853]: I1122 07:37:44.751235 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"21a745c3-d66b-447a-bf7e-386ac88bb05f","Type":"ContainerStarted","Data":"de9382875e576601d65403d94d2a97424a765bdae93ee92a405d4d66a2d746fd"} Nov 22 07:37:50 crc kubenswrapper[4853]: I1122 07:37:50.592545 4853 patch_prober.go:28] interesting pod/console-c5f4cf575-47s4q container/console namespace/openshift-console: Readiness probe status=failure output="Get \"https://10.217.0.90:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Nov 22 07:37:50 crc kubenswrapper[4853]: I1122 07:37:50.593247 4853 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/console-c5f4cf575-47s4q" podUID="a0659bb8-90a4-4018-b1a5-64d307a50dcd" containerName="console" probeResult="failure" output="Get \"https://10.217.0.90:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Nov 22 07:37:50 crc kubenswrapper[4853]: I1122 07:37:50.593372 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-c5f4cf575-47s4q" Nov 22 07:37:53 crc kubenswrapper[4853]: I1122 07:37:53.851483 4853 generic.go:334] "Generic (PLEG): container finished" podID="21a745c3-d66b-447a-bf7e-386ac88bb05f" containerID="de9382875e576601d65403d94d2a97424a765bdae93ee92a405d4d66a2d746fd" exitCode=0 Nov 22 07:37:53 crc kubenswrapper[4853]: I1122 07:37:53.851561 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"21a745c3-d66b-447a-bf7e-386ac88bb05f","Type":"ContainerDied","Data":"de9382875e576601d65403d94d2a97424a765bdae93ee92a405d4d66a2d746fd"} Nov 22 07:38:00 crc 
kubenswrapper[4853]: E1122 07:38:00.073146 4853 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Nov 22 07:38:00 crc kubenswrapper[4853]: E1122 07:38:00.074263 4853 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4h8m4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-666b6646f7-xsw8l_openstack(89c6d393-491f-477d-8d77-5a14ae67ed3b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 22 07:38:00 crc kubenswrapper[4853]: E1122 07:38:00.075442 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-666b6646f7-xsw8l" podUID="89c6d393-491f-477d-8d77-5a14ae67ed3b" Nov 22 07:38:00 crc kubenswrapper[4853]: E1122 07:38:00.084927 4853 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" Nov 22 07:38:00 crc kubenswrapper[4853]: E1122 07:38:00.085165 4853 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[bash 
/var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vj4lr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-cell1-galera-0_openstack(fd5f90cd-e8e9-489e-b7fd-fde9fd9c342d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 22 07:38:00 crc kubenswrapper[4853]: E1122 07:38:00.086634 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-cell1-galera-0" podUID="fd5f90cd-e8e9-489e-b7fd-fde9fd9c342d" Nov 22 07:38:00 crc kubenswrapper[4853]: I1122 07:38:00.192230 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-c5f4cf575-47s4q_a0659bb8-90a4-4018-b1a5-64d307a50dcd/console/0.log" Nov 22 07:38:00 crc kubenswrapper[4853]: I1122 07:38:00.192650 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-c5f4cf575-47s4q" Nov 22 07:38:00 crc kubenswrapper[4853]: I1122 07:38:00.392491 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a0659bb8-90a4-4018-b1a5-64d307a50dcd-console-serving-cert\") pod \"a0659bb8-90a4-4018-b1a5-64d307a50dcd\" (UID: \"a0659bb8-90a4-4018-b1a5-64d307a50dcd\") " Nov 22 07:38:00 crc kubenswrapper[4853]: I1122 07:38:00.392569 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f6hxj\" (UniqueName: \"kubernetes.io/projected/a0659bb8-90a4-4018-b1a5-64d307a50dcd-kube-api-access-f6hxj\") pod \"a0659bb8-90a4-4018-b1a5-64d307a50dcd\" (UID: \"a0659bb8-90a4-4018-b1a5-64d307a50dcd\") " Nov 22 07:38:00 crc kubenswrapper[4853]: I1122 07:38:00.392653 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a0659bb8-90a4-4018-b1a5-64d307a50dcd-service-ca\") pod \"a0659bb8-90a4-4018-b1a5-64d307a50dcd\" (UID: \"a0659bb8-90a4-4018-b1a5-64d307a50dcd\") " Nov 22 07:38:00 crc kubenswrapper[4853]: I1122 07:38:00.392798 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a0659bb8-90a4-4018-b1a5-64d307a50dcd-console-oauth-config\") pod \"a0659bb8-90a4-4018-b1a5-64d307a50dcd\" (UID: \"a0659bb8-90a4-4018-b1a5-64d307a50dcd\") " Nov 22 07:38:00 crc kubenswrapper[4853]: I1122 07:38:00.392861 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a0659bb8-90a4-4018-b1a5-64d307a50dcd-oauth-serving-cert\") pod \"a0659bb8-90a4-4018-b1a5-64d307a50dcd\" (UID: \"a0659bb8-90a4-4018-b1a5-64d307a50dcd\") " Nov 22 07:38:00 crc kubenswrapper[4853]: I1122 07:38:00.393074 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a0659bb8-90a4-4018-b1a5-64d307a50dcd-console-config\") pod \"a0659bb8-90a4-4018-b1a5-64d307a50dcd\" (UID: \"a0659bb8-90a4-4018-b1a5-64d307a50dcd\") " Nov 22 07:38:00 crc kubenswrapper[4853]: I1122 07:38:00.393168 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a0659bb8-90a4-4018-b1a5-64d307a50dcd-trusted-ca-bundle\") pod \"a0659bb8-90a4-4018-b1a5-64d307a50dcd\" (UID: \"a0659bb8-90a4-4018-b1a5-64d307a50dcd\") " Nov 22 07:38:00 crc kubenswrapper[4853]: I1122 07:38:00.395282 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a0659bb8-90a4-4018-b1a5-64d307a50dcd-service-ca" (OuterVolumeSpecName: "service-ca") pod "a0659bb8-90a4-4018-b1a5-64d307a50dcd" (UID: "a0659bb8-90a4-4018-b1a5-64d307a50dcd"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:38:00 crc kubenswrapper[4853]: I1122 07:38:00.395533 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a0659bb8-90a4-4018-b1a5-64d307a50dcd-console-config" (OuterVolumeSpecName: "console-config") pod "a0659bb8-90a4-4018-b1a5-64d307a50dcd" (UID: "a0659bb8-90a4-4018-b1a5-64d307a50dcd"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:38:00 crc kubenswrapper[4853]: I1122 07:38:00.395584 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a0659bb8-90a4-4018-b1a5-64d307a50dcd-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "a0659bb8-90a4-4018-b1a5-64d307a50dcd" (UID: "a0659bb8-90a4-4018-b1a5-64d307a50dcd"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:38:00 crc kubenswrapper[4853]: I1122 07:38:00.395841 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a0659bb8-90a4-4018-b1a5-64d307a50dcd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "a0659bb8-90a4-4018-b1a5-64d307a50dcd" (UID: "a0659bb8-90a4-4018-b1a5-64d307a50dcd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:38:00 crc kubenswrapper[4853]: I1122 07:38:00.414871 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0659bb8-90a4-4018-b1a5-64d307a50dcd-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "a0659bb8-90a4-4018-b1a5-64d307a50dcd" (UID: "a0659bb8-90a4-4018-b1a5-64d307a50dcd"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:38:00 crc kubenswrapper[4853]: I1122 07:38:00.415210 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0659bb8-90a4-4018-b1a5-64d307a50dcd-kube-api-access-f6hxj" (OuterVolumeSpecName: "kube-api-access-f6hxj") pod "a0659bb8-90a4-4018-b1a5-64d307a50dcd" (UID: "a0659bb8-90a4-4018-b1a5-64d307a50dcd"). InnerVolumeSpecName "kube-api-access-f6hxj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:38:00 crc kubenswrapper[4853]: I1122 07:38:00.415626 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0659bb8-90a4-4018-b1a5-64d307a50dcd-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "a0659bb8-90a4-4018-b1a5-64d307a50dcd" (UID: "a0659bb8-90a4-4018-b1a5-64d307a50dcd"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:38:00 crc kubenswrapper[4853]: I1122 07:38:00.496689 4853 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a0659bb8-90a4-4018-b1a5-64d307a50dcd-console-config\") on node \"crc\" DevicePath \"\"" Nov 22 07:38:00 crc kubenswrapper[4853]: I1122 07:38:00.496737 4853 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a0659bb8-90a4-4018-b1a5-64d307a50dcd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:38:00 crc kubenswrapper[4853]: I1122 07:38:00.496758 4853 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a0659bb8-90a4-4018-b1a5-64d307a50dcd-console-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 22 07:38:00 crc kubenswrapper[4853]: I1122 07:38:00.496771 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f6hxj\" (UniqueName: \"kubernetes.io/projected/a0659bb8-90a4-4018-b1a5-64d307a50dcd-kube-api-access-f6hxj\") on node \"crc\" DevicePath \"\"" Nov 22 07:38:00 crc kubenswrapper[4853]: I1122 07:38:00.496788 4853 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a0659bb8-90a4-4018-b1a5-64d307a50dcd-service-ca\") on node \"crc\" DevicePath \"\"" Nov 22 07:38:00 crc kubenswrapper[4853]: I1122 07:38:00.496797 4853 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a0659bb8-90a4-4018-b1a5-64d307a50dcd-console-oauth-config\") on node \"crc\" DevicePath \"\"" Nov 22 07:38:00 crc kubenswrapper[4853]: I1122 07:38:00.496804 4853 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a0659bb8-90a4-4018-b1a5-64d307a50dcd-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 22 07:38:00 crc kubenswrapper[4853]: I1122 07:38:00.592659 4853 patch_prober.go:28] interesting pod/console-c5f4cf575-47s4q container/console namespace/openshift-console: Readiness probe status=failure output="Get \"https://10.217.0.90:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Nov 22 07:38:00 crc kubenswrapper[4853]: I1122 07:38:00.592772 4853 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/console-c5f4cf575-47s4q" podUID="a0659bb8-90a4-4018-b1a5-64d307a50dcd" containerName="console" probeResult="failure" output="Get \"https://10.217.0.90:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Nov 22 07:38:00 crc kubenswrapper[4853]: I1122 07:38:00.934393 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-c5f4cf575-47s4q_a0659bb8-90a4-4018-b1a5-64d307a50dcd/console/0.log" Nov 22 07:38:00 crc kubenswrapper[4853]: I1122 07:38:00.934482 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-c5f4cf575-47s4q" event={"ID":"a0659bb8-90a4-4018-b1a5-64d307a50dcd","Type":"ContainerDied","Data":"a62ba506cb39568c8942aa6f83a260f00e9432bb6ae351a0af99194f760212d5"} Nov 22 07:38:00 crc kubenswrapper[4853]: I1122 07:38:00.934538 4853 scope.go:117] "RemoveContainer" containerID="ce34071f8bfdc0c83adea546339db47ef1dc168ff80bdb05db3fb5acc9181e0a" Nov 22 07:38:00 crc kubenswrapper[4853]: I1122 07:38:00.934829 4853 util.go:48] 
"No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-c5f4cf575-47s4q" Nov 22 07:38:00 crc kubenswrapper[4853]: I1122 07:38:00.974335 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-c5f4cf575-47s4q"] Nov 22 07:38:00 crc kubenswrapper[4853]: I1122 07:38:00.983369 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-c5f4cf575-47s4q"] Nov 22 07:38:01 crc kubenswrapper[4853]: I1122 07:38:01.297920 4853 patch_prober.go:28] interesting pod/machine-config-daemon-fflvd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 07:38:01 crc kubenswrapper[4853]: I1122 07:38:01.298003 4853 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 07:38:01 crc kubenswrapper[4853]: I1122 07:38:01.765172 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0659bb8-90a4-4018-b1a5-64d307a50dcd" path="/var/lib/kubelet/pods/a0659bb8-90a4-4018-b1a5-64d307a50dcd/volumes" Nov 22 07:38:14 crc kubenswrapper[4853]: I1122 07:38:14.496721 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-gcfs8"] Nov 22 07:38:14 crc kubenswrapper[4853]: E1122 07:38:14.497828 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0659bb8-90a4-4018-b1a5-64d307a50dcd" containerName="console" Nov 22 07:38:14 crc kubenswrapper[4853]: I1122 07:38:14.497844 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0659bb8-90a4-4018-b1a5-64d307a50dcd" containerName="console" Nov 22 07:38:14 crc kubenswrapper[4853]: I1122 07:38:14.498102 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="a0659bb8-90a4-4018-b1a5-64d307a50dcd" containerName="console" Nov 22 07:38:14 crc kubenswrapper[4853]: I1122 07:38:14.498942 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-gcfs8" Nov 22 07:38:14 crc kubenswrapper[4853]: I1122 07:38:14.504793 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Nov 22 07:38:14 crc kubenswrapper[4853]: I1122 07:38:14.526659 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-gcfs8"] Nov 22 07:38:14 crc kubenswrapper[4853]: I1122 07:38:14.650499 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/2d4565ad-c87f-4e82-bd22-0218b0598651-ovn-rundir\") pod \"ovn-controller-metrics-gcfs8\" (UID: \"2d4565ad-c87f-4e82-bd22-0218b0598651\") " pod="openstack/ovn-controller-metrics-gcfs8" Nov 22 07:38:14 crc kubenswrapper[4853]: I1122 07:38:14.650580 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d4565ad-c87f-4e82-bd22-0218b0598651-combined-ca-bundle\") pod \"ovn-controller-metrics-gcfs8\" (UID: \"2d4565ad-c87f-4e82-bd22-0218b0598651\") " pod="openstack/ovn-controller-metrics-gcfs8" Nov 22 07:38:14 crc kubenswrapper[4853]: I1122 07:38:14.650625 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/2d4565ad-c87f-4e82-bd22-0218b0598651-ovs-rundir\") pod \"ovn-controller-metrics-gcfs8\" (UID: \"2d4565ad-c87f-4e82-bd22-0218b0598651\") " pod="openstack/ovn-controller-metrics-gcfs8" Nov 22 07:38:14 crc kubenswrapper[4853]: I1122 07:38:14.650662 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gppvw\" (UniqueName: \"kubernetes.io/projected/2d4565ad-c87f-4e82-bd22-0218b0598651-kube-api-access-gppvw\") pod \"ovn-controller-metrics-gcfs8\" (UID: \"2d4565ad-c87f-4e82-bd22-0218b0598651\") " pod="openstack/ovn-controller-metrics-gcfs8" Nov 22 07:38:14 crc kubenswrapper[4853]: I1122 07:38:14.650729 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d4565ad-c87f-4e82-bd22-0218b0598651-config\") pod \"ovn-controller-metrics-gcfs8\" (UID: \"2d4565ad-c87f-4e82-bd22-0218b0598651\") " pod="openstack/ovn-controller-metrics-gcfs8" Nov 22 07:38:14 crc kubenswrapper[4853]: I1122 07:38:14.650815 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/2d4565ad-c87f-4e82-bd22-0218b0598651-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-gcfs8\" (UID: \"2d4565ad-c87f-4e82-bd22-0218b0598651\") " pod="openstack/ovn-controller-metrics-gcfs8" Nov 22 07:38:14 crc kubenswrapper[4853]: I1122 07:38:14.684291 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-fwph9"] Nov 22 07:38:14 crc kubenswrapper[4853]: I1122 07:38:14.715612 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-bmjtw"] Nov 22 07:38:14 crc kubenswrapper[4853]: I1122 07:38:14.720313 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-bmjtw" Nov 22 07:38:14 crc kubenswrapper[4853]: I1122 07:38:14.725199 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Nov 22 07:38:14 crc kubenswrapper[4853]: I1122 07:38:14.744568 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-bmjtw"] Nov 22 07:38:14 crc kubenswrapper[4853]: I1122 07:38:14.753514 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d4565ad-c87f-4e82-bd22-0218b0598651-config\") pod \"ovn-controller-metrics-gcfs8\" (UID: \"2d4565ad-c87f-4e82-bd22-0218b0598651\") " pod="openstack/ovn-controller-metrics-gcfs8" Nov 22 07:38:14 crc kubenswrapper[4853]: I1122 07:38:14.753589 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/2d4565ad-c87f-4e82-bd22-0218b0598651-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-gcfs8\" (UID: \"2d4565ad-c87f-4e82-bd22-0218b0598651\") " pod="openstack/ovn-controller-metrics-gcfs8" Nov 22 07:38:14 crc kubenswrapper[4853]: I1122 07:38:14.753734 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/2d4565ad-c87f-4e82-bd22-0218b0598651-ovn-rundir\") pod \"ovn-controller-metrics-gcfs8\" (UID: \"2d4565ad-c87f-4e82-bd22-0218b0598651\") " pod="openstack/ovn-controller-metrics-gcfs8" Nov 22 07:38:14 crc kubenswrapper[4853]: I1122 07:38:14.753785 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d4565ad-c87f-4e82-bd22-0218b0598651-combined-ca-bundle\") pod \"ovn-controller-metrics-gcfs8\" (UID: \"2d4565ad-c87f-4e82-bd22-0218b0598651\") " pod="openstack/ovn-controller-metrics-gcfs8" Nov 22 07:38:14 crc kubenswrapper[4853]: I1122 07:38:14.753818 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/2d4565ad-c87f-4e82-bd22-0218b0598651-ovs-rundir\") pod \"ovn-controller-metrics-gcfs8\" (UID: \"2d4565ad-c87f-4e82-bd22-0218b0598651\") " pod="openstack/ovn-controller-metrics-gcfs8" Nov 22 07:38:14 crc kubenswrapper[4853]: I1122 07:38:14.753856 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gppvw\" (UniqueName: \"kubernetes.io/projected/2d4565ad-c87f-4e82-bd22-0218b0598651-kube-api-access-gppvw\") pod \"ovn-controller-metrics-gcfs8\" (UID: \"2d4565ad-c87f-4e82-bd22-0218b0598651\") " pod="openstack/ovn-controller-metrics-gcfs8" Nov 22 07:38:14 crc kubenswrapper[4853]: I1122 07:38:14.755348 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/2d4565ad-c87f-4e82-bd22-0218b0598651-ovn-rundir\") pod \"ovn-controller-metrics-gcfs8\" (UID: \"2d4565ad-c87f-4e82-bd22-0218b0598651\") " pod="openstack/ovn-controller-metrics-gcfs8" Nov 22 07:38:14 crc kubenswrapper[4853]: I1122 07:38:14.755446 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/2d4565ad-c87f-4e82-bd22-0218b0598651-ovs-rundir\") pod \"ovn-controller-metrics-gcfs8\" (UID: \"2d4565ad-c87f-4e82-bd22-0218b0598651\") " pod="openstack/ovn-controller-metrics-gcfs8" Nov 22 07:38:14 crc kubenswrapper[4853]: I1122 07:38:14.756452 4853 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d4565ad-c87f-4e82-bd22-0218b0598651-config\") pod \"ovn-controller-metrics-gcfs8\" (UID: \"2d4565ad-c87f-4e82-bd22-0218b0598651\") " pod="openstack/ovn-controller-metrics-gcfs8" Nov 22 07:38:14 crc kubenswrapper[4853]: I1122 07:38:14.796600 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/2d4565ad-c87f-4e82-bd22-0218b0598651-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-gcfs8\" (UID: \"2d4565ad-c87f-4e82-bd22-0218b0598651\") " pod="openstack/ovn-controller-metrics-gcfs8" Nov 22 07:38:14 crc kubenswrapper[4853]: I1122 07:38:14.797672 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d4565ad-c87f-4e82-bd22-0218b0598651-combined-ca-bundle\") pod \"ovn-controller-metrics-gcfs8\" (UID: \"2d4565ad-c87f-4e82-bd22-0218b0598651\") " pod="openstack/ovn-controller-metrics-gcfs8" Nov 22 07:38:14 crc kubenswrapper[4853]: I1122 07:38:14.836736 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gppvw\" (UniqueName: \"kubernetes.io/projected/2d4565ad-c87f-4e82-bd22-0218b0598651-kube-api-access-gppvw\") pod \"ovn-controller-metrics-gcfs8\" (UID: \"2d4565ad-c87f-4e82-bd22-0218b0598651\") " pod="openstack/ovn-controller-metrics-gcfs8" Nov 22 07:38:14 crc kubenswrapper[4853]: I1122 07:38:14.839098 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-gcfs8" Nov 22 07:38:14 crc kubenswrapper[4853]: I1122 07:38:14.870090 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mtptl\" (UniqueName: \"kubernetes.io/projected/ec7d4ab7-a342-4408-b10a-8ac8a59e3e61-kube-api-access-mtptl\") pod \"dnsmasq-dns-5bf47b49b7-bmjtw\" (UID: \"ec7d4ab7-a342-4408-b10a-8ac8a59e3e61\") " pod="openstack/dnsmasq-dns-5bf47b49b7-bmjtw" Nov 22 07:38:14 crc kubenswrapper[4853]: I1122 07:38:14.870321 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ec7d4ab7-a342-4408-b10a-8ac8a59e3e61-dns-svc\") pod \"dnsmasq-dns-5bf47b49b7-bmjtw\" (UID: \"ec7d4ab7-a342-4408-b10a-8ac8a59e3e61\") " pod="openstack/dnsmasq-dns-5bf47b49b7-bmjtw" Nov 22 07:38:14 crc kubenswrapper[4853]: I1122 07:38:14.870413 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ec7d4ab7-a342-4408-b10a-8ac8a59e3e61-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf47b49b7-bmjtw\" (UID: \"ec7d4ab7-a342-4408-b10a-8ac8a59e3e61\") " pod="openstack/dnsmasq-dns-5bf47b49b7-bmjtw" Nov 22 07:38:14 crc kubenswrapper[4853]: I1122 07:38:14.870649 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec7d4ab7-a342-4408-b10a-8ac8a59e3e61-config\") pod \"dnsmasq-dns-5bf47b49b7-bmjtw\" (UID: \"ec7d4ab7-a342-4408-b10a-8ac8a59e3e61\") " pod="openstack/dnsmasq-dns-5bf47b49b7-bmjtw" Nov 22 07:38:14 crc kubenswrapper[4853]: I1122 07:38:14.880624 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-xsw8l"] Nov 22 07:38:14 crc kubenswrapper[4853]: I1122 07:38:14.917310 4853 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/dnsmasq-dns-8554648995-kcrgb"] Nov 22 07:38:14 crc kubenswrapper[4853]: I1122 07:38:14.920010 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-kcrgb" Nov 22 07:38:14 crc kubenswrapper[4853]: I1122 07:38:14.924408 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Nov 22 07:38:14 crc kubenswrapper[4853]: I1122 07:38:14.930226 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8554648995-kcrgb"] Nov 22 07:38:14 crc kubenswrapper[4853]: I1122 07:38:14.974322 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sfjhl\" (UniqueName: \"kubernetes.io/projected/8084cec1-a543-4ad8-814a-d907ee68e2d5-kube-api-access-sfjhl\") pod \"dnsmasq-dns-8554648995-kcrgb\" (UID: \"8084cec1-a543-4ad8-814a-d907ee68e2d5\") " pod="openstack/dnsmasq-dns-8554648995-kcrgb" Nov 22 07:38:14 crc kubenswrapper[4853]: I1122 07:38:14.974428 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec7d4ab7-a342-4408-b10a-8ac8a59e3e61-config\") pod \"dnsmasq-dns-5bf47b49b7-bmjtw\" (UID: \"ec7d4ab7-a342-4408-b10a-8ac8a59e3e61\") " pod="openstack/dnsmasq-dns-5bf47b49b7-bmjtw" Nov 22 07:38:14 crc kubenswrapper[4853]: I1122 07:38:14.974485 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8084cec1-a543-4ad8-814a-d907ee68e2d5-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-kcrgb\" (UID: \"8084cec1-a543-4ad8-814a-d907ee68e2d5\") " pod="openstack/dnsmasq-dns-8554648995-kcrgb" Nov 22 07:38:14 crc kubenswrapper[4853]: I1122 07:38:14.974553 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8084cec1-a543-4ad8-814a-d907ee68e2d5-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-kcrgb\" (UID: \"8084cec1-a543-4ad8-814a-d907ee68e2d5\") " pod="openstack/dnsmasq-dns-8554648995-kcrgb" Nov 22 07:38:14 crc kubenswrapper[4853]: I1122 07:38:14.974591 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mtptl\" (UniqueName: \"kubernetes.io/projected/ec7d4ab7-a342-4408-b10a-8ac8a59e3e61-kube-api-access-mtptl\") pod \"dnsmasq-dns-5bf47b49b7-bmjtw\" (UID: \"ec7d4ab7-a342-4408-b10a-8ac8a59e3e61\") " pod="openstack/dnsmasq-dns-5bf47b49b7-bmjtw" Nov 22 07:38:14 crc kubenswrapper[4853]: I1122 07:38:14.974627 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8084cec1-a543-4ad8-814a-d907ee68e2d5-dns-svc\") pod \"dnsmasq-dns-8554648995-kcrgb\" (UID: \"8084cec1-a543-4ad8-814a-d907ee68e2d5\") " pod="openstack/dnsmasq-dns-8554648995-kcrgb" Nov 22 07:38:14 crc kubenswrapper[4853]: I1122 07:38:14.974660 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ec7d4ab7-a342-4408-b10a-8ac8a59e3e61-dns-svc\") pod \"dnsmasq-dns-5bf47b49b7-bmjtw\" (UID: \"ec7d4ab7-a342-4408-b10a-8ac8a59e3e61\") " pod="openstack/dnsmasq-dns-5bf47b49b7-bmjtw" Nov 22 07:38:14 crc kubenswrapper[4853]: I1122 07:38:14.974694 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/8084cec1-a543-4ad8-814a-d907ee68e2d5-config\") pod \"dnsmasq-dns-8554648995-kcrgb\" (UID: \"8084cec1-a543-4ad8-814a-d907ee68e2d5\") " pod="openstack/dnsmasq-dns-8554648995-kcrgb" Nov 22 07:38:14 crc kubenswrapper[4853]: I1122 07:38:14.974715 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ec7d4ab7-a342-4408-b10a-8ac8a59e3e61-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf47b49b7-bmjtw\" (UID: \"ec7d4ab7-a342-4408-b10a-8ac8a59e3e61\") " pod="openstack/dnsmasq-dns-5bf47b49b7-bmjtw" Nov 22 07:38:14 crc kubenswrapper[4853]: I1122 07:38:14.975383 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec7d4ab7-a342-4408-b10a-8ac8a59e3e61-config\") pod \"dnsmasq-dns-5bf47b49b7-bmjtw\" (UID: \"ec7d4ab7-a342-4408-b10a-8ac8a59e3e61\") " pod="openstack/dnsmasq-dns-5bf47b49b7-bmjtw" Nov 22 07:38:14 crc kubenswrapper[4853]: I1122 07:38:14.975638 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ec7d4ab7-a342-4408-b10a-8ac8a59e3e61-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf47b49b7-bmjtw\" (UID: \"ec7d4ab7-a342-4408-b10a-8ac8a59e3e61\") " pod="openstack/dnsmasq-dns-5bf47b49b7-bmjtw" Nov 22 07:38:14 crc kubenswrapper[4853]: I1122 07:38:14.976101 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ec7d4ab7-a342-4408-b10a-8ac8a59e3e61-dns-svc\") pod \"dnsmasq-dns-5bf47b49b7-bmjtw\" (UID: \"ec7d4ab7-a342-4408-b10a-8ac8a59e3e61\") " pod="openstack/dnsmasq-dns-5bf47b49b7-bmjtw" Nov 22 07:38:14 crc kubenswrapper[4853]: I1122 07:38:14.994966 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mtptl\" (UniqueName: \"kubernetes.io/projected/ec7d4ab7-a342-4408-b10a-8ac8a59e3e61-kube-api-access-mtptl\") pod \"dnsmasq-dns-5bf47b49b7-bmjtw\" (UID: \"ec7d4ab7-a342-4408-b10a-8ac8a59e3e61\") " pod="openstack/dnsmasq-dns-5bf47b49b7-bmjtw" Nov 22 07:38:15 crc kubenswrapper[4853]: I1122 07:38:15.053941 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-bmjtw" Nov 22 07:38:15 crc kubenswrapper[4853]: I1122 07:38:15.077357 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sfjhl\" (UniqueName: \"kubernetes.io/projected/8084cec1-a543-4ad8-814a-d907ee68e2d5-kube-api-access-sfjhl\") pod \"dnsmasq-dns-8554648995-kcrgb\" (UID: \"8084cec1-a543-4ad8-814a-d907ee68e2d5\") " pod="openstack/dnsmasq-dns-8554648995-kcrgb" Nov 22 07:38:15 crc kubenswrapper[4853]: I1122 07:38:15.077804 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8084cec1-a543-4ad8-814a-d907ee68e2d5-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-kcrgb\" (UID: \"8084cec1-a543-4ad8-814a-d907ee68e2d5\") " pod="openstack/dnsmasq-dns-8554648995-kcrgb" Nov 22 07:38:15 crc kubenswrapper[4853]: I1122 07:38:15.078091 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8084cec1-a543-4ad8-814a-d907ee68e2d5-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-kcrgb\" (UID: \"8084cec1-a543-4ad8-814a-d907ee68e2d5\") " pod="openstack/dnsmasq-dns-8554648995-kcrgb" Nov 22 07:38:15 crc kubenswrapper[4853]: I1122 07:38:15.078278 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8084cec1-a543-4ad8-814a-d907ee68e2d5-dns-svc\") pod \"dnsmasq-dns-8554648995-kcrgb\" (UID: \"8084cec1-a543-4ad8-814a-d907ee68e2d5\") " pod="openstack/dnsmasq-dns-8554648995-kcrgb" Nov 22 07:38:15 crc kubenswrapper[4853]: I1122 07:38:15.078406 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8084cec1-a543-4ad8-814a-d907ee68e2d5-config\") pod \"dnsmasq-dns-8554648995-kcrgb\" (UID: \"8084cec1-a543-4ad8-814a-d907ee68e2d5\") " pod="openstack/dnsmasq-dns-8554648995-kcrgb" Nov 22 07:38:15 crc kubenswrapper[4853]: I1122 07:38:15.078801 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8084cec1-a543-4ad8-814a-d907ee68e2d5-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-kcrgb\" (UID: \"8084cec1-a543-4ad8-814a-d907ee68e2d5\") " pod="openstack/dnsmasq-dns-8554648995-kcrgb" Nov 22 07:38:15 crc kubenswrapper[4853]: I1122 07:38:15.079145 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8084cec1-a543-4ad8-814a-d907ee68e2d5-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-kcrgb\" (UID: \"8084cec1-a543-4ad8-814a-d907ee68e2d5\") " pod="openstack/dnsmasq-dns-8554648995-kcrgb" Nov 22 07:38:15 crc kubenswrapper[4853]: I1122 07:38:15.079370 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8084cec1-a543-4ad8-814a-d907ee68e2d5-dns-svc\") pod \"dnsmasq-dns-8554648995-kcrgb\" (UID: \"8084cec1-a543-4ad8-814a-d907ee68e2d5\") " pod="openstack/dnsmasq-dns-8554648995-kcrgb" Nov 22 07:38:15 crc kubenswrapper[4853]: I1122 07:38:15.079527 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8084cec1-a543-4ad8-814a-d907ee68e2d5-config\") pod \"dnsmasq-dns-8554648995-kcrgb\" (UID: \"8084cec1-a543-4ad8-814a-d907ee68e2d5\") " pod="openstack/dnsmasq-dns-8554648995-kcrgb" Nov 22 07:38:15 crc kubenswrapper[4853]: I1122 
07:38:15.098768 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sfjhl\" (UniqueName: \"kubernetes.io/projected/8084cec1-a543-4ad8-814a-d907ee68e2d5-kube-api-access-sfjhl\") pod \"dnsmasq-dns-8554648995-kcrgb\" (UID: \"8084cec1-a543-4ad8-814a-d907ee68e2d5\") " pod="openstack/dnsmasq-dns-8554648995-kcrgb" Nov 22 07:38:15 crc kubenswrapper[4853]: I1122 07:38:15.244831 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-kcrgb" Nov 22 07:38:16 crc kubenswrapper[4853]: E1122 07:38:16.769200 4853 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified" Nov 22 07:38:16 crc kubenswrapper[4853]: E1122 07:38:16.769780 4853 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ovn-controller,Image:quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified,Command:[ovn-controller --pidfile unix:/run/openvswitch/db.sock --certificate=/etc/pki/tls/certs/ovndb.crt --private-key=/etc/pki/tls/private/ovndb.key --ca-cert=/etc/pki/tls/certs/ovndbca.crt],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n644hdfh556h69h666h59ch5cfh97h5h54dh65fh5d9h59h5c4h7dh55fh65dhbhc7h5dh59fh5cch5c9h56bh68chbhc8h75hc6h565h566h654q,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:var-run,ReadOnly:false,MountPath:/var/run/openvswitch,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-run-ovn,ReadOnly:false,MountPath:/var/run/ovn,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-log-ovn,ReadOnly:false,MountPath:/var/log/ovn,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-controller-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndb.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-controller-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/private/ovndb.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-controller-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndbca.crt,SubPath:ca.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-d6z67,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/local/bin/container-scripts/ovn_controller_liveness.sh],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/local/bin/container-scripts/ovn_controller_readiness.sh],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodS
econds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:&Lifecycle{PostStart:nil,PreStop:&LifecycleHandler{Exec:&ExecAction{Command:[/usr/share/ovn/scripts/ovn-ctl stop_controller],},HTTPGet:nil,TCPSocket:nil,Sleep:nil,},},TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[NET_ADMIN SYS_ADMIN SYS_NICE],Drop:[],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-controller-nhs2x_openstack(05c9113f-59ff-46cc-b704-eb9c8553ad37): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 22 07:38:16 crc kubenswrapper[4853]: E1122 07:38:16.770943 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovn-controller\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ovn-controller-nhs2x" podUID="05c9113f-59ff-46cc-b704-eb9c8553ad37" Nov 22 07:38:17 crc kubenswrapper[4853]: E1122 07:38:17.070806 4853 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Nov 22 07:38:17 crc kubenswrapper[4853]: E1122 07:38:17.071891 4853 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6qmpc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-server-0_openstack(2eadd806-7143-46ba-9e49-f19ac0bd52bd): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 22 07:38:17 crc kubenswrapper[4853]: E1122 07:38:17.073554 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-server-0" podUID="2eadd806-7143-46ba-9e49-f19ac0bd52bd" Nov 22 07:38:17 crc kubenswrapper[4853]: E1122 07:38:17.075523 4853 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Nov 22 07:38:17 crc kubenswrapper[4853]: E1122 07:38:17.075853 4853 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rrrz9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cell1-server-0_openstack(d0e9072b-3e2a-4283-a697-8411049c5161): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 22 07:38:17 crc kubenswrapper[4853]: E1122 07:38:17.077079 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-cell1-server-0" podUID="d0e9072b-3e2a-4283-a697-8411049c5161" Nov 22 07:38:17 crc kubenswrapper[4853]: E1122 07:38:17.425513 4853 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ovn-nb-db-server:current-podified" Nov 22 07:38:17 crc kubenswrapper[4853]: E1122 07:38:17.425854 4853 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:ovsdbserver-nb,Image:quay.io/podified-antelope-centos9/openstack-ovn-nb-db-server:current-podified,Command:[/usr/bin/dumb-init],Args:[/usr/local/bin/container-scripts/setup.sh],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n557h5dfh5b6h5b7h8ch66hd6h586hdch57hb5h5c4h8bh57dh68fh64ch654h664hb8h588h64fhcdh688h86hf8h584hbfh5c7h88h665h5d6hfcq,ValueFrom:nil,},EnvVar{Name:OVN_LOGDIR,Value:/tmp,ValueFrom:nil,},EnvVar{Name:OVN_RUNDIR,Value:/tmp,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovndbcluster-nb-etc-ovn,ReadOnly:false,MountPath:/etc/ovn,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdb-rundir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-nb-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndb.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-nb-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/private/ovndb.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-nb-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndbca.crt,SubPath:ca.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-245m2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pidof ovsdb-server],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pidof ovsdb-server],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:&Lifecycle{PostStart:nil,PreStop:&LifecycleHandler{Exec:&ExecAction{Command:[/usr/local/bin/container-scripts/cleanup.sh],},HTTPGet:nil,TCPSocket:nil,Sleep:nil,},},TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pidof ovsdb-server],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:20,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
ovsdbserver-nb-0_openstack(ef57a60a-7a73-45c6-8760-7e215eedd374): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 22 07:38:17 crc kubenswrapper[4853]: E1122 07:38:17.785282 4853 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying layer: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0" Nov 22 07:38:17 crc kubenswrapper[4853]: E1122 07:38:17.785645 4853 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying system image from manifest list: copying layer: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0" Nov 22 07:38:17 crc kubenswrapper[4853]: E1122 07:38:17.785917 4853 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-state-metrics,Image:registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0,Command:[],Args:[--resources=pods --namespaces=openstack],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:http-metrics,HostPort:0,ContainerPort:8080,Protocol:TCP,HostIP:,},ContainerPort{Name:telemetry,HostPort:0,ContainerPort:8081,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-lmlcz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{0 8080 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-state-metrics-0_openstack(573160d1-5593-42ee-906a-44b4fbc5abe4): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying layer: context canceled" logger="UnhandledError" Nov 22 07:38:17 crc kubenswrapper[4853]: E1122 07:38:17.787265 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying layer: context canceled\"" pod="openstack/kube-state-metrics-0" podUID="573160d1-5593-42ee-906a-44b4fbc5abe4" Nov 22 07:38:18 crc kubenswrapper[4853]: I1122 07:38:18.331990 4853 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-gcfs8"] Nov 22 07:38:18 crc kubenswrapper[4853]: I1122 07:38:18.356694 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8554648995-kcrgb"] Nov 22 07:38:18 crc kubenswrapper[4853]: W1122 07:38:18.358360 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8084cec1_a543_4ad8_814a_d907ee68e2d5.slice/crio-4f3e1ed5374a7d2ef6177c837f7175625ec62d66fafef792adcdc929f6bcc669 WatchSource:0}: Error finding container 4f3e1ed5374a7d2ef6177c837f7175625ec62d66fafef792adcdc929f6bcc669: Status 404 returned error can't find the container with id 4f3e1ed5374a7d2ef6177c837f7175625ec62d66fafef792adcdc929f6bcc669 Nov 22 07:38:18 crc kubenswrapper[4853]: I1122 07:38:18.567296 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-bmjtw"] Nov 22 07:38:19 crc kubenswrapper[4853]: I1122 07:38:19.149464 4853 generic.go:334] "Generic (PLEG): container finished" podID="89c6d393-491f-477d-8d77-5a14ae67ed3b" containerID="b77680678f701ea71310296a790cc4395daaec513ba30477aef87bc5565ac4c2" exitCode=0 Nov 22 07:38:19 crc kubenswrapper[4853]: I1122 07:38:19.149785 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-xsw8l" event={"ID":"89c6d393-491f-477d-8d77-5a14ae67ed3b","Type":"ContainerDied","Data":"b77680678f701ea71310296a790cc4395daaec513ba30477aef87bc5565ac4c2"} Nov 22 07:38:19 crc kubenswrapper[4853]: I1122 07:38:19.153544 4853 generic.go:334] "Generic (PLEG): container finished" podID="3cb42f37-eef6-4874-acad-7bcf2dd29078" containerID="b5f9754370a0d1981a3e89636825cb275b006f7655d5fdd88ac470b2dc42a9b6" exitCode=0 Nov 22 07:38:19 crc kubenswrapper[4853]: I1122 07:38:19.153594 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-fwph9" event={"ID":"3cb42f37-eef6-4874-acad-7bcf2dd29078","Type":"ContainerDied","Data":"b5f9754370a0d1981a3e89636825cb275b006f7655d5fdd88ac470b2dc42a9b6"} Nov 22 07:38:19 crc kubenswrapper[4853]: I1122 07:38:19.157572 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tv8h9" event={"ID":"cae818e5-34d5-43c7-95af-e82e21309758","Type":"ContainerStarted","Data":"eb4e0416a833bcf9955ea3cd792e558636579a1756fe6a31be94c6529330ffe8"} Nov 22 07:38:19 crc kubenswrapper[4853]: I1122 07:38:19.160131 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-gcfs8" event={"ID":"2d4565ad-c87f-4e82-bd22-0218b0598651","Type":"ContainerStarted","Data":"d5e82b755412ff93242c3d9d81af0ee5960ac6f1e507f440df7ecb6d73461184"} Nov 22 07:38:19 crc kubenswrapper[4853]: I1122 07:38:19.161215 4853 generic.go:334] "Generic (PLEG): container finished" podID="8084cec1-a543-4ad8-814a-d907ee68e2d5" containerID="226128eec8da970656512a0f9cc960e027fb45e283aa1bff058fa09121b2498a" exitCode=0 Nov 22 07:38:19 crc kubenswrapper[4853]: I1122 07:38:19.161290 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-kcrgb" event={"ID":"8084cec1-a543-4ad8-814a-d907ee68e2d5","Type":"ContainerDied","Data":"226128eec8da970656512a0f9cc960e027fb45e283aa1bff058fa09121b2498a"} Nov 22 07:38:19 crc kubenswrapper[4853]: I1122 07:38:19.161310 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-kcrgb" 
event={"ID":"8084cec1-a543-4ad8-814a-d907ee68e2d5","Type":"ContainerStarted","Data":"4f3e1ed5374a7d2ef6177c837f7175625ec62d66fafef792adcdc929f6bcc669"} Nov 22 07:38:19 crc kubenswrapper[4853]: I1122 07:38:19.165038 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"b64e6703-1b51-477a-8898-3646dbf7b00c","Type":"ContainerStarted","Data":"36ade6993b21de610bd8533f77e6a563ec0d279aadcd2e4e252d9c379b69bb14"} Nov 22 07:38:19 crc kubenswrapper[4853]: I1122 07:38:19.165293 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Nov 22 07:38:19 crc kubenswrapper[4853]: I1122 07:38:19.173311 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-k99wz" event={"ID":"e573b0f6-8f5e-45a9-b00e-410826a9a36d","Type":"ContainerStarted","Data":"60ce9f574722ead7c3ba368febd51195cd819be43bd41235d7f6a819dc3e00a2"} Nov 22 07:38:19 crc kubenswrapper[4853]: I1122 07:38:19.176991 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-ui-dashboards-7d5fb4cbfb-jlcqj" event={"ID":"1dcce692-834e-48e2-bcfd-7c0f05480fb4","Type":"ContainerStarted","Data":"4106110e6756498834913dc3f2c5cbc001ce0202b46311007e2cbcccf7bad9db"} Nov 22 07:38:19 crc kubenswrapper[4853]: I1122 07:38:19.179380 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-bmjtw" event={"ID":"ec7d4ab7-a342-4408-b10a-8ac8a59e3e61","Type":"ContainerStarted","Data":"5383c9d38d4fcb9dde7b51f5cdb23b67bef89441f6cc49a3fc8a333d5d8bdea6"} Nov 22 07:38:19 crc kubenswrapper[4853]: I1122 07:38:19.186234 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"fd5f90cd-e8e9-489e-b7fd-fde9fd9c342d","Type":"ContainerStarted","Data":"bdfaa01e104a8d42cfd9b8df7abda888ee5ce29ec9112f1dfc4351ae1874c41a"} Nov 22 07:38:19 crc kubenswrapper[4853]: I1122 07:38:19.196847 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"410e418b-aee9-40c9-96ed-0f8c5c882148","Type":"ContainerStarted","Data":"1821e6e7cd29212565bafae278919c442dc6373d1091430f10eb797e36f6463a"} Nov 22 07:38:19 crc kubenswrapper[4853]: I1122 07:38:19.228394 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-tv8h9" podStartSLOduration=14.939781801 podStartE2EDuration="2m20.228364202s" podCreationTimestamp="2025-11-22 07:35:59 +0000 UTC" firstStartedPulling="2025-11-22 07:36:12.453399686 +0000 UTC m=+1571.294022352" lastFinishedPulling="2025-11-22 07:38:17.741982117 +0000 UTC m=+1696.582604753" observedRunningTime="2025-11-22 07:38:19.199217064 +0000 UTC m=+1698.039839710" watchObservedRunningTime="2025-11-22 07:38:19.228364202 +0000 UTC m=+1698.068986828" Nov 22 07:38:19 crc kubenswrapper[4853]: I1122 07:38:19.236065 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=4.262958985 podStartE2EDuration="2m18.236035867s" podCreationTimestamp="2025-11-22 07:36:01 +0000 UTC" firstStartedPulling="2025-11-22 07:36:02.796157701 +0000 UTC m=+1561.636780327" lastFinishedPulling="2025-11-22 07:38:16.769234573 +0000 UTC m=+1695.609857209" observedRunningTime="2025-11-22 07:38:19.23127672 +0000 UTC m=+1698.071899356" watchObservedRunningTime="2025-11-22 07:38:19.236035867 +0000 UTC m=+1698.076658493" Nov 22 07:38:19 crc kubenswrapper[4853]: I1122 07:38:19.392431 4853 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openshift-operators/observability-ui-dashboards-7d5fb4cbfb-jlcqj" podStartSLOduration=120.426144075 podStartE2EDuration="2m15.392406554s" podCreationTimestamp="2025-11-22 07:36:04 +0000 UTC" firstStartedPulling="2025-11-22 07:36:52.154250008 +0000 UTC m=+1610.994872634" lastFinishedPulling="2025-11-22 07:37:07.120512487 +0000 UTC m=+1625.961135113" observedRunningTime="2025-11-22 07:38:19.382848919 +0000 UTC m=+1698.223471545" watchObservedRunningTime="2025-11-22 07:38:19.392406554 +0000 UTC m=+1698.233029180" Nov 22 07:38:37 crc kubenswrapper[4853]: I1122 07:38:19.705456 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-fwph9" Nov 22 07:38:37 crc kubenswrapper[4853]: I1122 07:38:19.817397 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3cb42f37-eef6-4874-acad-7bcf2dd29078-dns-svc\") pod \"3cb42f37-eef6-4874-acad-7bcf2dd29078\" (UID: \"3cb42f37-eef6-4874-acad-7bcf2dd29078\") " Nov 22 07:38:37 crc kubenswrapper[4853]: I1122 07:38:19.817456 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3cb42f37-eef6-4874-acad-7bcf2dd29078-config\") pod \"3cb42f37-eef6-4874-acad-7bcf2dd29078\" (UID: \"3cb42f37-eef6-4874-acad-7bcf2dd29078\") " Nov 22 07:38:37 crc kubenswrapper[4853]: I1122 07:38:19.817835 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9gff7\" (UniqueName: \"kubernetes.io/projected/3cb42f37-eef6-4874-acad-7bcf2dd29078-kube-api-access-9gff7\") pod \"3cb42f37-eef6-4874-acad-7bcf2dd29078\" (UID: \"3cb42f37-eef6-4874-acad-7bcf2dd29078\") " Nov 22 07:38:37 crc kubenswrapper[4853]: I1122 07:38:19.831334 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb42f37-eef6-4874-acad-7bcf2dd29078-kube-api-access-9gff7" (OuterVolumeSpecName: "kube-api-access-9gff7") pod "3cb42f37-eef6-4874-acad-7bcf2dd29078" (UID: "3cb42f37-eef6-4874-acad-7bcf2dd29078"). InnerVolumeSpecName "kube-api-access-9gff7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:38:37 crc kubenswrapper[4853]: I1122 07:38:19.847587 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb42f37-eef6-4874-acad-7bcf2dd29078-config" (OuterVolumeSpecName: "config") pod "3cb42f37-eef6-4874-acad-7bcf2dd29078" (UID: "3cb42f37-eef6-4874-acad-7bcf2dd29078"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:38:37 crc kubenswrapper[4853]: I1122 07:38:19.920740 4853 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3cb42f37-eef6-4874-acad-7bcf2dd29078-config\") on node \"crc\" DevicePath \"\"" Nov 22 07:38:37 crc kubenswrapper[4853]: I1122 07:38:19.920802 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9gff7\" (UniqueName: \"kubernetes.io/projected/3cb42f37-eef6-4874-acad-7bcf2dd29078-kube-api-access-9gff7\") on node \"crc\" DevicePath \"\"" Nov 22 07:38:37 crc kubenswrapper[4853]: I1122 07:38:19.924932 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-xsw8l" Nov 22 07:38:37 crc kubenswrapper[4853]: I1122 07:38:19.946193 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb42f37-eef6-4874-acad-7bcf2dd29078-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3cb42f37-eef6-4874-acad-7bcf2dd29078" (UID: "3cb42f37-eef6-4874-acad-7bcf2dd29078"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:38:37 crc kubenswrapper[4853]: I1122 07:38:20.023316 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/89c6d393-491f-477d-8d77-5a14ae67ed3b-dns-svc\") pod \"89c6d393-491f-477d-8d77-5a14ae67ed3b\" (UID: \"89c6d393-491f-477d-8d77-5a14ae67ed3b\") " Nov 22 07:38:37 crc kubenswrapper[4853]: I1122 07:38:20.023408 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4h8m4\" (UniqueName: \"kubernetes.io/projected/89c6d393-491f-477d-8d77-5a14ae67ed3b-kube-api-access-4h8m4\") pod \"89c6d393-491f-477d-8d77-5a14ae67ed3b\" (UID: \"89c6d393-491f-477d-8d77-5a14ae67ed3b\") " Nov 22 07:38:37 crc kubenswrapper[4853]: I1122 07:38:20.023505 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89c6d393-491f-477d-8d77-5a14ae67ed3b-config\") pod \"89c6d393-491f-477d-8d77-5a14ae67ed3b\" (UID: \"89c6d393-491f-477d-8d77-5a14ae67ed3b\") " Nov 22 07:38:37 crc kubenswrapper[4853]: I1122 07:38:20.024307 4853 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3cb42f37-eef6-4874-acad-7bcf2dd29078-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 22 07:38:37 crc kubenswrapper[4853]: I1122 07:38:20.044414 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89c6d393-491f-477d-8d77-5a14ae67ed3b-kube-api-access-4h8m4" (OuterVolumeSpecName: "kube-api-access-4h8m4") pod "89c6d393-491f-477d-8d77-5a14ae67ed3b" (UID: "89c6d393-491f-477d-8d77-5a14ae67ed3b"). InnerVolumeSpecName "kube-api-access-4h8m4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:38:37 crc kubenswrapper[4853]: I1122 07:38:20.049044 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/89c6d393-491f-477d-8d77-5a14ae67ed3b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "89c6d393-491f-477d-8d77-5a14ae67ed3b" (UID: "89c6d393-491f-477d-8d77-5a14ae67ed3b"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:38:37 crc kubenswrapper[4853]: I1122 07:38:20.069343 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/89c6d393-491f-477d-8d77-5a14ae67ed3b-config" (OuterVolumeSpecName: "config") pod "89c6d393-491f-477d-8d77-5a14ae67ed3b" (UID: "89c6d393-491f-477d-8d77-5a14ae67ed3b"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:38:37 crc kubenswrapper[4853]: I1122 07:38:20.127360 4853 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/89c6d393-491f-477d-8d77-5a14ae67ed3b-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 22 07:38:37 crc kubenswrapper[4853]: I1122 07:38:20.127864 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4h8m4\" (UniqueName: \"kubernetes.io/projected/89c6d393-491f-477d-8d77-5a14ae67ed3b-kube-api-access-4h8m4\") on node \"crc\" DevicePath \"\"" Nov 22 07:38:37 crc kubenswrapper[4853]: I1122 07:38:20.127884 4853 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89c6d393-491f-477d-8d77-5a14ae67ed3b-config\") on node \"crc\" DevicePath \"\"" Nov 22 07:38:37 crc kubenswrapper[4853]: I1122 07:38:20.212207 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-xsw8l" event={"ID":"89c6d393-491f-477d-8d77-5a14ae67ed3b","Type":"ContainerDied","Data":"868b1f2e310d7df86023f3c7b7fae5e58b3cfb278fec1ff333a668a684f45713"} Nov 22 07:38:37 crc kubenswrapper[4853]: I1122 07:38:20.212247 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-xsw8l" Nov 22 07:38:37 crc kubenswrapper[4853]: I1122 07:38:20.212291 4853 scope.go:117] "RemoveContainer" containerID="b77680678f701ea71310296a790cc4395daaec513ba30477aef87bc5565ac4c2" Nov 22 07:38:37 crc kubenswrapper[4853]: I1122 07:38:20.215828 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-fwph9" Nov 22 07:38:37 crc kubenswrapper[4853]: I1122 07:38:20.215864 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-fwph9" event={"ID":"3cb42f37-eef6-4874-acad-7bcf2dd29078","Type":"ContainerDied","Data":"d988e33ace2e18f599b985de5d600df9a3c34c4807553304bef591ce18763b59"} Nov 22 07:38:37 crc kubenswrapper[4853]: I1122 07:38:20.301904 4853 scope.go:117] "RemoveContainer" containerID="b5f9754370a0d1981a3e89636825cb275b006f7655d5fdd88ac470b2dc42a9b6" Nov 22 07:38:37 crc kubenswrapper[4853]: I1122 07:38:20.316801 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-tv8h9" Nov 22 07:38:37 crc kubenswrapper[4853]: I1122 07:38:20.318080 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-tv8h9" Nov 22 07:38:37 crc kubenswrapper[4853]: I1122 07:38:20.319049 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-xsw8l"] Nov 22 07:38:37 crc kubenswrapper[4853]: I1122 07:38:20.331901 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-xsw8l"] Nov 22 07:38:37 crc kubenswrapper[4853]: I1122 07:38:20.358806 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-fwph9"] Nov 22 07:38:37 crc kubenswrapper[4853]: I1122 07:38:20.367000 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-fwph9"] Nov 22 07:38:37 crc kubenswrapper[4853]: I1122 07:38:21.229495 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-bmjtw" 
event={"ID":"ec7d4ab7-a342-4408-b10a-8ac8a59e3e61","Type":"ContainerStarted","Data":"e2c3dc33e7146be0a4c7cf4bf0c8439abf0432d543dd16fc800460eabe16527b"} Nov 22 07:38:37 crc kubenswrapper[4853]: I1122 07:38:21.405708 4853 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-tv8h9" podUID="cae818e5-34d5-43c7-95af-e82e21309758" containerName="registry-server" probeResult="failure" output=< Nov 22 07:38:37 crc kubenswrapper[4853]: timeout: failed to connect service ":50051" within 1s Nov 22 07:38:37 crc kubenswrapper[4853]: > Nov 22 07:38:37 crc kubenswrapper[4853]: I1122 07:38:21.771192 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb42f37-eef6-4874-acad-7bcf2dd29078" path="/var/lib/kubelet/pods/3cb42f37-eef6-4874-acad-7bcf2dd29078/volumes" Nov 22 07:38:37 crc kubenswrapper[4853]: I1122 07:38:21.772842 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="89c6d393-491f-477d-8d77-5a14ae67ed3b" path="/var/lib/kubelet/pods/89c6d393-491f-477d-8d77-5a14ae67ed3b/volumes" Nov 22 07:38:37 crc kubenswrapper[4853]: I1122 07:38:22.255261 4853 generic.go:334] "Generic (PLEG): container finished" podID="e573b0f6-8f5e-45a9-b00e-410826a9a36d" containerID="60ce9f574722ead7c3ba368febd51195cd819be43bd41235d7f6a819dc3e00a2" exitCode=0 Nov 22 07:38:37 crc kubenswrapper[4853]: I1122 07:38:22.255381 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-k99wz" event={"ID":"e573b0f6-8f5e-45a9-b00e-410826a9a36d","Type":"ContainerDied","Data":"60ce9f574722ead7c3ba368febd51195cd819be43bd41235d7f6a819dc3e00a2"} Nov 22 07:38:37 crc kubenswrapper[4853]: I1122 07:38:22.259050 4853 generic.go:334] "Generic (PLEG): container finished" podID="ec7d4ab7-a342-4408-b10a-8ac8a59e3e61" containerID="e2c3dc33e7146be0a4c7cf4bf0c8439abf0432d543dd16fc800460eabe16527b" exitCode=0 Nov 22 07:38:37 crc kubenswrapper[4853]: I1122 07:38:22.259188 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-bmjtw" event={"ID":"ec7d4ab7-a342-4408-b10a-8ac8a59e3e61","Type":"ContainerDied","Data":"e2c3dc33e7146be0a4c7cf4bf0c8439abf0432d543dd16fc800460eabe16527b"} Nov 22 07:38:37 crc kubenswrapper[4853]: I1122 07:38:22.262717 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-kcrgb" event={"ID":"8084cec1-a543-4ad8-814a-d907ee68e2d5","Type":"ContainerStarted","Data":"014547175eb62a7b6be53dbcd8652831b7f8bbb143d5df95f322fea9c6a8f14b"} Nov 22 07:38:37 crc kubenswrapper[4853]: I1122 07:38:22.262812 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8554648995-kcrgb" Nov 22 07:38:37 crc kubenswrapper[4853]: I1122 07:38:22.326062 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8554648995-kcrgb" podStartSLOduration=8.326033828 podStartE2EDuration="8.326033828s" podCreationTimestamp="2025-11-22 07:38:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:38:22.306096356 +0000 UTC m=+1701.146719002" watchObservedRunningTime="2025-11-22 07:38:22.326033828 +0000 UTC m=+1701.166656444" Nov 22 07:38:37 crc kubenswrapper[4853]: I1122 07:38:27.236557 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Nov 22 07:38:37 crc kubenswrapper[4853]: E1122 07:38:28.750997 4853 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-server-0" podUID="2eadd806-7143-46ba-9e49-f19ac0bd52bd" Nov 22 07:38:37 crc kubenswrapper[4853]: E1122 07:38:29.749956 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-cell1-server-0" podUID="d0e9072b-3e2a-4283-a697-8411049c5161" Nov 22 07:38:37 crc kubenswrapper[4853]: I1122 07:38:30.247089 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-8554648995-kcrgb" Nov 22 07:38:37 crc kubenswrapper[4853]: I1122 07:38:30.319271 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-bmjtw"] Nov 22 07:38:37 crc kubenswrapper[4853]: E1122 07:38:30.750956 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0\\\"\"" pod="openstack/kube-state-metrics-0" podUID="573160d1-5593-42ee-906a-44b4fbc5abe4" Nov 22 07:38:37 crc kubenswrapper[4853]: I1122 07:38:31.297154 4853 patch_prober.go:28] interesting pod/machine-config-daemon-fflvd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 07:38:37 crc kubenswrapper[4853]: I1122 07:38:31.297237 4853 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 07:38:37 crc kubenswrapper[4853]: I1122 07:38:31.297294 4853 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" Nov 22 07:38:37 crc kubenswrapper[4853]: I1122 07:38:31.297977 4853 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1e441baa32afe9d9e98e192afda6a47d949c68ceb8edb814ef148dd3d84f45b1"} pod="openshift-machine-config-operator/machine-config-daemon-fflvd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 22 07:38:37 crc kubenswrapper[4853]: I1122 07:38:31.298036 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" containerName="machine-config-daemon" containerID="cri-o://1e441baa32afe9d9e98e192afda6a47d949c68ceb8edb814ef148dd3d84f45b1" gracePeriod=600 Nov 22 07:38:37 crc kubenswrapper[4853]: I1122 07:38:31.375508 4853 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-tv8h9" podUID="cae818e5-34d5-43c7-95af-e82e21309758" containerName="registry-server" probeResult="failure" output=< Nov 22 07:38:37 crc kubenswrapper[4853]: timeout: failed to connect service ":50051" within 1s Nov 22 
07:38:37 crc kubenswrapper[4853]: > Nov 22 07:38:37 crc kubenswrapper[4853]: E1122 07:38:31.750371 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovn-controller\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified\\\"\"" pod="openstack/ovn-controller-nhs2x" podUID="05c9113f-59ff-46cc-b704-eb9c8553ad37" Nov 22 07:38:37 crc kubenswrapper[4853]: I1122 07:38:33.388954 4853 generic.go:334] "Generic (PLEG): container finished" podID="476c875a-2b87-419a-8042-0ba059620fd8" containerID="1e441baa32afe9d9e98e192afda6a47d949c68ceb8edb814ef148dd3d84f45b1" exitCode=0 Nov 22 07:38:37 crc kubenswrapper[4853]: I1122 07:38:33.389066 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" event={"ID":"476c875a-2b87-419a-8042-0ba059620fd8","Type":"ContainerDied","Data":"1e441baa32afe9d9e98e192afda6a47d949c68ceb8edb814ef148dd3d84f45b1"} Nov 22 07:38:37 crc kubenswrapper[4853]: I1122 07:38:33.389513 4853 scope.go:117] "RemoveContainer" containerID="a94379b7240c320a54475e30e875758eec0fc5f02dfe1040038fbc1ac77b62e7" Nov 22 07:38:37 crc kubenswrapper[4853]: I1122 07:38:34.269777 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-9qbfw"] Nov 22 07:38:37 crc kubenswrapper[4853]: E1122 07:38:34.270416 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89c6d393-491f-477d-8d77-5a14ae67ed3b" containerName="init" Nov 22 07:38:37 crc kubenswrapper[4853]: I1122 07:38:34.270435 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="89c6d393-491f-477d-8d77-5a14ae67ed3b" containerName="init" Nov 22 07:38:37 crc kubenswrapper[4853]: E1122 07:38:34.270466 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3cb42f37-eef6-4874-acad-7bcf2dd29078" containerName="init" Nov 22 07:38:37 crc kubenswrapper[4853]: I1122 07:38:34.270474 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="3cb42f37-eef6-4874-acad-7bcf2dd29078" containerName="init" Nov 22 07:38:37 crc kubenswrapper[4853]: I1122 07:38:34.270718 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="89c6d393-491f-477d-8d77-5a14ae67ed3b" containerName="init" Nov 22 07:38:37 crc kubenswrapper[4853]: I1122 07:38:34.270741 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="3cb42f37-eef6-4874-acad-7bcf2dd29078" containerName="init" Nov 22 07:38:37 crc kubenswrapper[4853]: I1122 07:38:34.272223 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-9qbfw" Nov 22 07:38:37 crc kubenswrapper[4853]: I1122 07:38:34.295857 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-9qbfw"] Nov 22 07:38:37 crc kubenswrapper[4853]: I1122 07:38:34.313256 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/000b20b5-bfcd-44c2-9859-bb30ff5d5123-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-9qbfw\" (UID: \"000b20b5-bfcd-44c2-9859-bb30ff5d5123\") " pod="openstack/dnsmasq-dns-b8fbc5445-9qbfw" Nov 22 07:38:37 crc kubenswrapper[4853]: I1122 07:38:34.313321 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7n9pr\" (UniqueName: \"kubernetes.io/projected/000b20b5-bfcd-44c2-9859-bb30ff5d5123-kube-api-access-7n9pr\") pod \"dnsmasq-dns-b8fbc5445-9qbfw\" (UID: \"000b20b5-bfcd-44c2-9859-bb30ff5d5123\") " pod="openstack/dnsmasq-dns-b8fbc5445-9qbfw" Nov 22 07:38:37 crc kubenswrapper[4853]: I1122 07:38:34.313387 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/000b20b5-bfcd-44c2-9859-bb30ff5d5123-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-9qbfw\" (UID: \"000b20b5-bfcd-44c2-9859-bb30ff5d5123\") " pod="openstack/dnsmasq-dns-b8fbc5445-9qbfw" Nov 22 07:38:37 crc kubenswrapper[4853]: I1122 07:38:34.313587 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/000b20b5-bfcd-44c2-9859-bb30ff5d5123-config\") pod \"dnsmasq-dns-b8fbc5445-9qbfw\" (UID: \"000b20b5-bfcd-44c2-9859-bb30ff5d5123\") " pod="openstack/dnsmasq-dns-b8fbc5445-9qbfw" Nov 22 07:38:37 crc kubenswrapper[4853]: I1122 07:38:34.313658 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/000b20b5-bfcd-44c2-9859-bb30ff5d5123-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-9qbfw\" (UID: \"000b20b5-bfcd-44c2-9859-bb30ff5d5123\") " pod="openstack/dnsmasq-dns-b8fbc5445-9qbfw" Nov 22 07:38:37 crc kubenswrapper[4853]: I1122 07:38:34.416305 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/000b20b5-bfcd-44c2-9859-bb30ff5d5123-config\") pod \"dnsmasq-dns-b8fbc5445-9qbfw\" (UID: \"000b20b5-bfcd-44c2-9859-bb30ff5d5123\") " pod="openstack/dnsmasq-dns-b8fbc5445-9qbfw" Nov 22 07:38:37 crc kubenswrapper[4853]: I1122 07:38:34.416389 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/000b20b5-bfcd-44c2-9859-bb30ff5d5123-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-9qbfw\" (UID: \"000b20b5-bfcd-44c2-9859-bb30ff5d5123\") " pod="openstack/dnsmasq-dns-b8fbc5445-9qbfw" Nov 22 07:38:37 crc kubenswrapper[4853]: I1122 07:38:34.416449 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/000b20b5-bfcd-44c2-9859-bb30ff5d5123-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-9qbfw\" (UID: \"000b20b5-bfcd-44c2-9859-bb30ff5d5123\") " pod="openstack/dnsmasq-dns-b8fbc5445-9qbfw" Nov 22 07:38:37 crc kubenswrapper[4853]: I1122 07:38:34.416471 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7n9pr\" 
(UniqueName: \"kubernetes.io/projected/000b20b5-bfcd-44c2-9859-bb30ff5d5123-kube-api-access-7n9pr\") pod \"dnsmasq-dns-b8fbc5445-9qbfw\" (UID: \"000b20b5-bfcd-44c2-9859-bb30ff5d5123\") " pod="openstack/dnsmasq-dns-b8fbc5445-9qbfw" Nov 22 07:38:37 crc kubenswrapper[4853]: I1122 07:38:34.416499 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/000b20b5-bfcd-44c2-9859-bb30ff5d5123-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-9qbfw\" (UID: \"000b20b5-bfcd-44c2-9859-bb30ff5d5123\") " pod="openstack/dnsmasq-dns-b8fbc5445-9qbfw" Nov 22 07:38:37 crc kubenswrapper[4853]: I1122 07:38:34.417379 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/000b20b5-bfcd-44c2-9859-bb30ff5d5123-config\") pod \"dnsmasq-dns-b8fbc5445-9qbfw\" (UID: \"000b20b5-bfcd-44c2-9859-bb30ff5d5123\") " pod="openstack/dnsmasq-dns-b8fbc5445-9qbfw" Nov 22 07:38:37 crc kubenswrapper[4853]: I1122 07:38:34.417470 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/000b20b5-bfcd-44c2-9859-bb30ff5d5123-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-9qbfw\" (UID: \"000b20b5-bfcd-44c2-9859-bb30ff5d5123\") " pod="openstack/dnsmasq-dns-b8fbc5445-9qbfw" Nov 22 07:38:37 crc kubenswrapper[4853]: I1122 07:38:34.417966 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/000b20b5-bfcd-44c2-9859-bb30ff5d5123-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-9qbfw\" (UID: \"000b20b5-bfcd-44c2-9859-bb30ff5d5123\") " pod="openstack/dnsmasq-dns-b8fbc5445-9qbfw" Nov 22 07:38:37 crc kubenswrapper[4853]: I1122 07:38:34.418480 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/000b20b5-bfcd-44c2-9859-bb30ff5d5123-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-9qbfw\" (UID: \"000b20b5-bfcd-44c2-9859-bb30ff5d5123\") " pod="openstack/dnsmasq-dns-b8fbc5445-9qbfw" Nov 22 07:38:37 crc kubenswrapper[4853]: I1122 07:38:34.438738 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7n9pr\" (UniqueName: \"kubernetes.io/projected/000b20b5-bfcd-44c2-9859-bb30ff5d5123-kube-api-access-7n9pr\") pod \"dnsmasq-dns-b8fbc5445-9qbfw\" (UID: \"000b20b5-bfcd-44c2-9859-bb30ff5d5123\") " pod="openstack/dnsmasq-dns-b8fbc5445-9qbfw" Nov 22 07:38:37 crc kubenswrapper[4853]: I1122 07:38:34.604198 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-9qbfw" Nov 22 07:38:37 crc kubenswrapper[4853]: I1122 07:38:35.367300 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Nov 22 07:38:37 crc kubenswrapper[4853]: I1122 07:38:35.375533 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Nov 22 07:38:37 crc kubenswrapper[4853]: I1122 07:38:35.378816 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Nov 22 07:38:37 crc kubenswrapper[4853]: I1122 07:38:35.383251 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Nov 22 07:38:37 crc kubenswrapper[4853]: I1122 07:38:35.383414 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-fg9dx" Nov 22 07:38:37 crc kubenswrapper[4853]: I1122 07:38:35.383513 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Nov 22 07:38:37 crc kubenswrapper[4853]: I1122 07:38:35.390101 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Nov 22 07:38:37 crc kubenswrapper[4853]: I1122 07:38:35.437070 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/d4427668-9ef6-4594-ae35-ff983a6af324-cache\") pod \"swift-storage-0\" (UID: \"d4427668-9ef6-4594-ae35-ff983a6af324\") " pod="openstack/swift-storage-0" Nov 22 07:38:37 crc kubenswrapper[4853]: I1122 07:38:35.437381 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/d4427668-9ef6-4594-ae35-ff983a6af324-lock\") pod \"swift-storage-0\" (UID: \"d4427668-9ef6-4594-ae35-ff983a6af324\") " pod="openstack/swift-storage-0" Nov 22 07:38:37 crc kubenswrapper[4853]: I1122 07:38:35.437416 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"swift-storage-0\" (UID: \"d4427668-9ef6-4594-ae35-ff983a6af324\") " pod="openstack/swift-storage-0" Nov 22 07:38:37 crc kubenswrapper[4853]: I1122 07:38:35.437481 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b2qcl\" (UniqueName: \"kubernetes.io/projected/d4427668-9ef6-4594-ae35-ff983a6af324-kube-api-access-b2qcl\") pod \"swift-storage-0\" (UID: \"d4427668-9ef6-4594-ae35-ff983a6af324\") " pod="openstack/swift-storage-0" Nov 22 07:38:37 crc kubenswrapper[4853]: I1122 07:38:35.437506 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/d4427668-9ef6-4594-ae35-ff983a6af324-etc-swift\") pod \"swift-storage-0\" (UID: \"d4427668-9ef6-4594-ae35-ff983a6af324\") " pod="openstack/swift-storage-0" Nov 22 07:38:37 crc kubenswrapper[4853]: I1122 07:38:35.540580 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/d4427668-9ef6-4594-ae35-ff983a6af324-cache\") pod \"swift-storage-0\" (UID: \"d4427668-9ef6-4594-ae35-ff983a6af324\") " pod="openstack/swift-storage-0" Nov 22 07:38:37 crc kubenswrapper[4853]: I1122 07:38:35.540859 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/d4427668-9ef6-4594-ae35-ff983a6af324-lock\") pod \"swift-storage-0\" (UID: \"d4427668-9ef6-4594-ae35-ff983a6af324\") " pod="openstack/swift-storage-0" Nov 22 07:38:37 crc kubenswrapper[4853]: I1122 07:38:35.541188 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"cache\" (UniqueName: \"kubernetes.io/empty-dir/d4427668-9ef6-4594-ae35-ff983a6af324-cache\") pod \"swift-storage-0\" (UID: \"d4427668-9ef6-4594-ae35-ff983a6af324\") " pod="openstack/swift-storage-0" Nov 22 07:38:37 crc kubenswrapper[4853]: I1122 07:38:35.541303 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/d4427668-9ef6-4594-ae35-ff983a6af324-lock\") pod \"swift-storage-0\" (UID: \"d4427668-9ef6-4594-ae35-ff983a6af324\") " pod="openstack/swift-storage-0" Nov 22 07:38:37 crc kubenswrapper[4853]: I1122 07:38:35.541399 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"swift-storage-0\" (UID: \"d4427668-9ef6-4594-ae35-ff983a6af324\") " pod="openstack/swift-storage-0" Nov 22 07:38:37 crc kubenswrapper[4853]: I1122 07:38:35.541419 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b2qcl\" (UniqueName: \"kubernetes.io/projected/d4427668-9ef6-4594-ae35-ff983a6af324-kube-api-access-b2qcl\") pod \"swift-storage-0\" (UID: \"d4427668-9ef6-4594-ae35-ff983a6af324\") " pod="openstack/swift-storage-0" Nov 22 07:38:37 crc kubenswrapper[4853]: I1122 07:38:35.541447 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/d4427668-9ef6-4594-ae35-ff983a6af324-etc-swift\") pod \"swift-storage-0\" (UID: \"d4427668-9ef6-4594-ae35-ff983a6af324\") " pod="openstack/swift-storage-0" Nov 22 07:38:37 crc kubenswrapper[4853]: I1122 07:38:35.541846 4853 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"swift-storage-0\" (UID: \"d4427668-9ef6-4594-ae35-ff983a6af324\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/swift-storage-0" Nov 22 07:38:37 crc kubenswrapper[4853]: E1122 07:38:35.542175 4853 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 22 07:38:37 crc kubenswrapper[4853]: E1122 07:38:35.542228 4853 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Nov 22 07:38:37 crc kubenswrapper[4853]: E1122 07:38:35.542343 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d4427668-9ef6-4594-ae35-ff983a6af324-etc-swift podName:d4427668-9ef6-4594-ae35-ff983a6af324 nodeName:}" failed. No retries permitted until 2025-11-22 07:38:36.042305916 +0000 UTC m=+1714.882928582 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/d4427668-9ef6-4594-ae35-ff983a6af324-etc-swift") pod "swift-storage-0" (UID: "d4427668-9ef6-4594-ae35-ff983a6af324") : configmap "swift-ring-files" not found Nov 22 07:38:37 crc kubenswrapper[4853]: I1122 07:38:35.567895 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b2qcl\" (UniqueName: \"kubernetes.io/projected/d4427668-9ef6-4594-ae35-ff983a6af324-kube-api-access-b2qcl\") pod \"swift-storage-0\" (UID: \"d4427668-9ef6-4594-ae35-ff983a6af324\") " pod="openstack/swift-storage-0" Nov 22 07:38:37 crc kubenswrapper[4853]: I1122 07:38:35.580102 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"swift-storage-0\" (UID: \"d4427668-9ef6-4594-ae35-ff983a6af324\") " pod="openstack/swift-storage-0" Nov 22 07:38:37 crc kubenswrapper[4853]: I1122 07:38:35.995825 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-b8h4v"] Nov 22 07:38:37 crc kubenswrapper[4853]: I1122 07:38:35.997807 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-b8h4v" Nov 22 07:38:37 crc kubenswrapper[4853]: I1122 07:38:36.011518 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-b8h4v"] Nov 22 07:38:37 crc kubenswrapper[4853]: I1122 07:38:36.011712 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Nov 22 07:38:37 crc kubenswrapper[4853]: I1122 07:38:36.011842 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Nov 22 07:38:37 crc kubenswrapper[4853]: I1122 07:38:36.014284 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Nov 22 07:38:37 crc kubenswrapper[4853]: I1122 07:38:36.061370 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/7268d91f-27a0-45a1-8239-b6bdc8736b4b-dispersionconf\") pod \"swift-ring-rebalance-b8h4v\" (UID: \"7268d91f-27a0-45a1-8239-b6bdc8736b4b\") " pod="openstack/swift-ring-rebalance-b8h4v" Nov 22 07:38:37 crc kubenswrapper[4853]: I1122 07:38:36.061426 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/7268d91f-27a0-45a1-8239-b6bdc8736b4b-ring-data-devices\") pod \"swift-ring-rebalance-b8h4v\" (UID: \"7268d91f-27a0-45a1-8239-b6bdc8736b4b\") " pod="openstack/swift-ring-rebalance-b8h4v" Nov 22 07:38:37 crc kubenswrapper[4853]: I1122 07:38:36.061487 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/7268d91f-27a0-45a1-8239-b6bdc8736b4b-swiftconf\") pod \"swift-ring-rebalance-b8h4v\" (UID: \"7268d91f-27a0-45a1-8239-b6bdc8736b4b\") " pod="openstack/swift-ring-rebalance-b8h4v" Nov 22 07:38:37 crc kubenswrapper[4853]: I1122 07:38:36.061525 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7268d91f-27a0-45a1-8239-b6bdc8736b4b-combined-ca-bundle\") pod \"swift-ring-rebalance-b8h4v\" (UID: \"7268d91f-27a0-45a1-8239-b6bdc8736b4b\") " 
pod="openstack/swift-ring-rebalance-b8h4v" Nov 22 07:38:37 crc kubenswrapper[4853]: I1122 07:38:36.061553 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7268d91f-27a0-45a1-8239-b6bdc8736b4b-scripts\") pod \"swift-ring-rebalance-b8h4v\" (UID: \"7268d91f-27a0-45a1-8239-b6bdc8736b4b\") " pod="openstack/swift-ring-rebalance-b8h4v" Nov 22 07:38:37 crc kubenswrapper[4853]: I1122 07:38:36.061580 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/7268d91f-27a0-45a1-8239-b6bdc8736b4b-etc-swift\") pod \"swift-ring-rebalance-b8h4v\" (UID: \"7268d91f-27a0-45a1-8239-b6bdc8736b4b\") " pod="openstack/swift-ring-rebalance-b8h4v" Nov 22 07:38:37 crc kubenswrapper[4853]: I1122 07:38:36.061616 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/d4427668-9ef6-4594-ae35-ff983a6af324-etc-swift\") pod \"swift-storage-0\" (UID: \"d4427668-9ef6-4594-ae35-ff983a6af324\") " pod="openstack/swift-storage-0" Nov 22 07:38:37 crc kubenswrapper[4853]: I1122 07:38:36.061668 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nts4t\" (UniqueName: \"kubernetes.io/projected/7268d91f-27a0-45a1-8239-b6bdc8736b4b-kube-api-access-nts4t\") pod \"swift-ring-rebalance-b8h4v\" (UID: \"7268d91f-27a0-45a1-8239-b6bdc8736b4b\") " pod="openstack/swift-ring-rebalance-b8h4v" Nov 22 07:38:37 crc kubenswrapper[4853]: E1122 07:38:36.061992 4853 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 22 07:38:37 crc kubenswrapper[4853]: E1122 07:38:36.062018 4853 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Nov 22 07:38:37 crc kubenswrapper[4853]: E1122 07:38:36.062074 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d4427668-9ef6-4594-ae35-ff983a6af324-etc-swift podName:d4427668-9ef6-4594-ae35-ff983a6af324 nodeName:}" failed. No retries permitted until 2025-11-22 07:38:37.06204842 +0000 UTC m=+1715.902671046 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/d4427668-9ef6-4594-ae35-ff983a6af324-etc-swift") pod "swift-storage-0" (UID: "d4427668-9ef6-4594-ae35-ff983a6af324") : configmap "swift-ring-files" not found Nov 22 07:38:37 crc kubenswrapper[4853]: I1122 07:38:36.164110 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/7268d91f-27a0-45a1-8239-b6bdc8736b4b-etc-swift\") pod \"swift-ring-rebalance-b8h4v\" (UID: \"7268d91f-27a0-45a1-8239-b6bdc8736b4b\") " pod="openstack/swift-ring-rebalance-b8h4v" Nov 22 07:38:37 crc kubenswrapper[4853]: I1122 07:38:36.164240 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nts4t\" (UniqueName: \"kubernetes.io/projected/7268d91f-27a0-45a1-8239-b6bdc8736b4b-kube-api-access-nts4t\") pod \"swift-ring-rebalance-b8h4v\" (UID: \"7268d91f-27a0-45a1-8239-b6bdc8736b4b\") " pod="openstack/swift-ring-rebalance-b8h4v" Nov 22 07:38:37 crc kubenswrapper[4853]: I1122 07:38:36.164363 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/7268d91f-27a0-45a1-8239-b6bdc8736b4b-dispersionconf\") pod \"swift-ring-rebalance-b8h4v\" (UID: \"7268d91f-27a0-45a1-8239-b6bdc8736b4b\") " pod="openstack/swift-ring-rebalance-b8h4v" Nov 22 07:38:37 crc kubenswrapper[4853]: I1122 07:38:36.164396 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/7268d91f-27a0-45a1-8239-b6bdc8736b4b-ring-data-devices\") pod \"swift-ring-rebalance-b8h4v\" (UID: \"7268d91f-27a0-45a1-8239-b6bdc8736b4b\") " pod="openstack/swift-ring-rebalance-b8h4v" Nov 22 07:38:37 crc kubenswrapper[4853]: I1122 07:38:36.164464 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/7268d91f-27a0-45a1-8239-b6bdc8736b4b-swiftconf\") pod \"swift-ring-rebalance-b8h4v\" (UID: \"7268d91f-27a0-45a1-8239-b6bdc8736b4b\") " pod="openstack/swift-ring-rebalance-b8h4v" Nov 22 07:38:37 crc kubenswrapper[4853]: I1122 07:38:36.164503 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7268d91f-27a0-45a1-8239-b6bdc8736b4b-combined-ca-bundle\") pod \"swift-ring-rebalance-b8h4v\" (UID: \"7268d91f-27a0-45a1-8239-b6bdc8736b4b\") " pod="openstack/swift-ring-rebalance-b8h4v" Nov 22 07:38:37 crc kubenswrapper[4853]: I1122 07:38:36.164527 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/7268d91f-27a0-45a1-8239-b6bdc8736b4b-etc-swift\") pod \"swift-ring-rebalance-b8h4v\" (UID: \"7268d91f-27a0-45a1-8239-b6bdc8736b4b\") " pod="openstack/swift-ring-rebalance-b8h4v" Nov 22 07:38:37 crc kubenswrapper[4853]: I1122 07:38:36.164537 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7268d91f-27a0-45a1-8239-b6bdc8736b4b-scripts\") pod \"swift-ring-rebalance-b8h4v\" (UID: \"7268d91f-27a0-45a1-8239-b6bdc8736b4b\") " pod="openstack/swift-ring-rebalance-b8h4v" Nov 22 07:38:37 crc kubenswrapper[4853]: I1122 07:38:36.165183 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7268d91f-27a0-45a1-8239-b6bdc8736b4b-scripts\") pod 
\"swift-ring-rebalance-b8h4v\" (UID: \"7268d91f-27a0-45a1-8239-b6bdc8736b4b\") " pod="openstack/swift-ring-rebalance-b8h4v" Nov 22 07:38:37 crc kubenswrapper[4853]: I1122 07:38:36.165672 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/7268d91f-27a0-45a1-8239-b6bdc8736b4b-ring-data-devices\") pod \"swift-ring-rebalance-b8h4v\" (UID: \"7268d91f-27a0-45a1-8239-b6bdc8736b4b\") " pod="openstack/swift-ring-rebalance-b8h4v" Nov 22 07:38:37 crc kubenswrapper[4853]: I1122 07:38:36.168417 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7268d91f-27a0-45a1-8239-b6bdc8736b4b-combined-ca-bundle\") pod \"swift-ring-rebalance-b8h4v\" (UID: \"7268d91f-27a0-45a1-8239-b6bdc8736b4b\") " pod="openstack/swift-ring-rebalance-b8h4v" Nov 22 07:38:37 crc kubenswrapper[4853]: I1122 07:38:36.170979 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/7268d91f-27a0-45a1-8239-b6bdc8736b4b-dispersionconf\") pod \"swift-ring-rebalance-b8h4v\" (UID: \"7268d91f-27a0-45a1-8239-b6bdc8736b4b\") " pod="openstack/swift-ring-rebalance-b8h4v" Nov 22 07:38:37 crc kubenswrapper[4853]: I1122 07:38:36.172985 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/7268d91f-27a0-45a1-8239-b6bdc8736b4b-swiftconf\") pod \"swift-ring-rebalance-b8h4v\" (UID: \"7268d91f-27a0-45a1-8239-b6bdc8736b4b\") " pod="openstack/swift-ring-rebalance-b8h4v" Nov 22 07:38:37 crc kubenswrapper[4853]: I1122 07:38:36.183512 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nts4t\" (UniqueName: \"kubernetes.io/projected/7268d91f-27a0-45a1-8239-b6bdc8736b4b-kube-api-access-nts4t\") pod \"swift-ring-rebalance-b8h4v\" (UID: \"7268d91f-27a0-45a1-8239-b6bdc8736b4b\") " pod="openstack/swift-ring-rebalance-b8h4v" Nov 22 07:38:37 crc kubenswrapper[4853]: I1122 07:38:36.343888 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-b8h4v" Nov 22 07:38:37 crc kubenswrapper[4853]: I1122 07:38:37.087603 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/d4427668-9ef6-4594-ae35-ff983a6af324-etc-swift\") pod \"swift-storage-0\" (UID: \"d4427668-9ef6-4594-ae35-ff983a6af324\") " pod="openstack/swift-storage-0" Nov 22 07:38:37 crc kubenswrapper[4853]: E1122 07:38:37.087967 4853 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 22 07:38:37 crc kubenswrapper[4853]: E1122 07:38:37.088019 4853 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Nov 22 07:38:37 crc kubenswrapper[4853]: E1122 07:38:37.088110 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d4427668-9ef6-4594-ae35-ff983a6af324-etc-swift podName:d4427668-9ef6-4594-ae35-ff983a6af324 nodeName:}" failed. No retries permitted until 2025-11-22 07:38:39.088075748 +0000 UTC m=+1717.928698374 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/d4427668-9ef6-4594-ae35-ff983a6af324-etc-swift") pod "swift-storage-0" (UID: "d4427668-9ef6-4594-ae35-ff983a6af324") : configmap "swift-ring-files" not found Nov 22 07:38:39 crc kubenswrapper[4853]: I1122 07:38:39.142797 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/d4427668-9ef6-4594-ae35-ff983a6af324-etc-swift\") pod \"swift-storage-0\" (UID: \"d4427668-9ef6-4594-ae35-ff983a6af324\") " pod="openstack/swift-storage-0" Nov 22 07:38:39 crc kubenswrapper[4853]: E1122 07:38:39.142975 4853 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 22 07:38:39 crc kubenswrapper[4853]: E1122 07:38:39.143237 4853 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Nov 22 07:38:39 crc kubenswrapper[4853]: E1122 07:38:39.143311 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d4427668-9ef6-4594-ae35-ff983a6af324-etc-swift podName:d4427668-9ef6-4594-ae35-ff983a6af324 nodeName:}" failed. No retries permitted until 2025-11-22 07:38:43.143288337 +0000 UTC m=+1721.983910973 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/d4427668-9ef6-4594-ae35-ff983a6af324-etc-swift") pod "swift-storage-0" (UID: "d4427668-9ef6-4594-ae35-ff983a6af324") : configmap "swift-ring-files" not found Nov 22 07:38:39 crc kubenswrapper[4853]: E1122 07:38:39.751533 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-server-0" podUID="2eadd806-7143-46ba-9e49-f19ac0bd52bd" Nov 22 07:38:40 crc kubenswrapper[4853]: E1122 07:38:40.749564 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-cell1-server-0" podUID="d0e9072b-3e2a-4283-a697-8411049c5161" Nov 22 07:38:41 crc kubenswrapper[4853]: I1122 07:38:41.363575 4853 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-tv8h9" podUID="cae818e5-34d5-43c7-95af-e82e21309758" containerName="registry-server" probeResult="failure" output=< Nov 22 07:38:41 crc kubenswrapper[4853]: timeout: failed to connect service ":50051" within 1s Nov 22 07:38:41 crc kubenswrapper[4853]: > Nov 22 07:38:43 crc kubenswrapper[4853]: I1122 07:38:43.153352 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/d4427668-9ef6-4594-ae35-ff983a6af324-etc-swift\") pod \"swift-storage-0\" (UID: \"d4427668-9ef6-4594-ae35-ff983a6af324\") " pod="openstack/swift-storage-0" Nov 22 07:38:43 crc kubenswrapper[4853]: E1122 07:38:43.153590 4853 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 22 07:38:43 crc kubenswrapper[4853]: E1122 07:38:43.153986 4853 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found 
Nov 22 07:38:43 crc kubenswrapper[4853]: E1122 07:38:43.154066 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d4427668-9ef6-4594-ae35-ff983a6af324-etc-swift podName:d4427668-9ef6-4594-ae35-ff983a6af324 nodeName:}" failed. No retries permitted until 2025-11-22 07:38:51.154043044 +0000 UTC m=+1729.994665670 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/d4427668-9ef6-4594-ae35-ff983a6af324-etc-swift") pod "swift-storage-0" (UID: "d4427668-9ef6-4594-ae35-ff983a6af324") : configmap "swift-ring-files" not found Nov 22 07:38:51 crc kubenswrapper[4853]: I1122 07:38:51.246004 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/d4427668-9ef6-4594-ae35-ff983a6af324-etc-swift\") pod \"swift-storage-0\" (UID: \"d4427668-9ef6-4594-ae35-ff983a6af324\") " pod="openstack/swift-storage-0" Nov 22 07:38:51 crc kubenswrapper[4853]: E1122 07:38:51.246237 4853 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 22 07:38:51 crc kubenswrapper[4853]: E1122 07:38:51.248680 4853 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Nov 22 07:38:51 crc kubenswrapper[4853]: E1122 07:38:51.248820 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d4427668-9ef6-4594-ae35-ff983a6af324-etc-swift podName:d4427668-9ef6-4594-ae35-ff983a6af324 nodeName:}" failed. No retries permitted until 2025-11-22 07:39:07.248788545 +0000 UTC m=+1746.089411181 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/d4427668-9ef6-4594-ae35-ff983a6af324-etc-swift") pod "swift-storage-0" (UID: "d4427668-9ef6-4594-ae35-ff983a6af324") : configmap "swift-ring-files" not found Nov 22 07:38:51 crc kubenswrapper[4853]: I1122 07:38:51.365569 4853 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-tv8h9" podUID="cae818e5-34d5-43c7-95af-e82e21309758" containerName="registry-server" probeResult="failure" output=< Nov 22 07:38:51 crc kubenswrapper[4853]: timeout: failed to connect service ":50051" within 1s Nov 22 07:38:51 crc kubenswrapper[4853]: > Nov 22 07:38:52 crc kubenswrapper[4853]: E1122 07:38:52.750401 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-cell1-server-0" podUID="d0e9072b-3e2a-4283-a697-8411049c5161" Nov 22 07:38:53 crc kubenswrapper[4853]: E1122 07:38:53.750626 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-server-0" podUID="2eadd806-7143-46ba-9e49-f19ac0bd52bd" Nov 22 07:38:58 crc kubenswrapper[4853]: E1122 07:38:58.909034 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" Nov 22 07:38:59 crc kubenswrapper[4853]: I1122 07:38:59.683466 4853 scope.go:117] "RemoveContainer" containerID="1e441baa32afe9d9e98e192afda6a47d949c68ceb8edb814ef148dd3d84f45b1" Nov 22 07:38:59 crc kubenswrapper[4853]: E1122 07:38:59.683876 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" Nov 22 07:39:01 crc kubenswrapper[4853]: E1122 07:39:01.083247 4853 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = writing blob: storing blob to file \"/var/tmp/container_images_storage3391146991/2\": happened during read: context canceled" image="quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified" Nov 22 07:39:01 crc kubenswrapper[4853]: E1122 07:39:01.084893 4853 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:openstack-network-exporter,Image:quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified,Command:[/app/openstack-network-exporter],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:OPENSTACK_NETWORK_EXPORTER_YAML,Value:/etc/config/openstack-network-exporter.yaml,ValueFrom:nil,},EnvVar{Name:CONFIG_HASH,Value:n5d6h7h557h68h58dh9bh59h656h96h5d5h644h54fh645h6bh5bbh67dh55fh699h64bh57ch58ch64bh5dch5b6h96h5fbh59bh687h7chc8h89h58dq,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovs-rundir,ReadOnly:true,MountPath:/var/run/openvswitch,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-rundir,ReadOnly:true,MountPath:/var/run/ovn,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-certs-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovnmetrics.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-certs-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/private/ovnmetrics.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-certs-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndbca.crt,SubPath:ca.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gppvw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[NET_ADMIN SYS_ADMIN 
SYS_NICE],Drop:[],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-controller-metrics-gcfs8_openstack(2d4565ad-c87f-4e82-bd22-0218b0598651): ErrImagePull: rpc error: code = Canceled desc = writing blob: storing blob to file \"/var/tmp/container_images_storage3391146991/2\": happened during read: context canceled" logger="UnhandledError" Nov 22 07:39:01 crc kubenswrapper[4853]: E1122 07:39:01.087020 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstack-network-exporter\" with ErrImagePull: \"rpc error: code = Canceled desc = writing blob: storing blob to file \\\"/var/tmp/container_images_storage3391146991/2\\\": happened during read: context canceled\"" pod="openstack/ovn-controller-metrics-gcfs8" podUID="2d4565ad-c87f-4e82-bd22-0218b0598651" Nov 22 07:39:01 crc kubenswrapper[4853]: I1122 07:39:01.367045 4853 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-tv8h9" podUID="cae818e5-34d5-43c7-95af-e82e21309758" containerName="registry-server" probeResult="failure" output=< Nov 22 07:39:01 crc kubenswrapper[4853]: timeout: failed to connect service ":50051" within 1s Nov 22 07:39:01 crc kubenswrapper[4853]: > Nov 22 07:39:01 crc kubenswrapper[4853]: E1122 07:39:01.616318 4853 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified" Nov 22 07:39:01 crc kubenswrapper[4853]: E1122 07:39:01.617007 4853 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:openstack-network-exporter,Image:quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified,Command:[/app/openstack-network-exporter],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:OPENSTACK_NETWORK_EXPORTER_YAML,Value:/etc/config/openstack-network-exporter.yaml,ValueFrom:nil,},EnvVar{Name:CONFIG_HASH,Value:n557h5dfh5b6h5b7h8ch66hd6h586hdch57hb5h5c4h8bh57dh68fh64ch654h664hb8h588h64fhcdh688h86hf8h584hbfh5c7h88h665h5d6hfcq,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovsdb-rundir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-certs-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovnmetrics.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-certs-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/private/ovnmetrics.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-certs-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndbca.crt,SubPath:ca.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-245m2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovsdbserver-nb-0_openstack(ef57a60a-7a73-45c6-8760-7e215eedd374): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 22 07:39:01 crc kubenswrapper[4853]: E1122 07:39:01.618342 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"ovsdbserver-nb\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"openstack-network-exporter\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"]" pod="openstack/ovsdbserver-nb-0" podUID="ef57a60a-7a73-45c6-8760-7e215eedd374" Nov 22 07:39:01 crc kubenswrapper[4853]: E1122 07:39:01.685113 4853 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified" Nov 22 07:39:01 crc kubenswrapper[4853]: E1122 07:39:01.685430 4853 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:openstack-network-exporter,Image:quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified,Command:[/app/openstack-network-exporter],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:OPENSTACK_NETWORK_EXPORTER_YAML,Value:/etc/config/openstack-network-exporter.yaml,ValueFrom:nil,},EnvVar{Name:CONFIG_HASH,Value:ncbhd5h574h5bch658hbh65h6dh67fh55bh666h84h5fchf5h65fh56dh67bh6h666h597h64h5d8h55chbdh586h4h5ddh9ch5dfhdbh579h66dq,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovsdb-rundir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-certs-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovnmetrics.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-certs-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/private/ovnmetrics.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-certs-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndbca.crt,SubPath:ca.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x4f9t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovsdbserver-sb-0_openstack(a9dc9521-7d6a-4622-9a63-9c761ff0721c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 22 07:39:01 crc kubenswrapper[4853]: E1122 07:39:01.686832 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"ovsdbserver-sb\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"openstack-network-exporter\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"]" pod="openstack/ovsdbserver-sb-0" podUID="a9dc9521-7d6a-4622-9a63-9c761ff0721c" Nov 22 07:39:01 crc kubenswrapper[4853]: E1122 07:39:01.717428 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstack-network-exporter\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified\\\"\"" pod="openstack/ovn-controller-metrics-gcfs8" podUID="2d4565ad-c87f-4e82-bd22-0218b0598651" Nov 22 07:39:02 crc kubenswrapper[4853]: I1122 07:39:02.349009 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-9qbfw"] Nov 22 07:39:02 crc kubenswrapper[4853]: I1122 07:39:02.408459 4853 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-b8h4v"] Nov 22 07:39:02 crc kubenswrapper[4853]: I1122 07:39:02.749307 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-9qbfw" event={"ID":"000b20b5-bfcd-44c2-9859-bb30ff5d5123","Type":"ContainerStarted","Data":"b3857a3965c4a9734da168e13e6c80a4f32914a0f64bab9d6370f0dd69c30c9b"} Nov 22 07:39:02 crc kubenswrapper[4853]: I1122 07:39:02.755569 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-b8h4v" event={"ID":"7268d91f-27a0-45a1-8239-b6bdc8736b4b","Type":"ContainerStarted","Data":"a081c59efa267223ceef61e5662fce9ca7ee6184314403fd881c467b9ec46d1f"} Nov 22 07:39:03 crc kubenswrapper[4853]: I1122 07:39:03.784296 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-bmjtw" event={"ID":"ec7d4ab7-a342-4408-b10a-8ac8a59e3e61","Type":"ContainerStarted","Data":"bd2d92a5ad3b5dd15a83858f552328b636d10be28690df6eeb0f208a553984a7"} Nov 22 07:39:03 crc kubenswrapper[4853]: I1122 07:39:03.786701 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-k99wz" event={"ID":"e573b0f6-8f5e-45a9-b00e-410826a9a36d","Type":"ContainerStarted","Data":"6acf501a84816edcb94d5c7b98b1532e32f450034b4a53ec93444c2dad9cbccc"} Nov 22 07:39:03 crc kubenswrapper[4853]: I1122 07:39:03.789348 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"21a745c3-d66b-447a-bf7e-386ac88bb05f","Type":"ContainerStarted","Data":"c730856d57cddcca52937e3cd3260af7023f4f1504c63c6735ab861b2b8563c7"} Nov 22 07:39:04 crc kubenswrapper[4853]: I1122 07:39:04.814541 4853 generic.go:334] "Generic (PLEG): container finished" podID="000b20b5-bfcd-44c2-9859-bb30ff5d5123" containerID="da4d30664f2e6272fc1883499e673c5bd12e9edcf41095153fa275cdac07510e" exitCode=0 Nov 22 07:39:04 crc kubenswrapper[4853]: I1122 07:39:04.814597 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-9qbfw" event={"ID":"000b20b5-bfcd-44c2-9859-bb30ff5d5123","Type":"ContainerDied","Data":"da4d30664f2e6272fc1883499e673c5bd12e9edcf41095153fa275cdac07510e"} Nov 22 07:39:04 crc kubenswrapper[4853]: I1122 07:39:04.827399 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-nhs2x" event={"ID":"05c9113f-59ff-46cc-b704-eb9c8553ad37","Type":"ContainerStarted","Data":"42a1b714bcfb6db721a1321cb09c3f96c26c95c810291abb1a5bab4c6b083a45"} Nov 22 07:39:04 crc kubenswrapper[4853]: I1122 07:39:04.827497 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5bf47b49b7-bmjtw" podUID="ec7d4ab7-a342-4408-b10a-8ac8a59e3e61" containerName="dnsmasq-dns" containerID="cri-o://bd2d92a5ad3b5dd15a83858f552328b636d10be28690df6eeb0f208a553984a7" gracePeriod=10 Nov 22 07:39:04 crc kubenswrapper[4853]: I1122 07:39:04.827588 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5bf47b49b7-bmjtw" Nov 22 07:39:04 crc kubenswrapper[4853]: I1122 07:39:04.881315 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5bf47b49b7-bmjtw" podStartSLOduration=50.881291581 podStartE2EDuration="50.881291581s" podCreationTimestamp="2025-11-22 07:38:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:39:04.866507729 +0000 UTC 
m=+1743.707130355" watchObservedRunningTime="2025-11-22 07:39:04.881291581 +0000 UTC m=+1743.721914207" Nov 22 07:39:04 crc kubenswrapper[4853]: I1122 07:39:04.904331 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-nhs2x" podStartSLOduration=47.295166884 podStartE2EDuration="2m56.90430076s" podCreationTimestamp="2025-11-22 07:36:08 +0000 UTC" firstStartedPulling="2025-11-22 07:36:52.155027399 +0000 UTC m=+1610.995650025" lastFinishedPulling="2025-11-22 07:39:01.764161275 +0000 UTC m=+1740.604783901" observedRunningTime="2025-11-22 07:39:04.887419803 +0000 UTC m=+1743.728042439" watchObservedRunningTime="2025-11-22 07:39:04.90430076 +0000 UTC m=+1743.744923396" Nov 22 07:39:05 crc kubenswrapper[4853]: I1122 07:39:05.337663 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-bmjtw" Nov 22 07:39:05 crc kubenswrapper[4853]: E1122 07:39:05.357382 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstack-network-exporter\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified\\\"\"" pod="openstack/ovsdbserver-nb-0" podUID="ef57a60a-7a73-45c6-8760-7e215eedd374" Nov 22 07:39:05 crc kubenswrapper[4853]: E1122 07:39:05.419546 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstack-network-exporter\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified\\\"\"" pod="openstack/ovsdbserver-sb-0" podUID="a9dc9521-7d6a-4622-9a63-9c761ff0721c" Nov 22 07:39:05 crc kubenswrapper[4853]: I1122 07:39:05.477226 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mtptl\" (UniqueName: \"kubernetes.io/projected/ec7d4ab7-a342-4408-b10a-8ac8a59e3e61-kube-api-access-mtptl\") pod \"ec7d4ab7-a342-4408-b10a-8ac8a59e3e61\" (UID: \"ec7d4ab7-a342-4408-b10a-8ac8a59e3e61\") " Nov 22 07:39:05 crc kubenswrapper[4853]: I1122 07:39:05.477438 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec7d4ab7-a342-4408-b10a-8ac8a59e3e61-config\") pod \"ec7d4ab7-a342-4408-b10a-8ac8a59e3e61\" (UID: \"ec7d4ab7-a342-4408-b10a-8ac8a59e3e61\") " Nov 22 07:39:05 crc kubenswrapper[4853]: I1122 07:39:05.477543 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ec7d4ab7-a342-4408-b10a-8ac8a59e3e61-dns-svc\") pod \"ec7d4ab7-a342-4408-b10a-8ac8a59e3e61\" (UID: \"ec7d4ab7-a342-4408-b10a-8ac8a59e3e61\") " Nov 22 07:39:05 crc kubenswrapper[4853]: I1122 07:39:05.477727 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ec7d4ab7-a342-4408-b10a-8ac8a59e3e61-ovsdbserver-nb\") pod \"ec7d4ab7-a342-4408-b10a-8ac8a59e3e61\" (UID: \"ec7d4ab7-a342-4408-b10a-8ac8a59e3e61\") " Nov 22 07:39:05 crc kubenswrapper[4853]: I1122 07:39:05.485364 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec7d4ab7-a342-4408-b10a-8ac8a59e3e61-kube-api-access-mtptl" (OuterVolumeSpecName: "kube-api-access-mtptl") pod "ec7d4ab7-a342-4408-b10a-8ac8a59e3e61" (UID: "ec7d4ab7-a342-4408-b10a-8ac8a59e3e61"). InnerVolumeSpecName "kube-api-access-mtptl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:39:05 crc kubenswrapper[4853]: I1122 07:39:05.540010 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec7d4ab7-a342-4408-b10a-8ac8a59e3e61-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ec7d4ab7-a342-4408-b10a-8ac8a59e3e61" (UID: "ec7d4ab7-a342-4408-b10a-8ac8a59e3e61"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:39:05 crc kubenswrapper[4853]: I1122 07:39:05.543440 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec7d4ab7-a342-4408-b10a-8ac8a59e3e61-config" (OuterVolumeSpecName: "config") pod "ec7d4ab7-a342-4408-b10a-8ac8a59e3e61" (UID: "ec7d4ab7-a342-4408-b10a-8ac8a59e3e61"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:39:05 crc kubenswrapper[4853]: I1122 07:39:05.546984 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec7d4ab7-a342-4408-b10a-8ac8a59e3e61-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ec7d4ab7-a342-4408-b10a-8ac8a59e3e61" (UID: "ec7d4ab7-a342-4408-b10a-8ac8a59e3e61"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:39:05 crc kubenswrapper[4853]: I1122 07:39:05.581667 4853 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ec7d4ab7-a342-4408-b10a-8ac8a59e3e61-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 22 07:39:05 crc kubenswrapper[4853]: I1122 07:39:05.582087 4853 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ec7d4ab7-a342-4408-b10a-8ac8a59e3e61-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 22 07:39:05 crc kubenswrapper[4853]: I1122 07:39:05.582163 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mtptl\" (UniqueName: \"kubernetes.io/projected/ec7d4ab7-a342-4408-b10a-8ac8a59e3e61-kube-api-access-mtptl\") on node \"crc\" DevicePath \"\"" Nov 22 07:39:05 crc kubenswrapper[4853]: I1122 07:39:05.582215 4853 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec7d4ab7-a342-4408-b10a-8ac8a59e3e61-config\") on node \"crc\" DevicePath \"\"" Nov 22 07:39:05 crc kubenswrapper[4853]: I1122 07:39:05.842055 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-k99wz" event={"ID":"e573b0f6-8f5e-45a9-b00e-410826a9a36d","Type":"ContainerStarted","Data":"22d5625daba149455eb595e6b37330eb543771a95b8cdb93b34545a49b872c3c"} Nov 22 07:39:05 crc kubenswrapper[4853]: I1122 07:39:05.843066 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-k99wz" Nov 22 07:39:05 crc kubenswrapper[4853]: I1122 07:39:05.843143 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-k99wz" Nov 22 07:39:05 crc kubenswrapper[4853]: I1122 07:39:05.845352 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"573160d1-5593-42ee-906a-44b4fbc5abe4","Type":"ContainerStarted","Data":"67599a7a5981d6d4054a2c3fb6d72a75ee4653bef9ac1b3f2df7845a30f145ae"} Nov 22 07:39:05 crc kubenswrapper[4853]: I1122 07:39:05.845668 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Nov 22 07:39:05 crc 
kubenswrapper[4853]: I1122 07:39:05.855918 4853 generic.go:334] "Generic (PLEG): container finished" podID="ec7d4ab7-a342-4408-b10a-8ac8a59e3e61" containerID="bd2d92a5ad3b5dd15a83858f552328b636d10be28690df6eeb0f208a553984a7" exitCode=0 Nov 22 07:39:05 crc kubenswrapper[4853]: I1122 07:39:05.855989 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-bmjtw" Nov 22 07:39:05 crc kubenswrapper[4853]: I1122 07:39:05.856017 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-bmjtw" event={"ID":"ec7d4ab7-a342-4408-b10a-8ac8a59e3e61","Type":"ContainerDied","Data":"bd2d92a5ad3b5dd15a83858f552328b636d10be28690df6eeb0f208a553984a7"} Nov 22 07:39:05 crc kubenswrapper[4853]: I1122 07:39:05.856083 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-bmjtw" event={"ID":"ec7d4ab7-a342-4408-b10a-8ac8a59e3e61","Type":"ContainerDied","Data":"5383c9d38d4fcb9dde7b51f5cdb23b67bef89441f6cc49a3fc8a333d5d8bdea6"} Nov 22 07:39:05 crc kubenswrapper[4853]: I1122 07:39:05.856110 4853 scope.go:117] "RemoveContainer" containerID="bd2d92a5ad3b5dd15a83858f552328b636d10be28690df6eeb0f208a553984a7" Nov 22 07:39:05 crc kubenswrapper[4853]: I1122 07:39:05.860099 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"a9dc9521-7d6a-4622-9a63-9c761ff0721c","Type":"ContainerStarted","Data":"f0999539125f0ed0143d8bcbd97a9604f0c17e76c72a01b8d06a450c6b2f4d47"} Nov 22 07:39:05 crc kubenswrapper[4853]: E1122 07:39:05.862875 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstack-network-exporter\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified\\\"\"" pod="openstack/ovsdbserver-sb-0" podUID="a9dc9521-7d6a-4622-9a63-9c761ff0721c" Nov 22 07:39:05 crc kubenswrapper[4853]: I1122 07:39:05.872015 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"ef57a60a-7a73-45c6-8760-7e215eedd374","Type":"ContainerStarted","Data":"0e80c3c563cbc594f7ffb4e9ce78d14ca25bf9abaee3404cc805ce04c9e039b4"} Nov 22 07:39:05 crc kubenswrapper[4853]: E1122 07:39:05.876712 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstack-network-exporter\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified\\\"\"" pod="openstack/ovsdbserver-nb-0" podUID="ef57a60a-7a73-45c6-8760-7e215eedd374" Nov 22 07:39:05 crc kubenswrapper[4853]: I1122 07:39:05.893302 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-9qbfw" event={"ID":"000b20b5-bfcd-44c2-9859-bb30ff5d5123","Type":"ContainerStarted","Data":"02e619043bb2a42b286d9f3afb1e5b6b88da2a3c334f5b3fce805ca3c7a0d57a"} Nov 22 07:39:05 crc kubenswrapper[4853]: I1122 07:39:05.894064 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-k99wz" podStartSLOduration=94.963393835 podStartE2EDuration="2m57.894029593s" podCreationTimestamp="2025-11-22 07:36:08 +0000 UTC" firstStartedPulling="2025-11-22 07:36:54.824332926 +0000 UTC m=+1613.664955552" lastFinishedPulling="2025-11-22 07:38:17.754968684 +0000 UTC m=+1696.595591310" observedRunningTime="2025-11-22 07:39:05.866713589 +0000 UTC m=+1744.707336225" watchObservedRunningTime="2025-11-22 07:39:05.894029593 
+0000 UTC m=+1744.734652219" Nov 22 07:39:05 crc kubenswrapper[4853]: I1122 07:39:05.896109 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-b8fbc5445-9qbfw" Nov 22 07:39:05 crc kubenswrapper[4853]: I1122 07:39:05.947439 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-bmjtw"] Nov 22 07:39:05 crc kubenswrapper[4853]: I1122 07:39:05.959167 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-bmjtw"] Nov 22 07:39:05 crc kubenswrapper[4853]: I1122 07:39:05.968366 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=12.859429869 podStartE2EDuration="3m2.968340241s" podCreationTimestamp="2025-11-22 07:36:03 +0000 UTC" firstStartedPulling="2025-11-22 07:36:12.469686629 +0000 UTC m=+1571.310309255" lastFinishedPulling="2025-11-22 07:39:02.578597001 +0000 UTC m=+1741.419219627" observedRunningTime="2025-11-22 07:39:05.947774166 +0000 UTC m=+1744.788396792" watchObservedRunningTime="2025-11-22 07:39:05.968340241 +0000 UTC m=+1744.808962867" Nov 22 07:39:05 crc kubenswrapper[4853]: I1122 07:39:05.999233 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-b8fbc5445-9qbfw" podStartSLOduration=31.999206148 podStartE2EDuration="31.999206148s" podCreationTimestamp="2025-11-22 07:38:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:39:05.983695347 +0000 UTC m=+1744.824317973" watchObservedRunningTime="2025-11-22 07:39:05.999206148 +0000 UTC m=+1744.839828784" Nov 22 07:39:06 crc kubenswrapper[4853]: I1122 07:39:06.008209 4853 scope.go:117] "RemoveContainer" containerID="e2c3dc33e7146be0a4c7cf4bf0c8439abf0432d543dd16fc800460eabe16527b" Nov 22 07:39:06 crc kubenswrapper[4853]: I1122 07:39:06.033619 4853 scope.go:117] "RemoveContainer" containerID="bd2d92a5ad3b5dd15a83858f552328b636d10be28690df6eeb0f208a553984a7" Nov 22 07:39:06 crc kubenswrapper[4853]: E1122 07:39:06.034266 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bd2d92a5ad3b5dd15a83858f552328b636d10be28690df6eeb0f208a553984a7\": container with ID starting with bd2d92a5ad3b5dd15a83858f552328b636d10be28690df6eeb0f208a553984a7 not found: ID does not exist" containerID="bd2d92a5ad3b5dd15a83858f552328b636d10be28690df6eeb0f208a553984a7" Nov 22 07:39:06 crc kubenswrapper[4853]: I1122 07:39:06.034306 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd2d92a5ad3b5dd15a83858f552328b636d10be28690df6eeb0f208a553984a7"} err="failed to get container status \"bd2d92a5ad3b5dd15a83858f552328b636d10be28690df6eeb0f208a553984a7\": rpc error: code = NotFound desc = could not find container \"bd2d92a5ad3b5dd15a83858f552328b636d10be28690df6eeb0f208a553984a7\": container with ID starting with bd2d92a5ad3b5dd15a83858f552328b636d10be28690df6eeb0f208a553984a7 not found: ID does not exist" Nov 22 07:39:06 crc kubenswrapper[4853]: I1122 07:39:06.034331 4853 scope.go:117] "RemoveContainer" containerID="e2c3dc33e7146be0a4c7cf4bf0c8439abf0432d543dd16fc800460eabe16527b" Nov 22 07:39:06 crc kubenswrapper[4853]: E1122 07:39:06.034802 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"e2c3dc33e7146be0a4c7cf4bf0c8439abf0432d543dd16fc800460eabe16527b\": container with ID starting with e2c3dc33e7146be0a4c7cf4bf0c8439abf0432d543dd16fc800460eabe16527b not found: ID does not exist" containerID="e2c3dc33e7146be0a4c7cf4bf0c8439abf0432d543dd16fc800460eabe16527b" Nov 22 07:39:06 crc kubenswrapper[4853]: I1122 07:39:06.034829 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e2c3dc33e7146be0a4c7cf4bf0c8439abf0432d543dd16fc800460eabe16527b"} err="failed to get container status \"e2c3dc33e7146be0a4c7cf4bf0c8439abf0432d543dd16fc800460eabe16527b\": rpc error: code = NotFound desc = could not find container \"e2c3dc33e7146be0a4c7cf4bf0c8439abf0432d543dd16fc800460eabe16527b\": container with ID starting with e2c3dc33e7146be0a4c7cf4bf0c8439abf0432d543dd16fc800460eabe16527b not found: ID does not exist" Nov 22 07:39:06 crc kubenswrapper[4853]: I1122 07:39:06.797035 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Nov 22 07:39:06 crc kubenswrapper[4853]: E1122 07:39:06.910314 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstack-network-exporter\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified\\\"\"" pod="openstack/ovsdbserver-sb-0" podUID="a9dc9521-7d6a-4622-9a63-9c761ff0721c" Nov 22 07:39:07 crc kubenswrapper[4853]: I1122 07:39:07.351364 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/d4427668-9ef6-4594-ae35-ff983a6af324-etc-swift\") pod \"swift-storage-0\" (UID: \"d4427668-9ef6-4594-ae35-ff983a6af324\") " pod="openstack/swift-storage-0" Nov 22 07:39:07 crc kubenswrapper[4853]: E1122 07:39:07.351765 4853 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 22 07:39:07 crc kubenswrapper[4853]: E1122 07:39:07.351803 4853 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Nov 22 07:39:07 crc kubenswrapper[4853]: E1122 07:39:07.351893 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d4427668-9ef6-4594-ae35-ff983a6af324-etc-swift podName:d4427668-9ef6-4594-ae35-ff983a6af324 nodeName:}" failed. No retries permitted until 2025-11-22 07:39:39.351862192 +0000 UTC m=+1778.192484818 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/d4427668-9ef6-4594-ae35-ff983a6af324-etc-swift") pod "swift-storage-0" (UID: "d4427668-9ef6-4594-ae35-ff983a6af324") : configmap "swift-ring-files" not found Nov 22 07:39:07 crc kubenswrapper[4853]: I1122 07:39:07.762346 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ec7d4ab7-a342-4408-b10a-8ac8a59e3e61" path="/var/lib/kubelet/pods/ec7d4ab7-a342-4408-b10a-8ac8a59e3e61/volumes" Nov 22 07:39:08 crc kubenswrapper[4853]: E1122 07:39:08.589874 4853 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfd5f90cd_e8e9_489e_b7fd_fde9fd9c342d.slice/crio-bdfaa01e104a8d42cfd9b8df7abda888ee5ce29ec9112f1dfc4351ae1874c41a.scope\": RecentStats: unable to find data in memory cache]" Nov 22 07:39:08 crc kubenswrapper[4853]: I1122 07:39:08.764612 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Nov 22 07:39:08 crc kubenswrapper[4853]: I1122 07:39:08.764857 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Nov 22 07:39:08 crc kubenswrapper[4853]: E1122 07:39:08.768006 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstack-network-exporter\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified\\\"\"" pod="openstack/ovsdbserver-nb-0" podUID="ef57a60a-7a73-45c6-8760-7e215eedd374" Nov 22 07:39:08 crc kubenswrapper[4853]: I1122 07:39:08.796915 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Nov 22 07:39:08 crc kubenswrapper[4853]: E1122 07:39:08.819324 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstack-network-exporter\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified\\\"\"" pod="openstack/ovsdbserver-sb-0" podUID="a9dc9521-7d6a-4622-9a63-9c761ff0721c" Nov 22 07:39:08 crc kubenswrapper[4853]: I1122 07:39:08.891534 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Nov 22 07:39:08 crc kubenswrapper[4853]: I1122 07:39:08.895303 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Nov 22 07:39:08 crc kubenswrapper[4853]: I1122 07:39:08.913101 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-nhs2x" Nov 22 07:39:08 crc kubenswrapper[4853]: I1122 07:39:08.939396 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"21a745c3-d66b-447a-bf7e-386ac88bb05f","Type":"ContainerStarted","Data":"50e730253dded81d2c23de4f531a2e149403209da74286c4428fa65af80c88bb"} Nov 22 07:39:08 crc kubenswrapper[4853]: I1122 07:39:08.948717 4853 generic.go:334] "Generic (PLEG): container finished" podID="fd5f90cd-e8e9-489e-b7fd-fde9fd9c342d" containerID="bdfaa01e104a8d42cfd9b8df7abda888ee5ce29ec9112f1dfc4351ae1874c41a" exitCode=0 Nov 22 07:39:08 crc kubenswrapper[4853]: I1122 07:39:08.948831 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" 
event={"ID":"fd5f90cd-e8e9-489e-b7fd-fde9fd9c342d","Type":"ContainerDied","Data":"bdfaa01e104a8d42cfd9b8df7abda888ee5ce29ec9112f1dfc4351ae1874c41a"} Nov 22 07:39:08 crc kubenswrapper[4853]: E1122 07:39:08.952024 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstack-network-exporter\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified\\\"\"" pod="openstack/ovsdbserver-sb-0" podUID="a9dc9521-7d6a-4622-9a63-9c761ff0721c" Nov 22 07:39:08 crc kubenswrapper[4853]: E1122 07:39:08.952827 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstack-network-exporter\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified\\\"\"" pod="openstack/ovsdbserver-nb-0" podUID="ef57a60a-7a73-45c6-8760-7e215eedd374" Nov 22 07:39:09 crc kubenswrapper[4853]: I1122 07:39:09.964632 4853 generic.go:334] "Generic (PLEG): container finished" podID="410e418b-aee9-40c9-96ed-0f8c5c882148" containerID="1821e6e7cd29212565bafae278919c442dc6373d1091430f10eb797e36f6463a" exitCode=0 Nov 22 07:39:09 crc kubenswrapper[4853]: I1122 07:39:09.964850 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"410e418b-aee9-40c9-96ed-0f8c5c882148","Type":"ContainerDied","Data":"1821e6e7cd29212565bafae278919c442dc6373d1091430f10eb797e36f6463a"} Nov 22 07:39:09 crc kubenswrapper[4853]: E1122 07:39:09.969067 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstack-network-exporter\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified\\\"\"" pod="openstack/ovsdbserver-nb-0" podUID="ef57a60a-7a73-45c6-8760-7e215eedd374" Nov 22 07:39:10 crc kubenswrapper[4853]: I1122 07:39:10.748171 4853 scope.go:117] "RemoveContainer" containerID="1e441baa32afe9d9e98e192afda6a47d949c68ceb8edb814ef148dd3d84f45b1" Nov 22 07:39:10 crc kubenswrapper[4853]: E1122 07:39:10.748734 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" Nov 22 07:39:11 crc kubenswrapper[4853]: I1122 07:39:11.364159 4853 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-tv8h9" podUID="cae818e5-34d5-43c7-95af-e82e21309758" containerName="registry-server" probeResult="failure" output=< Nov 22 07:39:11 crc kubenswrapper[4853]: timeout: failed to connect service ":50051" within 1s Nov 22 07:39:11 crc kubenswrapper[4853]: > Nov 22 07:39:11 crc kubenswrapper[4853]: I1122 07:39:11.843537 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Nov 22 07:39:11 crc kubenswrapper[4853]: I1122 07:39:11.993847 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"fd5f90cd-e8e9-489e-b7fd-fde9fd9c342d","Type":"ContainerStarted","Data":"d74ff68da53a64730285ca0608209ac5411cfd1fa22405dd08bfdfdd6d848e6a"} Nov 22 07:39:11 crc kubenswrapper[4853]: I1122 07:39:11.996957 4853 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"410e418b-aee9-40c9-96ed-0f8c5c882148","Type":"ContainerStarted","Data":"27bd1fe5c7d96a2327a6b5c1fd19e257f51461a7611e33a4c422fa5d568ab3b1"} Nov 22 07:39:12 crc kubenswrapper[4853]: I1122 07:39:12.030706 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=-9223371844.824102 podStartE2EDuration="3m12.030673969s" podCreationTimestamp="2025-11-22 07:36:00 +0000 UTC" firstStartedPulling="2025-11-22 07:36:02.927633542 +0000 UTC m=+1561.768256168" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:39:12.024613617 +0000 UTC m=+1750.865236253" watchObservedRunningTime="2025-11-22 07:39:12.030673969 +0000 UTC m=+1750.871296595" Nov 22 07:39:12 crc kubenswrapper[4853]: I1122 07:39:12.052241 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=57.778846029 podStartE2EDuration="3m13.052216049s" podCreationTimestamp="2025-11-22 07:35:59 +0000 UTC" firstStartedPulling="2025-11-22 07:36:02.445808718 +0000 UTC m=+1561.286431334" lastFinishedPulling="2025-11-22 07:38:17.719178728 +0000 UTC m=+1696.559801354" observedRunningTime="2025-11-22 07:39:12.047981097 +0000 UTC m=+1750.888603743" watchObservedRunningTime="2025-11-22 07:39:12.052216049 +0000 UTC m=+1750.892838675" Nov 22 07:39:12 crc kubenswrapper[4853]: I1122 07:39:12.341141 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Nov 22 07:39:12 crc kubenswrapper[4853]: I1122 07:39:12.341281 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Nov 22 07:39:13 crc kubenswrapper[4853]: I1122 07:39:13.803522 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Nov 22 07:39:14 crc kubenswrapper[4853]: I1122 07:39:14.388569 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Nov 22 07:39:14 crc kubenswrapper[4853]: I1122 07:39:14.605970 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-b8fbc5445-9qbfw" Nov 22 07:39:14 crc kubenswrapper[4853]: I1122 07:39:14.675132 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8554648995-kcrgb"] Nov 22 07:39:14 crc kubenswrapper[4853]: I1122 07:39:14.675495 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-8554648995-kcrgb" podUID="8084cec1-a543-4ad8-814a-d907ee68e2d5" containerName="dnsmasq-dns" containerID="cri-o://014547175eb62a7b6be53dbcd8652831b7f8bbb143d5df95f322fea9c6a8f14b" gracePeriod=10 Nov 22 07:39:15 crc kubenswrapper[4853]: I1122 07:39:15.046731 4853 generic.go:334] "Generic (PLEG): container finished" podID="8084cec1-a543-4ad8-814a-d907ee68e2d5" containerID="014547175eb62a7b6be53dbcd8652831b7f8bbb143d5df95f322fea9c6a8f14b" exitCode=0 Nov 22 07:39:15 crc kubenswrapper[4853]: I1122 07:39:15.046816 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-kcrgb" event={"ID":"8084cec1-a543-4ad8-814a-d907ee68e2d5","Type":"ContainerDied","Data":"014547175eb62a7b6be53dbcd8652831b7f8bbb143d5df95f322fea9c6a8f14b"} Nov 22 07:39:15 crc kubenswrapper[4853]: I1122 07:39:15.245461 4853 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openstack/dnsmasq-dns-8554648995-kcrgb" podUID="8084cec1-a543-4ad8-814a-d907ee68e2d5" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.145:5353: connect: connection refused" Nov 22 07:39:15 crc kubenswrapper[4853]: I1122 07:39:15.600951 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-kcrgb" Nov 22 07:39:15 crc kubenswrapper[4853]: I1122 07:39:15.697281 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8084cec1-a543-4ad8-814a-d907ee68e2d5-config\") pod \"8084cec1-a543-4ad8-814a-d907ee68e2d5\" (UID: \"8084cec1-a543-4ad8-814a-d907ee68e2d5\") " Nov 22 07:39:15 crc kubenswrapper[4853]: I1122 07:39:15.697377 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sfjhl\" (UniqueName: \"kubernetes.io/projected/8084cec1-a543-4ad8-814a-d907ee68e2d5-kube-api-access-sfjhl\") pod \"8084cec1-a543-4ad8-814a-d907ee68e2d5\" (UID: \"8084cec1-a543-4ad8-814a-d907ee68e2d5\") " Nov 22 07:39:15 crc kubenswrapper[4853]: I1122 07:39:15.697450 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8084cec1-a543-4ad8-814a-d907ee68e2d5-ovsdbserver-nb\") pod \"8084cec1-a543-4ad8-814a-d907ee68e2d5\" (UID: \"8084cec1-a543-4ad8-814a-d907ee68e2d5\") " Nov 22 07:39:15 crc kubenswrapper[4853]: I1122 07:39:15.697573 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8084cec1-a543-4ad8-814a-d907ee68e2d5-dns-svc\") pod \"8084cec1-a543-4ad8-814a-d907ee68e2d5\" (UID: \"8084cec1-a543-4ad8-814a-d907ee68e2d5\") " Nov 22 07:39:15 crc kubenswrapper[4853]: I1122 07:39:15.697612 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8084cec1-a543-4ad8-814a-d907ee68e2d5-ovsdbserver-sb\") pod \"8084cec1-a543-4ad8-814a-d907ee68e2d5\" (UID: \"8084cec1-a543-4ad8-814a-d907ee68e2d5\") " Nov 22 07:39:15 crc kubenswrapper[4853]: I1122 07:39:15.703393 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8084cec1-a543-4ad8-814a-d907ee68e2d5-kube-api-access-sfjhl" (OuterVolumeSpecName: "kube-api-access-sfjhl") pod "8084cec1-a543-4ad8-814a-d907ee68e2d5" (UID: "8084cec1-a543-4ad8-814a-d907ee68e2d5"). InnerVolumeSpecName "kube-api-access-sfjhl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:39:15 crc kubenswrapper[4853]: E1122 07:39:15.800306 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8084cec1-a543-4ad8-814a-d907ee68e2d5-dns-svc podName:8084cec1-a543-4ad8-814a-d907ee68e2d5 nodeName:}" failed. No retries permitted until 2025-11-22 07:39:16.300209163 +0000 UTC m=+1755.140831799 (durationBeforeRetry 500ms). 
Error: error cleaning subPath mounts for volume "dns-svc" (UniqueName: "kubernetes.io/configmap/8084cec1-a543-4ad8-814a-d907ee68e2d5-dns-svc") pod "8084cec1-a543-4ad8-814a-d907ee68e2d5" (UID: "8084cec1-a543-4ad8-814a-d907ee68e2d5") : error deleting /var/lib/kubelet/pods/8084cec1-a543-4ad8-814a-d907ee68e2d5/volume-subpaths: remove /var/lib/kubelet/pods/8084cec1-a543-4ad8-814a-d907ee68e2d5/volume-subpaths: no such file or directory Nov 22 07:39:15 crc kubenswrapper[4853]: I1122 07:39:15.800447 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8084cec1-a543-4ad8-814a-d907ee68e2d5-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "8084cec1-a543-4ad8-814a-d907ee68e2d5" (UID: "8084cec1-a543-4ad8-814a-d907ee68e2d5"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:39:15 crc kubenswrapper[4853]: E1122 07:39:15.800735 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8084cec1-a543-4ad8-814a-d907ee68e2d5-config podName:8084cec1-a543-4ad8-814a-d907ee68e2d5 nodeName:}" failed. No retries permitted until 2025-11-22 07:39:16.300720306 +0000 UTC m=+1755.141342932 (durationBeforeRetry 500ms). Error: error cleaning subPath mounts for volume "config" (UniqueName: "kubernetes.io/configmap/8084cec1-a543-4ad8-814a-d907ee68e2d5-config") pod "8084cec1-a543-4ad8-814a-d907ee68e2d5" (UID: "8084cec1-a543-4ad8-814a-d907ee68e2d5") : error deleting /var/lib/kubelet/pods/8084cec1-a543-4ad8-814a-d907ee68e2d5/volume-subpaths: remove /var/lib/kubelet/pods/8084cec1-a543-4ad8-814a-d907ee68e2d5/volume-subpaths: no such file or directory Nov 22 07:39:15 crc kubenswrapper[4853]: I1122 07:39:15.800516 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8084cec1-a543-4ad8-814a-d907ee68e2d5-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "8084cec1-a543-4ad8-814a-d907ee68e2d5" (UID: "8084cec1-a543-4ad8-814a-d907ee68e2d5"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:39:15 crc kubenswrapper[4853]: I1122 07:39:15.813247 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sfjhl\" (UniqueName: \"kubernetes.io/projected/8084cec1-a543-4ad8-814a-d907ee68e2d5-kube-api-access-sfjhl\") on node \"crc\" DevicePath \"\"" Nov 22 07:39:15 crc kubenswrapper[4853]: I1122 07:39:15.813304 4853 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8084cec1-a543-4ad8-814a-d907ee68e2d5-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 22 07:39:15 crc kubenswrapper[4853]: I1122 07:39:15.813315 4853 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8084cec1-a543-4ad8-814a-d907ee68e2d5-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 22 07:39:16 crc kubenswrapper[4853]: I1122 07:39:16.073172 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-kcrgb" event={"ID":"8084cec1-a543-4ad8-814a-d907ee68e2d5","Type":"ContainerDied","Data":"4f3e1ed5374a7d2ef6177c837f7175625ec62d66fafef792adcdc929f6bcc669"} Nov 22 07:39:16 crc kubenswrapper[4853]: I1122 07:39:16.073300 4853 scope.go:117] "RemoveContainer" containerID="014547175eb62a7b6be53dbcd8652831b7f8bbb143d5df95f322fea9c6a8f14b" Nov 22 07:39:16 crc kubenswrapper[4853]: I1122 07:39:16.073525 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-kcrgb" Nov 22 07:39:16 crc kubenswrapper[4853]: I1122 07:39:16.327780 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8084cec1-a543-4ad8-814a-d907ee68e2d5-config\") pod \"8084cec1-a543-4ad8-814a-d907ee68e2d5\" (UID: \"8084cec1-a543-4ad8-814a-d907ee68e2d5\") " Nov 22 07:39:16 crc kubenswrapper[4853]: I1122 07:39:16.328448 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8084cec1-a543-4ad8-814a-d907ee68e2d5-dns-svc\") pod \"8084cec1-a543-4ad8-814a-d907ee68e2d5\" (UID: \"8084cec1-a543-4ad8-814a-d907ee68e2d5\") " Nov 22 07:39:16 crc kubenswrapper[4853]: I1122 07:39:16.328966 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8084cec1-a543-4ad8-814a-d907ee68e2d5-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8084cec1-a543-4ad8-814a-d907ee68e2d5" (UID: "8084cec1-a543-4ad8-814a-d907ee68e2d5"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:39:16 crc kubenswrapper[4853]: I1122 07:39:16.329728 4853 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8084cec1-a543-4ad8-814a-d907ee68e2d5-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 22 07:39:16 crc kubenswrapper[4853]: I1122 07:39:16.330467 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8084cec1-a543-4ad8-814a-d907ee68e2d5-config" (OuterVolumeSpecName: "config") pod "8084cec1-a543-4ad8-814a-d907ee68e2d5" (UID: "8084cec1-a543-4ad8-814a-d907ee68e2d5"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:39:16 crc kubenswrapper[4853]: I1122 07:39:16.420170 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8554648995-kcrgb"] Nov 22 07:39:16 crc kubenswrapper[4853]: I1122 07:39:16.431982 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8554648995-kcrgb"] Nov 22 07:39:16 crc kubenswrapper[4853]: I1122 07:39:16.432508 4853 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8084cec1-a543-4ad8-814a-d907ee68e2d5-config\") on node \"crc\" DevicePath \"\"" Nov 22 07:39:17 crc kubenswrapper[4853]: I1122 07:39:17.485802 4853 scope.go:117] "RemoveContainer" containerID="226128eec8da970656512a0f9cc960e027fb45e283aa1bff058fa09121b2498a" Nov 22 07:39:17 crc kubenswrapper[4853]: I1122 07:39:17.762411 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8084cec1-a543-4ad8-814a-d907ee68e2d5" path="/var/lib/kubelet/pods/8084cec1-a543-4ad8-814a-d907ee68e2d5/volumes" Nov 22 07:39:21 crc kubenswrapper[4853]: I1122 07:39:21.159515 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Nov 22 07:39:21 crc kubenswrapper[4853]: I1122 07:39:21.160223 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Nov 22 07:39:21 crc kubenswrapper[4853]: I1122 07:39:21.384088 4853 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-tv8h9" podUID="cae818e5-34d5-43c7-95af-e82e21309758" containerName="registry-server" probeResult="failure" output=< Nov 22 07:39:21 crc kubenswrapper[4853]: timeout: failed to connect service ":50051" within 1s Nov 22 07:39:21 crc kubenswrapper[4853]: > Nov 22 07:39:22 crc kubenswrapper[4853]: I1122 07:39:22.184045 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-gcfs8" event={"ID":"2d4565ad-c87f-4e82-bd22-0218b0598651","Type":"ContainerStarted","Data":"d4e9a8d88064ba66482ee1fc88257879d29418c9f5b3d396f5fc961d7cbac0c6"} Nov 22 07:39:22 crc kubenswrapper[4853]: I1122 07:39:22.186564 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"a9dc9521-7d6a-4622-9a63-9c761ff0721c","Type":"ContainerStarted","Data":"6d97838f65c80473f7fbac4f40a122a8ca1c6fc9099ae219e443646fe41759ca"} Nov 22 07:39:22 crc kubenswrapper[4853]: I1122 07:39:22.188683 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"ef57a60a-7a73-45c6-8760-7e215eedd374","Type":"ContainerStarted","Data":"6c3ec0be1fe3121ca282b333a8883c2dafe0afb5d99eadbc8f940342816b2538"} Nov 22 07:39:22 crc kubenswrapper[4853]: I1122 07:39:22.191374 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-b8h4v" event={"ID":"7268d91f-27a0-45a1-8239-b6bdc8736b4b","Type":"ContainerStarted","Data":"8d28e2d5395961601f1e1e0330aac5883c7550b29d4622f26e5a45390303f622"} Nov 22 07:39:22 crc kubenswrapper[4853]: I1122 07:39:22.220250 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=50.236436343 podStartE2EDuration="3m15.220216363s" podCreationTimestamp="2025-11-22 07:36:07 +0000 UTC" firstStartedPulling="2025-11-22 07:36:55.935330994 +0000 UTC m=+1614.775953620" lastFinishedPulling="2025-11-22 07:39:20.919111014 +0000 UTC m=+1759.759733640" observedRunningTime="2025-11-22 
07:39:22.214788989 +0000 UTC m=+1761.055411625" watchObservedRunningTime="2025-11-22 07:39:22.220216363 +0000 UTC m=+1761.060838999" Nov 22 07:39:22 crc kubenswrapper[4853]: I1122 07:39:22.261539 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-b8h4v" podStartSLOduration=34.360271203 podStartE2EDuration="47.261501457s" podCreationTimestamp="2025-11-22 07:38:35 +0000 UTC" firstStartedPulling="2025-11-22 07:39:02.544053368 +0000 UTC m=+1741.384675994" lastFinishedPulling="2025-11-22 07:39:15.445283622 +0000 UTC m=+1754.285906248" observedRunningTime="2025-11-22 07:39:22.246119919 +0000 UTC m=+1761.086742545" watchObservedRunningTime="2025-11-22 07:39:22.261501457 +0000 UTC m=+1761.102124093" Nov 22 07:39:22 crc kubenswrapper[4853]: I1122 07:39:22.403978 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-g6crm"] Nov 22 07:39:22 crc kubenswrapper[4853]: E1122 07:39:22.404521 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec7d4ab7-a342-4408-b10a-8ac8a59e3e61" containerName="init" Nov 22 07:39:22 crc kubenswrapper[4853]: I1122 07:39:22.404543 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec7d4ab7-a342-4408-b10a-8ac8a59e3e61" containerName="init" Nov 22 07:39:22 crc kubenswrapper[4853]: E1122 07:39:22.404565 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8084cec1-a543-4ad8-814a-d907ee68e2d5" containerName="dnsmasq-dns" Nov 22 07:39:22 crc kubenswrapper[4853]: I1122 07:39:22.404572 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="8084cec1-a543-4ad8-814a-d907ee68e2d5" containerName="dnsmasq-dns" Nov 22 07:39:22 crc kubenswrapper[4853]: E1122 07:39:22.404601 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8084cec1-a543-4ad8-814a-d907ee68e2d5" containerName="init" Nov 22 07:39:22 crc kubenswrapper[4853]: I1122 07:39:22.404612 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="8084cec1-a543-4ad8-814a-d907ee68e2d5" containerName="init" Nov 22 07:39:22 crc kubenswrapper[4853]: E1122 07:39:22.404623 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec7d4ab7-a342-4408-b10a-8ac8a59e3e61" containerName="dnsmasq-dns" Nov 22 07:39:22 crc kubenswrapper[4853]: I1122 07:39:22.404632 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec7d4ab7-a342-4408-b10a-8ac8a59e3e61" containerName="dnsmasq-dns" Nov 22 07:39:22 crc kubenswrapper[4853]: I1122 07:39:22.404981 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec7d4ab7-a342-4408-b10a-8ac8a59e3e61" containerName="dnsmasq-dns" Nov 22 07:39:22 crc kubenswrapper[4853]: I1122 07:39:22.405022 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="8084cec1-a543-4ad8-814a-d907ee68e2d5" containerName="dnsmasq-dns" Nov 22 07:39:22 crc kubenswrapper[4853]: I1122 07:39:22.406939 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-g6crm" Nov 22 07:39:22 crc kubenswrapper[4853]: I1122 07:39:22.420525 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-g6crm"] Nov 22 07:39:22 crc kubenswrapper[4853]: I1122 07:39:22.490215 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3ead9cf-10b5-45ec-82e2-9083c221e150-utilities\") pod \"certified-operators-g6crm\" (UID: \"e3ead9cf-10b5-45ec-82e2-9083c221e150\") " pod="openshift-marketplace/certified-operators-g6crm" Nov 22 07:39:22 crc kubenswrapper[4853]: I1122 07:39:22.490319 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pgbg8\" (UniqueName: \"kubernetes.io/projected/e3ead9cf-10b5-45ec-82e2-9083c221e150-kube-api-access-pgbg8\") pod \"certified-operators-g6crm\" (UID: \"e3ead9cf-10b5-45ec-82e2-9083c221e150\") " pod="openshift-marketplace/certified-operators-g6crm" Nov 22 07:39:22 crc kubenswrapper[4853]: I1122 07:39:22.490380 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3ead9cf-10b5-45ec-82e2-9083c221e150-catalog-content\") pod \"certified-operators-g6crm\" (UID: \"e3ead9cf-10b5-45ec-82e2-9083c221e150\") " pod="openshift-marketplace/certified-operators-g6crm" Nov 22 07:39:22 crc kubenswrapper[4853]: I1122 07:39:22.593538 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3ead9cf-10b5-45ec-82e2-9083c221e150-utilities\") pod \"certified-operators-g6crm\" (UID: \"e3ead9cf-10b5-45ec-82e2-9083c221e150\") " pod="openshift-marketplace/certified-operators-g6crm" Nov 22 07:39:22 crc kubenswrapper[4853]: I1122 07:39:22.593646 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pgbg8\" (UniqueName: \"kubernetes.io/projected/e3ead9cf-10b5-45ec-82e2-9083c221e150-kube-api-access-pgbg8\") pod \"certified-operators-g6crm\" (UID: \"e3ead9cf-10b5-45ec-82e2-9083c221e150\") " pod="openshift-marketplace/certified-operators-g6crm" Nov 22 07:39:22 crc kubenswrapper[4853]: I1122 07:39:22.593765 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3ead9cf-10b5-45ec-82e2-9083c221e150-catalog-content\") pod \"certified-operators-g6crm\" (UID: \"e3ead9cf-10b5-45ec-82e2-9083c221e150\") " pod="openshift-marketplace/certified-operators-g6crm" Nov 22 07:39:22 crc kubenswrapper[4853]: I1122 07:39:22.594118 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3ead9cf-10b5-45ec-82e2-9083c221e150-utilities\") pod \"certified-operators-g6crm\" (UID: \"e3ead9cf-10b5-45ec-82e2-9083c221e150\") " pod="openshift-marketplace/certified-operators-g6crm" Nov 22 07:39:22 crc kubenswrapper[4853]: I1122 07:39:22.594261 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3ead9cf-10b5-45ec-82e2-9083c221e150-catalog-content\") pod \"certified-operators-g6crm\" (UID: \"e3ead9cf-10b5-45ec-82e2-9083c221e150\") " pod="openshift-marketplace/certified-operators-g6crm" Nov 22 07:39:22 crc kubenswrapper[4853]: I1122 07:39:22.622015 4853 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-pgbg8\" (UniqueName: \"kubernetes.io/projected/e3ead9cf-10b5-45ec-82e2-9083c221e150-kube-api-access-pgbg8\") pod \"certified-operators-g6crm\" (UID: \"e3ead9cf-10b5-45ec-82e2-9083c221e150\") " pod="openshift-marketplace/certified-operators-g6crm" Nov 22 07:39:22 crc kubenswrapper[4853]: I1122 07:39:22.731958 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-g6crm" Nov 22 07:39:24 crc kubenswrapper[4853]: I1122 07:39:24.268612 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-gcfs8" podStartSLOduration=7.832342817 podStartE2EDuration="1m10.268588093s" podCreationTimestamp="2025-11-22 07:38:14 +0000 UTC" firstStartedPulling="2025-11-22 07:38:18.347490402 +0000 UTC m=+1697.188113018" lastFinishedPulling="2025-11-22 07:39:20.783735668 +0000 UTC m=+1759.624358294" observedRunningTime="2025-11-22 07:39:24.265899302 +0000 UTC m=+1763.106521928" watchObservedRunningTime="2025-11-22 07:39:24.268588093 +0000 UTC m=+1763.109210729" Nov 22 07:39:24 crc kubenswrapper[4853]: I1122 07:39:24.320560 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=48.227350277 podStartE2EDuration="3m14.320529588s" podCreationTimestamp="2025-11-22 07:36:10 +0000 UTC" firstStartedPulling="2025-11-22 07:36:54.118512678 +0000 UTC m=+1612.959135304" lastFinishedPulling="2025-11-22 07:39:20.211691989 +0000 UTC m=+1759.052314615" observedRunningTime="2025-11-22 07:39:24.312443645 +0000 UTC m=+1763.153066291" watchObservedRunningTime="2025-11-22 07:39:24.320529588 +0000 UTC m=+1763.161152214" Nov 22 07:39:24 crc kubenswrapper[4853]: I1122 07:39:24.674521 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Nov 22 07:39:24 crc kubenswrapper[4853]: I1122 07:39:24.676596 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Nov 22 07:39:24 crc kubenswrapper[4853]: I1122 07:39:24.679134 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Nov 22 07:39:24 crc kubenswrapper[4853]: I1122 07:39:24.679260 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Nov 22 07:39:24 crc kubenswrapper[4853]: I1122 07:39:24.680711 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Nov 22 07:39:24 crc kubenswrapper[4853]: I1122 07:39:24.681251 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-9z4lq" Nov 22 07:39:24 crc kubenswrapper[4853]: I1122 07:39:24.714712 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Nov 22 07:39:24 crc kubenswrapper[4853]: I1122 07:39:24.755932 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/4a68a565-fd46-4cac-a300-2e7489e20c4c-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"4a68a565-fd46-4cac-a300-2e7489e20c4c\") " pod="openstack/ovn-northd-0" Nov 22 07:39:24 crc kubenswrapper[4853]: I1122 07:39:24.756233 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4a68a565-fd46-4cac-a300-2e7489e20c4c-scripts\") pod \"ovn-northd-0\" (UID: \"4a68a565-fd46-4cac-a300-2e7489e20c4c\") " pod="openstack/ovn-northd-0" Nov 22 07:39:24 crc kubenswrapper[4853]: I1122 07:39:24.756272 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/4a68a565-fd46-4cac-a300-2e7489e20c4c-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"4a68a565-fd46-4cac-a300-2e7489e20c4c\") " pod="openstack/ovn-northd-0" Nov 22 07:39:24 crc kubenswrapper[4853]: I1122 07:39:24.756449 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a68a565-fd46-4cac-a300-2e7489e20c4c-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"4a68a565-fd46-4cac-a300-2e7489e20c4c\") " pod="openstack/ovn-northd-0" Nov 22 07:39:24 crc kubenswrapper[4853]: I1122 07:39:24.756469 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/4a68a565-fd46-4cac-a300-2e7489e20c4c-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"4a68a565-fd46-4cac-a300-2e7489e20c4c\") " pod="openstack/ovn-northd-0" Nov 22 07:39:24 crc kubenswrapper[4853]: I1122 07:39:24.756505 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a68a565-fd46-4cac-a300-2e7489e20c4c-config\") pod \"ovn-northd-0\" (UID: \"4a68a565-fd46-4cac-a300-2e7489e20c4c\") " pod="openstack/ovn-northd-0" Nov 22 07:39:24 crc kubenswrapper[4853]: I1122 07:39:24.756535 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t9xrv\" (UniqueName: \"kubernetes.io/projected/4a68a565-fd46-4cac-a300-2e7489e20c4c-kube-api-access-t9xrv\") pod \"ovn-northd-0\" (UID: \"4a68a565-fd46-4cac-a300-2e7489e20c4c\") " pod="openstack/ovn-northd-0" Nov 22 07:39:24 crc kubenswrapper[4853]: 
I1122 07:39:24.860054 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a68a565-fd46-4cac-a300-2e7489e20c4c-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"4a68a565-fd46-4cac-a300-2e7489e20c4c\") " pod="openstack/ovn-northd-0" Nov 22 07:39:24 crc kubenswrapper[4853]: I1122 07:39:24.860145 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/4a68a565-fd46-4cac-a300-2e7489e20c4c-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"4a68a565-fd46-4cac-a300-2e7489e20c4c\") " pod="openstack/ovn-northd-0" Nov 22 07:39:24 crc kubenswrapper[4853]: I1122 07:39:24.860220 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a68a565-fd46-4cac-a300-2e7489e20c4c-config\") pod \"ovn-northd-0\" (UID: \"4a68a565-fd46-4cac-a300-2e7489e20c4c\") " pod="openstack/ovn-northd-0" Nov 22 07:39:24 crc kubenswrapper[4853]: I1122 07:39:24.860277 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t9xrv\" (UniqueName: \"kubernetes.io/projected/4a68a565-fd46-4cac-a300-2e7489e20c4c-kube-api-access-t9xrv\") pod \"ovn-northd-0\" (UID: \"4a68a565-fd46-4cac-a300-2e7489e20c4c\") " pod="openstack/ovn-northd-0" Nov 22 07:39:24 crc kubenswrapper[4853]: I1122 07:39:24.860323 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/4a68a565-fd46-4cac-a300-2e7489e20c4c-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"4a68a565-fd46-4cac-a300-2e7489e20c4c\") " pod="openstack/ovn-northd-0" Nov 22 07:39:24 crc kubenswrapper[4853]: I1122 07:39:24.860396 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4a68a565-fd46-4cac-a300-2e7489e20c4c-scripts\") pod \"ovn-northd-0\" (UID: \"4a68a565-fd46-4cac-a300-2e7489e20c4c\") " pod="openstack/ovn-northd-0" Nov 22 07:39:24 crc kubenswrapper[4853]: I1122 07:39:24.860423 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/4a68a565-fd46-4cac-a300-2e7489e20c4c-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"4a68a565-fd46-4cac-a300-2e7489e20c4c\") " pod="openstack/ovn-northd-0" Nov 22 07:39:24 crc kubenswrapper[4853]: I1122 07:39:24.860940 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/4a68a565-fd46-4cac-a300-2e7489e20c4c-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"4a68a565-fd46-4cac-a300-2e7489e20c4c\") " pod="openstack/ovn-northd-0" Nov 22 07:39:24 crc kubenswrapper[4853]: I1122 07:39:24.861490 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a68a565-fd46-4cac-a300-2e7489e20c4c-config\") pod \"ovn-northd-0\" (UID: \"4a68a565-fd46-4cac-a300-2e7489e20c4c\") " pod="openstack/ovn-northd-0" Nov 22 07:39:24 crc kubenswrapper[4853]: I1122 07:39:24.861645 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4a68a565-fd46-4cac-a300-2e7489e20c4c-scripts\") pod \"ovn-northd-0\" (UID: \"4a68a565-fd46-4cac-a300-2e7489e20c4c\") " pod="openstack/ovn-northd-0" Nov 22 07:39:24 crc kubenswrapper[4853]: I1122 07:39:24.868905 4853 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/4a68a565-fd46-4cac-a300-2e7489e20c4c-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"4a68a565-fd46-4cac-a300-2e7489e20c4c\") " pod="openstack/ovn-northd-0" Nov 22 07:39:24 crc kubenswrapper[4853]: I1122 07:39:24.871126 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a68a565-fd46-4cac-a300-2e7489e20c4c-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"4a68a565-fd46-4cac-a300-2e7489e20c4c\") " pod="openstack/ovn-northd-0" Nov 22 07:39:24 crc kubenswrapper[4853]: I1122 07:39:24.882975 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/4a68a565-fd46-4cac-a300-2e7489e20c4c-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"4a68a565-fd46-4cac-a300-2e7489e20c4c\") " pod="openstack/ovn-northd-0" Nov 22 07:39:24 crc kubenswrapper[4853]: I1122 07:39:24.885378 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t9xrv\" (UniqueName: \"kubernetes.io/projected/4a68a565-fd46-4cac-a300-2e7489e20c4c-kube-api-access-t9xrv\") pod \"ovn-northd-0\" (UID: \"4a68a565-fd46-4cac-a300-2e7489e20c4c\") " pod="openstack/ovn-northd-0" Nov 22 07:39:25 crc kubenswrapper[4853]: I1122 07:39:25.014999 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Nov 22 07:39:25 crc kubenswrapper[4853]: I1122 07:39:25.221855 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"2eadd806-7143-46ba-9e49-f19ac0bd52bd","Type":"ContainerStarted","Data":"8e8749dd25d8b57e51e1b4ef9317ecadcde4606ab344737ff6cd9ad213c23386"} Nov 22 07:39:25 crc kubenswrapper[4853]: I1122 07:39:25.225015 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"d0e9072b-3e2a-4283-a697-8411049c5161","Type":"ContainerStarted","Data":"191995656bf4f31e2276dad55fca2b424abcadafb5511c17ace128a41f95ec41"} Nov 22 07:39:25 crc kubenswrapper[4853]: I1122 07:39:25.758463 4853 scope.go:117] "RemoveContainer" containerID="1e441baa32afe9d9e98e192afda6a47d949c68ceb8edb814ef148dd3d84f45b1" Nov 22 07:39:25 crc kubenswrapper[4853]: E1122 07:39:25.758808 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" Nov 22 07:39:27 crc kubenswrapper[4853]: I1122 07:39:27.237433 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Nov 22 07:39:27 crc kubenswrapper[4853]: I1122 07:39:27.384260 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Nov 22 07:39:27 crc kubenswrapper[4853]: I1122 07:39:27.819136 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-g6crm"] Nov 22 07:39:27 crc kubenswrapper[4853]: W1122 07:39:27.828667 4853 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode3ead9cf_10b5_45ec_82e2_9083c221e150.slice/crio-b0b55301a3adc57fe15007ad7afc608ba662eace2fda2b69d8fd857f98a32118 WatchSource:0}: Error finding container b0b55301a3adc57fe15007ad7afc608ba662eace2fda2b69d8fd857f98a32118: Status 404 returned error can't find the container with id b0b55301a3adc57fe15007ad7afc608ba662eace2fda2b69d8fd857f98a32118 Nov 22 07:39:27 crc kubenswrapper[4853]: I1122 07:39:27.833389 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-78e4-account-create-7mqjx"] Nov 22 07:39:27 crc kubenswrapper[4853]: I1122 07:39:27.838850 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-78e4-account-create-7mqjx" Nov 22 07:39:27 crc kubenswrapper[4853]: I1122 07:39:27.842051 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Nov 22 07:39:27 crc kubenswrapper[4853]: I1122 07:39:27.866441 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-78e4-account-create-7mqjx"] Nov 22 07:39:27 crc kubenswrapper[4853]: W1122 07:39:27.935545 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4a68a565_fd46_4cac_a300_2e7489e20c4c.slice/crio-45950804031e15dd25da4eacd8d3f56c23bade5208088b1ea3a31c924e27b0dc WatchSource:0}: Error finding container 45950804031e15dd25da4eacd8d3f56c23bade5208088b1ea3a31c924e27b0dc: Status 404 returned error can't find the container with id 45950804031e15dd25da4eacd8d3f56c23bade5208088b1ea3a31c924e27b0dc Nov 22 07:39:27 crc kubenswrapper[4853]: I1122 07:39:27.941553 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Nov 22 07:39:27 crc kubenswrapper[4853]: I1122 07:39:27.953067 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qn2jh\" (UniqueName: \"kubernetes.io/projected/98f11ce6-a3d1-43d6-b94b-36c0b37e1959-kube-api-access-qn2jh\") pod \"glance-78e4-account-create-7mqjx\" (UID: \"98f11ce6-a3d1-43d6-b94b-36c0b37e1959\") " pod="openstack/glance-78e4-account-create-7mqjx" Nov 22 07:39:27 crc kubenswrapper[4853]: I1122 07:39:27.953164 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/98f11ce6-a3d1-43d6-b94b-36c0b37e1959-operator-scripts\") pod \"glance-78e4-account-create-7mqjx\" (UID: \"98f11ce6-a3d1-43d6-b94b-36c0b37e1959\") " pod="openstack/glance-78e4-account-create-7mqjx" Nov 22 07:39:28 crc kubenswrapper[4853]: I1122 07:39:28.060359 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qn2jh\" (UniqueName: \"kubernetes.io/projected/98f11ce6-a3d1-43d6-b94b-36c0b37e1959-kube-api-access-qn2jh\") pod \"glance-78e4-account-create-7mqjx\" (UID: \"98f11ce6-a3d1-43d6-b94b-36c0b37e1959\") " pod="openstack/glance-78e4-account-create-7mqjx" Nov 22 07:39:28 crc kubenswrapper[4853]: I1122 07:39:28.060484 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/98f11ce6-a3d1-43d6-b94b-36c0b37e1959-operator-scripts\") pod \"glance-78e4-account-create-7mqjx\" (UID: \"98f11ce6-a3d1-43d6-b94b-36c0b37e1959\") " pod="openstack/glance-78e4-account-create-7mqjx" Nov 22 07:39:28 crc kubenswrapper[4853]: I1122 07:39:28.061711 4853 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/98f11ce6-a3d1-43d6-b94b-36c0b37e1959-operator-scripts\") pod \"glance-78e4-account-create-7mqjx\" (UID: \"98f11ce6-a3d1-43d6-b94b-36c0b37e1959\") " pod="openstack/glance-78e4-account-create-7mqjx" Nov 22 07:39:28 crc kubenswrapper[4853]: I1122 07:39:28.092030 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qn2jh\" (UniqueName: \"kubernetes.io/projected/98f11ce6-a3d1-43d6-b94b-36c0b37e1959-kube-api-access-qn2jh\") pod \"glance-78e4-account-create-7mqjx\" (UID: \"98f11ce6-a3d1-43d6-b94b-36c0b37e1959\") " pod="openstack/glance-78e4-account-create-7mqjx" Nov 22 07:39:28 crc kubenswrapper[4853]: I1122 07:39:28.222701 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-78e4-account-create-7mqjx" Nov 22 07:39:28 crc kubenswrapper[4853]: I1122 07:39:28.268861 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"21a745c3-d66b-447a-bf7e-386ac88bb05f","Type":"ContainerStarted","Data":"1e69b26abe64b015cb361ba34eebc7b316da1c9b062cb16a483c5a1abb852b3d"} Nov 22 07:39:28 crc kubenswrapper[4853]: I1122 07:39:28.272313 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"4a68a565-fd46-4cac-a300-2e7489e20c4c","Type":"ContainerStarted","Data":"45950804031e15dd25da4eacd8d3f56c23bade5208088b1ea3a31c924e27b0dc"} Nov 22 07:39:28 crc kubenswrapper[4853]: I1122 07:39:28.276684 4853 generic.go:334] "Generic (PLEG): container finished" podID="e3ead9cf-10b5-45ec-82e2-9083c221e150" containerID="c56fc9de6928234567717db0380d59af58729a7f274dd6902ae9a730f8f6c053" exitCode=0 Nov 22 07:39:28 crc kubenswrapper[4853]: I1122 07:39:28.278686 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g6crm" event={"ID":"e3ead9cf-10b5-45ec-82e2-9083c221e150","Type":"ContainerDied","Data":"c56fc9de6928234567717db0380d59af58729a7f274dd6902ae9a730f8f6c053"} Nov 22 07:39:28 crc kubenswrapper[4853]: I1122 07:39:28.278738 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g6crm" event={"ID":"e3ead9cf-10b5-45ec-82e2-9083c221e150","Type":"ContainerStarted","Data":"b0b55301a3adc57fe15007ad7afc608ba662eace2fda2b69d8fd857f98a32118"} Nov 22 07:39:28 crc kubenswrapper[4853]: I1122 07:39:28.304035 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=49.15424226 podStartE2EDuration="3m24.304003772s" podCreationTimestamp="2025-11-22 07:36:04 +0000 UTC" firstStartedPulling="2025-11-22 07:36:52.166440749 +0000 UTC m=+1611.007063375" lastFinishedPulling="2025-11-22 07:39:27.316202261 +0000 UTC m=+1766.156824887" observedRunningTime="2025-11-22 07:39:28.299853882 +0000 UTC m=+1767.140476518" watchObservedRunningTime="2025-11-22 07:39:28.304003772 +0000 UTC m=+1767.144626398" Nov 22 07:39:28 crc kubenswrapper[4853]: I1122 07:39:28.838018 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-78e4-account-create-7mqjx"] Nov 22 07:39:28 crc kubenswrapper[4853]: I1122 07:39:28.998548 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Nov 22 07:39:29 crc kubenswrapper[4853]: I1122 07:39:29.131167 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" 
Nov 22 07:39:29 crc kubenswrapper[4853]: I1122 07:39:29.317977 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-78e4-account-create-7mqjx" event={"ID":"98f11ce6-a3d1-43d6-b94b-36c0b37e1959","Type":"ContainerStarted","Data":"ba2ee58cccfe4bfff8eecc7c72c2a0c42def80d00bd4c58d443d1db0f2af54dd"} Nov 22 07:39:29 crc kubenswrapper[4853]: I1122 07:39:29.318972 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-78e4-account-create-7mqjx" event={"ID":"98f11ce6-a3d1-43d6-b94b-36c0b37e1959","Type":"ContainerStarted","Data":"38d70bf75cbb681dcd2ff89ac4b92920d4ce3ceb5a64153ca0081f5f632c2b32"} Nov 22 07:39:29 crc kubenswrapper[4853]: I1122 07:39:29.386549 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-78e4-account-create-7mqjx" podStartSLOduration=2.386507701 podStartE2EDuration="2.386507701s" podCreationTimestamp="2025-11-22 07:39:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:39:29.33964348 +0000 UTC m=+1768.180266116" watchObservedRunningTime="2025-11-22 07:39:29.386507701 +0000 UTC m=+1768.227130337" Nov 22 07:39:30 crc kubenswrapper[4853]: I1122 07:39:30.341702 4853 generic.go:334] "Generic (PLEG): container finished" podID="e3ead9cf-10b5-45ec-82e2-9083c221e150" containerID="9514c7a722eaeaf5b2f0650875dd3e644de6deee2140bdd8adc5cd8724902bf8" exitCode=0 Nov 22 07:39:30 crc kubenswrapper[4853]: I1122 07:39:30.341922 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g6crm" event={"ID":"e3ead9cf-10b5-45ec-82e2-9083c221e150","Type":"ContainerDied","Data":"9514c7a722eaeaf5b2f0650875dd3e644de6deee2140bdd8adc5cd8724902bf8"} Nov 22 07:39:30 crc kubenswrapper[4853]: I1122 07:39:30.352214 4853 generic.go:334] "Generic (PLEG): container finished" podID="98f11ce6-a3d1-43d6-b94b-36c0b37e1959" containerID="ba2ee58cccfe4bfff8eecc7c72c2a0c42def80d00bd4c58d443d1db0f2af54dd" exitCode=0 Nov 22 07:39:30 crc kubenswrapper[4853]: I1122 07:39:30.352291 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-78e4-account-create-7mqjx" event={"ID":"98f11ce6-a3d1-43d6-b94b-36c0b37e1959","Type":"ContainerDied","Data":"ba2ee58cccfe4bfff8eecc7c72c2a0c42def80d00bd4c58d443d1db0f2af54dd"} Nov 22 07:39:31 crc kubenswrapper[4853]: I1122 07:39:31.262937 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Nov 22 07:39:31 crc kubenswrapper[4853]: I1122 07:39:31.375013 4853 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-tv8h9" podUID="cae818e5-34d5-43c7-95af-e82e21309758" containerName="registry-server" probeResult="failure" output=< Nov 22 07:39:31 crc kubenswrapper[4853]: timeout: failed to connect service ":50051" within 1s Nov 22 07:39:31 crc kubenswrapper[4853]: > Nov 22 07:39:31 crc kubenswrapper[4853]: I1122 07:39:31.742303 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-78e4-account-create-7mqjx" Nov 22 07:39:31 crc kubenswrapper[4853]: I1122 07:39:31.794075 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/98f11ce6-a3d1-43d6-b94b-36c0b37e1959-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "98f11ce6-a3d1-43d6-b94b-36c0b37e1959" (UID: "98f11ce6-a3d1-43d6-b94b-36c0b37e1959"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:39:31 crc kubenswrapper[4853]: I1122 07:39:31.793030 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/98f11ce6-a3d1-43d6-b94b-36c0b37e1959-operator-scripts\") pod \"98f11ce6-a3d1-43d6-b94b-36c0b37e1959\" (UID: \"98f11ce6-a3d1-43d6-b94b-36c0b37e1959\") " Nov 22 07:39:31 crc kubenswrapper[4853]: I1122 07:39:31.794326 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qn2jh\" (UniqueName: \"kubernetes.io/projected/98f11ce6-a3d1-43d6-b94b-36c0b37e1959-kube-api-access-qn2jh\") pod \"98f11ce6-a3d1-43d6-b94b-36c0b37e1959\" (UID: \"98f11ce6-a3d1-43d6-b94b-36c0b37e1959\") " Nov 22 07:39:31 crc kubenswrapper[4853]: I1122 07:39:31.798043 4853 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/98f11ce6-a3d1-43d6-b94b-36c0b37e1959-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 07:39:31 crc kubenswrapper[4853]: I1122 07:39:31.805823 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98f11ce6-a3d1-43d6-b94b-36c0b37e1959-kube-api-access-qn2jh" (OuterVolumeSpecName: "kube-api-access-qn2jh") pod "98f11ce6-a3d1-43d6-b94b-36c0b37e1959" (UID: "98f11ce6-a3d1-43d6-b94b-36c0b37e1959"). InnerVolumeSpecName "kube-api-access-qn2jh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:39:31 crc kubenswrapper[4853]: I1122 07:39:31.889959 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-jc942"] Nov 22 07:39:31 crc kubenswrapper[4853]: E1122 07:39:31.890909 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98f11ce6-a3d1-43d6-b94b-36c0b37e1959" containerName="mariadb-account-create" Nov 22 07:39:31 crc kubenswrapper[4853]: I1122 07:39:31.890988 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="98f11ce6-a3d1-43d6-b94b-36c0b37e1959" containerName="mariadb-account-create" Nov 22 07:39:31 crc kubenswrapper[4853]: I1122 07:39:31.891319 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="98f11ce6-a3d1-43d6-b94b-36c0b37e1959" containerName="mariadb-account-create" Nov 22 07:39:31 crc kubenswrapper[4853]: I1122 07:39:31.892260 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-jc942" Nov 22 07:39:31 crc kubenswrapper[4853]: I1122 07:39:31.903242 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qn2jh\" (UniqueName: \"kubernetes.io/projected/98f11ce6-a3d1-43d6-b94b-36c0b37e1959-kube-api-access-qn2jh\") on node \"crc\" DevicePath \"\"" Nov 22 07:39:31 crc kubenswrapper[4853]: I1122 07:39:31.903399 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-jc942"] Nov 22 07:39:31 crc kubenswrapper[4853]: I1122 07:39:31.996925 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-b744-account-create-frzkl"] Nov 22 07:39:32 crc kubenswrapper[4853]: I1122 07:39:32.001015 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-b744-account-create-frzkl" Nov 22 07:39:32 crc kubenswrapper[4853]: I1122 07:39:32.006428 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4lr28\" (UniqueName: \"kubernetes.io/projected/e59d32a8-a318-40f1-9cfe-f10d7d2f31cb-kube-api-access-4lr28\") pod \"keystone-db-create-jc942\" (UID: \"e59d32a8-a318-40f1-9cfe-f10d7d2f31cb\") " pod="openstack/keystone-db-create-jc942" Nov 22 07:39:32 crc kubenswrapper[4853]: I1122 07:39:32.006901 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e59d32a8-a318-40f1-9cfe-f10d7d2f31cb-operator-scripts\") pod \"keystone-db-create-jc942\" (UID: \"e59d32a8-a318-40f1-9cfe-f10d7d2f31cb\") " pod="openstack/keystone-db-create-jc942" Nov 22 07:39:32 crc kubenswrapper[4853]: I1122 07:39:32.011426 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-b744-account-create-frzkl"] Nov 22 07:39:32 crc kubenswrapper[4853]: I1122 07:39:32.013131 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Nov 22 07:39:32 crc kubenswrapper[4853]: I1122 07:39:32.110423 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jpxwp\" (UniqueName: \"kubernetes.io/projected/26080cb8-1363-43b9-aec1-e84e5bd13de2-kube-api-access-jpxwp\") pod \"keystone-b744-account-create-frzkl\" (UID: \"26080cb8-1363-43b9-aec1-e84e5bd13de2\") " pod="openstack/keystone-b744-account-create-frzkl" Nov 22 07:39:32 crc kubenswrapper[4853]: I1122 07:39:32.110648 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/26080cb8-1363-43b9-aec1-e84e5bd13de2-operator-scripts\") pod \"keystone-b744-account-create-frzkl\" (UID: \"26080cb8-1363-43b9-aec1-e84e5bd13de2\") " pod="openstack/keystone-b744-account-create-frzkl" Nov 22 07:39:32 crc kubenswrapper[4853]: I1122 07:39:32.110683 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4lr28\" (UniqueName: \"kubernetes.io/projected/e59d32a8-a318-40f1-9cfe-f10d7d2f31cb-kube-api-access-4lr28\") pod \"keystone-db-create-jc942\" (UID: \"e59d32a8-a318-40f1-9cfe-f10d7d2f31cb\") " pod="openstack/keystone-db-create-jc942" Nov 22 07:39:32 crc kubenswrapper[4853]: I1122 07:39:32.110814 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e59d32a8-a318-40f1-9cfe-f10d7d2f31cb-operator-scripts\") pod \"keystone-db-create-jc942\" (UID: \"e59d32a8-a318-40f1-9cfe-f10d7d2f31cb\") " pod="openstack/keystone-db-create-jc942" Nov 22 07:39:32 crc kubenswrapper[4853]: I1122 07:39:32.112162 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e59d32a8-a318-40f1-9cfe-f10d7d2f31cb-operator-scripts\") pod \"keystone-db-create-jc942\" (UID: \"e59d32a8-a318-40f1-9cfe-f10d7d2f31cb\") " pod="openstack/keystone-db-create-jc942" Nov 22 07:39:32 crc kubenswrapper[4853]: I1122 07:39:32.134291 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4lr28\" (UniqueName: \"kubernetes.io/projected/e59d32a8-a318-40f1-9cfe-f10d7d2f31cb-kube-api-access-4lr28\") pod \"keystone-db-create-jc942\" (UID: 
\"e59d32a8-a318-40f1-9cfe-f10d7d2f31cb\") " pod="openstack/keystone-db-create-jc942" Nov 22 07:39:32 crc kubenswrapper[4853]: I1122 07:39:32.185162 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-r5xl6"] Nov 22 07:39:32 crc kubenswrapper[4853]: I1122 07:39:32.187440 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-r5xl6" Nov 22 07:39:32 crc kubenswrapper[4853]: I1122 07:39:32.215868 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rnmj\" (UniqueName: \"kubernetes.io/projected/5cdfe3e8-bc06-4691-86c0-4e409315cdf9-kube-api-access-2rnmj\") pod \"placement-db-create-r5xl6\" (UID: \"5cdfe3e8-bc06-4691-86c0-4e409315cdf9\") " pod="openstack/placement-db-create-r5xl6" Nov 22 07:39:32 crc kubenswrapper[4853]: I1122 07:39:32.216264 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/26080cb8-1363-43b9-aec1-e84e5bd13de2-operator-scripts\") pod \"keystone-b744-account-create-frzkl\" (UID: \"26080cb8-1363-43b9-aec1-e84e5bd13de2\") " pod="openstack/keystone-b744-account-create-frzkl" Nov 22 07:39:32 crc kubenswrapper[4853]: I1122 07:39:32.216521 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-jc942" Nov 22 07:39:32 crc kubenswrapper[4853]: I1122 07:39:32.216668 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5cdfe3e8-bc06-4691-86c0-4e409315cdf9-operator-scripts\") pod \"placement-db-create-r5xl6\" (UID: \"5cdfe3e8-bc06-4691-86c0-4e409315cdf9\") " pod="openstack/placement-db-create-r5xl6" Nov 22 07:39:32 crc kubenswrapper[4853]: I1122 07:39:32.216988 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jpxwp\" (UniqueName: \"kubernetes.io/projected/26080cb8-1363-43b9-aec1-e84e5bd13de2-kube-api-access-jpxwp\") pod \"keystone-b744-account-create-frzkl\" (UID: \"26080cb8-1363-43b9-aec1-e84e5bd13de2\") " pod="openstack/keystone-b744-account-create-frzkl" Nov 22 07:39:32 crc kubenswrapper[4853]: I1122 07:39:32.218089 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/26080cb8-1363-43b9-aec1-e84e5bd13de2-operator-scripts\") pod \"keystone-b744-account-create-frzkl\" (UID: \"26080cb8-1363-43b9-aec1-e84e5bd13de2\") " pod="openstack/keystone-b744-account-create-frzkl" Nov 22 07:39:32 crc kubenswrapper[4853]: I1122 07:39:32.219143 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-r5xl6"] Nov 22 07:39:32 crc kubenswrapper[4853]: I1122 07:39:32.241129 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jpxwp\" (UniqueName: \"kubernetes.io/projected/26080cb8-1363-43b9-aec1-e84e5bd13de2-kube-api-access-jpxwp\") pod \"keystone-b744-account-create-frzkl\" (UID: \"26080cb8-1363-43b9-aec1-e84e5bd13de2\") " pod="openstack/keystone-b744-account-create-frzkl" Nov 22 07:39:32 crc kubenswrapper[4853]: I1122 07:39:32.310036 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-2ac2-account-create-w5tng"] Nov 22 07:39:32 crc kubenswrapper[4853]: I1122 07:39:32.312217 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-2ac2-account-create-w5tng" Nov 22 07:39:32 crc kubenswrapper[4853]: I1122 07:39:32.316109 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Nov 22 07:39:32 crc kubenswrapper[4853]: I1122 07:39:32.321425 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5cdfe3e8-bc06-4691-86c0-4e409315cdf9-operator-scripts\") pod \"placement-db-create-r5xl6\" (UID: \"5cdfe3e8-bc06-4691-86c0-4e409315cdf9\") " pod="openstack/placement-db-create-r5xl6" Nov 22 07:39:32 crc kubenswrapper[4853]: I1122 07:39:32.321712 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2rnmj\" (UniqueName: \"kubernetes.io/projected/5cdfe3e8-bc06-4691-86c0-4e409315cdf9-kube-api-access-2rnmj\") pod \"placement-db-create-r5xl6\" (UID: \"5cdfe3e8-bc06-4691-86c0-4e409315cdf9\") " pod="openstack/placement-db-create-r5xl6" Nov 22 07:39:32 crc kubenswrapper[4853]: I1122 07:39:32.323253 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5cdfe3e8-bc06-4691-86c0-4e409315cdf9-operator-scripts\") pod \"placement-db-create-r5xl6\" (UID: \"5cdfe3e8-bc06-4691-86c0-4e409315cdf9\") " pod="openstack/placement-db-create-r5xl6" Nov 22 07:39:32 crc kubenswrapper[4853]: I1122 07:39:32.324002 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-2ac2-account-create-w5tng"] Nov 22 07:39:32 crc kubenswrapper[4853]: I1122 07:39:32.335326 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-b744-account-create-frzkl" Nov 22 07:39:32 crc kubenswrapper[4853]: I1122 07:39:32.425128 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n24hb\" (UniqueName: \"kubernetes.io/projected/c31a521c-9c4a-40fd-b320-4ebb0ff0fa23-kube-api-access-n24hb\") pod \"placement-2ac2-account-create-w5tng\" (UID: \"c31a521c-9c4a-40fd-b320-4ebb0ff0fa23\") " pod="openstack/placement-2ac2-account-create-w5tng" Nov 22 07:39:32 crc kubenswrapper[4853]: I1122 07:39:32.426993 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-78e4-account-create-7mqjx" event={"ID":"98f11ce6-a3d1-43d6-b94b-36c0b37e1959","Type":"ContainerDied","Data":"38d70bf75cbb681dcd2ff89ac4b92920d4ce3ceb5a64153ca0081f5f632c2b32"} Nov 22 07:39:32 crc kubenswrapper[4853]: I1122 07:39:32.427041 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-78e4-account-create-7mqjx" Nov 22 07:39:32 crc kubenswrapper[4853]: I1122 07:39:32.427055 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="38d70bf75cbb681dcd2ff89ac4b92920d4ce3ceb5a64153ca0081f5f632c2b32" Nov 22 07:39:32 crc kubenswrapper[4853]: I1122 07:39:32.447463 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2rnmj\" (UniqueName: \"kubernetes.io/projected/5cdfe3e8-bc06-4691-86c0-4e409315cdf9-kube-api-access-2rnmj\") pod \"placement-db-create-r5xl6\" (UID: \"5cdfe3e8-bc06-4691-86c0-4e409315cdf9\") " pod="openstack/placement-db-create-r5xl6" Nov 22 07:39:32 crc kubenswrapper[4853]: I1122 07:39:32.449701 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c31a521c-9c4a-40fd-b320-4ebb0ff0fa23-operator-scripts\") pod \"placement-2ac2-account-create-w5tng\" (UID: \"c31a521c-9c4a-40fd-b320-4ebb0ff0fa23\") " pod="openstack/placement-2ac2-account-create-w5tng" Nov 22 07:39:32 crc kubenswrapper[4853]: I1122 07:39:32.538731 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-r5xl6" Nov 22 07:39:32 crc kubenswrapper[4853]: I1122 07:39:32.552831 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n24hb\" (UniqueName: \"kubernetes.io/projected/c31a521c-9c4a-40fd-b320-4ebb0ff0fa23-kube-api-access-n24hb\") pod \"placement-2ac2-account-create-w5tng\" (UID: \"c31a521c-9c4a-40fd-b320-4ebb0ff0fa23\") " pod="openstack/placement-2ac2-account-create-w5tng" Nov 22 07:39:32 crc kubenswrapper[4853]: I1122 07:39:32.553132 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c31a521c-9c4a-40fd-b320-4ebb0ff0fa23-operator-scripts\") pod \"placement-2ac2-account-create-w5tng\" (UID: \"c31a521c-9c4a-40fd-b320-4ebb0ff0fa23\") " pod="openstack/placement-2ac2-account-create-w5tng" Nov 22 07:39:32 crc kubenswrapper[4853]: I1122 07:39:32.554546 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c31a521c-9c4a-40fd-b320-4ebb0ff0fa23-operator-scripts\") pod \"placement-2ac2-account-create-w5tng\" (UID: \"c31a521c-9c4a-40fd-b320-4ebb0ff0fa23\") " pod="openstack/placement-2ac2-account-create-w5tng" Nov 22 07:39:32 crc kubenswrapper[4853]: I1122 07:39:32.581541 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n24hb\" (UniqueName: \"kubernetes.io/projected/c31a521c-9c4a-40fd-b320-4ebb0ff0fa23-kube-api-access-n24hb\") pod \"placement-2ac2-account-create-w5tng\" (UID: \"c31a521c-9c4a-40fd-b320-4ebb0ff0fa23\") " pod="openstack/placement-2ac2-account-create-w5tng" Nov 22 07:39:32 crc kubenswrapper[4853]: I1122 07:39:32.767269 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-2ac2-account-create-w5tng" Nov 22 07:39:33 crc kubenswrapper[4853]: I1122 07:39:32.973145 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-b744-account-create-frzkl"] Nov 22 07:39:33 crc kubenswrapper[4853]: I1122 07:39:32.988444 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-jc942"] Nov 22 07:39:33 crc kubenswrapper[4853]: I1122 07:39:33.477691 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g6crm" event={"ID":"e3ead9cf-10b5-45ec-82e2-9083c221e150","Type":"ContainerStarted","Data":"db9973b3e3d2b327ce59afd23637406d3b7d1f71698fbbb6b75902f617d0ec61"} Nov 22 07:39:33 crc kubenswrapper[4853]: I1122 07:39:33.482219 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-jc942" event={"ID":"e59d32a8-a318-40f1-9cfe-f10d7d2f31cb","Type":"ContainerStarted","Data":"c56bc0f7a3e01da1e126685687068ae95a7fa6233935a548510e48150204a7f1"} Nov 22 07:39:33 crc kubenswrapper[4853]: I1122 07:39:33.488863 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-b744-account-create-frzkl" event={"ID":"26080cb8-1363-43b9-aec1-e84e5bd13de2","Type":"ContainerStarted","Data":"0c5985e0d9cacb66de02c82c7902d79f05025df1c397660c39e6d493599897e8"} Nov 22 07:39:33 crc kubenswrapper[4853]: I1122 07:39:33.488955 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-b744-account-create-frzkl" event={"ID":"26080cb8-1363-43b9-aec1-e84e5bd13de2","Type":"ContainerStarted","Data":"27bdd5779298bc6af02ca011917570345c0d87f55a02ebdfa756be3d3b14d2bc"} Nov 22 07:39:33 crc kubenswrapper[4853]: I1122 07:39:33.500017 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"4a68a565-fd46-4cac-a300-2e7489e20c4c","Type":"ContainerStarted","Data":"effcfe6aafc768bee59424fad73ac80a0b8ad1d16bdd441702d8c44c30a06516"} Nov 22 07:39:33 crc kubenswrapper[4853]: I1122 07:39:33.500086 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"4a68a565-fd46-4cac-a300-2e7489e20c4c","Type":"ContainerStarted","Data":"6e9fb9200622f25e36fee9f05a72d1fae4b98a641789e6846003ac11a13140e7"} Nov 22 07:39:33 crc kubenswrapper[4853]: I1122 07:39:33.500828 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Nov 22 07:39:33 crc kubenswrapper[4853]: I1122 07:39:33.515109 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-g6crm" podStartSLOduration=7.650701347 podStartE2EDuration="11.515080504s" podCreationTimestamp="2025-11-22 07:39:22 +0000 UTC" firstStartedPulling="2025-11-22 07:39:28.279938234 +0000 UTC m=+1767.120560860" lastFinishedPulling="2025-11-22 07:39:32.144317391 +0000 UTC m=+1770.984940017" observedRunningTime="2025-11-22 07:39:33.513186525 +0000 UTC m=+1772.353809161" watchObservedRunningTime="2025-11-22 07:39:33.515080504 +0000 UTC m=+1772.355703130" Nov 22 07:39:33 crc kubenswrapper[4853]: I1122 07:39:33.545589 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=5.412208251 podStartE2EDuration="9.545542471s" podCreationTimestamp="2025-11-22 07:39:24 +0000 UTC" firstStartedPulling="2025-11-22 07:39:27.940517525 +0000 UTC m=+1766.781140161" lastFinishedPulling="2025-11-22 07:39:32.073851755 +0000 UTC m=+1770.914474381" 
observedRunningTime="2025-11-22 07:39:33.537391535 +0000 UTC m=+1772.378014161" watchObservedRunningTime="2025-11-22 07:39:33.545542471 +0000 UTC m=+1772.386165098" Nov 22 07:39:33 crc kubenswrapper[4853]: I1122 07:39:33.572432 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-b744-account-create-frzkl" podStartSLOduration=2.572396443 podStartE2EDuration="2.572396443s" podCreationTimestamp="2025-11-22 07:39:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:39:33.558635178 +0000 UTC m=+1772.399257814" watchObservedRunningTime="2025-11-22 07:39:33.572396443 +0000 UTC m=+1772.413019089" Nov 22 07:39:33 crc kubenswrapper[4853]: I1122 07:39:33.594105 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-create-jc942" podStartSLOduration=2.594068707 podStartE2EDuration="2.594068707s" podCreationTimestamp="2025-11-22 07:39:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:39:33.576351577 +0000 UTC m=+1772.416974203" watchObservedRunningTime="2025-11-22 07:39:33.594068707 +0000 UTC m=+1772.434691333" Nov 22 07:39:33 crc kubenswrapper[4853]: W1122 07:39:33.597810 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5cdfe3e8_bc06_4691_86c0_4e409315cdf9.slice/crio-0241c849888d5693014525e5305bfdc119ffa14df0c6860801c387babc31c486 WatchSource:0}: Error finding container 0241c849888d5693014525e5305bfdc119ffa14df0c6860801c387babc31c486: Status 404 returned error can't find the container with id 0241c849888d5693014525e5305bfdc119ffa14df0c6860801c387babc31c486 Nov 22 07:39:33 crc kubenswrapper[4853]: I1122 07:39:33.616007 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-r5xl6"] Nov 22 07:39:33 crc kubenswrapper[4853]: I1122 07:39:33.666468 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-2ac2-account-create-w5tng"] Nov 22 07:39:34 crc kubenswrapper[4853]: I1122 07:39:34.023128 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-k99wz" Nov 22 07:39:34 crc kubenswrapper[4853]: I1122 07:39:34.308280 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-95nmd"] Nov 22 07:39:34 crc kubenswrapper[4853]: I1122 07:39:34.310308 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-95nmd" Nov 22 07:39:34 crc kubenswrapper[4853]: I1122 07:39:34.329077 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-95nmd"] Nov 22 07:39:34 crc kubenswrapper[4853]: I1122 07:39:34.421869 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hd67l\" (UniqueName: \"kubernetes.io/projected/f4476ce0-7ffe-489f-a7b8-8375a7980bfb-kube-api-access-hd67l\") pod \"mysqld-exporter-openstack-db-create-95nmd\" (UID: \"f4476ce0-7ffe-489f-a7b8-8375a7980bfb\") " pod="openstack/mysqld-exporter-openstack-db-create-95nmd" Nov 22 07:39:34 crc kubenswrapper[4853]: I1122 07:39:34.422020 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f4476ce0-7ffe-489f-a7b8-8375a7980bfb-operator-scripts\") pod \"mysqld-exporter-openstack-db-create-95nmd\" (UID: \"f4476ce0-7ffe-489f-a7b8-8375a7980bfb\") " pod="openstack/mysqld-exporter-openstack-db-create-95nmd" Nov 22 07:39:34 crc kubenswrapper[4853]: I1122 07:39:34.519883 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-4d23-account-create-4kkcm"] Nov 22 07:39:34 crc kubenswrapper[4853]: I1122 07:39:34.523114 4853 generic.go:334] "Generic (PLEG): container finished" podID="e59d32a8-a318-40f1-9cfe-f10d7d2f31cb" containerID="858d25750ba3ff9ba2a3753104d46f1cc3dc01dec156fc976aa58abfa2866e57" exitCode=0 Nov 22 07:39:34 crc kubenswrapper[4853]: I1122 07:39:34.523170 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-4d23-account-create-4kkcm" Nov 22 07:39:34 crc kubenswrapper[4853]: I1122 07:39:34.523205 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-jc942" event={"ID":"e59d32a8-a318-40f1-9cfe-f10d7d2f31cb","Type":"ContainerDied","Data":"858d25750ba3ff9ba2a3753104d46f1cc3dc01dec156fc976aa58abfa2866e57"} Nov 22 07:39:34 crc kubenswrapper[4853]: I1122 07:39:34.526030 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-openstack-db-secret" Nov 22 07:39:34 crc kubenswrapper[4853]: I1122 07:39:34.526402 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f4476ce0-7ffe-489f-a7b8-8375a7980bfb-operator-scripts\") pod \"mysqld-exporter-openstack-db-create-95nmd\" (UID: \"f4476ce0-7ffe-489f-a7b8-8375a7980bfb\") " pod="openstack/mysqld-exporter-openstack-db-create-95nmd" Nov 22 07:39:34 crc kubenswrapper[4853]: I1122 07:39:34.526956 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hd67l\" (UniqueName: \"kubernetes.io/projected/f4476ce0-7ffe-489f-a7b8-8375a7980bfb-kube-api-access-hd67l\") pod \"mysqld-exporter-openstack-db-create-95nmd\" (UID: \"f4476ce0-7ffe-489f-a7b8-8375a7980bfb\") " pod="openstack/mysqld-exporter-openstack-db-create-95nmd" Nov 22 07:39:34 crc kubenswrapper[4853]: I1122 07:39:34.527665 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f4476ce0-7ffe-489f-a7b8-8375a7980bfb-operator-scripts\") pod \"mysqld-exporter-openstack-db-create-95nmd\" (UID: \"f4476ce0-7ffe-489f-a7b8-8375a7980bfb\") " pod="openstack/mysqld-exporter-openstack-db-create-95nmd" Nov 22 07:39:34 crc 
kubenswrapper[4853]: I1122 07:39:34.530285 4853 generic.go:334] "Generic (PLEG): container finished" podID="c31a521c-9c4a-40fd-b320-4ebb0ff0fa23" containerID="6b38466e08bc82ab301d3bcd89270010578572c8ce2a09a55a55fe14de1dfcd6" exitCode=0 Nov 22 07:39:34 crc kubenswrapper[4853]: I1122 07:39:34.530504 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-2ac2-account-create-w5tng" event={"ID":"c31a521c-9c4a-40fd-b320-4ebb0ff0fa23","Type":"ContainerDied","Data":"6b38466e08bc82ab301d3bcd89270010578572c8ce2a09a55a55fe14de1dfcd6"} Nov 22 07:39:34 crc kubenswrapper[4853]: I1122 07:39:34.531171 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-2ac2-account-create-w5tng" event={"ID":"c31a521c-9c4a-40fd-b320-4ebb0ff0fa23","Type":"ContainerStarted","Data":"029c263fc9248d7273d71bb5541e3e6306c790662eb89f9042618820b57cad50"} Nov 22 07:39:34 crc kubenswrapper[4853]: I1122 07:39:34.537543 4853 generic.go:334] "Generic (PLEG): container finished" podID="26080cb8-1363-43b9-aec1-e84e5bd13de2" containerID="0c5985e0d9cacb66de02c82c7902d79f05025df1c397660c39e6d493599897e8" exitCode=0 Nov 22 07:39:34 crc kubenswrapper[4853]: I1122 07:39:34.537629 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-b744-account-create-frzkl" event={"ID":"26080cb8-1363-43b9-aec1-e84e5bd13de2","Type":"ContainerDied","Data":"0c5985e0d9cacb66de02c82c7902d79f05025df1c397660c39e6d493599897e8"} Nov 22 07:39:34 crc kubenswrapper[4853]: I1122 07:39:34.537718 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-4d23-account-create-4kkcm"] Nov 22 07:39:34 crc kubenswrapper[4853]: I1122 07:39:34.541823 4853 generic.go:334] "Generic (PLEG): container finished" podID="5cdfe3e8-bc06-4691-86c0-4e409315cdf9" containerID="c45a1959c0e414dafd26f5e73e4e601d528c37af4e01192bfb61c2212b349250" exitCode=0 Nov 22 07:39:34 crc kubenswrapper[4853]: I1122 07:39:34.541945 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-r5xl6" event={"ID":"5cdfe3e8-bc06-4691-86c0-4e409315cdf9","Type":"ContainerDied","Data":"c45a1959c0e414dafd26f5e73e4e601d528c37af4e01192bfb61c2212b349250"} Nov 22 07:39:34 crc kubenswrapper[4853]: I1122 07:39:34.542016 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-r5xl6" event={"ID":"5cdfe3e8-bc06-4691-86c0-4e409315cdf9","Type":"ContainerStarted","Data":"0241c849888d5693014525e5305bfdc119ffa14df0c6860801c387babc31c486"} Nov 22 07:39:34 crc kubenswrapper[4853]: I1122 07:39:34.558384 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hd67l\" (UniqueName: \"kubernetes.io/projected/f4476ce0-7ffe-489f-a7b8-8375a7980bfb-kube-api-access-hd67l\") pod \"mysqld-exporter-openstack-db-create-95nmd\" (UID: \"f4476ce0-7ffe-489f-a7b8-8375a7980bfb\") " pod="openstack/mysqld-exporter-openstack-db-create-95nmd" Nov 22 07:39:34 crc kubenswrapper[4853]: I1122 07:39:34.629550 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/99f8fbb2-9f4f-48a1-bbe6-ae11d68fc2cc-operator-scripts\") pod \"mysqld-exporter-4d23-account-create-4kkcm\" (UID: \"99f8fbb2-9f4f-48a1-bbe6-ae11d68fc2cc\") " pod="openstack/mysqld-exporter-4d23-account-create-4kkcm" Nov 22 07:39:34 crc kubenswrapper[4853]: I1122 07:39:34.629694 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r2nx4\" 
(UniqueName: \"kubernetes.io/projected/99f8fbb2-9f4f-48a1-bbe6-ae11d68fc2cc-kube-api-access-r2nx4\") pod \"mysqld-exporter-4d23-account-create-4kkcm\" (UID: \"99f8fbb2-9f4f-48a1-bbe6-ae11d68fc2cc\") " pod="openstack/mysqld-exporter-4d23-account-create-4kkcm" Nov 22 07:39:34 crc kubenswrapper[4853]: I1122 07:39:34.674461 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-95nmd" Nov 22 07:39:34 crc kubenswrapper[4853]: I1122 07:39:34.733064 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r2nx4\" (UniqueName: \"kubernetes.io/projected/99f8fbb2-9f4f-48a1-bbe6-ae11d68fc2cc-kube-api-access-r2nx4\") pod \"mysqld-exporter-4d23-account-create-4kkcm\" (UID: \"99f8fbb2-9f4f-48a1-bbe6-ae11d68fc2cc\") " pod="openstack/mysqld-exporter-4d23-account-create-4kkcm" Nov 22 07:39:34 crc kubenswrapper[4853]: I1122 07:39:34.733307 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/99f8fbb2-9f4f-48a1-bbe6-ae11d68fc2cc-operator-scripts\") pod \"mysqld-exporter-4d23-account-create-4kkcm\" (UID: \"99f8fbb2-9f4f-48a1-bbe6-ae11d68fc2cc\") " pod="openstack/mysqld-exporter-4d23-account-create-4kkcm" Nov 22 07:39:34 crc kubenswrapper[4853]: I1122 07:39:34.734957 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/99f8fbb2-9f4f-48a1-bbe6-ae11d68fc2cc-operator-scripts\") pod \"mysqld-exporter-4d23-account-create-4kkcm\" (UID: \"99f8fbb2-9f4f-48a1-bbe6-ae11d68fc2cc\") " pod="openstack/mysqld-exporter-4d23-account-create-4kkcm" Nov 22 07:39:34 crc kubenswrapper[4853]: I1122 07:39:34.763711 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r2nx4\" (UniqueName: \"kubernetes.io/projected/99f8fbb2-9f4f-48a1-bbe6-ae11d68fc2cc-kube-api-access-r2nx4\") pod \"mysqld-exporter-4d23-account-create-4kkcm\" (UID: \"99f8fbb2-9f4f-48a1-bbe6-ae11d68fc2cc\") " pod="openstack/mysqld-exporter-4d23-account-create-4kkcm" Nov 22 07:39:34 crc kubenswrapper[4853]: I1122 07:39:34.851668 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-4d23-account-create-4kkcm" Nov 22 07:39:35 crc kubenswrapper[4853]: I1122 07:39:35.223333 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-95nmd"] Nov 22 07:39:35 crc kubenswrapper[4853]: W1122 07:39:35.394401 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod99f8fbb2_9f4f_48a1_bbe6_ae11d68fc2cc.slice/crio-a4e8b960283043f0d3d8967d67bba86479c71501121c4e58630cbefe626470a1 WatchSource:0}: Error finding container a4e8b960283043f0d3d8967d67bba86479c71501121c4e58630cbefe626470a1: Status 404 returned error can't find the container with id a4e8b960283043f0d3d8967d67bba86479c71501121c4e58630cbefe626470a1 Nov 22 07:39:35 crc kubenswrapper[4853]: I1122 07:39:35.400020 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-4d23-account-create-4kkcm"] Nov 22 07:39:35 crc kubenswrapper[4853]: I1122 07:39:35.557923 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-db-create-95nmd" event={"ID":"f4476ce0-7ffe-489f-a7b8-8375a7980bfb","Type":"ContainerStarted","Data":"30ab0495e2d9f69f354426b7dc389f20d3977878da8e9472df6281ab2eff70b7"} Nov 22 07:39:35 crc kubenswrapper[4853]: I1122 07:39:35.558440 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-db-create-95nmd" event={"ID":"f4476ce0-7ffe-489f-a7b8-8375a7980bfb","Type":"ContainerStarted","Data":"c74896b8741b2b89821d4b2df17017e33e2495fdc7f2cc5d0488eb66f79862e0"} Nov 22 07:39:35 crc kubenswrapper[4853]: I1122 07:39:35.559946 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-4d23-account-create-4kkcm" event={"ID":"99f8fbb2-9f4f-48a1-bbe6-ae11d68fc2cc","Type":"ContainerStarted","Data":"a4e8b960283043f0d3d8967d67bba86479c71501121c4e58630cbefe626470a1"} Nov 22 07:39:35 crc kubenswrapper[4853]: I1122 07:39:35.587595 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-openstack-db-create-95nmd" podStartSLOduration=1.587569563 podStartE2EDuration="1.587569563s" podCreationTimestamp="2025-11-22 07:39:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:39:35.575594046 +0000 UTC m=+1774.416216662" watchObservedRunningTime="2025-11-22 07:39:35.587569563 +0000 UTC m=+1774.428192189" Nov 22 07:39:36 crc kubenswrapper[4853]: I1122 07:39:36.109608 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-r5xl6" Nov 22 07:39:36 crc kubenswrapper[4853]: I1122 07:39:36.122913 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-jc942" Nov 22 07:39:36 crc kubenswrapper[4853]: I1122 07:39:36.178864 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2rnmj\" (UniqueName: \"kubernetes.io/projected/5cdfe3e8-bc06-4691-86c0-4e409315cdf9-kube-api-access-2rnmj\") pod \"5cdfe3e8-bc06-4691-86c0-4e409315cdf9\" (UID: \"5cdfe3e8-bc06-4691-86c0-4e409315cdf9\") " Nov 22 07:39:36 crc kubenswrapper[4853]: I1122 07:39:36.179724 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e59d32a8-a318-40f1-9cfe-f10d7d2f31cb-operator-scripts\") pod \"e59d32a8-a318-40f1-9cfe-f10d7d2f31cb\" (UID: \"e59d32a8-a318-40f1-9cfe-f10d7d2f31cb\") " Nov 22 07:39:36 crc kubenswrapper[4853]: I1122 07:39:36.179850 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4lr28\" (UniqueName: \"kubernetes.io/projected/e59d32a8-a318-40f1-9cfe-f10d7d2f31cb-kube-api-access-4lr28\") pod \"e59d32a8-a318-40f1-9cfe-f10d7d2f31cb\" (UID: \"e59d32a8-a318-40f1-9cfe-f10d7d2f31cb\") " Nov 22 07:39:36 crc kubenswrapper[4853]: I1122 07:39:36.180108 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5cdfe3e8-bc06-4691-86c0-4e409315cdf9-operator-scripts\") pod \"5cdfe3e8-bc06-4691-86c0-4e409315cdf9\" (UID: \"5cdfe3e8-bc06-4691-86c0-4e409315cdf9\") " Nov 22 07:39:36 crc kubenswrapper[4853]: I1122 07:39:36.180727 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e59d32a8-a318-40f1-9cfe-f10d7d2f31cb-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e59d32a8-a318-40f1-9cfe-f10d7d2f31cb" (UID: "e59d32a8-a318-40f1-9cfe-f10d7d2f31cb"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:39:36 crc kubenswrapper[4853]: I1122 07:39:36.182135 4853 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e59d32a8-a318-40f1-9cfe-f10d7d2f31cb-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 07:39:36 crc kubenswrapper[4853]: I1122 07:39:36.183263 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5cdfe3e8-bc06-4691-86c0-4e409315cdf9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5cdfe3e8-bc06-4691-86c0-4e409315cdf9" (UID: "5cdfe3e8-bc06-4691-86c0-4e409315cdf9"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:39:36 crc kubenswrapper[4853]: I1122 07:39:36.191564 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5cdfe3e8-bc06-4691-86c0-4e409315cdf9-kube-api-access-2rnmj" (OuterVolumeSpecName: "kube-api-access-2rnmj") pod "5cdfe3e8-bc06-4691-86c0-4e409315cdf9" (UID: "5cdfe3e8-bc06-4691-86c0-4e409315cdf9"). InnerVolumeSpecName "kube-api-access-2rnmj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:39:36 crc kubenswrapper[4853]: I1122 07:39:36.191953 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e59d32a8-a318-40f1-9cfe-f10d7d2f31cb-kube-api-access-4lr28" (OuterVolumeSpecName: "kube-api-access-4lr28") pod "e59d32a8-a318-40f1-9cfe-f10d7d2f31cb" (UID: "e59d32a8-a318-40f1-9cfe-f10d7d2f31cb"). 
InnerVolumeSpecName "kube-api-access-4lr28". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:39:36 crc kubenswrapper[4853]: E1122 07:39:36.225670 4853 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod99f8fbb2_9f4f_48a1_bbe6_ae11d68fc2cc.slice/crio-21963dd73268b70a39b031054ea8b79de2b654cb57d978be93e84267f53e0e9b.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod99f8fbb2_9f4f_48a1_bbe6_ae11d68fc2cc.slice/crio-conmon-21963dd73268b70a39b031054ea8b79de2b654cb57d978be93e84267f53e0e9b.scope\": RecentStats: unable to find data in memory cache]" Nov 22 07:39:36 crc kubenswrapper[4853]: I1122 07:39:36.262992 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Nov 22 07:39:36 crc kubenswrapper[4853]: I1122 07:39:36.267420 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-2ac2-account-create-w5tng" Nov 22 07:39:36 crc kubenswrapper[4853]: I1122 07:39:36.271090 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Nov 22 07:39:36 crc kubenswrapper[4853]: I1122 07:39:36.277971 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-b744-account-create-frzkl" Nov 22 07:39:36 crc kubenswrapper[4853]: I1122 07:39:36.289932 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4lr28\" (UniqueName: \"kubernetes.io/projected/e59d32a8-a318-40f1-9cfe-f10d7d2f31cb-kube-api-access-4lr28\") on node \"crc\" DevicePath \"\"" Nov 22 07:39:36 crc kubenswrapper[4853]: I1122 07:39:36.289978 4853 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5cdfe3e8-bc06-4691-86c0-4e409315cdf9-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 07:39:36 crc kubenswrapper[4853]: I1122 07:39:36.289989 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2rnmj\" (UniqueName: \"kubernetes.io/projected/5cdfe3e8-bc06-4691-86c0-4e409315cdf9-kube-api-access-2rnmj\") on node \"crc\" DevicePath \"\"" Nov 22 07:39:36 crc kubenswrapper[4853]: I1122 07:39:36.392132 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jpxwp\" (UniqueName: \"kubernetes.io/projected/26080cb8-1363-43b9-aec1-e84e5bd13de2-kube-api-access-jpxwp\") pod \"26080cb8-1363-43b9-aec1-e84e5bd13de2\" (UID: \"26080cb8-1363-43b9-aec1-e84e5bd13de2\") " Nov 22 07:39:36 crc kubenswrapper[4853]: I1122 07:39:36.392272 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/26080cb8-1363-43b9-aec1-e84e5bd13de2-operator-scripts\") pod \"26080cb8-1363-43b9-aec1-e84e5bd13de2\" (UID: \"26080cb8-1363-43b9-aec1-e84e5bd13de2\") " Nov 22 07:39:36 crc kubenswrapper[4853]: I1122 07:39:36.392458 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c31a521c-9c4a-40fd-b320-4ebb0ff0fa23-operator-scripts\") pod \"c31a521c-9c4a-40fd-b320-4ebb0ff0fa23\" (UID: \"c31a521c-9c4a-40fd-b320-4ebb0ff0fa23\") " Nov 22 07:39:36 crc kubenswrapper[4853]: I1122 07:39:36.392613 4853 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-n24hb\" (UniqueName: \"kubernetes.io/projected/c31a521c-9c4a-40fd-b320-4ebb0ff0fa23-kube-api-access-n24hb\") pod \"c31a521c-9c4a-40fd-b320-4ebb0ff0fa23\" (UID: \"c31a521c-9c4a-40fd-b320-4ebb0ff0fa23\") " Nov 22 07:39:36 crc kubenswrapper[4853]: I1122 07:39:36.393322 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/26080cb8-1363-43b9-aec1-e84e5bd13de2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "26080cb8-1363-43b9-aec1-e84e5bd13de2" (UID: "26080cb8-1363-43b9-aec1-e84e5bd13de2"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:39:36 crc kubenswrapper[4853]: I1122 07:39:36.393559 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c31a521c-9c4a-40fd-b320-4ebb0ff0fa23-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c31a521c-9c4a-40fd-b320-4ebb0ff0fa23" (UID: "c31a521c-9c4a-40fd-b320-4ebb0ff0fa23"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:39:36 crc kubenswrapper[4853]: I1122 07:39:36.394484 4853 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c31a521c-9c4a-40fd-b320-4ebb0ff0fa23-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 07:39:36 crc kubenswrapper[4853]: I1122 07:39:36.394523 4853 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/26080cb8-1363-43b9-aec1-e84e5bd13de2-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 07:39:36 crc kubenswrapper[4853]: I1122 07:39:36.398863 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26080cb8-1363-43b9-aec1-e84e5bd13de2-kube-api-access-jpxwp" (OuterVolumeSpecName: "kube-api-access-jpxwp") pod "26080cb8-1363-43b9-aec1-e84e5bd13de2" (UID: "26080cb8-1363-43b9-aec1-e84e5bd13de2"). InnerVolumeSpecName "kube-api-access-jpxwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:39:36 crc kubenswrapper[4853]: I1122 07:39:36.399276 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c31a521c-9c4a-40fd-b320-4ebb0ff0fa23-kube-api-access-n24hb" (OuterVolumeSpecName: "kube-api-access-n24hb") pod "c31a521c-9c4a-40fd-b320-4ebb0ff0fa23" (UID: "c31a521c-9c4a-40fd-b320-4ebb0ff0fa23"). InnerVolumeSpecName "kube-api-access-n24hb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:39:36 crc kubenswrapper[4853]: I1122 07:39:36.496792 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n24hb\" (UniqueName: \"kubernetes.io/projected/c31a521c-9c4a-40fd-b320-4ebb0ff0fa23-kube-api-access-n24hb\") on node \"crc\" DevicePath \"\"" Nov 22 07:39:36 crc kubenswrapper[4853]: I1122 07:39:36.496891 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jpxwp\" (UniqueName: \"kubernetes.io/projected/26080cb8-1363-43b9-aec1-e84e5bd13de2-kube-api-access-jpxwp\") on node \"crc\" DevicePath \"\"" Nov 22 07:39:36 crc kubenswrapper[4853]: I1122 07:39:36.572326 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-jc942" Nov 22 07:39:36 crc kubenswrapper[4853]: I1122 07:39:36.572550 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-jc942" event={"ID":"e59d32a8-a318-40f1-9cfe-f10d7d2f31cb","Type":"ContainerDied","Data":"c56bc0f7a3e01da1e126685687068ae95a7fa6233935a548510e48150204a7f1"} Nov 22 07:39:36 crc kubenswrapper[4853]: I1122 07:39:36.573004 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c56bc0f7a3e01da1e126685687068ae95a7fa6233935a548510e48150204a7f1" Nov 22 07:39:36 crc kubenswrapper[4853]: I1122 07:39:36.574445 4853 generic.go:334] "Generic (PLEG): container finished" podID="f4476ce0-7ffe-489f-a7b8-8375a7980bfb" containerID="30ab0495e2d9f69f354426b7dc389f20d3977878da8e9472df6281ab2eff70b7" exitCode=0 Nov 22 07:39:36 crc kubenswrapper[4853]: I1122 07:39:36.574551 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-db-create-95nmd" event={"ID":"f4476ce0-7ffe-489f-a7b8-8375a7980bfb","Type":"ContainerDied","Data":"30ab0495e2d9f69f354426b7dc389f20d3977878da8e9472df6281ab2eff70b7"} Nov 22 07:39:36 crc kubenswrapper[4853]: I1122 07:39:36.599529 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-2ac2-account-create-w5tng" Nov 22 07:39:36 crc kubenswrapper[4853]: I1122 07:39:36.599638 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-2ac2-account-create-w5tng" event={"ID":"c31a521c-9c4a-40fd-b320-4ebb0ff0fa23","Type":"ContainerDied","Data":"029c263fc9248d7273d71bb5541e3e6306c790662eb89f9042618820b57cad50"} Nov 22 07:39:36 crc kubenswrapper[4853]: I1122 07:39:36.599723 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="029c263fc9248d7273d71bb5541e3e6306c790662eb89f9042618820b57cad50" Nov 22 07:39:36 crc kubenswrapper[4853]: I1122 07:39:36.606699 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-b744-account-create-frzkl" event={"ID":"26080cb8-1363-43b9-aec1-e84e5bd13de2","Type":"ContainerDied","Data":"27bdd5779298bc6af02ca011917570345c0d87f55a02ebdfa756be3d3b14d2bc"} Nov 22 07:39:36 crc kubenswrapper[4853]: I1122 07:39:36.606793 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="27bdd5779298bc6af02ca011917570345c0d87f55a02ebdfa756be3d3b14d2bc" Nov 22 07:39:36 crc kubenswrapper[4853]: I1122 07:39:36.606789 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-b744-account-create-frzkl" Nov 22 07:39:36 crc kubenswrapper[4853]: I1122 07:39:36.610507 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-r5xl6" event={"ID":"5cdfe3e8-bc06-4691-86c0-4e409315cdf9","Type":"ContainerDied","Data":"0241c849888d5693014525e5305bfdc119ffa14df0c6860801c387babc31c486"} Nov 22 07:39:36 crc kubenswrapper[4853]: I1122 07:39:36.610549 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-r5xl6" Nov 22 07:39:36 crc kubenswrapper[4853]: I1122 07:39:36.610559 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0241c849888d5693014525e5305bfdc119ffa14df0c6860801c387babc31c486" Nov 22 07:39:36 crc kubenswrapper[4853]: I1122 07:39:36.612697 4853 generic.go:334] "Generic (PLEG): container finished" podID="99f8fbb2-9f4f-48a1-bbe6-ae11d68fc2cc" containerID="21963dd73268b70a39b031054ea8b79de2b654cb57d978be93e84267f53e0e9b" exitCode=0 Nov 22 07:39:36 crc kubenswrapper[4853]: I1122 07:39:36.612905 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-4d23-account-create-4kkcm" event={"ID":"99f8fbb2-9f4f-48a1-bbe6-ae11d68fc2cc","Type":"ContainerDied","Data":"21963dd73268b70a39b031054ea8b79de2b654cb57d978be93e84267f53e0e9b"} Nov 22 07:39:36 crc kubenswrapper[4853]: I1122 07:39:36.615490 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Nov 22 07:39:37 crc kubenswrapper[4853]: I1122 07:39:37.450192 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-l7f52"] Nov 22 07:39:37 crc kubenswrapper[4853]: E1122 07:39:37.451013 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e59d32a8-a318-40f1-9cfe-f10d7d2f31cb" containerName="mariadb-database-create" Nov 22 07:39:37 crc kubenswrapper[4853]: I1122 07:39:37.451108 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="e59d32a8-a318-40f1-9cfe-f10d7d2f31cb" containerName="mariadb-database-create" Nov 22 07:39:37 crc kubenswrapper[4853]: E1122 07:39:37.451206 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c31a521c-9c4a-40fd-b320-4ebb0ff0fa23" containerName="mariadb-account-create" Nov 22 07:39:37 crc kubenswrapper[4853]: I1122 07:39:37.451274 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="c31a521c-9c4a-40fd-b320-4ebb0ff0fa23" containerName="mariadb-account-create" Nov 22 07:39:37 crc kubenswrapper[4853]: E1122 07:39:37.451379 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26080cb8-1363-43b9-aec1-e84e5bd13de2" containerName="mariadb-account-create" Nov 22 07:39:37 crc kubenswrapper[4853]: I1122 07:39:37.451451 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="26080cb8-1363-43b9-aec1-e84e5bd13de2" containerName="mariadb-account-create" Nov 22 07:39:37 crc kubenswrapper[4853]: E1122 07:39:37.451568 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5cdfe3e8-bc06-4691-86c0-4e409315cdf9" containerName="mariadb-database-create" Nov 22 07:39:37 crc kubenswrapper[4853]: I1122 07:39:37.451643 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="5cdfe3e8-bc06-4691-86c0-4e409315cdf9" containerName="mariadb-database-create" Nov 22 07:39:37 crc kubenswrapper[4853]: I1122 07:39:37.452021 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="c31a521c-9c4a-40fd-b320-4ebb0ff0fa23" containerName="mariadb-account-create" Nov 22 07:39:37 crc kubenswrapper[4853]: I1122 07:39:37.452133 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="e59d32a8-a318-40f1-9cfe-f10d7d2f31cb" containerName="mariadb-database-create" Nov 22 07:39:37 crc kubenswrapper[4853]: I1122 07:39:37.452254 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="26080cb8-1363-43b9-aec1-e84e5bd13de2" containerName="mariadb-account-create" Nov 22 07:39:37 crc kubenswrapper[4853]: I1122 07:39:37.452337 4853 
memory_manager.go:354] "RemoveStaleState removing state" podUID="5cdfe3e8-bc06-4691-86c0-4e409315cdf9" containerName="mariadb-database-create" Nov 22 07:39:37 crc kubenswrapper[4853]: I1122 07:39:37.453521 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-l7f52" Nov 22 07:39:37 crc kubenswrapper[4853]: I1122 07:39:37.465254 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-l7f52"] Nov 22 07:39:37 crc kubenswrapper[4853]: I1122 07:39:37.538074 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g9dcj\" (UniqueName: \"kubernetes.io/projected/015c5e49-7907-4c6c-a3b3-7416c2bdefad-kube-api-access-g9dcj\") pod \"glance-db-create-l7f52\" (UID: \"015c5e49-7907-4c6c-a3b3-7416c2bdefad\") " pod="openstack/glance-db-create-l7f52" Nov 22 07:39:37 crc kubenswrapper[4853]: I1122 07:39:37.538369 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/015c5e49-7907-4c6c-a3b3-7416c2bdefad-operator-scripts\") pod \"glance-db-create-l7f52\" (UID: \"015c5e49-7907-4c6c-a3b3-7416c2bdefad\") " pod="openstack/glance-db-create-l7f52" Nov 22 07:39:37 crc kubenswrapper[4853]: I1122 07:39:37.624411 4853 generic.go:334] "Generic (PLEG): container finished" podID="7268d91f-27a0-45a1-8239-b6bdc8736b4b" containerID="8d28e2d5395961601f1e1e0330aac5883c7550b29d4622f26e5a45390303f622" exitCode=0 Nov 22 07:39:37 crc kubenswrapper[4853]: I1122 07:39:37.624588 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-b8h4v" event={"ID":"7268d91f-27a0-45a1-8239-b6bdc8736b4b","Type":"ContainerDied","Data":"8d28e2d5395961601f1e1e0330aac5883c7550b29d4622f26e5a45390303f622"} Nov 22 07:39:37 crc kubenswrapper[4853]: I1122 07:39:37.641350 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/015c5e49-7907-4c6c-a3b3-7416c2bdefad-operator-scripts\") pod \"glance-db-create-l7f52\" (UID: \"015c5e49-7907-4c6c-a3b3-7416c2bdefad\") " pod="openstack/glance-db-create-l7f52" Nov 22 07:39:37 crc kubenswrapper[4853]: I1122 07:39:37.641583 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g9dcj\" (UniqueName: \"kubernetes.io/projected/015c5e49-7907-4c6c-a3b3-7416c2bdefad-kube-api-access-g9dcj\") pod \"glance-db-create-l7f52\" (UID: \"015c5e49-7907-4c6c-a3b3-7416c2bdefad\") " pod="openstack/glance-db-create-l7f52" Nov 22 07:39:37 crc kubenswrapper[4853]: I1122 07:39:37.642289 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/015c5e49-7907-4c6c-a3b3-7416c2bdefad-operator-scripts\") pod \"glance-db-create-l7f52\" (UID: \"015c5e49-7907-4c6c-a3b3-7416c2bdefad\") " pod="openstack/glance-db-create-l7f52" Nov 22 07:39:37 crc kubenswrapper[4853]: I1122 07:39:37.676408 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g9dcj\" (UniqueName: \"kubernetes.io/projected/015c5e49-7907-4c6c-a3b3-7416c2bdefad-kube-api-access-g9dcj\") pod \"glance-db-create-l7f52\" (UID: \"015c5e49-7907-4c6c-a3b3-7416c2bdefad\") " pod="openstack/glance-db-create-l7f52" Nov 22 07:39:37 crc kubenswrapper[4853]: I1122 07:39:37.777047 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-l7f52" Nov 22 07:39:38 crc kubenswrapper[4853]: I1122 07:39:38.081485 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-4d23-account-create-4kkcm" Nov 22 07:39:38 crc kubenswrapper[4853]: I1122 07:39:38.161274 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r2nx4\" (UniqueName: \"kubernetes.io/projected/99f8fbb2-9f4f-48a1-bbe6-ae11d68fc2cc-kube-api-access-r2nx4\") pod \"99f8fbb2-9f4f-48a1-bbe6-ae11d68fc2cc\" (UID: \"99f8fbb2-9f4f-48a1-bbe6-ae11d68fc2cc\") " Nov 22 07:39:38 crc kubenswrapper[4853]: I1122 07:39:38.161420 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/99f8fbb2-9f4f-48a1-bbe6-ae11d68fc2cc-operator-scripts\") pod \"99f8fbb2-9f4f-48a1-bbe6-ae11d68fc2cc\" (UID: \"99f8fbb2-9f4f-48a1-bbe6-ae11d68fc2cc\") " Nov 22 07:39:38 crc kubenswrapper[4853]: I1122 07:39:38.162654 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/99f8fbb2-9f4f-48a1-bbe6-ae11d68fc2cc-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "99f8fbb2-9f4f-48a1-bbe6-ae11d68fc2cc" (UID: "99f8fbb2-9f4f-48a1-bbe6-ae11d68fc2cc"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:39:38 crc kubenswrapper[4853]: I1122 07:39:38.172352 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/99f8fbb2-9f4f-48a1-bbe6-ae11d68fc2cc-kube-api-access-r2nx4" (OuterVolumeSpecName: "kube-api-access-r2nx4") pod "99f8fbb2-9f4f-48a1-bbe6-ae11d68fc2cc" (UID: "99f8fbb2-9f4f-48a1-bbe6-ae11d68fc2cc"). InnerVolumeSpecName "kube-api-access-r2nx4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:39:38 crc kubenswrapper[4853]: I1122 07:39:38.244276 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-95nmd" Nov 22 07:39:38 crc kubenswrapper[4853]: I1122 07:39:38.265683 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r2nx4\" (UniqueName: \"kubernetes.io/projected/99f8fbb2-9f4f-48a1-bbe6-ae11d68fc2cc-kube-api-access-r2nx4\") on node \"crc\" DevicePath \"\"" Nov 22 07:39:38 crc kubenswrapper[4853]: I1122 07:39:38.265729 4853 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/99f8fbb2-9f4f-48a1-bbe6-ae11d68fc2cc-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 07:39:38 crc kubenswrapper[4853]: I1122 07:39:38.367343 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f4476ce0-7ffe-489f-a7b8-8375a7980bfb-operator-scripts\") pod \"f4476ce0-7ffe-489f-a7b8-8375a7980bfb\" (UID: \"f4476ce0-7ffe-489f-a7b8-8375a7980bfb\") " Nov 22 07:39:38 crc kubenswrapper[4853]: I1122 07:39:38.367451 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hd67l\" (UniqueName: \"kubernetes.io/projected/f4476ce0-7ffe-489f-a7b8-8375a7980bfb-kube-api-access-hd67l\") pod \"f4476ce0-7ffe-489f-a7b8-8375a7980bfb\" (UID: \"f4476ce0-7ffe-489f-a7b8-8375a7980bfb\") " Nov 22 07:39:38 crc kubenswrapper[4853]: I1122 07:39:38.368303 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f4476ce0-7ffe-489f-a7b8-8375a7980bfb-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f4476ce0-7ffe-489f-a7b8-8375a7980bfb" (UID: "f4476ce0-7ffe-489f-a7b8-8375a7980bfb"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:39:38 crc kubenswrapper[4853]: I1122 07:39:38.375334 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4476ce0-7ffe-489f-a7b8-8375a7980bfb-kube-api-access-hd67l" (OuterVolumeSpecName: "kube-api-access-hd67l") pod "f4476ce0-7ffe-489f-a7b8-8375a7980bfb" (UID: "f4476ce0-7ffe-489f-a7b8-8375a7980bfb"). InnerVolumeSpecName "kube-api-access-hd67l". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:39:38 crc kubenswrapper[4853]: I1122 07:39:38.440100 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-l7f52"] Nov 22 07:39:38 crc kubenswrapper[4853]: I1122 07:39:38.470142 4853 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f4476ce0-7ffe-489f-a7b8-8375a7980bfb-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 07:39:38 crc kubenswrapper[4853]: I1122 07:39:38.470185 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hd67l\" (UniqueName: \"kubernetes.io/projected/f4476ce0-7ffe-489f-a7b8-8375a7980bfb-kube-api-access-hd67l\") on node \"crc\" DevicePath \"\"" Nov 22 07:39:38 crc kubenswrapper[4853]: I1122 07:39:38.636301 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-l7f52" event={"ID":"015c5e49-7907-4c6c-a3b3-7416c2bdefad","Type":"ContainerStarted","Data":"3f87a88eb780e725d1f96ef4ad725d5c0fec60af81884938bc2fd88d3e1897ca"} Nov 22 07:39:38 crc kubenswrapper[4853]: I1122 07:39:38.638767 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-db-create-95nmd" event={"ID":"f4476ce0-7ffe-489f-a7b8-8375a7980bfb","Type":"ContainerDied","Data":"c74896b8741b2b89821d4b2df17017e33e2495fdc7f2cc5d0488eb66f79862e0"} Nov 22 07:39:38 crc kubenswrapper[4853]: I1122 07:39:38.638817 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c74896b8741b2b89821d4b2df17017e33e2495fdc7f2cc5d0488eb66f79862e0" Nov 22 07:39:38 crc kubenswrapper[4853]: I1122 07:39:38.638780 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-95nmd" Nov 22 07:39:38 crc kubenswrapper[4853]: I1122 07:39:38.641375 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-4d23-account-create-4kkcm" Nov 22 07:39:38 crc kubenswrapper[4853]: I1122 07:39:38.644844 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-4d23-account-create-4kkcm" event={"ID":"99f8fbb2-9f4f-48a1-bbe6-ae11d68fc2cc","Type":"ContainerDied","Data":"a4e8b960283043f0d3d8967d67bba86479c71501121c4e58630cbefe626470a1"} Nov 22 07:39:38 crc kubenswrapper[4853]: I1122 07:39:38.644927 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a4e8b960283043f0d3d8967d67bba86479c71501121c4e58630cbefe626470a1" Nov 22 07:39:38 crc kubenswrapper[4853]: I1122 07:39:38.990417 4853 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-nhs2x" podUID="05c9113f-59ff-46cc-b704-eb9c8553ad37" containerName="ovn-controller" probeResult="failure" output=< Nov 22 07:39:38 crc kubenswrapper[4853]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Nov 22 07:39:38 crc kubenswrapper[4853]: > Nov 22 07:39:38 crc kubenswrapper[4853]: I1122 07:39:38.997983 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-k99wz" Nov 22 07:39:39 crc kubenswrapper[4853]: I1122 07:39:39.188826 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-b8h4v" Nov 22 07:39:39 crc kubenswrapper[4853]: I1122 07:39:39.217185 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-nhs2x-config-g8w2v"] Nov 22 07:39:39 crc kubenswrapper[4853]: E1122 07:39:39.217929 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4476ce0-7ffe-489f-a7b8-8375a7980bfb" containerName="mariadb-database-create" Nov 22 07:39:39 crc kubenswrapper[4853]: I1122 07:39:39.217952 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4476ce0-7ffe-489f-a7b8-8375a7980bfb" containerName="mariadb-database-create" Nov 22 07:39:39 crc kubenswrapper[4853]: E1122 07:39:39.217995 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99f8fbb2-9f4f-48a1-bbe6-ae11d68fc2cc" containerName="mariadb-account-create" Nov 22 07:39:39 crc kubenswrapper[4853]: I1122 07:39:39.218013 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="99f8fbb2-9f4f-48a1-bbe6-ae11d68fc2cc" containerName="mariadb-account-create" Nov 22 07:39:39 crc kubenswrapper[4853]: E1122 07:39:39.218049 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7268d91f-27a0-45a1-8239-b6bdc8736b4b" containerName="swift-ring-rebalance" Nov 22 07:39:39 crc kubenswrapper[4853]: I1122 07:39:39.218058 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="7268d91f-27a0-45a1-8239-b6bdc8736b4b" containerName="swift-ring-rebalance" Nov 22 07:39:39 crc kubenswrapper[4853]: I1122 07:39:39.218267 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="99f8fbb2-9f4f-48a1-bbe6-ae11d68fc2cc" containerName="mariadb-account-create" Nov 22 07:39:39 crc kubenswrapper[4853]: I1122 07:39:39.218300 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4476ce0-7ffe-489f-a7b8-8375a7980bfb" containerName="mariadb-database-create" Nov 22 07:39:39 crc kubenswrapper[4853]: I1122 07:39:39.218312 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="7268d91f-27a0-45a1-8239-b6bdc8736b4b" containerName="swift-ring-rebalance" Nov 22 07:39:39 crc kubenswrapper[4853]: I1122 07:39:39.219284 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-nhs2x-config-g8w2v" Nov 22 07:39:39 crc kubenswrapper[4853]: I1122 07:39:39.266809 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Nov 22 07:39:39 crc kubenswrapper[4853]: I1122 07:39:39.292722 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-nhs2x-config-g8w2v"] Nov 22 07:39:39 crc kubenswrapper[4853]: I1122 07:39:39.302620 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/7268d91f-27a0-45a1-8239-b6bdc8736b4b-swiftconf\") pod \"7268d91f-27a0-45a1-8239-b6bdc8736b4b\" (UID: \"7268d91f-27a0-45a1-8239-b6bdc8736b4b\") " Nov 22 07:39:39 crc kubenswrapper[4853]: I1122 07:39:39.302714 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/7268d91f-27a0-45a1-8239-b6bdc8736b4b-ring-data-devices\") pod \"7268d91f-27a0-45a1-8239-b6bdc8736b4b\" (UID: \"7268d91f-27a0-45a1-8239-b6bdc8736b4b\") " Nov 22 07:39:39 crc kubenswrapper[4853]: I1122 07:39:39.302772 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7268d91f-27a0-45a1-8239-b6bdc8736b4b-combined-ca-bundle\") pod \"7268d91f-27a0-45a1-8239-b6bdc8736b4b\" (UID: \"7268d91f-27a0-45a1-8239-b6bdc8736b4b\") " Nov 22 07:39:39 crc kubenswrapper[4853]: I1122 07:39:39.302876 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/7268d91f-27a0-45a1-8239-b6bdc8736b4b-etc-swift\") pod \"7268d91f-27a0-45a1-8239-b6bdc8736b4b\" (UID: \"7268d91f-27a0-45a1-8239-b6bdc8736b4b\") " Nov 22 07:39:39 crc kubenswrapper[4853]: I1122 07:39:39.303001 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7268d91f-27a0-45a1-8239-b6bdc8736b4b-scripts\") pod \"7268d91f-27a0-45a1-8239-b6bdc8736b4b\" (UID: \"7268d91f-27a0-45a1-8239-b6bdc8736b4b\") " Nov 22 07:39:39 crc kubenswrapper[4853]: I1122 07:39:39.303068 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nts4t\" (UniqueName: \"kubernetes.io/projected/7268d91f-27a0-45a1-8239-b6bdc8736b4b-kube-api-access-nts4t\") pod \"7268d91f-27a0-45a1-8239-b6bdc8736b4b\" (UID: \"7268d91f-27a0-45a1-8239-b6bdc8736b4b\") " Nov 22 07:39:39 crc kubenswrapper[4853]: I1122 07:39:39.303112 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/7268d91f-27a0-45a1-8239-b6bdc8736b4b-dispersionconf\") pod \"7268d91f-27a0-45a1-8239-b6bdc8736b4b\" (UID: \"7268d91f-27a0-45a1-8239-b6bdc8736b4b\") " Nov 22 07:39:39 crc kubenswrapper[4853]: I1122 07:39:39.303413 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/08831ffc-023d-47cc-aee4-7ad879ef7ace-additional-scripts\") pod \"ovn-controller-nhs2x-config-g8w2v\" (UID: \"08831ffc-023d-47cc-aee4-7ad879ef7ace\") " pod="openstack/ovn-controller-nhs2x-config-g8w2v" Nov 22 07:39:39 crc kubenswrapper[4853]: I1122 07:39:39.303458 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: 
\"kubernetes.io/host-path/08831ffc-023d-47cc-aee4-7ad879ef7ace-var-run\") pod \"ovn-controller-nhs2x-config-g8w2v\" (UID: \"08831ffc-023d-47cc-aee4-7ad879ef7ace\") " pod="openstack/ovn-controller-nhs2x-config-g8w2v" Nov 22 07:39:39 crc kubenswrapper[4853]: I1122 07:39:39.303477 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/08831ffc-023d-47cc-aee4-7ad879ef7ace-var-run-ovn\") pod \"ovn-controller-nhs2x-config-g8w2v\" (UID: \"08831ffc-023d-47cc-aee4-7ad879ef7ace\") " pod="openstack/ovn-controller-nhs2x-config-g8w2v" Nov 22 07:39:39 crc kubenswrapper[4853]: I1122 07:39:39.303525 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/08831ffc-023d-47cc-aee4-7ad879ef7ace-scripts\") pod \"ovn-controller-nhs2x-config-g8w2v\" (UID: \"08831ffc-023d-47cc-aee4-7ad879ef7ace\") " pod="openstack/ovn-controller-nhs2x-config-g8w2v" Nov 22 07:39:39 crc kubenswrapper[4853]: I1122 07:39:39.303563 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wtvkd\" (UniqueName: \"kubernetes.io/projected/08831ffc-023d-47cc-aee4-7ad879ef7ace-kube-api-access-wtvkd\") pod \"ovn-controller-nhs2x-config-g8w2v\" (UID: \"08831ffc-023d-47cc-aee4-7ad879ef7ace\") " pod="openstack/ovn-controller-nhs2x-config-g8w2v" Nov 22 07:39:39 crc kubenswrapper[4853]: I1122 07:39:39.303623 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/08831ffc-023d-47cc-aee4-7ad879ef7ace-var-log-ovn\") pod \"ovn-controller-nhs2x-config-g8w2v\" (UID: \"08831ffc-023d-47cc-aee4-7ad879ef7ace\") " pod="openstack/ovn-controller-nhs2x-config-g8w2v" Nov 22 07:39:39 crc kubenswrapper[4853]: I1122 07:39:39.305938 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7268d91f-27a0-45a1-8239-b6bdc8736b4b-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "7268d91f-27a0-45a1-8239-b6bdc8736b4b" (UID: "7268d91f-27a0-45a1-8239-b6bdc8736b4b"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:39:39 crc kubenswrapper[4853]: I1122 07:39:39.308877 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7268d91f-27a0-45a1-8239-b6bdc8736b4b-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "7268d91f-27a0-45a1-8239-b6bdc8736b4b" (UID: "7268d91f-27a0-45a1-8239-b6bdc8736b4b"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:39:39 crc kubenswrapper[4853]: I1122 07:39:39.330553 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7268d91f-27a0-45a1-8239-b6bdc8736b4b-kube-api-access-nts4t" (OuterVolumeSpecName: "kube-api-access-nts4t") pod "7268d91f-27a0-45a1-8239-b6bdc8736b4b" (UID: "7268d91f-27a0-45a1-8239-b6bdc8736b4b"). InnerVolumeSpecName "kube-api-access-nts4t". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:39:39 crc kubenswrapper[4853]: I1122 07:39:39.355650 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7268d91f-27a0-45a1-8239-b6bdc8736b4b-scripts" (OuterVolumeSpecName: "scripts") pod "7268d91f-27a0-45a1-8239-b6bdc8736b4b" (UID: "7268d91f-27a0-45a1-8239-b6bdc8736b4b"). 
InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:39:39 crc kubenswrapper[4853]: I1122 07:39:39.389068 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7268d91f-27a0-45a1-8239-b6bdc8736b4b-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "7268d91f-27a0-45a1-8239-b6bdc8736b4b" (UID: "7268d91f-27a0-45a1-8239-b6bdc8736b4b"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:39:39 crc kubenswrapper[4853]: I1122 07:39:39.392419 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7268d91f-27a0-45a1-8239-b6bdc8736b4b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7268d91f-27a0-45a1-8239-b6bdc8736b4b" (UID: "7268d91f-27a0-45a1-8239-b6bdc8736b4b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:39:39 crc kubenswrapper[4853]: I1122 07:39:39.407857 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/08831ffc-023d-47cc-aee4-7ad879ef7ace-additional-scripts\") pod \"ovn-controller-nhs2x-config-g8w2v\" (UID: \"08831ffc-023d-47cc-aee4-7ad879ef7ace\") " pod="openstack/ovn-controller-nhs2x-config-g8w2v" Nov 22 07:39:39 crc kubenswrapper[4853]: I1122 07:39:39.407946 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/08831ffc-023d-47cc-aee4-7ad879ef7ace-var-run\") pod \"ovn-controller-nhs2x-config-g8w2v\" (UID: \"08831ffc-023d-47cc-aee4-7ad879ef7ace\") " pod="openstack/ovn-controller-nhs2x-config-g8w2v" Nov 22 07:39:39 crc kubenswrapper[4853]: I1122 07:39:39.407993 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/08831ffc-023d-47cc-aee4-7ad879ef7ace-var-run-ovn\") pod \"ovn-controller-nhs2x-config-g8w2v\" (UID: \"08831ffc-023d-47cc-aee4-7ad879ef7ace\") " pod="openstack/ovn-controller-nhs2x-config-g8w2v" Nov 22 07:39:39 crc kubenswrapper[4853]: I1122 07:39:39.408086 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/08831ffc-023d-47cc-aee4-7ad879ef7ace-scripts\") pod \"ovn-controller-nhs2x-config-g8w2v\" (UID: \"08831ffc-023d-47cc-aee4-7ad879ef7ace\") " pod="openstack/ovn-controller-nhs2x-config-g8w2v" Nov 22 07:39:39 crc kubenswrapper[4853]: I1122 07:39:39.408146 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wtvkd\" (UniqueName: \"kubernetes.io/projected/08831ffc-023d-47cc-aee4-7ad879ef7ace-kube-api-access-wtvkd\") pod \"ovn-controller-nhs2x-config-g8w2v\" (UID: \"08831ffc-023d-47cc-aee4-7ad879ef7ace\") " pod="openstack/ovn-controller-nhs2x-config-g8w2v" Nov 22 07:39:39 crc kubenswrapper[4853]: I1122 07:39:39.408242 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/08831ffc-023d-47cc-aee4-7ad879ef7ace-var-log-ovn\") pod \"ovn-controller-nhs2x-config-g8w2v\" (UID: \"08831ffc-023d-47cc-aee4-7ad879ef7ace\") " pod="openstack/ovn-controller-nhs2x-config-g8w2v" Nov 22 07:39:39 crc kubenswrapper[4853]: I1122 07:39:39.408268 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: 
\"kubernetes.io/projected/d4427668-9ef6-4594-ae35-ff983a6af324-etc-swift\") pod \"swift-storage-0\" (UID: \"d4427668-9ef6-4594-ae35-ff983a6af324\") " pod="openstack/swift-storage-0" Nov 22 07:39:39 crc kubenswrapper[4853]: I1122 07:39:39.408363 4853 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/7268d91f-27a0-45a1-8239-b6bdc8736b4b-ring-data-devices\") on node \"crc\" DevicePath \"\"" Nov 22 07:39:39 crc kubenswrapper[4853]: I1122 07:39:39.408387 4853 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7268d91f-27a0-45a1-8239-b6bdc8736b4b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:39:39 crc kubenswrapper[4853]: I1122 07:39:39.408401 4853 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/7268d91f-27a0-45a1-8239-b6bdc8736b4b-etc-swift\") on node \"crc\" DevicePath \"\"" Nov 22 07:39:39 crc kubenswrapper[4853]: I1122 07:39:39.408413 4853 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7268d91f-27a0-45a1-8239-b6bdc8736b4b-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 07:39:39 crc kubenswrapper[4853]: I1122 07:39:39.408430 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nts4t\" (UniqueName: \"kubernetes.io/projected/7268d91f-27a0-45a1-8239-b6bdc8736b4b-kube-api-access-nts4t\") on node \"crc\" DevicePath \"\"" Nov 22 07:39:39 crc kubenswrapper[4853]: I1122 07:39:39.408444 4853 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/7268d91f-27a0-45a1-8239-b6bdc8736b4b-dispersionconf\") on node \"crc\" DevicePath \"\"" Nov 22 07:39:39 crc kubenswrapper[4853]: I1122 07:39:39.408458 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/08831ffc-023d-47cc-aee4-7ad879ef7ace-var-run\") pod \"ovn-controller-nhs2x-config-g8w2v\" (UID: \"08831ffc-023d-47cc-aee4-7ad879ef7ace\") " pod="openstack/ovn-controller-nhs2x-config-g8w2v" Nov 22 07:39:39 crc kubenswrapper[4853]: I1122 07:39:39.408499 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/08831ffc-023d-47cc-aee4-7ad879ef7ace-var-run-ovn\") pod \"ovn-controller-nhs2x-config-g8w2v\" (UID: \"08831ffc-023d-47cc-aee4-7ad879ef7ace\") " pod="openstack/ovn-controller-nhs2x-config-g8w2v" Nov 22 07:39:39 crc kubenswrapper[4853]: I1122 07:39:39.408600 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/08831ffc-023d-47cc-aee4-7ad879ef7ace-var-log-ovn\") pod \"ovn-controller-nhs2x-config-g8w2v\" (UID: \"08831ffc-023d-47cc-aee4-7ad879ef7ace\") " pod="openstack/ovn-controller-nhs2x-config-g8w2v" Nov 22 07:39:39 crc kubenswrapper[4853]: I1122 07:39:39.408778 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7268d91f-27a0-45a1-8239-b6bdc8736b4b-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "7268d91f-27a0-45a1-8239-b6bdc8736b4b" (UID: "7268d91f-27a0-45a1-8239-b6bdc8736b4b"). InnerVolumeSpecName "swiftconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:39:39 crc kubenswrapper[4853]: I1122 07:39:39.409035 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/08831ffc-023d-47cc-aee4-7ad879ef7ace-additional-scripts\") pod \"ovn-controller-nhs2x-config-g8w2v\" (UID: \"08831ffc-023d-47cc-aee4-7ad879ef7ace\") " pod="openstack/ovn-controller-nhs2x-config-g8w2v" Nov 22 07:39:39 crc kubenswrapper[4853]: I1122 07:39:39.411707 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/08831ffc-023d-47cc-aee4-7ad879ef7ace-scripts\") pod \"ovn-controller-nhs2x-config-g8w2v\" (UID: \"08831ffc-023d-47cc-aee4-7ad879ef7ace\") " pod="openstack/ovn-controller-nhs2x-config-g8w2v" Nov 22 07:39:39 crc kubenswrapper[4853]: I1122 07:39:39.416791 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/d4427668-9ef6-4594-ae35-ff983a6af324-etc-swift\") pod \"swift-storage-0\" (UID: \"d4427668-9ef6-4594-ae35-ff983a6af324\") " pod="openstack/swift-storage-0" Nov 22 07:39:39 crc kubenswrapper[4853]: I1122 07:39:39.430493 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wtvkd\" (UniqueName: \"kubernetes.io/projected/08831ffc-023d-47cc-aee4-7ad879ef7ace-kube-api-access-wtvkd\") pod \"ovn-controller-nhs2x-config-g8w2v\" (UID: \"08831ffc-023d-47cc-aee4-7ad879ef7ace\") " pod="openstack/ovn-controller-nhs2x-config-g8w2v" Nov 22 07:39:39 crc kubenswrapper[4853]: I1122 07:39:39.510588 4853 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/7268d91f-27a0-45a1-8239-b6bdc8736b4b-swiftconf\") on node \"crc\" DevicePath \"\"" Nov 22 07:39:39 crc kubenswrapper[4853]: I1122 07:39:39.597242 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Nov 22 07:39:39 crc kubenswrapper[4853]: I1122 07:39:39.610713 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-nhs2x-config-g8w2v" Nov 22 07:39:39 crc kubenswrapper[4853]: I1122 07:39:39.677244 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-b8h4v" event={"ID":"7268d91f-27a0-45a1-8239-b6bdc8736b4b","Type":"ContainerDied","Data":"a081c59efa267223ceef61e5662fce9ca7ee6184314403fd881c467b9ec46d1f"} Nov 22 07:39:39 crc kubenswrapper[4853]: I1122 07:39:39.683871 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a081c59efa267223ceef61e5662fce9ca7ee6184314403fd881c467b9ec46d1f" Nov 22 07:39:39 crc kubenswrapper[4853]: I1122 07:39:39.678122 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-b8h4v" Nov 22 07:39:39 crc kubenswrapper[4853]: I1122 07:39:39.692349 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-l7f52" event={"ID":"015c5e49-7907-4c6c-a3b3-7416c2bdefad","Type":"ContainerStarted","Data":"a8e46def419b0227cc97075609db3892a84c5f816682899f0cbfdcb44cc92483"} Nov 22 07:39:39 crc kubenswrapper[4853]: I1122 07:39:39.752455 4853 scope.go:117] "RemoveContainer" containerID="1e441baa32afe9d9e98e192afda6a47d949c68ceb8edb814ef148dd3d84f45b1" Nov 22 07:39:39 crc kubenswrapper[4853]: E1122 07:39:39.752788 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" Nov 22 07:39:39 crc kubenswrapper[4853]: I1122 07:39:39.759311 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-create-l7f52" podStartSLOduration=2.759283669 podStartE2EDuration="2.759283669s" podCreationTimestamp="2025-11-22 07:39:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:39:39.727416055 +0000 UTC m=+1778.568038691" watchObservedRunningTime="2025-11-22 07:39:39.759283669 +0000 UTC m=+1778.599906295" Nov 22 07:39:39 crc kubenswrapper[4853]: I1122 07:39:39.793309 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-q4jsn"] Nov 22 07:39:39 crc kubenswrapper[4853]: I1122 07:39:39.797141 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-q4jsn" Nov 22 07:39:39 crc kubenswrapper[4853]: I1122 07:39:39.846598 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/67c6486e-03d6-4215-831c-c87eac890517-operator-scripts\") pod \"mysqld-exporter-openstack-cell1-db-create-q4jsn\" (UID: \"67c6486e-03d6-4215-831c-c87eac890517\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-q4jsn" Nov 22 07:39:39 crc kubenswrapper[4853]: I1122 07:39:39.847471 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-q4jsn"] Nov 22 07:39:39 crc kubenswrapper[4853]: I1122 07:39:39.856963 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxvtd\" (UniqueName: \"kubernetes.io/projected/67c6486e-03d6-4215-831c-c87eac890517-kube-api-access-qxvtd\") pod \"mysqld-exporter-openstack-cell1-db-create-q4jsn\" (UID: \"67c6486e-03d6-4215-831c-c87eac890517\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-q4jsn" Nov 22 07:39:39 crc kubenswrapper[4853]: I1122 07:39:39.894920 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 22 07:39:39 crc kubenswrapper[4853]: I1122 07:39:39.895304 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="21a745c3-d66b-447a-bf7e-386ac88bb05f" containerName="prometheus" containerID="cri-o://c730856d57cddcca52937e3cd3260af7023f4f1504c63c6735ab861b2b8563c7" gracePeriod=600 Nov 22 07:39:39 crc kubenswrapper[4853]: I1122 07:39:39.895456 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="21a745c3-d66b-447a-bf7e-386ac88bb05f" containerName="thanos-sidecar" containerID="cri-o://1e69b26abe64b015cb361ba34eebc7b316da1c9b062cb16a483c5a1abb852b3d" gracePeriod=600 Nov 22 07:39:39 crc kubenswrapper[4853]: I1122 07:39:39.895497 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="21a745c3-d66b-447a-bf7e-386ac88bb05f" containerName="config-reloader" containerID="cri-o://50e730253dded81d2c23de4f531a2e149403209da74286c4428fa65af80c88bb" gracePeriod=600 Nov 22 07:39:39 crc kubenswrapper[4853]: I1122 07:39:39.980864 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/67c6486e-03d6-4215-831c-c87eac890517-operator-scripts\") pod \"mysqld-exporter-openstack-cell1-db-create-q4jsn\" (UID: \"67c6486e-03d6-4215-831c-c87eac890517\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-q4jsn" Nov 22 07:39:39 crc kubenswrapper[4853]: I1122 07:39:39.981146 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qxvtd\" (UniqueName: \"kubernetes.io/projected/67c6486e-03d6-4215-831c-c87eac890517-kube-api-access-qxvtd\") pod \"mysqld-exporter-openstack-cell1-db-create-q4jsn\" (UID: \"67c6486e-03d6-4215-831c-c87eac890517\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-q4jsn" Nov 22 07:39:39 crc kubenswrapper[4853]: I1122 07:39:39.982658 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/67c6486e-03d6-4215-831c-c87eac890517-operator-scripts\") 
pod \"mysqld-exporter-openstack-cell1-db-create-q4jsn\" (UID: \"67c6486e-03d6-4215-831c-c87eac890517\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-q4jsn" Nov 22 07:39:40 crc kubenswrapper[4853]: I1122 07:39:40.011223 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qxvtd\" (UniqueName: \"kubernetes.io/projected/67c6486e-03d6-4215-831c-c87eac890517-kube-api-access-qxvtd\") pod \"mysqld-exporter-openstack-cell1-db-create-q4jsn\" (UID: \"67c6486e-03d6-4215-831c-c87eac890517\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-q4jsn" Nov 22 07:39:40 crc kubenswrapper[4853]: I1122 07:39:40.078091 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-ce61-account-create-qm8lf"] Nov 22 07:39:40 crc kubenswrapper[4853]: I1122 07:39:40.080067 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-ce61-account-create-qm8lf" Nov 22 07:39:40 crc kubenswrapper[4853]: I1122 07:39:40.083868 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-openstack-cell1-db-secret" Nov 22 07:39:40 crc kubenswrapper[4853]: I1122 07:39:40.097892 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-ce61-account-create-qm8lf"] Nov 22 07:39:40 crc kubenswrapper[4853]: I1122 07:39:40.186412 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zt5tm\" (UniqueName: \"kubernetes.io/projected/3c8b1ce5-5a35-4791-bc46-6c347d8bd3bd-kube-api-access-zt5tm\") pod \"mysqld-exporter-ce61-account-create-qm8lf\" (UID: \"3c8b1ce5-5a35-4791-bc46-6c347d8bd3bd\") " pod="openstack/mysqld-exporter-ce61-account-create-qm8lf" Nov 22 07:39:40 crc kubenswrapper[4853]: I1122 07:39:40.187041 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3c8b1ce5-5a35-4791-bc46-6c347d8bd3bd-operator-scripts\") pod \"mysqld-exporter-ce61-account-create-qm8lf\" (UID: \"3c8b1ce5-5a35-4791-bc46-6c347d8bd3bd\") " pod="openstack/mysqld-exporter-ce61-account-create-qm8lf" Nov 22 07:39:40 crc kubenswrapper[4853]: I1122 07:39:40.192778 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-q4jsn" Nov 22 07:39:40 crc kubenswrapper[4853]: I1122 07:39:40.286986 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-nhs2x-config-g8w2v"] Nov 22 07:39:40 crc kubenswrapper[4853]: I1122 07:39:40.290362 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3c8b1ce5-5a35-4791-bc46-6c347d8bd3bd-operator-scripts\") pod \"mysqld-exporter-ce61-account-create-qm8lf\" (UID: \"3c8b1ce5-5a35-4791-bc46-6c347d8bd3bd\") " pod="openstack/mysqld-exporter-ce61-account-create-qm8lf" Nov 22 07:39:40 crc kubenswrapper[4853]: I1122 07:39:40.290573 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zt5tm\" (UniqueName: \"kubernetes.io/projected/3c8b1ce5-5a35-4791-bc46-6c347d8bd3bd-kube-api-access-zt5tm\") pod \"mysqld-exporter-ce61-account-create-qm8lf\" (UID: \"3c8b1ce5-5a35-4791-bc46-6c347d8bd3bd\") " pod="openstack/mysqld-exporter-ce61-account-create-qm8lf" Nov 22 07:39:40 crc kubenswrapper[4853]: I1122 07:39:40.291896 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3c8b1ce5-5a35-4791-bc46-6c347d8bd3bd-operator-scripts\") pod \"mysqld-exporter-ce61-account-create-qm8lf\" (UID: \"3c8b1ce5-5a35-4791-bc46-6c347d8bd3bd\") " pod="openstack/mysqld-exporter-ce61-account-create-qm8lf" Nov 22 07:39:40 crc kubenswrapper[4853]: I1122 07:39:40.312262 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zt5tm\" (UniqueName: \"kubernetes.io/projected/3c8b1ce5-5a35-4791-bc46-6c347d8bd3bd-kube-api-access-zt5tm\") pod \"mysqld-exporter-ce61-account-create-qm8lf\" (UID: \"3c8b1ce5-5a35-4791-bc46-6c347d8bd3bd\") " pod="openstack/mysqld-exporter-ce61-account-create-qm8lf" Nov 22 07:39:40 crc kubenswrapper[4853]: I1122 07:39:40.462695 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-ce61-account-create-qm8lf" Nov 22 07:39:40 crc kubenswrapper[4853]: I1122 07:39:40.567627 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Nov 22 07:39:40 crc kubenswrapper[4853]: I1122 07:39:40.710333 4853 generic.go:334] "Generic (PLEG): container finished" podID="015c5e49-7907-4c6c-a3b3-7416c2bdefad" containerID="a8e46def419b0227cc97075609db3892a84c5f816682899f0cbfdcb44cc92483" exitCode=0 Nov 22 07:39:40 crc kubenswrapper[4853]: I1122 07:39:40.710391 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-l7f52" event={"ID":"015c5e49-7907-4c6c-a3b3-7416c2bdefad","Type":"ContainerDied","Data":"a8e46def419b0227cc97075609db3892a84c5f816682899f0cbfdcb44cc92483"} Nov 22 07:39:40 crc kubenswrapper[4853]: I1122 07:39:40.721893 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-nhs2x-config-g8w2v" event={"ID":"08831ffc-023d-47cc-aee4-7ad879ef7ace","Type":"ContainerStarted","Data":"bc82b125de746e0bea411edc1935d18aae261c72c941671886267dcfadd589f5"} Nov 22 07:39:40 crc kubenswrapper[4853]: I1122 07:39:40.725307 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"d4427668-9ef6-4594-ae35-ff983a6af324","Type":"ContainerStarted","Data":"b01b08b1ba53dd6a928f4b4f99f0d5b817b3b0693f6d556624170cb6d7383bc5"} Nov 22 07:39:40 crc kubenswrapper[4853]: I1122 07:39:40.743111 4853 generic.go:334] "Generic (PLEG): container finished" podID="21a745c3-d66b-447a-bf7e-386ac88bb05f" containerID="1e69b26abe64b015cb361ba34eebc7b316da1c9b062cb16a483c5a1abb852b3d" exitCode=0 Nov 22 07:39:40 crc kubenswrapper[4853]: I1122 07:39:40.743160 4853 generic.go:334] "Generic (PLEG): container finished" podID="21a745c3-d66b-447a-bf7e-386ac88bb05f" containerID="50e730253dded81d2c23de4f531a2e149403209da74286c4428fa65af80c88bb" exitCode=0 Nov 22 07:39:40 crc kubenswrapper[4853]: I1122 07:39:40.743173 4853 generic.go:334] "Generic (PLEG): container finished" podID="21a745c3-d66b-447a-bf7e-386ac88bb05f" containerID="c730856d57cddcca52937e3cd3260af7023f4f1504c63c6735ab861b2b8563c7" exitCode=0 Nov 22 07:39:40 crc kubenswrapper[4853]: I1122 07:39:40.743203 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"21a745c3-d66b-447a-bf7e-386ac88bb05f","Type":"ContainerDied","Data":"1e69b26abe64b015cb361ba34eebc7b316da1c9b062cb16a483c5a1abb852b3d"} Nov 22 07:39:40 crc kubenswrapper[4853]: I1122 07:39:40.743244 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"21a745c3-d66b-447a-bf7e-386ac88bb05f","Type":"ContainerDied","Data":"50e730253dded81d2c23de4f531a2e149403209da74286c4428fa65af80c88bb"} Nov 22 07:39:40 crc kubenswrapper[4853]: I1122 07:39:40.743257 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"21a745c3-d66b-447a-bf7e-386ac88bb05f","Type":"ContainerDied","Data":"c730856d57cddcca52937e3cd3260af7023f4f1504c63c6735ab861b2b8563c7"} Nov 22 07:39:40 crc kubenswrapper[4853]: W1122 07:39:40.796388 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod67c6486e_03d6_4215_831c_c87eac890517.slice/crio-89832d452e643e1913ac055e8745b513c3aef5b8d52a64296fd0c726c53efad0 WatchSource:0}: Error finding container 89832d452e643e1913ac055e8745b513c3aef5b8d52a64296fd0c726c53efad0: Status 404 
returned error can't find the container with id 89832d452e643e1913ac055e8745b513c3aef5b8d52a64296fd0c726c53efad0 Nov 22 07:39:40 crc kubenswrapper[4853]: I1122 07:39:40.796432 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-q4jsn"] Nov 22 07:39:41 crc kubenswrapper[4853]: I1122 07:39:41.024381 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-ce61-account-create-qm8lf"] Nov 22 07:39:41 crc kubenswrapper[4853]: I1122 07:39:41.179541 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Nov 22 07:39:41 crc kubenswrapper[4853]: I1122 07:39:41.320885 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/21a745c3-d66b-447a-bf7e-386ac88bb05f-thanos-prometheus-http-client-file\") pod \"21a745c3-d66b-447a-bf7e-386ac88bb05f\" (UID: \"21a745c3-d66b-447a-bf7e-386ac88bb05f\") " Nov 22 07:39:41 crc kubenswrapper[4853]: I1122 07:39:41.321288 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-clglx\" (UniqueName: \"kubernetes.io/projected/21a745c3-d66b-447a-bf7e-386ac88bb05f-kube-api-access-clglx\") pod \"21a745c3-d66b-447a-bf7e-386ac88bb05f\" (UID: \"21a745c3-d66b-447a-bf7e-386ac88bb05f\") " Nov 22 07:39:41 crc kubenswrapper[4853]: I1122 07:39:41.321327 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/21a745c3-d66b-447a-bf7e-386ac88bb05f-prometheus-metric-storage-rulefiles-0\") pod \"21a745c3-d66b-447a-bf7e-386ac88bb05f\" (UID: \"21a745c3-d66b-447a-bf7e-386ac88bb05f\") " Nov 22 07:39:41 crc kubenswrapper[4853]: I1122 07:39:41.321363 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/21a745c3-d66b-447a-bf7e-386ac88bb05f-config\") pod \"21a745c3-d66b-447a-bf7e-386ac88bb05f\" (UID: \"21a745c3-d66b-447a-bf7e-386ac88bb05f\") " Nov 22 07:39:41 crc kubenswrapper[4853]: I1122 07:39:41.321521 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/21a745c3-d66b-447a-bf7e-386ac88bb05f-config-out\") pod \"21a745c3-d66b-447a-bf7e-386ac88bb05f\" (UID: \"21a745c3-d66b-447a-bf7e-386ac88bb05f\") " Nov 22 07:39:41 crc kubenswrapper[4853]: I1122 07:39:41.321566 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/21a745c3-d66b-447a-bf7e-386ac88bb05f-web-config\") pod \"21a745c3-d66b-447a-bf7e-386ac88bb05f\" (UID: \"21a745c3-d66b-447a-bf7e-386ac88bb05f\") " Nov 22 07:39:41 crc kubenswrapper[4853]: I1122 07:39:41.321917 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-73f142d7-70c2-4362-8972-074d65aa68e0\") pod \"21a745c3-d66b-447a-bf7e-386ac88bb05f\" (UID: \"21a745c3-d66b-447a-bf7e-386ac88bb05f\") " Nov 22 07:39:41 crc kubenswrapper[4853]: I1122 07:39:41.321990 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/21a745c3-d66b-447a-bf7e-386ac88bb05f-tls-assets\") pod \"21a745c3-d66b-447a-bf7e-386ac88bb05f\" (UID: 
\"21a745c3-d66b-447a-bf7e-386ac88bb05f\") " Nov 22 07:39:41 crc kubenswrapper[4853]: I1122 07:39:41.323346 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/21a745c3-d66b-447a-bf7e-386ac88bb05f-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "21a745c3-d66b-447a-bf7e-386ac88bb05f" (UID: "21a745c3-d66b-447a-bf7e-386ac88bb05f"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:39:41 crc kubenswrapper[4853]: I1122 07:39:41.336013 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21a745c3-d66b-447a-bf7e-386ac88bb05f-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "21a745c3-d66b-447a-bf7e-386ac88bb05f" (UID: "21a745c3-d66b-447a-bf7e-386ac88bb05f"). InnerVolumeSpecName "thanos-prometheus-http-client-file". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:39:41 crc kubenswrapper[4853]: I1122 07:39:41.337992 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/21a745c3-d66b-447a-bf7e-386ac88bb05f-config-out" (OuterVolumeSpecName: "config-out") pod "21a745c3-d66b-447a-bf7e-386ac88bb05f" (UID: "21a745c3-d66b-447a-bf7e-386ac88bb05f"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:39:41 crc kubenswrapper[4853]: I1122 07:39:41.338011 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21a745c3-d66b-447a-bf7e-386ac88bb05f-config" (OuterVolumeSpecName: "config") pod "21a745c3-d66b-447a-bf7e-386ac88bb05f" (UID: "21a745c3-d66b-447a-bf7e-386ac88bb05f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:39:41 crc kubenswrapper[4853]: I1122 07:39:41.338180 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21a745c3-d66b-447a-bf7e-386ac88bb05f-kube-api-access-clglx" (OuterVolumeSpecName: "kube-api-access-clglx") pod "21a745c3-d66b-447a-bf7e-386ac88bb05f" (UID: "21a745c3-d66b-447a-bf7e-386ac88bb05f"). InnerVolumeSpecName "kube-api-access-clglx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:39:41 crc kubenswrapper[4853]: I1122 07:39:41.340953 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21a745c3-d66b-447a-bf7e-386ac88bb05f-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "21a745c3-d66b-447a-bf7e-386ac88bb05f" (UID: "21a745c3-d66b-447a-bf7e-386ac88bb05f"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:39:41 crc kubenswrapper[4853]: I1122 07:39:41.369102 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21a745c3-d66b-447a-bf7e-386ac88bb05f-web-config" (OuterVolumeSpecName: "web-config") pod "21a745c3-d66b-447a-bf7e-386ac88bb05f" (UID: "21a745c3-d66b-447a-bf7e-386ac88bb05f"). InnerVolumeSpecName "web-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:39:41 crc kubenswrapper[4853]: I1122 07:39:41.374933 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-73f142d7-70c2-4362-8972-074d65aa68e0" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "21a745c3-d66b-447a-bf7e-386ac88bb05f" (UID: "21a745c3-d66b-447a-bf7e-386ac88bb05f"). InnerVolumeSpecName "pvc-73f142d7-70c2-4362-8972-074d65aa68e0". PluginName "kubernetes.io/csi", VolumeGidValue "" Nov 22 07:39:41 crc kubenswrapper[4853]: I1122 07:39:41.425249 4853 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-tv8h9" podUID="cae818e5-34d5-43c7-95af-e82e21309758" containerName="registry-server" probeResult="failure" output=< Nov 22 07:39:41 crc kubenswrapper[4853]: timeout: failed to connect service ":50051" within 1s Nov 22 07:39:41 crc kubenswrapper[4853]: > Nov 22 07:39:41 crc kubenswrapper[4853]: I1122 07:39:41.425746 4853 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/21a745c3-d66b-447a-bf7e-386ac88bb05f-config-out\") on node \"crc\" DevicePath \"\"" Nov 22 07:39:41 crc kubenswrapper[4853]: I1122 07:39:41.425826 4853 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/21a745c3-d66b-447a-bf7e-386ac88bb05f-web-config\") on node \"crc\" DevicePath \"\"" Nov 22 07:39:41 crc kubenswrapper[4853]: I1122 07:39:41.425882 4853 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-73f142d7-70c2-4362-8972-074d65aa68e0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-73f142d7-70c2-4362-8972-074d65aa68e0\") on node \"crc\" " Nov 22 07:39:41 crc kubenswrapper[4853]: I1122 07:39:41.425901 4853 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/21a745c3-d66b-447a-bf7e-386ac88bb05f-tls-assets\") on node \"crc\" DevicePath \"\"" Nov 22 07:39:41 crc kubenswrapper[4853]: I1122 07:39:41.425918 4853 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/21a745c3-d66b-447a-bf7e-386ac88bb05f-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\"" Nov 22 07:39:41 crc kubenswrapper[4853]: I1122 07:39:41.425935 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-clglx\" (UniqueName: \"kubernetes.io/projected/21a745c3-d66b-447a-bf7e-386ac88bb05f-kube-api-access-clglx\") on node \"crc\" DevicePath \"\"" Nov 22 07:39:41 crc kubenswrapper[4853]: I1122 07:39:41.425955 4853 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/21a745c3-d66b-447a-bf7e-386ac88bb05f-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\"" Nov 22 07:39:41 crc kubenswrapper[4853]: I1122 07:39:41.425974 4853 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/21a745c3-d66b-447a-bf7e-386ac88bb05f-config\") on node \"crc\" DevicePath \"\"" Nov 22 07:39:41 crc kubenswrapper[4853]: I1122 07:39:41.472902 4853 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Nov 22 07:39:41 crc kubenswrapper[4853]: I1122 07:39:41.473814 4853 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-73f142d7-70c2-4362-8972-074d65aa68e0" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-73f142d7-70c2-4362-8972-074d65aa68e0") on node "crc" Nov 22 07:39:41 crc kubenswrapper[4853]: I1122 07:39:41.528647 4853 reconciler_common.go:293] "Volume detached for volume \"pvc-73f142d7-70c2-4362-8972-074d65aa68e0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-73f142d7-70c2-4362-8972-074d65aa68e0\") on node \"crc\" DevicePath \"\"" Nov 22 07:39:41 crc kubenswrapper[4853]: I1122 07:39:41.768057 4853 generic.go:334] "Generic (PLEG): container finished" podID="08831ffc-023d-47cc-aee4-7ad879ef7ace" containerID="3a950439bcaa64345b6de77d8957b914c37655297cd2e5c8f9d29e7dbc2896c4" exitCode=0 Nov 22 07:39:41 crc kubenswrapper[4853]: I1122 07:39:41.773921 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-nhs2x-config-g8w2v" event={"ID":"08831ffc-023d-47cc-aee4-7ad879ef7ace","Type":"ContainerDied","Data":"3a950439bcaa64345b6de77d8957b914c37655297cd2e5c8f9d29e7dbc2896c4"} Nov 22 07:39:41 crc kubenswrapper[4853]: I1122 07:39:41.779084 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-ce61-account-create-qm8lf" event={"ID":"3c8b1ce5-5a35-4791-bc46-6c347d8bd3bd","Type":"ContainerStarted","Data":"74b9c9ca7d062b54b108f2a57237fc92f12fedc9ec490728918a7f4e44519fdc"} Nov 22 07:39:41 crc kubenswrapper[4853]: I1122 07:39:41.779143 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-ce61-account-create-qm8lf" event={"ID":"3c8b1ce5-5a35-4791-bc46-6c347d8bd3bd","Type":"ContainerStarted","Data":"c2859dde6d28687ef12f484f7849ee4d77658778d6e5a0f0858080092339b66b"} Nov 22 07:39:41 crc kubenswrapper[4853]: I1122 07:39:41.783390 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"21a745c3-d66b-447a-bf7e-386ac88bb05f","Type":"ContainerDied","Data":"12bb5a88e209af6ba4cdd62a5959708ea9e0b6d437c0df3aeb8f4fa8ae1c3898"} Nov 22 07:39:41 crc kubenswrapper[4853]: I1122 07:39:41.783499 4853 scope.go:117] "RemoveContainer" containerID="1e69b26abe64b015cb361ba34eebc7b316da1c9b062cb16a483c5a1abb852b3d" Nov 22 07:39:41 crc kubenswrapper[4853]: I1122 07:39:41.783974 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Nov 22 07:39:41 crc kubenswrapper[4853]: I1122 07:39:41.797127 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-cell1-db-create-q4jsn" event={"ID":"67c6486e-03d6-4215-831c-c87eac890517","Type":"ContainerStarted","Data":"68afc55e3d57a420dba7215df55ae9c9bfd73b9c495a024107265c83a9e48dbd"} Nov 22 07:39:41 crc kubenswrapper[4853]: I1122 07:39:41.797236 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-cell1-db-create-q4jsn" event={"ID":"67c6486e-03d6-4215-831c-c87eac890517","Type":"ContainerStarted","Data":"89832d452e643e1913ac055e8745b513c3aef5b8d52a64296fd0c726c53efad0"} Nov 22 07:39:41 crc kubenswrapper[4853]: I1122 07:39:41.814489 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-ce61-account-create-qm8lf" podStartSLOduration=1.8144600199999998 podStartE2EDuration="1.81446002s" podCreationTimestamp="2025-11-22 07:39:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:39:41.812317523 +0000 UTC m=+1780.652940149" watchObservedRunningTime="2025-11-22 07:39:41.81446002 +0000 UTC m=+1780.655082646" Nov 22 07:39:41 crc kubenswrapper[4853]: I1122 07:39:41.917450 4853 scope.go:117] "RemoveContainer" containerID="50e730253dded81d2c23de4f531a2e149403209da74286c4428fa65af80c88bb" Nov 22 07:39:41 crc kubenswrapper[4853]: I1122 07:39:41.928968 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 22 07:39:41 crc kubenswrapper[4853]: I1122 07:39:41.978399 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 22 07:39:41 crc kubenswrapper[4853]: I1122 07:39:41.981889 4853 scope.go:117] "RemoveContainer" containerID="c730856d57cddcca52937e3cd3260af7023f4f1504c63c6735ab861b2b8563c7" Nov 22 07:39:41 crc kubenswrapper[4853]: I1122 07:39:41.995255 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 22 07:39:41 crc kubenswrapper[4853]: E1122 07:39:41.996507 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21a745c3-d66b-447a-bf7e-386ac88bb05f" containerName="init-config-reloader" Nov 22 07:39:41 crc kubenswrapper[4853]: I1122 07:39:41.996566 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="21a745c3-d66b-447a-bf7e-386ac88bb05f" containerName="init-config-reloader" Nov 22 07:39:41 crc kubenswrapper[4853]: E1122 07:39:41.996603 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21a745c3-d66b-447a-bf7e-386ac88bb05f" containerName="config-reloader" Nov 22 07:39:41 crc kubenswrapper[4853]: I1122 07:39:41.996615 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="21a745c3-d66b-447a-bf7e-386ac88bb05f" containerName="config-reloader" Nov 22 07:39:41 crc kubenswrapper[4853]: E1122 07:39:41.996674 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21a745c3-d66b-447a-bf7e-386ac88bb05f" containerName="prometheus" Nov 22 07:39:41 crc kubenswrapper[4853]: I1122 07:39:41.996684 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="21a745c3-d66b-447a-bf7e-386ac88bb05f" containerName="prometheus" Nov 22 07:39:41 crc kubenswrapper[4853]: E1122 07:39:41.996700 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21a745c3-d66b-447a-bf7e-386ac88bb05f" 
containerName="thanos-sidecar" Nov 22 07:39:41 crc kubenswrapper[4853]: I1122 07:39:41.996709 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="21a745c3-d66b-447a-bf7e-386ac88bb05f" containerName="thanos-sidecar" Nov 22 07:39:41 crc kubenswrapper[4853]: I1122 07:39:41.997138 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="21a745c3-d66b-447a-bf7e-386ac88bb05f" containerName="prometheus" Nov 22 07:39:41 crc kubenswrapper[4853]: I1122 07:39:41.997181 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="21a745c3-d66b-447a-bf7e-386ac88bb05f" containerName="thanos-sidecar" Nov 22 07:39:41 crc kubenswrapper[4853]: I1122 07:39:41.997208 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="21a745c3-d66b-447a-bf7e-386ac88bb05f" containerName="config-reloader" Nov 22 07:39:42 crc kubenswrapper[4853]: I1122 07:39:42.000292 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Nov 22 07:39:42 crc kubenswrapper[4853]: I1122 07:39:42.003732 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Nov 22 07:39:42 crc kubenswrapper[4853]: I1122 07:39:42.004906 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Nov 22 07:39:42 crc kubenswrapper[4853]: I1122 07:39:42.004931 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Nov 22 07:39:42 crc kubenswrapper[4853]: I1122 07:39:42.005390 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Nov 22 07:39:42 crc kubenswrapper[4853]: I1122 07:39:42.005490 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-metric-storage-prometheus-svc" Nov 22 07:39:42 crc kubenswrapper[4853]: I1122 07:39:42.005908 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 22 07:39:42 crc kubenswrapper[4853]: I1122 07:39:42.008424 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-xlqg2" Nov 22 07:39:42 crc kubenswrapper[4853]: I1122 07:39:42.013967 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Nov 22 07:39:42 crc kubenswrapper[4853]: I1122 07:39:42.043395 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/78a8c429-b429-44e1-be5e-3eb355ae4d54-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"78a8c429-b429-44e1-be5e-3eb355ae4d54\") " pod="openstack/prometheus-metric-storage-0" Nov 22 07:39:42 crc kubenswrapper[4853]: I1122 07:39:42.043466 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78a8c429-b429-44e1-be5e-3eb355ae4d54-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"78a8c429-b429-44e1-be5e-3eb355ae4d54\") " pod="openstack/prometheus-metric-storage-0" Nov 22 07:39:42 crc kubenswrapper[4853]: I1122 07:39:42.043567 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: 
\"kubernetes.io/secret/78a8c429-b429-44e1-be5e-3eb355ae4d54-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"78a8c429-b429-44e1-be5e-3eb355ae4d54\") " pod="openstack/prometheus-metric-storage-0" Nov 22 07:39:42 crc kubenswrapper[4853]: I1122 07:39:42.043593 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/78a8c429-b429-44e1-be5e-3eb355ae4d54-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"78a8c429-b429-44e1-be5e-3eb355ae4d54\") " pod="openstack/prometheus-metric-storage-0" Nov 22 07:39:42 crc kubenswrapper[4853]: I1122 07:39:42.043615 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/78a8c429-b429-44e1-be5e-3eb355ae4d54-config\") pod \"prometheus-metric-storage-0\" (UID: \"78a8c429-b429-44e1-be5e-3eb355ae4d54\") " pod="openstack/prometheus-metric-storage-0" Nov 22 07:39:42 crc kubenswrapper[4853]: I1122 07:39:42.043669 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/78a8c429-b429-44e1-be5e-3eb355ae4d54-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"78a8c429-b429-44e1-be5e-3eb355ae4d54\") " pod="openstack/prometheus-metric-storage-0" Nov 22 07:39:42 crc kubenswrapper[4853]: I1122 07:39:42.043701 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/78a8c429-b429-44e1-be5e-3eb355ae4d54-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"78a8c429-b429-44e1-be5e-3eb355ae4d54\") " pod="openstack/prometheus-metric-storage-0" Nov 22 07:39:42 crc kubenswrapper[4853]: I1122 07:39:42.044714 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dsvqr\" (UniqueName: \"kubernetes.io/projected/78a8c429-b429-44e1-be5e-3eb355ae4d54-kube-api-access-dsvqr\") pod \"prometheus-metric-storage-0\" (UID: \"78a8c429-b429-44e1-be5e-3eb355ae4d54\") " pod="openstack/prometheus-metric-storage-0" Nov 22 07:39:42 crc kubenswrapper[4853]: I1122 07:39:42.044758 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-73f142d7-70c2-4362-8972-074d65aa68e0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-73f142d7-70c2-4362-8972-074d65aa68e0\") pod \"prometheus-metric-storage-0\" (UID: \"78a8c429-b429-44e1-be5e-3eb355ae4d54\") " pod="openstack/prometheus-metric-storage-0" Nov 22 07:39:42 crc kubenswrapper[4853]: I1122 07:39:42.044832 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/78a8c429-b429-44e1-be5e-3eb355ae4d54-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"78a8c429-b429-44e1-be5e-3eb355ae4d54\") " pod="openstack/prometheus-metric-storage-0" Nov 22 07:39:42 crc kubenswrapper[4853]: I1122 07:39:42.045202 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: 
\"kubernetes.io/secret/78a8c429-b429-44e1-be5e-3eb355ae4d54-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"78a8c429-b429-44e1-be5e-3eb355ae4d54\") " pod="openstack/prometheus-metric-storage-0" Nov 22 07:39:42 crc kubenswrapper[4853]: I1122 07:39:42.147783 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78a8c429-b429-44e1-be5e-3eb355ae4d54-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"78a8c429-b429-44e1-be5e-3eb355ae4d54\") " pod="openstack/prometheus-metric-storage-0" Nov 22 07:39:42 crc kubenswrapper[4853]: I1122 07:39:42.147936 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/78a8c429-b429-44e1-be5e-3eb355ae4d54-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"78a8c429-b429-44e1-be5e-3eb355ae4d54\") " pod="openstack/prometheus-metric-storage-0" Nov 22 07:39:42 crc kubenswrapper[4853]: I1122 07:39:42.147980 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/78a8c429-b429-44e1-be5e-3eb355ae4d54-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"78a8c429-b429-44e1-be5e-3eb355ae4d54\") " pod="openstack/prometheus-metric-storage-0" Nov 22 07:39:42 crc kubenswrapper[4853]: I1122 07:39:42.148017 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/78a8c429-b429-44e1-be5e-3eb355ae4d54-config\") pod \"prometheus-metric-storage-0\" (UID: \"78a8c429-b429-44e1-be5e-3eb355ae4d54\") " pod="openstack/prometheus-metric-storage-0" Nov 22 07:39:42 crc kubenswrapper[4853]: I1122 07:39:42.148105 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/78a8c429-b429-44e1-be5e-3eb355ae4d54-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"78a8c429-b429-44e1-be5e-3eb355ae4d54\") " pod="openstack/prometheus-metric-storage-0" Nov 22 07:39:42 crc kubenswrapper[4853]: I1122 07:39:42.148239 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/78a8c429-b429-44e1-be5e-3eb355ae4d54-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"78a8c429-b429-44e1-be5e-3eb355ae4d54\") " pod="openstack/prometheus-metric-storage-0" Nov 22 07:39:42 crc kubenswrapper[4853]: I1122 07:39:42.148330 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dsvqr\" (UniqueName: \"kubernetes.io/projected/78a8c429-b429-44e1-be5e-3eb355ae4d54-kube-api-access-dsvqr\") pod \"prometheus-metric-storage-0\" (UID: \"78a8c429-b429-44e1-be5e-3eb355ae4d54\") " pod="openstack/prometheus-metric-storage-0" Nov 22 07:39:42 crc kubenswrapper[4853]: I1122 07:39:42.150022 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-73f142d7-70c2-4362-8972-074d65aa68e0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-73f142d7-70c2-4362-8972-074d65aa68e0\") pod \"prometheus-metric-storage-0\" (UID: \"78a8c429-b429-44e1-be5e-3eb355ae4d54\") " pod="openstack/prometheus-metric-storage-0" Nov 22 07:39:42 crc 
kubenswrapper[4853]: I1122 07:39:42.150090 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/78a8c429-b429-44e1-be5e-3eb355ae4d54-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"78a8c429-b429-44e1-be5e-3eb355ae4d54\") " pod="openstack/prometheus-metric-storage-0" Nov 22 07:39:42 crc kubenswrapper[4853]: I1122 07:39:42.150172 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/78a8c429-b429-44e1-be5e-3eb355ae4d54-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"78a8c429-b429-44e1-be5e-3eb355ae4d54\") " pod="openstack/prometheus-metric-storage-0" Nov 22 07:39:42 crc kubenswrapper[4853]: I1122 07:39:42.150233 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/78a8c429-b429-44e1-be5e-3eb355ae4d54-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"78a8c429-b429-44e1-be5e-3eb355ae4d54\") " pod="openstack/prometheus-metric-storage-0" Nov 22 07:39:42 crc kubenswrapper[4853]: I1122 07:39:42.150658 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/78a8c429-b429-44e1-be5e-3eb355ae4d54-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"78a8c429-b429-44e1-be5e-3eb355ae4d54\") " pod="openstack/prometheus-metric-storage-0" Nov 22 07:39:42 crc kubenswrapper[4853]: I1122 07:39:42.155055 4853 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Nov 22 07:39:42 crc kubenswrapper[4853]: I1122 07:39:42.155126 4853 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-73f142d7-70c2-4362-8972-074d65aa68e0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-73f142d7-70c2-4362-8972-074d65aa68e0\") pod \"prometheus-metric-storage-0\" (UID: \"78a8c429-b429-44e1-be5e-3eb355ae4d54\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/2b9452f72c82fc383fb7f41be861bef3909a820d64fcd2aeadb4aba00c38cb08/globalmount\"" pod="openstack/prometheus-metric-storage-0" Nov 22 07:39:42 crc kubenswrapper[4853]: I1122 07:39:42.159328 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/78a8c429-b429-44e1-be5e-3eb355ae4d54-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"78a8c429-b429-44e1-be5e-3eb355ae4d54\") " pod="openstack/prometheus-metric-storage-0" Nov 22 07:39:42 crc kubenswrapper[4853]: I1122 07:39:42.161204 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/78a8c429-b429-44e1-be5e-3eb355ae4d54-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"78a8c429-b429-44e1-be5e-3eb355ae4d54\") " pod="openstack/prometheus-metric-storage-0" Nov 22 07:39:42 crc kubenswrapper[4853]: I1122 07:39:42.164284 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/78a8c429-b429-44e1-be5e-3eb355ae4d54-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"78a8c429-b429-44e1-be5e-3eb355ae4d54\") " pod="openstack/prometheus-metric-storage-0" Nov 22 07:39:42 crc kubenswrapper[4853]: I1122 07:39:42.164411 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/78a8c429-b429-44e1-be5e-3eb355ae4d54-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"78a8c429-b429-44e1-be5e-3eb355ae4d54\") " pod="openstack/prometheus-metric-storage-0" Nov 22 07:39:42 crc kubenswrapper[4853]: I1122 07:39:42.164581 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/78a8c429-b429-44e1-be5e-3eb355ae4d54-config\") pod \"prometheus-metric-storage-0\" (UID: \"78a8c429-b429-44e1-be5e-3eb355ae4d54\") " pod="openstack/prometheus-metric-storage-0" Nov 22 07:39:42 crc kubenswrapper[4853]: I1122 07:39:42.166253 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/78a8c429-b429-44e1-be5e-3eb355ae4d54-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"78a8c429-b429-44e1-be5e-3eb355ae4d54\") " pod="openstack/prometheus-metric-storage-0" Nov 22 07:39:42 crc kubenswrapper[4853]: I1122 07:39:42.166922 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/78a8c429-b429-44e1-be5e-3eb355ae4d54-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"78a8c429-b429-44e1-be5e-3eb355ae4d54\") " pod="openstack/prometheus-metric-storage-0" Nov 22 07:39:42 crc kubenswrapper[4853]: I1122 07:39:42.168790 4853 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78a8c429-b429-44e1-be5e-3eb355ae4d54-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"78a8c429-b429-44e1-be5e-3eb355ae4d54\") " pod="openstack/prometheus-metric-storage-0" Nov 22 07:39:42 crc kubenswrapper[4853]: I1122 07:39:42.173705 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dsvqr\" (UniqueName: \"kubernetes.io/projected/78a8c429-b429-44e1-be5e-3eb355ae4d54-kube-api-access-dsvqr\") pod \"prometheus-metric-storage-0\" (UID: \"78a8c429-b429-44e1-be5e-3eb355ae4d54\") " pod="openstack/prometheus-metric-storage-0" Nov 22 07:39:42 crc kubenswrapper[4853]: I1122 07:39:42.202155 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-73f142d7-70c2-4362-8972-074d65aa68e0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-73f142d7-70c2-4362-8972-074d65aa68e0\") pod \"prometheus-metric-storage-0\" (UID: \"78a8c429-b429-44e1-be5e-3eb355ae4d54\") " pod="openstack/prometheus-metric-storage-0" Nov 22 07:39:42 crc kubenswrapper[4853]: I1122 07:39:42.353787 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Nov 22 07:39:42 crc kubenswrapper[4853]: I1122 07:39:42.732765 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-g6crm" Nov 22 07:39:42 crc kubenswrapper[4853]: I1122 07:39:42.732839 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-g6crm" Nov 22 07:39:42 crc kubenswrapper[4853]: I1122 07:39:42.793291 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-g6crm" Nov 22 07:39:42 crc kubenswrapper[4853]: I1122 07:39:42.810652 4853 generic.go:334] "Generic (PLEG): container finished" podID="3c8b1ce5-5a35-4791-bc46-6c347d8bd3bd" containerID="74b9c9ca7d062b54b108f2a57237fc92f12fedc9ec490728918a7f4e44519fdc" exitCode=0 Nov 22 07:39:42 crc kubenswrapper[4853]: I1122 07:39:42.810773 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-ce61-account-create-qm8lf" event={"ID":"3c8b1ce5-5a35-4791-bc46-6c347d8bd3bd","Type":"ContainerDied","Data":"74b9c9ca7d062b54b108f2a57237fc92f12fedc9ec490728918a7f4e44519fdc"} Nov 22 07:39:42 crc kubenswrapper[4853]: I1122 07:39:42.815036 4853 generic.go:334] "Generic (PLEG): container finished" podID="67c6486e-03d6-4215-831c-c87eac890517" containerID="68afc55e3d57a420dba7215df55ae9c9bfd73b9c495a024107265c83a9e48dbd" exitCode=0 Nov 22 07:39:42 crc kubenswrapper[4853]: I1122 07:39:42.815163 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-cell1-db-create-q4jsn" event={"ID":"67c6486e-03d6-4215-831c-c87eac890517","Type":"ContainerDied","Data":"68afc55e3d57a420dba7215df55ae9c9bfd73b9c495a024107265c83a9e48dbd"} Nov 22 07:39:42 crc kubenswrapper[4853]: I1122 07:39:42.866149 4853 scope.go:117] "RemoveContainer" containerID="de9382875e576601d65403d94d2a97424a765bdae93ee92a405d4d66a2d746fd" Nov 22 07:39:42 crc kubenswrapper[4853]: I1122 07:39:42.882534 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-g6crm" Nov 22 07:39:43 crc kubenswrapper[4853]: I1122 07:39:43.043921 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-l7f52" Nov 22 07:39:43 crc kubenswrapper[4853]: I1122 07:39:43.048344 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-g6crm"] Nov 22 07:39:43 crc kubenswrapper[4853]: I1122 07:39:43.076326 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/015c5e49-7907-4c6c-a3b3-7416c2bdefad-operator-scripts\") pod \"015c5e49-7907-4c6c-a3b3-7416c2bdefad\" (UID: \"015c5e49-7907-4c6c-a3b3-7416c2bdefad\") " Nov 22 07:39:43 crc kubenswrapper[4853]: I1122 07:39:43.076423 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g9dcj\" (UniqueName: \"kubernetes.io/projected/015c5e49-7907-4c6c-a3b3-7416c2bdefad-kube-api-access-g9dcj\") pod \"015c5e49-7907-4c6c-a3b3-7416c2bdefad\" (UID: \"015c5e49-7907-4c6c-a3b3-7416c2bdefad\") " Nov 22 07:39:43 crc kubenswrapper[4853]: I1122 07:39:43.078879 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/015c5e49-7907-4c6c-a3b3-7416c2bdefad-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "015c5e49-7907-4c6c-a3b3-7416c2bdefad" (UID: "015c5e49-7907-4c6c-a3b3-7416c2bdefad"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:39:43 crc kubenswrapper[4853]: I1122 07:39:43.092211 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/015c5e49-7907-4c6c-a3b3-7416c2bdefad-kube-api-access-g9dcj" (OuterVolumeSpecName: "kube-api-access-g9dcj") pod "015c5e49-7907-4c6c-a3b3-7416c2bdefad" (UID: "015c5e49-7907-4c6c-a3b3-7416c2bdefad"). InnerVolumeSpecName "kube-api-access-g9dcj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:39:43 crc kubenswrapper[4853]: I1122 07:39:43.178409 4853 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/015c5e49-7907-4c6c-a3b3-7416c2bdefad-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 07:39:43 crc kubenswrapper[4853]: I1122 07:39:43.178479 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g9dcj\" (UniqueName: \"kubernetes.io/projected/015c5e49-7907-4c6c-a3b3-7416c2bdefad-kube-api-access-g9dcj\") on node \"crc\" DevicePath \"\"" Nov 22 07:39:43 crc kubenswrapper[4853]: I1122 07:39:43.337201 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-q4jsn" Nov 22 07:39:43 crc kubenswrapper[4853]: I1122 07:39:43.352554 4853 util.go:48] "No ready sandbox for pod can be found. 
Nov 22 07:39:43 crc kubenswrapper[4853]: I1122 07:39:43.418537 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/08831ffc-023d-47cc-aee4-7ad879ef7ace-var-run\") pod \"08831ffc-023d-47cc-aee4-7ad879ef7ace\" (UID: \"08831ffc-023d-47cc-aee4-7ad879ef7ace\") "
Nov 22 07:39:43 crc kubenswrapper[4853]: I1122 07:39:43.419139 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/67c6486e-03d6-4215-831c-c87eac890517-operator-scripts\") pod \"67c6486e-03d6-4215-831c-c87eac890517\" (UID: \"67c6486e-03d6-4215-831c-c87eac890517\") "
Nov 22 07:39:43 crc kubenswrapper[4853]: I1122 07:39:43.419180 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qxvtd\" (UniqueName: \"kubernetes.io/projected/67c6486e-03d6-4215-831c-c87eac890517-kube-api-access-qxvtd\") pod \"67c6486e-03d6-4215-831c-c87eac890517\" (UID: \"67c6486e-03d6-4215-831c-c87eac890517\") "
Nov 22 07:39:43 crc kubenswrapper[4853]: I1122 07:39:43.419284 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/08831ffc-023d-47cc-aee4-7ad879ef7ace-var-run-ovn\") pod \"08831ffc-023d-47cc-aee4-7ad879ef7ace\" (UID: \"08831ffc-023d-47cc-aee4-7ad879ef7ace\") "
Nov 22 07:39:43 crc kubenswrapper[4853]: I1122 07:39:43.419337 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/08831ffc-023d-47cc-aee4-7ad879ef7ace-additional-scripts\") pod \"08831ffc-023d-47cc-aee4-7ad879ef7ace\" (UID: \"08831ffc-023d-47cc-aee4-7ad879ef7ace\") "
Nov 22 07:39:43 crc kubenswrapper[4853]: I1122 07:39:43.419368 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/08831ffc-023d-47cc-aee4-7ad879ef7ace-var-log-ovn\") pod \"08831ffc-023d-47cc-aee4-7ad879ef7ace\" (UID: \"08831ffc-023d-47cc-aee4-7ad879ef7ace\") "
Nov 22 07:39:43 crc kubenswrapper[4853]: I1122 07:39:43.419472 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wtvkd\" (UniqueName: \"kubernetes.io/projected/08831ffc-023d-47cc-aee4-7ad879ef7ace-kube-api-access-wtvkd\") pod \"08831ffc-023d-47cc-aee4-7ad879ef7ace\" (UID: \"08831ffc-023d-47cc-aee4-7ad879ef7ace\") "
Nov 22 07:39:43 crc kubenswrapper[4853]: I1122 07:39:43.419527 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/08831ffc-023d-47cc-aee4-7ad879ef7ace-scripts\") pod \"08831ffc-023d-47cc-aee4-7ad879ef7ace\" (UID: \"08831ffc-023d-47cc-aee4-7ad879ef7ace\") "
Nov 22 07:39:43 crc kubenswrapper[4853]: I1122 07:39:43.420423 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/08831ffc-023d-47cc-aee4-7ad879ef7ace-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "08831ffc-023d-47cc-aee4-7ad879ef7ace" (UID: "08831ffc-023d-47cc-aee4-7ad879ef7ace"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 22 07:39:43 crc kubenswrapper[4853]: I1122 07:39:43.420593 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/08831ffc-023d-47cc-aee4-7ad879ef7ace-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "08831ffc-023d-47cc-aee4-7ad879ef7ace" (UID: "08831ffc-023d-47cc-aee4-7ad879ef7ace"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 22 07:39:43 crc kubenswrapper[4853]: I1122 07:39:43.421381 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/08831ffc-023d-47cc-aee4-7ad879ef7ace-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "08831ffc-023d-47cc-aee4-7ad879ef7ace" (UID: "08831ffc-023d-47cc-aee4-7ad879ef7ace"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 22 07:39:43 crc kubenswrapper[4853]: I1122 07:39:43.421527 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/08831ffc-023d-47cc-aee4-7ad879ef7ace-var-run" (OuterVolumeSpecName: "var-run") pod "08831ffc-023d-47cc-aee4-7ad879ef7ace" (UID: "08831ffc-023d-47cc-aee4-7ad879ef7ace"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 22 07:39:43 crc kubenswrapper[4853]: I1122 07:39:43.421810 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/08831ffc-023d-47cc-aee4-7ad879ef7ace-scripts" (OuterVolumeSpecName: "scripts") pod "08831ffc-023d-47cc-aee4-7ad879ef7ace" (UID: "08831ffc-023d-47cc-aee4-7ad879ef7ace"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 22 07:39:43 crc kubenswrapper[4853]: I1122 07:39:43.424716 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/67c6486e-03d6-4215-831c-c87eac890517-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "67c6486e-03d6-4215-831c-c87eac890517" (UID: "67c6486e-03d6-4215-831c-c87eac890517"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 22 07:39:43 crc kubenswrapper[4853]: I1122 07:39:43.431138 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67c6486e-03d6-4215-831c-c87eac890517-kube-api-access-qxvtd" (OuterVolumeSpecName: "kube-api-access-qxvtd") pod "67c6486e-03d6-4215-831c-c87eac890517" (UID: "67c6486e-03d6-4215-831c-c87eac890517"). InnerVolumeSpecName "kube-api-access-qxvtd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 22 07:39:43 crc kubenswrapper[4853]: I1122 07:39:43.436053 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08831ffc-023d-47cc-aee4-7ad879ef7ace-kube-api-access-wtvkd" (OuterVolumeSpecName: "kube-api-access-wtvkd") pod "08831ffc-023d-47cc-aee4-7ad879ef7ace" (UID: "08831ffc-023d-47cc-aee4-7ad879ef7ace"). InnerVolumeSpecName "kube-api-access-wtvkd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 22 07:39:43 crc kubenswrapper[4853]: I1122 07:39:43.521318 4853 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/08831ffc-023d-47cc-aee4-7ad879ef7ace-scripts\") on node \"crc\" DevicePath \"\""
Nov 22 07:39:43 crc kubenswrapper[4853]: I1122 07:39:43.521358 4853 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/08831ffc-023d-47cc-aee4-7ad879ef7ace-var-run\") on node \"crc\" DevicePath \"\""
Nov 22 07:39:43 crc kubenswrapper[4853]: I1122 07:39:43.521369 4853 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/67c6486e-03d6-4215-831c-c87eac890517-operator-scripts\") on node \"crc\" DevicePath \"\""
Nov 22 07:39:43 crc kubenswrapper[4853]: I1122 07:39:43.521382 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qxvtd\" (UniqueName: \"kubernetes.io/projected/67c6486e-03d6-4215-831c-c87eac890517-kube-api-access-qxvtd\") on node \"crc\" DevicePath \"\""
Nov 22 07:39:43 crc kubenswrapper[4853]: I1122 07:39:43.521391 4853 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/08831ffc-023d-47cc-aee4-7ad879ef7ace-var-run-ovn\") on node \"crc\" DevicePath \"\""
Nov 22 07:39:43 crc kubenswrapper[4853]: I1122 07:39:43.521403 4853 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/08831ffc-023d-47cc-aee4-7ad879ef7ace-additional-scripts\") on node \"crc\" DevicePath \"\""
Nov 22 07:39:43 crc kubenswrapper[4853]: I1122 07:39:43.521413 4853 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/08831ffc-023d-47cc-aee4-7ad879ef7ace-var-log-ovn\") on node \"crc\" DevicePath \"\""
Nov 22 07:39:43 crc kubenswrapper[4853]: I1122 07:39:43.521422 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wtvkd\" (UniqueName: \"kubernetes.io/projected/08831ffc-023d-47cc-aee4-7ad879ef7ace-kube-api-access-wtvkd\") on node \"crc\" DevicePath \"\""
Nov 22 07:39:43 crc kubenswrapper[4853]: I1122 07:39:43.734270 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Nov 22 07:39:43 crc kubenswrapper[4853]: I1122 07:39:43.881807 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="21a745c3-d66b-447a-bf7e-386ac88bb05f" path="/var/lib/kubelet/pods/21a745c3-d66b-447a-bf7e-386ac88bb05f/volumes"
Nov 22 07:39:43 crc kubenswrapper[4853]: I1122 07:39:43.887336 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-cell1-db-create-q4jsn" event={"ID":"67c6486e-03d6-4215-831c-c87eac890517","Type":"ContainerDied","Data":"89832d452e643e1913ac055e8745b513c3aef5b8d52a64296fd0c726c53efad0"}
Nov 22 07:39:43 crc kubenswrapper[4853]: I1122 07:39:43.887411 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="89832d452e643e1913ac055e8745b513c3aef5b8d52a64296fd0c726c53efad0"
Nov 22 07:39:43 crc kubenswrapper[4853]: I1122 07:39:43.887540 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-q4jsn"
Nov 22 07:39:43 crc kubenswrapper[4853]: I1122 07:39:43.907192 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-l7f52" event={"ID":"015c5e49-7907-4c6c-a3b3-7416c2bdefad","Type":"ContainerDied","Data":"3f87a88eb780e725d1f96ef4ad725d5c0fec60af81884938bc2fd88d3e1897ca"}
Nov 22 07:39:43 crc kubenswrapper[4853]: I1122 07:39:43.907382 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3f87a88eb780e725d1f96ef4ad725d5c0fec60af81884938bc2fd88d3e1897ca"
Nov 22 07:39:43 crc kubenswrapper[4853]: I1122 07:39:43.907536 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-l7f52"
Nov 22 07:39:43 crc kubenswrapper[4853]: I1122 07:39:43.911970 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-nhs2x-config-g8w2v" event={"ID":"08831ffc-023d-47cc-aee4-7ad879ef7ace","Type":"ContainerDied","Data":"bc82b125de746e0bea411edc1935d18aae261c72c941671886267dcfadd589f5"}
Nov 22 07:39:43 crc kubenswrapper[4853]: I1122 07:39:43.912041 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bc82b125de746e0bea411edc1935d18aae261c72c941671886267dcfadd589f5"
Nov 22 07:39:43 crc kubenswrapper[4853]: I1122 07:39:43.912155 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-nhs2x-config-g8w2v"
Nov 22 07:39:43 crc kubenswrapper[4853]: I1122 07:39:43.921677 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"d4427668-9ef6-4594-ae35-ff983a6af324","Type":"ContainerStarted","Data":"9cb24095b48370933a297bf5f1b0dc575a50a3ed05630b1612f10ed32331d586"}
Nov 22 07:39:43 crc kubenswrapper[4853]: I1122 07:39:43.936734 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"78a8c429-b429-44e1-be5e-3eb355ae4d54","Type":"ContainerStarted","Data":"56d5fef9538266a86662e8798f47a220c5a5f10452dd95813df2653ccee80919"}
Nov 22 07:39:43 crc kubenswrapper[4853]: I1122 07:39:43.972531 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-nhs2x"
Nov 22 07:39:44 crc kubenswrapper[4853]: I1122 07:39:44.397769 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-ce61-account-create-qm8lf"
Nov 22 07:39:44 crc kubenswrapper[4853]: I1122 07:39:44.476372 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3c8b1ce5-5a35-4791-bc46-6c347d8bd3bd-operator-scripts\") pod \"3c8b1ce5-5a35-4791-bc46-6c347d8bd3bd\" (UID: \"3c8b1ce5-5a35-4791-bc46-6c347d8bd3bd\") "
Nov 22 07:39:44 crc kubenswrapper[4853]: I1122 07:39:44.476459 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zt5tm\" (UniqueName: \"kubernetes.io/projected/3c8b1ce5-5a35-4791-bc46-6c347d8bd3bd-kube-api-access-zt5tm\") pod \"3c8b1ce5-5a35-4791-bc46-6c347d8bd3bd\" (UID: \"3c8b1ce5-5a35-4791-bc46-6c347d8bd3bd\") "
Nov 22 07:39:44 crc kubenswrapper[4853]: I1122 07:39:44.477945 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c8b1ce5-5a35-4791-bc46-6c347d8bd3bd-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3c8b1ce5-5a35-4791-bc46-6c347d8bd3bd" (UID: "3c8b1ce5-5a35-4791-bc46-6c347d8bd3bd"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 22 07:39:44 crc kubenswrapper[4853]: I1122 07:39:44.487091 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c8b1ce5-5a35-4791-bc46-6c347d8bd3bd-kube-api-access-zt5tm" (OuterVolumeSpecName: "kube-api-access-zt5tm") pod "3c8b1ce5-5a35-4791-bc46-6c347d8bd3bd" (UID: "3c8b1ce5-5a35-4791-bc46-6c347d8bd3bd"). InnerVolumeSpecName "kube-api-access-zt5tm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 22 07:39:44 crc kubenswrapper[4853]: I1122 07:39:44.526818 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-nhs2x-config-g8w2v"]
Nov 22 07:39:44 crc kubenswrapper[4853]: I1122 07:39:44.536445 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-nhs2x-config-g8w2v"]
Nov 22 07:39:44 crc kubenswrapper[4853]: I1122 07:39:44.583563 4853 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3c8b1ce5-5a35-4791-bc46-6c347d8bd3bd-operator-scripts\") on node \"crc\" DevicePath \"\""
Nov 22 07:39:44 crc kubenswrapper[4853]: I1122 07:39:44.583615 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zt5tm\" (UniqueName: \"kubernetes.io/projected/3c8b1ce5-5a35-4791-bc46-6c347d8bd3bd-kube-api-access-zt5tm\") on node \"crc\" DevicePath \"\""
Nov 22 07:39:44 crc kubenswrapper[4853]: I1122 07:39:44.954611 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"d4427668-9ef6-4594-ae35-ff983a6af324","Type":"ContainerStarted","Data":"ce8f21b592b2c108196b02ea6b947e68b6362efa3463359c2333d965d0edb3aa"}
Nov 22 07:39:44 crc kubenswrapper[4853]: I1122 07:39:44.955019 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"d4427668-9ef6-4594-ae35-ff983a6af324","Type":"ContainerStarted","Data":"aa51578947b11d6cea715a1220eb160664405ee8791af1315443374a76e6a880"}
Nov 22 07:39:44 crc kubenswrapper[4853]: I1122 07:39:44.955032 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"d4427668-9ef6-4594-ae35-ff983a6af324","Type":"ContainerStarted","Data":"41da5f7d67661b8f162dfe2e375fe21e094af1ade0dea5d7f97208a92271f8a5"}
Nov 22 07:39:44 crc kubenswrapper[4853]: I1122 07:39:44.958731 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-g6crm" podUID="e3ead9cf-10b5-45ec-82e2-9083c221e150" containerName="registry-server" containerID="cri-o://db9973b3e3d2b327ce59afd23637406d3b7d1f71698fbbb6b75902f617d0ec61" gracePeriod=2
Nov 22 07:39:44 crc kubenswrapper[4853]: I1122 07:39:44.959163 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-ce61-account-create-qm8lf"
Nov 22 07:39:44 crc kubenswrapper[4853]: I1122 07:39:44.959271 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-ce61-account-create-qm8lf" event={"ID":"3c8b1ce5-5a35-4791-bc46-6c347d8bd3bd","Type":"ContainerDied","Data":"c2859dde6d28687ef12f484f7849ee4d77658778d6e5a0f0858080092339b66b"}
Nov 22 07:39:44 crc kubenswrapper[4853]: I1122 07:39:44.959325 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c2859dde6d28687ef12f484f7849ee4d77658778d6e5a0f0858080092339b66b"
Nov 22 07:39:45 crc kubenswrapper[4853]: I1122 07:39:45.103946 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0"
Nov 22 07:39:45 crc kubenswrapper[4853]: I1122 07:39:45.245503 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-0"]
Nov 22 07:39:45 crc kubenswrapper[4853]: E1122 07:39:45.248374 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08831ffc-023d-47cc-aee4-7ad879ef7ace" containerName="ovn-config"
Nov 22 07:39:45 crc kubenswrapper[4853]: I1122 07:39:45.248421 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="08831ffc-023d-47cc-aee4-7ad879ef7ace" containerName="ovn-config"
Nov 22 07:39:45 crc kubenswrapper[4853]: E1122 07:39:45.248450 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="015c5e49-7907-4c6c-a3b3-7416c2bdefad" containerName="mariadb-database-create"
Nov 22 07:39:45 crc kubenswrapper[4853]: I1122 07:39:45.248461 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="015c5e49-7907-4c6c-a3b3-7416c2bdefad" containerName="mariadb-database-create"
Nov 22 07:39:45 crc kubenswrapper[4853]: E1122 07:39:45.248481 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c8b1ce5-5a35-4791-bc46-6c347d8bd3bd" containerName="mariadb-account-create"
Nov 22 07:39:45 crc kubenswrapper[4853]: I1122 07:39:45.248491 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c8b1ce5-5a35-4791-bc46-6c347d8bd3bd" containerName="mariadb-account-create"
Nov 22 07:39:45 crc kubenswrapper[4853]: E1122 07:39:45.248519 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67c6486e-03d6-4215-831c-c87eac890517" containerName="mariadb-database-create"
Nov 22 07:39:45 crc kubenswrapper[4853]: I1122 07:39:45.248530 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="67c6486e-03d6-4215-831c-c87eac890517" containerName="mariadb-database-create"
Nov 22 07:39:45 crc kubenswrapper[4853]: I1122 07:39:45.251286 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="015c5e49-7907-4c6c-a3b3-7416c2bdefad" containerName="mariadb-database-create"
Nov 22 07:39:45 crc kubenswrapper[4853]: I1122 07:39:45.251352 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="08831ffc-023d-47cc-aee4-7ad879ef7ace" containerName="ovn-config"
Nov 22 07:39:45 crc kubenswrapper[4853]: I1122 07:39:45.251383 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="67c6486e-03d6-4215-831c-c87eac890517" containerName="mariadb-database-create"
Nov 22 07:39:45 crc kubenswrapper[4853]: I1122 07:39:45.251403 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c8b1ce5-5a35-4791-bc46-6c347d8bd3bd" containerName="mariadb-account-create"
Nov 22 07:39:45 crc kubenswrapper[4853]: I1122 07:39:45.252729 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0"
Nov 22 07:39:45 crc kubenswrapper[4853]: I1122 07:39:45.263010 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-config-data"
Nov 22 07:39:45 crc kubenswrapper[4853]: I1122 07:39:45.261733 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"]
Nov 22 07:39:45 crc kubenswrapper[4853]: I1122 07:39:45.301951 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41a12382-0497-4150-b1bb-002d4df97f20-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"41a12382-0497-4150-b1bb-002d4df97f20\") " pod="openstack/mysqld-exporter-0"
Nov 22 07:39:45 crc kubenswrapper[4853]: I1122 07:39:45.302201 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41a12382-0497-4150-b1bb-002d4df97f20-config-data\") pod \"mysqld-exporter-0\" (UID: \"41a12382-0497-4150-b1bb-002d4df97f20\") " pod="openstack/mysqld-exporter-0"
Nov 22 07:39:45 crc kubenswrapper[4853]: I1122 07:39:45.302253 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6j5jx\" (UniqueName: \"kubernetes.io/projected/41a12382-0497-4150-b1bb-002d4df97f20-kube-api-access-6j5jx\") pod \"mysqld-exporter-0\" (UID: \"41a12382-0497-4150-b1bb-002d4df97f20\") " pod="openstack/mysqld-exporter-0"
Nov 22 07:39:45 crc kubenswrapper[4853]: I1122 07:39:45.404401 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6j5jx\" (UniqueName: \"kubernetes.io/projected/41a12382-0497-4150-b1bb-002d4df97f20-kube-api-access-6j5jx\") pod \"mysqld-exporter-0\" (UID: \"41a12382-0497-4150-b1bb-002d4df97f20\") " pod="openstack/mysqld-exporter-0"
Nov 22 07:39:45 crc kubenswrapper[4853]: I1122 07:39:45.404629 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41a12382-0497-4150-b1bb-002d4df97f20-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"41a12382-0497-4150-b1bb-002d4df97f20\") " pod="openstack/mysqld-exporter-0"
Nov 22 07:39:45 crc kubenswrapper[4853]: I1122 07:39:45.404885 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41a12382-0497-4150-b1bb-002d4df97f20-config-data\") pod \"mysqld-exporter-0\" (UID: \"41a12382-0497-4150-b1bb-002d4df97f20\") " pod="openstack/mysqld-exporter-0"
Nov 22 07:39:45 crc kubenswrapper[4853]: I1122 07:39:45.417534 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41a12382-0497-4150-b1bb-002d4df97f20-config-data\") pod \"mysqld-exporter-0\" (UID: \"41a12382-0497-4150-b1bb-002d4df97f20\") " pod="openstack/mysqld-exporter-0"
Nov 22 07:39:45 crc kubenswrapper[4853]: I1122 07:39:45.421418 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41a12382-0497-4150-b1bb-002d4df97f20-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"41a12382-0497-4150-b1bb-002d4df97f20\") " pod="openstack/mysqld-exporter-0"
Nov 22 07:39:45 crc kubenswrapper[4853]: I1122 07:39:45.451095 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6j5jx\" (UniqueName: \"kubernetes.io/projected/41a12382-0497-4150-b1bb-002d4df97f20-kube-api-access-6j5jx\") pod \"mysqld-exporter-0\" (UID: \"41a12382-0497-4150-b1bb-002d4df97f20\") " pod="openstack/mysqld-exporter-0"
Nov 22 07:39:45 crc kubenswrapper[4853]: I1122 07:39:45.581393 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0"
Nov 22 07:39:45 crc kubenswrapper[4853]: I1122 07:39:45.775870 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="08831ffc-023d-47cc-aee4-7ad879ef7ace" path="/var/lib/kubelet/pods/08831ffc-023d-47cc-aee4-7ad879ef7ace/volumes"
Nov 22 07:39:46 crc kubenswrapper[4853]: I1122 07:39:46.019950 4853 generic.go:334] "Generic (PLEG): container finished" podID="e3ead9cf-10b5-45ec-82e2-9083c221e150" containerID="db9973b3e3d2b327ce59afd23637406d3b7d1f71698fbbb6b75902f617d0ec61" exitCode=0
Nov 22 07:39:46 crc kubenswrapper[4853]: I1122 07:39:46.020151 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g6crm" event={"ID":"e3ead9cf-10b5-45ec-82e2-9083c221e150","Type":"ContainerDied","Data":"db9973b3e3d2b327ce59afd23637406d3b7d1f71698fbbb6b75902f617d0ec61"}
Nov 22 07:39:46 crc kubenswrapper[4853]: I1122 07:39:46.385945 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"]
Nov 22 07:39:46 crc kubenswrapper[4853]: I1122 07:39:46.880254 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-g6crm"
Nov 22 07:39:46 crc kubenswrapper[4853]: I1122 07:39:46.945578 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3ead9cf-10b5-45ec-82e2-9083c221e150-utilities\") pod \"e3ead9cf-10b5-45ec-82e2-9083c221e150\" (UID: \"e3ead9cf-10b5-45ec-82e2-9083c221e150\") "
Nov 22 07:39:46 crc kubenswrapper[4853]: I1122 07:39:46.945896 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pgbg8\" (UniqueName: \"kubernetes.io/projected/e3ead9cf-10b5-45ec-82e2-9083c221e150-kube-api-access-pgbg8\") pod \"e3ead9cf-10b5-45ec-82e2-9083c221e150\" (UID: \"e3ead9cf-10b5-45ec-82e2-9083c221e150\") "
Nov 22 07:39:46 crc kubenswrapper[4853]: I1122 07:39:46.946070 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3ead9cf-10b5-45ec-82e2-9083c221e150-catalog-content\") pod \"e3ead9cf-10b5-45ec-82e2-9083c221e150\" (UID: \"e3ead9cf-10b5-45ec-82e2-9083c221e150\") "
Nov 22 07:39:46 crc kubenswrapper[4853]: I1122 07:39:46.946572 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e3ead9cf-10b5-45ec-82e2-9083c221e150-utilities" (OuterVolumeSpecName: "utilities") pod "e3ead9cf-10b5-45ec-82e2-9083c221e150" (UID: "e3ead9cf-10b5-45ec-82e2-9083c221e150"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 22 07:39:46 crc kubenswrapper[4853]: I1122 07:39:46.946733 4853 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3ead9cf-10b5-45ec-82e2-9083c221e150-utilities\") on node \"crc\" DevicePath \"\""
Nov 22 07:39:46 crc kubenswrapper[4853]: I1122 07:39:46.979736 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3ead9cf-10b5-45ec-82e2-9083c221e150-kube-api-access-pgbg8" (OuterVolumeSpecName: "kube-api-access-pgbg8") pod "e3ead9cf-10b5-45ec-82e2-9083c221e150" (UID: "e3ead9cf-10b5-45ec-82e2-9083c221e150"). InnerVolumeSpecName "kube-api-access-pgbg8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 22 07:39:47 crc kubenswrapper[4853]: I1122 07:39:46.999989 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e3ead9cf-10b5-45ec-82e2-9083c221e150-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e3ead9cf-10b5-45ec-82e2-9083c221e150" (UID: "e3ead9cf-10b5-45ec-82e2-9083c221e150"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 22 07:39:47 crc kubenswrapper[4853]: I1122 07:39:47.036944 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"41a12382-0497-4150-b1bb-002d4df97f20","Type":"ContainerStarted","Data":"9e0bcdf8fc60bdaa6cc9ba4824aac4a7eb554c43ba44e4608533d1f2a446cb71"}
Nov 22 07:39:47 crc kubenswrapper[4853]: I1122 07:39:47.042430 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g6crm" event={"ID":"e3ead9cf-10b5-45ec-82e2-9083c221e150","Type":"ContainerDied","Data":"b0b55301a3adc57fe15007ad7afc608ba662eace2fda2b69d8fd857f98a32118"}
Nov 22 07:39:47 crc kubenswrapper[4853]: I1122 07:39:47.042503 4853 scope.go:117] "RemoveContainer" containerID="db9973b3e3d2b327ce59afd23637406d3b7d1f71698fbbb6b75902f617d0ec61"
Nov 22 07:39:47 crc kubenswrapper[4853]: I1122 07:39:47.042516 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-g6crm"
Nov 22 07:39:47 crc kubenswrapper[4853]: I1122 07:39:47.049546 4853 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3ead9cf-10b5-45ec-82e2-9083c221e150-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 22 07:39:47 crc kubenswrapper[4853]: I1122 07:39:47.049604 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pgbg8\" (UniqueName: \"kubernetes.io/projected/e3ead9cf-10b5-45ec-82e2-9083c221e150-kube-api-access-pgbg8\") on node \"crc\" DevicePath \"\""
Nov 22 07:39:47 crc kubenswrapper[4853]: I1122 07:39:47.291103 4853 scope.go:117] "RemoveContainer" containerID="9514c7a722eaeaf5b2f0650875dd3e644de6deee2140bdd8adc5cd8724902bf8"
Nov 22 07:39:47 crc kubenswrapper[4853]: I1122 07:39:47.319907 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-g6crm"]
Nov 22 07:39:47 crc kubenswrapper[4853]: I1122 07:39:47.340350 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-g6crm"]
Nov 22 07:39:47 crc kubenswrapper[4853]: I1122 07:39:47.421412 4853 scope.go:117] "RemoveContainer" containerID="c56fc9de6928234567717db0380d59af58729a7f274dd6902ae9a730f8f6c053"
Nov 22 07:39:47 crc kubenswrapper[4853]: I1122 07:39:47.686723 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-twqq5"]
Nov 22 07:39:47 crc kubenswrapper[4853]: E1122 07:39:47.687651 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3ead9cf-10b5-45ec-82e2-9083c221e150" containerName="extract-content"
Nov 22 07:39:47 crc kubenswrapper[4853]: I1122 07:39:47.687674 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3ead9cf-10b5-45ec-82e2-9083c221e150" containerName="extract-content"
Nov 22 07:39:47 crc kubenswrapper[4853]: E1122 07:39:47.687689 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3ead9cf-10b5-45ec-82e2-9083c221e150" containerName="extract-utilities"
Nov 22 07:39:47 crc kubenswrapper[4853]: I1122 07:39:47.687717 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3ead9cf-10b5-45ec-82e2-9083c221e150" containerName="extract-utilities"
Nov 22 07:39:47 crc kubenswrapper[4853]: E1122 07:39:47.687774 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3ead9cf-10b5-45ec-82e2-9083c221e150" containerName="registry-server"
Nov 22 07:39:47 crc kubenswrapper[4853]: I1122 07:39:47.687784 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3ead9cf-10b5-45ec-82e2-9083c221e150" containerName="registry-server"
Nov 22 07:39:47 crc kubenswrapper[4853]: I1122 07:39:47.688176 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3ead9cf-10b5-45ec-82e2-9083c221e150" containerName="registry-server"
Nov 22 07:39:47 crc kubenswrapper[4853]: I1122 07:39:47.689587 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-twqq5"
Nov 22 07:39:47 crc kubenswrapper[4853]: I1122 07:39:47.692875 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data"
Nov 22 07:39:47 crc kubenswrapper[4853]: I1122 07:39:47.693437 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-cxqr6"
Nov 22 07:39:47 crc kubenswrapper[4853]: I1122 07:39:47.705577 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-twqq5"]
Nov 22 07:39:47 crc kubenswrapper[4853]: I1122 07:39:47.767985 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e3ead9cf-10b5-45ec-82e2-9083c221e150" path="/var/lib/kubelet/pods/e3ead9cf-10b5-45ec-82e2-9083c221e150/volumes"
Nov 22 07:39:47 crc kubenswrapper[4853]: I1122 07:39:47.770364 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2dc7c1e-0083-4eab-80f2-eec435f5c97a-combined-ca-bundle\") pod \"glance-db-sync-twqq5\" (UID: \"e2dc7c1e-0083-4eab-80f2-eec435f5c97a\") " pod="openstack/glance-db-sync-twqq5"
Nov 22 07:39:47 crc kubenswrapper[4853]: I1122 07:39:47.770407 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2dc7c1e-0083-4eab-80f2-eec435f5c97a-config-data\") pod \"glance-db-sync-twqq5\" (UID: \"e2dc7c1e-0083-4eab-80f2-eec435f5c97a\") " pod="openstack/glance-db-sync-twqq5"
Nov 22 07:39:47 crc kubenswrapper[4853]: I1122 07:39:47.770673 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5ljd7\" (UniqueName: \"kubernetes.io/projected/e2dc7c1e-0083-4eab-80f2-eec435f5c97a-kube-api-access-5ljd7\") pod \"glance-db-sync-twqq5\" (UID: \"e2dc7c1e-0083-4eab-80f2-eec435f5c97a\") " pod="openstack/glance-db-sync-twqq5"
Nov 22 07:39:47 crc kubenswrapper[4853]: I1122 07:39:47.771169 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e2dc7c1e-0083-4eab-80f2-eec435f5c97a-db-sync-config-data\") pod \"glance-db-sync-twqq5\" (UID: \"e2dc7c1e-0083-4eab-80f2-eec435f5c97a\") " pod="openstack/glance-db-sync-twqq5"
Nov 22 07:39:47 crc kubenswrapper[4853]: I1122 07:39:47.875255 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2dc7c1e-0083-4eab-80f2-eec435f5c97a-combined-ca-bundle\") pod \"glance-db-sync-twqq5\" (UID: \"e2dc7c1e-0083-4eab-80f2-eec435f5c97a\") " pod="openstack/glance-db-sync-twqq5"
Nov 22 07:39:47 crc kubenswrapper[4853]: I1122 07:39:47.877123 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2dc7c1e-0083-4eab-80f2-eec435f5c97a-config-data\") pod \"glance-db-sync-twqq5\" (UID: \"e2dc7c1e-0083-4eab-80f2-eec435f5c97a\") " pod="openstack/glance-db-sync-twqq5"
Nov 22 07:39:47 crc kubenswrapper[4853]: I1122 07:39:47.877603 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5ljd7\" (UniqueName: \"kubernetes.io/projected/e2dc7c1e-0083-4eab-80f2-eec435f5c97a-kube-api-access-5ljd7\") pod \"glance-db-sync-twqq5\" (UID: \"e2dc7c1e-0083-4eab-80f2-eec435f5c97a\") " pod="openstack/glance-db-sync-twqq5"
Nov 22 07:39:47 crc kubenswrapper[4853]: I1122 07:39:47.877923 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e2dc7c1e-0083-4eab-80f2-eec435f5c97a-db-sync-config-data\") pod \"glance-db-sync-twqq5\" (UID: \"e2dc7c1e-0083-4eab-80f2-eec435f5c97a\") " pod="openstack/glance-db-sync-twqq5"
Nov 22 07:39:47 crc kubenswrapper[4853]: I1122 07:39:47.884249 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e2dc7c1e-0083-4eab-80f2-eec435f5c97a-db-sync-config-data\") pod \"glance-db-sync-twqq5\" (UID: \"e2dc7c1e-0083-4eab-80f2-eec435f5c97a\") " pod="openstack/glance-db-sync-twqq5"
Nov 22 07:39:47 crc kubenswrapper[4853]: I1122 07:39:47.884289 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2dc7c1e-0083-4eab-80f2-eec435f5c97a-config-data\") pod \"glance-db-sync-twqq5\" (UID: \"e2dc7c1e-0083-4eab-80f2-eec435f5c97a\") " pod="openstack/glance-db-sync-twqq5"
Nov 22 07:39:47 crc kubenswrapper[4853]: I1122 07:39:47.889673 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2dc7c1e-0083-4eab-80f2-eec435f5c97a-combined-ca-bundle\") pod \"glance-db-sync-twqq5\" (UID: \"e2dc7c1e-0083-4eab-80f2-eec435f5c97a\") " pod="openstack/glance-db-sync-twqq5"
Nov 22 07:39:47 crc kubenswrapper[4853]: I1122 07:39:47.896454 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5ljd7\" (UniqueName: \"kubernetes.io/projected/e2dc7c1e-0083-4eab-80f2-eec435f5c97a-kube-api-access-5ljd7\") pod \"glance-db-sync-twqq5\" (UID: \"e2dc7c1e-0083-4eab-80f2-eec435f5c97a\") " pod="openstack/glance-db-sync-twqq5"
Nov 22 07:39:48 crc kubenswrapper[4853]: I1122 07:39:48.021619 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-twqq5"
Nov 22 07:39:48 crc kubenswrapper[4853]: I1122 07:39:48.070612 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"d4427668-9ef6-4594-ae35-ff983a6af324","Type":"ContainerStarted","Data":"b4e2fb92e94ee4cfb4de87cc2764e5d6b437d5fdfd8b465ecf0e93da388957e1"}
Nov 22 07:39:49 crc kubenswrapper[4853]: I1122 07:39:49.084370 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"78a8c429-b429-44e1-be5e-3eb355ae4d54","Type":"ContainerStarted","Data":"6f1ca0bd5d2917ab4b8b0a326bf4e67ad2ef65e15680ee4f8b3bdc346f4d99de"}
Nov 22 07:39:49 crc kubenswrapper[4853]: I1122 07:39:49.918731 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-twqq5"]
Nov 22 07:39:50 crc kubenswrapper[4853]: W1122 07:39:50.071295 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode2dc7c1e_0083_4eab_80f2_eec435f5c97a.slice/crio-699b6e230cc9f91ecf6474e7ddccf3b50fbc510217b33e0ab3091b85f6358074 WatchSource:0}: Error finding container 699b6e230cc9f91ecf6474e7ddccf3b50fbc510217b33e0ab3091b85f6358074: Status 404 returned error can't find the container with id 699b6e230cc9f91ecf6474e7ddccf3b50fbc510217b33e0ab3091b85f6358074
Nov 22 07:39:50 crc kubenswrapper[4853]: I1122 07:39:50.099933 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"d4427668-9ef6-4594-ae35-ff983a6af324","Type":"ContainerStarted","Data":"d5ef86da752fe95b6b777b10fe3992c1f189e581f26e5abe441a43414e7df68e"}
Nov 22 07:39:50 crc kubenswrapper[4853]: I1122 07:39:50.101811 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-twqq5" event={"ID":"e2dc7c1e-0083-4eab-80f2-eec435f5c97a","Type":"ContainerStarted","Data":"699b6e230cc9f91ecf6474e7ddccf3b50fbc510217b33e0ab3091b85f6358074"}
Nov 22 07:39:50 crc kubenswrapper[4853]: I1122 07:39:50.376949 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-tv8h9"
Nov 22 07:39:50 crc kubenswrapper[4853]: I1122 07:39:50.472675 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-tv8h9"
Nov 22 07:39:50 crc kubenswrapper[4853]: I1122 07:39:50.652401 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tv8h9"]
Nov 22 07:39:50 crc kubenswrapper[4853]: I1122 07:39:50.716150 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-tdfrh"]
Nov 22 07:39:50 crc kubenswrapper[4853]: I1122 07:39:50.716455 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-tdfrh" podUID="4e241aed-043d-4b92-9f04-2a36511cff3b" containerName="registry-server" containerID="cri-o://1a56524613430b8914b9ad2a7b0102b9775d79d1b1f35a102b1b5d13aec37ee4" gracePeriod=2
Nov 22 07:39:50 crc kubenswrapper[4853]: I1122 07:39:50.748257 4853 scope.go:117] "RemoveContainer" containerID="1e441baa32afe9d9e98e192afda6a47d949c68ceb8edb814ef148dd3d84f45b1"
Nov 22 07:39:50 crc kubenswrapper[4853]: E1122 07:39:50.748857 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8"
Nov 22 07:39:51 crc kubenswrapper[4853]: I1122 07:39:51.883904 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tdfrh"
Nov 22 07:39:51 crc kubenswrapper[4853]: I1122 07:39:51.907935 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e241aed-043d-4b92-9f04-2a36511cff3b-catalog-content\") pod \"4e241aed-043d-4b92-9f04-2a36511cff3b\" (UID: \"4e241aed-043d-4b92-9f04-2a36511cff3b\") "
Nov 22 07:39:51 crc kubenswrapper[4853]: I1122 07:39:51.908028 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e241aed-043d-4b92-9f04-2a36511cff3b-utilities\") pod \"4e241aed-043d-4b92-9f04-2a36511cff3b\" (UID: \"4e241aed-043d-4b92-9f04-2a36511cff3b\") "
Nov 22 07:39:51 crc kubenswrapper[4853]: I1122 07:39:51.908169 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9nv2v\" (UniqueName: \"kubernetes.io/projected/4e241aed-043d-4b92-9f04-2a36511cff3b-kube-api-access-9nv2v\") pod \"4e241aed-043d-4b92-9f04-2a36511cff3b\" (UID: \"4e241aed-043d-4b92-9f04-2a36511cff3b\") "
Nov 22 07:39:52 crc kubenswrapper[4853]: I1122 07:39:52.133305 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"d4427668-9ef6-4594-ae35-ff983a6af324","Type":"ContainerStarted","Data":"92e4d54282f704b334d3f296a636980d8b4cd13e49d0a885da9947021ee14aba"}
Nov 22 07:39:52 crc kubenswrapper[4853]: I1122 07:39:52.136567 4853 generic.go:334] "Generic (PLEG): container finished" podID="4e241aed-043d-4b92-9f04-2a36511cff3b" containerID="1a56524613430b8914b9ad2a7b0102b9775d79d1b1f35a102b1b5d13aec37ee4" exitCode=0
Nov 22 07:39:52 crc kubenswrapper[4853]: I1122 07:39:52.137186 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tdfrh"
Nov 22 07:39:52 crc kubenswrapper[4853]: I1122 07:39:52.137842 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tdfrh" event={"ID":"4e241aed-043d-4b92-9f04-2a36511cff3b","Type":"ContainerDied","Data":"1a56524613430b8914b9ad2a7b0102b9775d79d1b1f35a102b1b5d13aec37ee4"}
Nov 22 07:39:52 crc kubenswrapper[4853]: I1122 07:39:52.137881 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tdfrh" event={"ID":"4e241aed-043d-4b92-9f04-2a36511cff3b","Type":"ContainerDied","Data":"a41dbaf8c5f6a4396bd27d36f769ca6a4ad4ea8ea2eb05b38104a822159fd768"}
Nov 22 07:39:52 crc kubenswrapper[4853]: I1122 07:39:52.137901 4853 scope.go:117] "RemoveContainer" containerID="1a56524613430b8914b9ad2a7b0102b9775d79d1b1f35a102b1b5d13aec37ee4"
Nov 22 07:39:52 crc kubenswrapper[4853]: I1122 07:39:52.173144 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4e241aed-043d-4b92-9f04-2a36511cff3b-utilities" (OuterVolumeSpecName: "utilities") pod "4e241aed-043d-4b92-9f04-2a36511cff3b" (UID: "4e241aed-043d-4b92-9f04-2a36511cff3b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 22 07:39:52 crc kubenswrapper[4853]: I1122 07:39:52.209980 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e241aed-043d-4b92-9f04-2a36511cff3b-kube-api-access-9nv2v" (OuterVolumeSpecName: "kube-api-access-9nv2v") pod "4e241aed-043d-4b92-9f04-2a36511cff3b" (UID: "4e241aed-043d-4b92-9f04-2a36511cff3b"). InnerVolumeSpecName "kube-api-access-9nv2v". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 22 07:39:52 crc kubenswrapper[4853]: I1122 07:39:52.221298 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9nv2v\" (UniqueName: \"kubernetes.io/projected/4e241aed-043d-4b92-9f04-2a36511cff3b-kube-api-access-9nv2v\") on node \"crc\" DevicePath \"\""
Nov 22 07:39:52 crc kubenswrapper[4853]: I1122 07:39:52.221346 4853 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e241aed-043d-4b92-9f04-2a36511cff3b-utilities\") on node \"crc\" DevicePath \"\""
Nov 22 07:39:52 crc kubenswrapper[4853]: I1122 07:39:52.223003 4853 scope.go:117] "RemoveContainer" containerID="80a844bb7c41bbd15ee0c57e4dd83c07112803ff2aae049b196d7437703899fc"
Nov 22 07:39:52 crc kubenswrapper[4853]: I1122 07:39:52.597251 4853 scope.go:117] "RemoveContainer" containerID="5a950d09cc06e00b39bb489bb05e204a79434de4855956fcd9c6854b34967658"
Nov 22 07:39:52 crc kubenswrapper[4853]: I1122 07:39:52.613495 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4e241aed-043d-4b92-9f04-2a36511cff3b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4e241aed-043d-4b92-9f04-2a36511cff3b" (UID: "4e241aed-043d-4b92-9f04-2a36511cff3b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 22 07:39:52 crc kubenswrapper[4853]: I1122 07:39:52.631046 4853 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e241aed-043d-4b92-9f04-2a36511cff3b-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 22 07:39:52 crc kubenswrapper[4853]: I1122 07:39:52.638109 4853 scope.go:117] "RemoveContainer" containerID="1a56524613430b8914b9ad2a7b0102b9775d79d1b1f35a102b1b5d13aec37ee4"
Nov 22 07:39:52 crc kubenswrapper[4853]: E1122 07:39:52.639032 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1a56524613430b8914b9ad2a7b0102b9775d79d1b1f35a102b1b5d13aec37ee4\": container with ID starting with 1a56524613430b8914b9ad2a7b0102b9775d79d1b1f35a102b1b5d13aec37ee4 not found: ID does not exist" containerID="1a56524613430b8914b9ad2a7b0102b9775d79d1b1f35a102b1b5d13aec37ee4"
Nov 22 07:39:52 crc kubenswrapper[4853]: I1122 07:39:52.639109 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1a56524613430b8914b9ad2a7b0102b9775d79d1b1f35a102b1b5d13aec37ee4"} err="failed to get container status \"1a56524613430b8914b9ad2a7b0102b9775d79d1b1f35a102b1b5d13aec37ee4\": rpc error: code = NotFound desc = could not find container \"1a56524613430b8914b9ad2a7b0102b9775d79d1b1f35a102b1b5d13aec37ee4\": container with ID starting with 1a56524613430b8914b9ad2a7b0102b9775d79d1b1f35a102b1b5d13aec37ee4 not found: ID does not exist"
Nov 22 07:39:52 crc kubenswrapper[4853]: I1122 07:39:52.639160 4853 scope.go:117] "RemoveContainer" containerID="80a844bb7c41bbd15ee0c57e4dd83c07112803ff2aae049b196d7437703899fc"
Nov 22 07:39:52 crc kubenswrapper[4853]: E1122 07:39:52.639651 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"80a844bb7c41bbd15ee0c57e4dd83c07112803ff2aae049b196d7437703899fc\": container with ID starting with 80a844bb7c41bbd15ee0c57e4dd83c07112803ff2aae049b196d7437703899fc not found: ID does not exist" containerID="80a844bb7c41bbd15ee0c57e4dd83c07112803ff2aae049b196d7437703899fc"
Nov 22 07:39:52 crc kubenswrapper[4853]: I1122 07:39:52.639708 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"80a844bb7c41bbd15ee0c57e4dd83c07112803ff2aae049b196d7437703899fc"} err="failed to get container status \"80a844bb7c41bbd15ee0c57e4dd83c07112803ff2aae049b196d7437703899fc\": rpc error: code = NotFound desc = could not find container \"80a844bb7c41bbd15ee0c57e4dd83c07112803ff2aae049b196d7437703899fc\": container with ID starting with 80a844bb7c41bbd15ee0c57e4dd83c07112803ff2aae049b196d7437703899fc not found: ID does not exist"
Nov 22 07:39:52 crc kubenswrapper[4853]: I1122 07:39:52.639766 4853 scope.go:117] "RemoveContainer" containerID="5a950d09cc06e00b39bb489bb05e204a79434de4855956fcd9c6854b34967658"
Nov 22 07:39:52 crc kubenswrapper[4853]: E1122 07:39:52.640100 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5a950d09cc06e00b39bb489bb05e204a79434de4855956fcd9c6854b34967658\": container with ID starting with 5a950d09cc06e00b39bb489bb05e204a79434de4855956fcd9c6854b34967658 not found: ID does not exist" containerID="5a950d09cc06e00b39bb489bb05e204a79434de4855956fcd9c6854b34967658"
Nov 22 07:39:52 crc kubenswrapper[4853]: I1122 07:39:52.640160 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a950d09cc06e00b39bb489bb05e204a79434de4855956fcd9c6854b34967658"} err="failed to get container status \"5a950d09cc06e00b39bb489bb05e204a79434de4855956fcd9c6854b34967658\": rpc error: code = NotFound desc = could not find container \"5a950d09cc06e00b39bb489bb05e204a79434de4855956fcd9c6854b34967658\": container with ID starting with 5a950d09cc06e00b39bb489bb05e204a79434de4855956fcd9c6854b34967658 not found: ID does not exist"
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a950d09cc06e00b39bb489bb05e204a79434de4855956fcd9c6854b34967658"} err="failed to get container status \"5a950d09cc06e00b39bb489bb05e204a79434de4855956fcd9c6854b34967658\": rpc error: code = NotFound desc = could not find container \"5a950d09cc06e00b39bb489bb05e204a79434de4855956fcd9c6854b34967658\": container with ID starting with 5a950d09cc06e00b39bb489bb05e204a79434de4855956fcd9c6854b34967658 not found: ID does not exist" Nov 22 07:39:52 crc kubenswrapper[4853]: I1122 07:39:52.782529 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-tdfrh"] Nov 22 07:39:52 crc kubenswrapper[4853]: I1122 07:39:52.791537 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-tdfrh"] Nov 22 07:39:53 crc kubenswrapper[4853]: I1122 07:39:53.765623 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4e241aed-043d-4b92-9f04-2a36511cff3b" path="/var/lib/kubelet/pods/4e241aed-043d-4b92-9f04-2a36511cff3b/volumes" Nov 22 07:39:54 crc kubenswrapper[4853]: I1122 07:39:54.166372 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"d4427668-9ef6-4594-ae35-ff983a6af324","Type":"ContainerStarted","Data":"774dec2d679a8dbcd9527926e7b411c398c137133a3eaeeb0eef23f4d49d6c7a"} Nov 22 07:39:56 crc kubenswrapper[4853]: I1122 07:39:56.191443 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"41a12382-0497-4150-b1bb-002d4df97f20","Type":"ContainerStarted","Data":"1ddf0290074287948ab8425e368fb8d4158583b7821046c8f62b935c219ccd0f"} Nov 22 07:39:56 crc kubenswrapper[4853]: I1122 07:39:56.224154 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-0" podStartSLOduration=2.917331581 podStartE2EDuration="11.224121232s" podCreationTimestamp="2025-11-22 07:39:45 +0000 UTC" firstStartedPulling="2025-11-22 07:39:46.895722934 +0000 UTC m=+1785.736345560" lastFinishedPulling="2025-11-22 07:39:55.202512585 +0000 UTC m=+1794.043135211" observedRunningTime="2025-11-22 07:39:56.211672052 +0000 UTC m=+1795.052294678" watchObservedRunningTime="2025-11-22 07:39:56.224121232 +0000 UTC m=+1795.064743858" Nov 22 07:39:57 crc kubenswrapper[4853]: I1122 07:39:57.207028 4853 generic.go:334] "Generic (PLEG): container finished" podID="d0e9072b-3e2a-4283-a697-8411049c5161" containerID="191995656bf4f31e2276dad55fca2b424abcadafb5511c17ace128a41f95ec41" exitCode=0 Nov 22 07:39:57 crc kubenswrapper[4853]: I1122 07:39:57.207468 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"d0e9072b-3e2a-4283-a697-8411049c5161","Type":"ContainerDied","Data":"191995656bf4f31e2276dad55fca2b424abcadafb5511c17ace128a41f95ec41"} Nov 22 07:39:58 crc kubenswrapper[4853]: I1122 07:39:58.221622 4853 generic.go:334] "Generic (PLEG): container finished" podID="2eadd806-7143-46ba-9e49-f19ac0bd52bd" containerID="8e8749dd25d8b57e51e1b4ef9317ecadcde4606ab344737ff6cd9ad213c23386" exitCode=0 Nov 22 07:39:58 crc kubenswrapper[4853]: I1122 07:39:58.221717 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"2eadd806-7143-46ba-9e49-f19ac0bd52bd","Type":"ContainerDied","Data":"8e8749dd25d8b57e51e1b4ef9317ecadcde4606ab344737ff6cd9ad213c23386"} Nov 22 07:39:58 crc kubenswrapper[4853]: I1122 07:39:58.224046 4853 generic.go:334] "Generic (PLEG): container finished" 
podID="78a8c429-b429-44e1-be5e-3eb355ae4d54" containerID="6f1ca0bd5d2917ab4b8b0a326bf4e67ad2ef65e15680ee4f8b3bdc346f4d99de" exitCode=0 Nov 22 07:39:58 crc kubenswrapper[4853]: I1122 07:39:58.224082 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"78a8c429-b429-44e1-be5e-3eb355ae4d54","Type":"ContainerDied","Data":"6f1ca0bd5d2917ab4b8b0a326bf4e67ad2ef65e15680ee4f8b3bdc346f4d99de"} Nov 22 07:39:59 crc kubenswrapper[4853]: I1122 07:39:59.239938 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"d0e9072b-3e2a-4283-a697-8411049c5161","Type":"ContainerStarted","Data":"0e003a69a0e991d51e41353ef249892b756dc703253c447166b6a6ebeafb41ba"} Nov 22 07:39:59 crc kubenswrapper[4853]: I1122 07:39:59.240669 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Nov 22 07:39:59 crc kubenswrapper[4853]: I1122 07:39:59.281323 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=44.622297618 podStartE2EDuration="4m2.28129559s" podCreationTimestamp="2025-11-22 07:35:57 +0000 UTC" firstStartedPulling="2025-11-22 07:36:00.265773718 +0000 UTC m=+1559.106396344" lastFinishedPulling="2025-11-22 07:39:17.92477169 +0000 UTC m=+1756.765394316" observedRunningTime="2025-11-22 07:39:59.276619255 +0000 UTC m=+1798.117241901" watchObservedRunningTime="2025-11-22 07:39:59.28129559 +0000 UTC m=+1798.121918226" Nov 22 07:40:01 crc kubenswrapper[4853]: I1122 07:40:01.748319 4853 scope.go:117] "RemoveContainer" containerID="1e441baa32afe9d9e98e192afda6a47d949c68ceb8edb814ef148dd3d84f45b1" Nov 22 07:40:01 crc kubenswrapper[4853]: E1122 07:40:01.749385 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" Nov 22 07:40:09 crc kubenswrapper[4853]: I1122 07:40:09.584495 4853 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="d0e9072b-3e2a-4283-a697-8411049c5161" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.130:5671: connect: connection refused" Nov 22 07:40:16 crc kubenswrapper[4853]: I1122 07:40:16.748919 4853 scope.go:117] "RemoveContainer" containerID="1e441baa32afe9d9e98e192afda6a47d949c68ceb8edb814ef148dd3d84f45b1" Nov 22 07:40:16 crc kubenswrapper[4853]: E1122 07:40:16.750007 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" Nov 22 07:40:18 crc kubenswrapper[4853]: E1122 07:40:18.747932 4853 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-glance-api:current-podified" Nov 22 07:40:18 crc kubenswrapper[4853]: E1122 07:40:18.748575 4853 
Nov 22 07:40:18 crc kubenswrapper[4853]: E1122 07:40:18.748575 4853 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:glance-db-sync,Image:quay.io/podified-antelope-centos9/openstack-glance-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/glance/glance.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5ljd7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42415,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42415,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-db-sync-twqq5_openstack(e2dc7c1e-0083-4eab-80f2-eec435f5c97a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Nov 22 07:40:18 crc kubenswrapper[4853]: E1122 07:40:18.749688 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/glance-db-sync-twqq5" podUID="e2dc7c1e-0083-4eab-80f2-eec435f5c97a"
Nov 22 07:40:19 crc kubenswrapper[4853]: E1122 07:40:19.487107 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-glance-api:current-podified\\\"\"" pod="openstack/glance-db-sync-twqq5" podUID="e2dc7c1e-0083-4eab-80f2-eec435f5c97a"
Nov 22 07:40:19 crc kubenswrapper[4853]: I1122 07:40:19.582849 4853 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="d0e9072b-3e2a-4283-a697-8411049c5161" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.130:5671: connect: connection refused"
Nov 22 07:40:20 crc kubenswrapper[4853]: I1122 07:40:20.508579 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"d4427668-9ef6-4594-ae35-ff983a6af324","Type":"ContainerStarted","Data":"a36a027b7804f22c0a41ac728a3dafd7d17dc170ce9e2c2ec4cea45fd2631f05"}
event={"ID":"d4427668-9ef6-4594-ae35-ff983a6af324","Type":"ContainerStarted","Data":"a36a027b7804f22c0a41ac728a3dafd7d17dc170ce9e2c2ec4cea45fd2631f05"} Nov 22 07:40:20 crc kubenswrapper[4853]: I1122 07:40:20.509398 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"d4427668-9ef6-4594-ae35-ff983a6af324","Type":"ContainerStarted","Data":"e992e74005f8ddd6541fd278c8bc91ed34397b34ae6807f76ad4a6e69f193582"} Nov 22 07:40:20 crc kubenswrapper[4853]: I1122 07:40:20.509418 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"d4427668-9ef6-4594-ae35-ff983a6af324","Type":"ContainerStarted","Data":"877f12184c625601929deb88bf706cf611d4078be2ae3feb41a340de18d3dc2e"} Nov 22 07:40:20 crc kubenswrapper[4853]: I1122 07:40:20.511158 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"2eadd806-7143-46ba-9e49-f19ac0bd52bd","Type":"ContainerStarted","Data":"9c5bd95c35228d58bf34b8225ce8dbcb5740ed4739cc97152d70ae88d49e62d7"} Nov 22 07:40:20 crc kubenswrapper[4853]: I1122 07:40:20.511383 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Nov 22 07:40:20 crc kubenswrapper[4853]: I1122 07:40:20.514373 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"78a8c429-b429-44e1-be5e-3eb355ae4d54","Type":"ContainerStarted","Data":"12e6d4696709473538943b6495249d35041962d04344034809fba4063a4beca2"} Nov 22 07:40:20 crc kubenswrapper[4853]: I1122 07:40:20.553723 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=67.769973632 podStartE2EDuration="4m23.553697637s" podCreationTimestamp="2025-11-22 07:35:57 +0000 UTC" firstStartedPulling="2025-11-22 07:36:00.104308784 +0000 UTC m=+1558.944931410" lastFinishedPulling="2025-11-22 07:39:15.888032789 +0000 UTC m=+1754.728655415" observedRunningTime="2025-11-22 07:40:20.539412486 +0000 UTC m=+1819.380035132" watchObservedRunningTime="2025-11-22 07:40:20.553697637 +0000 UTC m=+1819.394320263" Nov 22 07:40:21 crc kubenswrapper[4853]: I1122 07:40:21.532191 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"d4427668-9ef6-4594-ae35-ff983a6af324","Type":"ContainerStarted","Data":"f94e8efd3e9616bae96ac3fe4bae2b3300ac14ece1293a2326b6685bef7eb0cb"} Nov 22 07:40:21 crc kubenswrapper[4853]: I1122 07:40:21.532711 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"d4427668-9ef6-4594-ae35-ff983a6af324","Type":"ContainerStarted","Data":"3e90d4cc906036e70315d34c27a9a2830159a0069b2b19df5e0f7b11e4ecde6e"} Nov 22 07:40:23 crc kubenswrapper[4853]: I1122 07:40:23.565805 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"d4427668-9ef6-4594-ae35-ff983a6af324","Type":"ContainerStarted","Data":"1692810fe89d8414dcff1ed11e443703aef2f0dfb58b2774aa82a9f7e47b763f"} Nov 22 07:40:24 crc kubenswrapper[4853]: I1122 07:40:24.583261 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"78a8c429-b429-44e1-be5e-3eb355ae4d54","Type":"ContainerStarted","Data":"12f992bf23edcde02c31fa63958e3228599e115e1bfc58f63da7f2c88c32a1a9"} Nov 22 07:40:24 crc kubenswrapper[4853]: I1122 07:40:24.604420 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"d4427668-9ef6-4594-ae35-ff983a6af324","Type":"ContainerStarted","Data":"73abed23878421c78c642ce09af12d5a24e2a53ea549e7499870e6344dfcb83f"} Nov 22 07:40:25 crc kubenswrapper[4853]: I1122 07:40:25.676926 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=72.94506759 podStartE2EDuration="1m51.676898507s" podCreationTimestamp="2025-11-22 07:38:34 +0000 UTC" firstStartedPulling="2025-11-22 07:39:40.614370006 +0000 UTC m=+1779.454992632" lastFinishedPulling="2025-11-22 07:40:19.346200923 +0000 UTC m=+1818.186823549" observedRunningTime="2025-11-22 07:40:25.666370635 +0000 UTC m=+1824.506993271" watchObservedRunningTime="2025-11-22 07:40:25.676898507 +0000 UTC m=+1824.517521133" Nov 22 07:40:26 crc kubenswrapper[4853]: I1122 07:40:26.099363 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6d5b6d6b67-mw24j"] Nov 22 07:40:26 crc kubenswrapper[4853]: E1122 07:40:26.100489 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e241aed-043d-4b92-9f04-2a36511cff3b" containerName="extract-utilities" Nov 22 07:40:26 crc kubenswrapper[4853]: I1122 07:40:26.100508 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e241aed-043d-4b92-9f04-2a36511cff3b" containerName="extract-utilities" Nov 22 07:40:26 crc kubenswrapper[4853]: E1122 07:40:26.100542 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e241aed-043d-4b92-9f04-2a36511cff3b" containerName="extract-content" Nov 22 07:40:26 crc kubenswrapper[4853]: I1122 07:40:26.100548 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e241aed-043d-4b92-9f04-2a36511cff3b" containerName="extract-content" Nov 22 07:40:26 crc kubenswrapper[4853]: E1122 07:40:26.100633 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e241aed-043d-4b92-9f04-2a36511cff3b" containerName="registry-server" Nov 22 07:40:26 crc kubenswrapper[4853]: I1122 07:40:26.100644 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e241aed-043d-4b92-9f04-2a36511cff3b" containerName="registry-server" Nov 22 07:40:26 crc kubenswrapper[4853]: I1122 07:40:26.101036 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e241aed-043d-4b92-9f04-2a36511cff3b" containerName="registry-server" Nov 22 07:40:26 crc kubenswrapper[4853]: I1122 07:40:26.103454 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6d5b6d6b67-mw24j" Nov 22 07:40:26 crc kubenswrapper[4853]: I1122 07:40:26.113803 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Nov 22 07:40:26 crc kubenswrapper[4853]: I1122 07:40:26.138409 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6d5b6d6b67-mw24j"] Nov 22 07:40:26 crc kubenswrapper[4853]: I1122 07:40:26.181892 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6de9b7c8-6d38-4338-9a33-0084a0981c40-config\") pod \"dnsmasq-dns-6d5b6d6b67-mw24j\" (UID: \"6de9b7c8-6d38-4338-9a33-0084a0981c40\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-mw24j" Nov 22 07:40:26 crc kubenswrapper[4853]: I1122 07:40:26.182111 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pjpjf\" (UniqueName: \"kubernetes.io/projected/6de9b7c8-6d38-4338-9a33-0084a0981c40-kube-api-access-pjpjf\") pod \"dnsmasq-dns-6d5b6d6b67-mw24j\" (UID: \"6de9b7c8-6d38-4338-9a33-0084a0981c40\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-mw24j" Nov 22 07:40:26 crc kubenswrapper[4853]: I1122 07:40:26.182216 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6de9b7c8-6d38-4338-9a33-0084a0981c40-ovsdbserver-sb\") pod \"dnsmasq-dns-6d5b6d6b67-mw24j\" (UID: \"6de9b7c8-6d38-4338-9a33-0084a0981c40\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-mw24j" Nov 22 07:40:26 crc kubenswrapper[4853]: I1122 07:40:26.182259 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6de9b7c8-6d38-4338-9a33-0084a0981c40-dns-swift-storage-0\") pod \"dnsmasq-dns-6d5b6d6b67-mw24j\" (UID: \"6de9b7c8-6d38-4338-9a33-0084a0981c40\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-mw24j" Nov 22 07:40:26 crc kubenswrapper[4853]: I1122 07:40:26.182325 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6de9b7c8-6d38-4338-9a33-0084a0981c40-ovsdbserver-nb\") pod \"dnsmasq-dns-6d5b6d6b67-mw24j\" (UID: \"6de9b7c8-6d38-4338-9a33-0084a0981c40\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-mw24j" Nov 22 07:40:26 crc kubenswrapper[4853]: I1122 07:40:26.182737 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6de9b7c8-6d38-4338-9a33-0084a0981c40-dns-svc\") pod \"dnsmasq-dns-6d5b6d6b67-mw24j\" (UID: \"6de9b7c8-6d38-4338-9a33-0084a0981c40\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-mw24j" Nov 22 07:40:26 crc kubenswrapper[4853]: I1122 07:40:26.285527 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pjpjf\" (UniqueName: \"kubernetes.io/projected/6de9b7c8-6d38-4338-9a33-0084a0981c40-kube-api-access-pjpjf\") pod \"dnsmasq-dns-6d5b6d6b67-mw24j\" (UID: \"6de9b7c8-6d38-4338-9a33-0084a0981c40\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-mw24j" Nov 22 07:40:26 crc kubenswrapper[4853]: I1122 07:40:26.285610 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6de9b7c8-6d38-4338-9a33-0084a0981c40-ovsdbserver-sb\") pod \"dnsmasq-dns-6d5b6d6b67-mw24j\" (UID: 
\"6de9b7c8-6d38-4338-9a33-0084a0981c40\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-mw24j" Nov 22 07:40:26 crc kubenswrapper[4853]: I1122 07:40:26.285640 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6de9b7c8-6d38-4338-9a33-0084a0981c40-dns-swift-storage-0\") pod \"dnsmasq-dns-6d5b6d6b67-mw24j\" (UID: \"6de9b7c8-6d38-4338-9a33-0084a0981c40\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-mw24j" Nov 22 07:40:26 crc kubenswrapper[4853]: I1122 07:40:26.285694 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6de9b7c8-6d38-4338-9a33-0084a0981c40-ovsdbserver-nb\") pod \"dnsmasq-dns-6d5b6d6b67-mw24j\" (UID: \"6de9b7c8-6d38-4338-9a33-0084a0981c40\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-mw24j" Nov 22 07:40:26 crc kubenswrapper[4853]: I1122 07:40:26.285729 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6de9b7c8-6d38-4338-9a33-0084a0981c40-dns-svc\") pod \"dnsmasq-dns-6d5b6d6b67-mw24j\" (UID: \"6de9b7c8-6d38-4338-9a33-0084a0981c40\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-mw24j" Nov 22 07:40:26 crc kubenswrapper[4853]: I1122 07:40:26.285799 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6de9b7c8-6d38-4338-9a33-0084a0981c40-config\") pod \"dnsmasq-dns-6d5b6d6b67-mw24j\" (UID: \"6de9b7c8-6d38-4338-9a33-0084a0981c40\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-mw24j" Nov 22 07:40:26 crc kubenswrapper[4853]: I1122 07:40:26.287598 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6de9b7c8-6d38-4338-9a33-0084a0981c40-dns-swift-storage-0\") pod \"dnsmasq-dns-6d5b6d6b67-mw24j\" (UID: \"6de9b7c8-6d38-4338-9a33-0084a0981c40\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-mw24j" Nov 22 07:40:26 crc kubenswrapper[4853]: I1122 07:40:26.287828 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6de9b7c8-6d38-4338-9a33-0084a0981c40-config\") pod \"dnsmasq-dns-6d5b6d6b67-mw24j\" (UID: \"6de9b7c8-6d38-4338-9a33-0084a0981c40\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-mw24j" Nov 22 07:40:26 crc kubenswrapper[4853]: I1122 07:40:26.287971 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6de9b7c8-6d38-4338-9a33-0084a0981c40-ovsdbserver-sb\") pod \"dnsmasq-dns-6d5b6d6b67-mw24j\" (UID: \"6de9b7c8-6d38-4338-9a33-0084a0981c40\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-mw24j" Nov 22 07:40:26 crc kubenswrapper[4853]: I1122 07:40:26.288083 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6de9b7c8-6d38-4338-9a33-0084a0981c40-ovsdbserver-nb\") pod \"dnsmasq-dns-6d5b6d6b67-mw24j\" (UID: \"6de9b7c8-6d38-4338-9a33-0084a0981c40\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-mw24j" Nov 22 07:40:26 crc kubenswrapper[4853]: I1122 07:40:26.288308 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6de9b7c8-6d38-4338-9a33-0084a0981c40-dns-svc\") pod \"dnsmasq-dns-6d5b6d6b67-mw24j\" (UID: \"6de9b7c8-6d38-4338-9a33-0084a0981c40\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-mw24j" Nov 22 07:40:26 crc kubenswrapper[4853]: 
I1122 07:40:26.315844 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pjpjf\" (UniqueName: \"kubernetes.io/projected/6de9b7c8-6d38-4338-9a33-0084a0981c40-kube-api-access-pjpjf\") pod \"dnsmasq-dns-6d5b6d6b67-mw24j\" (UID: \"6de9b7c8-6d38-4338-9a33-0084a0981c40\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-mw24j" Nov 22 07:40:26 crc kubenswrapper[4853]: I1122 07:40:26.439002 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6d5b6d6b67-mw24j" Nov 22 07:40:26 crc kubenswrapper[4853]: I1122 07:40:26.638067 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"78a8c429-b429-44e1-be5e-3eb355ae4d54","Type":"ContainerStarted","Data":"99781faf1c74df0bd6656360198e803517f6488fcfc171b2169c7403c50c9573"} Nov 22 07:40:26 crc kubenswrapper[4853]: I1122 07:40:26.687453 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=45.687419959 podStartE2EDuration="45.687419959s" podCreationTimestamp="2025-11-22 07:39:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:40:26.683059893 +0000 UTC m=+1825.523682519" watchObservedRunningTime="2025-11-22 07:40:26.687419959 +0000 UTC m=+1825.528042585" Nov 22 07:40:26 crc kubenswrapper[4853]: I1122 07:40:26.964558 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6d5b6d6b67-mw24j"] Nov 22 07:40:26 crc kubenswrapper[4853]: W1122 07:40:26.974352 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6de9b7c8_6d38_4338_9a33_0084a0981c40.slice/crio-48c989ab55d52d6cabd67b8e8fbd292418e528da8c926e43c114e97ba0172a04 WatchSource:0}: Error finding container 48c989ab55d52d6cabd67b8e8fbd292418e528da8c926e43c114e97ba0172a04: Status 404 returned error can't find the container with id 48c989ab55d52d6cabd67b8e8fbd292418e528da8c926e43c114e97ba0172a04 Nov 22 07:40:27 crc kubenswrapper[4853]: I1122 07:40:27.355720 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Nov 22 07:40:27 crc kubenswrapper[4853]: I1122 07:40:27.356338 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Nov 22 07:40:27 crc kubenswrapper[4853]: I1122 07:40:27.359346 4853 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/prometheus-metric-storage-0" podUID="78a8c429-b429-44e1-be5e-3eb355ae4d54" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.0.162:9090/-/ready\": dial tcp 10.217.0.162:9090: connect: connection refused" Nov 22 07:40:27 crc kubenswrapper[4853]: I1122 07:40:27.650940 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d5b6d6b67-mw24j" event={"ID":"6de9b7c8-6d38-4338-9a33-0084a0981c40","Type":"ContainerStarted","Data":"48c989ab55d52d6cabd67b8e8fbd292418e528da8c926e43c114e97ba0172a04"} Nov 22 07:40:28 crc kubenswrapper[4853]: I1122 07:40:28.666929 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d5b6d6b67-mw24j" event={"ID":"6de9b7c8-6d38-4338-9a33-0084a0981c40","Type":"ContainerStarted","Data":"f55c1c7da1dda0ac3167a3901912328243165a0eaf64be85bf35e52772bca7d9"} Nov 22 07:40:29 crc kubenswrapper[4853]: I1122 07:40:29.227324 4853 
prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="2eadd806-7143-46ba-9e49-f19ac0bd52bd" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.129:5671: connect: connection refused" Nov 22 07:40:29 crc kubenswrapper[4853]: I1122 07:40:29.582044 4853 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="d0e9072b-3e2a-4283-a697-8411049c5161" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.130:5671: connect: connection refused" Nov 22 07:40:29 crc kubenswrapper[4853]: I1122 07:40:29.682450 4853 generic.go:334] "Generic (PLEG): container finished" podID="6de9b7c8-6d38-4338-9a33-0084a0981c40" containerID="f55c1c7da1dda0ac3167a3901912328243165a0eaf64be85bf35e52772bca7d9" exitCode=0 Nov 22 07:40:29 crc kubenswrapper[4853]: I1122 07:40:29.682547 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d5b6d6b67-mw24j" event={"ID":"6de9b7c8-6d38-4338-9a33-0084a0981c40","Type":"ContainerDied","Data":"f55c1c7da1dda0ac3167a3901912328243165a0eaf64be85bf35e52772bca7d9"} Nov 22 07:40:29 crc kubenswrapper[4853]: I1122 07:40:29.749420 4853 scope.go:117] "RemoveContainer" containerID="1e441baa32afe9d9e98e192afda6a47d949c68ceb8edb814ef148dd3d84f45b1" Nov 22 07:40:29 crc kubenswrapper[4853]: E1122 07:40:29.750166 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" Nov 22 07:40:31 crc kubenswrapper[4853]: I1122 07:40:31.713949 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d5b6d6b67-mw24j" event={"ID":"6de9b7c8-6d38-4338-9a33-0084a0981c40","Type":"ContainerStarted","Data":"9292e7ea65db7971a45b24ccd7c1893f6da9594a6c251c8a86eb64875ec79a7b"} Nov 22 07:40:31 crc kubenswrapper[4853]: I1122 07:40:31.714475 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6d5b6d6b67-mw24j" Nov 22 07:40:31 crc kubenswrapper[4853]: I1122 07:40:31.753897 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6d5b6d6b67-mw24j" podStartSLOduration=5.753864744 podStartE2EDuration="5.753864744s" podCreationTimestamp="2025-11-22 07:40:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:40:31.741215417 +0000 UTC m=+1830.581838043" watchObservedRunningTime="2025-11-22 07:40:31.753864744 +0000 UTC m=+1830.594487370" Nov 22 07:40:33 crc kubenswrapper[4853]: I1122 07:40:33.751857 4853 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 22 07:40:36 crc kubenswrapper[4853]: I1122 07:40:36.441073 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6d5b6d6b67-mw24j" Nov 22 07:40:36 crc kubenswrapper[4853]: I1122 07:40:36.540739 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-9qbfw"] Nov 22 07:40:36 crc kubenswrapper[4853]: I1122 07:40:36.541565 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-b8fbc5445-9qbfw" 
podUID="000b20b5-bfcd-44c2-9859-bb30ff5d5123" containerName="dnsmasq-dns" containerID="cri-o://02e619043bb2a42b286d9f3afb1e5b6b88da2a3c334f5b3fce805ca3c7a0d57a" gracePeriod=10 Nov 22 07:40:38 crc kubenswrapper[4853]: I1122 07:40:38.819733 4853 generic.go:334] "Generic (PLEG): container finished" podID="000b20b5-bfcd-44c2-9859-bb30ff5d5123" containerID="02e619043bb2a42b286d9f3afb1e5b6b88da2a3c334f5b3fce805ca3c7a0d57a" exitCode=0 Nov 22 07:40:38 crc kubenswrapper[4853]: I1122 07:40:38.819814 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-9qbfw" event={"ID":"000b20b5-bfcd-44c2-9859-bb30ff5d5123","Type":"ContainerDied","Data":"02e619043bb2a42b286d9f3afb1e5b6b88da2a3c334f5b3fce805ca3c7a0d57a"} Nov 22 07:40:39 crc kubenswrapper[4853]: I1122 07:40:39.223381 4853 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="2eadd806-7143-46ba-9e49-f19ac0bd52bd" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.129:5671: connect: connection refused" Nov 22 07:40:39 crc kubenswrapper[4853]: I1122 07:40:39.585922 4853 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="d0e9072b-3e2a-4283-a697-8411049c5161" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.130:5671: connect: connection refused" Nov 22 07:40:41 crc kubenswrapper[4853]: I1122 07:40:41.807393 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-9qbfw" Nov 22 07:40:41 crc kubenswrapper[4853]: I1122 07:40:41.886096 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-9qbfw" event={"ID":"000b20b5-bfcd-44c2-9859-bb30ff5d5123","Type":"ContainerDied","Data":"b3857a3965c4a9734da168e13e6c80a4f32914a0f64bab9d6370f0dd69c30c9b"} Nov 22 07:40:41 crc kubenswrapper[4853]: I1122 07:40:41.886187 4853 scope.go:117] "RemoveContainer" containerID="02e619043bb2a42b286d9f3afb1e5b6b88da2a3c334f5b3fce805ca3c7a0d57a" Nov 22 07:40:41 crc kubenswrapper[4853]: I1122 07:40:41.886438 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-9qbfw" Nov 22 07:40:41 crc kubenswrapper[4853]: I1122 07:40:41.903176 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/000b20b5-bfcd-44c2-9859-bb30ff5d5123-ovsdbserver-nb\") pod \"000b20b5-bfcd-44c2-9859-bb30ff5d5123\" (UID: \"000b20b5-bfcd-44c2-9859-bb30ff5d5123\") " Nov 22 07:40:41 crc kubenswrapper[4853]: I1122 07:40:41.903488 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/000b20b5-bfcd-44c2-9859-bb30ff5d5123-dns-svc\") pod \"000b20b5-bfcd-44c2-9859-bb30ff5d5123\" (UID: \"000b20b5-bfcd-44c2-9859-bb30ff5d5123\") " Nov 22 07:40:41 crc kubenswrapper[4853]: I1122 07:40:41.903666 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7n9pr\" (UniqueName: \"kubernetes.io/projected/000b20b5-bfcd-44c2-9859-bb30ff5d5123-kube-api-access-7n9pr\") pod \"000b20b5-bfcd-44c2-9859-bb30ff5d5123\" (UID: \"000b20b5-bfcd-44c2-9859-bb30ff5d5123\") " Nov 22 07:40:41 crc kubenswrapper[4853]: I1122 07:40:41.903700 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/000b20b5-bfcd-44c2-9859-bb30ff5d5123-ovsdbserver-sb\") pod \"000b20b5-bfcd-44c2-9859-bb30ff5d5123\" (UID: \"000b20b5-bfcd-44c2-9859-bb30ff5d5123\") " Nov 22 07:40:41 crc kubenswrapper[4853]: I1122 07:40:41.903791 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/000b20b5-bfcd-44c2-9859-bb30ff5d5123-config\") pod \"000b20b5-bfcd-44c2-9859-bb30ff5d5123\" (UID: \"000b20b5-bfcd-44c2-9859-bb30ff5d5123\") " Nov 22 07:40:41 crc kubenswrapper[4853]: I1122 07:40:41.911597 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/000b20b5-bfcd-44c2-9859-bb30ff5d5123-kube-api-access-7n9pr" (OuterVolumeSpecName: "kube-api-access-7n9pr") pod "000b20b5-bfcd-44c2-9859-bb30ff5d5123" (UID: "000b20b5-bfcd-44c2-9859-bb30ff5d5123"). InnerVolumeSpecName "kube-api-access-7n9pr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:40:41 crc kubenswrapper[4853]: I1122 07:40:41.966544 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/000b20b5-bfcd-44c2-9859-bb30ff5d5123-config" (OuterVolumeSpecName: "config") pod "000b20b5-bfcd-44c2-9859-bb30ff5d5123" (UID: "000b20b5-bfcd-44c2-9859-bb30ff5d5123"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:40:41 crc kubenswrapper[4853]: I1122 07:40:41.967040 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/000b20b5-bfcd-44c2-9859-bb30ff5d5123-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "000b20b5-bfcd-44c2-9859-bb30ff5d5123" (UID: "000b20b5-bfcd-44c2-9859-bb30ff5d5123"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:40:41 crc kubenswrapper[4853]: I1122 07:40:41.971710 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/000b20b5-bfcd-44c2-9859-bb30ff5d5123-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "000b20b5-bfcd-44c2-9859-bb30ff5d5123" (UID: "000b20b5-bfcd-44c2-9859-bb30ff5d5123"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:40:41 crc kubenswrapper[4853]: I1122 07:40:41.980371 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/000b20b5-bfcd-44c2-9859-bb30ff5d5123-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "000b20b5-bfcd-44c2-9859-bb30ff5d5123" (UID: "000b20b5-bfcd-44c2-9859-bb30ff5d5123"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:40:42 crc kubenswrapper[4853]: I1122 07:40:42.007010 4853 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/000b20b5-bfcd-44c2-9859-bb30ff5d5123-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 22 07:40:42 crc kubenswrapper[4853]: I1122 07:40:42.007054 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7n9pr\" (UniqueName: \"kubernetes.io/projected/000b20b5-bfcd-44c2-9859-bb30ff5d5123-kube-api-access-7n9pr\") on node \"crc\" DevicePath \"\"" Nov 22 07:40:42 crc kubenswrapper[4853]: I1122 07:40:42.007073 4853 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/000b20b5-bfcd-44c2-9859-bb30ff5d5123-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 22 07:40:42 crc kubenswrapper[4853]: I1122 07:40:42.007087 4853 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/000b20b5-bfcd-44c2-9859-bb30ff5d5123-config\") on node \"crc\" DevicePath \"\"" Nov 22 07:40:42 crc kubenswrapper[4853]: I1122 07:40:42.007097 4853 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/000b20b5-bfcd-44c2-9859-bb30ff5d5123-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 22 07:40:42 crc kubenswrapper[4853]: I1122 07:40:42.239724 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-9qbfw"] Nov 22 07:40:42 crc kubenswrapper[4853]: I1122 07:40:42.253595 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-9qbfw"] Nov 22 07:40:42 crc kubenswrapper[4853]: I1122 07:40:42.365812 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Nov 22 07:40:42 crc kubenswrapper[4853]: I1122 07:40:42.372922 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Nov 22 07:40:42 crc kubenswrapper[4853]: I1122 07:40:42.594835 4853 scope.go:117] "RemoveContainer" containerID="da4d30664f2e6272fc1883499e673c5bd12e9edcf41095153fa275cdac07510e" Nov 22 07:40:43 crc kubenswrapper[4853]: I1122 07:40:43.748175 4853 scope.go:117] "RemoveContainer" containerID="1e441baa32afe9d9e98e192afda6a47d949c68ceb8edb814ef148dd3d84f45b1" Nov 22 07:40:43 crc kubenswrapper[4853]: E1122 07:40:43.748940 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" Nov 22 07:40:43 crc kubenswrapper[4853]: I1122 07:40:43.766190 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="000b20b5-bfcd-44c2-9859-bb30ff5d5123" 
path="/var/lib/kubelet/pods/000b20b5-bfcd-44c2-9859-bb30ff5d5123/volumes" Nov 22 07:40:43 crc kubenswrapper[4853]: I1122 07:40:43.924298 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-twqq5" event={"ID":"e2dc7c1e-0083-4eab-80f2-eec435f5c97a","Type":"ContainerStarted","Data":"20af5ec328f6909943bbc0870f254b43a734f0691be83697769a97b8f6d3ddd2"} Nov 22 07:40:44 crc kubenswrapper[4853]: I1122 07:40:44.605295 4853 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-b8fbc5445-9qbfw" podUID="000b20b5-bfcd-44c2-9859-bb30ff5d5123" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.146:5353: i/o timeout" Nov 22 07:40:44 crc kubenswrapper[4853]: I1122 07:40:44.951247 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-twqq5" podStartSLOduration=5.429765662 podStartE2EDuration="57.951204911s" podCreationTimestamp="2025-11-22 07:39:47 +0000 UTC" firstStartedPulling="2025-11-22 07:39:50.074737868 +0000 UTC m=+1788.915360494" lastFinishedPulling="2025-11-22 07:40:42.596177117 +0000 UTC m=+1841.436799743" observedRunningTime="2025-11-22 07:40:44.94964547 +0000 UTC m=+1843.790268096" watchObservedRunningTime="2025-11-22 07:40:44.951204911 +0000 UTC m=+1843.791827537" Nov 22 07:40:49 crc kubenswrapper[4853]: I1122 07:40:49.223441 4853 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="2eadd806-7143-46ba-9e49-f19ac0bd52bd" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.129:5671: connect: connection refused" Nov 22 07:40:49 crc kubenswrapper[4853]: I1122 07:40:49.583666 4853 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="d0e9072b-3e2a-4283-a697-8411049c5161" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.130:5671: connect: connection refused" Nov 22 07:40:56 crc kubenswrapper[4853]: I1122 07:40:56.748618 4853 scope.go:117] "RemoveContainer" containerID="1e441baa32afe9d9e98e192afda6a47d949c68ceb8edb814ef148dd3d84f45b1" Nov 22 07:40:56 crc kubenswrapper[4853]: E1122 07:40:56.750057 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" Nov 22 07:40:59 crc kubenswrapper[4853]: I1122 07:40:59.223868 4853 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="2eadd806-7143-46ba-9e49-f19ac0bd52bd" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.129:5671: connect: connection refused" Nov 22 07:40:59 crc kubenswrapper[4853]: I1122 07:40:59.582193 4853 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="d0e9072b-3e2a-4283-a697-8411049c5161" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.130:5671: connect: connection refused" Nov 22 07:41:09 crc kubenswrapper[4853]: I1122 07:41:09.225993 4853 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="2eadd806-7143-46ba-9e49-f19ac0bd52bd" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.129:5671: connect: connection refused" 
Nov 22 07:41:09 crc kubenswrapper[4853]: I1122 07:41:09.582201 4853 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="d0e9072b-3e2a-4283-a697-8411049c5161" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.130:5671: connect: connection refused" Nov 22 07:41:11 crc kubenswrapper[4853]: I1122 07:41:11.748738 4853 scope.go:117] "RemoveContainer" containerID="1e441baa32afe9d9e98e192afda6a47d949c68ceb8edb814ef148dd3d84f45b1" Nov 22 07:41:11 crc kubenswrapper[4853]: E1122 07:41:11.749391 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" Nov 22 07:41:19 crc kubenswrapper[4853]: I1122 07:41:19.223477 4853 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="2eadd806-7143-46ba-9e49-f19ac0bd52bd" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.129:5671: connect: connection refused" Nov 22 07:41:19 crc kubenswrapper[4853]: I1122 07:41:19.582647 4853 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="d0e9072b-3e2a-4283-a697-8411049c5161" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.130:5671: connect: connection refused" Nov 22 07:41:24 crc kubenswrapper[4853]: I1122 07:41:24.748977 4853 scope.go:117] "RemoveContainer" containerID="1e441baa32afe9d9e98e192afda6a47d949c68ceb8edb814ef148dd3d84f45b1" Nov 22 07:41:24 crc kubenswrapper[4853]: E1122 07:41:24.750524 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" Nov 22 07:41:26 crc kubenswrapper[4853]: I1122 07:41:26.750085 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Nov 22 07:41:29 crc kubenswrapper[4853]: I1122 07:41:29.226070 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Nov 22 07:41:29 crc kubenswrapper[4853]: I1122 07:41:29.723705 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-jt4hx"] Nov 22 07:41:29 crc kubenswrapper[4853]: E1122 07:41:29.733406 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="000b20b5-bfcd-44c2-9859-bb30ff5d5123" containerName="init" Nov 22 07:41:29 crc kubenswrapper[4853]: I1122 07:41:29.733447 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="000b20b5-bfcd-44c2-9859-bb30ff5d5123" containerName="init" Nov 22 07:41:29 crc kubenswrapper[4853]: E1122 07:41:29.733496 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="000b20b5-bfcd-44c2-9859-bb30ff5d5123" containerName="dnsmasq-dns" Nov 22 07:41:29 crc kubenswrapper[4853]: I1122 07:41:29.733503 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="000b20b5-bfcd-44c2-9859-bb30ff5d5123" 
containerName="dnsmasq-dns" Nov 22 07:41:29 crc kubenswrapper[4853]: I1122 07:41:29.733714 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="000b20b5-bfcd-44c2-9859-bb30ff5d5123" containerName="dnsmasq-dns" Nov 22 07:41:29 crc kubenswrapper[4853]: I1122 07:41:29.734680 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-jt4hx" Nov 22 07:41:29 crc kubenswrapper[4853]: I1122 07:41:29.813198 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-jt4hx"] Nov 22 07:41:29 crc kubenswrapper[4853]: I1122 07:41:29.858298 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8b2e23ab-b228-4a69-866d-f16a8d51966a-operator-scripts\") pod \"cinder-db-create-jt4hx\" (UID: \"8b2e23ab-b228-4a69-866d-f16a8d51966a\") " pod="openstack/cinder-db-create-jt4hx" Nov 22 07:41:29 crc kubenswrapper[4853]: I1122 07:41:29.858467 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9v9f4\" (UniqueName: \"kubernetes.io/projected/8b2e23ab-b228-4a69-866d-f16a8d51966a-kube-api-access-9v9f4\") pod \"cinder-db-create-jt4hx\" (UID: \"8b2e23ab-b228-4a69-866d-f16a8d51966a\") " pod="openstack/cinder-db-create-jt4hx" Nov 22 07:41:29 crc kubenswrapper[4853]: I1122 07:41:29.895739 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-4p4mm"] Nov 22 07:41:29 crc kubenswrapper[4853]: I1122 07:41:29.897735 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-4p4mm" Nov 22 07:41:29 crc kubenswrapper[4853]: I1122 07:41:29.932600 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-4p4mm"] Nov 22 07:41:29 crc kubenswrapper[4853]: I1122 07:41:29.961564 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8b2e23ab-b228-4a69-866d-f16a8d51966a-operator-scripts\") pod \"cinder-db-create-jt4hx\" (UID: \"8b2e23ab-b228-4a69-866d-f16a8d51966a\") " pod="openstack/cinder-db-create-jt4hx" Nov 22 07:41:29 crc kubenswrapper[4853]: I1122 07:41:29.961692 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9v9f4\" (UniqueName: \"kubernetes.io/projected/8b2e23ab-b228-4a69-866d-f16a8d51966a-kube-api-access-9v9f4\") pod \"cinder-db-create-jt4hx\" (UID: \"8b2e23ab-b228-4a69-866d-f16a8d51966a\") " pod="openstack/cinder-db-create-jt4hx" Nov 22 07:41:29 crc kubenswrapper[4853]: I1122 07:41:29.962947 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8b2e23ab-b228-4a69-866d-f16a8d51966a-operator-scripts\") pod \"cinder-db-create-jt4hx\" (UID: \"8b2e23ab-b228-4a69-866d-f16a8d51966a\") " pod="openstack/cinder-db-create-jt4hx" Nov 22 07:41:30 crc kubenswrapper[4853]: I1122 07:41:30.018116 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9v9f4\" (UniqueName: \"kubernetes.io/projected/8b2e23ab-b228-4a69-866d-f16a8d51966a-kube-api-access-9v9f4\") pod \"cinder-db-create-jt4hx\" (UID: \"8b2e23ab-b228-4a69-866d-f16a8d51966a\") " pod="openstack/cinder-db-create-jt4hx" Nov 22 07:41:30 crc kubenswrapper[4853]: I1122 07:41:30.018849 4853 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/heat-3943-account-create-4vld8"] Nov 22 07:41:30 crc kubenswrapper[4853]: I1122 07:41:30.020593 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-3943-account-create-4vld8" Nov 22 07:41:30 crc kubenswrapper[4853]: I1122 07:41:30.025016 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-db-secret" Nov 22 07:41:30 crc kubenswrapper[4853]: I1122 07:41:30.049820 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-3943-account-create-4vld8"] Nov 22 07:41:30 crc kubenswrapper[4853]: I1122 07:41:30.067893 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-thpn8\" (UniqueName: \"kubernetes.io/projected/1b40a89d-79b8-4428-99b7-a0d79520e8b8-kube-api-access-thpn8\") pod \"barbican-db-create-4p4mm\" (UID: \"1b40a89d-79b8-4428-99b7-a0d79520e8b8\") " pod="openstack/barbican-db-create-4p4mm" Nov 22 07:41:30 crc kubenswrapper[4853]: I1122 07:41:30.067986 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1b40a89d-79b8-4428-99b7-a0d79520e8b8-operator-scripts\") pod \"barbican-db-create-4p4mm\" (UID: \"1b40a89d-79b8-4428-99b7-a0d79520e8b8\") " pod="openstack/barbican-db-create-4p4mm" Nov 22 07:41:30 crc kubenswrapper[4853]: I1122 07:41:30.083480 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-jt4hx" Nov 22 07:41:30 crc kubenswrapper[4853]: I1122 07:41:30.125856 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-create-lgnfr"] Nov 22 07:41:30 crc kubenswrapper[4853]: I1122 07:41:30.127808 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-lgnfr" Nov 22 07:41:30 crc kubenswrapper[4853]: I1122 07:41:30.159297 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-lgnfr"] Nov 22 07:41:30 crc kubenswrapper[4853]: I1122 07:41:30.166896 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-caba-account-create-ltq46"] Nov 22 07:41:30 crc kubenswrapper[4853]: I1122 07:41:30.169116 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-caba-account-create-ltq46" Nov 22 07:41:30 crc kubenswrapper[4853]: I1122 07:41:30.176077 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Nov 22 07:41:30 crc kubenswrapper[4853]: I1122 07:41:30.183105 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-caba-account-create-ltq46"] Nov 22 07:41:30 crc kubenswrapper[4853]: I1122 07:41:30.187726 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-thpn8\" (UniqueName: \"kubernetes.io/projected/1b40a89d-79b8-4428-99b7-a0d79520e8b8-kube-api-access-thpn8\") pod \"barbican-db-create-4p4mm\" (UID: \"1b40a89d-79b8-4428-99b7-a0d79520e8b8\") " pod="openstack/barbican-db-create-4p4mm" Nov 22 07:41:30 crc kubenswrapper[4853]: I1122 07:41:30.187919 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1b40a89d-79b8-4428-99b7-a0d79520e8b8-operator-scripts\") pod \"barbican-db-create-4p4mm\" (UID: \"1b40a89d-79b8-4428-99b7-a0d79520e8b8\") " pod="openstack/barbican-db-create-4p4mm" Nov 22 07:41:30 crc kubenswrapper[4853]: I1122 07:41:30.188069 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qvbct\" (UniqueName: \"kubernetes.io/projected/edece509-f388-43e4-b8e8-c6bce0659954-kube-api-access-qvbct\") pod \"heat-3943-account-create-4vld8\" (UID: \"edece509-f388-43e4-b8e8-c6bce0659954\") " pod="openstack/heat-3943-account-create-4vld8" Nov 22 07:41:30 crc kubenswrapper[4853]: I1122 07:41:30.188123 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/edece509-f388-43e4-b8e8-c6bce0659954-operator-scripts\") pod \"heat-3943-account-create-4vld8\" (UID: \"edece509-f388-43e4-b8e8-c6bce0659954\") " pod="openstack/heat-3943-account-create-4vld8" Nov 22 07:41:30 crc kubenswrapper[4853]: I1122 07:41:30.189877 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1b40a89d-79b8-4428-99b7-a0d79520e8b8-operator-scripts\") pod \"barbican-db-create-4p4mm\" (UID: \"1b40a89d-79b8-4428-99b7-a0d79520e8b8\") " pod="openstack/barbican-db-create-4p4mm" Nov 22 07:41:30 crc kubenswrapper[4853]: I1122 07:41:30.226576 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-thpn8\" (UniqueName: \"kubernetes.io/projected/1b40a89d-79b8-4428-99b7-a0d79520e8b8-kube-api-access-thpn8\") pod \"barbican-db-create-4p4mm\" (UID: \"1b40a89d-79b8-4428-99b7-a0d79520e8b8\") " pod="openstack/barbican-db-create-4p4mm" Nov 22 07:41:30 crc kubenswrapper[4853]: I1122 07:41:30.249374 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-4p4mm" Nov 22 07:41:30 crc kubenswrapper[4853]: I1122 07:41:30.272444 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-2vhq9"] Nov 22 07:41:30 crc kubenswrapper[4853]: I1122 07:41:30.281922 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-2vhq9" Nov 22 07:41:30 crc kubenswrapper[4853]: I1122 07:41:30.286865 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 22 07:41:30 crc kubenswrapper[4853]: I1122 07:41:30.287085 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 22 07:41:30 crc kubenswrapper[4853]: I1122 07:41:30.287216 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 22 07:41:30 crc kubenswrapper[4853]: I1122 07:41:30.287340 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-n8jmf" Nov 22 07:41:30 crc kubenswrapper[4853]: I1122 07:41:30.295580 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/976dce54-751c-4418-9fc8-5ae4340d347f-operator-scripts\") pod \"cinder-caba-account-create-ltq46\" (UID: \"976dce54-751c-4418-9fc8-5ae4340d347f\") " pod="openstack/cinder-caba-account-create-ltq46" Nov 22 07:41:30 crc kubenswrapper[4853]: I1122 07:41:30.295858 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/edece509-f388-43e4-b8e8-c6bce0659954-operator-scripts\") pod \"heat-3943-account-create-4vld8\" (UID: \"edece509-f388-43e4-b8e8-c6bce0659954\") " pod="openstack/heat-3943-account-create-4vld8" Nov 22 07:41:30 crc kubenswrapper[4853]: I1122 07:41:30.296144 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-89wmj\" (UniqueName: \"kubernetes.io/projected/976dce54-751c-4418-9fc8-5ae4340d347f-kube-api-access-89wmj\") pod \"cinder-caba-account-create-ltq46\" (UID: \"976dce54-751c-4418-9fc8-5ae4340d347f\") " pod="openstack/cinder-caba-account-create-ltq46" Nov 22 07:41:30 crc kubenswrapper[4853]: I1122 07:41:30.296257 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7gzkq\" (UniqueName: \"kubernetes.io/projected/0cb597cd-e80d-468d-8d85-ab34391e70c6-kube-api-access-7gzkq\") pod \"heat-db-create-lgnfr\" (UID: \"0cb597cd-e80d-468d-8d85-ab34391e70c6\") " pod="openstack/heat-db-create-lgnfr" Nov 22 07:41:30 crc kubenswrapper[4853]: I1122 07:41:30.297426 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/edece509-f388-43e4-b8e8-c6bce0659954-operator-scripts\") pod \"heat-3943-account-create-4vld8\" (UID: \"edece509-f388-43e4-b8e8-c6bce0659954\") " pod="openstack/heat-3943-account-create-4vld8" Nov 22 07:41:30 crc kubenswrapper[4853]: I1122 07:41:30.298021 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0cb597cd-e80d-468d-8d85-ab34391e70c6-operator-scripts\") pod \"heat-db-create-lgnfr\" (UID: \"0cb597cd-e80d-468d-8d85-ab34391e70c6\") " pod="openstack/heat-db-create-lgnfr" Nov 22 07:41:30 crc kubenswrapper[4853]: I1122 07:41:30.298162 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qvbct\" (UniqueName: \"kubernetes.io/projected/edece509-f388-43e4-b8e8-c6bce0659954-kube-api-access-qvbct\") pod \"heat-3943-account-create-4vld8\" (UID: \"edece509-f388-43e4-b8e8-c6bce0659954\") " 
pod="openstack/heat-3943-account-create-4vld8" Nov 22 07:41:30 crc kubenswrapper[4853]: I1122 07:41:30.307153 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-2vhq9"] Nov 22 07:41:30 crc kubenswrapper[4853]: I1122 07:41:30.309949 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-48f1-account-create-vlh5d"] Nov 22 07:41:30 crc kubenswrapper[4853]: I1122 07:41:30.314986 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-48f1-account-create-vlh5d" Nov 22 07:41:30 crc kubenswrapper[4853]: I1122 07:41:30.338121 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qvbct\" (UniqueName: \"kubernetes.io/projected/edece509-f388-43e4-b8e8-c6bce0659954-kube-api-access-qvbct\") pod \"heat-3943-account-create-4vld8\" (UID: \"edece509-f388-43e4-b8e8-c6bce0659954\") " pod="openstack/heat-3943-account-create-4vld8" Nov 22 07:41:30 crc kubenswrapper[4853]: I1122 07:41:30.338379 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Nov 22 07:41:30 crc kubenswrapper[4853]: I1122 07:41:30.400066 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/976dce54-751c-4418-9fc8-5ae4340d347f-operator-scripts\") pod \"cinder-caba-account-create-ltq46\" (UID: \"976dce54-751c-4418-9fc8-5ae4340d347f\") " pod="openstack/cinder-caba-account-create-ltq46" Nov 22 07:41:30 crc kubenswrapper[4853]: I1122 07:41:30.400130 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k59s8\" (UniqueName: \"kubernetes.io/projected/cb6698be-a947-4b69-9312-cd3382abefe9-kube-api-access-k59s8\") pod \"barbican-48f1-account-create-vlh5d\" (UID: \"cb6698be-a947-4b69-9312-cd3382abefe9\") " pod="openstack/barbican-48f1-account-create-vlh5d" Nov 22 07:41:30 crc kubenswrapper[4853]: I1122 07:41:30.400721 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-89wmj\" (UniqueName: \"kubernetes.io/projected/976dce54-751c-4418-9fc8-5ae4340d347f-kube-api-access-89wmj\") pod \"cinder-caba-account-create-ltq46\" (UID: \"976dce54-751c-4418-9fc8-5ae4340d347f\") " pod="openstack/cinder-caba-account-create-ltq46" Nov 22 07:41:30 crc kubenswrapper[4853]: I1122 07:41:30.400777 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7gzkq\" (UniqueName: \"kubernetes.io/projected/0cb597cd-e80d-468d-8d85-ab34391e70c6-kube-api-access-7gzkq\") pod \"heat-db-create-lgnfr\" (UID: \"0cb597cd-e80d-468d-8d85-ab34391e70c6\") " pod="openstack/heat-db-create-lgnfr" Nov 22 07:41:30 crc kubenswrapper[4853]: I1122 07:41:30.400847 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cb6698be-a947-4b69-9312-cd3382abefe9-operator-scripts\") pod \"barbican-48f1-account-create-vlh5d\" (UID: \"cb6698be-a947-4b69-9312-cd3382abefe9\") " pod="openstack/barbican-48f1-account-create-vlh5d" Nov 22 07:41:30 crc kubenswrapper[4853]: I1122 07:41:30.400917 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/937b4e80-b6f5-4e62-8053-05ce38b1b105-combined-ca-bundle\") pod \"keystone-db-sync-2vhq9\" (UID: \"937b4e80-b6f5-4e62-8053-05ce38b1b105\") " 
pod="openstack/keystone-db-sync-2vhq9" Nov 22 07:41:30 crc kubenswrapper[4853]: I1122 07:41:30.400966 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vkgqz\" (UniqueName: \"kubernetes.io/projected/937b4e80-b6f5-4e62-8053-05ce38b1b105-kube-api-access-vkgqz\") pod \"keystone-db-sync-2vhq9\" (UID: \"937b4e80-b6f5-4e62-8053-05ce38b1b105\") " pod="openstack/keystone-db-sync-2vhq9" Nov 22 07:41:30 crc kubenswrapper[4853]: I1122 07:41:30.401080 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0cb597cd-e80d-468d-8d85-ab34391e70c6-operator-scripts\") pod \"heat-db-create-lgnfr\" (UID: \"0cb597cd-e80d-468d-8d85-ab34391e70c6\") " pod="openstack/heat-db-create-lgnfr" Nov 22 07:41:30 crc kubenswrapper[4853]: I1122 07:41:30.401111 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/937b4e80-b6f5-4e62-8053-05ce38b1b105-config-data\") pod \"keystone-db-sync-2vhq9\" (UID: \"937b4e80-b6f5-4e62-8053-05ce38b1b105\") " pod="openstack/keystone-db-sync-2vhq9" Nov 22 07:41:30 crc kubenswrapper[4853]: I1122 07:41:30.402968 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/976dce54-751c-4418-9fc8-5ae4340d347f-operator-scripts\") pod \"cinder-caba-account-create-ltq46\" (UID: \"976dce54-751c-4418-9fc8-5ae4340d347f\") " pod="openstack/cinder-caba-account-create-ltq46" Nov 22 07:41:30 crc kubenswrapper[4853]: I1122 07:41:30.403725 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0cb597cd-e80d-468d-8d85-ab34391e70c6-operator-scripts\") pod \"heat-db-create-lgnfr\" (UID: \"0cb597cd-e80d-468d-8d85-ab34391e70c6\") " pod="openstack/heat-db-create-lgnfr" Nov 22 07:41:30 crc kubenswrapper[4853]: I1122 07:41:30.418565 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-48f1-account-create-vlh5d"] Nov 22 07:41:30 crc kubenswrapper[4853]: I1122 07:41:30.449247 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-3943-account-create-4vld8" Nov 22 07:41:30 crc kubenswrapper[4853]: I1122 07:41:30.465558 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7gzkq\" (UniqueName: \"kubernetes.io/projected/0cb597cd-e80d-468d-8d85-ab34391e70c6-kube-api-access-7gzkq\") pod \"heat-db-create-lgnfr\" (UID: \"0cb597cd-e80d-468d-8d85-ab34391e70c6\") " pod="openstack/heat-db-create-lgnfr" Nov 22 07:41:30 crc kubenswrapper[4853]: I1122 07:41:30.469617 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-89wmj\" (UniqueName: \"kubernetes.io/projected/976dce54-751c-4418-9fc8-5ae4340d347f-kube-api-access-89wmj\") pod \"cinder-caba-account-create-ltq46\" (UID: \"976dce54-751c-4418-9fc8-5ae4340d347f\") " pod="openstack/cinder-caba-account-create-ltq46" Nov 22 07:41:30 crc kubenswrapper[4853]: I1122 07:41:30.504517 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/937b4e80-b6f5-4e62-8053-05ce38b1b105-config-data\") pod \"keystone-db-sync-2vhq9\" (UID: \"937b4e80-b6f5-4e62-8053-05ce38b1b105\") " pod="openstack/keystone-db-sync-2vhq9" Nov 22 07:41:30 crc kubenswrapper[4853]: I1122 07:41:30.504637 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k59s8\" (UniqueName: \"kubernetes.io/projected/cb6698be-a947-4b69-9312-cd3382abefe9-kube-api-access-k59s8\") pod \"barbican-48f1-account-create-vlh5d\" (UID: \"cb6698be-a947-4b69-9312-cd3382abefe9\") " pod="openstack/barbican-48f1-account-create-vlh5d" Nov 22 07:41:30 crc kubenswrapper[4853]: I1122 07:41:30.504717 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cb6698be-a947-4b69-9312-cd3382abefe9-operator-scripts\") pod \"barbican-48f1-account-create-vlh5d\" (UID: \"cb6698be-a947-4b69-9312-cd3382abefe9\") " pod="openstack/barbican-48f1-account-create-vlh5d" Nov 22 07:41:30 crc kubenswrapper[4853]: I1122 07:41:30.504779 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/937b4e80-b6f5-4e62-8053-05ce38b1b105-combined-ca-bundle\") pod \"keystone-db-sync-2vhq9\" (UID: \"937b4e80-b6f5-4e62-8053-05ce38b1b105\") " pod="openstack/keystone-db-sync-2vhq9" Nov 22 07:41:30 crc kubenswrapper[4853]: I1122 07:41:30.504821 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vkgqz\" (UniqueName: \"kubernetes.io/projected/937b4e80-b6f5-4e62-8053-05ce38b1b105-kube-api-access-vkgqz\") pod \"keystone-db-sync-2vhq9\" (UID: \"937b4e80-b6f5-4e62-8053-05ce38b1b105\") " pod="openstack/keystone-db-sync-2vhq9" Nov 22 07:41:30 crc kubenswrapper[4853]: I1122 07:41:30.507331 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cb6698be-a947-4b69-9312-cd3382abefe9-operator-scripts\") pod \"barbican-48f1-account-create-vlh5d\" (UID: \"cb6698be-a947-4b69-9312-cd3382abefe9\") " pod="openstack/barbican-48f1-account-create-vlh5d" Nov 22 07:41:30 crc kubenswrapper[4853]: I1122 07:41:30.514495 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/937b4e80-b6f5-4e62-8053-05ce38b1b105-config-data\") pod \"keystone-db-sync-2vhq9\" (UID: \"937b4e80-b6f5-4e62-8053-05ce38b1b105\") " 
pod="openstack/keystone-db-sync-2vhq9" Nov 22 07:41:30 crc kubenswrapper[4853]: I1122 07:41:30.531009 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/937b4e80-b6f5-4e62-8053-05ce38b1b105-combined-ca-bundle\") pod \"keystone-db-sync-2vhq9\" (UID: \"937b4e80-b6f5-4e62-8053-05ce38b1b105\") " pod="openstack/keystone-db-sync-2vhq9" Nov 22 07:41:30 crc kubenswrapper[4853]: I1122 07:41:30.536170 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vkgqz\" (UniqueName: \"kubernetes.io/projected/937b4e80-b6f5-4e62-8053-05ce38b1b105-kube-api-access-vkgqz\") pod \"keystone-db-sync-2vhq9\" (UID: \"937b4e80-b6f5-4e62-8053-05ce38b1b105\") " pod="openstack/keystone-db-sync-2vhq9" Nov 22 07:41:30 crc kubenswrapper[4853]: I1122 07:41:30.605764 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-7dn2k"] Nov 22 07:41:30 crc kubenswrapper[4853]: I1122 07:41:30.608027 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-7dn2k" Nov 22 07:41:30 crc kubenswrapper[4853]: I1122 07:41:30.647820 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-80dd-account-create-4fgwn"] Nov 22 07:41:30 crc kubenswrapper[4853]: I1122 07:41:30.650257 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-80dd-account-create-4fgwn" Nov 22 07:41:30 crc kubenswrapper[4853]: I1122 07:41:30.653203 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Nov 22 07:41:30 crc kubenswrapper[4853]: I1122 07:41:30.659909 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-7dn2k"] Nov 22 07:41:30 crc kubenswrapper[4853]: I1122 07:41:30.675112 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-80dd-account-create-4fgwn"] Nov 22 07:41:30 crc kubenswrapper[4853]: I1122 07:41:30.697283 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-create-lgnfr" Nov 22 07:41:30 crc kubenswrapper[4853]: I1122 07:41:30.712307 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bgfcz\" (UniqueName: \"kubernetes.io/projected/c662e6b6-1204-4d05-9b6d-b1d0c9afc613-kube-api-access-bgfcz\") pod \"neutron-80dd-account-create-4fgwn\" (UID: \"c662e6b6-1204-4d05-9b6d-b1d0c9afc613\") " pod="openstack/neutron-80dd-account-create-4fgwn" Nov 22 07:41:30 crc kubenswrapper[4853]: I1122 07:41:30.712403 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c662e6b6-1204-4d05-9b6d-b1d0c9afc613-operator-scripts\") pod \"neutron-80dd-account-create-4fgwn\" (UID: \"c662e6b6-1204-4d05-9b6d-b1d0c9afc613\") " pod="openstack/neutron-80dd-account-create-4fgwn" Nov 22 07:41:30 crc kubenswrapper[4853]: I1122 07:41:30.712790 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/10f50975-476a-4fd0-b6dd-5195dfad3931-operator-scripts\") pod \"neutron-db-create-7dn2k\" (UID: \"10f50975-476a-4fd0-b6dd-5195dfad3931\") " pod="openstack/neutron-db-create-7dn2k" Nov 22 07:41:30 crc kubenswrapper[4853]: I1122 07:41:30.713180 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qpdw4\" (UniqueName: \"kubernetes.io/projected/10f50975-476a-4fd0-b6dd-5195dfad3931-kube-api-access-qpdw4\") pod \"neutron-db-create-7dn2k\" (UID: \"10f50975-476a-4fd0-b6dd-5195dfad3931\") " pod="openstack/neutron-db-create-7dn2k" Nov 22 07:41:30 crc kubenswrapper[4853]: I1122 07:41:30.718533 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-caba-account-create-ltq46" Nov 22 07:41:30 crc kubenswrapper[4853]: I1122 07:41:30.780605 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-2vhq9" Nov 22 07:41:30 crc kubenswrapper[4853]: I1122 07:41:30.815847 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/10f50975-476a-4fd0-b6dd-5195dfad3931-operator-scripts\") pod \"neutron-db-create-7dn2k\" (UID: \"10f50975-476a-4fd0-b6dd-5195dfad3931\") " pod="openstack/neutron-db-create-7dn2k" Nov 22 07:41:30 crc kubenswrapper[4853]: I1122 07:41:30.816088 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qpdw4\" (UniqueName: \"kubernetes.io/projected/10f50975-476a-4fd0-b6dd-5195dfad3931-kube-api-access-qpdw4\") pod \"neutron-db-create-7dn2k\" (UID: \"10f50975-476a-4fd0-b6dd-5195dfad3931\") " pod="openstack/neutron-db-create-7dn2k" Nov 22 07:41:30 crc kubenswrapper[4853]: I1122 07:41:30.816200 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bgfcz\" (UniqueName: \"kubernetes.io/projected/c662e6b6-1204-4d05-9b6d-b1d0c9afc613-kube-api-access-bgfcz\") pod \"neutron-80dd-account-create-4fgwn\" (UID: \"c662e6b6-1204-4d05-9b6d-b1d0c9afc613\") " pod="openstack/neutron-80dd-account-create-4fgwn" Nov 22 07:41:30 crc kubenswrapper[4853]: I1122 07:41:30.816263 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c662e6b6-1204-4d05-9b6d-b1d0c9afc613-operator-scripts\") pod \"neutron-80dd-account-create-4fgwn\" (UID: \"c662e6b6-1204-4d05-9b6d-b1d0c9afc613\") " pod="openstack/neutron-80dd-account-create-4fgwn" Nov 22 07:41:31 crc kubenswrapper[4853]: I1122 07:41:31.093187 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k59s8\" (UniqueName: \"kubernetes.io/projected/cb6698be-a947-4b69-9312-cd3382abefe9-kube-api-access-k59s8\") pod \"barbican-48f1-account-create-vlh5d\" (UID: \"cb6698be-a947-4b69-9312-cd3382abefe9\") " pod="openstack/barbican-48f1-account-create-vlh5d" Nov 22 07:41:31 crc kubenswrapper[4853]: I1122 07:41:31.238417 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/10f50975-476a-4fd0-b6dd-5195dfad3931-operator-scripts\") pod \"neutron-db-create-7dn2k\" (UID: \"10f50975-476a-4fd0-b6dd-5195dfad3931\") " pod="openstack/neutron-db-create-7dn2k" Nov 22 07:41:31 crc kubenswrapper[4853]: I1122 07:41:31.247978 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qpdw4\" (UniqueName: \"kubernetes.io/projected/10f50975-476a-4fd0-b6dd-5195dfad3931-kube-api-access-qpdw4\") pod \"neutron-db-create-7dn2k\" (UID: \"10f50975-476a-4fd0-b6dd-5195dfad3931\") " pod="openstack/neutron-db-create-7dn2k" Nov 22 07:41:31 crc kubenswrapper[4853]: I1122 07:41:31.287518 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c662e6b6-1204-4d05-9b6d-b1d0c9afc613-operator-scripts\") pod \"neutron-80dd-account-create-4fgwn\" (UID: \"c662e6b6-1204-4d05-9b6d-b1d0c9afc613\") " pod="openstack/neutron-80dd-account-create-4fgwn" Nov 22 07:41:31 crc kubenswrapper[4853]: I1122 07:41:31.291316 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bgfcz\" (UniqueName: \"kubernetes.io/projected/c662e6b6-1204-4d05-9b6d-b1d0c9afc613-kube-api-access-bgfcz\") pod \"neutron-80dd-account-create-4fgwn\" (UID: 
\"c662e6b6-1204-4d05-9b6d-b1d0c9afc613\") " pod="openstack/neutron-80dd-account-create-4fgwn" Nov 22 07:41:31 crc kubenswrapper[4853]: I1122 07:41:31.388239 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-48f1-account-create-vlh5d" Nov 22 07:41:31 crc kubenswrapper[4853]: I1122 07:41:31.539102 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-7dn2k" Nov 22 07:41:31 crc kubenswrapper[4853]: I1122 07:41:31.587936 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-80dd-account-create-4fgwn" Nov 22 07:41:31 crc kubenswrapper[4853]: I1122 07:41:31.869704 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-4p4mm"] Nov 22 07:41:31 crc kubenswrapper[4853]: I1122 07:41:31.871533 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-jt4hx"] Nov 22 07:41:31 crc kubenswrapper[4853]: W1122 07:41:31.908564 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8b2e23ab_b228_4a69_866d_f16a8d51966a.slice/crio-7fa2e3a87153954d8243ae2e0b740d44f7fbff029e526a99bd31a8f5e1e002d5 WatchSource:0}: Error finding container 7fa2e3a87153954d8243ae2e0b740d44f7fbff029e526a99bd31a8f5e1e002d5: Status 404 returned error can't find the container with id 7fa2e3a87153954d8243ae2e0b740d44f7fbff029e526a99bd31a8f5e1e002d5 Nov 22 07:41:32 crc kubenswrapper[4853]: I1122 07:41:32.207282 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-3943-account-create-4vld8"] Nov 22 07:41:32 crc kubenswrapper[4853]: W1122 07:41:32.423341 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod937b4e80_b6f5_4e62_8053_05ce38b1b105.slice/crio-e167c32e04e54bba181e678bd38109cb9c410116a215963c9d21ccbb07024bcf WatchSource:0}: Error finding container e167c32e04e54bba181e678bd38109cb9c410116a215963c9d21ccbb07024bcf: Status 404 returned error can't find the container with id e167c32e04e54bba181e678bd38109cb9c410116a215963c9d21ccbb07024bcf Nov 22 07:41:32 crc kubenswrapper[4853]: I1122 07:41:32.423855 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-2vhq9"] Nov 22 07:41:32 crc kubenswrapper[4853]: I1122 07:41:32.436292 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-lgnfr"] Nov 22 07:41:32 crc kubenswrapper[4853]: I1122 07:41:32.455229 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-caba-account-create-ltq46"] Nov 22 07:41:32 crc kubenswrapper[4853]: I1122 07:41:32.537140 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-48f1-account-create-vlh5d"] Nov 22 07:41:32 crc kubenswrapper[4853]: W1122 07:41:32.551319 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcb6698be_a947_4b69_9312_cd3382abefe9.slice/crio-a9a537f0687e79eafbd12f2da828ab35995afae505edea98095519a8c00e800f WatchSource:0}: Error finding container a9a537f0687e79eafbd12f2da828ab35995afae505edea98095519a8c00e800f: Status 404 returned error can't find the container with id a9a537f0687e79eafbd12f2da828ab35995afae505edea98095519a8c00e800f Nov 22 07:41:32 crc kubenswrapper[4853]: I1122 07:41:32.574430 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/barbican-db-create-4p4mm" event={"ID":"1b40a89d-79b8-4428-99b7-a0d79520e8b8","Type":"ContainerStarted","Data":"c382d0452933cc8be2226fda691c58065be74f2e8d50ad44f1af7a11c24d39f0"} Nov 22 07:41:32 crc kubenswrapper[4853]: I1122 07:41:32.576231 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-lgnfr" event={"ID":"0cb597cd-e80d-468d-8d85-ab34391e70c6","Type":"ContainerStarted","Data":"5a6348be9669a76de7c393d2d5165a3abd073c97442e59dc8583b61bfdec961f"} Nov 22 07:41:32 crc kubenswrapper[4853]: I1122 07:41:32.577438 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-jt4hx" event={"ID":"8b2e23ab-b228-4a69-866d-f16a8d51966a","Type":"ContainerStarted","Data":"7fa2e3a87153954d8243ae2e0b740d44f7fbff029e526a99bd31a8f5e1e002d5"} Nov 22 07:41:32 crc kubenswrapper[4853]: I1122 07:41:32.579488 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-2vhq9" event={"ID":"937b4e80-b6f5-4e62-8053-05ce38b1b105","Type":"ContainerStarted","Data":"e167c32e04e54bba181e678bd38109cb9c410116a215963c9d21ccbb07024bcf"} Nov 22 07:41:32 crc kubenswrapper[4853]: I1122 07:41:32.581968 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-caba-account-create-ltq46" event={"ID":"976dce54-751c-4418-9fc8-5ae4340d347f","Type":"ContainerStarted","Data":"2541941ab9daac6b9788ad3b98f4e02130c5be788ec3e1e1d9f76e136b7bed99"} Nov 22 07:41:32 crc kubenswrapper[4853]: I1122 07:41:32.583706 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-3943-account-create-4vld8" event={"ID":"edece509-f388-43e4-b8e8-c6bce0659954","Type":"ContainerStarted","Data":"9a4009ba19db3fb8c6474948ba6e7bd4d45cb35e6af6a98fd6ca86e8fb2da21e"} Nov 22 07:41:32 crc kubenswrapper[4853]: I1122 07:41:32.585281 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-48f1-account-create-vlh5d" event={"ID":"cb6698be-a947-4b69-9312-cd3382abefe9","Type":"ContainerStarted","Data":"a9a537f0687e79eafbd12f2da828ab35995afae505edea98095519a8c00e800f"} Nov 22 07:41:32 crc kubenswrapper[4853]: I1122 07:41:32.643457 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-7dn2k"] Nov 22 07:41:32 crc kubenswrapper[4853]: I1122 07:41:32.656245 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-80dd-account-create-4fgwn"] Nov 22 07:41:32 crc kubenswrapper[4853]: W1122 07:41:32.674138 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc662e6b6_1204_4d05_9b6d_b1d0c9afc613.slice/crio-23707bd68d576826984674055f3b6e1857eb34a3a6ae0b7201922d4982b745bd WatchSource:0}: Error finding container 23707bd68d576826984674055f3b6e1857eb34a3a6ae0b7201922d4982b745bd: Status 404 returned error can't find the container with id 23707bd68d576826984674055f3b6e1857eb34a3a6ae0b7201922d4982b745bd Nov 22 07:41:32 crc kubenswrapper[4853]: W1122 07:41:32.675597 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod10f50975_476a_4fd0_b6dd_5195dfad3931.slice/crio-87210bc144eab0ec3d54e5aafe8647e9d650fa0fe5e7a5efe1aff552a9697d15 WatchSource:0}: Error finding container 87210bc144eab0ec3d54e5aafe8647e9d650fa0fe5e7a5efe1aff552a9697d15: Status 404 returned error can't find the container with id 87210bc144eab0ec3d54e5aafe8647e9d650fa0fe5e7a5efe1aff552a9697d15 Nov 22 07:41:33 crc kubenswrapper[4853]: I1122 07:41:33.600397 4853 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-7dn2k" event={"ID":"10f50975-476a-4fd0-b6dd-5195dfad3931","Type":"ContainerStarted","Data":"87210bc144eab0ec3d54e5aafe8647e9d650fa0fe5e7a5efe1aff552a9697d15"} Nov 22 07:41:33 crc kubenswrapper[4853]: I1122 07:41:33.603931 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-4p4mm" event={"ID":"1b40a89d-79b8-4428-99b7-a0d79520e8b8","Type":"ContainerStarted","Data":"dffc576b2a9f3dea8dda6a5e835de5cfc9795ae112e7807c9965766116b99569"} Nov 22 07:41:33 crc kubenswrapper[4853]: I1122 07:41:33.607224 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-jt4hx" event={"ID":"8b2e23ab-b228-4a69-866d-f16a8d51966a","Type":"ContainerStarted","Data":"c1adc43161a657395b67cc559c53c829491e0cc513cd2949727c834c39766390"} Nov 22 07:41:33 crc kubenswrapper[4853]: I1122 07:41:33.609044 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-80dd-account-create-4fgwn" event={"ID":"c662e6b6-1204-4d05-9b6d-b1d0c9afc613","Type":"ContainerStarted","Data":"23707bd68d576826984674055f3b6e1857eb34a3a6ae0b7201922d4982b745bd"} Nov 22 07:41:35 crc kubenswrapper[4853]: I1122 07:41:35.641901 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-3943-account-create-4vld8" event={"ID":"edece509-f388-43e4-b8e8-c6bce0659954","Type":"ContainerStarted","Data":"dafb7503c934853811547f03915e27676375f48e68f08dd7036038b32f63db99"} Nov 22 07:41:36 crc kubenswrapper[4853]: I1122 07:41:36.656775 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-caba-account-create-ltq46" event={"ID":"976dce54-751c-4418-9fc8-5ae4340d347f","Type":"ContainerStarted","Data":"848335d0ad529a5c668173cc96d09080dfc7c9290a39d88ea7ef87c0c00c6817"} Nov 22 07:41:36 crc kubenswrapper[4853]: I1122 07:41:36.659569 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-48f1-account-create-vlh5d" event={"ID":"cb6698be-a947-4b69-9312-cd3382abefe9","Type":"ContainerStarted","Data":"915327cc8865341dc97386fd7f4ebeb4cea536bf7051d5ea872199c547bc5844"} Nov 22 07:41:36 crc kubenswrapper[4853]: I1122 07:41:36.662029 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-lgnfr" event={"ID":"0cb597cd-e80d-468d-8d85-ab34391e70c6","Type":"ContainerStarted","Data":"677ddc2c25334218fd8b8016ea3bc764045d12837a29f5c16ed48b53c2a39fcf"} Nov 22 07:41:36 crc kubenswrapper[4853]: I1122 07:41:36.664372 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-80dd-account-create-4fgwn" event={"ID":"c662e6b6-1204-4d05-9b6d-b1d0c9afc613","Type":"ContainerStarted","Data":"915d440089db73eb2d99883ce7e639d4b34362febadb1b2dcadd7f233f724afc"} Nov 22 07:41:36 crc kubenswrapper[4853]: I1122 07:41:36.666196 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-7dn2k" event={"ID":"10f50975-476a-4fd0-b6dd-5195dfad3931","Type":"ContainerStarted","Data":"cc8617f03d625b5c1b6962819712d24a356c6e2363c4b3bbf38041fe6dbac4cf"} Nov 22 07:41:36 crc kubenswrapper[4853]: I1122 07:41:36.684888 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-create-4p4mm" podStartSLOduration=7.684864965 podStartE2EDuration="7.684864965s" podCreationTimestamp="2025-11-22 07:41:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:41:36.683420646 +0000 UTC 
m=+1895.524043282" watchObservedRunningTime="2025-11-22 07:41:36.684864965 +0000 UTC m=+1895.525487591" Nov 22 07:41:37 crc kubenswrapper[4853]: I1122 07:41:37.710900 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-create-jt4hx" podStartSLOduration=8.710859797 podStartE2EDuration="8.710859797s" podCreationTimestamp="2025-11-22 07:41:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:41:37.699580764 +0000 UTC m=+1896.540203430" watchObservedRunningTime="2025-11-22 07:41:37.710859797 +0000 UTC m=+1896.551482463" Nov 22 07:41:38 crc kubenswrapper[4853]: I1122 07:41:38.747602 4853 scope.go:117] "RemoveContainer" containerID="1e441baa32afe9d9e98e192afda6a47d949c68ceb8edb814ef148dd3d84f45b1" Nov 22 07:41:38 crc kubenswrapper[4853]: E1122 07:41:38.747994 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" Nov 22 07:41:41 crc kubenswrapper[4853]: I1122 07:41:41.755106 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-caba-account-create-ltq46" podStartSLOduration=11.755076823 podStartE2EDuration="11.755076823s" podCreationTimestamp="2025-11-22 07:41:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:41:41.751958968 +0000 UTC m=+1900.592581604" watchObservedRunningTime="2025-11-22 07:41:41.755076823 +0000 UTC m=+1900.595699449" Nov 22 07:41:41 crc kubenswrapper[4853]: I1122 07:41:41.781008 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-3943-account-create-4vld8" podStartSLOduration=12.780986038 podStartE2EDuration="12.780986038s" podCreationTimestamp="2025-11-22 07:41:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:41:41.773204189 +0000 UTC m=+1900.613826835" watchObservedRunningTime="2025-11-22 07:41:41.780986038 +0000 UTC m=+1900.621608664" Nov 22 07:41:41 crc kubenswrapper[4853]: I1122 07:41:41.797831 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-48f1-account-create-vlh5d" podStartSLOduration=11.79780964 podStartE2EDuration="11.79780964s" podCreationTimestamp="2025-11-22 07:41:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:41:41.790147894 +0000 UTC m=+1900.630770520" watchObservedRunningTime="2025-11-22 07:41:41.79780964 +0000 UTC m=+1900.638432266" Nov 22 07:41:41 crc kubenswrapper[4853]: I1122 07:41:41.836711 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-80dd-account-create-4fgwn" podStartSLOduration=11.836681314 podStartE2EDuration="11.836681314s" podCreationTimestamp="2025-11-22 07:41:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:41:41.825091503 +0000 UTC 
m=+1900.665714139" watchObservedRunningTime="2025-11-22 07:41:41.836681314 +0000 UTC m=+1900.677303940" Nov 22 07:41:41 crc kubenswrapper[4853]: I1122 07:41:41.843919 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-create-lgnfr" podStartSLOduration=11.843829916 podStartE2EDuration="11.843829916s" podCreationTimestamp="2025-11-22 07:41:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:41:41.807920961 +0000 UTC m=+1900.648543587" watchObservedRunningTime="2025-11-22 07:41:41.843829916 +0000 UTC m=+1900.684452542" Nov 22 07:41:41 crc kubenswrapper[4853]: I1122 07:41:41.853264 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-create-7dn2k" podStartSLOduration=11.853236368 podStartE2EDuration="11.853236368s" podCreationTimestamp="2025-11-22 07:41:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:41:41.840416694 +0000 UTC m=+1900.681039330" watchObservedRunningTime="2025-11-22 07:41:41.853236368 +0000 UTC m=+1900.693858994" Nov 22 07:41:53 crc kubenswrapper[4853]: I1122 07:41:53.748121 4853 scope.go:117] "RemoveContainer" containerID="1e441baa32afe9d9e98e192afda6a47d949c68ceb8edb814ef148dd3d84f45b1" Nov 22 07:41:53 crc kubenswrapper[4853]: E1122 07:41:53.749231 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" Nov 22 07:42:05 crc kubenswrapper[4853]: I1122 07:42:05.758231 4853 scope.go:117] "RemoveContainer" containerID="1e441baa32afe9d9e98e192afda6a47d949c68ceb8edb814ef148dd3d84f45b1" Nov 22 07:42:05 crc kubenswrapper[4853]: E1122 07:42:05.759250 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" Nov 22 07:42:09 crc kubenswrapper[4853]: E1122 07:42:09.380452 4853 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-keystone:current-podified" Nov 22 07:42:09 crc kubenswrapper[4853]: E1122 07:42:09.381024 4853 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:keystone-db-sync,Image:quay.io/podified-antelope-centos9/openstack-keystone:current-podified,Command:[/bin/bash],Args:[-c keystone-manage 
db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/keystone/keystone.conf,SubPath:keystone.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vkgqz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42425,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42425,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-db-sync-2vhq9_openstack(937b4e80-b6f5-4e62-8053-05ce38b1b105): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 22 07:42:09 crc kubenswrapper[4853]: E1122 07:42:09.382229 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"keystone-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/keystone-db-sync-2vhq9" podUID="937b4e80-b6f5-4e62-8053-05ce38b1b105" Nov 22 07:42:10 crc kubenswrapper[4853]: E1122 07:42:10.121865 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"keystone-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-keystone:current-podified\\\"\"" pod="openstack/keystone-db-sync-2vhq9" podUID="937b4e80-b6f5-4e62-8053-05ce38b1b105" Nov 22 07:42:15 crc kubenswrapper[4853]: I1122 07:42:15.170104 4853 generic.go:334] "Generic (PLEG): container finished" podID="8b2e23ab-b228-4a69-866d-f16a8d51966a" containerID="c1adc43161a657395b67cc559c53c829491e0cc513cd2949727c834c39766390" exitCode=0 Nov 22 07:42:15 crc kubenswrapper[4853]: I1122 07:42:15.170203 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-jt4hx" event={"ID":"8b2e23ab-b228-4a69-866d-f16a8d51966a","Type":"ContainerDied","Data":"c1adc43161a657395b67cc559c53c829491e0cc513cd2949727c834c39766390"} Nov 22 07:42:15 crc kubenswrapper[4853]: I1122 07:42:15.173700 4853 generic.go:334] "Generic (PLEG): container finished" podID="edece509-f388-43e4-b8e8-c6bce0659954" containerID="dafb7503c934853811547f03915e27676375f48e68f08dd7036038b32f63db99" exitCode=0 Nov 22 07:42:15 crc kubenswrapper[4853]: I1122 07:42:15.173793 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-3943-account-create-4vld8" 
event={"ID":"edece509-f388-43e4-b8e8-c6bce0659954","Type":"ContainerDied","Data":"dafb7503c934853811547f03915e27676375f48e68f08dd7036038b32f63db99"} Nov 22 07:42:15 crc kubenswrapper[4853]: I1122 07:42:15.175806 4853 generic.go:334] "Generic (PLEG): container finished" podID="1b40a89d-79b8-4428-99b7-a0d79520e8b8" containerID="dffc576b2a9f3dea8dda6a5e835de5cfc9795ae112e7807c9965766116b99569" exitCode=0 Nov 22 07:42:15 crc kubenswrapper[4853]: I1122 07:42:15.175854 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-4p4mm" event={"ID":"1b40a89d-79b8-4428-99b7-a0d79520e8b8","Type":"ContainerDied","Data":"dffc576b2a9f3dea8dda6a5e835de5cfc9795ae112e7807c9965766116b99569"} Nov 22 07:42:16 crc kubenswrapper[4853]: I1122 07:42:16.188015 4853 generic.go:334] "Generic (PLEG): container finished" podID="976dce54-751c-4418-9fc8-5ae4340d347f" containerID="848335d0ad529a5c668173cc96d09080dfc7c9290a39d88ea7ef87c0c00c6817" exitCode=0 Nov 22 07:42:16 crc kubenswrapper[4853]: I1122 07:42:16.188082 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-caba-account-create-ltq46" event={"ID":"976dce54-751c-4418-9fc8-5ae4340d347f","Type":"ContainerDied","Data":"848335d0ad529a5c668173cc96d09080dfc7c9290a39d88ea7ef87c0c00c6817"} Nov 22 07:42:16 crc kubenswrapper[4853]: I1122 07:42:16.190507 4853 generic.go:334] "Generic (PLEG): container finished" podID="0cb597cd-e80d-468d-8d85-ab34391e70c6" containerID="677ddc2c25334218fd8b8016ea3bc764045d12837a29f5c16ed48b53c2a39fcf" exitCode=0 Nov 22 07:42:16 crc kubenswrapper[4853]: I1122 07:42:16.190667 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-lgnfr" event={"ID":"0cb597cd-e80d-468d-8d85-ab34391e70c6","Type":"ContainerDied","Data":"677ddc2c25334218fd8b8016ea3bc764045d12837a29f5c16ed48b53c2a39fcf"} Nov 22 07:42:16 crc kubenswrapper[4853]: I1122 07:42:16.193522 4853 generic.go:334] "Generic (PLEG): container finished" podID="10f50975-476a-4fd0-b6dd-5195dfad3931" containerID="cc8617f03d625b5c1b6962819712d24a356c6e2363c4b3bbf38041fe6dbac4cf" exitCode=0 Nov 22 07:42:16 crc kubenswrapper[4853]: I1122 07:42:16.193721 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-7dn2k" event={"ID":"10f50975-476a-4fd0-b6dd-5195dfad3931","Type":"ContainerDied","Data":"cc8617f03d625b5c1b6962819712d24a356c6e2363c4b3bbf38041fe6dbac4cf"} Nov 22 07:42:16 crc kubenswrapper[4853]: I1122 07:42:16.840176 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-3943-account-create-4vld8" Nov 22 07:42:16 crc kubenswrapper[4853]: I1122 07:42:16.858846 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-jt4hx" Nov 22 07:42:16 crc kubenswrapper[4853]: I1122 07:42:16.871687 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-4p4mm" Nov 22 07:42:17 crc kubenswrapper[4853]: I1122 07:42:17.035710 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-thpn8\" (UniqueName: \"kubernetes.io/projected/1b40a89d-79b8-4428-99b7-a0d79520e8b8-kube-api-access-thpn8\") pod \"1b40a89d-79b8-4428-99b7-a0d79520e8b8\" (UID: \"1b40a89d-79b8-4428-99b7-a0d79520e8b8\") " Nov 22 07:42:17 crc kubenswrapper[4853]: I1122 07:42:17.035792 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/edece509-f388-43e4-b8e8-c6bce0659954-operator-scripts\") pod \"edece509-f388-43e4-b8e8-c6bce0659954\" (UID: \"edece509-f388-43e4-b8e8-c6bce0659954\") " Nov 22 07:42:17 crc kubenswrapper[4853]: I1122 07:42:17.035941 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1b40a89d-79b8-4428-99b7-a0d79520e8b8-operator-scripts\") pod \"1b40a89d-79b8-4428-99b7-a0d79520e8b8\" (UID: \"1b40a89d-79b8-4428-99b7-a0d79520e8b8\") " Nov 22 07:42:17 crc kubenswrapper[4853]: I1122 07:42:17.036075 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qvbct\" (UniqueName: \"kubernetes.io/projected/edece509-f388-43e4-b8e8-c6bce0659954-kube-api-access-qvbct\") pod \"edece509-f388-43e4-b8e8-c6bce0659954\" (UID: \"edece509-f388-43e4-b8e8-c6bce0659954\") " Nov 22 07:42:17 crc kubenswrapper[4853]: I1122 07:42:17.036201 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9v9f4\" (UniqueName: \"kubernetes.io/projected/8b2e23ab-b228-4a69-866d-f16a8d51966a-kube-api-access-9v9f4\") pod \"8b2e23ab-b228-4a69-866d-f16a8d51966a\" (UID: \"8b2e23ab-b228-4a69-866d-f16a8d51966a\") " Nov 22 07:42:17 crc kubenswrapper[4853]: I1122 07:42:17.036277 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8b2e23ab-b228-4a69-866d-f16a8d51966a-operator-scripts\") pod \"8b2e23ab-b228-4a69-866d-f16a8d51966a\" (UID: \"8b2e23ab-b228-4a69-866d-f16a8d51966a\") " Nov 22 07:42:17 crc kubenswrapper[4853]: I1122 07:42:17.036497 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/edece509-f388-43e4-b8e8-c6bce0659954-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "edece509-f388-43e4-b8e8-c6bce0659954" (UID: "edece509-f388-43e4-b8e8-c6bce0659954"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:42:17 crc kubenswrapper[4853]: I1122 07:42:17.036817 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1b40a89d-79b8-4428-99b7-a0d79520e8b8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1b40a89d-79b8-4428-99b7-a0d79520e8b8" (UID: "1b40a89d-79b8-4428-99b7-a0d79520e8b8"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:42:17 crc kubenswrapper[4853]: I1122 07:42:17.037081 4853 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/edece509-f388-43e4-b8e8-c6bce0659954-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 07:42:17 crc kubenswrapper[4853]: I1122 07:42:17.037130 4853 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1b40a89d-79b8-4428-99b7-a0d79520e8b8-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 07:42:17 crc kubenswrapper[4853]: I1122 07:42:17.037177 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8b2e23ab-b228-4a69-866d-f16a8d51966a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8b2e23ab-b228-4a69-866d-f16a8d51966a" (UID: "8b2e23ab-b228-4a69-866d-f16a8d51966a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:42:17 crc kubenswrapper[4853]: I1122 07:42:17.043818 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/edece509-f388-43e4-b8e8-c6bce0659954-kube-api-access-qvbct" (OuterVolumeSpecName: "kube-api-access-qvbct") pod "edece509-f388-43e4-b8e8-c6bce0659954" (UID: "edece509-f388-43e4-b8e8-c6bce0659954"). InnerVolumeSpecName "kube-api-access-qvbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:42:17 crc kubenswrapper[4853]: I1122 07:42:17.043928 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b40a89d-79b8-4428-99b7-a0d79520e8b8-kube-api-access-thpn8" (OuterVolumeSpecName: "kube-api-access-thpn8") pod "1b40a89d-79b8-4428-99b7-a0d79520e8b8" (UID: "1b40a89d-79b8-4428-99b7-a0d79520e8b8"). InnerVolumeSpecName "kube-api-access-thpn8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:42:17 crc kubenswrapper[4853]: I1122 07:42:17.043977 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b2e23ab-b228-4a69-866d-f16a8d51966a-kube-api-access-9v9f4" (OuterVolumeSpecName: "kube-api-access-9v9f4") pod "8b2e23ab-b228-4a69-866d-f16a8d51966a" (UID: "8b2e23ab-b228-4a69-866d-f16a8d51966a"). InnerVolumeSpecName "kube-api-access-9v9f4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:42:17 crc kubenswrapper[4853]: I1122 07:42:17.140060 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qvbct\" (UniqueName: \"kubernetes.io/projected/edece509-f388-43e4-b8e8-c6bce0659954-kube-api-access-qvbct\") on node \"crc\" DevicePath \"\"" Nov 22 07:42:17 crc kubenswrapper[4853]: I1122 07:42:17.140120 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9v9f4\" (UniqueName: \"kubernetes.io/projected/8b2e23ab-b228-4a69-866d-f16a8d51966a-kube-api-access-9v9f4\") on node \"crc\" DevicePath \"\"" Nov 22 07:42:17 crc kubenswrapper[4853]: I1122 07:42:17.140131 4853 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8b2e23ab-b228-4a69-866d-f16a8d51966a-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 07:42:17 crc kubenswrapper[4853]: I1122 07:42:17.140141 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-thpn8\" (UniqueName: \"kubernetes.io/projected/1b40a89d-79b8-4428-99b7-a0d79520e8b8-kube-api-access-thpn8\") on node \"crc\" DevicePath \"\"" Nov 22 07:42:17 crc kubenswrapper[4853]: I1122 07:42:17.209345 4853 generic.go:334] "Generic (PLEG): container finished" podID="cb6698be-a947-4b69-9312-cd3382abefe9" containerID="915327cc8865341dc97386fd7f4ebeb4cea536bf7051d5ea872199c547bc5844" exitCode=0 Nov 22 07:42:17 crc kubenswrapper[4853]: I1122 07:42:17.209431 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-48f1-account-create-vlh5d" event={"ID":"cb6698be-a947-4b69-9312-cd3382abefe9","Type":"ContainerDied","Data":"915327cc8865341dc97386fd7f4ebeb4cea536bf7051d5ea872199c547bc5844"} Nov 22 07:42:17 crc kubenswrapper[4853]: I1122 07:42:17.213212 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-4p4mm" event={"ID":"1b40a89d-79b8-4428-99b7-a0d79520e8b8","Type":"ContainerDied","Data":"c382d0452933cc8be2226fda691c58065be74f2e8d50ad44f1af7a11c24d39f0"} Nov 22 07:42:17 crc kubenswrapper[4853]: I1122 07:42:17.213244 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-4p4mm" Nov 22 07:42:17 crc kubenswrapper[4853]: I1122 07:42:17.213268 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c382d0452933cc8be2226fda691c58065be74f2e8d50ad44f1af7a11c24d39f0" Nov 22 07:42:17 crc kubenswrapper[4853]: I1122 07:42:17.215833 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-jt4hx" Nov 22 07:42:17 crc kubenswrapper[4853]: I1122 07:42:17.216022 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-jt4hx" event={"ID":"8b2e23ab-b228-4a69-866d-f16a8d51966a","Type":"ContainerDied","Data":"7fa2e3a87153954d8243ae2e0b740d44f7fbff029e526a99bd31a8f5e1e002d5"} Nov 22 07:42:17 crc kubenswrapper[4853]: I1122 07:42:17.216102 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7fa2e3a87153954d8243ae2e0b740d44f7fbff029e526a99bd31a8f5e1e002d5" Nov 22 07:42:17 crc kubenswrapper[4853]: I1122 07:42:17.218400 4853 generic.go:334] "Generic (PLEG): container finished" podID="c662e6b6-1204-4d05-9b6d-b1d0c9afc613" containerID="915d440089db73eb2d99883ce7e639d4b34362febadb1b2dcadd7f233f724afc" exitCode=0 Nov 22 07:42:17 crc kubenswrapper[4853]: I1122 07:42:17.218461 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-80dd-account-create-4fgwn" event={"ID":"c662e6b6-1204-4d05-9b6d-b1d0c9afc613","Type":"ContainerDied","Data":"915d440089db73eb2d99883ce7e639d4b34362febadb1b2dcadd7f233f724afc"} Nov 22 07:42:17 crc kubenswrapper[4853]: I1122 07:42:17.222436 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-3943-account-create-4vld8" Nov 22 07:42:17 crc kubenswrapper[4853]: I1122 07:42:17.222484 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-3943-account-create-4vld8" event={"ID":"edece509-f388-43e4-b8e8-c6bce0659954","Type":"ContainerDied","Data":"9a4009ba19db3fb8c6474948ba6e7bd4d45cb35e6af6a98fd6ca86e8fb2da21e"} Nov 22 07:42:17 crc kubenswrapper[4853]: I1122 07:42:17.222536 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9a4009ba19db3fb8c6474948ba6e7bd4d45cb35e6af6a98fd6ca86e8fb2da21e" Nov 22 07:42:17 crc kubenswrapper[4853]: I1122 07:42:17.797022 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-7dn2k" Nov 22 07:42:17 crc kubenswrapper[4853]: I1122 07:42:17.911297 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-caba-account-create-ltq46" Nov 22 07:42:17 crc kubenswrapper[4853]: I1122 07:42:17.917530 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-lgnfr" Nov 22 07:42:17 crc kubenswrapper[4853]: I1122 07:42:17.963794 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qpdw4\" (UniqueName: \"kubernetes.io/projected/10f50975-476a-4fd0-b6dd-5195dfad3931-kube-api-access-qpdw4\") pod \"10f50975-476a-4fd0-b6dd-5195dfad3931\" (UID: \"10f50975-476a-4fd0-b6dd-5195dfad3931\") " Nov 22 07:42:17 crc kubenswrapper[4853]: I1122 07:42:17.964349 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/10f50975-476a-4fd0-b6dd-5195dfad3931-operator-scripts\") pod \"10f50975-476a-4fd0-b6dd-5195dfad3931\" (UID: \"10f50975-476a-4fd0-b6dd-5195dfad3931\") " Nov 22 07:42:17 crc kubenswrapper[4853]: I1122 07:42:17.965219 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/10f50975-476a-4fd0-b6dd-5195dfad3931-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "10f50975-476a-4fd0-b6dd-5195dfad3931" (UID: "10f50975-476a-4fd0-b6dd-5195dfad3931"). 
InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:42:17 crc kubenswrapper[4853]: I1122 07:42:17.972049 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/10f50975-476a-4fd0-b6dd-5195dfad3931-kube-api-access-qpdw4" (OuterVolumeSpecName: "kube-api-access-qpdw4") pod "10f50975-476a-4fd0-b6dd-5195dfad3931" (UID: "10f50975-476a-4fd0-b6dd-5195dfad3931"). InnerVolumeSpecName "kube-api-access-qpdw4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:42:18 crc kubenswrapper[4853]: I1122 07:42:18.066698 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/976dce54-751c-4418-9fc8-5ae4340d347f-operator-scripts\") pod \"976dce54-751c-4418-9fc8-5ae4340d347f\" (UID: \"976dce54-751c-4418-9fc8-5ae4340d347f\") " Nov 22 07:42:18 crc kubenswrapper[4853]: I1122 07:42:18.067441 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/976dce54-751c-4418-9fc8-5ae4340d347f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "976dce54-751c-4418-9fc8-5ae4340d347f" (UID: "976dce54-751c-4418-9fc8-5ae4340d347f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:42:18 crc kubenswrapper[4853]: I1122 07:42:18.067621 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7gzkq\" (UniqueName: \"kubernetes.io/projected/0cb597cd-e80d-468d-8d85-ab34391e70c6-kube-api-access-7gzkq\") pod \"0cb597cd-e80d-468d-8d85-ab34391e70c6\" (UID: \"0cb597cd-e80d-468d-8d85-ab34391e70c6\") " Nov 22 07:42:18 crc kubenswrapper[4853]: I1122 07:42:18.067759 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0cb597cd-e80d-468d-8d85-ab34391e70c6-operator-scripts\") pod \"0cb597cd-e80d-468d-8d85-ab34391e70c6\" (UID: \"0cb597cd-e80d-468d-8d85-ab34391e70c6\") " Nov 22 07:42:18 crc kubenswrapper[4853]: I1122 07:42:18.067892 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-89wmj\" (UniqueName: \"kubernetes.io/projected/976dce54-751c-4418-9fc8-5ae4340d347f-kube-api-access-89wmj\") pod \"976dce54-751c-4418-9fc8-5ae4340d347f\" (UID: \"976dce54-751c-4418-9fc8-5ae4340d347f\") " Nov 22 07:42:18 crc kubenswrapper[4853]: I1122 07:42:18.068460 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0cb597cd-e80d-468d-8d85-ab34391e70c6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0cb597cd-e80d-468d-8d85-ab34391e70c6" (UID: "0cb597cd-e80d-468d-8d85-ab34391e70c6"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:42:18 crc kubenswrapper[4853]: I1122 07:42:18.068641 4853 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/976dce54-751c-4418-9fc8-5ae4340d347f-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 07:42:18 crc kubenswrapper[4853]: I1122 07:42:18.068766 4853 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/10f50975-476a-4fd0-b6dd-5195dfad3931-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 07:42:18 crc kubenswrapper[4853]: I1122 07:42:18.068851 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qpdw4\" (UniqueName: \"kubernetes.io/projected/10f50975-476a-4fd0-b6dd-5195dfad3931-kube-api-access-qpdw4\") on node \"crc\" DevicePath \"\"" Nov 22 07:42:18 crc kubenswrapper[4853]: I1122 07:42:18.072565 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/976dce54-751c-4418-9fc8-5ae4340d347f-kube-api-access-89wmj" (OuterVolumeSpecName: "kube-api-access-89wmj") pod "976dce54-751c-4418-9fc8-5ae4340d347f" (UID: "976dce54-751c-4418-9fc8-5ae4340d347f"). InnerVolumeSpecName "kube-api-access-89wmj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:42:18 crc kubenswrapper[4853]: I1122 07:42:18.073761 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0cb597cd-e80d-468d-8d85-ab34391e70c6-kube-api-access-7gzkq" (OuterVolumeSpecName: "kube-api-access-7gzkq") pod "0cb597cd-e80d-468d-8d85-ab34391e70c6" (UID: "0cb597cd-e80d-468d-8d85-ab34391e70c6"). InnerVolumeSpecName "kube-api-access-7gzkq". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:42:18 crc kubenswrapper[4853]: I1122 07:42:18.170803 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7gzkq\" (UniqueName: \"kubernetes.io/projected/0cb597cd-e80d-468d-8d85-ab34391e70c6-kube-api-access-7gzkq\") on node \"crc\" DevicePath \"\"" Nov 22 07:42:18 crc kubenswrapper[4853]: I1122 07:42:18.170861 4853 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0cb597cd-e80d-468d-8d85-ab34391e70c6-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 07:42:18 crc kubenswrapper[4853]: I1122 07:42:18.170873 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-89wmj\" (UniqueName: \"kubernetes.io/projected/976dce54-751c-4418-9fc8-5ae4340d347f-kube-api-access-89wmj\") on node \"crc\" DevicePath \"\"" Nov 22 07:42:18 crc kubenswrapper[4853]: I1122 07:42:18.235461 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-caba-account-create-ltq46" event={"ID":"976dce54-751c-4418-9fc8-5ae4340d347f","Type":"ContainerDied","Data":"2541941ab9daac6b9788ad3b98f4e02130c5be788ec3e1e1d9f76e136b7bed99"} Nov 22 07:42:18 crc kubenswrapper[4853]: I1122 07:42:18.235796 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2541941ab9daac6b9788ad3b98f4e02130c5be788ec3e1e1d9f76e136b7bed99" Nov 22 07:42:18 crc kubenswrapper[4853]: I1122 07:42:18.235550 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-caba-account-create-ltq46" Nov 22 07:42:18 crc kubenswrapper[4853]: I1122 07:42:18.239663 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-lgnfr" event={"ID":"0cb597cd-e80d-468d-8d85-ab34391e70c6","Type":"ContainerDied","Data":"5a6348be9669a76de7c393d2d5165a3abd073c97442e59dc8583b61bfdec961f"} Nov 22 07:42:18 crc kubenswrapper[4853]: I1122 07:42:18.239712 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5a6348be9669a76de7c393d2d5165a3abd073c97442e59dc8583b61bfdec961f" Nov 22 07:42:18 crc kubenswrapper[4853]: I1122 07:42:18.239991 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-lgnfr" Nov 22 07:42:18 crc kubenswrapper[4853]: I1122 07:42:18.241925 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-7dn2k" event={"ID":"10f50975-476a-4fd0-b6dd-5195dfad3931","Type":"ContainerDied","Data":"87210bc144eab0ec3d54e5aafe8647e9d650fa0fe5e7a5efe1aff552a9697d15"} Nov 22 07:42:18 crc kubenswrapper[4853]: I1122 07:42:18.241963 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="87210bc144eab0ec3d54e5aafe8647e9d650fa0fe5e7a5efe1aff552a9697d15" Nov 22 07:42:18 crc kubenswrapper[4853]: I1122 07:42:18.242139 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-7dn2k" Nov 22 07:42:18 crc kubenswrapper[4853]: I1122 07:42:18.840854 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-48f1-account-create-vlh5d" Nov 22 07:42:18 crc kubenswrapper[4853]: I1122 07:42:18.850253 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-80dd-account-create-4fgwn" Nov 22 07:42:18 crc kubenswrapper[4853]: I1122 07:42:18.889312 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c662e6b6-1204-4d05-9b6d-b1d0c9afc613-operator-scripts\") pod \"c662e6b6-1204-4d05-9b6d-b1d0c9afc613\" (UID: \"c662e6b6-1204-4d05-9b6d-b1d0c9afc613\") " Nov 22 07:42:18 crc kubenswrapper[4853]: I1122 07:42:18.889621 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k59s8\" (UniqueName: \"kubernetes.io/projected/cb6698be-a947-4b69-9312-cd3382abefe9-kube-api-access-k59s8\") pod \"cb6698be-a947-4b69-9312-cd3382abefe9\" (UID: \"cb6698be-a947-4b69-9312-cd3382abefe9\") " Nov 22 07:42:18 crc kubenswrapper[4853]: I1122 07:42:18.889775 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cb6698be-a947-4b69-9312-cd3382abefe9-operator-scripts\") pod \"cb6698be-a947-4b69-9312-cd3382abefe9\" (UID: \"cb6698be-a947-4b69-9312-cd3382abefe9\") " Nov 22 07:42:18 crc kubenswrapper[4853]: I1122 07:42:18.889806 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bgfcz\" (UniqueName: \"kubernetes.io/projected/c662e6b6-1204-4d05-9b6d-b1d0c9afc613-kube-api-access-bgfcz\") pod \"c662e6b6-1204-4d05-9b6d-b1d0c9afc613\" (UID: \"c662e6b6-1204-4d05-9b6d-b1d0c9afc613\") " Nov 22 07:42:18 crc kubenswrapper[4853]: I1122 07:42:18.895442 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb6698be-a947-4b69-9312-cd3382abefe9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "cb6698be-a947-4b69-9312-cd3382abefe9" (UID: "cb6698be-a947-4b69-9312-cd3382abefe9"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:42:18 crc kubenswrapper[4853]: I1122 07:42:18.895880 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c662e6b6-1204-4d05-9b6d-b1d0c9afc613-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c662e6b6-1204-4d05-9b6d-b1d0c9afc613" (UID: "c662e6b6-1204-4d05-9b6d-b1d0c9afc613"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:42:18 crc kubenswrapper[4853]: I1122 07:42:18.901800 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb6698be-a947-4b69-9312-cd3382abefe9-kube-api-access-k59s8" (OuterVolumeSpecName: "kube-api-access-k59s8") pod "cb6698be-a947-4b69-9312-cd3382abefe9" (UID: "cb6698be-a947-4b69-9312-cd3382abefe9"). InnerVolumeSpecName "kube-api-access-k59s8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:42:18 crc kubenswrapper[4853]: I1122 07:42:18.903686 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c662e6b6-1204-4d05-9b6d-b1d0c9afc613-kube-api-access-bgfcz" (OuterVolumeSpecName: "kube-api-access-bgfcz") pod "c662e6b6-1204-4d05-9b6d-b1d0c9afc613" (UID: "c662e6b6-1204-4d05-9b6d-b1d0c9afc613"). InnerVolumeSpecName "kube-api-access-bgfcz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:42:18 crc kubenswrapper[4853]: I1122 07:42:18.992826 4853 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cb6698be-a947-4b69-9312-cd3382abefe9-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 07:42:18 crc kubenswrapper[4853]: I1122 07:42:18.993417 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bgfcz\" (UniqueName: \"kubernetes.io/projected/c662e6b6-1204-4d05-9b6d-b1d0c9afc613-kube-api-access-bgfcz\") on node \"crc\" DevicePath \"\"" Nov 22 07:42:18 crc kubenswrapper[4853]: I1122 07:42:18.993431 4853 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c662e6b6-1204-4d05-9b6d-b1d0c9afc613-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 07:42:18 crc kubenswrapper[4853]: I1122 07:42:18.993441 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k59s8\" (UniqueName: \"kubernetes.io/projected/cb6698be-a947-4b69-9312-cd3382abefe9-kube-api-access-k59s8\") on node \"crc\" DevicePath \"\"" Nov 22 07:42:19 crc kubenswrapper[4853]: I1122 07:42:19.256461 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-48f1-account-create-vlh5d" event={"ID":"cb6698be-a947-4b69-9312-cd3382abefe9","Type":"ContainerDied","Data":"a9a537f0687e79eafbd12f2da828ab35995afae505edea98095519a8c00e800f"} Nov 22 07:42:19 crc kubenswrapper[4853]: I1122 07:42:19.256532 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a9a537f0687e79eafbd12f2da828ab35995afae505edea98095519a8c00e800f" Nov 22 07:42:19 crc kubenswrapper[4853]: I1122 07:42:19.256539 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-48f1-account-create-vlh5d" Nov 22 07:42:19 crc kubenswrapper[4853]: I1122 07:42:19.261132 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-80dd-account-create-4fgwn" event={"ID":"c662e6b6-1204-4d05-9b6d-b1d0c9afc613","Type":"ContainerDied","Data":"23707bd68d576826984674055f3b6e1857eb34a3a6ae0b7201922d4982b745bd"} Nov 22 07:42:19 crc kubenswrapper[4853]: I1122 07:42:19.261167 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="23707bd68d576826984674055f3b6e1857eb34a3a6ae0b7201922d4982b745bd" Nov 22 07:42:19 crc kubenswrapper[4853]: I1122 07:42:19.261273 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-80dd-account-create-4fgwn" Nov 22 07:42:19 crc kubenswrapper[4853]: I1122 07:42:19.748485 4853 scope.go:117] "RemoveContainer" containerID="1e441baa32afe9d9e98e192afda6a47d949c68ceb8edb814ef148dd3d84f45b1" Nov 22 07:42:19 crc kubenswrapper[4853]: E1122 07:42:19.749285 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" Nov 22 07:42:28 crc kubenswrapper[4853]: I1122 07:42:28.364002 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-2vhq9" event={"ID":"937b4e80-b6f5-4e62-8053-05ce38b1b105","Type":"ContainerStarted","Data":"af05b69ba1759fb5694a74d89b42cb95012e46c016f6361b98b3f3c9c5d64838"} Nov 22 07:42:29 crc kubenswrapper[4853]: I1122 07:42:29.398595 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-2vhq9" podStartSLOduration=3.986073156 podStartE2EDuration="59.398555105s" podCreationTimestamp="2025-11-22 07:41:30 +0000 UTC" firstStartedPulling="2025-11-22 07:41:32.425898824 +0000 UTC m=+1891.266521450" lastFinishedPulling="2025-11-22 07:42:27.838380773 +0000 UTC m=+1946.679003399" observedRunningTime="2025-11-22 07:42:29.394920837 +0000 UTC m=+1948.235543463" watchObservedRunningTime="2025-11-22 07:42:29.398555105 +0000 UTC m=+1948.239177731" Nov 22 07:42:32 crc kubenswrapper[4853]: I1122 07:42:32.748696 4853 scope.go:117] "RemoveContainer" containerID="1e441baa32afe9d9e98e192afda6a47d949c68ceb8edb814ef148dd3d84f45b1" Nov 22 07:42:32 crc kubenswrapper[4853]: E1122 07:42:32.749973 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" Nov 22 07:42:43 crc kubenswrapper[4853]: I1122 07:42:43.749690 4853 scope.go:117] "RemoveContainer" containerID="1e441baa32afe9d9e98e192afda6a47d949c68ceb8edb814ef148dd3d84f45b1" Nov 22 07:42:43 crc kubenswrapper[4853]: E1122 07:42:43.751040 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" Nov 22 07:42:57 crc kubenswrapper[4853]: I1122 07:42:57.749575 4853 scope.go:117] "RemoveContainer" containerID="1e441baa32afe9d9e98e192afda6a47d949c68ceb8edb814ef148dd3d84f45b1" Nov 22 07:42:57 crc kubenswrapper[4853]: E1122 07:42:57.751022 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" Nov 22 07:43:10 crc kubenswrapper[4853]: I1122 07:43:10.899207 4853 generic.go:334] "Generic (PLEG): container finished" podID="937b4e80-b6f5-4e62-8053-05ce38b1b105" containerID="af05b69ba1759fb5694a74d89b42cb95012e46c016f6361b98b3f3c9c5d64838" exitCode=0 Nov 22 07:43:10 crc kubenswrapper[4853]: I1122 07:43:10.899338 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-2vhq9" event={"ID":"937b4e80-b6f5-4e62-8053-05ce38b1b105","Type":"ContainerDied","Data":"af05b69ba1759fb5694a74d89b42cb95012e46c016f6361b98b3f3c9c5d64838"} Nov 22 07:43:12 crc kubenswrapper[4853]: I1122 07:43:12.316626 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-2vhq9" Nov 22 07:43:12 crc kubenswrapper[4853]: I1122 07:43:12.408670 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vkgqz\" (UniqueName: \"kubernetes.io/projected/937b4e80-b6f5-4e62-8053-05ce38b1b105-kube-api-access-vkgqz\") pod \"937b4e80-b6f5-4e62-8053-05ce38b1b105\" (UID: \"937b4e80-b6f5-4e62-8053-05ce38b1b105\") " Nov 22 07:43:12 crc kubenswrapper[4853]: I1122 07:43:12.409034 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/937b4e80-b6f5-4e62-8053-05ce38b1b105-config-data\") pod \"937b4e80-b6f5-4e62-8053-05ce38b1b105\" (UID: \"937b4e80-b6f5-4e62-8053-05ce38b1b105\") " Nov 22 07:43:12 crc kubenswrapper[4853]: I1122 07:43:12.409288 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/937b4e80-b6f5-4e62-8053-05ce38b1b105-combined-ca-bundle\") pod \"937b4e80-b6f5-4e62-8053-05ce38b1b105\" (UID: \"937b4e80-b6f5-4e62-8053-05ce38b1b105\") " Nov 22 07:43:12 crc kubenswrapper[4853]: I1122 07:43:12.417561 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/937b4e80-b6f5-4e62-8053-05ce38b1b105-kube-api-access-vkgqz" (OuterVolumeSpecName: "kube-api-access-vkgqz") pod "937b4e80-b6f5-4e62-8053-05ce38b1b105" (UID: "937b4e80-b6f5-4e62-8053-05ce38b1b105"). InnerVolumeSpecName "kube-api-access-vkgqz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:43:12 crc kubenswrapper[4853]: I1122 07:43:12.446062 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/937b4e80-b6f5-4e62-8053-05ce38b1b105-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "937b4e80-b6f5-4e62-8053-05ce38b1b105" (UID: "937b4e80-b6f5-4e62-8053-05ce38b1b105"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:43:12 crc kubenswrapper[4853]: I1122 07:43:12.477320 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/937b4e80-b6f5-4e62-8053-05ce38b1b105-config-data" (OuterVolumeSpecName: "config-data") pod "937b4e80-b6f5-4e62-8053-05ce38b1b105" (UID: "937b4e80-b6f5-4e62-8053-05ce38b1b105"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:43:12 crc kubenswrapper[4853]: I1122 07:43:12.512734 4853 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/937b4e80-b6f5-4e62-8053-05ce38b1b105-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:43:12 crc kubenswrapper[4853]: I1122 07:43:12.512804 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vkgqz\" (UniqueName: \"kubernetes.io/projected/937b4e80-b6f5-4e62-8053-05ce38b1b105-kube-api-access-vkgqz\") on node \"crc\" DevicePath \"\"" Nov 22 07:43:12 crc kubenswrapper[4853]: I1122 07:43:12.512821 4853 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/937b4e80-b6f5-4e62-8053-05ce38b1b105-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 07:43:12 crc kubenswrapper[4853]: I1122 07:43:12.748585 4853 scope.go:117] "RemoveContainer" containerID="1e441baa32afe9d9e98e192afda6a47d949c68ceb8edb814ef148dd3d84f45b1" Nov 22 07:43:12 crc kubenswrapper[4853]: E1122 07:43:12.749043 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" Nov 22 07:43:12 crc kubenswrapper[4853]: I1122 07:43:12.923482 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-2vhq9" event={"ID":"937b4e80-b6f5-4e62-8053-05ce38b1b105","Type":"ContainerDied","Data":"e167c32e04e54bba181e678bd38109cb9c410116a215963c9d21ccbb07024bcf"} Nov 22 07:43:12 crc kubenswrapper[4853]: I1122 07:43:12.923536 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e167c32e04e54bba181e678bd38109cb9c410116a215963c9d21ccbb07024bcf" Nov 22 07:43:12 crc kubenswrapper[4853]: I1122 07:43:12.923920 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-2vhq9" Nov 22 07:43:13 crc kubenswrapper[4853]: I1122 07:43:13.213477 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6f8c45789f-5fmx2"] Nov 22 07:43:13 crc kubenswrapper[4853]: E1122 07:43:13.214124 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10f50975-476a-4fd0-b6dd-5195dfad3931" containerName="mariadb-database-create" Nov 22 07:43:13 crc kubenswrapper[4853]: I1122 07:43:13.214157 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="10f50975-476a-4fd0-b6dd-5195dfad3931" containerName="mariadb-database-create" Nov 22 07:43:13 crc kubenswrapper[4853]: E1122 07:43:13.214183 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b40a89d-79b8-4428-99b7-a0d79520e8b8" containerName="mariadb-database-create" Nov 22 07:43:13 crc kubenswrapper[4853]: I1122 07:43:13.214191 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b40a89d-79b8-4428-99b7-a0d79520e8b8" containerName="mariadb-database-create" Nov 22 07:43:13 crc kubenswrapper[4853]: E1122 07:43:13.214205 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="edece509-f388-43e4-b8e8-c6bce0659954" containerName="mariadb-account-create" Nov 22 07:43:13 crc kubenswrapper[4853]: I1122 07:43:13.214213 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="edece509-f388-43e4-b8e8-c6bce0659954" containerName="mariadb-account-create" Nov 22 07:43:13 crc kubenswrapper[4853]: E1122 07:43:13.214227 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="937b4e80-b6f5-4e62-8053-05ce38b1b105" containerName="keystone-db-sync" Nov 22 07:43:13 crc kubenswrapper[4853]: I1122 07:43:13.214235 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="937b4e80-b6f5-4e62-8053-05ce38b1b105" containerName="keystone-db-sync" Nov 22 07:43:13 crc kubenswrapper[4853]: E1122 07:43:13.214252 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="976dce54-751c-4418-9fc8-5ae4340d347f" containerName="mariadb-account-create" Nov 22 07:43:13 crc kubenswrapper[4853]: I1122 07:43:13.214259 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="976dce54-751c-4418-9fc8-5ae4340d347f" containerName="mariadb-account-create" Nov 22 07:43:13 crc kubenswrapper[4853]: E1122 07:43:13.214270 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b2e23ab-b228-4a69-866d-f16a8d51966a" containerName="mariadb-database-create" Nov 22 07:43:13 crc kubenswrapper[4853]: I1122 07:43:13.214277 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b2e23ab-b228-4a69-866d-f16a8d51966a" containerName="mariadb-database-create" Nov 22 07:43:13 crc kubenswrapper[4853]: E1122 07:43:13.214295 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb6698be-a947-4b69-9312-cd3382abefe9" containerName="mariadb-account-create" Nov 22 07:43:13 crc kubenswrapper[4853]: I1122 07:43:13.214301 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb6698be-a947-4b69-9312-cd3382abefe9" containerName="mariadb-account-create" Nov 22 07:43:13 crc kubenswrapper[4853]: E1122 07:43:13.214312 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0cb597cd-e80d-468d-8d85-ab34391e70c6" containerName="mariadb-database-create" Nov 22 07:43:13 crc kubenswrapper[4853]: I1122 07:43:13.214320 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="0cb597cd-e80d-468d-8d85-ab34391e70c6" containerName="mariadb-database-create" Nov 22 07:43:13 crc kubenswrapper[4853]: 
E1122 07:43:13.214332 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c662e6b6-1204-4d05-9b6d-b1d0c9afc613" containerName="mariadb-account-create" Nov 22 07:43:13 crc kubenswrapper[4853]: I1122 07:43:13.214339 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="c662e6b6-1204-4d05-9b6d-b1d0c9afc613" containerName="mariadb-account-create" Nov 22 07:43:13 crc kubenswrapper[4853]: I1122 07:43:13.214603 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b2e23ab-b228-4a69-866d-f16a8d51966a" containerName="mariadb-database-create" Nov 22 07:43:13 crc kubenswrapper[4853]: I1122 07:43:13.214621 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="10f50975-476a-4fd0-b6dd-5195dfad3931" containerName="mariadb-database-create" Nov 22 07:43:13 crc kubenswrapper[4853]: I1122 07:43:13.214639 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="976dce54-751c-4418-9fc8-5ae4340d347f" containerName="mariadb-account-create" Nov 22 07:43:13 crc kubenswrapper[4853]: I1122 07:43:13.214655 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="edece509-f388-43e4-b8e8-c6bce0659954" containerName="mariadb-account-create" Nov 22 07:43:13 crc kubenswrapper[4853]: I1122 07:43:13.214668 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="937b4e80-b6f5-4e62-8053-05ce38b1b105" containerName="keystone-db-sync" Nov 22 07:43:13 crc kubenswrapper[4853]: I1122 07:43:13.214682 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="c662e6b6-1204-4d05-9b6d-b1d0c9afc613" containerName="mariadb-account-create" Nov 22 07:43:13 crc kubenswrapper[4853]: I1122 07:43:13.214691 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b40a89d-79b8-4428-99b7-a0d79520e8b8" containerName="mariadb-database-create" Nov 22 07:43:13 crc kubenswrapper[4853]: I1122 07:43:13.214705 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb6698be-a947-4b69-9312-cd3382abefe9" containerName="mariadb-account-create" Nov 22 07:43:13 crc kubenswrapper[4853]: I1122 07:43:13.214718 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="0cb597cd-e80d-468d-8d85-ab34391e70c6" containerName="mariadb-database-create" Nov 22 07:43:13 crc kubenswrapper[4853]: I1122 07:43:13.216802 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6f8c45789f-5fmx2" Nov 22 07:43:13 crc kubenswrapper[4853]: I1122 07:43:13.236962 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6f8c45789f-5fmx2"] Nov 22 07:43:13 crc kubenswrapper[4853]: I1122 07:43:13.338635 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c783b7ee-b794-473f-a4f8-cbb907c89e3d-ovsdbserver-sb\") pod \"dnsmasq-dns-6f8c45789f-5fmx2\" (UID: \"c783b7ee-b794-473f-a4f8-cbb907c89e3d\") " pod="openstack/dnsmasq-dns-6f8c45789f-5fmx2" Nov 22 07:43:13 crc kubenswrapper[4853]: I1122 07:43:13.338850 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c783b7ee-b794-473f-a4f8-cbb907c89e3d-config\") pod \"dnsmasq-dns-6f8c45789f-5fmx2\" (UID: \"c783b7ee-b794-473f-a4f8-cbb907c89e3d\") " pod="openstack/dnsmasq-dns-6f8c45789f-5fmx2" Nov 22 07:43:13 crc kubenswrapper[4853]: I1122 07:43:13.338977 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c783b7ee-b794-473f-a4f8-cbb907c89e3d-dns-svc\") pod \"dnsmasq-dns-6f8c45789f-5fmx2\" (UID: \"c783b7ee-b794-473f-a4f8-cbb907c89e3d\") " pod="openstack/dnsmasq-dns-6f8c45789f-5fmx2" Nov 22 07:43:13 crc kubenswrapper[4853]: I1122 07:43:13.339007 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4xmp6\" (UniqueName: \"kubernetes.io/projected/c783b7ee-b794-473f-a4f8-cbb907c89e3d-kube-api-access-4xmp6\") pod \"dnsmasq-dns-6f8c45789f-5fmx2\" (UID: \"c783b7ee-b794-473f-a4f8-cbb907c89e3d\") " pod="openstack/dnsmasq-dns-6f8c45789f-5fmx2" Nov 22 07:43:13 crc kubenswrapper[4853]: I1122 07:43:13.339366 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c783b7ee-b794-473f-a4f8-cbb907c89e3d-dns-swift-storage-0\") pod \"dnsmasq-dns-6f8c45789f-5fmx2\" (UID: \"c783b7ee-b794-473f-a4f8-cbb907c89e3d\") " pod="openstack/dnsmasq-dns-6f8c45789f-5fmx2" Nov 22 07:43:13 crc kubenswrapper[4853]: I1122 07:43:13.339393 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c783b7ee-b794-473f-a4f8-cbb907c89e3d-ovsdbserver-nb\") pod \"dnsmasq-dns-6f8c45789f-5fmx2\" (UID: \"c783b7ee-b794-473f-a4f8-cbb907c89e3d\") " pod="openstack/dnsmasq-dns-6f8c45789f-5fmx2" Nov 22 07:43:13 crc kubenswrapper[4853]: I1122 07:43:13.348356 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-zg6t9"] Nov 22 07:43:13 crc kubenswrapper[4853]: I1122 07:43:13.350573 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-zg6t9" Nov 22 07:43:13 crc kubenswrapper[4853]: I1122 07:43:13.358354 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 22 07:43:13 crc kubenswrapper[4853]: I1122 07:43:13.358796 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Nov 22 07:43:13 crc kubenswrapper[4853]: I1122 07:43:13.374048 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 22 07:43:13 crc kubenswrapper[4853]: I1122 07:43:13.374311 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 22 07:43:13 crc kubenswrapper[4853]: I1122 07:43:13.378008 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-n8jmf" Nov 22 07:43:13 crc kubenswrapper[4853]: I1122 07:43:13.382612 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-zg6t9"] Nov 22 07:43:13 crc kubenswrapper[4853]: I1122 07:43:13.441392 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea831189-252a-49b8-820a-e366450efa38-config-data\") pod \"keystone-bootstrap-zg6t9\" (UID: \"ea831189-252a-49b8-820a-e366450efa38\") " pod="openstack/keystone-bootstrap-zg6t9" Nov 22 07:43:13 crc kubenswrapper[4853]: I1122 07:43:13.441489 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/ea831189-252a-49b8-820a-e366450efa38-credential-keys\") pod \"keystone-bootstrap-zg6t9\" (UID: \"ea831189-252a-49b8-820a-e366450efa38\") " pod="openstack/keystone-bootstrap-zg6t9" Nov 22 07:43:13 crc kubenswrapper[4853]: I1122 07:43:13.441533 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c783b7ee-b794-473f-a4f8-cbb907c89e3d-dns-swift-storage-0\") pod \"dnsmasq-dns-6f8c45789f-5fmx2\" (UID: \"c783b7ee-b794-473f-a4f8-cbb907c89e3d\") " pod="openstack/dnsmasq-dns-6f8c45789f-5fmx2" Nov 22 07:43:13 crc kubenswrapper[4853]: I1122 07:43:13.441552 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c783b7ee-b794-473f-a4f8-cbb907c89e3d-ovsdbserver-nb\") pod \"dnsmasq-dns-6f8c45789f-5fmx2\" (UID: \"c783b7ee-b794-473f-a4f8-cbb907c89e3d\") " pod="openstack/dnsmasq-dns-6f8c45789f-5fmx2" Nov 22 07:43:13 crc kubenswrapper[4853]: I1122 07:43:13.441585 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqnjl\" (UniqueName: \"kubernetes.io/projected/ea831189-252a-49b8-820a-e366450efa38-kube-api-access-kqnjl\") pod \"keystone-bootstrap-zg6t9\" (UID: \"ea831189-252a-49b8-820a-e366450efa38\") " pod="openstack/keystone-bootstrap-zg6t9" Nov 22 07:43:13 crc kubenswrapper[4853]: I1122 07:43:13.441623 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c783b7ee-b794-473f-a4f8-cbb907c89e3d-ovsdbserver-sb\") pod \"dnsmasq-dns-6f8c45789f-5fmx2\" (UID: \"c783b7ee-b794-473f-a4f8-cbb907c89e3d\") " pod="openstack/dnsmasq-dns-6f8c45789f-5fmx2" Nov 22 07:43:13 crc kubenswrapper[4853]: I1122 07:43:13.441673 4853 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c783b7ee-b794-473f-a4f8-cbb907c89e3d-config\") pod \"dnsmasq-dns-6f8c45789f-5fmx2\" (UID: \"c783b7ee-b794-473f-a4f8-cbb907c89e3d\") " pod="openstack/dnsmasq-dns-6f8c45789f-5fmx2" Nov 22 07:43:13 crc kubenswrapper[4853]: I1122 07:43:13.441713 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c783b7ee-b794-473f-a4f8-cbb907c89e3d-dns-svc\") pod \"dnsmasq-dns-6f8c45789f-5fmx2\" (UID: \"c783b7ee-b794-473f-a4f8-cbb907c89e3d\") " pod="openstack/dnsmasq-dns-6f8c45789f-5fmx2" Nov 22 07:43:13 crc kubenswrapper[4853]: I1122 07:43:13.441731 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea831189-252a-49b8-820a-e366450efa38-combined-ca-bundle\") pod \"keystone-bootstrap-zg6t9\" (UID: \"ea831189-252a-49b8-820a-e366450efa38\") " pod="openstack/keystone-bootstrap-zg6t9" Nov 22 07:43:13 crc kubenswrapper[4853]: I1122 07:43:13.441762 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ea831189-252a-49b8-820a-e366450efa38-scripts\") pod \"keystone-bootstrap-zg6t9\" (UID: \"ea831189-252a-49b8-820a-e366450efa38\") " pod="openstack/keystone-bootstrap-zg6t9" Nov 22 07:43:13 crc kubenswrapper[4853]: I1122 07:43:13.441782 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4xmp6\" (UniqueName: \"kubernetes.io/projected/c783b7ee-b794-473f-a4f8-cbb907c89e3d-kube-api-access-4xmp6\") pod \"dnsmasq-dns-6f8c45789f-5fmx2\" (UID: \"c783b7ee-b794-473f-a4f8-cbb907c89e3d\") " pod="openstack/dnsmasq-dns-6f8c45789f-5fmx2" Nov 22 07:43:13 crc kubenswrapper[4853]: I1122 07:43:13.441804 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ea831189-252a-49b8-820a-e366450efa38-fernet-keys\") pod \"keystone-bootstrap-zg6t9\" (UID: \"ea831189-252a-49b8-820a-e366450efa38\") " pod="openstack/keystone-bootstrap-zg6t9" Nov 22 07:43:13 crc kubenswrapper[4853]: I1122 07:43:13.445312 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c783b7ee-b794-473f-a4f8-cbb907c89e3d-dns-swift-storage-0\") pod \"dnsmasq-dns-6f8c45789f-5fmx2\" (UID: \"c783b7ee-b794-473f-a4f8-cbb907c89e3d\") " pod="openstack/dnsmasq-dns-6f8c45789f-5fmx2" Nov 22 07:43:13 crc kubenswrapper[4853]: I1122 07:43:13.447520 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c783b7ee-b794-473f-a4f8-cbb907c89e3d-ovsdbserver-nb\") pod \"dnsmasq-dns-6f8c45789f-5fmx2\" (UID: \"c783b7ee-b794-473f-a4f8-cbb907c89e3d\") " pod="openstack/dnsmasq-dns-6f8c45789f-5fmx2" Nov 22 07:43:13 crc kubenswrapper[4853]: I1122 07:43:13.448308 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c783b7ee-b794-473f-a4f8-cbb907c89e3d-config\") pod \"dnsmasq-dns-6f8c45789f-5fmx2\" (UID: \"c783b7ee-b794-473f-a4f8-cbb907c89e3d\") " pod="openstack/dnsmasq-dns-6f8c45789f-5fmx2" Nov 22 07:43:13 crc kubenswrapper[4853]: I1122 07:43:13.451374 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/c783b7ee-b794-473f-a4f8-cbb907c89e3d-dns-svc\") pod \"dnsmasq-dns-6f8c45789f-5fmx2\" (UID: \"c783b7ee-b794-473f-a4f8-cbb907c89e3d\") " pod="openstack/dnsmasq-dns-6f8c45789f-5fmx2" Nov 22 07:43:13 crc kubenswrapper[4853]: I1122 07:43:13.454810 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c783b7ee-b794-473f-a4f8-cbb907c89e3d-ovsdbserver-sb\") pod \"dnsmasq-dns-6f8c45789f-5fmx2\" (UID: \"c783b7ee-b794-473f-a4f8-cbb907c89e3d\") " pod="openstack/dnsmasq-dns-6f8c45789f-5fmx2" Nov 22 07:43:13 crc kubenswrapper[4853]: I1122 07:43:13.471400 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4xmp6\" (UniqueName: \"kubernetes.io/projected/c783b7ee-b794-473f-a4f8-cbb907c89e3d-kube-api-access-4xmp6\") pod \"dnsmasq-dns-6f8c45789f-5fmx2\" (UID: \"c783b7ee-b794-473f-a4f8-cbb907c89e3d\") " pod="openstack/dnsmasq-dns-6f8c45789f-5fmx2" Nov 22 07:43:13 crc kubenswrapper[4853]: I1122 07:43:13.544790 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6f8c45789f-5fmx2" Nov 22 07:43:13 crc kubenswrapper[4853]: I1122 07:43:13.548323 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ea831189-252a-49b8-820a-e366450efa38-scripts\") pod \"keystone-bootstrap-zg6t9\" (UID: \"ea831189-252a-49b8-820a-e366450efa38\") " pod="openstack/keystone-bootstrap-zg6t9" Nov 22 07:43:13 crc kubenswrapper[4853]: I1122 07:43:13.548430 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ea831189-252a-49b8-820a-e366450efa38-fernet-keys\") pod \"keystone-bootstrap-zg6t9\" (UID: \"ea831189-252a-49b8-820a-e366450efa38\") " pod="openstack/keystone-bootstrap-zg6t9" Nov 22 07:43:13 crc kubenswrapper[4853]: I1122 07:43:13.548634 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea831189-252a-49b8-820a-e366450efa38-config-data\") pod \"keystone-bootstrap-zg6t9\" (UID: \"ea831189-252a-49b8-820a-e366450efa38\") " pod="openstack/keystone-bootstrap-zg6t9" Nov 22 07:43:13 crc kubenswrapper[4853]: I1122 07:43:13.548734 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/ea831189-252a-49b8-820a-e366450efa38-credential-keys\") pod \"keystone-bootstrap-zg6t9\" (UID: \"ea831189-252a-49b8-820a-e366450efa38\") " pod="openstack/keystone-bootstrap-zg6t9" Nov 22 07:43:13 crc kubenswrapper[4853]: I1122 07:43:13.548853 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kqnjl\" (UniqueName: \"kubernetes.io/projected/ea831189-252a-49b8-820a-e366450efa38-kube-api-access-kqnjl\") pod \"keystone-bootstrap-zg6t9\" (UID: \"ea831189-252a-49b8-820a-e366450efa38\") " pod="openstack/keystone-bootstrap-zg6t9" Nov 22 07:43:13 crc kubenswrapper[4853]: I1122 07:43:13.549056 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea831189-252a-49b8-820a-e366450efa38-combined-ca-bundle\") pod \"keystone-bootstrap-zg6t9\" (UID: \"ea831189-252a-49b8-820a-e366450efa38\") " pod="openstack/keystone-bootstrap-zg6t9" Nov 22 07:43:13 crc kubenswrapper[4853]: I1122 07:43:13.554728 4853 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea831189-252a-49b8-820a-e366450efa38-combined-ca-bundle\") pod \"keystone-bootstrap-zg6t9\" (UID: \"ea831189-252a-49b8-820a-e366450efa38\") " pod="openstack/keystone-bootstrap-zg6t9" Nov 22 07:43:13 crc kubenswrapper[4853]: I1122 07:43:13.562788 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ea831189-252a-49b8-820a-e366450efa38-fernet-keys\") pod \"keystone-bootstrap-zg6t9\" (UID: \"ea831189-252a-49b8-820a-e366450efa38\") " pod="openstack/keystone-bootstrap-zg6t9" Nov 22 07:43:13 crc kubenswrapper[4853]: I1122 07:43:13.576266 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/ea831189-252a-49b8-820a-e366450efa38-credential-keys\") pod \"keystone-bootstrap-zg6t9\" (UID: \"ea831189-252a-49b8-820a-e366450efa38\") " pod="openstack/keystone-bootstrap-zg6t9" Nov 22 07:43:13 crc kubenswrapper[4853]: I1122 07:43:13.578881 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea831189-252a-49b8-820a-e366450efa38-config-data\") pod \"keystone-bootstrap-zg6t9\" (UID: \"ea831189-252a-49b8-820a-e366450efa38\") " pod="openstack/keystone-bootstrap-zg6t9" Nov 22 07:43:13 crc kubenswrapper[4853]: I1122 07:43:13.580264 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ea831189-252a-49b8-820a-e366450efa38-scripts\") pod \"keystone-bootstrap-zg6t9\" (UID: \"ea831189-252a-49b8-820a-e366450efa38\") " pod="openstack/keystone-bootstrap-zg6t9" Nov 22 07:43:13 crc kubenswrapper[4853]: I1122 07:43:13.621537 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kqnjl\" (UniqueName: \"kubernetes.io/projected/ea831189-252a-49b8-820a-e366450efa38-kube-api-access-kqnjl\") pod \"keystone-bootstrap-zg6t9\" (UID: \"ea831189-252a-49b8-820a-e366450efa38\") " pod="openstack/keystone-bootstrap-zg6t9" Nov 22 07:43:13 crc kubenswrapper[4853]: I1122 07:43:13.669058 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-sync-7xksh"] Nov 22 07:43:13 crc kubenswrapper[4853]: I1122 07:43:13.671027 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-7xksh" Nov 22 07:43:13 crc kubenswrapper[4853]: I1122 07:43:13.678277 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-htbfq" Nov 22 07:43:13 crc kubenswrapper[4853]: I1122 07:43:13.678496 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Nov 22 07:43:13 crc kubenswrapper[4853]: I1122 07:43:13.703571 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-zg6t9" Nov 22 07:43:13 crc kubenswrapper[4853]: I1122 07:43:13.717985 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-7xksh"] Nov 22 07:43:13 crc kubenswrapper[4853]: I1122 07:43:13.753530 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a08a523-61a0-4155-b389-0491bcd97e84-config-data\") pod \"heat-db-sync-7xksh\" (UID: \"5a08a523-61a0-4155-b389-0491bcd97e84\") " pod="openstack/heat-db-sync-7xksh" Nov 22 07:43:13 crc kubenswrapper[4853]: I1122 07:43:13.753601 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a08a523-61a0-4155-b389-0491bcd97e84-combined-ca-bundle\") pod \"heat-db-sync-7xksh\" (UID: \"5a08a523-61a0-4155-b389-0491bcd97e84\") " pod="openstack/heat-db-sync-7xksh" Nov 22 07:43:13 crc kubenswrapper[4853]: I1122 07:43:13.753640 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bzw9z\" (UniqueName: \"kubernetes.io/projected/5a08a523-61a0-4155-b389-0491bcd97e84-kube-api-access-bzw9z\") pod \"heat-db-sync-7xksh\" (UID: \"5a08a523-61a0-4155-b389-0491bcd97e84\") " pod="openstack/heat-db-sync-7xksh" Nov 22 07:43:13 crc kubenswrapper[4853]: I1122 07:43:13.769428 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-nnfsq"] Nov 22 07:43:13 crc kubenswrapper[4853]: I1122 07:43:13.800431 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-nnfsq"] Nov 22 07:43:13 crc kubenswrapper[4853]: I1122 07:43:13.800560 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-nnfsq" Nov 22 07:43:13 crc kubenswrapper[4853]: I1122 07:43:13.818507 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Nov 22 07:43:13 crc kubenswrapper[4853]: I1122 07:43:13.821419 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-lmnjv" Nov 22 07:43:13 crc kubenswrapper[4853]: I1122 07:43:13.830148 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Nov 22 07:43:13 crc kubenswrapper[4853]: I1122 07:43:13.882566 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a08a523-61a0-4155-b389-0491bcd97e84-config-data\") pod \"heat-db-sync-7xksh\" (UID: \"5a08a523-61a0-4155-b389-0491bcd97e84\") " pod="openstack/heat-db-sync-7xksh" Nov 22 07:43:13 crc kubenswrapper[4853]: I1122 07:43:13.882610 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s887s\" (UniqueName: \"kubernetes.io/projected/29d503fd-37f2-453c-aba9-5d2fb2c6aad0-kube-api-access-s887s\") pod \"cinder-db-sync-nnfsq\" (UID: \"29d503fd-37f2-453c-aba9-5d2fb2c6aad0\") " pod="openstack/cinder-db-sync-nnfsq" Nov 22 07:43:13 crc kubenswrapper[4853]: I1122 07:43:13.882673 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a08a523-61a0-4155-b389-0491bcd97e84-combined-ca-bundle\") pod \"heat-db-sync-7xksh\" (UID: \"5a08a523-61a0-4155-b389-0491bcd97e84\") " pod="openstack/heat-db-sync-7xksh" Nov 22 07:43:13 crc kubenswrapper[4853]: I1122 07:43:13.882709 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bzw9z\" (UniqueName: \"kubernetes.io/projected/5a08a523-61a0-4155-b389-0491bcd97e84-kube-api-access-bzw9z\") pod \"heat-db-sync-7xksh\" (UID: \"5a08a523-61a0-4155-b389-0491bcd97e84\") " pod="openstack/heat-db-sync-7xksh" Nov 22 07:43:13 crc kubenswrapper[4853]: I1122 07:43:13.895098 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29d503fd-37f2-453c-aba9-5d2fb2c6aad0-config-data\") pod \"cinder-db-sync-nnfsq\" (UID: \"29d503fd-37f2-453c-aba9-5d2fb2c6aad0\") " pod="openstack/cinder-db-sync-nnfsq" Nov 22 07:43:13 crc kubenswrapper[4853]: I1122 07:43:13.895656 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29d503fd-37f2-453c-aba9-5d2fb2c6aad0-combined-ca-bundle\") pod \"cinder-db-sync-nnfsq\" (UID: \"29d503fd-37f2-453c-aba9-5d2fb2c6aad0\") " pod="openstack/cinder-db-sync-nnfsq" Nov 22 07:43:13 crc kubenswrapper[4853]: I1122 07:43:13.900313 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a08a523-61a0-4155-b389-0491bcd97e84-config-data\") pod \"heat-db-sync-7xksh\" (UID: \"5a08a523-61a0-4155-b389-0491bcd97e84\") " pod="openstack/heat-db-sync-7xksh" Nov 22 07:43:13 crc kubenswrapper[4853]: I1122 07:43:13.900395 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/29d503fd-37f2-453c-aba9-5d2fb2c6aad0-scripts\") pod \"cinder-db-sync-nnfsq\" (UID: \"29d503fd-37f2-453c-aba9-5d2fb2c6aad0\") " 
pod="openstack/cinder-db-sync-nnfsq" Nov 22 07:43:13 crc kubenswrapper[4853]: I1122 07:43:13.900602 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/29d503fd-37f2-453c-aba9-5d2fb2c6aad0-etc-machine-id\") pod \"cinder-db-sync-nnfsq\" (UID: \"29d503fd-37f2-453c-aba9-5d2fb2c6aad0\") " pod="openstack/cinder-db-sync-nnfsq" Nov 22 07:43:13 crc kubenswrapper[4853]: I1122 07:43:13.900769 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/29d503fd-37f2-453c-aba9-5d2fb2c6aad0-db-sync-config-data\") pod \"cinder-db-sync-nnfsq\" (UID: \"29d503fd-37f2-453c-aba9-5d2fb2c6aad0\") " pod="openstack/cinder-db-sync-nnfsq" Nov 22 07:43:13 crc kubenswrapper[4853]: I1122 07:43:13.929692 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a08a523-61a0-4155-b389-0491bcd97e84-combined-ca-bundle\") pod \"heat-db-sync-7xksh\" (UID: \"5a08a523-61a0-4155-b389-0491bcd97e84\") " pod="openstack/heat-db-sync-7xksh" Nov 22 07:43:13 crc kubenswrapper[4853]: I1122 07:43:13.939020 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6f8c45789f-5fmx2"] Nov 22 07:43:14 crc kubenswrapper[4853]: I1122 07:43:14.017641 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s887s\" (UniqueName: \"kubernetes.io/projected/29d503fd-37f2-453c-aba9-5d2fb2c6aad0-kube-api-access-s887s\") pod \"cinder-db-sync-nnfsq\" (UID: \"29d503fd-37f2-453c-aba9-5d2fb2c6aad0\") " pod="openstack/cinder-db-sync-nnfsq" Nov 22 07:43:14 crc kubenswrapper[4853]: I1122 07:43:14.017952 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29d503fd-37f2-453c-aba9-5d2fb2c6aad0-config-data\") pod \"cinder-db-sync-nnfsq\" (UID: \"29d503fd-37f2-453c-aba9-5d2fb2c6aad0\") " pod="openstack/cinder-db-sync-nnfsq" Nov 22 07:43:14 crc kubenswrapper[4853]: I1122 07:43:14.018012 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29d503fd-37f2-453c-aba9-5d2fb2c6aad0-combined-ca-bundle\") pod \"cinder-db-sync-nnfsq\" (UID: \"29d503fd-37f2-453c-aba9-5d2fb2c6aad0\") " pod="openstack/cinder-db-sync-nnfsq" Nov 22 07:43:14 crc kubenswrapper[4853]: I1122 07:43:14.018105 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/29d503fd-37f2-453c-aba9-5d2fb2c6aad0-scripts\") pod \"cinder-db-sync-nnfsq\" (UID: \"29d503fd-37f2-453c-aba9-5d2fb2c6aad0\") " pod="openstack/cinder-db-sync-nnfsq" Nov 22 07:43:14 crc kubenswrapper[4853]: I1122 07:43:14.018198 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/29d503fd-37f2-453c-aba9-5d2fb2c6aad0-etc-machine-id\") pod \"cinder-db-sync-nnfsq\" (UID: \"29d503fd-37f2-453c-aba9-5d2fb2c6aad0\") " pod="openstack/cinder-db-sync-nnfsq" Nov 22 07:43:14 crc kubenswrapper[4853]: I1122 07:43:14.018299 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/29d503fd-37f2-453c-aba9-5d2fb2c6aad0-db-sync-config-data\") pod \"cinder-db-sync-nnfsq\" (UID: 
\"29d503fd-37f2-453c-aba9-5d2fb2c6aad0\") " pod="openstack/cinder-db-sync-nnfsq" Nov 22 07:43:14 crc kubenswrapper[4853]: I1122 07:43:14.020354 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bzw9z\" (UniqueName: \"kubernetes.io/projected/5a08a523-61a0-4155-b389-0491bcd97e84-kube-api-access-bzw9z\") pod \"heat-db-sync-7xksh\" (UID: \"5a08a523-61a0-4155-b389-0491bcd97e84\") " pod="openstack/heat-db-sync-7xksh" Nov 22 07:43:14 crc kubenswrapper[4853]: I1122 07:43:14.032686 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/29d503fd-37f2-453c-aba9-5d2fb2c6aad0-etc-machine-id\") pod \"cinder-db-sync-nnfsq\" (UID: \"29d503fd-37f2-453c-aba9-5d2fb2c6aad0\") " pod="openstack/cinder-db-sync-nnfsq" Nov 22 07:43:14 crc kubenswrapper[4853]: I1122 07:43:14.039484 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/29d503fd-37f2-453c-aba9-5d2fb2c6aad0-db-sync-config-data\") pod \"cinder-db-sync-nnfsq\" (UID: \"29d503fd-37f2-453c-aba9-5d2fb2c6aad0\") " pod="openstack/cinder-db-sync-nnfsq" Nov 22 07:43:14 crc kubenswrapper[4853]: I1122 07:43:14.054687 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29d503fd-37f2-453c-aba9-5d2fb2c6aad0-combined-ca-bundle\") pod \"cinder-db-sync-nnfsq\" (UID: \"29d503fd-37f2-453c-aba9-5d2fb2c6aad0\") " pod="openstack/cinder-db-sync-nnfsq" Nov 22 07:43:14 crc kubenswrapper[4853]: I1122 07:43:14.070962 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29d503fd-37f2-453c-aba9-5d2fb2c6aad0-config-data\") pod \"cinder-db-sync-nnfsq\" (UID: \"29d503fd-37f2-453c-aba9-5d2fb2c6aad0\") " pod="openstack/cinder-db-sync-nnfsq" Nov 22 07:43:14 crc kubenswrapper[4853]: I1122 07:43:14.071619 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-7xksh" Nov 22 07:43:14 crc kubenswrapper[4853]: I1122 07:43:14.082457 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/29d503fd-37f2-453c-aba9-5d2fb2c6aad0-scripts\") pod \"cinder-db-sync-nnfsq\" (UID: \"29d503fd-37f2-453c-aba9-5d2fb2c6aad0\") " pod="openstack/cinder-db-sync-nnfsq" Nov 22 07:43:14 crc kubenswrapper[4853]: I1122 07:43:14.093832 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-dzsj4"] Nov 22 07:43:14 crc kubenswrapper[4853]: I1122 07:43:14.095950 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-dzsj4" Nov 22 07:43:14 crc kubenswrapper[4853]: I1122 07:43:14.102532 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s887s\" (UniqueName: \"kubernetes.io/projected/29d503fd-37f2-453c-aba9-5d2fb2c6aad0-kube-api-access-s887s\") pod \"cinder-db-sync-nnfsq\" (UID: \"29d503fd-37f2-453c-aba9-5d2fb2c6aad0\") " pod="openstack/cinder-db-sync-nnfsq" Nov 22 07:43:14 crc kubenswrapper[4853]: I1122 07:43:14.124867 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-fcfdd6f9f-pjzkf"] Nov 22 07:43:14 crc kubenswrapper[4853]: I1122 07:43:14.142666 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-fcfdd6f9f-pjzkf" Nov 22 07:43:14 crc kubenswrapper[4853]: I1122 07:43:14.153097 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-4htd6"] Nov 22 07:43:14 crc kubenswrapper[4853]: I1122 07:43:14.156707 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-4htd6" Nov 22 07:43:14 crc kubenswrapper[4853]: I1122 07:43:14.169479 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Nov 22 07:43:14 crc kubenswrapper[4853]: I1122 07:43:14.169686 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-kx9vl" Nov 22 07:43:14 crc kubenswrapper[4853]: I1122 07:43:14.181048 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Nov 22 07:43:14 crc kubenswrapper[4853]: I1122 07:43:14.181106 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Nov 22 07:43:14 crc kubenswrapper[4853]: I1122 07:43:14.181060 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-65bpr" Nov 22 07:43:14 crc kubenswrapper[4853]: I1122 07:43:14.183266 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-fcfdd6f9f-pjzkf"] Nov 22 07:43:14 crc kubenswrapper[4853]: I1122 07:43:14.223150 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-nnfsq" Nov 22 07:43:14 crc kubenswrapper[4853]: I1122 07:43:14.226274 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/289fadd4-7721-4d8e-b33e-35606c18eedb-combined-ca-bundle\") pod \"barbican-db-sync-dzsj4\" (UID: \"289fadd4-7721-4d8e-b33e-35606c18eedb\") " pod="openstack/barbican-db-sync-dzsj4" Nov 22 07:43:14 crc kubenswrapper[4853]: I1122 07:43:14.226366 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/289fadd4-7721-4d8e-b33e-35606c18eedb-db-sync-config-data\") pod \"barbican-db-sync-dzsj4\" (UID: \"289fadd4-7721-4d8e-b33e-35606c18eedb\") " pod="openstack/barbican-db-sync-dzsj4" Nov 22 07:43:14 crc kubenswrapper[4853]: I1122 07:43:14.226469 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6x7k2\" (UniqueName: \"kubernetes.io/projected/289fadd4-7721-4d8e-b33e-35606c18eedb-kube-api-access-6x7k2\") pod \"barbican-db-sync-dzsj4\" (UID: \"289fadd4-7721-4d8e-b33e-35606c18eedb\") " pod="openstack/barbican-db-sync-dzsj4" Nov 22 07:43:14 crc kubenswrapper[4853]: I1122 07:43:14.226508 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/213f3a9e-0f60-423e-90d6-cbb193eadff1-ovsdbserver-sb\") pod \"dnsmasq-dns-fcfdd6f9f-pjzkf\" (UID: \"213f3a9e-0f60-423e-90d6-cbb193eadff1\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-pjzkf" Nov 22 07:43:14 crc kubenswrapper[4853]: I1122 07:43:14.226536 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/213f3a9e-0f60-423e-90d6-cbb193eadff1-ovsdbserver-nb\") pod \"dnsmasq-dns-fcfdd6f9f-pjzkf\" (UID: 
\"213f3a9e-0f60-423e-90d6-cbb193eadff1\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-pjzkf" Nov 22 07:43:14 crc kubenswrapper[4853]: I1122 07:43:14.226627 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/213f3a9e-0f60-423e-90d6-cbb193eadff1-config\") pod \"dnsmasq-dns-fcfdd6f9f-pjzkf\" (UID: \"213f3a9e-0f60-423e-90d6-cbb193eadff1\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-pjzkf" Nov 22 07:43:14 crc kubenswrapper[4853]: I1122 07:43:14.226661 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bhz5n\" (UniqueName: \"kubernetes.io/projected/297f89ac-14c3-4918-bd7e-776cc229298c-kube-api-access-bhz5n\") pod \"neutron-db-sync-4htd6\" (UID: \"297f89ac-14c3-4918-bd7e-776cc229298c\") " pod="openstack/neutron-db-sync-4htd6" Nov 22 07:43:14 crc kubenswrapper[4853]: I1122 07:43:14.226689 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/213f3a9e-0f60-423e-90d6-cbb193eadff1-dns-svc\") pod \"dnsmasq-dns-fcfdd6f9f-pjzkf\" (UID: \"213f3a9e-0f60-423e-90d6-cbb193eadff1\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-pjzkf" Nov 22 07:43:14 crc kubenswrapper[4853]: I1122 07:43:14.226778 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/297f89ac-14c3-4918-bd7e-776cc229298c-combined-ca-bundle\") pod \"neutron-db-sync-4htd6\" (UID: \"297f89ac-14c3-4918-bd7e-776cc229298c\") " pod="openstack/neutron-db-sync-4htd6" Nov 22 07:43:14 crc kubenswrapper[4853]: I1122 07:43:14.226803 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/297f89ac-14c3-4918-bd7e-776cc229298c-config\") pod \"neutron-db-sync-4htd6\" (UID: \"297f89ac-14c3-4918-bd7e-776cc229298c\") " pod="openstack/neutron-db-sync-4htd6" Nov 22 07:43:14 crc kubenswrapper[4853]: I1122 07:43:14.226897 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/213f3a9e-0f60-423e-90d6-cbb193eadff1-dns-swift-storage-0\") pod \"dnsmasq-dns-fcfdd6f9f-pjzkf\" (UID: \"213f3a9e-0f60-423e-90d6-cbb193eadff1\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-pjzkf" Nov 22 07:43:14 crc kubenswrapper[4853]: I1122 07:43:14.226925 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gcww4\" (UniqueName: \"kubernetes.io/projected/213f3a9e-0f60-423e-90d6-cbb193eadff1-kube-api-access-gcww4\") pod \"dnsmasq-dns-fcfdd6f9f-pjzkf\" (UID: \"213f3a9e-0f60-423e-90d6-cbb193eadff1\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-pjzkf" Nov 22 07:43:14 crc kubenswrapper[4853]: I1122 07:43:14.227685 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-dzsj4"] Nov 22 07:43:14 crc kubenswrapper[4853]: I1122 07:43:14.273255 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-4htd6"] Nov 22 07:43:14 crc kubenswrapper[4853]: I1122 07:43:14.348715 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/289fadd4-7721-4d8e-b33e-35606c18eedb-combined-ca-bundle\") pod \"barbican-db-sync-dzsj4\" (UID: \"289fadd4-7721-4d8e-b33e-35606c18eedb\") " 
pod="openstack/barbican-db-sync-dzsj4" Nov 22 07:43:14 crc kubenswrapper[4853]: I1122 07:43:14.348914 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/289fadd4-7721-4d8e-b33e-35606c18eedb-db-sync-config-data\") pod \"barbican-db-sync-dzsj4\" (UID: \"289fadd4-7721-4d8e-b33e-35606c18eedb\") " pod="openstack/barbican-db-sync-dzsj4" Nov 22 07:43:14 crc kubenswrapper[4853]: I1122 07:43:14.349644 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6x7k2\" (UniqueName: \"kubernetes.io/projected/289fadd4-7721-4d8e-b33e-35606c18eedb-kube-api-access-6x7k2\") pod \"barbican-db-sync-dzsj4\" (UID: \"289fadd4-7721-4d8e-b33e-35606c18eedb\") " pod="openstack/barbican-db-sync-dzsj4" Nov 22 07:43:14 crc kubenswrapper[4853]: I1122 07:43:14.349783 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/213f3a9e-0f60-423e-90d6-cbb193eadff1-ovsdbserver-sb\") pod \"dnsmasq-dns-fcfdd6f9f-pjzkf\" (UID: \"213f3a9e-0f60-423e-90d6-cbb193eadff1\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-pjzkf" Nov 22 07:43:14 crc kubenswrapper[4853]: I1122 07:43:14.349814 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/213f3a9e-0f60-423e-90d6-cbb193eadff1-ovsdbserver-nb\") pod \"dnsmasq-dns-fcfdd6f9f-pjzkf\" (UID: \"213f3a9e-0f60-423e-90d6-cbb193eadff1\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-pjzkf" Nov 22 07:43:14 crc kubenswrapper[4853]: I1122 07:43:14.350302 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/213f3a9e-0f60-423e-90d6-cbb193eadff1-config\") pod \"dnsmasq-dns-fcfdd6f9f-pjzkf\" (UID: \"213f3a9e-0f60-423e-90d6-cbb193eadff1\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-pjzkf" Nov 22 07:43:14 crc kubenswrapper[4853]: I1122 07:43:14.350618 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bhz5n\" (UniqueName: \"kubernetes.io/projected/297f89ac-14c3-4918-bd7e-776cc229298c-kube-api-access-bhz5n\") pod \"neutron-db-sync-4htd6\" (UID: \"297f89ac-14c3-4918-bd7e-776cc229298c\") " pod="openstack/neutron-db-sync-4htd6" Nov 22 07:43:14 crc kubenswrapper[4853]: I1122 07:43:14.351411 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/213f3a9e-0f60-423e-90d6-cbb193eadff1-dns-svc\") pod \"dnsmasq-dns-fcfdd6f9f-pjzkf\" (UID: \"213f3a9e-0f60-423e-90d6-cbb193eadff1\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-pjzkf" Nov 22 07:43:14 crc kubenswrapper[4853]: I1122 07:43:14.352009 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/297f89ac-14c3-4918-bd7e-776cc229298c-combined-ca-bundle\") pod \"neutron-db-sync-4htd6\" (UID: \"297f89ac-14c3-4918-bd7e-776cc229298c\") " pod="openstack/neutron-db-sync-4htd6" Nov 22 07:43:14 crc kubenswrapper[4853]: I1122 07:43:14.358906 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/213f3a9e-0f60-423e-90d6-cbb193eadff1-ovsdbserver-sb\") pod \"dnsmasq-dns-fcfdd6f9f-pjzkf\" (UID: \"213f3a9e-0f60-423e-90d6-cbb193eadff1\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-pjzkf" Nov 22 07:43:14 crc kubenswrapper[4853]: I1122 07:43:14.359020 4853 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/297f89ac-14c3-4918-bd7e-776cc229298c-config\") pod \"neutron-db-sync-4htd6\" (UID: \"297f89ac-14c3-4918-bd7e-776cc229298c\") " pod="openstack/neutron-db-sync-4htd6" Nov 22 07:43:14 crc kubenswrapper[4853]: I1122 07:43:14.374635 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/289fadd4-7721-4d8e-b33e-35606c18eedb-db-sync-config-data\") pod \"barbican-db-sync-dzsj4\" (UID: \"289fadd4-7721-4d8e-b33e-35606c18eedb\") " pod="openstack/barbican-db-sync-dzsj4" Nov 22 07:43:14 crc kubenswrapper[4853]: I1122 07:43:14.383106 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/297f89ac-14c3-4918-bd7e-776cc229298c-combined-ca-bundle\") pod \"neutron-db-sync-4htd6\" (UID: \"297f89ac-14c3-4918-bd7e-776cc229298c\") " pod="openstack/neutron-db-sync-4htd6" Nov 22 07:43:14 crc kubenswrapper[4853]: I1122 07:43:14.387250 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/213f3a9e-0f60-423e-90d6-cbb193eadff1-ovsdbserver-nb\") pod \"dnsmasq-dns-fcfdd6f9f-pjzkf\" (UID: \"213f3a9e-0f60-423e-90d6-cbb193eadff1\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-pjzkf" Nov 22 07:43:14 crc kubenswrapper[4853]: I1122 07:43:14.388297 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/213f3a9e-0f60-423e-90d6-cbb193eadff1-config\") pod \"dnsmasq-dns-fcfdd6f9f-pjzkf\" (UID: \"213f3a9e-0f60-423e-90d6-cbb193eadff1\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-pjzkf" Nov 22 07:43:14 crc kubenswrapper[4853]: I1122 07:43:14.391598 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/213f3a9e-0f60-423e-90d6-cbb193eadff1-dns-svc\") pod \"dnsmasq-dns-fcfdd6f9f-pjzkf\" (UID: \"213f3a9e-0f60-423e-90d6-cbb193eadff1\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-pjzkf" Nov 22 07:43:14 crc kubenswrapper[4853]: I1122 07:43:14.396729 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/213f3a9e-0f60-423e-90d6-cbb193eadff1-dns-swift-storage-0\") pod \"dnsmasq-dns-fcfdd6f9f-pjzkf\" (UID: \"213f3a9e-0f60-423e-90d6-cbb193eadff1\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-pjzkf" Nov 22 07:43:14 crc kubenswrapper[4853]: I1122 07:43:14.396962 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gcww4\" (UniqueName: \"kubernetes.io/projected/213f3a9e-0f60-423e-90d6-cbb193eadff1-kube-api-access-gcww4\") pod \"dnsmasq-dns-fcfdd6f9f-pjzkf\" (UID: \"213f3a9e-0f60-423e-90d6-cbb193eadff1\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-pjzkf" Nov 22 07:43:14 crc kubenswrapper[4853]: I1122 07:43:14.399016 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/213f3a9e-0f60-423e-90d6-cbb193eadff1-dns-swift-storage-0\") pod \"dnsmasq-dns-fcfdd6f9f-pjzkf\" (UID: \"213f3a9e-0f60-423e-90d6-cbb193eadff1\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-pjzkf" Nov 22 07:43:14 crc kubenswrapper[4853]: I1122 07:43:14.401540 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-qdbdm"] Nov 22 07:43:14 crc kubenswrapper[4853]: I1122 07:43:14.423391 4853 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6x7k2\" (UniqueName: \"kubernetes.io/projected/289fadd4-7721-4d8e-b33e-35606c18eedb-kube-api-access-6x7k2\") pod \"barbican-db-sync-dzsj4\" (UID: \"289fadd4-7721-4d8e-b33e-35606c18eedb\") " pod="openstack/barbican-db-sync-dzsj4" Nov 22 07:43:14 crc kubenswrapper[4853]: I1122 07:43:14.434321 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-qdbdm" Nov 22 07:43:14 crc kubenswrapper[4853]: I1122 07:43:14.439001 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/297f89ac-14c3-4918-bd7e-776cc229298c-config\") pod \"neutron-db-sync-4htd6\" (UID: \"297f89ac-14c3-4918-bd7e-776cc229298c\") " pod="openstack/neutron-db-sync-4htd6" Nov 22 07:43:14 crc kubenswrapper[4853]: I1122 07:43:14.439813 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Nov 22 07:43:14 crc kubenswrapper[4853]: I1122 07:43:14.440382 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-2tc6t" Nov 22 07:43:14 crc kubenswrapper[4853]: I1122 07:43:14.442167 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bhz5n\" (UniqueName: \"kubernetes.io/projected/297f89ac-14c3-4918-bd7e-776cc229298c-kube-api-access-bhz5n\") pod \"neutron-db-sync-4htd6\" (UID: \"297f89ac-14c3-4918-bd7e-776cc229298c\") " pod="openstack/neutron-db-sync-4htd6" Nov 22 07:43:14 crc kubenswrapper[4853]: I1122 07:43:14.443229 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Nov 22 07:43:14 crc kubenswrapper[4853]: I1122 07:43:14.448538 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gcww4\" (UniqueName: \"kubernetes.io/projected/213f3a9e-0f60-423e-90d6-cbb193eadff1-kube-api-access-gcww4\") pod \"dnsmasq-dns-fcfdd6f9f-pjzkf\" (UID: \"213f3a9e-0f60-423e-90d6-cbb193eadff1\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-pjzkf" Nov 22 07:43:14 crc kubenswrapper[4853]: I1122 07:43:14.462461 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/289fadd4-7721-4d8e-b33e-35606c18eedb-combined-ca-bundle\") pod \"barbican-db-sync-dzsj4\" (UID: \"289fadd4-7721-4d8e-b33e-35606c18eedb\") " pod="openstack/barbican-db-sync-dzsj4" Nov 22 07:43:14 crc kubenswrapper[4853]: I1122 07:43:14.521709 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-qdbdm"] Nov 22 07:43:14 crc kubenswrapper[4853]: I1122 07:43:14.544721 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-dzsj4" Nov 22 07:43:14 crc kubenswrapper[4853]: I1122 07:43:14.623153 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f1598c90-266c-4607-b491-e9927d76469c-scripts\") pod \"placement-db-sync-qdbdm\" (UID: \"f1598c90-266c-4607-b491-e9927d76469c\") " pod="openstack/placement-db-sync-qdbdm" Nov 22 07:43:14 crc kubenswrapper[4853]: I1122 07:43:14.623234 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1598c90-266c-4607-b491-e9927d76469c-config-data\") pod \"placement-db-sync-qdbdm\" (UID: \"f1598c90-266c-4607-b491-e9927d76469c\") " pod="openstack/placement-db-sync-qdbdm" Nov 22 07:43:14 crc kubenswrapper[4853]: I1122 07:43:14.623330 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f1598c90-266c-4607-b491-e9927d76469c-logs\") pod \"placement-db-sync-qdbdm\" (UID: \"f1598c90-266c-4607-b491-e9927d76469c\") " pod="openstack/placement-db-sync-qdbdm" Nov 22 07:43:14 crc kubenswrapper[4853]: I1122 07:43:14.623378 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1598c90-266c-4607-b491-e9927d76469c-combined-ca-bundle\") pod \"placement-db-sync-qdbdm\" (UID: \"f1598c90-266c-4607-b491-e9927d76469c\") " pod="openstack/placement-db-sync-qdbdm" Nov 22 07:43:14 crc kubenswrapper[4853]: I1122 07:43:14.623410 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p8nvj\" (UniqueName: \"kubernetes.io/projected/f1598c90-266c-4607-b491-e9927d76469c-kube-api-access-p8nvj\") pod \"placement-db-sync-qdbdm\" (UID: \"f1598c90-266c-4607-b491-e9927d76469c\") " pod="openstack/placement-db-sync-qdbdm" Nov 22 07:43:14 crc kubenswrapper[4853]: I1122 07:43:14.658020 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 22 07:43:14 crc kubenswrapper[4853]: I1122 07:43:14.676442 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-fcfdd6f9f-pjzkf" Nov 22 07:43:14 crc kubenswrapper[4853]: I1122 07:43:14.681678 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 22 07:43:14 crc kubenswrapper[4853]: I1122 07:43:14.699346 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 22 07:43:14 crc kubenswrapper[4853]: I1122 07:43:14.713937 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 22 07:43:14 crc kubenswrapper[4853]: I1122 07:43:14.716963 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 22 07:43:14 crc kubenswrapper[4853]: I1122 07:43:14.742664 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f1598c90-266c-4607-b491-e9927d76469c-logs\") pod \"placement-db-sync-qdbdm\" (UID: \"f1598c90-266c-4607-b491-e9927d76469c\") " pod="openstack/placement-db-sync-qdbdm" Nov 22 07:43:14 crc kubenswrapper[4853]: I1122 07:43:14.742831 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1598c90-266c-4607-b491-e9927d76469c-combined-ca-bundle\") pod \"placement-db-sync-qdbdm\" (UID: \"f1598c90-266c-4607-b491-e9927d76469c\") " pod="openstack/placement-db-sync-qdbdm" Nov 22 07:43:14 crc kubenswrapper[4853]: I1122 07:43:14.742893 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p8nvj\" (UniqueName: \"kubernetes.io/projected/f1598c90-266c-4607-b491-e9927d76469c-kube-api-access-p8nvj\") pod \"placement-db-sync-qdbdm\" (UID: \"f1598c90-266c-4607-b491-e9927d76469c\") " pod="openstack/placement-db-sync-qdbdm" Nov 22 07:43:14 crc kubenswrapper[4853]: I1122 07:43:14.743008 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f1598c90-266c-4607-b491-e9927d76469c-scripts\") pod \"placement-db-sync-qdbdm\" (UID: \"f1598c90-266c-4607-b491-e9927d76469c\") " pod="openstack/placement-db-sync-qdbdm" Nov 22 07:43:14 crc kubenswrapper[4853]: I1122 07:43:14.743104 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1598c90-266c-4607-b491-e9927d76469c-config-data\") pod \"placement-db-sync-qdbdm\" (UID: \"f1598c90-266c-4607-b491-e9927d76469c\") " pod="openstack/placement-db-sync-qdbdm" Nov 22 07:43:14 crc kubenswrapper[4853]: I1122 07:43:14.746063 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f1598c90-266c-4607-b491-e9927d76469c-logs\") pod \"placement-db-sync-qdbdm\" (UID: \"f1598c90-266c-4607-b491-e9927d76469c\") " pod="openstack/placement-db-sync-qdbdm" Nov 22 07:43:14 crc kubenswrapper[4853]: I1122 07:43:14.748764 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-4htd6" Nov 22 07:43:14 crc kubenswrapper[4853]: I1122 07:43:14.753704 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f1598c90-266c-4607-b491-e9927d76469c-scripts\") pod \"placement-db-sync-qdbdm\" (UID: \"f1598c90-266c-4607-b491-e9927d76469c\") " pod="openstack/placement-db-sync-qdbdm" Nov 22 07:43:14 crc kubenswrapper[4853]: I1122 07:43:14.765661 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1598c90-266c-4607-b491-e9927d76469c-config-data\") pod \"placement-db-sync-qdbdm\" (UID: \"f1598c90-266c-4607-b491-e9927d76469c\") " pod="openstack/placement-db-sync-qdbdm" Nov 22 07:43:14 crc kubenswrapper[4853]: I1122 07:43:14.774475 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1598c90-266c-4607-b491-e9927d76469c-combined-ca-bundle\") pod \"placement-db-sync-qdbdm\" (UID: \"f1598c90-266c-4607-b491-e9927d76469c\") " pod="openstack/placement-db-sync-qdbdm" Nov 22 07:43:14 crc kubenswrapper[4853]: I1122 07:43:14.780977 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p8nvj\" (UniqueName: \"kubernetes.io/projected/f1598c90-266c-4607-b491-e9927d76469c-kube-api-access-p8nvj\") pod \"placement-db-sync-qdbdm\" (UID: \"f1598c90-266c-4607-b491-e9927d76469c\") " pod="openstack/placement-db-sync-qdbdm" Nov 22 07:43:14 crc kubenswrapper[4853]: I1122 07:43:14.791074 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6f8c45789f-5fmx2"] Nov 22 07:43:14 crc kubenswrapper[4853]: I1122 07:43:14.848784 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-qdbdm" Nov 22 07:43:14 crc kubenswrapper[4853]: I1122 07:43:14.863637 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9f48ed7e-dbb8-4588-9cb7-4f0850757027-log-httpd\") pod \"ceilometer-0\" (UID: \"9f48ed7e-dbb8-4588-9cb7-4f0850757027\") " pod="openstack/ceilometer-0" Nov 22 07:43:14 crc kubenswrapper[4853]: I1122 07:43:14.863698 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vl8zr\" (UniqueName: \"kubernetes.io/projected/9f48ed7e-dbb8-4588-9cb7-4f0850757027-kube-api-access-vl8zr\") pod \"ceilometer-0\" (UID: \"9f48ed7e-dbb8-4588-9cb7-4f0850757027\") " pod="openstack/ceilometer-0" Nov 22 07:43:14 crc kubenswrapper[4853]: I1122 07:43:14.863730 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f48ed7e-dbb8-4588-9cb7-4f0850757027-config-data\") pod \"ceilometer-0\" (UID: \"9f48ed7e-dbb8-4588-9cb7-4f0850757027\") " pod="openstack/ceilometer-0" Nov 22 07:43:14 crc kubenswrapper[4853]: I1122 07:43:14.863776 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9f48ed7e-dbb8-4588-9cb7-4f0850757027-run-httpd\") pod \"ceilometer-0\" (UID: \"9f48ed7e-dbb8-4588-9cb7-4f0850757027\") " pod="openstack/ceilometer-0" Nov 22 07:43:14 crc kubenswrapper[4853]: I1122 07:43:14.863842 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9f48ed7e-dbb8-4588-9cb7-4f0850757027-scripts\") pod \"ceilometer-0\" (UID: \"9f48ed7e-dbb8-4588-9cb7-4f0850757027\") " pod="openstack/ceilometer-0" Nov 22 07:43:14 crc kubenswrapper[4853]: I1122 07:43:14.863963 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9f48ed7e-dbb8-4588-9cb7-4f0850757027-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9f48ed7e-dbb8-4588-9cb7-4f0850757027\") " pod="openstack/ceilometer-0" Nov 22 07:43:14 crc kubenswrapper[4853]: I1122 07:43:14.863983 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f48ed7e-dbb8-4588-9cb7-4f0850757027-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9f48ed7e-dbb8-4588-9cb7-4f0850757027\") " pod="openstack/ceilometer-0" Nov 22 07:43:14 crc kubenswrapper[4853]: I1122 07:43:14.947326 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-zg6t9"] Nov 22 07:43:14 crc kubenswrapper[4853]: I1122 07:43:14.972394 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9f48ed7e-dbb8-4588-9cb7-4f0850757027-scripts\") pod \"ceilometer-0\" (UID: \"9f48ed7e-dbb8-4588-9cb7-4f0850757027\") " pod="openstack/ceilometer-0" Nov 22 07:43:14 crc kubenswrapper[4853]: I1122 07:43:14.972605 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9f48ed7e-dbb8-4588-9cb7-4f0850757027-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9f48ed7e-dbb8-4588-9cb7-4f0850757027\") " pod="openstack/ceilometer-0" Nov 22 07:43:14 crc 
kubenswrapper[4853]: I1122 07:43:14.973435 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f48ed7e-dbb8-4588-9cb7-4f0850757027-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9f48ed7e-dbb8-4588-9cb7-4f0850757027\") " pod="openstack/ceilometer-0" Nov 22 07:43:14 crc kubenswrapper[4853]: I1122 07:43:14.973528 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9f48ed7e-dbb8-4588-9cb7-4f0850757027-log-httpd\") pod \"ceilometer-0\" (UID: \"9f48ed7e-dbb8-4588-9cb7-4f0850757027\") " pod="openstack/ceilometer-0" Nov 22 07:43:14 crc kubenswrapper[4853]: I1122 07:43:14.973716 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vl8zr\" (UniqueName: \"kubernetes.io/projected/9f48ed7e-dbb8-4588-9cb7-4f0850757027-kube-api-access-vl8zr\") pod \"ceilometer-0\" (UID: \"9f48ed7e-dbb8-4588-9cb7-4f0850757027\") " pod="openstack/ceilometer-0" Nov 22 07:43:14 crc kubenswrapper[4853]: I1122 07:43:14.973815 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f48ed7e-dbb8-4588-9cb7-4f0850757027-config-data\") pod \"ceilometer-0\" (UID: \"9f48ed7e-dbb8-4588-9cb7-4f0850757027\") " pod="openstack/ceilometer-0" Nov 22 07:43:14 crc kubenswrapper[4853]: I1122 07:43:14.974154 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9f48ed7e-dbb8-4588-9cb7-4f0850757027-run-httpd\") pod \"ceilometer-0\" (UID: \"9f48ed7e-dbb8-4588-9cb7-4f0850757027\") " pod="openstack/ceilometer-0" Nov 22 07:43:14 crc kubenswrapper[4853]: I1122 07:43:14.975618 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9f48ed7e-dbb8-4588-9cb7-4f0850757027-run-httpd\") pod \"ceilometer-0\" (UID: \"9f48ed7e-dbb8-4588-9cb7-4f0850757027\") " pod="openstack/ceilometer-0" Nov 22 07:43:14 crc kubenswrapper[4853]: I1122 07:43:14.989637 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f48ed7e-dbb8-4588-9cb7-4f0850757027-config-data\") pod \"ceilometer-0\" (UID: \"9f48ed7e-dbb8-4588-9cb7-4f0850757027\") " pod="openstack/ceilometer-0" Nov 22 07:43:14 crc kubenswrapper[4853]: I1122 07:43:14.992652 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9f48ed7e-dbb8-4588-9cb7-4f0850757027-log-httpd\") pod \"ceilometer-0\" (UID: \"9f48ed7e-dbb8-4588-9cb7-4f0850757027\") " pod="openstack/ceilometer-0" Nov 22 07:43:14 crc kubenswrapper[4853]: I1122 07:43:14.994592 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9f48ed7e-dbb8-4588-9cb7-4f0850757027-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9f48ed7e-dbb8-4588-9cb7-4f0850757027\") " pod="openstack/ceilometer-0" Nov 22 07:43:14 crc kubenswrapper[4853]: I1122 07:43:14.996904 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f48ed7e-dbb8-4588-9cb7-4f0850757027-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9f48ed7e-dbb8-4588-9cb7-4f0850757027\") " pod="openstack/ceilometer-0" Nov 22 07:43:14 crc kubenswrapper[4853]: I1122 07:43:14.997885 4853 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9f48ed7e-dbb8-4588-9cb7-4f0850757027-scripts\") pod \"ceilometer-0\" (UID: \"9f48ed7e-dbb8-4588-9cb7-4f0850757027\") " pod="openstack/ceilometer-0" Nov 22 07:43:15 crc kubenswrapper[4853]: I1122 07:43:15.010491 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vl8zr\" (UniqueName: \"kubernetes.io/projected/9f48ed7e-dbb8-4588-9cb7-4f0850757027-kube-api-access-vl8zr\") pod \"ceilometer-0\" (UID: \"9f48ed7e-dbb8-4588-9cb7-4f0850757027\") " pod="openstack/ceilometer-0" Nov 22 07:43:15 crc kubenswrapper[4853]: I1122 07:43:15.113963 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-7xksh"] Nov 22 07:43:15 crc kubenswrapper[4853]: I1122 07:43:15.170444 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 22 07:43:15 crc kubenswrapper[4853]: I1122 07:43:15.187191 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f8c45789f-5fmx2" event={"ID":"c783b7ee-b794-473f-a4f8-cbb907c89e3d","Type":"ContainerStarted","Data":"fa02b818ab0a4690cd4c05c153eb563bcbf0003f9be55bf20fbc4411968e8c98"} Nov 22 07:43:15 crc kubenswrapper[4853]: I1122 07:43:15.198531 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-zg6t9" event={"ID":"ea831189-252a-49b8-820a-e366450efa38","Type":"ContainerStarted","Data":"2fd1461cc07820bb30cbe84b458ec02df01462c4c2f6388b644241439a02a481"} Nov 22 07:43:15 crc kubenswrapper[4853]: I1122 07:43:15.337315 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-nnfsq"] Nov 22 07:43:15 crc kubenswrapper[4853]: I1122 07:43:15.480813 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-dzsj4"] Nov 22 07:43:15 crc kubenswrapper[4853]: I1122 07:43:15.503446 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-fcfdd6f9f-pjzkf"] Nov 22 07:43:15 crc kubenswrapper[4853]: W1122 07:43:15.507027 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod213f3a9e_0f60_423e_90d6_cbb193eadff1.slice/crio-48d69383c9e7e419401b53fe50e14e24882e516e074cc8ed423654913651ebab WatchSource:0}: Error finding container 48d69383c9e7e419401b53fe50e14e24882e516e074cc8ed423654913651ebab: Status 404 returned error can't find the container with id 48d69383c9e7e419401b53fe50e14e24882e516e074cc8ed423654913651ebab Nov 22 07:43:15 crc kubenswrapper[4853]: I1122 07:43:15.648770 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-4htd6"] Nov 22 07:43:15 crc kubenswrapper[4853]: I1122 07:43:15.848421 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-qdbdm"] Nov 22 07:43:16 crc kubenswrapper[4853]: W1122 07:43:16.019095 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9f48ed7e_dbb8_4588_9cb7_4f0850757027.slice/crio-60e08ffb22d6eab2ab75226408cb6488af7c0930422a785dfa02b1e19b6860a0 WatchSource:0}: Error finding container 60e08ffb22d6eab2ab75226408cb6488af7c0930422a785dfa02b1e19b6860a0: Status 404 returned error can't find the container with id 60e08ffb22d6eab2ab75226408cb6488af7c0930422a785dfa02b1e19b6860a0 Nov 22 07:43:16 crc kubenswrapper[4853]: I1122 07:43:16.140786 4853 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack/ceilometer-0"] Nov 22 07:43:16 crc kubenswrapper[4853]: I1122 07:43:16.256960 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-nnfsq" event={"ID":"29d503fd-37f2-453c-aba9-5d2fb2c6aad0","Type":"ContainerStarted","Data":"1b6c52b8644e6c1d011e16f74cecda19da93422401ceaf90e46d7ac2ffb7d1de"} Nov 22 07:43:16 crc kubenswrapper[4853]: I1122 07:43:16.259268 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9f48ed7e-dbb8-4588-9cb7-4f0850757027","Type":"ContainerStarted","Data":"60e08ffb22d6eab2ab75226408cb6488af7c0930422a785dfa02b1e19b6860a0"} Nov 22 07:43:16 crc kubenswrapper[4853]: I1122 07:43:16.260362 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fcfdd6f9f-pjzkf" event={"ID":"213f3a9e-0f60-423e-90d6-cbb193eadff1","Type":"ContainerStarted","Data":"48d69383c9e7e419401b53fe50e14e24882e516e074cc8ed423654913651ebab"} Nov 22 07:43:16 crc kubenswrapper[4853]: I1122 07:43:16.261293 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-qdbdm" event={"ID":"f1598c90-266c-4607-b491-e9927d76469c","Type":"ContainerStarted","Data":"267f7013f4c79ac236265a8dfca7972a2e51eb59f89c6b638d25656a9f3236d4"} Nov 22 07:43:16 crc kubenswrapper[4853]: I1122 07:43:16.262241 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-4htd6" event={"ID":"297f89ac-14c3-4918-bd7e-776cc229298c","Type":"ContainerStarted","Data":"b226a6f48db6a0762dd8b953a6cf55169576efbe3cae00cd29b090b25d53e196"} Nov 22 07:43:16 crc kubenswrapper[4853]: I1122 07:43:16.263134 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-dzsj4" event={"ID":"289fadd4-7721-4d8e-b33e-35606c18eedb","Type":"ContainerStarted","Data":"adb193d85390461d2f5fdd9eaba68b2a61a648931ae5cdb4ce66623a69685122"} Nov 22 07:43:16 crc kubenswrapper[4853]: I1122 07:43:16.264050 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-7xksh" event={"ID":"5a08a523-61a0-4155-b389-0491bcd97e84","Type":"ContainerStarted","Data":"067617a9b0bee0fa201dda123132256bd1cf576df75f879c9d0a0ec2ea823094"} Nov 22 07:43:17 crc kubenswrapper[4853]: I1122 07:43:17.154566 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 22 07:43:17 crc kubenswrapper[4853]: I1122 07:43:17.297848 4853 generic.go:334] "Generic (PLEG): container finished" podID="213f3a9e-0f60-423e-90d6-cbb193eadff1" containerID="42f91e7e7d8f07179170490f1069159e786b089a778b0cc6aa690f1a7b731b91" exitCode=0 Nov 22 07:43:17 crc kubenswrapper[4853]: I1122 07:43:17.298019 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fcfdd6f9f-pjzkf" event={"ID":"213f3a9e-0f60-423e-90d6-cbb193eadff1","Type":"ContainerDied","Data":"42f91e7e7d8f07179170490f1069159e786b089a778b0cc6aa690f1a7b731b91"} Nov 22 07:43:17 crc kubenswrapper[4853]: I1122 07:43:17.313125 4853 generic.go:334] "Generic (PLEG): container finished" podID="c783b7ee-b794-473f-a4f8-cbb907c89e3d" containerID="5e6161d099dc45a3815b0745d9ebb3ec7ae97a91553ac79aa1e3e92acfaa00fe" exitCode=0 Nov 22 07:43:17 crc kubenswrapper[4853]: I1122 07:43:17.313311 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f8c45789f-5fmx2" event={"ID":"c783b7ee-b794-473f-a4f8-cbb907c89e3d","Type":"ContainerDied","Data":"5e6161d099dc45a3815b0745d9ebb3ec7ae97a91553ac79aa1e3e92acfaa00fe"} Nov 22 07:43:17 crc kubenswrapper[4853]: I1122 07:43:17.331114 4853 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-zg6t9" event={"ID":"ea831189-252a-49b8-820a-e366450efa38","Type":"ContainerStarted","Data":"2833634a0e6caed565042c3df0b12b4a476d0c0850b583d50fdc424f26c80a64"} Nov 22 07:43:17 crc kubenswrapper[4853]: I1122 07:43:17.341116 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-4htd6" event={"ID":"297f89ac-14c3-4918-bd7e-776cc229298c","Type":"ContainerStarted","Data":"a4fce85f953a48f363537a181cfb1a4384fb876c100e2e32d58fa35ad92b866b"} Nov 22 07:43:17 crc kubenswrapper[4853]: I1122 07:43:17.415428 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-4htd6" podStartSLOduration=4.415395823 podStartE2EDuration="4.415395823s" podCreationTimestamp="2025-11-22 07:43:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:43:17.395683587 +0000 UTC m=+1996.236306203" watchObservedRunningTime="2025-11-22 07:43:17.415395823 +0000 UTC m=+1996.256018449" Nov 22 07:43:17 crc kubenswrapper[4853]: I1122 07:43:17.441885 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-zg6t9" podStartSLOduration=4.441856043 podStartE2EDuration="4.441856043s" podCreationTimestamp="2025-11-22 07:43:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:43:17.422013774 +0000 UTC m=+1996.262636400" watchObservedRunningTime="2025-11-22 07:43:17.441856043 +0000 UTC m=+1996.282478679" Nov 22 07:43:17 crc kubenswrapper[4853]: I1122 07:43:17.944430 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6f8c45789f-5fmx2" Nov 22 07:43:18 crc kubenswrapper[4853]: I1122 07:43:18.026892 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c783b7ee-b794-473f-a4f8-cbb907c89e3d-config\") pod \"c783b7ee-b794-473f-a4f8-cbb907c89e3d\" (UID: \"c783b7ee-b794-473f-a4f8-cbb907c89e3d\") " Nov 22 07:43:18 crc kubenswrapper[4853]: I1122 07:43:18.026985 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c783b7ee-b794-473f-a4f8-cbb907c89e3d-ovsdbserver-sb\") pod \"c783b7ee-b794-473f-a4f8-cbb907c89e3d\" (UID: \"c783b7ee-b794-473f-a4f8-cbb907c89e3d\") " Nov 22 07:43:18 crc kubenswrapper[4853]: I1122 07:43:18.027091 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c783b7ee-b794-473f-a4f8-cbb907c89e3d-dns-svc\") pod \"c783b7ee-b794-473f-a4f8-cbb907c89e3d\" (UID: \"c783b7ee-b794-473f-a4f8-cbb907c89e3d\") " Nov 22 07:43:18 crc kubenswrapper[4853]: I1122 07:43:18.027182 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c783b7ee-b794-473f-a4f8-cbb907c89e3d-ovsdbserver-nb\") pod \"c783b7ee-b794-473f-a4f8-cbb907c89e3d\" (UID: \"c783b7ee-b794-473f-a4f8-cbb907c89e3d\") " Nov 22 07:43:18 crc kubenswrapper[4853]: I1122 07:43:18.027279 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c783b7ee-b794-473f-a4f8-cbb907c89e3d-dns-swift-storage-0\") pod \"c783b7ee-b794-473f-a4f8-cbb907c89e3d\" (UID: \"c783b7ee-b794-473f-a4f8-cbb907c89e3d\") " Nov 22 07:43:18 crc kubenswrapper[4853]: I1122 07:43:18.027852 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4xmp6\" (UniqueName: \"kubernetes.io/projected/c783b7ee-b794-473f-a4f8-cbb907c89e3d-kube-api-access-4xmp6\") pod \"c783b7ee-b794-473f-a4f8-cbb907c89e3d\" (UID: \"c783b7ee-b794-473f-a4f8-cbb907c89e3d\") " Nov 22 07:43:18 crc kubenswrapper[4853]: I1122 07:43:18.039061 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c783b7ee-b794-473f-a4f8-cbb907c89e3d-kube-api-access-4xmp6" (OuterVolumeSpecName: "kube-api-access-4xmp6") pod "c783b7ee-b794-473f-a4f8-cbb907c89e3d" (UID: "c783b7ee-b794-473f-a4f8-cbb907c89e3d"). InnerVolumeSpecName "kube-api-access-4xmp6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:43:18 crc kubenswrapper[4853]: I1122 07:43:18.064993 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c783b7ee-b794-473f-a4f8-cbb907c89e3d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c783b7ee-b794-473f-a4f8-cbb907c89e3d" (UID: "c783b7ee-b794-473f-a4f8-cbb907c89e3d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:43:18 crc kubenswrapper[4853]: I1122 07:43:18.075273 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c783b7ee-b794-473f-a4f8-cbb907c89e3d-config" (OuterVolumeSpecName: "config") pod "c783b7ee-b794-473f-a4f8-cbb907c89e3d" (UID: "c783b7ee-b794-473f-a4f8-cbb907c89e3d"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:43:18 crc kubenswrapper[4853]: I1122 07:43:18.081157 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c783b7ee-b794-473f-a4f8-cbb907c89e3d-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "c783b7ee-b794-473f-a4f8-cbb907c89e3d" (UID: "c783b7ee-b794-473f-a4f8-cbb907c89e3d"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:43:18 crc kubenswrapper[4853]: I1122 07:43:18.094501 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c783b7ee-b794-473f-a4f8-cbb907c89e3d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c783b7ee-b794-473f-a4f8-cbb907c89e3d" (UID: "c783b7ee-b794-473f-a4f8-cbb907c89e3d"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:43:18 crc kubenswrapper[4853]: I1122 07:43:18.098633 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c783b7ee-b794-473f-a4f8-cbb907c89e3d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c783b7ee-b794-473f-a4f8-cbb907c89e3d" (UID: "c783b7ee-b794-473f-a4f8-cbb907c89e3d"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:43:18 crc kubenswrapper[4853]: I1122 07:43:18.131238 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4xmp6\" (UniqueName: \"kubernetes.io/projected/c783b7ee-b794-473f-a4f8-cbb907c89e3d-kube-api-access-4xmp6\") on node \"crc\" DevicePath \"\"" Nov 22 07:43:18 crc kubenswrapper[4853]: I1122 07:43:18.131288 4853 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c783b7ee-b794-473f-a4f8-cbb907c89e3d-config\") on node \"crc\" DevicePath \"\"" Nov 22 07:43:18 crc kubenswrapper[4853]: I1122 07:43:18.131301 4853 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c783b7ee-b794-473f-a4f8-cbb907c89e3d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 22 07:43:18 crc kubenswrapper[4853]: I1122 07:43:18.131317 4853 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c783b7ee-b794-473f-a4f8-cbb907c89e3d-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 22 07:43:18 crc kubenswrapper[4853]: I1122 07:43:18.131329 4853 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c783b7ee-b794-473f-a4f8-cbb907c89e3d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 22 07:43:18 crc kubenswrapper[4853]: I1122 07:43:18.131345 4853 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c783b7ee-b794-473f-a4f8-cbb907c89e3d-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 22 07:43:18 crc kubenswrapper[4853]: I1122 07:43:18.389151 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fcfdd6f9f-pjzkf" event={"ID":"213f3a9e-0f60-423e-90d6-cbb193eadff1","Type":"ContainerStarted","Data":"a1cb90a840df819706fee51e51ca74e6c012ba179d220834b26254da3d5cbfb8"} Nov 22 07:43:18 crc kubenswrapper[4853]: I1122 07:43:18.389333 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-fcfdd6f9f-pjzkf" Nov 22 07:43:18 crc kubenswrapper[4853]: I1122 07:43:18.394150 
4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f8c45789f-5fmx2" event={"ID":"c783b7ee-b794-473f-a4f8-cbb907c89e3d","Type":"ContainerDied","Data":"fa02b818ab0a4690cd4c05c153eb563bcbf0003f9be55bf20fbc4411968e8c98"} Nov 22 07:43:18 crc kubenswrapper[4853]: I1122 07:43:18.394183 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6f8c45789f-5fmx2" Nov 22 07:43:18 crc kubenswrapper[4853]: I1122 07:43:18.394335 4853 scope.go:117] "RemoveContainer" containerID="5e6161d099dc45a3815b0745d9ebb3ec7ae97a91553ac79aa1e3e92acfaa00fe" Nov 22 07:43:18 crc kubenswrapper[4853]: I1122 07:43:18.429654 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-fcfdd6f9f-pjzkf" podStartSLOduration=5.429624475 podStartE2EDuration="5.429624475s" podCreationTimestamp="2025-11-22 07:43:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:43:18.418693237 +0000 UTC m=+1997.259315883" watchObservedRunningTime="2025-11-22 07:43:18.429624475 +0000 UTC m=+1997.270247101" Nov 22 07:43:18 crc kubenswrapper[4853]: I1122 07:43:18.560425 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6f8c45789f-5fmx2"] Nov 22 07:43:18 crc kubenswrapper[4853]: I1122 07:43:18.603933 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6f8c45789f-5fmx2"] Nov 22 07:43:19 crc kubenswrapper[4853]: I1122 07:43:19.775270 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c783b7ee-b794-473f-a4f8-cbb907c89e3d" path="/var/lib/kubelet/pods/c783b7ee-b794-473f-a4f8-cbb907c89e3d/volumes" Nov 22 07:43:24 crc kubenswrapper[4853]: I1122 07:43:24.679985 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-fcfdd6f9f-pjzkf" Nov 22 07:43:24 crc kubenswrapper[4853]: I1122 07:43:24.786213 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6d5b6d6b67-mw24j"] Nov 22 07:43:24 crc kubenswrapper[4853]: I1122 07:43:24.786557 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6d5b6d6b67-mw24j" podUID="6de9b7c8-6d38-4338-9a33-0084a0981c40" containerName="dnsmasq-dns" containerID="cri-o://9292e7ea65db7971a45b24ccd7c1893f6da9594a6c251c8a86eb64875ec79a7b" gracePeriod=10 Nov 22 07:43:25 crc kubenswrapper[4853]: I1122 07:43:25.575837 4853 generic.go:334] "Generic (PLEG): container finished" podID="6de9b7c8-6d38-4338-9a33-0084a0981c40" containerID="9292e7ea65db7971a45b24ccd7c1893f6da9594a6c251c8a86eb64875ec79a7b" exitCode=0 Nov 22 07:43:25 crc kubenswrapper[4853]: I1122 07:43:25.576060 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d5b6d6b67-mw24j" event={"ID":"6de9b7c8-6d38-4338-9a33-0084a0981c40","Type":"ContainerDied","Data":"9292e7ea65db7971a45b24ccd7c1893f6da9594a6c251c8a86eb64875ec79a7b"} Nov 22 07:43:25 crc kubenswrapper[4853]: I1122 07:43:25.760102 4853 scope.go:117] "RemoveContainer" containerID="1e441baa32afe9d9e98e192afda6a47d949c68ceb8edb814ef148dd3d84f45b1" Nov 22 07:43:25 crc kubenswrapper[4853]: E1122 07:43:25.760648 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" Nov 22 07:43:26 crc kubenswrapper[4853]: I1122 07:43:26.441086 4853 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6d5b6d6b67-mw24j" podUID="6de9b7c8-6d38-4338-9a33-0084a0981c40" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.165:5353: connect: connection refused" Nov 22 07:43:31 crc kubenswrapper[4853]: I1122 07:43:31.439708 4853 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6d5b6d6b67-mw24j" podUID="6de9b7c8-6d38-4338-9a33-0084a0981c40" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.165:5353: connect: connection refused" Nov 22 07:43:35 crc kubenswrapper[4853]: E1122 07:43:35.386998 4853 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified" Nov 22 07:43:35 crc kubenswrapper[4853]: E1122 07:43:35.387529 4853 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,Command:[/bin/bash],Args:[-c barbican-manage db upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6x7k2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-dzsj4_openstack(289fadd4-7721-4d8e-b33e-35606c18eedb): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 22 07:43:35 crc kubenswrapper[4853]: E1122 07:43:35.389029 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-dzsj4" podUID="289fadd4-7721-4d8e-b33e-35606c18eedb" Nov 22 07:43:35 crc kubenswrapper[4853]: E1122 07:43:35.696207 4853 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified\\\"\"" pod="openstack/barbican-db-sync-dzsj4" podUID="289fadd4-7721-4d8e-b33e-35606c18eedb" Nov 22 07:43:36 crc kubenswrapper[4853]: I1122 07:43:36.440440 4853 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6d5b6d6b67-mw24j" podUID="6de9b7c8-6d38-4338-9a33-0084a0981c40" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.165:5353: connect: connection refused" Nov 22 07:43:36 crc kubenswrapper[4853]: I1122 07:43:36.441926 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6d5b6d6b67-mw24j" Nov 22 07:43:39 crc kubenswrapper[4853]: I1122 07:43:39.749099 4853 scope.go:117] "RemoveContainer" containerID="1e441baa32afe9d9e98e192afda6a47d949c68ceb8edb814ef148dd3d84f45b1" Nov 22 07:43:40 crc kubenswrapper[4853]: E1122 07:43:40.451854 4853 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-placement-api:current-podified" Nov 22 07:43:40 crc kubenswrapper[4853]: E1122 07:43:40.452449 4853 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:placement-db-sync,Image:quay.io/podified-antelope-centos9/openstack-placement-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/placement,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:placement-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p8nvj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42482,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
placement-db-sync-qdbdm_openstack(f1598c90-266c-4607-b491-e9927d76469c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 22 07:43:40 crc kubenswrapper[4853]: E1122 07:43:40.453725 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/placement-db-sync-qdbdm" podUID="f1598c90-266c-4607-b491-e9927d76469c" Nov 22 07:43:40 crc kubenswrapper[4853]: E1122 07:43:40.765590 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-placement-api:current-podified\\\"\"" pod="openstack/placement-db-sync-qdbdm" podUID="f1598c90-266c-4607-b491-e9927d76469c" Nov 22 07:43:41 crc kubenswrapper[4853]: I1122 07:43:41.440483 4853 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6d5b6d6b67-mw24j" podUID="6de9b7c8-6d38-4338-9a33-0084a0981c40" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.165:5353: connect: connection refused" Nov 22 07:43:46 crc kubenswrapper[4853]: I1122 07:43:46.440073 4853 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6d5b6d6b67-mw24j" podUID="6de9b7c8-6d38-4338-9a33-0084a0981c40" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.165:5353: connect: connection refused" Nov 22 07:43:53 crc kubenswrapper[4853]: E1122 07:43:53.278099 4853 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified" Nov 22 07:43:53 crc kubenswrapper[4853]: E1122 07:43:53.279273 4853 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bzw9z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL 
MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-7xksh_openstack(5a08a523-61a0-4155-b389-0491bcd97e84): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 22 07:43:53 crc kubenswrapper[4853]: E1122 07:43:53.281207 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/heat-db-sync-7xksh" podUID="5a08a523-61a0-4155-b389-0491bcd97e84" Nov 22 07:43:53 crc kubenswrapper[4853]: E1122 07:43:53.957374 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified\\\"\"" pod="openstack/heat-db-sync-7xksh" podUID="5a08a523-61a0-4155-b389-0491bcd97e84" Nov 22 07:43:56 crc kubenswrapper[4853]: I1122 07:43:56.440830 4853 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6d5b6d6b67-mw24j" podUID="6de9b7c8-6d38-4338-9a33-0084a0981c40" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.165:5353: i/o timeout" Nov 22 07:44:01 crc kubenswrapper[4853]: I1122 07:44:01.442593 4853 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6d5b6d6b67-mw24j" podUID="6de9b7c8-6d38-4338-9a33-0084a0981c40" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.165:5353: i/o timeout" Nov 22 07:44:06 crc kubenswrapper[4853]: I1122 07:44:06.444185 4853 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6d5b6d6b67-mw24j" podUID="6de9b7c8-6d38-4338-9a33-0084a0981c40" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.165:5353: i/o timeout" Nov 22 07:44:09 crc kubenswrapper[4853]: I1122 07:44:09.280579 4853 generic.go:334] "Generic (PLEG): container finished" podID="ea831189-252a-49b8-820a-e366450efa38" containerID="2833634a0e6caed565042c3df0b12b4a476d0c0850b583d50fdc424f26c80a64" exitCode=0 Nov 22 07:44:09 crc kubenswrapper[4853]: I1122 07:44:09.280687 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-zg6t9" event={"ID":"ea831189-252a-49b8-820a-e366450efa38","Type":"ContainerDied","Data":"2833634a0e6caed565042c3df0b12b4a476d0c0850b583d50fdc424f26c80a64"} Nov 22 07:44:11 crc kubenswrapper[4853]: I1122 07:44:11.445172 4853 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6d5b6d6b67-mw24j" podUID="6de9b7c8-6d38-4338-9a33-0084a0981c40" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.165:5353: i/o timeout" Nov 22 07:44:16 crc kubenswrapper[4853]: I1122 07:44:16.445843 4853 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6d5b6d6b67-mw24j" podUID="6de9b7c8-6d38-4338-9a33-0084a0981c40" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.165:5353: i/o timeout" Nov 22 07:44:19 crc kubenswrapper[4853]: I1122 07:44:19.638691 4853 util.go:48] 
"No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6d5b6d6b67-mw24j" Nov 22 07:44:19 crc kubenswrapper[4853]: I1122 07:44:19.647442 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-zg6t9" Nov 22 07:44:19 crc kubenswrapper[4853]: I1122 07:44:19.678160 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kqnjl\" (UniqueName: \"kubernetes.io/projected/ea831189-252a-49b8-820a-e366450efa38-kube-api-access-kqnjl\") pod \"ea831189-252a-49b8-820a-e366450efa38\" (UID: \"ea831189-252a-49b8-820a-e366450efa38\") " Nov 22 07:44:19 crc kubenswrapper[4853]: I1122 07:44:19.678231 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6de9b7c8-6d38-4338-9a33-0084a0981c40-ovsdbserver-sb\") pod \"6de9b7c8-6d38-4338-9a33-0084a0981c40\" (UID: \"6de9b7c8-6d38-4338-9a33-0084a0981c40\") " Nov 22 07:44:19 crc kubenswrapper[4853]: I1122 07:44:19.678263 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6de9b7c8-6d38-4338-9a33-0084a0981c40-dns-svc\") pod \"6de9b7c8-6d38-4338-9a33-0084a0981c40\" (UID: \"6de9b7c8-6d38-4338-9a33-0084a0981c40\") " Nov 22 07:44:19 crc kubenswrapper[4853]: I1122 07:44:19.678308 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6de9b7c8-6d38-4338-9a33-0084a0981c40-config\") pod \"6de9b7c8-6d38-4338-9a33-0084a0981c40\" (UID: \"6de9b7c8-6d38-4338-9a33-0084a0981c40\") " Nov 22 07:44:19 crc kubenswrapper[4853]: I1122 07:44:19.678432 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjpjf\" (UniqueName: \"kubernetes.io/projected/6de9b7c8-6d38-4338-9a33-0084a0981c40-kube-api-access-pjpjf\") pod \"6de9b7c8-6d38-4338-9a33-0084a0981c40\" (UID: \"6de9b7c8-6d38-4338-9a33-0084a0981c40\") " Nov 22 07:44:19 crc kubenswrapper[4853]: I1122 07:44:19.678499 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6de9b7c8-6d38-4338-9a33-0084a0981c40-ovsdbserver-nb\") pod \"6de9b7c8-6d38-4338-9a33-0084a0981c40\" (UID: \"6de9b7c8-6d38-4338-9a33-0084a0981c40\") " Nov 22 07:44:19 crc kubenswrapper[4853]: I1122 07:44:19.678555 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea831189-252a-49b8-820a-e366450efa38-combined-ca-bundle\") pod \"ea831189-252a-49b8-820a-e366450efa38\" (UID: \"ea831189-252a-49b8-820a-e366450efa38\") " Nov 22 07:44:19 crc kubenswrapper[4853]: I1122 07:44:19.678577 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6de9b7c8-6d38-4338-9a33-0084a0981c40-dns-swift-storage-0\") pod \"6de9b7c8-6d38-4338-9a33-0084a0981c40\" (UID: \"6de9b7c8-6d38-4338-9a33-0084a0981c40\") " Nov 22 07:44:19 crc kubenswrapper[4853]: I1122 07:44:19.678674 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ea831189-252a-49b8-820a-e366450efa38-scripts\") pod \"ea831189-252a-49b8-820a-e366450efa38\" (UID: \"ea831189-252a-49b8-820a-e366450efa38\") " Nov 22 07:44:19 crc kubenswrapper[4853]: I1122 07:44:19.678766 4853 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/ea831189-252a-49b8-820a-e366450efa38-credential-keys\") pod \"ea831189-252a-49b8-820a-e366450efa38\" (UID: \"ea831189-252a-49b8-820a-e366450efa38\") "
Nov 22 07:44:19 crc kubenswrapper[4853]: I1122 07:44:19.678816 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea831189-252a-49b8-820a-e366450efa38-config-data\") pod \"ea831189-252a-49b8-820a-e366450efa38\" (UID: \"ea831189-252a-49b8-820a-e366450efa38\") "
Nov 22 07:44:19 crc kubenswrapper[4853]: I1122 07:44:19.678948 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ea831189-252a-49b8-820a-e366450efa38-fernet-keys\") pod \"ea831189-252a-49b8-820a-e366450efa38\" (UID: \"ea831189-252a-49b8-820a-e366450efa38\") "
Nov 22 07:44:19 crc kubenswrapper[4853]: I1122 07:44:19.689172 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6de9b7c8-6d38-4338-9a33-0084a0981c40-kube-api-access-pjpjf" (OuterVolumeSpecName: "kube-api-access-pjpjf") pod "6de9b7c8-6d38-4338-9a33-0084a0981c40" (UID: "6de9b7c8-6d38-4338-9a33-0084a0981c40"). InnerVolumeSpecName "kube-api-access-pjpjf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 22 07:44:19 crc kubenswrapper[4853]: I1122 07:44:19.690522 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea831189-252a-49b8-820a-e366450efa38-scripts" (OuterVolumeSpecName: "scripts") pod "ea831189-252a-49b8-820a-e366450efa38" (UID: "ea831189-252a-49b8-820a-e366450efa38"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 22 07:44:19 crc kubenswrapper[4853]: I1122 07:44:19.695698 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea831189-252a-49b8-820a-e366450efa38-kube-api-access-kqnjl" (OuterVolumeSpecName: "kube-api-access-kqnjl") pod "ea831189-252a-49b8-820a-e366450efa38" (UID: "ea831189-252a-49b8-820a-e366450efa38"). InnerVolumeSpecName "kube-api-access-kqnjl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 22 07:44:19 crc kubenswrapper[4853]: I1122 07:44:19.703232 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea831189-252a-49b8-820a-e366450efa38-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "ea831189-252a-49b8-820a-e366450efa38" (UID: "ea831189-252a-49b8-820a-e366450efa38"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 22 07:44:19 crc kubenswrapper[4853]: I1122 07:44:19.746612 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea831189-252a-49b8-820a-e366450efa38-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "ea831189-252a-49b8-820a-e366450efa38" (UID: "ea831189-252a-49b8-820a-e366450efa38"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 22 07:44:19 crc kubenswrapper[4853]: I1122 07:44:19.765108 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea831189-252a-49b8-820a-e366450efa38-config-data" (OuterVolumeSpecName: "config-data") pod "ea831189-252a-49b8-820a-e366450efa38" (UID: "ea831189-252a-49b8-820a-e366450efa38"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 22 07:44:19 crc kubenswrapper[4853]: I1122 07:44:19.766244 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6de9b7c8-6d38-4338-9a33-0084a0981c40-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "6de9b7c8-6d38-4338-9a33-0084a0981c40" (UID: "6de9b7c8-6d38-4338-9a33-0084a0981c40"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 22 07:44:19 crc kubenswrapper[4853]: I1122 07:44:19.769220 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea831189-252a-49b8-820a-e366450efa38-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ea831189-252a-49b8-820a-e366450efa38" (UID: "ea831189-252a-49b8-820a-e366450efa38"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 22 07:44:19 crc kubenswrapper[4853]: I1122 07:44:19.776411 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6de9b7c8-6d38-4338-9a33-0084a0981c40-config" (OuterVolumeSpecName: "config") pod "6de9b7c8-6d38-4338-9a33-0084a0981c40" (UID: "6de9b7c8-6d38-4338-9a33-0084a0981c40"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 22 07:44:19 crc kubenswrapper[4853]: I1122 07:44:19.783237 4853 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/ea831189-252a-49b8-820a-e366450efa38-credential-keys\") on node \"crc\" DevicePath \"\""
Nov 22 07:44:19 crc kubenswrapper[4853]: I1122 07:44:19.783273 4853 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea831189-252a-49b8-820a-e366450efa38-config-data\") on node \"crc\" DevicePath \"\""
Nov 22 07:44:19 crc kubenswrapper[4853]: I1122 07:44:19.783283 4853 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ea831189-252a-49b8-820a-e366450efa38-fernet-keys\") on node \"crc\" DevicePath \"\""
Nov 22 07:44:19 crc kubenswrapper[4853]: I1122 07:44:19.783296 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kqnjl\" (UniqueName: \"kubernetes.io/projected/ea831189-252a-49b8-820a-e366450efa38-kube-api-access-kqnjl\") on node \"crc\" DevicePath \"\""
Nov 22 07:44:19 crc kubenswrapper[4853]: I1122 07:44:19.783308 4853 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6de9b7c8-6d38-4338-9a33-0084a0981c40-config\") on node \"crc\" DevicePath \"\""
Nov 22 07:44:19 crc kubenswrapper[4853]: I1122 07:44:19.783319 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjpjf\" (UniqueName: \"kubernetes.io/projected/6de9b7c8-6d38-4338-9a33-0084a0981c40-kube-api-access-pjpjf\") on node \"crc\" DevicePath \"\""
Nov 22 07:44:19 crc kubenswrapper[4853]: I1122 07:44:19.783329 4853 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea831189-252a-49b8-820a-e366450efa38-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 22 07:44:19 crc kubenswrapper[4853]: I1122 07:44:19.783339 4853 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6de9b7c8-6d38-4338-9a33-0084a0981c40-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Nov 22 07:44:19 crc kubenswrapper[4853]: I1122 07:44:19.783348 4853 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ea831189-252a-49b8-820a-e366450efa38-scripts\") on node \"crc\" DevicePath \"\""
Nov 22 07:44:19 crc kubenswrapper[4853]: I1122 07:44:19.793710 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6de9b7c8-6d38-4338-9a33-0084a0981c40-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "6de9b7c8-6d38-4338-9a33-0084a0981c40" (UID: "6de9b7c8-6d38-4338-9a33-0084a0981c40"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 22 07:44:19 crc kubenswrapper[4853]: I1122 07:44:19.820164 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6de9b7c8-6d38-4338-9a33-0084a0981c40-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "6de9b7c8-6d38-4338-9a33-0084a0981c40" (UID: "6de9b7c8-6d38-4338-9a33-0084a0981c40"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 22 07:44:19 crc kubenswrapper[4853]: I1122 07:44:19.820848 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6de9b7c8-6d38-4338-9a33-0084a0981c40-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6de9b7c8-6d38-4338-9a33-0084a0981c40" (UID: "6de9b7c8-6d38-4338-9a33-0084a0981c40"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 22 07:44:19 crc kubenswrapper[4853]: I1122 07:44:19.886079 4853 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6de9b7c8-6d38-4338-9a33-0084a0981c40-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Nov 22 07:44:19 crc kubenswrapper[4853]: I1122 07:44:19.886125 4853 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6de9b7c8-6d38-4338-9a33-0084a0981c40-dns-svc\") on node \"crc\" DevicePath \"\""
Nov 22 07:44:19 crc kubenswrapper[4853]: I1122 07:44:19.886135 4853 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6de9b7c8-6d38-4338-9a33-0084a0981c40-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Nov 22 07:44:20 crc kubenswrapper[4853]: I1122 07:44:20.431860 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6d5b6d6b67-mw24j"
Nov 22 07:44:20 crc kubenswrapper[4853]: I1122 07:44:20.431842 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d5b6d6b67-mw24j" event={"ID":"6de9b7c8-6d38-4338-9a33-0084a0981c40","Type":"ContainerDied","Data":"48c989ab55d52d6cabd67b8e8fbd292418e528da8c926e43c114e97ba0172a04"}
Nov 22 07:44:20 crc kubenswrapper[4853]: I1122 07:44:20.432048 4853 scope.go:117] "RemoveContainer" containerID="9292e7ea65db7971a45b24ccd7c1893f6da9594a6c251c8a86eb64875ec79a7b"
Nov 22 07:44:20 crc kubenswrapper[4853]: I1122 07:44:20.434393 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-zg6t9" event={"ID":"ea831189-252a-49b8-820a-e366450efa38","Type":"ContainerDied","Data":"2fd1461cc07820bb30cbe84b458ec02df01462c4c2f6388b644241439a02a481"}
Nov 22 07:44:20 crc kubenswrapper[4853]: I1122 07:44:20.434439 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2fd1461cc07820bb30cbe84b458ec02df01462c4c2f6388b644241439a02a481"
Nov 22 07:44:20 crc kubenswrapper[4853]: I1122 07:44:20.434515 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-zg6t9"
Nov 22 07:44:20 crc kubenswrapper[4853]: I1122 07:44:20.465365 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6d5b6d6b67-mw24j"]
Nov 22 07:44:20 crc kubenswrapper[4853]: I1122 07:44:20.480051 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6d5b6d6b67-mw24j"]
Nov 22 07:44:20 crc kubenswrapper[4853]: I1122 07:44:20.792889 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-zg6t9"]
Nov 22 07:44:20 crc kubenswrapper[4853]: I1122 07:44:20.821410 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-zg6t9"]
Nov 22 07:44:20 crc kubenswrapper[4853]: I1122 07:44:20.928984 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-mnxvk"]
Nov 22 07:44:20 crc kubenswrapper[4853]: E1122 07:44:20.929682 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6de9b7c8-6d38-4338-9a33-0084a0981c40" containerName="dnsmasq-dns"
Nov 22 07:44:20 crc kubenswrapper[4853]: I1122 07:44:20.929708 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="6de9b7c8-6d38-4338-9a33-0084a0981c40" containerName="dnsmasq-dns"
Nov 22 07:44:20 crc kubenswrapper[4853]: E1122 07:44:20.929735 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c783b7ee-b794-473f-a4f8-cbb907c89e3d" containerName="init"
Nov 22 07:44:20 crc kubenswrapper[4853]: I1122 07:44:20.929763 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="c783b7ee-b794-473f-a4f8-cbb907c89e3d" containerName="init"
Nov 22 07:44:20 crc kubenswrapper[4853]: E1122 07:44:20.929801 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea831189-252a-49b8-820a-e366450efa38" containerName="keystone-bootstrap"
Nov 22 07:44:20 crc kubenswrapper[4853]: I1122 07:44:20.929814 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea831189-252a-49b8-820a-e366450efa38" containerName="keystone-bootstrap"
Nov 22 07:44:20 crc kubenswrapper[4853]: E1122 07:44:20.929827 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6de9b7c8-6d38-4338-9a33-0084a0981c40" containerName="init"
Nov 22 07:44:20 crc kubenswrapper[4853]: I1122 07:44:20.929834 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="6de9b7c8-6d38-4338-9a33-0084a0981c40" containerName="init"
Nov 22 07:44:20 crc kubenswrapper[4853]: I1122 07:44:20.930142 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea831189-252a-49b8-820a-e366450efa38" containerName="keystone-bootstrap"
Nov 22 07:44:20 crc kubenswrapper[4853]: I1122 07:44:20.930173 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="6de9b7c8-6d38-4338-9a33-0084a0981c40" containerName="dnsmasq-dns"
Nov 22 07:44:20 crc kubenswrapper[4853]: I1122 07:44:20.930186 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="c783b7ee-b794-473f-a4f8-cbb907c89e3d" containerName="init"
Nov 22 07:44:20 crc kubenswrapper[4853]: I1122 07:44:20.931519 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-mnxvk"
Nov 22 07:44:20 crc kubenswrapper[4853]: I1122 07:44:20.935130 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-n8jmf"
Nov 22 07:44:20 crc kubenswrapper[4853]: I1122 07:44:20.935401 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Nov 22 07:44:20 crc kubenswrapper[4853]: I1122 07:44:20.939135 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Nov 22 07:44:20 crc kubenswrapper[4853]: I1122 07:44:20.950316 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Nov 22 07:44:20 crc kubenswrapper[4853]: I1122 07:44:20.950678 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret"
Nov 22 07:44:21 crc kubenswrapper[4853]: I1122 07:44:21.019071 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a62df165-8b5f-48a0-823f-91a3517b8082-credential-keys\") pod \"keystone-bootstrap-mnxvk\" (UID: \"a62df165-8b5f-48a0-823f-91a3517b8082\") " pod="openstack/keystone-bootstrap-mnxvk"
Nov 22 07:44:21 crc kubenswrapper[4853]: I1122 07:44:21.019167 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a62df165-8b5f-48a0-823f-91a3517b8082-config-data\") pod \"keystone-bootstrap-mnxvk\" (UID: \"a62df165-8b5f-48a0-823f-91a3517b8082\") " pod="openstack/keystone-bootstrap-mnxvk"
Nov 22 07:44:21 crc kubenswrapper[4853]: I1122 07:44:21.019257 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a62df165-8b5f-48a0-823f-91a3517b8082-fernet-keys\") pod \"keystone-bootstrap-mnxvk\" (UID: \"a62df165-8b5f-48a0-823f-91a3517b8082\") " pod="openstack/keystone-bootstrap-mnxvk"
Nov 22 07:44:21 crc kubenswrapper[4853]: I1122 07:44:21.019405 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a62df165-8b5f-48a0-823f-91a3517b8082-combined-ca-bundle\") pod \"keystone-bootstrap-mnxvk\" (UID: \"a62df165-8b5f-48a0-823f-91a3517b8082\") " pod="openstack/keystone-bootstrap-mnxvk"
Nov 22 07:44:21 crc kubenswrapper[4853]: I1122 07:44:21.019455 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lv2hs\" (UniqueName: \"kubernetes.io/projected/a62df165-8b5f-48a0-823f-91a3517b8082-kube-api-access-lv2hs\") pod \"keystone-bootstrap-mnxvk\" (UID: \"a62df165-8b5f-48a0-823f-91a3517b8082\") " pod="openstack/keystone-bootstrap-mnxvk"
Nov 22 07:44:21 crc kubenswrapper[4853]: I1122 07:44:21.019489 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a62df165-8b5f-48a0-823f-91a3517b8082-scripts\") pod \"keystone-bootstrap-mnxvk\" (UID: \"a62df165-8b5f-48a0-823f-91a3517b8082\") " pod="openstack/keystone-bootstrap-mnxvk"
Nov 22 07:44:21 crc kubenswrapper[4853]: I1122 07:44:21.036368 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-mnxvk"]
Nov 22 07:44:21 crc kubenswrapper[4853]: I1122 07:44:21.120897 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lv2hs\" (UniqueName: \"kubernetes.io/projected/a62df165-8b5f-48a0-823f-91a3517b8082-kube-api-access-lv2hs\") pod \"keystone-bootstrap-mnxvk\" (UID: \"a62df165-8b5f-48a0-823f-91a3517b8082\") " pod="openstack/keystone-bootstrap-mnxvk"
Nov 22 07:44:21 crc kubenswrapper[4853]: I1122 07:44:21.120959 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a62df165-8b5f-48a0-823f-91a3517b8082-scripts\") pod \"keystone-bootstrap-mnxvk\" (UID: \"a62df165-8b5f-48a0-823f-91a3517b8082\") " pod="openstack/keystone-bootstrap-mnxvk"
Nov 22 07:44:21 crc kubenswrapper[4853]: I1122 07:44:21.121006 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a62df165-8b5f-48a0-823f-91a3517b8082-credential-keys\") pod \"keystone-bootstrap-mnxvk\" (UID: \"a62df165-8b5f-48a0-823f-91a3517b8082\") " pod="openstack/keystone-bootstrap-mnxvk"
Nov 22 07:44:21 crc kubenswrapper[4853]: I1122 07:44:21.121024 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a62df165-8b5f-48a0-823f-91a3517b8082-config-data\") pod \"keystone-bootstrap-mnxvk\" (UID: \"a62df165-8b5f-48a0-823f-91a3517b8082\") " pod="openstack/keystone-bootstrap-mnxvk"
Nov 22 07:44:21 crc kubenswrapper[4853]: I1122 07:44:21.121081 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a62df165-8b5f-48a0-823f-91a3517b8082-fernet-keys\") pod \"keystone-bootstrap-mnxvk\" (UID: \"a62df165-8b5f-48a0-823f-91a3517b8082\") " pod="openstack/keystone-bootstrap-mnxvk"
Nov 22 07:44:21 crc kubenswrapper[4853]: I1122 07:44:21.121176 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a62df165-8b5f-48a0-823f-91a3517b8082-combined-ca-bundle\") pod \"keystone-bootstrap-mnxvk\" (UID: \"a62df165-8b5f-48a0-823f-91a3517b8082\") " pod="openstack/keystone-bootstrap-mnxvk"
Nov 22 07:44:21 crc kubenswrapper[4853]: I1122 07:44:21.140928 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a62df165-8b5f-48a0-823f-91a3517b8082-fernet-keys\") pod \"keystone-bootstrap-mnxvk\" (UID: \"a62df165-8b5f-48a0-823f-91a3517b8082\") " pod="openstack/keystone-bootstrap-mnxvk"
Nov 22 07:44:21 crc kubenswrapper[4853]: I1122 07:44:21.147256 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a62df165-8b5f-48a0-823f-91a3517b8082-scripts\") pod \"keystone-bootstrap-mnxvk\" (UID: \"a62df165-8b5f-48a0-823f-91a3517b8082\") " pod="openstack/keystone-bootstrap-mnxvk"
Nov 22 07:44:21 crc kubenswrapper[4853]: I1122 07:44:21.147341 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a62df165-8b5f-48a0-823f-91a3517b8082-combined-ca-bundle\") pod \"keystone-bootstrap-mnxvk\" (UID: \"a62df165-8b5f-48a0-823f-91a3517b8082\") " pod="openstack/keystone-bootstrap-mnxvk"
Nov 22 07:44:21 crc kubenswrapper[4853]: I1122 07:44:21.154416 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lv2hs\" (UniqueName: \"kubernetes.io/projected/a62df165-8b5f-48a0-823f-91a3517b8082-kube-api-access-lv2hs\") pod \"keystone-bootstrap-mnxvk\" (UID: \"a62df165-8b5f-48a0-823f-91a3517b8082\") " pod="openstack/keystone-bootstrap-mnxvk"
Nov 22 07:44:21 crc kubenswrapper[4853]: I1122 07:44:21.154827 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a62df165-8b5f-48a0-823f-91a3517b8082-credential-keys\") pod \"keystone-bootstrap-mnxvk\" (UID: \"a62df165-8b5f-48a0-823f-91a3517b8082\") " pod="openstack/keystone-bootstrap-mnxvk"
Nov 22 07:44:21 crc kubenswrapper[4853]: I1122 07:44:21.167580 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a62df165-8b5f-48a0-823f-91a3517b8082-config-data\") pod \"keystone-bootstrap-mnxvk\" (UID: \"a62df165-8b5f-48a0-823f-91a3517b8082\") " pod="openstack/keystone-bootstrap-mnxvk"
Nov 22 07:44:21 crc kubenswrapper[4853]: I1122 07:44:21.270444 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-mnxvk"
Nov 22 07:44:21 crc kubenswrapper[4853]: I1122 07:44:21.447631 4853 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6d5b6d6b67-mw24j" podUID="6de9b7c8-6d38-4338-9a33-0084a0981c40" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.165:5353: i/o timeout"
Nov 22 07:44:21 crc kubenswrapper[4853]: I1122 07:44:21.764715 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6de9b7c8-6d38-4338-9a33-0084a0981c40" path="/var/lib/kubelet/pods/6de9b7c8-6d38-4338-9a33-0084a0981c40/volumes"
Nov 22 07:44:21 crc kubenswrapper[4853]: I1122 07:44:21.765541 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ea831189-252a-49b8-820a-e366450efa38" path="/var/lib/kubelet/pods/ea831189-252a-49b8-820a-e366450efa38/volumes"
Nov 22 07:44:48 crc kubenswrapper[4853]: E1122 07:44:48.641696 4853 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified"
Nov 22 07:44:48 crc kubenswrapper[4853]: E1122 07:44:48.642563 4853 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s887s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-nnfsq_openstack(29d503fd-37f2-453c-aba9-5d2fb2c6aad0): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Nov 22 07:44:48 crc kubenswrapper[4853]: E1122 07:44:48.644249 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-nnfsq" podUID="29d503fd-37f2-453c-aba9-5d2fb2c6aad0"
Nov 22 07:44:48 crc kubenswrapper[4853]: E1122 07:44:48.786150 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-nnfsq" podUID="29d503fd-37f2-453c-aba9-5d2fb2c6aad0"
Nov 22 07:44:51 crc kubenswrapper[4853]: I1122 07:44:51.763660 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-gg4ws"]
Nov 22 07:44:51 crc kubenswrapper[4853]: I1122 07:44:51.766919 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gg4ws"
Nov 22 07:44:51 crc kubenswrapper[4853]: I1122 07:44:51.775167 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gg4ws"]
Nov 22 07:44:51 crc kubenswrapper[4853]: I1122 07:44:51.921133 4853 scope.go:117] "RemoveContainer" containerID="f55c1c7da1dda0ac3167a3901912328243165a0eaf64be85bf35e52772bca7d9"
Nov 22 07:44:51 crc kubenswrapper[4853]: E1122 07:44:51.934209 4853 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-placement-api:current-podified"
Nov 22 07:44:51 crc kubenswrapper[4853]: E1122 07:44:51.934511 4853 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:placement-db-sync,Image:quay.io/podified-antelope-centos9/openstack-placement-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/placement,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:placement-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p8nvj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42482,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-db-sync-qdbdm_openstack(f1598c90-266c-4607-b491-e9927d76469c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Nov 22 07:44:51 crc kubenswrapper[4853]: E1122 07:44:51.936036 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/placement-db-sync-qdbdm" podUID="f1598c90-266c-4607-b491-e9927d76469c"
Nov 22 07:44:51 crc kubenswrapper[4853]: E1122 07:44:51.950517 4853 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified"
Nov 22 07:44:51 crc kubenswrapper[4853]: E1122 07:44:51.950774 4853 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,Command:[/bin/bash],Args:[-c barbican-manage db upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6x7k2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-dzsj4_openstack(289fadd4-7721-4d8e-b33e-35606c18eedb): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Nov 22 07:44:51 crc kubenswrapper[4853]: E1122 07:44:51.951958 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-dzsj4" podUID="289fadd4-7721-4d8e-b33e-35606c18eedb"
Nov 22 07:44:51 crc kubenswrapper[4853]: E1122 07:44:51.957701 4853 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified"
Nov 22 07:44:51 crc kubenswrapper[4853]: E1122 07:44:51.957896 4853 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bzw9z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-7xksh_openstack(5a08a523-61a0-4155-b389-0491bcd97e84): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Nov 22 07:44:51 crc kubenswrapper[4853]: E1122 07:44:51.959077 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/heat-db-sync-7xksh" podUID="5a08a523-61a0-4155-b389-0491bcd97e84"
Nov 22 07:44:51 crc kubenswrapper[4853]: I1122 07:44:51.962860 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rlw4c\" (UniqueName: \"kubernetes.io/projected/1dda077c-c7ab-4210-a564-3bd29b2bd762-kube-api-access-rlw4c\") pod \"redhat-marketplace-gg4ws\" (UID: \"1dda077c-c7ab-4210-a564-3bd29b2bd762\") " pod="openshift-marketplace/redhat-marketplace-gg4ws"
Nov 22 07:44:51 crc kubenswrapper[4853]: I1122 07:44:51.963029 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1dda077c-c7ab-4210-a564-3bd29b2bd762-utilities\") pod \"redhat-marketplace-gg4ws\" (UID: \"1dda077c-c7ab-4210-a564-3bd29b2bd762\") " pod="openshift-marketplace/redhat-marketplace-gg4ws"
Nov 22 07:44:51 crc kubenswrapper[4853]: I1122 07:44:51.963089 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1dda077c-c7ab-4210-a564-3bd29b2bd762-catalog-content\") pod \"redhat-marketplace-gg4ws\" (UID: \"1dda077c-c7ab-4210-a564-3bd29b2bd762\") " pod="openshift-marketplace/redhat-marketplace-gg4ws"
Nov 22 07:44:52 crc kubenswrapper[4853]: I1122 07:44:52.065160 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1dda077c-c7ab-4210-a564-3bd29b2bd762-catalog-content\") pod \"redhat-marketplace-gg4ws\" (UID: \"1dda077c-c7ab-4210-a564-3bd29b2bd762\") " pod="openshift-marketplace/redhat-marketplace-gg4ws"
Nov 22 07:44:52 crc kubenswrapper[4853]: I1122 07:44:52.066061 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rlw4c\" (UniqueName: \"kubernetes.io/projected/1dda077c-c7ab-4210-a564-3bd29b2bd762-kube-api-access-rlw4c\") pod \"redhat-marketplace-gg4ws\" (UID: \"1dda077c-c7ab-4210-a564-3bd29b2bd762\") " pod="openshift-marketplace/redhat-marketplace-gg4ws"
Nov 22 07:44:52 crc kubenswrapper[4853]: I1122 07:44:52.066086 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1dda077c-c7ab-4210-a564-3bd29b2bd762-catalog-content\") pod \"redhat-marketplace-gg4ws\" (UID: \"1dda077c-c7ab-4210-a564-3bd29b2bd762\") " pod="openshift-marketplace/redhat-marketplace-gg4ws"
Nov 22 07:44:52 crc kubenswrapper[4853]: I1122 07:44:52.066244 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1dda077c-c7ab-4210-a564-3bd29b2bd762-utilities\") pod \"redhat-marketplace-gg4ws\" (UID: \"1dda077c-c7ab-4210-a564-3bd29b2bd762\") " pod="openshift-marketplace/redhat-marketplace-gg4ws"
Nov 22 07:44:52 crc kubenswrapper[4853]: I1122 07:44:52.066798 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1dda077c-c7ab-4210-a564-3bd29b2bd762-utilities\") pod \"redhat-marketplace-gg4ws\" (UID: \"1dda077c-c7ab-4210-a564-3bd29b2bd762\") " pod="openshift-marketplace/redhat-marketplace-gg4ws"
Nov 22 07:44:52 crc kubenswrapper[4853]: I1122 07:44:52.089905 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rlw4c\" (UniqueName: \"kubernetes.io/projected/1dda077c-c7ab-4210-a564-3bd29b2bd762-kube-api-access-rlw4c\") pod \"redhat-marketplace-gg4ws\" (UID: \"1dda077c-c7ab-4210-a564-3bd29b2bd762\") " pod="openshift-marketplace/redhat-marketplace-gg4ws"
Nov 22 07:44:52 crc kubenswrapper[4853]: I1122 07:44:52.389239 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gg4ws"
Nov 22 07:44:52 crc kubenswrapper[4853]: I1122 07:44:52.789659 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-mnxvk"]
Nov 22 07:44:52 crc kubenswrapper[4853]: I1122 07:44:52.826224 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-mnxvk" event={"ID":"a62df165-8b5f-48a0-823f-91a3517b8082","Type":"ContainerStarted","Data":"89e57426848bfd2bee5ea88fd9ef7442cd143d1823d06002d4eea66b2deadce0"}
Nov 22 07:44:52 crc kubenswrapper[4853]: I1122 07:44:52.920438 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gg4ws"]
Nov 22 07:44:53 crc kubenswrapper[4853]: W1122 07:44:53.006232 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1dda077c_c7ab_4210_a564_3bd29b2bd762.slice/crio-cb740e7ceeb67f9c5b78d3bd12ddfd6b07dc31e41935f9593f8ceddc32feeee8 WatchSource:0}: Error finding container cb740e7ceeb67f9c5b78d3bd12ddfd6b07dc31e41935f9593f8ceddc32feeee8: Status 404 returned error can't find the container with id cb740e7ceeb67f9c5b78d3bd12ddfd6b07dc31e41935f9593f8ceddc32feeee8
Nov 22 07:44:53 crc kubenswrapper[4853]: I1122 07:44:53.843405 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gg4ws" event={"ID":"1dda077c-c7ab-4210-a564-3bd29b2bd762","Type":"ContainerStarted","Data":"cb740e7ceeb67f9c5b78d3bd12ddfd6b07dc31e41935f9593f8ceddc32feeee8"}
Nov 22 07:44:53 crc kubenswrapper[4853]: I1122 07:44:53.847392 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" event={"ID":"476c875a-2b87-419a-8042-0ba059620fd8","Type":"ContainerStarted","Data":"93242edc98369aed066eebfb95cc23d28e71df7ebef2302dd5a716d3fb81aedd"}
Nov 22 07:44:53 crc kubenswrapper[4853]: E1122 07:44:53.991124 4853 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified"
Nov 22 07:44:53 crc kubenswrapper[4853]: E1122 07:44:53.991380 4853 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n6fhfh64h8bh66bhdfh696h647h694h696h646h55dh5dfh54dh66h584h65fh658h9ch57bhd8h665h5c6h55fh75h567h64dh675h656h5f5h577h4q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vl8zr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(9f48ed7e-dbb8-4588-9cb7-4f0850757027): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Nov 22 07:44:54 crc kubenswrapper[4853]: I1122 07:44:54.863293 4853 generic.go:334] "Generic (PLEG): container finished" podID="1dda077c-c7ab-4210-a564-3bd29b2bd762" containerID="bf36efdcbfdac26b0cf0952e71b368c7a04b74dfbec40fef776112aae1a3ca46" exitCode=0
Nov 22 07:44:54 crc kubenswrapper[4853]: I1122 07:44:54.863398 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gg4ws" event={"ID":"1dda077c-c7ab-4210-a564-3bd29b2bd762","Type":"ContainerDied","Data":"bf36efdcbfdac26b0cf0952e71b368c7a04b74dfbec40fef776112aae1a3ca46"}
Nov 22 07:44:54 crc kubenswrapper[4853]: I1122 07:44:54.869339 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-mnxvk" event={"ID":"a62df165-8b5f-48a0-823f-91a3517b8082","Type":"ContainerStarted","Data":"555ddf966aa4207870cf3c77619e1ded7dc1792697c2a7dce08ae0fc0db92841"}
Nov 22 07:44:54 crc kubenswrapper[4853]: I1122 07:44:54.912407 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-mnxvk" podStartSLOduration=34.912379736 podStartE2EDuration="34.912379736s" podCreationTimestamp="2025-11-22 07:44:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:44:54.902283292 +0000 UTC m=+2093.742905938" watchObservedRunningTime="2025-11-22 07:44:54.912379736 +0000 UTC m=+2093.753002352"
Nov 22 07:45:00 crc kubenswrapper[4853]: I1122 07:45:00.165932 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396625-f4jwv"]
Nov 22 07:45:00 crc kubenswrapper[4853]: I1122 07:45:00.168888 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396625-f4jwv"
Nov 22 07:45:00 crc kubenswrapper[4853]: I1122 07:45:00.184954 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396625-f4jwv"]
Nov 22 07:45:00 crc kubenswrapper[4853]: I1122 07:45:00.203947 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Nov 22 07:45:00 crc kubenswrapper[4853]: I1122 07:45:00.204127 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Nov 22 07:45:00 crc kubenswrapper[4853]: I1122 07:45:00.305060 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2nvsj\" (UniqueName: \"kubernetes.io/projected/16ff1621-679a-42ea-af86-4101058daa35-kube-api-access-2nvsj\") pod \"collect-profiles-29396625-f4jwv\" (UID: \"16ff1621-679a-42ea-af86-4101058daa35\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396625-f4jwv"
Nov 22 07:45:00 crc kubenswrapper[4853]: I1122 07:45:00.305132 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/16ff1621-679a-42ea-af86-4101058daa35-secret-volume\") pod \"collect-profiles-29396625-f4jwv\" (UID: \"16ff1621-679a-42ea-af86-4101058daa35\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396625-f4jwv"
Nov 22 07:45:00 crc kubenswrapper[4853]: I1122 07:45:00.305300 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/16ff1621-679a-42ea-af86-4101058daa35-config-volume\") pod \"collect-profiles-29396625-f4jwv\" (UID: \"16ff1621-679a-42ea-af86-4101058daa35\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396625-f4jwv"
Nov 22 07:45:00 crc kubenswrapper[4853]: I1122 07:45:00.408652 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2nvsj\" (UniqueName: \"kubernetes.io/projected/16ff1621-679a-42ea-af86-4101058daa35-kube-api-access-2nvsj\") pod \"collect-profiles-29396625-f4jwv\" (UID: \"16ff1621-679a-42ea-af86-4101058daa35\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396625-f4jwv"
Nov 22 07:45:00 crc kubenswrapper[4853]: I1122 07:45:00.408733 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/16ff1621-679a-42ea-af86-4101058daa35-secret-volume\") pod \"collect-profiles-29396625-f4jwv\" (UID: \"16ff1621-679a-42ea-af86-4101058daa35\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396625-f4jwv"
Nov 22 07:45:00 crc kubenswrapper[4853]: I1122 07:45:00.408851 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/16ff1621-679a-42ea-af86-4101058daa35-config-volume\") pod \"collect-profiles-29396625-f4jwv\" (UID: \"16ff1621-679a-42ea-af86-4101058daa35\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396625-f4jwv"
Nov 22 07:45:00 crc kubenswrapper[4853]: I1122 07:45:00.410068 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/16ff1621-679a-42ea-af86-4101058daa35-config-volume\") pod \"collect-profiles-29396625-f4jwv\" (UID: \"16ff1621-679a-42ea-af86-4101058daa35\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396625-f4jwv"
Nov 22 07:45:00 crc kubenswrapper[4853]: I1122 07:45:00.440915 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/16ff1621-679a-42ea-af86-4101058daa35-secret-volume\") pod \"collect-profiles-29396625-f4jwv\" (UID: \"16ff1621-679a-42ea-af86-4101058daa35\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396625-f4jwv"
Nov 22 07:45:00 crc kubenswrapper[4853]: I1122 07:45:00.447660 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2nvsj\" (UniqueName: \"kubernetes.io/projected/16ff1621-679a-42ea-af86-4101058daa35-kube-api-access-2nvsj\") pod \"collect-profiles-29396625-f4jwv\" (UID: \"16ff1621-679a-42ea-af86-4101058daa35\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396625-f4jwv"
Nov 22 07:45:00 crc kubenswrapper[4853]: I1122 07:45:00.529212 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396625-f4jwv"
Nov 22 07:45:01 crc kubenswrapper[4853]: I1122 07:45:01.265118 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396625-f4jwv"]
Nov 22 07:45:01 crc kubenswrapper[4853]: I1122 07:45:01.971181 4853 generic.go:334] "Generic (PLEG): container finished" podID="1dda077c-c7ab-4210-a564-3bd29b2bd762" containerID="44c96a68be0a5539837dc88db7c68d4a5488cb8e4ef600436f28d298ec98cc49" exitCode=0
Nov 22 07:45:01 crc kubenswrapper[4853]: I1122 07:45:01.973531 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gg4ws" event={"ID":"1dda077c-c7ab-4210-a564-3bd29b2bd762","Type":"ContainerDied","Data":"44c96a68be0a5539837dc88db7c68d4a5488cb8e4ef600436f28d298ec98cc49"}
Nov 22 07:45:02 crc kubenswrapper[4853]: I1122 07:45:02.003193 4853 generic.go:334] "Generic (PLEG): container finished" podID="16ff1621-679a-42ea-af86-4101058daa35" containerID="6a05e5f54293086212872f0f9acd7a9f5ecbab972ff347fe1f4bbb1ae303b9fa" exitCode=0
Nov 22 07:45:02 crc kubenswrapper[4853]: I1122 07:45:02.003376 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29396625-f4jwv" event={"ID":"16ff1621-679a-42ea-af86-4101058daa35","Type":"ContainerDied","Data":"6a05e5f54293086212872f0f9acd7a9f5ecbab972ff347fe1f4bbb1ae303b9fa"}
Nov 22 07:45:02 crc kubenswrapper[4853]: I1122 07:45:02.003445 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29396625-f4jwv" event={"ID":"16ff1621-679a-42ea-af86-4101058daa35","Type":"ContainerStarted","Data":"d08bbe2ac8992e6fe8064438833dc32705e0f450ee9858abdfa8173b32ff75a5"}
Nov 22 07:45:02 crc kubenswrapper[4853]: I1122 07:45:02.023637 4853 generic.go:334] "Generic (PLEG): container finished" podID="e2dc7c1e-0083-4eab-80f2-eec435f5c97a" containerID="20af5ec328f6909943bbc0870f254b43a734f0691be83697769a97b8f6d3ddd2" exitCode=0
Nov 22 07:45:02 crc kubenswrapper[4853]: I1122 07:45:02.023731 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-twqq5" event={"ID":"e2dc7c1e-0083-4eab-80f2-eec435f5c97a","Type":"ContainerDied","Data":"20af5ec328f6909943bbc0870f254b43a734f0691be83697769a97b8f6d3ddd2"}
Nov 22 07:45:02 crc kubenswrapper[4853]: I1122 07:45:02.025861 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-nnfsq" event={"ID":"29d503fd-37f2-453c-aba9-5d2fb2c6aad0","Type":"ContainerStarted","Data":"9b54259d55869e27ba8f9e308f53955791c1604ec2b276eeb471d9425fefad38"}
Nov 22 07:45:02 crc kubenswrapper[4853]: I1122 07:45:02.060659 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9f48ed7e-dbb8-4588-9cb7-4f0850757027","Type":"ContainerStarted","Data":"fba47a045d54f7b183dc408933a4d29f1532d89a46eaae83dd7c2cdb604ffdf3"}
Nov 22 07:45:02 crc kubenswrapper[4853]: I1122 07:45:02.131495 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-nnfsq" podStartSLOduration=3.766398599 podStartE2EDuration="1m49.131463286s" podCreationTimestamp="2025-11-22 07:43:13 +0000 UTC" firstStartedPulling="2025-11-22 07:43:15.367664258 +0000 UTC m=+1994.208286884" lastFinishedPulling="2025-11-22 07:45:00.732728955 +0000 UTC m=+2099.573351571" observedRunningTime="2025-11-22 07:45:02.118911996 +0000 UTC m=+2100.959534632" watchObservedRunningTime="2025-11-22 07:45:02.131463286 +0000 UTC m=+2100.972085912"
Nov 22 07:45:02 crc kubenswrapper[4853]: E1122 07:45:02.952987 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified\\\"\"" pod="openstack/heat-db-sync-7xksh" podUID="5a08a523-61a0-4155-b389-0491bcd97e84"
Nov 22 07:45:03 crc kubenswrapper[4853]: I1122 07:45:03.834118 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396625-f4jwv"
Nov 22 07:45:03 crc kubenswrapper[4853]: I1122 07:45:03.860169 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-twqq5"
Nov 22 07:45:03 crc kubenswrapper[4853]: I1122 07:45:03.949498 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2dc7c1e-0083-4eab-80f2-eec435f5c97a-config-data\") pod \"e2dc7c1e-0083-4eab-80f2-eec435f5c97a\" (UID: \"e2dc7c1e-0083-4eab-80f2-eec435f5c97a\") "
Nov 22 07:45:03 crc kubenswrapper[4853]: I1122 07:45:03.950173 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/16ff1621-679a-42ea-af86-4101058daa35-secret-volume\") pod \"16ff1621-679a-42ea-af86-4101058daa35\" (UID: \"16ff1621-679a-42ea-af86-4101058daa35\") "
Nov 22 07:45:03 crc kubenswrapper[4853]: I1122 07:45:03.950252 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5ljd7\" (UniqueName: \"kubernetes.io/projected/e2dc7c1e-0083-4eab-80f2-eec435f5c97a-kube-api-access-5ljd7\") pod \"e2dc7c1e-0083-4eab-80f2-eec435f5c97a\" (UID: \"e2dc7c1e-0083-4eab-80f2-eec435f5c97a\") "
Nov 22 07:45:03 crc kubenswrapper[4853]: I1122 07:45:03.950300 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e2dc7c1e-0083-4eab-80f2-eec435f5c97a-db-sync-config-data\") pod \"e2dc7c1e-0083-4eab-80f2-eec435f5c97a\" (UID: \"e2dc7c1e-0083-4eab-80f2-eec435f5c97a\") "
Nov 22 07:45:03 crc kubenswrapper[4853]: I1122 07:45:03.950468 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2nvsj\" (UniqueName: \"kubernetes.io/projected/16ff1621-679a-42ea-af86-4101058daa35-kube-api-access-2nvsj\") pod \"16ff1621-679a-42ea-af86-4101058daa35\" (UID: \"16ff1621-679a-42ea-af86-4101058daa35\") "
Nov 22 07:45:03 crc kubenswrapper[4853]: I1122 07:45:03.950674 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2dc7c1e-0083-4eab-80f2-eec435f5c97a-combined-ca-bundle\") pod \"e2dc7c1e-0083-4eab-80f2-eec435f5c97a\" (UID: \"e2dc7c1e-0083-4eab-80f2-eec435f5c97a\") "
Nov 22 07:45:03 crc kubenswrapper[4853]: I1122 07:45:03.950733 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/16ff1621-679a-42ea-af86-4101058daa35-config-volume\") pod \"16ff1621-679a-42ea-af86-4101058daa35\" (UID: \"16ff1621-679a-42ea-af86-4101058daa35\") "
Nov 22 07:45:03 crc kubenswrapper[4853]: I1122 07:45:03.952887 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/16ff1621-679a-42ea-af86-4101058daa35-config-volume" (OuterVolumeSpecName: "config-volume") pod "16ff1621-679a-42ea-af86-4101058daa35" (UID: "16ff1621-679a-42ea-af86-4101058daa35"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 22 07:45:03 crc kubenswrapper[4853]: I1122 07:45:03.962828 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16ff1621-679a-42ea-af86-4101058daa35-kube-api-access-2nvsj" (OuterVolumeSpecName: "kube-api-access-2nvsj") pod "16ff1621-679a-42ea-af86-4101058daa35" (UID: "16ff1621-679a-42ea-af86-4101058daa35"). InnerVolumeSpecName "kube-api-access-2nvsj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 22 07:45:03 crc kubenswrapper[4853]: I1122 07:45:03.963034 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2dc7c1e-0083-4eab-80f2-eec435f5c97a-kube-api-access-5ljd7" (OuterVolumeSpecName: "kube-api-access-5ljd7") pod "e2dc7c1e-0083-4eab-80f2-eec435f5c97a" (UID: "e2dc7c1e-0083-4eab-80f2-eec435f5c97a"). InnerVolumeSpecName "kube-api-access-5ljd7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 22 07:45:03 crc kubenswrapper[4853]: I1122 07:45:03.982106 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16ff1621-679a-42ea-af86-4101058daa35-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "16ff1621-679a-42ea-af86-4101058daa35" (UID: "16ff1621-679a-42ea-af86-4101058daa35"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 22 07:45:03 crc kubenswrapper[4853]: I1122 07:45:03.997059 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2dc7c1e-0083-4eab-80f2-eec435f5c97a-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "e2dc7c1e-0083-4eab-80f2-eec435f5c97a" (UID: "e2dc7c1e-0083-4eab-80f2-eec435f5c97a"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 22 07:45:04 crc kubenswrapper[4853]: I1122 07:45:04.030891 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2dc7c1e-0083-4eab-80f2-eec435f5c97a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e2dc7c1e-0083-4eab-80f2-eec435f5c97a" (UID: "e2dc7c1e-0083-4eab-80f2-eec435f5c97a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 22 07:45:04 crc kubenswrapper[4853]: I1122 07:45:04.053669 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2nvsj\" (UniqueName: \"kubernetes.io/projected/16ff1621-679a-42ea-af86-4101058daa35-kube-api-access-2nvsj\") on node \"crc\" DevicePath \"\""
Nov 22 07:45:04 crc kubenswrapper[4853]: I1122 07:45:04.053712 4853 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2dc7c1e-0083-4eab-80f2-eec435f5c97a-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 22 07:45:04 crc kubenswrapper[4853]: I1122 07:45:04.053727 4853 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/16ff1621-679a-42ea-af86-4101058daa35-config-volume\") on node \"crc\" DevicePath \"\""
Nov 22 07:45:04 crc kubenswrapper[4853]: I1122 07:45:04.053736 4853 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/16ff1621-679a-42ea-af86-4101058daa35-secret-volume\") on node \"crc\" DevicePath \"\""
Nov 22 07:45:04 crc kubenswrapper[4853]: I1122 07:45:04.053764 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5ljd7\" (UniqueName: \"kubernetes.io/projected/e2dc7c1e-0083-4eab-80f2-eec435f5c97a-kube-api-access-5ljd7\") on node \"crc\" DevicePath \"\""
Nov 22 07:45:04 crc kubenswrapper[4853]: I1122 07:45:04.053775 4853 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e2dc7c1e-0083-4eab-80f2-eec435f5c97a-db-sync-config-data\") on node \"crc\" DevicePath \"\""
Nov 22 07:45:04 crc kubenswrapper[4853]: I1122 07:45:04.054484 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2dc7c1e-0083-4eab-80f2-eec435f5c97a-config-data" (OuterVolumeSpecName: "config-data") pod "e2dc7c1e-0083-4eab-80f2-eec435f5c97a" (UID: "e2dc7c1e-0083-4eab-80f2-eec435f5c97a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 22 07:45:04 crc kubenswrapper[4853]: I1122 07:45:04.095040 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-twqq5" event={"ID":"e2dc7c1e-0083-4eab-80f2-eec435f5c97a","Type":"ContainerDied","Data":"699b6e230cc9f91ecf6474e7ddccf3b50fbc510217b33e0ab3091b85f6358074"}
Nov 22 07:45:04 crc kubenswrapper[4853]: I1122 07:45:04.095104 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="699b6e230cc9f91ecf6474e7ddccf3b50fbc510217b33e0ab3091b85f6358074"
Nov 22 07:45:04 crc kubenswrapper[4853]: I1122 07:45:04.096309 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-twqq5"
Nov 22 07:45:04 crc kubenswrapper[4853]: I1122 07:45:04.108826 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gg4ws" event={"ID":"1dda077c-c7ab-4210-a564-3bd29b2bd762","Type":"ContainerStarted","Data":"331c956ddc91a31ffdf2a9a56d30237c23d55604526a86ef1d9fcee08262a2b0"}
Nov 22 07:45:04 crc kubenswrapper[4853]: I1122 07:45:04.125799 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29396625-f4jwv" event={"ID":"16ff1621-679a-42ea-af86-4101058daa35","Type":"ContainerDied","Data":"d08bbe2ac8992e6fe8064438833dc32705e0f450ee9858abdfa8173b32ff75a5"}
Nov 22 07:45:04 crc kubenswrapper[4853]: I1122 07:45:04.125906 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d08bbe2ac8992e6fe8064438833dc32705e0f450ee9858abdfa8173b32ff75a5"
Nov 22 07:45:04 crc kubenswrapper[4853]: I1122 07:45:04.125824 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396625-f4jwv"
Nov 22 07:45:04 crc kubenswrapper[4853]: I1122 07:45:04.150581 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-gg4ws" podStartSLOduration=4.847615054 podStartE2EDuration="13.150558495s" podCreationTimestamp="2025-11-22 07:44:51 +0000 UTC" firstStartedPulling="2025-11-22 07:44:54.867043696 +0000 UTC m=+2093.707666322" lastFinishedPulling="2025-11-22 07:45:03.169987137 +0000 UTC m=+2102.010609763" observedRunningTime="2025-11-22 07:45:04.135054184 +0000 UTC m=+2102.975676800" watchObservedRunningTime="2025-11-22 07:45:04.150558495 +0000 UTC m=+2102.991181121"
Nov 22 07:45:04 crc kubenswrapper[4853]: I1122 07:45:04.155867 4853 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2dc7c1e-0083-4eab-80f2-eec435f5c97a-config-data\") on node \"crc\" DevicePath \"\""
Nov 22 07:45:04 crc kubenswrapper[4853]: I1122 07:45:04.646893 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57c957c4ff-ndlll"]
Nov 22 07:45:04 crc kubenswrapper[4853]: E1122 07:45:04.647973 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2dc7c1e-0083-4eab-80f2-eec435f5c97a" containerName="glance-db-sync"
Nov 22 07:45:04 crc kubenswrapper[4853]: I1122 07:45:04.647995 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2dc7c1e-0083-4eab-80f2-eec435f5c97a" containerName="glance-db-sync"
Nov 22 07:45:04 crc kubenswrapper[4853]: E1122 07:45:04.648030 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16ff1621-679a-42ea-af86-4101058daa35" containerName="collect-profiles"
Nov 22 07:45:04 crc kubenswrapper[4853]: I1122 07:45:04.648037 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="16ff1621-679a-42ea-af86-4101058daa35" containerName="collect-profiles"
Nov 22 07:45:04 crc kubenswrapper[4853]: I1122 07:45:04.648288 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="16ff1621-679a-42ea-af86-4101058daa35" containerName="collect-profiles"
Nov 22 07:45:04 crc kubenswrapper[4853]: I1122 07:45:04.648309 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2dc7c1e-0083-4eab-80f2-eec435f5c97a" containerName="glance-db-sync"
Nov 22 07:45:04 crc kubenswrapper[4853]: I1122 07:45:04.652987 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57c957c4ff-ndlll"
Nov 22 07:45:04 crc kubenswrapper[4853]: I1122 07:45:04.676537 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57c957c4ff-ndlll"]
Nov 22 07:45:04 crc kubenswrapper[4853]: I1122 07:45:04.790719 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5cd9\" (UniqueName: \"kubernetes.io/projected/8df9438a-359b-4162-aa8d-24288f14a1fe-kube-api-access-b5cd9\") pod \"dnsmasq-dns-57c957c4ff-ndlll\" (UID: \"8df9438a-359b-4162-aa8d-24288f14a1fe\") " pod="openstack/dnsmasq-dns-57c957c4ff-ndlll"
Nov 22 07:45:04 crc kubenswrapper[4853]: I1122 07:45:04.792218 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8df9438a-359b-4162-aa8d-24288f14a1fe-ovsdbserver-sb\") pod \"dnsmasq-dns-57c957c4ff-ndlll\" (UID: \"8df9438a-359b-4162-aa8d-24288f14a1fe\") " pod="openstack/dnsmasq-dns-57c957c4ff-ndlll"
Nov 22 07:45:04 crc kubenswrapper[4853]: I1122 07:45:04.792449 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8df9438a-359b-4162-aa8d-24288f14a1fe-config\") pod \"dnsmasq-dns-57c957c4ff-ndlll\" (UID: \"8df9438a-359b-4162-aa8d-24288f14a1fe\") " pod="openstack/dnsmasq-dns-57c957c4ff-ndlll"
Nov 22 07:45:04 crc kubenswrapper[4853]: I1122 07:45:04.792560 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8df9438a-359b-4162-aa8d-24288f14a1fe-dns-swift-storage-0\") pod \"dnsmasq-dns-57c957c4ff-ndlll\" (UID: \"8df9438a-359b-4162-aa8d-24288f14a1fe\") " pod="openstack/dnsmasq-dns-57c957c4ff-ndlll"
Nov 22 07:45:04 crc kubenswrapper[4853]: I1122 07:45:04.792633 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8df9438a-359b-4162-aa8d-24288f14a1fe-ovsdbserver-nb\") pod \"dnsmasq-dns-57c957c4ff-ndlll\" (UID: \"8df9438a-359b-4162-aa8d-24288f14a1fe\") " pod="openstack/dnsmasq-dns-57c957c4ff-ndlll"
Nov 22 07:45:04 crc kubenswrapper[4853]: I1122 07:45:04.792777 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8df9438a-359b-4162-aa8d-24288f14a1fe-dns-svc\") pod \"dnsmasq-dns-57c957c4ff-ndlll\" (UID: \"8df9438a-359b-4162-aa8d-24288f14a1fe\") " pod="openstack/dnsmasq-dns-57c957c4ff-ndlll"
Nov 22 07:45:04 crc kubenswrapper[4853]: I1122 07:45:04.894783 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8df9438a-359b-4162-aa8d-24288f14a1fe-config\") pod \"dnsmasq-dns-57c957c4ff-ndlll\" (UID: \"8df9438a-359b-4162-aa8d-24288f14a1fe\") " pod="openstack/dnsmasq-dns-57c957c4ff-ndlll"
Nov 22 07:45:04 crc kubenswrapper[4853]: I1122 07:45:04.894872 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8df9438a-359b-4162-aa8d-24288f14a1fe-dns-swift-storage-0\") pod \"dnsmasq-dns-57c957c4ff-ndlll\" (UID: \"8df9438a-359b-4162-aa8d-24288f14a1fe\") " pod="openstack/dnsmasq-dns-57c957c4ff-ndlll"
Nov 22 07:45:04 crc kubenswrapper[4853]: I1122 07:45:04.894896 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8df9438a-359b-4162-aa8d-24288f14a1fe-ovsdbserver-nb\") pod \"dnsmasq-dns-57c957c4ff-ndlll\" (UID: \"8df9438a-359b-4162-aa8d-24288f14a1fe\") " pod="openstack/dnsmasq-dns-57c957c4ff-ndlll"
Nov 22 07:45:04 crc kubenswrapper[4853]: I1122 07:45:04.894942 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8df9438a-359b-4162-aa8d-24288f14a1fe-dns-svc\") pod \"dnsmasq-dns-57c957c4ff-ndlll\" (UID: \"8df9438a-359b-4162-aa8d-24288f14a1fe\") " pod="openstack/dnsmasq-dns-57c957c4ff-ndlll"
Nov 22 07:45:04 crc kubenswrapper[4853]: I1122 07:45:04.895066 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b5cd9\" (UniqueName: \"kubernetes.io/projected/8df9438a-359b-4162-aa8d-24288f14a1fe-kube-api-access-b5cd9\") pod \"dnsmasq-dns-57c957c4ff-ndlll\" (UID: \"8df9438a-359b-4162-aa8d-24288f14a1fe\") " pod="openstack/dnsmasq-dns-57c957c4ff-ndlll"
Nov 22 07:45:04 crc kubenswrapper[4853]: I1122 07:45:04.895123 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8df9438a-359b-4162-aa8d-24288f14a1fe-ovsdbserver-sb\") pod \"dnsmasq-dns-57c957c4ff-ndlll\" (UID: \"8df9438a-359b-4162-aa8d-24288f14a1fe\") " pod="openstack/dnsmasq-dns-57c957c4ff-ndlll"
Nov 22 07:45:04 crc kubenswrapper[4853]: I1122 07:45:04.897464 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8df9438a-359b-4162-aa8d-24288f14a1fe-ovsdbserver-sb\") pod \"dnsmasq-dns-57c957c4ff-ndlll\" (UID: \"8df9438a-359b-4162-aa8d-24288f14a1fe\") " pod="openstack/dnsmasq-dns-57c957c4ff-ndlll"
Nov 22 07:45:04 crc kubenswrapper[4853]: I1122 07:45:04.897853 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8df9438a-359b-4162-aa8d-24288f14a1fe-dns-svc\") pod \"dnsmasq-dns-57c957c4ff-ndlll\" (UID: \"8df9438a-359b-4162-aa8d-24288f14a1fe\") " pod="openstack/dnsmasq-dns-57c957c4ff-ndlll"
Nov 22 07:45:04 crc kubenswrapper[4853]: I1122 07:45:04.898208 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8df9438a-359b-4162-aa8d-24288f14a1fe-config\") pod \"dnsmasq-dns-57c957c4ff-ndlll\" (UID: \"8df9438a-359b-4162-aa8d-24288f14a1fe\") " pod="openstack/dnsmasq-dns-57c957c4ff-ndlll"
Nov 22 07:45:04 crc kubenswrapper[4853]: I1122 07:45:04.898312 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8df9438a-359b-4162-aa8d-24288f14a1fe-dns-swift-storage-0\") pod \"dnsmasq-dns-57c957c4ff-ndlll\" (UID: \"8df9438a-359b-4162-aa8d-24288f14a1fe\") " pod="openstack/dnsmasq-dns-57c957c4ff-ndlll"
Nov 22 07:45:04 crc kubenswrapper[4853]: I1122 07:45:04.899053 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8df9438a-359b-4162-aa8d-24288f14a1fe-ovsdbserver-nb\") pod \"dnsmasq-dns-57c957c4ff-ndlll\" (UID: \"8df9438a-359b-4162-aa8d-24288f14a1fe\") " pod="openstack/dnsmasq-dns-57c957c4ff-ndlll"
Nov 22 07:45:04 crc kubenswrapper[4853]: I1122 07:45:04.923391 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b5cd9\" (UniqueName:
\"kubernetes.io/projected/8df9438a-359b-4162-aa8d-24288f14a1fe-kube-api-access-b5cd9\") pod \"dnsmasq-dns-57c957c4ff-ndlll\" (UID: \"8df9438a-359b-4162-aa8d-24288f14a1fe\") " pod="openstack/dnsmasq-dns-57c957c4ff-ndlll" Nov 22 07:45:04 crc kubenswrapper[4853]: I1122 07:45:04.936179 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396580-xlrxm"] Nov 22 07:45:04 crc kubenswrapper[4853]: I1122 07:45:04.962191 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396580-xlrxm"] Nov 22 07:45:05 crc kubenswrapper[4853]: I1122 07:45:05.007974 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57c957c4ff-ndlll" Nov 22 07:45:05 crc kubenswrapper[4853]: I1122 07:45:05.665731 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Nov 22 07:45:05 crc kubenswrapper[4853]: I1122 07:45:05.668712 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 22 07:45:05 crc kubenswrapper[4853]: I1122 07:45:05.673499 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Nov 22 07:45:05 crc kubenswrapper[4853]: I1122 07:45:05.673893 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Nov 22 07:45:05 crc kubenswrapper[4853]: I1122 07:45:05.673986 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-cxqr6" Nov 22 07:45:05 crc kubenswrapper[4853]: I1122 07:45:05.683058 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57c957c4ff-ndlll"] Nov 22 07:45:05 crc kubenswrapper[4853]: I1122 07:45:05.722268 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 22 07:45:05 crc kubenswrapper[4853]: E1122 07:45:05.750058 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified\\\"\"" pod="openstack/barbican-db-sync-dzsj4" podUID="289fadd4-7721-4d8e-b33e-35606c18eedb" Nov 22 07:45:05 crc kubenswrapper[4853]: I1122 07:45:05.781224 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="adaf4de5-0b3c-4b48-a232-45157864a0f7" path="/var/lib/kubelet/pods/adaf4de5-0b3c-4b48-a232-45157864a0f7/volumes" Nov 22 07:45:05 crc kubenswrapper[4853]: I1122 07:45:05.830858 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bbcbf7ce-4706-4c45-9047-387fc3c26c85-config-data\") pod \"glance-default-external-api-0\" (UID: \"bbcbf7ce-4706-4c45-9047-387fc3c26c85\") " pod="openstack/glance-default-external-api-0" Nov 22 07:45:05 crc kubenswrapper[4853]: I1122 07:45:05.831214 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9n6p6\" (UniqueName: \"kubernetes.io/projected/bbcbf7ce-4706-4c45-9047-387fc3c26c85-kube-api-access-9n6p6\") pod \"glance-default-external-api-0\" (UID: \"bbcbf7ce-4706-4c45-9047-387fc3c26c85\") " pod="openstack/glance-default-external-api-0" Nov 22 07:45:05 crc kubenswrapper[4853]: I1122 07:45:05.831570 4853 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bbcbf7ce-4706-4c45-9047-387fc3c26c85-scripts\") pod \"glance-default-external-api-0\" (UID: \"bbcbf7ce-4706-4c45-9047-387fc3c26c85\") " pod="openstack/glance-default-external-api-0" Nov 22 07:45:05 crc kubenswrapper[4853]: I1122 07:45:05.831641 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/bbcbf7ce-4706-4c45-9047-387fc3c26c85-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"bbcbf7ce-4706-4c45-9047-387fc3c26c85\") " pod="openstack/glance-default-external-api-0" Nov 22 07:45:05 crc kubenswrapper[4853]: I1122 07:45:05.831960 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bbcbf7ce-4706-4c45-9047-387fc3c26c85-logs\") pod \"glance-default-external-api-0\" (UID: \"bbcbf7ce-4706-4c45-9047-387fc3c26c85\") " pod="openstack/glance-default-external-api-0" Nov 22 07:45:05 crc kubenswrapper[4853]: I1122 07:45:05.832020 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbcbf7ce-4706-4c45-9047-387fc3c26c85-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"bbcbf7ce-4706-4c45-9047-387fc3c26c85\") " pod="openstack/glance-default-external-api-0" Nov 22 07:45:05 crc kubenswrapper[4853]: I1122 07:45:05.832180 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"bbcbf7ce-4706-4c45-9047-387fc3c26c85\") " pod="openstack/glance-default-external-api-0" Nov 22 07:45:05 crc kubenswrapper[4853]: I1122 07:45:05.934670 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bbcbf7ce-4706-4c45-9047-387fc3c26c85-logs\") pod \"glance-default-external-api-0\" (UID: \"bbcbf7ce-4706-4c45-9047-387fc3c26c85\") " pod="openstack/glance-default-external-api-0" Nov 22 07:45:05 crc kubenswrapper[4853]: I1122 07:45:05.934785 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbcbf7ce-4706-4c45-9047-387fc3c26c85-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"bbcbf7ce-4706-4c45-9047-387fc3c26c85\") " pod="openstack/glance-default-external-api-0" Nov 22 07:45:05 crc kubenswrapper[4853]: I1122 07:45:05.934852 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"bbcbf7ce-4706-4c45-9047-387fc3c26c85\") " pod="openstack/glance-default-external-api-0" Nov 22 07:45:05 crc kubenswrapper[4853]: I1122 07:45:05.934954 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bbcbf7ce-4706-4c45-9047-387fc3c26c85-config-data\") pod \"glance-default-external-api-0\" (UID: \"bbcbf7ce-4706-4c45-9047-387fc3c26c85\") " pod="openstack/glance-default-external-api-0" Nov 22 07:45:05 crc kubenswrapper[4853]: I1122 07:45:05.935055 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-9n6p6\" (UniqueName: \"kubernetes.io/projected/bbcbf7ce-4706-4c45-9047-387fc3c26c85-kube-api-access-9n6p6\") pod \"glance-default-external-api-0\" (UID: \"bbcbf7ce-4706-4c45-9047-387fc3c26c85\") " pod="openstack/glance-default-external-api-0" Nov 22 07:45:05 crc kubenswrapper[4853]: I1122 07:45:05.935086 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bbcbf7ce-4706-4c45-9047-387fc3c26c85-scripts\") pod \"glance-default-external-api-0\" (UID: \"bbcbf7ce-4706-4c45-9047-387fc3c26c85\") " pod="openstack/glance-default-external-api-0" Nov 22 07:45:05 crc kubenswrapper[4853]: I1122 07:45:05.935110 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/bbcbf7ce-4706-4c45-9047-387fc3c26c85-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"bbcbf7ce-4706-4c45-9047-387fc3c26c85\") " pod="openstack/glance-default-external-api-0" Nov 22 07:45:05 crc kubenswrapper[4853]: I1122 07:45:05.935327 4853 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"bbcbf7ce-4706-4c45-9047-387fc3c26c85\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/glance-default-external-api-0" Nov 22 07:45:05 crc kubenswrapper[4853]: I1122 07:45:05.935464 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bbcbf7ce-4706-4c45-9047-387fc3c26c85-logs\") pod \"glance-default-external-api-0\" (UID: \"bbcbf7ce-4706-4c45-9047-387fc3c26c85\") " pod="openstack/glance-default-external-api-0" Nov 22 07:45:05 crc kubenswrapper[4853]: I1122 07:45:05.936591 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/bbcbf7ce-4706-4c45-9047-387fc3c26c85-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"bbcbf7ce-4706-4c45-9047-387fc3c26c85\") " pod="openstack/glance-default-external-api-0" Nov 22 07:45:05 crc kubenswrapper[4853]: I1122 07:45:05.944849 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bbcbf7ce-4706-4c45-9047-387fc3c26c85-scripts\") pod \"glance-default-external-api-0\" (UID: \"bbcbf7ce-4706-4c45-9047-387fc3c26c85\") " pod="openstack/glance-default-external-api-0" Nov 22 07:45:05 crc kubenswrapper[4853]: I1122 07:45:05.947810 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbcbf7ce-4706-4c45-9047-387fc3c26c85-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"bbcbf7ce-4706-4c45-9047-387fc3c26c85\") " pod="openstack/glance-default-external-api-0" Nov 22 07:45:05 crc kubenswrapper[4853]: I1122 07:45:05.958057 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bbcbf7ce-4706-4c45-9047-387fc3c26c85-config-data\") pod \"glance-default-external-api-0\" (UID: \"bbcbf7ce-4706-4c45-9047-387fc3c26c85\") " pod="openstack/glance-default-external-api-0" Nov 22 07:45:05 crc kubenswrapper[4853]: I1122 07:45:05.968393 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9n6p6\" (UniqueName: \"kubernetes.io/projected/bbcbf7ce-4706-4c45-9047-387fc3c26c85-kube-api-access-9n6p6\") pod 
\"glance-default-external-api-0\" (UID: \"bbcbf7ce-4706-4c45-9047-387fc3c26c85\") " pod="openstack/glance-default-external-api-0" Nov 22 07:45:06 crc kubenswrapper[4853]: I1122 07:45:06.009981 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"bbcbf7ce-4706-4c45-9047-387fc3c26c85\") " pod="openstack/glance-default-external-api-0" Nov 22 07:45:06 crc kubenswrapper[4853]: I1122 07:45:06.132355 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 22 07:45:06 crc kubenswrapper[4853]: I1122 07:45:06.135486 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 22 07:45:06 crc kubenswrapper[4853]: I1122 07:45:06.138662 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Nov 22 07:45:06 crc kubenswrapper[4853]: I1122 07:45:06.144695 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 22 07:45:06 crc kubenswrapper[4853]: I1122 07:45:06.244252 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ed74026d-8703-40d1-be6d-f146d8c0a5b0-logs\") pod \"glance-default-internal-api-0\" (UID: \"ed74026d-8703-40d1-be6d-f146d8c0a5b0\") " pod="openstack/glance-default-internal-api-0" Nov 22 07:45:06 crc kubenswrapper[4853]: I1122 07:45:06.244359 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ed74026d-8703-40d1-be6d-f146d8c0a5b0-scripts\") pod \"glance-default-internal-api-0\" (UID: \"ed74026d-8703-40d1-be6d-f146d8c0a5b0\") " pod="openstack/glance-default-internal-api-0" Nov 22 07:45:06 crc kubenswrapper[4853]: I1122 07:45:06.244395 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"ed74026d-8703-40d1-be6d-f146d8c0a5b0\") " pod="openstack/glance-default-internal-api-0" Nov 22 07:45:06 crc kubenswrapper[4853]: I1122 07:45:06.244646 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed74026d-8703-40d1-be6d-f146d8c0a5b0-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"ed74026d-8703-40d1-be6d-f146d8c0a5b0\") " pod="openstack/glance-default-internal-api-0" Nov 22 07:45:06 crc kubenswrapper[4853]: I1122 07:45:06.244722 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jj6qp\" (UniqueName: \"kubernetes.io/projected/ed74026d-8703-40d1-be6d-f146d8c0a5b0-kube-api-access-jj6qp\") pod \"glance-default-internal-api-0\" (UID: \"ed74026d-8703-40d1-be6d-f146d8c0a5b0\") " pod="openstack/glance-default-internal-api-0" Nov 22 07:45:06 crc kubenswrapper[4853]: I1122 07:45:06.245310 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ed74026d-8703-40d1-be6d-f146d8c0a5b0-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"ed74026d-8703-40d1-be6d-f146d8c0a5b0\") " 
pod="openstack/glance-default-internal-api-0" Nov 22 07:45:06 crc kubenswrapper[4853]: I1122 07:45:06.245420 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed74026d-8703-40d1-be6d-f146d8c0a5b0-config-data\") pod \"glance-default-internal-api-0\" (UID: \"ed74026d-8703-40d1-be6d-f146d8c0a5b0\") " pod="openstack/glance-default-internal-api-0" Nov 22 07:45:06 crc kubenswrapper[4853]: I1122 07:45:06.300190 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 22 07:45:06 crc kubenswrapper[4853]: I1122 07:45:06.348635 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ed74026d-8703-40d1-be6d-f146d8c0a5b0-logs\") pod \"glance-default-internal-api-0\" (UID: \"ed74026d-8703-40d1-be6d-f146d8c0a5b0\") " pod="openstack/glance-default-internal-api-0" Nov 22 07:45:06 crc kubenswrapper[4853]: I1122 07:45:06.348704 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ed74026d-8703-40d1-be6d-f146d8c0a5b0-scripts\") pod \"glance-default-internal-api-0\" (UID: \"ed74026d-8703-40d1-be6d-f146d8c0a5b0\") " pod="openstack/glance-default-internal-api-0" Nov 22 07:45:06 crc kubenswrapper[4853]: I1122 07:45:06.348738 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"ed74026d-8703-40d1-be6d-f146d8c0a5b0\") " pod="openstack/glance-default-internal-api-0" Nov 22 07:45:06 crc kubenswrapper[4853]: I1122 07:45:06.348823 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed74026d-8703-40d1-be6d-f146d8c0a5b0-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"ed74026d-8703-40d1-be6d-f146d8c0a5b0\") " pod="openstack/glance-default-internal-api-0" Nov 22 07:45:06 crc kubenswrapper[4853]: I1122 07:45:06.348856 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jj6qp\" (UniqueName: \"kubernetes.io/projected/ed74026d-8703-40d1-be6d-f146d8c0a5b0-kube-api-access-jj6qp\") pod \"glance-default-internal-api-0\" (UID: \"ed74026d-8703-40d1-be6d-f146d8c0a5b0\") " pod="openstack/glance-default-internal-api-0" Nov 22 07:45:06 crc kubenswrapper[4853]: I1122 07:45:06.348934 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ed74026d-8703-40d1-be6d-f146d8c0a5b0-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"ed74026d-8703-40d1-be6d-f146d8c0a5b0\") " pod="openstack/glance-default-internal-api-0" Nov 22 07:45:06 crc kubenswrapper[4853]: I1122 07:45:06.348974 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed74026d-8703-40d1-be6d-f146d8c0a5b0-config-data\") pod \"glance-default-internal-api-0\" (UID: \"ed74026d-8703-40d1-be6d-f146d8c0a5b0\") " pod="openstack/glance-default-internal-api-0" Nov 22 07:45:06 crc kubenswrapper[4853]: I1122 07:45:06.349407 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ed74026d-8703-40d1-be6d-f146d8c0a5b0-logs\") pod 
\"glance-default-internal-api-0\" (UID: \"ed74026d-8703-40d1-be6d-f146d8c0a5b0\") " pod="openstack/glance-default-internal-api-0" Nov 22 07:45:06 crc kubenswrapper[4853]: I1122 07:45:06.349542 4853 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"ed74026d-8703-40d1-be6d-f146d8c0a5b0\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/glance-default-internal-api-0" Nov 22 07:45:06 crc kubenswrapper[4853]: I1122 07:45:06.350117 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ed74026d-8703-40d1-be6d-f146d8c0a5b0-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"ed74026d-8703-40d1-be6d-f146d8c0a5b0\") " pod="openstack/glance-default-internal-api-0" Nov 22 07:45:06 crc kubenswrapper[4853]: I1122 07:45:06.362588 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed74026d-8703-40d1-be6d-f146d8c0a5b0-config-data\") pod \"glance-default-internal-api-0\" (UID: \"ed74026d-8703-40d1-be6d-f146d8c0a5b0\") " pod="openstack/glance-default-internal-api-0" Nov 22 07:45:06 crc kubenswrapper[4853]: I1122 07:45:06.362914 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ed74026d-8703-40d1-be6d-f146d8c0a5b0-scripts\") pod \"glance-default-internal-api-0\" (UID: \"ed74026d-8703-40d1-be6d-f146d8c0a5b0\") " pod="openstack/glance-default-internal-api-0" Nov 22 07:45:06 crc kubenswrapper[4853]: I1122 07:45:06.363508 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed74026d-8703-40d1-be6d-f146d8c0a5b0-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"ed74026d-8703-40d1-be6d-f146d8c0a5b0\") " pod="openstack/glance-default-internal-api-0" Nov 22 07:45:06 crc kubenswrapper[4853]: I1122 07:45:06.367423 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jj6qp\" (UniqueName: \"kubernetes.io/projected/ed74026d-8703-40d1-be6d-f146d8c0a5b0-kube-api-access-jj6qp\") pod \"glance-default-internal-api-0\" (UID: \"ed74026d-8703-40d1-be6d-f146d8c0a5b0\") " pod="openstack/glance-default-internal-api-0" Nov 22 07:45:06 crc kubenswrapper[4853]: I1122 07:45:06.448152 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"ed74026d-8703-40d1-be6d-f146d8c0a5b0\") " pod="openstack/glance-default-internal-api-0" Nov 22 07:45:06 crc kubenswrapper[4853]: I1122 07:45:06.465932 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 22 07:45:06 crc kubenswrapper[4853]: E1122 07:45:06.750465 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-placement-api:current-podified\\\"\"" pod="openstack/placement-db-sync-qdbdm" podUID="f1598c90-266c-4607-b491-e9927d76469c" Nov 22 07:45:07 crc kubenswrapper[4853]: I1122 07:45:07.793392 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 22 07:45:07 crc kubenswrapper[4853]: I1122 07:45:07.865296 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 22 07:45:09 crc kubenswrapper[4853]: I1122 07:45:09.227895 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57c957c4ff-ndlll" event={"ID":"8df9438a-359b-4162-aa8d-24288f14a1fe","Type":"ContainerStarted","Data":"3dfc382077f0b113c1004549e5cacf93a334aec9b4c6aef7c3cbea3441783711"} Nov 22 07:45:12 crc kubenswrapper[4853]: I1122 07:45:12.389869 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-gg4ws" Nov 22 07:45:12 crc kubenswrapper[4853]: I1122 07:45:12.390520 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-gg4ws" Nov 22 07:45:12 crc kubenswrapper[4853]: I1122 07:45:12.451565 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-gg4ws" Nov 22 07:45:13 crc kubenswrapper[4853]: I1122 07:45:13.343460 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-gg4ws" Nov 22 07:45:13 crc kubenswrapper[4853]: I1122 07:45:13.410575 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-gg4ws"] Nov 22 07:45:13 crc kubenswrapper[4853]: I1122 07:45:13.585444 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 22 07:45:14 crc kubenswrapper[4853]: I1122 07:45:14.300384 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"ed74026d-8703-40d1-be6d-f146d8c0a5b0","Type":"ContainerStarted","Data":"5930363fab858261e9343de73a8dcdebcbed1ceace949221ebf2f7d77e5fad99"} Nov 22 07:45:14 crc kubenswrapper[4853]: I1122 07:45:14.302283 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57c957c4ff-ndlll" event={"ID":"8df9438a-359b-4162-aa8d-24288f14a1fe","Type":"ContainerStarted","Data":"cb20dbbac851bb035c0d8d2e04e0757bd7fee553b0a8e2601d7a36aeb4735119"} Nov 22 07:45:14 crc kubenswrapper[4853]: I1122 07:45:14.360583 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 22 07:45:15 crc kubenswrapper[4853]: I1122 07:45:15.315679 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"bbcbf7ce-4706-4c45-9047-387fc3c26c85","Type":"ContainerStarted","Data":"eb612c85e99071fd747714649fc26b413a6e75ffe2aeef868395af7b546277c7"} Nov 22 07:45:15 crc kubenswrapper[4853]: I1122 07:45:15.317584 4853 generic.go:334] "Generic (PLEG): container finished" podID="8df9438a-359b-4162-aa8d-24288f14a1fe" 
containerID="cb20dbbac851bb035c0d8d2e04e0757bd7fee553b0a8e2601d7a36aeb4735119" exitCode=0 Nov 22 07:45:15 crc kubenswrapper[4853]: I1122 07:45:15.317683 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57c957c4ff-ndlll" event={"ID":"8df9438a-359b-4162-aa8d-24288f14a1fe","Type":"ContainerDied","Data":"cb20dbbac851bb035c0d8d2e04e0757bd7fee553b0a8e2601d7a36aeb4735119"} Nov 22 07:45:15 crc kubenswrapper[4853]: I1122 07:45:15.317927 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-gg4ws" podUID="1dda077c-c7ab-4210-a564-3bd29b2bd762" containerName="registry-server" containerID="cri-o://331c956ddc91a31ffdf2a9a56d30237c23d55604526a86ef1d9fcee08262a2b0" gracePeriod=2 Nov 22 07:45:16 crc kubenswrapper[4853]: I1122 07:45:16.331718 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"ed74026d-8703-40d1-be6d-f146d8c0a5b0","Type":"ContainerStarted","Data":"dbb82607ddaf5ff06d27577c896c60ff974dc58cf42abd6d2c1306ecdc79bf41"} Nov 22 07:45:18 crc kubenswrapper[4853]: I1122 07:45:18.365293 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57c957c4ff-ndlll" event={"ID":"8df9438a-359b-4162-aa8d-24288f14a1fe","Type":"ContainerStarted","Data":"00108b64f47311e178ddb6e7e375433d7cb453d064d5ecec370d5d6b297a11cb"} Nov 22 07:45:18 crc kubenswrapper[4853]: I1122 07:45:18.366344 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-57c957c4ff-ndlll" Nov 22 07:45:18 crc kubenswrapper[4853]: I1122 07:45:18.368114 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"bbcbf7ce-4706-4c45-9047-387fc3c26c85","Type":"ContainerStarted","Data":"c5e86eb02c998371a71a3fa2e7c92faa07e714d3d5c9a72d4457b8a430bbef52"} Nov 22 07:45:18 crc kubenswrapper[4853]: I1122 07:45:18.375128 4853 generic.go:334] "Generic (PLEG): container finished" podID="1dda077c-c7ab-4210-a564-3bd29b2bd762" containerID="331c956ddc91a31ffdf2a9a56d30237c23d55604526a86ef1d9fcee08262a2b0" exitCode=0 Nov 22 07:45:18 crc kubenswrapper[4853]: I1122 07:45:18.375184 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gg4ws" event={"ID":"1dda077c-c7ab-4210-a564-3bd29b2bd762","Type":"ContainerDied","Data":"331c956ddc91a31ffdf2a9a56d30237c23d55604526a86ef1d9fcee08262a2b0"} Nov 22 07:45:18 crc kubenswrapper[4853]: I1122 07:45:18.394796 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-57c957c4ff-ndlll" podStartSLOduration=14.394773534 podStartE2EDuration="14.394773534s" podCreationTimestamp="2025-11-22 07:45:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:45:18.389053389 +0000 UTC m=+2117.229676015" watchObservedRunningTime="2025-11-22 07:45:18.394773534 +0000 UTC m=+2117.235396170" Nov 22 07:45:19 crc kubenswrapper[4853]: I1122 07:45:19.394209 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9f48ed7e-dbb8-4588-9cb7-4f0850757027","Type":"ContainerStarted","Data":"ac52082768e71a2e8b3aa503a0cf09689e662f574b9e55e7aa9f8b6b4e607f34"} Nov 22 07:45:19 crc kubenswrapper[4853]: I1122 07:45:19.402779 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" 
event={"ID":"bbcbf7ce-4706-4c45-9047-387fc3c26c85","Type":"ContainerStarted","Data":"a6eac06240721c99cdfc5c0eab611fdc75c006c5f15b76c500872f133842f163"} Nov 22 07:45:19 crc kubenswrapper[4853]: I1122 07:45:19.402918 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="bbcbf7ce-4706-4c45-9047-387fc3c26c85" containerName="glance-log" containerID="cri-o://c5e86eb02c998371a71a3fa2e7c92faa07e714d3d5c9a72d4457b8a430bbef52" gracePeriod=30 Nov 22 07:45:19 crc kubenswrapper[4853]: I1122 07:45:19.403002 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="bbcbf7ce-4706-4c45-9047-387fc3c26c85" containerName="glance-httpd" containerID="cri-o://a6eac06240721c99cdfc5c0eab611fdc75c006c5f15b76c500872f133842f163" gracePeriod=30 Nov 22 07:45:19 crc kubenswrapper[4853]: I1122 07:45:19.408230 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"ed74026d-8703-40d1-be6d-f146d8c0a5b0","Type":"ContainerStarted","Data":"c658821a3c9c0715b300d7754ed4a7804038b4318a3291cd8ce214e191551b8e"} Nov 22 07:45:19 crc kubenswrapper[4853]: I1122 07:45:19.408715 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="ed74026d-8703-40d1-be6d-f146d8c0a5b0" containerName="glance-log" containerID="cri-o://dbb82607ddaf5ff06d27577c896c60ff974dc58cf42abd6d2c1306ecdc79bf41" gracePeriod=30 Nov 22 07:45:19 crc kubenswrapper[4853]: I1122 07:45:19.408908 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="ed74026d-8703-40d1-be6d-f146d8c0a5b0" containerName="glance-httpd" containerID="cri-o://c658821a3c9c0715b300d7754ed4a7804038b4318a3291cd8ce214e191551b8e" gracePeriod=30 Nov 22 07:45:19 crc kubenswrapper[4853]: I1122 07:45:19.425503 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=15.425482023 podStartE2EDuration="15.425482023s" podCreationTimestamp="2025-11-22 07:45:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:45:19.424552987 +0000 UTC m=+2118.265175613" watchObservedRunningTime="2025-11-22 07:45:19.425482023 +0000 UTC m=+2118.266104639" Nov 22 07:45:19 crc kubenswrapper[4853]: I1122 07:45:19.458243 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=14.458221211 podStartE2EDuration="14.458221211s" podCreationTimestamp="2025-11-22 07:45:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:45:19.452160307 +0000 UTC m=+2118.292782933" watchObservedRunningTime="2025-11-22 07:45:19.458221211 +0000 UTC m=+2118.298843837" Nov 22 07:45:20 crc kubenswrapper[4853]: I1122 07:45:20.425905 4853 generic.go:334] "Generic (PLEG): container finished" podID="bbcbf7ce-4706-4c45-9047-387fc3c26c85" containerID="a6eac06240721c99cdfc5c0eab611fdc75c006c5f15b76c500872f133842f163" exitCode=143 Nov 22 07:45:20 crc kubenswrapper[4853]: I1122 07:45:20.426275 4853 generic.go:334] "Generic (PLEG): container finished" podID="bbcbf7ce-4706-4c45-9047-387fc3c26c85" 
containerID="c5e86eb02c998371a71a3fa2e7c92faa07e714d3d5c9a72d4457b8a430bbef52" exitCode=143 Nov 22 07:45:20 crc kubenswrapper[4853]: I1122 07:45:20.426024 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"bbcbf7ce-4706-4c45-9047-387fc3c26c85","Type":"ContainerDied","Data":"a6eac06240721c99cdfc5c0eab611fdc75c006c5f15b76c500872f133842f163"} Nov 22 07:45:20 crc kubenswrapper[4853]: I1122 07:45:20.426356 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"bbcbf7ce-4706-4c45-9047-387fc3c26c85","Type":"ContainerDied","Data":"c5e86eb02c998371a71a3fa2e7c92faa07e714d3d5c9a72d4457b8a430bbef52"} Nov 22 07:45:20 crc kubenswrapper[4853]: I1122 07:45:20.429049 4853 generic.go:334] "Generic (PLEG): container finished" podID="ed74026d-8703-40d1-be6d-f146d8c0a5b0" containerID="c658821a3c9c0715b300d7754ed4a7804038b4318a3291cd8ce214e191551b8e" exitCode=143 Nov 22 07:45:20 crc kubenswrapper[4853]: I1122 07:45:20.429073 4853 generic.go:334] "Generic (PLEG): container finished" podID="ed74026d-8703-40d1-be6d-f146d8c0a5b0" containerID="dbb82607ddaf5ff06d27577c896c60ff974dc58cf42abd6d2c1306ecdc79bf41" exitCode=143 Nov 22 07:45:20 crc kubenswrapper[4853]: I1122 07:45:20.429089 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"ed74026d-8703-40d1-be6d-f146d8c0a5b0","Type":"ContainerDied","Data":"c658821a3c9c0715b300d7754ed4a7804038b4318a3291cd8ce214e191551b8e"} Nov 22 07:45:20 crc kubenswrapper[4853]: I1122 07:45:20.429108 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"ed74026d-8703-40d1-be6d-f146d8c0a5b0","Type":"ContainerDied","Data":"dbb82607ddaf5ff06d27577c896c60ff974dc58cf42abd6d2c1306ecdc79bf41"} Nov 22 07:45:21 crc kubenswrapper[4853]: I1122 07:45:21.266518 4853 scope.go:117] "RemoveContainer" containerID="79907e986f7668a7d975a32ab11e2d321162948bb31ac8f00d8f8d88bb7dfb42" Nov 22 07:45:22 crc kubenswrapper[4853]: E1122 07:45:22.390790 4853 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 331c956ddc91a31ffdf2a9a56d30237c23d55604526a86ef1d9fcee08262a2b0 is running failed: container process not found" containerID="331c956ddc91a31ffdf2a9a56d30237c23d55604526a86ef1d9fcee08262a2b0" cmd=["grpc_health_probe","-addr=:50051"] Nov 22 07:45:22 crc kubenswrapper[4853]: E1122 07:45:22.391708 4853 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 331c956ddc91a31ffdf2a9a56d30237c23d55604526a86ef1d9fcee08262a2b0 is running failed: container process not found" containerID="331c956ddc91a31ffdf2a9a56d30237c23d55604526a86ef1d9fcee08262a2b0" cmd=["grpc_health_probe","-addr=:50051"] Nov 22 07:45:22 crc kubenswrapper[4853]: E1122 07:45:22.392299 4853 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 331c956ddc91a31ffdf2a9a56d30237c23d55604526a86ef1d9fcee08262a2b0 is running failed: container process not found" containerID="331c956ddc91a31ffdf2a9a56d30237c23d55604526a86ef1d9fcee08262a2b0" cmd=["grpc_health_probe","-addr=:50051"] Nov 22 07:45:22 crc kubenswrapper[4853]: E1122 07:45:22.392336 4853 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or 
running: checking if PID of 331c956ddc91a31ffdf2a9a56d30237c23d55604526a86ef1d9fcee08262a2b0 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-gg4ws" podUID="1dda077c-c7ab-4210-a564-3bd29b2bd762" containerName="registry-server" Nov 22 07:45:22 crc kubenswrapper[4853]: I1122 07:45:22.771725 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gg4ws" Nov 22 07:45:22 crc kubenswrapper[4853]: I1122 07:45:22.862925 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1dda077c-c7ab-4210-a564-3bd29b2bd762-catalog-content\") pod \"1dda077c-c7ab-4210-a564-3bd29b2bd762\" (UID: \"1dda077c-c7ab-4210-a564-3bd29b2bd762\") " Nov 22 07:45:22 crc kubenswrapper[4853]: I1122 07:45:22.863133 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1dda077c-c7ab-4210-a564-3bd29b2bd762-utilities\") pod \"1dda077c-c7ab-4210-a564-3bd29b2bd762\" (UID: \"1dda077c-c7ab-4210-a564-3bd29b2bd762\") " Nov 22 07:45:22 crc kubenswrapper[4853]: I1122 07:45:22.863245 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rlw4c\" (UniqueName: \"kubernetes.io/projected/1dda077c-c7ab-4210-a564-3bd29b2bd762-kube-api-access-rlw4c\") pod \"1dda077c-c7ab-4210-a564-3bd29b2bd762\" (UID: \"1dda077c-c7ab-4210-a564-3bd29b2bd762\") " Nov 22 07:45:22 crc kubenswrapper[4853]: I1122 07:45:22.864512 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1dda077c-c7ab-4210-a564-3bd29b2bd762-utilities" (OuterVolumeSpecName: "utilities") pod "1dda077c-c7ab-4210-a564-3bd29b2bd762" (UID: "1dda077c-c7ab-4210-a564-3bd29b2bd762"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:45:22 crc kubenswrapper[4853]: I1122 07:45:22.873170 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1dda077c-c7ab-4210-a564-3bd29b2bd762-kube-api-access-rlw4c" (OuterVolumeSpecName: "kube-api-access-rlw4c") pod "1dda077c-c7ab-4210-a564-3bd29b2bd762" (UID: "1dda077c-c7ab-4210-a564-3bd29b2bd762"). InnerVolumeSpecName "kube-api-access-rlw4c". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:45:22 crc kubenswrapper[4853]: I1122 07:45:22.879016 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1dda077c-c7ab-4210-a564-3bd29b2bd762-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1dda077c-c7ab-4210-a564-3bd29b2bd762" (UID: "1dda077c-c7ab-4210-a564-3bd29b2bd762"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:45:22 crc kubenswrapper[4853]: I1122 07:45:22.967407 4853 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1dda077c-c7ab-4210-a564-3bd29b2bd762-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 07:45:22 crc kubenswrapper[4853]: I1122 07:45:22.967458 4853 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1dda077c-c7ab-4210-a564-3bd29b2bd762-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 07:45:22 crc kubenswrapper[4853]: I1122 07:45:22.967480 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rlw4c\" (UniqueName: \"kubernetes.io/projected/1dda077c-c7ab-4210-a564-3bd29b2bd762-kube-api-access-rlw4c\") on node \"crc\" DevicePath \"\"" Nov 22 07:45:23 crc kubenswrapper[4853]: I1122 07:45:23.294101 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 22 07:45:23 crc kubenswrapper[4853]: I1122 07:45:23.349725 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 22 07:45:23 crc kubenswrapper[4853]: I1122 07:45:23.380010 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bbcbf7ce-4706-4c45-9047-387fc3c26c85-logs\") pod \"bbcbf7ce-4706-4c45-9047-387fc3c26c85\" (UID: \"bbcbf7ce-4706-4c45-9047-387fc3c26c85\") " Nov 22 07:45:23 crc kubenswrapper[4853]: I1122 07:45:23.380213 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bbcbf7ce-4706-4c45-9047-387fc3c26c85-config-data\") pod \"bbcbf7ce-4706-4c45-9047-387fc3c26c85\" (UID: \"bbcbf7ce-4706-4c45-9047-387fc3c26c85\") " Nov 22 07:45:23 crc kubenswrapper[4853]: I1122 07:45:23.380303 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9n6p6\" (UniqueName: \"kubernetes.io/projected/bbcbf7ce-4706-4c45-9047-387fc3c26c85-kube-api-access-9n6p6\") pod \"bbcbf7ce-4706-4c45-9047-387fc3c26c85\" (UID: \"bbcbf7ce-4706-4c45-9047-387fc3c26c85\") " Nov 22 07:45:23 crc kubenswrapper[4853]: I1122 07:45:23.380367 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bbcbf7ce-4706-4c45-9047-387fc3c26c85-scripts\") pod \"bbcbf7ce-4706-4c45-9047-387fc3c26c85\" (UID: \"bbcbf7ce-4706-4c45-9047-387fc3c26c85\") " Nov 22 07:45:23 crc kubenswrapper[4853]: I1122 07:45:23.380965 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bbcbf7ce-4706-4c45-9047-387fc3c26c85-logs" (OuterVolumeSpecName: "logs") pod "bbcbf7ce-4706-4c45-9047-387fc3c26c85" (UID: "bbcbf7ce-4706-4c45-9047-387fc3c26c85"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:45:23 crc kubenswrapper[4853]: I1122 07:45:23.385433 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"bbcbf7ce-4706-4c45-9047-387fc3c26c85\" (UID: \"bbcbf7ce-4706-4c45-9047-387fc3c26c85\") " Nov 22 07:45:23 crc kubenswrapper[4853]: I1122 07:45:23.385575 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbcbf7ce-4706-4c45-9047-387fc3c26c85-combined-ca-bundle\") pod \"bbcbf7ce-4706-4c45-9047-387fc3c26c85\" (UID: \"bbcbf7ce-4706-4c45-9047-387fc3c26c85\") " Nov 22 07:45:23 crc kubenswrapper[4853]: I1122 07:45:23.385662 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/bbcbf7ce-4706-4c45-9047-387fc3c26c85-httpd-run\") pod \"bbcbf7ce-4706-4c45-9047-387fc3c26c85\" (UID: \"bbcbf7ce-4706-4c45-9047-387fc3c26c85\") " Nov 22 07:45:23 crc kubenswrapper[4853]: I1122 07:45:23.386826 4853 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bbcbf7ce-4706-4c45-9047-387fc3c26c85-logs\") on node \"crc\" DevicePath \"\"" Nov 22 07:45:23 crc kubenswrapper[4853]: I1122 07:45:23.391952 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bbcbf7ce-4706-4c45-9047-387fc3c26c85-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "bbcbf7ce-4706-4c45-9047-387fc3c26c85" (UID: "bbcbf7ce-4706-4c45-9047-387fc3c26c85"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:45:23 crc kubenswrapper[4853]: I1122 07:45:23.398940 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "glance") pod "bbcbf7ce-4706-4c45-9047-387fc3c26c85" (UID: "bbcbf7ce-4706-4c45-9047-387fc3c26c85"). InnerVolumeSpecName "local-storage05-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 22 07:45:23 crc kubenswrapper[4853]: I1122 07:45:23.399478 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bbcbf7ce-4706-4c45-9047-387fc3c26c85-kube-api-access-9n6p6" (OuterVolumeSpecName: "kube-api-access-9n6p6") pod "bbcbf7ce-4706-4c45-9047-387fc3c26c85" (UID: "bbcbf7ce-4706-4c45-9047-387fc3c26c85"). InnerVolumeSpecName "kube-api-access-9n6p6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:45:23 crc kubenswrapper[4853]: I1122 07:45:23.415673 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bbcbf7ce-4706-4c45-9047-387fc3c26c85-scripts" (OuterVolumeSpecName: "scripts") pod "bbcbf7ce-4706-4c45-9047-387fc3c26c85" (UID: "bbcbf7ce-4706-4c45-9047-387fc3c26c85"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:45:23 crc kubenswrapper[4853]: I1122 07:45:23.473165 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bbcbf7ce-4706-4c45-9047-387fc3c26c85-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bbcbf7ce-4706-4c45-9047-387fc3c26c85" (UID: "bbcbf7ce-4706-4c45-9047-387fc3c26c85"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:45:23 crc kubenswrapper[4853]: I1122 07:45:23.473214 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gg4ws" event={"ID":"1dda077c-c7ab-4210-a564-3bd29b2bd762","Type":"ContainerDied","Data":"cb740e7ceeb67f9c5b78d3bd12ddfd6b07dc31e41935f9593f8ceddc32feeee8"} Nov 22 07:45:23 crc kubenswrapper[4853]: I1122 07:45:23.473295 4853 scope.go:117] "RemoveContainer" containerID="331c956ddc91a31ffdf2a9a56d30237c23d55604526a86ef1d9fcee08262a2b0" Nov 22 07:45:23 crc kubenswrapper[4853]: I1122 07:45:23.473372 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gg4ws" Nov 22 07:45:23 crc kubenswrapper[4853]: I1122 07:45:23.477133 4853 generic.go:334] "Generic (PLEG): container finished" podID="a62df165-8b5f-48a0-823f-91a3517b8082" containerID="555ddf966aa4207870cf3c77619e1ded7dc1792697c2a7dce08ae0fc0db92841" exitCode=0 Nov 22 07:45:23 crc kubenswrapper[4853]: I1122 07:45:23.477305 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-mnxvk" event={"ID":"a62df165-8b5f-48a0-823f-91a3517b8082","Type":"ContainerDied","Data":"555ddf966aa4207870cf3c77619e1ded7dc1792697c2a7dce08ae0fc0db92841"} Nov 22 07:45:23 crc kubenswrapper[4853]: I1122 07:45:23.485334 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"ed74026d-8703-40d1-be6d-f146d8c0a5b0","Type":"ContainerDied","Data":"5930363fab858261e9343de73a8dcdebcbed1ceace949221ebf2f7d77e5fad99"} Nov 22 07:45:23 crc kubenswrapper[4853]: I1122 07:45:23.485985 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 22 07:45:23 crc kubenswrapper[4853]: I1122 07:45:23.487419 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ed74026d-8703-40d1-be6d-f146d8c0a5b0-scripts\") pod \"ed74026d-8703-40d1-be6d-f146d8c0a5b0\" (UID: \"ed74026d-8703-40d1-be6d-f146d8c0a5b0\") " Nov 22 07:45:23 crc kubenswrapper[4853]: I1122 07:45:23.487482 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ed74026d-8703-40d1-be6d-f146d8c0a5b0-logs\") pod \"ed74026d-8703-40d1-be6d-f146d8c0a5b0\" (UID: \"ed74026d-8703-40d1-be6d-f146d8c0a5b0\") " Nov 22 07:45:23 crc kubenswrapper[4853]: I1122 07:45:23.487654 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed74026d-8703-40d1-be6d-f146d8c0a5b0-combined-ca-bundle\") pod \"ed74026d-8703-40d1-be6d-f146d8c0a5b0\" (UID: \"ed74026d-8703-40d1-be6d-f146d8c0a5b0\") " Nov 22 07:45:23 crc kubenswrapper[4853]: I1122 07:45:23.487705 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ed74026d-8703-40d1-be6d-f146d8c0a5b0\" (UID: \"ed74026d-8703-40d1-be6d-f146d8c0a5b0\") " Nov 22 07:45:23 crc kubenswrapper[4853]: I1122 07:45:23.487787 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ed74026d-8703-40d1-be6d-f146d8c0a5b0-httpd-run\") pod \"ed74026d-8703-40d1-be6d-f146d8c0a5b0\" (UID: \"ed74026d-8703-40d1-be6d-f146d8c0a5b0\") " Nov 22 07:45:23 crc kubenswrapper[4853]: 
I1122 07:45:23.487866 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jj6qp\" (UniqueName: \"kubernetes.io/projected/ed74026d-8703-40d1-be6d-f146d8c0a5b0-kube-api-access-jj6qp\") pod \"ed74026d-8703-40d1-be6d-f146d8c0a5b0\" (UID: \"ed74026d-8703-40d1-be6d-f146d8c0a5b0\") " Nov 22 07:45:23 crc kubenswrapper[4853]: I1122 07:45:23.487917 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed74026d-8703-40d1-be6d-f146d8c0a5b0-config-data\") pod \"ed74026d-8703-40d1-be6d-f146d8c0a5b0\" (UID: \"ed74026d-8703-40d1-be6d-f146d8c0a5b0\") " Nov 22 07:45:23 crc kubenswrapper[4853]: I1122 07:45:23.488681 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9n6p6\" (UniqueName: \"kubernetes.io/projected/bbcbf7ce-4706-4c45-9047-387fc3c26c85-kube-api-access-9n6p6\") on node \"crc\" DevicePath \"\"" Nov 22 07:45:23 crc kubenswrapper[4853]: I1122 07:45:23.488712 4853 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bbcbf7ce-4706-4c45-9047-387fc3c26c85-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 07:45:23 crc kubenswrapper[4853]: I1122 07:45:23.488739 4853 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" " Nov 22 07:45:23 crc kubenswrapper[4853]: I1122 07:45:23.488767 4853 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbcbf7ce-4706-4c45-9047-387fc3c26c85-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:45:23 crc kubenswrapper[4853]: I1122 07:45:23.488780 4853 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/bbcbf7ce-4706-4c45-9047-387fc3c26c85-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 22 07:45:23 crc kubenswrapper[4853]: I1122 07:45:23.491377 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ed74026d-8703-40d1-be6d-f146d8c0a5b0-logs" (OuterVolumeSpecName: "logs") pod "ed74026d-8703-40d1-be6d-f146d8c0a5b0" (UID: "ed74026d-8703-40d1-be6d-f146d8c0a5b0"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:45:23 crc kubenswrapper[4853]: I1122 07:45:23.492195 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ed74026d-8703-40d1-be6d-f146d8c0a5b0-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "ed74026d-8703-40d1-be6d-f146d8c0a5b0" (UID: "ed74026d-8703-40d1-be6d-f146d8c0a5b0"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:45:23 crc kubenswrapper[4853]: I1122 07:45:23.495642 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-7xksh" event={"ID":"5a08a523-61a0-4155-b389-0491bcd97e84","Type":"ContainerStarted","Data":"a3861ced43ef558639f77a20e56162a89c124dd3bbfd3b4e531cc643f8fdcea1"} Nov 22 07:45:23 crc kubenswrapper[4853]: I1122 07:45:23.507399 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"bbcbf7ce-4706-4c45-9047-387fc3c26c85","Type":"ContainerDied","Data":"eb612c85e99071fd747714649fc26b413a6e75ffe2aeef868395af7b546277c7"} Nov 22 07:45:23 crc kubenswrapper[4853]: I1122 07:45:23.508155 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 22 07:45:23 crc kubenswrapper[4853]: I1122 07:45:23.509832 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed74026d-8703-40d1-be6d-f146d8c0a5b0-scripts" (OuterVolumeSpecName: "scripts") pod "ed74026d-8703-40d1-be6d-f146d8c0a5b0" (UID: "ed74026d-8703-40d1-be6d-f146d8c0a5b0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:45:23 crc kubenswrapper[4853]: I1122 07:45:23.517968 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed74026d-8703-40d1-be6d-f146d8c0a5b0-kube-api-access-jj6qp" (OuterVolumeSpecName: "kube-api-access-jj6qp") pod "ed74026d-8703-40d1-be6d-f146d8c0a5b0" (UID: "ed74026d-8703-40d1-be6d-f146d8c0a5b0"). InnerVolumeSpecName "kube-api-access-jj6qp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:45:23 crc kubenswrapper[4853]: I1122 07:45:23.533385 4853 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc" Nov 22 07:45:23 crc kubenswrapper[4853]: I1122 07:45:23.534993 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "glance") pod "ed74026d-8703-40d1-be6d-f146d8c0a5b0" (UID: "ed74026d-8703-40d1-be6d-f146d8c0a5b0"). InnerVolumeSpecName "local-storage08-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 22 07:45:23 crc kubenswrapper[4853]: I1122 07:45:23.549162 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-sync-7xksh" podStartSLOduration=3.122226623 podStartE2EDuration="2m10.549132928s" podCreationTimestamp="2025-11-22 07:43:13 +0000 UTC" firstStartedPulling="2025-11-22 07:43:15.211947102 +0000 UTC m=+1994.052569728" lastFinishedPulling="2025-11-22 07:45:22.638853407 +0000 UTC m=+2121.479476033" observedRunningTime="2025-11-22 07:45:23.530007509 +0000 UTC m=+2122.370630135" watchObservedRunningTime="2025-11-22 07:45:23.549132928 +0000 UTC m=+2122.389755554" Nov 22 07:45:23 crc kubenswrapper[4853]: I1122 07:45:23.552778 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bbcbf7ce-4706-4c45-9047-387fc3c26c85-config-data" (OuterVolumeSpecName: "config-data") pod "bbcbf7ce-4706-4c45-9047-387fc3c26c85" (UID: "bbcbf7ce-4706-4c45-9047-387fc3c26c85"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:45:23 crc kubenswrapper[4853]: I1122 07:45:23.553462 4853 scope.go:117] "RemoveContainer" containerID="44c96a68be0a5539837dc88db7c68d4a5488cb8e4ef600436f28d298ec98cc49" Nov 22 07:45:23 crc kubenswrapper[4853]: I1122 07:45:23.559425 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed74026d-8703-40d1-be6d-f146d8c0a5b0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ed74026d-8703-40d1-be6d-f146d8c0a5b0" (UID: "ed74026d-8703-40d1-be6d-f146d8c0a5b0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:45:23 crc kubenswrapper[4853]: I1122 07:45:23.591644 4853 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ed74026d-8703-40d1-be6d-f146d8c0a5b0-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 22 07:45:23 crc kubenswrapper[4853]: I1122 07:45:23.592155 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jj6qp\" (UniqueName: \"kubernetes.io/projected/ed74026d-8703-40d1-be6d-f146d8c0a5b0-kube-api-access-jj6qp\") on node \"crc\" DevicePath \"\"" Nov 22 07:45:23 crc kubenswrapper[4853]: I1122 07:45:23.592965 4853 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ed74026d-8703-40d1-be6d-f146d8c0a5b0-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 07:45:23 crc kubenswrapper[4853]: I1122 07:45:23.593058 4853 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ed74026d-8703-40d1-be6d-f146d8c0a5b0-logs\") on node \"crc\" DevicePath \"\"" Nov 22 07:45:23 crc kubenswrapper[4853]: I1122 07:45:23.593139 4853 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bbcbf7ce-4706-4c45-9047-387fc3c26c85-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 07:45:23 crc kubenswrapper[4853]: I1122 07:45:23.593237 4853 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed74026d-8703-40d1-be6d-f146d8c0a5b0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:45:23 crc kubenswrapper[4853]: I1122 07:45:23.593305 4853 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\"" Nov 22 07:45:23 crc kubenswrapper[4853]: I1122 07:45:23.593402 4853 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Nov 22 07:45:23 crc kubenswrapper[4853]: I1122 07:45:23.595115 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed74026d-8703-40d1-be6d-f146d8c0a5b0-config-data" (OuterVolumeSpecName: "config-data") pod "ed74026d-8703-40d1-be6d-f146d8c0a5b0" (UID: "ed74026d-8703-40d1-be6d-f146d8c0a5b0"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:45:23 crc kubenswrapper[4853]: I1122 07:45:23.602473 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-gg4ws"] Nov 22 07:45:23 crc kubenswrapper[4853]: I1122 07:45:23.621496 4853 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Nov 22 07:45:23 crc kubenswrapper[4853]: I1122 07:45:23.622936 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-gg4ws"] Nov 22 07:45:23 crc kubenswrapper[4853]: I1122 07:45:23.696826 4853 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Nov 22 07:45:23 crc kubenswrapper[4853]: I1122 07:45:23.696873 4853 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed74026d-8703-40d1-be6d-f146d8c0a5b0-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 07:45:23 crc kubenswrapper[4853]: I1122 07:45:23.734925 4853 scope.go:117] "RemoveContainer" containerID="bf36efdcbfdac26b0cf0952e71b368c7a04b74dfbec40fef776112aae1a3ca46" Nov 22 07:45:23 crc kubenswrapper[4853]: I1122 07:45:23.766699 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1dda077c-c7ab-4210-a564-3bd29b2bd762" path="/var/lib/kubelet/pods/1dda077c-c7ab-4210-a564-3bd29b2bd762/volumes" Nov 22 07:45:23 crc kubenswrapper[4853]: I1122 07:45:23.783470 4853 scope.go:117] "RemoveContainer" containerID="c658821a3c9c0715b300d7754ed4a7804038b4318a3291cd8ce214e191551b8e" Nov 22 07:45:23 crc kubenswrapper[4853]: I1122 07:45:23.857930 4853 scope.go:117] "RemoveContainer" containerID="dbb82607ddaf5ff06d27577c896c60ff974dc58cf42abd6d2c1306ecdc79bf41" Nov 22 07:45:23 crc kubenswrapper[4853]: I1122 07:45:23.872803 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 22 07:45:23 crc kubenswrapper[4853]: I1122 07:45:23.894971 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 22 07:45:23 crc kubenswrapper[4853]: I1122 07:45:23.911031 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 22 07:45:23 crc kubenswrapper[4853]: I1122 07:45:23.925841 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 22 07:45:23 crc kubenswrapper[4853]: I1122 07:45:23.929384 4853 scope.go:117] "RemoveContainer" containerID="a6eac06240721c99cdfc5c0eab611fdc75c006c5f15b76c500872f133842f163" Nov 22 07:45:23 crc kubenswrapper[4853]: I1122 07:45:23.938914 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 22 07:45:23 crc kubenswrapper[4853]: E1122 07:45:23.939701 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1dda077c-c7ab-4210-a564-3bd29b2bd762" containerName="registry-server" Nov 22 07:45:23 crc kubenswrapper[4853]: I1122 07:45:23.939719 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="1dda077c-c7ab-4210-a564-3bd29b2bd762" containerName="registry-server" Nov 22 07:45:23 crc kubenswrapper[4853]: E1122 07:45:23.939737 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed74026d-8703-40d1-be6d-f146d8c0a5b0" containerName="glance-log" Nov 22 07:45:23 crc 
Nov 22 07:45:23 crc kubenswrapper[4853]: I1122 07:45:23.939743 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed74026d-8703-40d1-be6d-f146d8c0a5b0" containerName="glance-log" Nov 22 07:45:23 crc kubenswrapper[4853]: E1122 07:45:23.939787 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1dda077c-c7ab-4210-a564-3bd29b2bd762" containerName="extract-utilities" Nov 22 07:45:23 crc kubenswrapper[4853]: I1122 07:45:23.939796 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="1dda077c-c7ab-4210-a564-3bd29b2bd762" containerName="extract-utilities" Nov 22 07:45:23 crc kubenswrapper[4853]: E1122 07:45:23.939818 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbcbf7ce-4706-4c45-9047-387fc3c26c85" containerName="glance-log" Nov 22 07:45:23 crc kubenswrapper[4853]: I1122 07:45:23.939825 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbcbf7ce-4706-4c45-9047-387fc3c26c85" containerName="glance-log" Nov 22 07:45:23 crc kubenswrapper[4853]: E1122 07:45:23.939834 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1dda077c-c7ab-4210-a564-3bd29b2bd762" containerName="extract-content" Nov 22 07:45:23 crc kubenswrapper[4853]: I1122 07:45:23.939840 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="1dda077c-c7ab-4210-a564-3bd29b2bd762" containerName="extract-content" Nov 22 07:45:23 crc kubenswrapper[4853]: E1122 07:45:23.939881 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbcbf7ce-4706-4c45-9047-387fc3c26c85" containerName="glance-httpd" Nov 22 07:45:23 crc kubenswrapper[4853]: I1122 07:45:23.939888 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbcbf7ce-4706-4c45-9047-387fc3c26c85" containerName="glance-httpd" Nov 22 07:45:23 crc kubenswrapper[4853]: E1122 07:45:23.939898 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed74026d-8703-40d1-be6d-f146d8c0a5b0" containerName="glance-httpd" Nov 22 07:45:23 crc kubenswrapper[4853]: I1122 07:45:23.939903 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed74026d-8703-40d1-be6d-f146d8c0a5b0" containerName="glance-httpd" Nov 22 07:45:23 crc kubenswrapper[4853]: I1122 07:45:23.940179 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed74026d-8703-40d1-be6d-f146d8c0a5b0" containerName="glance-log" Nov 22 07:45:23 crc kubenswrapper[4853]: I1122 07:45:23.940206 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="bbcbf7ce-4706-4c45-9047-387fc3c26c85" containerName="glance-log" Nov 22 07:45:23 crc kubenswrapper[4853]: I1122 07:45:23.940223 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="1dda077c-c7ab-4210-a564-3bd29b2bd762" containerName="registry-server" Nov 22 07:45:23 crc kubenswrapper[4853]: I1122 07:45:23.940235 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="bbcbf7ce-4706-4c45-9047-387fc3c26c85" containerName="glance-httpd" Nov 22 07:45:23 crc kubenswrapper[4853]: I1122 07:45:23.940249 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed74026d-8703-40d1-be6d-f146d8c0a5b0" containerName="glance-httpd" Nov 22 07:45:23 crc kubenswrapper[4853]: I1122 07:45:23.941797 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
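The E-level cpu_manager block above is routine cleanup rather than a failure: with the old glance and redhat-marketplace pods gone, RemoveStaleState drops their CPUSet assignments and memory-manager state while the replacement pods are being admitted, and cpu_manager.go:410 simply logs that at error severity. The klog header makes this kind of triage mechanical: severity letter (I/W/E), MMDD date, wall-clock time, PID, then source file:line. A sketch that tallies a saved journal by severity and lists which sources emit the E lines (same hypothetical kubelet.log; findall is used because a wrapped dump can carry several entries per line):

    import re
    from collections import Counter

    # klog header: <severity>MMDD hh:mm:ss.micros PID file:line]
    hdr = re.compile(r"\b([IWE])\d{4} \d{2}:\d{2}:\d{2}\.\d+\s+\d+ ([\w./-]+:\d+)\]")

    by_sev, err_src = Counter(), Counter()
    with open("kubelet.log") as f:
        for line in f:
            for sev, src in hdr.findall(line):
                by_sev[sev] += 1
                if sev == "E":
                    err_src[src] += 1

    print(dict(by_sev))
    print(err_src.most_common(5))   # here cpu_manager.go:410 dominates, and it is benign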
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 22 07:45:23 crc kubenswrapper[4853]: I1122 07:45:23.959537 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 22 07:45:23 crc kubenswrapper[4853]: I1122 07:45:23.968174 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-cxqr6" Nov 22 07:45:23 crc kubenswrapper[4853]: I1122 07:45:23.969130 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Nov 22 07:45:23 crc kubenswrapper[4853]: I1122 07:45:23.969365 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Nov 22 07:45:23 crc kubenswrapper[4853]: I1122 07:45:23.969137 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Nov 22 07:45:23 crc kubenswrapper[4853]: I1122 07:45:23.976588 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Nov 22 07:45:23 crc kubenswrapper[4853]: I1122 07:45:23.985336 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 22 07:45:23 crc kubenswrapper[4853]: I1122 07:45:23.994010 4853 scope.go:117] "RemoveContainer" containerID="c5e86eb02c998371a71a3fa2e7c92faa07e714d3d5c9a72d4457b8a430bbef52" Nov 22 07:45:23 crc kubenswrapper[4853]: I1122 07:45:23.994667 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Nov 22 07:45:23 crc kubenswrapper[4853]: I1122 07:45:23.995183 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Nov 22 07:45:24 crc kubenswrapper[4853]: I1122 07:45:24.005206 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0f23fba-f7c9-48db-a522-d225352bae0b-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"e0f23fba-f7c9-48db-a522-d225352bae0b\") " pod="openstack/glance-default-internal-api-0" Nov 22 07:45:24 crc kubenswrapper[4853]: I1122 07:45:24.005277 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e0f23fba-f7c9-48db-a522-d225352bae0b-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"e0f23fba-f7c9-48db-a522-d225352bae0b\") " pod="openstack/glance-default-internal-api-0" Nov 22 07:45:24 crc kubenswrapper[4853]: I1122 07:45:24.005309 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e0f23fba-f7c9-48db-a522-d225352bae0b-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"e0f23fba-f7c9-48db-a522-d225352bae0b\") " pod="openstack/glance-default-internal-api-0" Nov 22 07:45:24 crc kubenswrapper[4853]: I1122 07:45:24.005382 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"e0f23fba-f7c9-48db-a522-d225352bae0b\") " pod="openstack/glance-default-internal-api-0" Nov 22 07:45:24 crc kubenswrapper[4853]: I1122 07:45:24.005432 4853 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0f23fba-f7c9-48db-a522-d225352bae0b-config-data\") pod \"glance-default-internal-api-0\" (UID: \"e0f23fba-f7c9-48db-a522-d225352bae0b\") " pod="openstack/glance-default-internal-api-0" Nov 22 07:45:24 crc kubenswrapper[4853]: I1122 07:45:24.005524 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zlcrr\" (UniqueName: \"kubernetes.io/projected/e0f23fba-f7c9-48db-a522-d225352bae0b-kube-api-access-zlcrr\") pod \"glance-default-internal-api-0\" (UID: \"e0f23fba-f7c9-48db-a522-d225352bae0b\") " pod="openstack/glance-default-internal-api-0" Nov 22 07:45:24 crc kubenswrapper[4853]: I1122 07:45:24.005584 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e0f23fba-f7c9-48db-a522-d225352bae0b-logs\") pod \"glance-default-internal-api-0\" (UID: \"e0f23fba-f7c9-48db-a522-d225352bae0b\") " pod="openstack/glance-default-internal-api-0" Nov 22 07:45:24 crc kubenswrapper[4853]: I1122 07:45:24.005652 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e0f23fba-f7c9-48db-a522-d225352bae0b-scripts\") pod \"glance-default-internal-api-0\" (UID: \"e0f23fba-f7c9-48db-a522-d225352bae0b\") " pod="openstack/glance-default-internal-api-0" Nov 22 07:45:24 crc kubenswrapper[4853]: I1122 07:45:24.020892 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 22 07:45:24 crc kubenswrapper[4853]: I1122 07:45:24.108546 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/39d11b3b-9490-41d8-87ad-542cddb9cc6b-logs\") pod \"glance-default-external-api-0\" (UID: \"39d11b3b-9490-41d8-87ad-542cddb9cc6b\") " pod="openstack/glance-default-external-api-0" Nov 22 07:45:24 crc kubenswrapper[4853]: I1122 07:45:24.109048 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/39d11b3b-9490-41d8-87ad-542cddb9cc6b-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"39d11b3b-9490-41d8-87ad-542cddb9cc6b\") " pod="openstack/glance-default-external-api-0" Nov 22 07:45:24 crc kubenswrapper[4853]: I1122 07:45:24.109128 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0f23fba-f7c9-48db-a522-d225352bae0b-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"e0f23fba-f7c9-48db-a522-d225352bae0b\") " pod="openstack/glance-default-internal-api-0" Nov 22 07:45:24 crc kubenswrapper[4853]: I1122 07:45:24.109149 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e0f23fba-f7c9-48db-a522-d225352bae0b-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"e0f23fba-f7c9-48db-a522-d225352bae0b\") " pod="openstack/glance-default-internal-api-0" Nov 22 07:45:24 crc kubenswrapper[4853]: I1122 07:45:24.109176 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e0f23fba-f7c9-48db-a522-d225352bae0b-internal-tls-certs\") pod 
\"glance-default-internal-api-0\" (UID: \"e0f23fba-f7c9-48db-a522-d225352bae0b\") " pod="openstack/glance-default-internal-api-0" Nov 22 07:45:24 crc kubenswrapper[4853]: I1122 07:45:24.109222 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"e0f23fba-f7c9-48db-a522-d225352bae0b\") " pod="openstack/glance-default-internal-api-0" Nov 22 07:45:24 crc kubenswrapper[4853]: I1122 07:45:24.109243 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"39d11b3b-9490-41d8-87ad-542cddb9cc6b\") " pod="openstack/glance-default-external-api-0" Nov 22 07:45:24 crc kubenswrapper[4853]: I1122 07:45:24.109282 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0f23fba-f7c9-48db-a522-d225352bae0b-config-data\") pod \"glance-default-internal-api-0\" (UID: \"e0f23fba-f7c9-48db-a522-d225352bae0b\") " pod="openstack/glance-default-internal-api-0" Nov 22 07:45:24 crc kubenswrapper[4853]: I1122 07:45:24.109945 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e0f23fba-f7c9-48db-a522-d225352bae0b-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"e0f23fba-f7c9-48db-a522-d225352bae0b\") " pod="openstack/glance-default-internal-api-0" Nov 22 07:45:24 crc kubenswrapper[4853]: I1122 07:45:24.110004 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zlcrr\" (UniqueName: \"kubernetes.io/projected/e0f23fba-f7c9-48db-a522-d225352bae0b-kube-api-access-zlcrr\") pod \"glance-default-internal-api-0\" (UID: \"e0f23fba-f7c9-48db-a522-d225352bae0b\") " pod="openstack/glance-default-internal-api-0" Nov 22 07:45:24 crc kubenswrapper[4853]: I1122 07:45:24.110024 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/39d11b3b-9490-41d8-87ad-542cddb9cc6b-scripts\") pod \"glance-default-external-api-0\" (UID: \"39d11b3b-9490-41d8-87ad-542cddb9cc6b\") " pod="openstack/glance-default-external-api-0" Nov 22 07:45:24 crc kubenswrapper[4853]: I1122 07:45:24.110173 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/39d11b3b-9490-41d8-87ad-542cddb9cc6b-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"39d11b3b-9490-41d8-87ad-542cddb9cc6b\") " pod="openstack/glance-default-external-api-0" Nov 22 07:45:24 crc kubenswrapper[4853]: I1122 07:45:24.110102 4853 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"e0f23fba-f7c9-48db-a522-d225352bae0b\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/glance-default-internal-api-0" Nov 22 07:45:24 crc kubenswrapper[4853]: I1122 07:45:24.110215 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e0f23fba-f7c9-48db-a522-d225352bae0b-logs\") pod \"glance-default-internal-api-0\" (UID: 
\"e0f23fba-f7c9-48db-a522-d225352bae0b\") " pod="openstack/glance-default-internal-api-0" Nov 22 07:45:24 crc kubenswrapper[4853]: I1122 07:45:24.110939 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39d11b3b-9490-41d8-87ad-542cddb9cc6b-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"39d11b3b-9490-41d8-87ad-542cddb9cc6b\") " pod="openstack/glance-default-external-api-0" Nov 22 07:45:24 crc kubenswrapper[4853]: I1122 07:45:24.110972 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e0f23fba-f7c9-48db-a522-d225352bae0b-scripts\") pod \"glance-default-internal-api-0\" (UID: \"e0f23fba-f7c9-48db-a522-d225352bae0b\") " pod="openstack/glance-default-internal-api-0" Nov 22 07:45:24 crc kubenswrapper[4853]: I1122 07:45:24.111001 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zgplx\" (UniqueName: \"kubernetes.io/projected/39d11b3b-9490-41d8-87ad-542cddb9cc6b-kube-api-access-zgplx\") pod \"glance-default-external-api-0\" (UID: \"39d11b3b-9490-41d8-87ad-542cddb9cc6b\") " pod="openstack/glance-default-external-api-0" Nov 22 07:45:24 crc kubenswrapper[4853]: I1122 07:45:24.111120 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39d11b3b-9490-41d8-87ad-542cddb9cc6b-config-data\") pod \"glance-default-external-api-0\" (UID: \"39d11b3b-9490-41d8-87ad-542cddb9cc6b\") " pod="openstack/glance-default-external-api-0" Nov 22 07:45:24 crc kubenswrapper[4853]: I1122 07:45:24.111262 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e0f23fba-f7c9-48db-a522-d225352bae0b-logs\") pod \"glance-default-internal-api-0\" (UID: \"e0f23fba-f7c9-48db-a522-d225352bae0b\") " pod="openstack/glance-default-internal-api-0" Nov 22 07:45:24 crc kubenswrapper[4853]: I1122 07:45:24.116292 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0f23fba-f7c9-48db-a522-d225352bae0b-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"e0f23fba-f7c9-48db-a522-d225352bae0b\") " pod="openstack/glance-default-internal-api-0" Nov 22 07:45:24 crc kubenswrapper[4853]: I1122 07:45:24.119106 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e0f23fba-f7c9-48db-a522-d225352bae0b-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"e0f23fba-f7c9-48db-a522-d225352bae0b\") " pod="openstack/glance-default-internal-api-0" Nov 22 07:45:24 crc kubenswrapper[4853]: I1122 07:45:24.121451 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0f23fba-f7c9-48db-a522-d225352bae0b-config-data\") pod \"glance-default-internal-api-0\" (UID: \"e0f23fba-f7c9-48db-a522-d225352bae0b\") " pod="openstack/glance-default-internal-api-0" Nov 22 07:45:24 crc kubenswrapper[4853]: I1122 07:45:24.129870 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e0f23fba-f7c9-48db-a522-d225352bae0b-scripts\") pod \"glance-default-internal-api-0\" (UID: \"e0f23fba-f7c9-48db-a522-d225352bae0b\") " 
pod="openstack/glance-default-internal-api-0" Nov 22 07:45:24 crc kubenswrapper[4853]: I1122 07:45:24.135145 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zlcrr\" (UniqueName: \"kubernetes.io/projected/e0f23fba-f7c9-48db-a522-d225352bae0b-kube-api-access-zlcrr\") pod \"glance-default-internal-api-0\" (UID: \"e0f23fba-f7c9-48db-a522-d225352bae0b\") " pod="openstack/glance-default-internal-api-0" Nov 22 07:45:24 crc kubenswrapper[4853]: I1122 07:45:24.169000 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"e0f23fba-f7c9-48db-a522-d225352bae0b\") " pod="openstack/glance-default-internal-api-0" Nov 22 07:45:24 crc kubenswrapper[4853]: I1122 07:45:24.212975 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/39d11b3b-9490-41d8-87ad-542cddb9cc6b-scripts\") pod \"glance-default-external-api-0\" (UID: \"39d11b3b-9490-41d8-87ad-542cddb9cc6b\") " pod="openstack/glance-default-external-api-0" Nov 22 07:45:24 crc kubenswrapper[4853]: I1122 07:45:24.213053 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/39d11b3b-9490-41d8-87ad-542cddb9cc6b-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"39d11b3b-9490-41d8-87ad-542cddb9cc6b\") " pod="openstack/glance-default-external-api-0" Nov 22 07:45:24 crc kubenswrapper[4853]: I1122 07:45:24.213103 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39d11b3b-9490-41d8-87ad-542cddb9cc6b-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"39d11b3b-9490-41d8-87ad-542cddb9cc6b\") " pod="openstack/glance-default-external-api-0" Nov 22 07:45:24 crc kubenswrapper[4853]: I1122 07:45:24.213142 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zgplx\" (UniqueName: \"kubernetes.io/projected/39d11b3b-9490-41d8-87ad-542cddb9cc6b-kube-api-access-zgplx\") pod \"glance-default-external-api-0\" (UID: \"39d11b3b-9490-41d8-87ad-542cddb9cc6b\") " pod="openstack/glance-default-external-api-0" Nov 22 07:45:24 crc kubenswrapper[4853]: I1122 07:45:24.213170 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39d11b3b-9490-41d8-87ad-542cddb9cc6b-config-data\") pod \"glance-default-external-api-0\" (UID: \"39d11b3b-9490-41d8-87ad-542cddb9cc6b\") " pod="openstack/glance-default-external-api-0" Nov 22 07:45:24 crc kubenswrapper[4853]: I1122 07:45:24.213221 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/39d11b3b-9490-41d8-87ad-542cddb9cc6b-logs\") pod \"glance-default-external-api-0\" (UID: \"39d11b3b-9490-41d8-87ad-542cddb9cc6b\") " pod="openstack/glance-default-external-api-0" Nov 22 07:45:24 crc kubenswrapper[4853]: I1122 07:45:24.213267 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/39d11b3b-9490-41d8-87ad-542cddb9cc6b-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"39d11b3b-9490-41d8-87ad-542cddb9cc6b\") " pod="openstack/glance-default-external-api-0" Nov 22 07:45:24 crc kubenswrapper[4853]: I1122 
07:45:24.213347 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"39d11b3b-9490-41d8-87ad-542cddb9cc6b\") " pod="openstack/glance-default-external-api-0" Nov 22 07:45:24 crc kubenswrapper[4853]: I1122 07:45:24.213800 4853 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"39d11b3b-9490-41d8-87ad-542cddb9cc6b\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/glance-default-external-api-0" Nov 22 07:45:24 crc kubenswrapper[4853]: I1122 07:45:24.220427 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/39d11b3b-9490-41d8-87ad-542cddb9cc6b-logs\") pod \"glance-default-external-api-0\" (UID: \"39d11b3b-9490-41d8-87ad-542cddb9cc6b\") " pod="openstack/glance-default-external-api-0" Nov 22 07:45:24 crc kubenswrapper[4853]: I1122 07:45:24.221318 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/39d11b3b-9490-41d8-87ad-542cddb9cc6b-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"39d11b3b-9490-41d8-87ad-542cddb9cc6b\") " pod="openstack/glance-default-external-api-0" Nov 22 07:45:24 crc kubenswrapper[4853]: I1122 07:45:24.229251 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39d11b3b-9490-41d8-87ad-542cddb9cc6b-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"39d11b3b-9490-41d8-87ad-542cddb9cc6b\") " pod="openstack/glance-default-external-api-0" Nov 22 07:45:24 crc kubenswrapper[4853]: I1122 07:45:24.230524 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/39d11b3b-9490-41d8-87ad-542cddb9cc6b-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"39d11b3b-9490-41d8-87ad-542cddb9cc6b\") " pod="openstack/glance-default-external-api-0" Nov 22 07:45:24 crc kubenswrapper[4853]: I1122 07:45:24.230644 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/39d11b3b-9490-41d8-87ad-542cddb9cc6b-scripts\") pod \"glance-default-external-api-0\" (UID: \"39d11b3b-9490-41d8-87ad-542cddb9cc6b\") " pod="openstack/glance-default-external-api-0" Nov 22 07:45:24 crc kubenswrapper[4853]: I1122 07:45:24.232070 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39d11b3b-9490-41d8-87ad-542cddb9cc6b-config-data\") pod \"glance-default-external-api-0\" (UID: \"39d11b3b-9490-41d8-87ad-542cddb9cc6b\") " pod="openstack/glance-default-external-api-0" Nov 22 07:45:24 crc kubenswrapper[4853]: I1122 07:45:24.233391 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zgplx\" (UniqueName: \"kubernetes.io/projected/39d11b3b-9490-41d8-87ad-542cddb9cc6b-kube-api-access-zgplx\") pod \"glance-default-external-api-0\" (UID: \"39d11b3b-9490-41d8-87ad-542cddb9cc6b\") " pod="openstack/glance-default-external-api-0" Nov 22 07:45:24 crc kubenswrapper[4853]: I1122 07:45:24.263058 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"39d11b3b-9490-41d8-87ad-542cddb9cc6b\") " pod="openstack/glance-default-external-api-0"
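The mount traffic for the two replacement glance pods mirrors the teardown in reverse, again with a fixed per-volume order: "operationExecutor.VerifyControllerAttachedVolume started", then "operationExecutor.MountVolume started", then "MountVolume.SetUp succeeded"; the local-volume PVs additionally log "MountVolume.MountDevice succeeded" with the resolved device mount path (/mnt/openstack/pv08 and /mnt/openstack/pv05 above), and in this log sandbox creation is attempted only after the last SetUp has succeeded. A sketch that rebuilds the per-volume timeline from the same hypothetical kubelet.log (the regex allows for klog's \" escaping as seen above):

    import re
    from collections import defaultdict

    STEPS = ("VerifyControllerAttachedVolume started",
             "MountVolume.MountDevice succeeded",    # local PVs only
             "MountVolume started",
             "MountVolume.SetUp succeeded")
    vol = re.compile(r'volume \\?"([^"\\]+)\\?"')    # matches volume \"name\" or volume "name"

    timeline = defaultdict(list)                     # volume name -> steps in log order
    with open("kubelet.log") as f:
        for line in f:
            if "glance-default-internal-api-0" not in line:
                continue
            for step in STEPS:
                if step in line:
                    m = vol.search(line)
                    timeline[m.group(1) if m else "?"].append(step)

    for name, steps in sorted(timeline.items()):
        print(name, "->", " | ".join(steps))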
\"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"39d11b3b-9490-41d8-87ad-542cddb9cc6b\") " pod="openstack/glance-default-external-api-0" Nov 22 07:45:24 crc kubenswrapper[4853]: I1122 07:45:24.285512 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 22 07:45:24 crc kubenswrapper[4853]: I1122 07:45:24.337395 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 22 07:45:24 crc kubenswrapper[4853]: I1122 07:45:24.546798 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-qdbdm" event={"ID":"f1598c90-266c-4607-b491-e9927d76469c","Type":"ContainerStarted","Data":"12fd72d7205251492e634a8695f8737b21dea1378b41aed6d23d4c5fedf9533c"} Nov 22 07:45:24 crc kubenswrapper[4853]: I1122 07:45:24.590122 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-qdbdm" podStartSLOduration=4.614879466 podStartE2EDuration="2m11.590102494s" podCreationTimestamp="2025-11-22 07:43:13 +0000 UTC" firstStartedPulling="2025-11-22 07:43:15.913411744 +0000 UTC m=+1994.754034370" lastFinishedPulling="2025-11-22 07:45:22.888634772 +0000 UTC m=+2121.729257398" observedRunningTime="2025-11-22 07:45:24.585056698 +0000 UTC m=+2123.425679334" watchObservedRunningTime="2025-11-22 07:45:24.590102494 +0000 UTC m=+2123.430725120" Nov 22 07:45:24 crc kubenswrapper[4853]: I1122 07:45:24.600997 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-dzsj4" event={"ID":"289fadd4-7721-4d8e-b33e-35606c18eedb","Type":"ContainerStarted","Data":"b468857845241d2a97ac6d4a96ce7db29071c3d8dac09d53fe2f6aa71460f5dd"} Nov 22 07:45:24 crc kubenswrapper[4853]: I1122 07:45:24.660438 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-dzsj4" podStartSLOduration=4.265606019 podStartE2EDuration="2m11.660405631s" podCreationTimestamp="2025-11-22 07:43:13 +0000 UTC" firstStartedPulling="2025-11-22 07:43:15.493801039 +0000 UTC m=+1994.334423665" lastFinishedPulling="2025-11-22 07:45:22.888600651 +0000 UTC m=+2121.729223277" observedRunningTime="2025-11-22 07:45:24.630188862 +0000 UTC m=+2123.470811488" watchObservedRunningTime="2025-11-22 07:45:24.660405631 +0000 UTC m=+2123.501028257" Nov 22 07:45:25 crc kubenswrapper[4853]: I1122 07:45:25.012262 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-57c957c4ff-ndlll" Nov 22 07:45:25 crc kubenswrapper[4853]: I1122 07:45:25.037532 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 22 07:45:25 crc kubenswrapper[4853]: I1122 07:45:25.106645 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-fcfdd6f9f-pjzkf"] Nov 22 07:45:25 crc kubenswrapper[4853]: I1122 07:45:25.107109 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-fcfdd6f9f-pjzkf" podUID="213f3a9e-0f60-423e-90d6-cbb193eadff1" containerName="dnsmasq-dns" containerID="cri-o://a1cb90a840df819706fee51e51ca74e6c012ba179d220834b26254da3d5cbfb8" gracePeriod=10 Nov 22 07:45:25 crc kubenswrapper[4853]: I1122 07:45:25.259366 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 22 07:45:25 crc kubenswrapper[4853]: I1122 07:45:25.387287 4853 util.go:48] "No ready sandbox 
Nov 22 07:45:25 crc kubenswrapper[4853]: I1122 07:45:25.479991 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a62df165-8b5f-48a0-823f-91a3517b8082-config-data\") pod \"a62df165-8b5f-48a0-823f-91a3517b8082\" (UID: \"a62df165-8b5f-48a0-823f-91a3517b8082\") " Nov 22 07:45:25 crc kubenswrapper[4853]: I1122 07:45:25.480100 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lv2hs\" (UniqueName: \"kubernetes.io/projected/a62df165-8b5f-48a0-823f-91a3517b8082-kube-api-access-lv2hs\") pod \"a62df165-8b5f-48a0-823f-91a3517b8082\" (UID: \"a62df165-8b5f-48a0-823f-91a3517b8082\") " Nov 22 07:45:25 crc kubenswrapper[4853]: I1122 07:45:25.480176 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a62df165-8b5f-48a0-823f-91a3517b8082-scripts\") pod \"a62df165-8b5f-48a0-823f-91a3517b8082\" (UID: \"a62df165-8b5f-48a0-823f-91a3517b8082\") " Nov 22 07:45:25 crc kubenswrapper[4853]: I1122 07:45:25.480207 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a62df165-8b5f-48a0-823f-91a3517b8082-credential-keys\") pod \"a62df165-8b5f-48a0-823f-91a3517b8082\" (UID: \"a62df165-8b5f-48a0-823f-91a3517b8082\") " Nov 22 07:45:25 crc kubenswrapper[4853]: I1122 07:45:25.480335 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a62df165-8b5f-48a0-823f-91a3517b8082-combined-ca-bundle\") pod \"a62df165-8b5f-48a0-823f-91a3517b8082\" (UID: \"a62df165-8b5f-48a0-823f-91a3517b8082\") " Nov 22 07:45:25 crc kubenswrapper[4853]: I1122 07:45:25.480490 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a62df165-8b5f-48a0-823f-91a3517b8082-fernet-keys\") pod \"a62df165-8b5f-48a0-823f-91a3517b8082\" (UID: \"a62df165-8b5f-48a0-823f-91a3517b8082\") " Nov 22 07:45:25 crc kubenswrapper[4853]: I1122 07:45:25.493528 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a62df165-8b5f-48a0-823f-91a3517b8082-kube-api-access-lv2hs" (OuterVolumeSpecName: "kube-api-access-lv2hs") pod "a62df165-8b5f-48a0-823f-91a3517b8082" (UID: "a62df165-8b5f-48a0-823f-91a3517b8082"). InnerVolumeSpecName "kube-api-access-lv2hs". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:45:25 crc kubenswrapper[4853]: I1122 07:45:25.493656 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a62df165-8b5f-48a0-823f-91a3517b8082-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "a62df165-8b5f-48a0-823f-91a3517b8082" (UID: "a62df165-8b5f-48a0-823f-91a3517b8082"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:45:25 crc kubenswrapper[4853]: I1122 07:45:25.498975 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a62df165-8b5f-48a0-823f-91a3517b8082-scripts" (OuterVolumeSpecName: "scripts") pod "a62df165-8b5f-48a0-823f-91a3517b8082" (UID: "a62df165-8b5f-48a0-823f-91a3517b8082"). InnerVolumeSpecName "scripts".
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:45:25 crc kubenswrapper[4853]: I1122 07:45:25.499111 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a62df165-8b5f-48a0-823f-91a3517b8082-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "a62df165-8b5f-48a0-823f-91a3517b8082" (UID: "a62df165-8b5f-48a0-823f-91a3517b8082"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:45:25 crc kubenswrapper[4853]: I1122 07:45:25.525071 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a62df165-8b5f-48a0-823f-91a3517b8082-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a62df165-8b5f-48a0-823f-91a3517b8082" (UID: "a62df165-8b5f-48a0-823f-91a3517b8082"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:45:25 crc kubenswrapper[4853]: I1122 07:45:25.527119 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a62df165-8b5f-48a0-823f-91a3517b8082-config-data" (OuterVolumeSpecName: "config-data") pod "a62df165-8b5f-48a0-823f-91a3517b8082" (UID: "a62df165-8b5f-48a0-823f-91a3517b8082"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:45:25 crc kubenswrapper[4853]: I1122 07:45:25.595580 4853 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a62df165-8b5f-48a0-823f-91a3517b8082-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:45:25 crc kubenswrapper[4853]: I1122 07:45:25.595677 4853 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a62df165-8b5f-48a0-823f-91a3517b8082-fernet-keys\") on node \"crc\" DevicePath \"\"" Nov 22 07:45:25 crc kubenswrapper[4853]: I1122 07:45:25.595690 4853 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a62df165-8b5f-48a0-823f-91a3517b8082-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 07:45:25 crc kubenswrapper[4853]: I1122 07:45:25.595703 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lv2hs\" (UniqueName: \"kubernetes.io/projected/a62df165-8b5f-48a0-823f-91a3517b8082-kube-api-access-lv2hs\") on node \"crc\" DevicePath \"\"" Nov 22 07:45:25 crc kubenswrapper[4853]: I1122 07:45:25.595719 4853 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a62df165-8b5f-48a0-823f-91a3517b8082-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 07:45:25 crc kubenswrapper[4853]: I1122 07:45:25.595731 4853 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a62df165-8b5f-48a0-823f-91a3517b8082-credential-keys\") on node \"crc\" DevicePath \"\"" Nov 22 07:45:25 crc kubenswrapper[4853]: I1122 07:45:25.838968 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bbcbf7ce-4706-4c45-9047-387fc3c26c85" path="/var/lib/kubelet/pods/bbcbf7ce-4706-4c45-9047-387fc3c26c85/volumes" Nov 22 07:45:25 crc kubenswrapper[4853]: I1122 07:45:25.839960 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ed74026d-8703-40d1-be6d-f146d8c0a5b0" path="/var/lib/kubelet/pods/ed74026d-8703-40d1-be6d-f146d8c0a5b0/volumes" Nov 22 07:45:25 crc kubenswrapper[4853]: I1122 07:45:25.840942 4853 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"39d11b3b-9490-41d8-87ad-542cddb9cc6b","Type":"ContainerStarted","Data":"7c54ef80b82e20adc18b4a8f2a07debc2cef5f80a2023402f557fccb076bfa46"} Nov 22 07:45:25 crc kubenswrapper[4853]: I1122 07:45:25.840971 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-fb8dfc99b-xcccg"] Nov 22 07:45:25 crc kubenswrapper[4853]: E1122 07:45:25.841412 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a62df165-8b5f-48a0-823f-91a3517b8082" containerName="keystone-bootstrap" Nov 22 07:45:25 crc kubenswrapper[4853]: I1122 07:45:25.841433 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="a62df165-8b5f-48a0-823f-91a3517b8082" containerName="keystone-bootstrap" Nov 22 07:45:25 crc kubenswrapper[4853]: I1122 07:45:25.841680 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="a62df165-8b5f-48a0-823f-91a3517b8082" containerName="keystone-bootstrap" Nov 22 07:45:25 crc kubenswrapper[4853]: I1122 07:45:25.854856 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-fb8dfc99b-xcccg" Nov 22 07:45:25 crc kubenswrapper[4853]: I1122 07:45:25.867503 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Nov 22 07:45:25 crc kubenswrapper[4853]: I1122 07:45:25.867852 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Nov 22 07:45:25 crc kubenswrapper[4853]: I1122 07:45:25.868472 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-mnxvk" Nov 22 07:45:25 crc kubenswrapper[4853]: I1122 07:45:25.868658 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-mnxvk" event={"ID":"a62df165-8b5f-48a0-823f-91a3517b8082","Type":"ContainerDied","Data":"89e57426848bfd2bee5ea88fd9ef7442cd143d1823d06002d4eea66b2deadce0"} Nov 22 07:45:25 crc kubenswrapper[4853]: I1122 07:45:25.868775 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="89e57426848bfd2bee5ea88fd9ef7442cd143d1823d06002d4eea66b2deadce0" Nov 22 07:45:25 crc kubenswrapper[4853]: I1122 07:45:25.895580 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"e0f23fba-f7c9-48db-a522-d225352bae0b","Type":"ContainerStarted","Data":"dd0121dc8b18d87a5833b3fded55cad0bd3c3b1acad232e76320b5bec6181b21"} Nov 22 07:45:25 crc kubenswrapper[4853]: I1122 07:45:25.913654 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-fb8dfc99b-xcccg"] Nov 22 07:45:25 crc kubenswrapper[4853]: I1122 07:45:25.934776 4853 generic.go:334] "Generic (PLEG): container finished" podID="213f3a9e-0f60-423e-90d6-cbb193eadff1" containerID="a1cb90a840df819706fee51e51ca74e6c012ba179d220834b26254da3d5cbfb8" exitCode=0 Nov 22 07:45:25 crc kubenswrapper[4853]: I1122 07:45:25.934834 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fcfdd6f9f-pjzkf" event={"ID":"213f3a9e-0f60-423e-90d6-cbb193eadff1","Type":"ContainerDied","Data":"a1cb90a840df819706fee51e51ca74e6c012ba179d220834b26254da3d5cbfb8"} Nov 22 07:45:25 crc kubenswrapper[4853]: I1122 07:45:25.955639 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/15b318bb-8168-4613-8172-f352705a5de1-scripts\") pod \"keystone-fb8dfc99b-xcccg\" 
(UID: \"15b318bb-8168-4613-8172-f352705a5de1\") " pod="openstack/keystone-fb8dfc99b-xcccg" Nov 22 07:45:25 crc kubenswrapper[4853]: I1122 07:45:25.955715 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/15b318bb-8168-4613-8172-f352705a5de1-credential-keys\") pod \"keystone-fb8dfc99b-xcccg\" (UID: \"15b318bb-8168-4613-8172-f352705a5de1\") " pod="openstack/keystone-fb8dfc99b-xcccg" Nov 22 07:45:25 crc kubenswrapper[4853]: I1122 07:45:25.956140 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/15b318bb-8168-4613-8172-f352705a5de1-fernet-keys\") pod \"keystone-fb8dfc99b-xcccg\" (UID: \"15b318bb-8168-4613-8172-f352705a5de1\") " pod="openstack/keystone-fb8dfc99b-xcccg" Nov 22 07:45:25 crc kubenswrapper[4853]: I1122 07:45:25.956431 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15b318bb-8168-4613-8172-f352705a5de1-config-data\") pod \"keystone-fb8dfc99b-xcccg\" (UID: \"15b318bb-8168-4613-8172-f352705a5de1\") " pod="openstack/keystone-fb8dfc99b-xcccg" Nov 22 07:45:25 crc kubenswrapper[4853]: I1122 07:45:25.956640 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jf6sl\" (UniqueName: \"kubernetes.io/projected/15b318bb-8168-4613-8172-f352705a5de1-kube-api-access-jf6sl\") pod \"keystone-fb8dfc99b-xcccg\" (UID: \"15b318bb-8168-4613-8172-f352705a5de1\") " pod="openstack/keystone-fb8dfc99b-xcccg" Nov 22 07:45:25 crc kubenswrapper[4853]: I1122 07:45:25.956687 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/15b318bb-8168-4613-8172-f352705a5de1-public-tls-certs\") pod \"keystone-fb8dfc99b-xcccg\" (UID: \"15b318bb-8168-4613-8172-f352705a5de1\") " pod="openstack/keystone-fb8dfc99b-xcccg" Nov 22 07:45:25 crc kubenswrapper[4853]: I1122 07:45:25.956798 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15b318bb-8168-4613-8172-f352705a5de1-combined-ca-bundle\") pod \"keystone-fb8dfc99b-xcccg\" (UID: \"15b318bb-8168-4613-8172-f352705a5de1\") " pod="openstack/keystone-fb8dfc99b-xcccg" Nov 22 07:45:25 crc kubenswrapper[4853]: I1122 07:45:25.956949 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/15b318bb-8168-4613-8172-f352705a5de1-internal-tls-certs\") pod \"keystone-fb8dfc99b-xcccg\" (UID: \"15b318bb-8168-4613-8172-f352705a5de1\") " pod="openstack/keystone-fb8dfc99b-xcccg" Nov 22 07:45:26 crc kubenswrapper[4853]: I1122 07:45:26.058981 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jf6sl\" (UniqueName: \"kubernetes.io/projected/15b318bb-8168-4613-8172-f352705a5de1-kube-api-access-jf6sl\") pod \"keystone-fb8dfc99b-xcccg\" (UID: \"15b318bb-8168-4613-8172-f352705a5de1\") " pod="openstack/keystone-fb8dfc99b-xcccg" Nov 22 07:45:26 crc kubenswrapper[4853]: I1122 07:45:26.059054 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/15b318bb-8168-4613-8172-f352705a5de1-public-tls-certs\") pod 
\"keystone-fb8dfc99b-xcccg\" (UID: \"15b318bb-8168-4613-8172-f352705a5de1\") " pod="openstack/keystone-fb8dfc99b-xcccg" Nov 22 07:45:26 crc kubenswrapper[4853]: I1122 07:45:26.059095 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15b318bb-8168-4613-8172-f352705a5de1-combined-ca-bundle\") pod \"keystone-fb8dfc99b-xcccg\" (UID: \"15b318bb-8168-4613-8172-f352705a5de1\") " pod="openstack/keystone-fb8dfc99b-xcccg" Nov 22 07:45:26 crc kubenswrapper[4853]: I1122 07:45:26.059148 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/15b318bb-8168-4613-8172-f352705a5de1-internal-tls-certs\") pod \"keystone-fb8dfc99b-xcccg\" (UID: \"15b318bb-8168-4613-8172-f352705a5de1\") " pod="openstack/keystone-fb8dfc99b-xcccg" Nov 22 07:45:26 crc kubenswrapper[4853]: I1122 07:45:26.059168 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/15b318bb-8168-4613-8172-f352705a5de1-credential-keys\") pod \"keystone-fb8dfc99b-xcccg\" (UID: \"15b318bb-8168-4613-8172-f352705a5de1\") " pod="openstack/keystone-fb8dfc99b-xcccg" Nov 22 07:45:26 crc kubenswrapper[4853]: I1122 07:45:26.059186 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/15b318bb-8168-4613-8172-f352705a5de1-scripts\") pod \"keystone-fb8dfc99b-xcccg\" (UID: \"15b318bb-8168-4613-8172-f352705a5de1\") " pod="openstack/keystone-fb8dfc99b-xcccg" Nov 22 07:45:26 crc kubenswrapper[4853]: I1122 07:45:26.059312 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/15b318bb-8168-4613-8172-f352705a5de1-fernet-keys\") pod \"keystone-fb8dfc99b-xcccg\" (UID: \"15b318bb-8168-4613-8172-f352705a5de1\") " pod="openstack/keystone-fb8dfc99b-xcccg" Nov 22 07:45:26 crc kubenswrapper[4853]: I1122 07:45:26.059348 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15b318bb-8168-4613-8172-f352705a5de1-config-data\") pod \"keystone-fb8dfc99b-xcccg\" (UID: \"15b318bb-8168-4613-8172-f352705a5de1\") " pod="openstack/keystone-fb8dfc99b-xcccg" Nov 22 07:45:26 crc kubenswrapper[4853]: I1122 07:45:26.065336 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15b318bb-8168-4613-8172-f352705a5de1-config-data\") pod \"keystone-fb8dfc99b-xcccg\" (UID: \"15b318bb-8168-4613-8172-f352705a5de1\") " pod="openstack/keystone-fb8dfc99b-xcccg" Nov 22 07:45:26 crc kubenswrapper[4853]: I1122 07:45:26.068032 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15b318bb-8168-4613-8172-f352705a5de1-combined-ca-bundle\") pod \"keystone-fb8dfc99b-xcccg\" (UID: \"15b318bb-8168-4613-8172-f352705a5de1\") " pod="openstack/keystone-fb8dfc99b-xcccg" Nov 22 07:45:26 crc kubenswrapper[4853]: I1122 07:45:26.069658 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/15b318bb-8168-4613-8172-f352705a5de1-fernet-keys\") pod \"keystone-fb8dfc99b-xcccg\" (UID: \"15b318bb-8168-4613-8172-f352705a5de1\") " pod="openstack/keystone-fb8dfc99b-xcccg" Nov 22 07:45:26 crc kubenswrapper[4853]: I1122 07:45:26.070323 4853 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/15b318bb-8168-4613-8172-f352705a5de1-scripts\") pod \"keystone-fb8dfc99b-xcccg\" (UID: \"15b318bb-8168-4613-8172-f352705a5de1\") " pod="openstack/keystone-fb8dfc99b-xcccg" Nov 22 07:45:26 crc kubenswrapper[4853]: I1122 07:45:26.072061 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/15b318bb-8168-4613-8172-f352705a5de1-credential-keys\") pod \"keystone-fb8dfc99b-xcccg\" (UID: \"15b318bb-8168-4613-8172-f352705a5de1\") " pod="openstack/keystone-fb8dfc99b-xcccg" Nov 22 07:45:26 crc kubenswrapper[4853]: I1122 07:45:26.087614 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/15b318bb-8168-4613-8172-f352705a5de1-public-tls-certs\") pod \"keystone-fb8dfc99b-xcccg\" (UID: \"15b318bb-8168-4613-8172-f352705a5de1\") " pod="openstack/keystone-fb8dfc99b-xcccg" Nov 22 07:45:26 crc kubenswrapper[4853]: I1122 07:45:26.088523 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/15b318bb-8168-4613-8172-f352705a5de1-internal-tls-certs\") pod \"keystone-fb8dfc99b-xcccg\" (UID: \"15b318bb-8168-4613-8172-f352705a5de1\") " pod="openstack/keystone-fb8dfc99b-xcccg" Nov 22 07:45:26 crc kubenswrapper[4853]: I1122 07:45:26.092438 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jf6sl\" (UniqueName: \"kubernetes.io/projected/15b318bb-8168-4613-8172-f352705a5de1-kube-api-access-jf6sl\") pod \"keystone-fb8dfc99b-xcccg\" (UID: \"15b318bb-8168-4613-8172-f352705a5de1\") " pod="openstack/keystone-fb8dfc99b-xcccg" Nov 22 07:45:26 crc kubenswrapper[4853]: I1122 07:45:26.218375 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-fb8dfc99b-xcccg" Nov 22 07:45:26 crc kubenswrapper[4853]: I1122 07:45:26.436245 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-fcfdd6f9f-pjzkf" Nov 22 07:45:26 crc kubenswrapper[4853]: I1122 07:45:26.474481 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/213f3a9e-0f60-423e-90d6-cbb193eadff1-config\") pod \"213f3a9e-0f60-423e-90d6-cbb193eadff1\" (UID: \"213f3a9e-0f60-423e-90d6-cbb193eadff1\") " Nov 22 07:45:26 crc kubenswrapper[4853]: I1122 07:45:26.474647 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gcww4\" (UniqueName: \"kubernetes.io/projected/213f3a9e-0f60-423e-90d6-cbb193eadff1-kube-api-access-gcww4\") pod \"213f3a9e-0f60-423e-90d6-cbb193eadff1\" (UID: \"213f3a9e-0f60-423e-90d6-cbb193eadff1\") " Nov 22 07:45:26 crc kubenswrapper[4853]: I1122 07:45:26.474716 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/213f3a9e-0f60-423e-90d6-cbb193eadff1-dns-swift-storage-0\") pod \"213f3a9e-0f60-423e-90d6-cbb193eadff1\" (UID: \"213f3a9e-0f60-423e-90d6-cbb193eadff1\") " Nov 22 07:45:26 crc kubenswrapper[4853]: I1122 07:45:26.475309 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/213f3a9e-0f60-423e-90d6-cbb193eadff1-dns-svc\") pod \"213f3a9e-0f60-423e-90d6-cbb193eadff1\" (UID: \"213f3a9e-0f60-423e-90d6-cbb193eadff1\") " Nov 22 07:45:26 crc kubenswrapper[4853]: I1122 07:45:26.475341 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/213f3a9e-0f60-423e-90d6-cbb193eadff1-ovsdbserver-sb\") pod \"213f3a9e-0f60-423e-90d6-cbb193eadff1\" (UID: \"213f3a9e-0f60-423e-90d6-cbb193eadff1\") " Nov 22 07:45:26 crc kubenswrapper[4853]: I1122 07:45:26.475390 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/213f3a9e-0f60-423e-90d6-cbb193eadff1-ovsdbserver-nb\") pod \"213f3a9e-0f60-423e-90d6-cbb193eadff1\" (UID: \"213f3a9e-0f60-423e-90d6-cbb193eadff1\") " Nov 22 07:45:26 crc kubenswrapper[4853]: I1122 07:45:26.494308 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/213f3a9e-0f60-423e-90d6-cbb193eadff1-kube-api-access-gcww4" (OuterVolumeSpecName: "kube-api-access-gcww4") pod "213f3a9e-0f60-423e-90d6-cbb193eadff1" (UID: "213f3a9e-0f60-423e-90d6-cbb193eadff1"). InnerVolumeSpecName "kube-api-access-gcww4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:45:26 crc kubenswrapper[4853]: I1122 07:45:26.578271 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gcww4\" (UniqueName: \"kubernetes.io/projected/213f3a9e-0f60-423e-90d6-cbb193eadff1-kube-api-access-gcww4\") on node \"crc\" DevicePath \"\"" Nov 22 07:45:26 crc kubenswrapper[4853]: I1122 07:45:26.660895 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/213f3a9e-0f60-423e-90d6-cbb193eadff1-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "213f3a9e-0f60-423e-90d6-cbb193eadff1" (UID: "213f3a9e-0f60-423e-90d6-cbb193eadff1"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:45:26 crc kubenswrapper[4853]: I1122 07:45:26.667394 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/213f3a9e-0f60-423e-90d6-cbb193eadff1-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "213f3a9e-0f60-423e-90d6-cbb193eadff1" (UID: "213f3a9e-0f60-423e-90d6-cbb193eadff1"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:45:26 crc kubenswrapper[4853]: I1122 07:45:26.672539 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/213f3a9e-0f60-423e-90d6-cbb193eadff1-config" (OuterVolumeSpecName: "config") pod "213f3a9e-0f60-423e-90d6-cbb193eadff1" (UID: "213f3a9e-0f60-423e-90d6-cbb193eadff1"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:45:26 crc kubenswrapper[4853]: I1122 07:45:26.678085 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-fb8dfc99b-xcccg"] Nov 22 07:45:26 crc kubenswrapper[4853]: I1122 07:45:26.678366 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/213f3a9e-0f60-423e-90d6-cbb193eadff1-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "213f3a9e-0f60-423e-90d6-cbb193eadff1" (UID: "213f3a9e-0f60-423e-90d6-cbb193eadff1"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:45:26 crc kubenswrapper[4853]: I1122 07:45:26.680254 4853 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/213f3a9e-0f60-423e-90d6-cbb193eadff1-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 22 07:45:26 crc kubenswrapper[4853]: I1122 07:45:26.680275 4853 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/213f3a9e-0f60-423e-90d6-cbb193eadff1-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 22 07:45:26 crc kubenswrapper[4853]: I1122 07:45:26.680285 4853 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/213f3a9e-0f60-423e-90d6-cbb193eadff1-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 22 07:45:26 crc kubenswrapper[4853]: I1122 07:45:26.680294 4853 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/213f3a9e-0f60-423e-90d6-cbb193eadff1-config\") on node \"crc\" DevicePath \"\"" Nov 22 07:45:26 crc kubenswrapper[4853]: W1122 07:45:26.684137 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod15b318bb_8168_4613_8172_f352705a5de1.slice/crio-7045d10a3779085f8a55ac77a5b8a3e1ebaff99d64875091cd7e4e31293f1587 WatchSource:0}: Error finding container 7045d10a3779085f8a55ac77a5b8a3e1ebaff99d64875091cd7e4e31293f1587: Status 404 returned error can't find the container with id 7045d10a3779085f8a55ac77a5b8a3e1ebaff99d64875091cd7e4e31293f1587 Nov 22 07:45:26 crc kubenswrapper[4853]: I1122 07:45:26.732921 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/213f3a9e-0f60-423e-90d6-cbb193eadff1-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "213f3a9e-0f60-423e-90d6-cbb193eadff1" (UID: "213f3a9e-0f60-423e-90d6-cbb193eadff1"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:45:26 crc kubenswrapper[4853]: I1122 07:45:26.782175 4853 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/213f3a9e-0f60-423e-90d6-cbb193eadff1-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 22 07:45:26 crc kubenswrapper[4853]: I1122 07:45:26.949532 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"e0f23fba-f7c9-48db-a522-d225352bae0b","Type":"ContainerStarted","Data":"39be800d9e160d536435953354f6bb5e505e86c01d79fb3d3d39867b398ec4d2"} Nov 22 07:45:26 crc kubenswrapper[4853]: I1122 07:45:26.951255 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-fb8dfc99b-xcccg" event={"ID":"15b318bb-8168-4613-8172-f352705a5de1","Type":"ContainerStarted","Data":"7045d10a3779085f8a55ac77a5b8a3e1ebaff99d64875091cd7e4e31293f1587"} Nov 22 07:45:26 crc kubenswrapper[4853]: I1122 07:45:26.956040 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fcfdd6f9f-pjzkf" event={"ID":"213f3a9e-0f60-423e-90d6-cbb193eadff1","Type":"ContainerDied","Data":"48d69383c9e7e419401b53fe50e14e24882e516e074cc8ed423654913651ebab"} Nov 22 07:45:26 crc kubenswrapper[4853]: I1122 07:45:26.956140 4853 scope.go:117] "RemoveContainer" containerID="a1cb90a840df819706fee51e51ca74e6c012ba179d220834b26254da3d5cbfb8" Nov 22 07:45:26 crc kubenswrapper[4853]: I1122 07:45:26.956452 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-fcfdd6f9f-pjzkf" Nov 22 07:45:27 crc kubenswrapper[4853]: I1122 07:45:27.012350 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-fcfdd6f9f-pjzkf"] Nov 22 07:45:27 crc kubenswrapper[4853]: I1122 07:45:27.013997 4853 scope.go:117] "RemoveContainer" containerID="42f91e7e7d8f07179170490f1069159e786b089a778b0cc6aa690f1a7b731b91" Nov 22 07:45:27 crc kubenswrapper[4853]: I1122 07:45:27.025726 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-fcfdd6f9f-pjzkf"] Nov 22 07:45:27 crc kubenswrapper[4853]: I1122 07:45:27.771279 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="213f3a9e-0f60-423e-90d6-cbb193eadff1" path="/var/lib/kubelet/pods/213f3a9e-0f60-423e-90d6-cbb193eadff1/volumes" Nov 22 07:45:27 crc kubenswrapper[4853]: I1122 07:45:27.975708 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-fb8dfc99b-xcccg" event={"ID":"15b318bb-8168-4613-8172-f352705a5de1","Type":"ContainerStarted","Data":"5c3246dfa7d347dce6b69132736b1fdd02039c4eeeba9a5454f737c829da0e0e"} Nov 22 07:45:27 crc kubenswrapper[4853]: I1122 07:45:27.980124 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"39d11b3b-9490-41d8-87ad-542cddb9cc6b","Type":"ContainerStarted","Data":"2fef05ea5e3d441fe9fb192e15b5ac4bfacf586bae220c5364d42b62e3be6f8f"} Nov 22 07:45:27 crc kubenswrapper[4853]: I1122 07:45:27.980158 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"39d11b3b-9490-41d8-87ad-542cddb9cc6b","Type":"ContainerStarted","Data":"739b31c91720f2ec0951dab78f0a956c3fd5e6b021ba0ebea0f5224904573651"} Nov 22 07:45:27 crc kubenswrapper[4853]: I1122 07:45:27.982086 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"e0f23fba-f7c9-48db-a522-d225352bae0b","Type":"ContainerStarted","Data":"367bdb90dd591ec6cd7977726078f8b9a8655aa6b87bba48685684fede46119f"} Nov 22 07:45:28 crc kubenswrapper[4853]: I1122 07:45:28.004493 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-fb8dfc99b-xcccg" podStartSLOduration=3.004467711 podStartE2EDuration="3.004467711s" podCreationTimestamp="2025-11-22 07:45:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:45:27.994038997 +0000 UTC m=+2126.834661633" watchObservedRunningTime="2025-11-22 07:45:28.004467711 +0000 UTC m=+2126.845090337" Nov 22 07:45:28 crc kubenswrapper[4853]: I1122 07:45:28.029862 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=5.029821208 podStartE2EDuration="5.029821208s" podCreationTimestamp="2025-11-22 07:45:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:45:28.016804756 +0000 UTC m=+2126.857427402" watchObservedRunningTime="2025-11-22 07:45:28.029821208 +0000 UTC m=+2126.870443834" Nov 22 07:45:28 crc kubenswrapper[4853]: I1122 07:45:28.059866 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=5.059834722 podStartE2EDuration="5.059834722s" podCreationTimestamp="2025-11-22 07:45:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:45:28.051378163 +0000 UTC m=+2126.892000789" watchObservedRunningTime="2025-11-22 07:45:28.059834722 +0000 UTC m=+2126.900457348" Nov 22 07:45:28 crc kubenswrapper[4853]: I1122 07:45:28.994732 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-fb8dfc99b-xcccg" Nov 22 07:45:34 crc kubenswrapper[4853]: I1122 07:45:34.285997 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Nov 22 07:45:34 crc kubenswrapper[4853]: I1122 07:45:34.286561 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Nov 22 07:45:34 crc kubenswrapper[4853]: I1122 07:45:34.338660 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Nov 22 07:45:34 crc kubenswrapper[4853]: I1122 07:45:34.338709 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Nov 22 07:45:34 crc kubenswrapper[4853]: I1122 07:45:34.404005 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Nov 22 07:45:34 crc kubenswrapper[4853]: I1122 07:45:34.404571 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Nov 22 07:45:34 crc kubenswrapper[4853]: I1122 07:45:34.404721 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Nov 22 07:45:34 crc kubenswrapper[4853]: I1122 07:45:34.404886 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Nov 22 07:45:35 crc kubenswrapper[4853]: I1122 
07:45:35.071901 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Nov 22 07:45:35 crc kubenswrapper[4853]: I1122 07:45:35.071972 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Nov 22 07:45:35 crc kubenswrapper[4853]: I1122 07:45:35.071986 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Nov 22 07:45:35 crc kubenswrapper[4853]: I1122 07:45:35.071996 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Nov 22 07:45:41 crc kubenswrapper[4853]: E1122 07:45:41.873503 4853 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/ubi9/httpd-24:latest" Nov 22 07:45:41 crc kubenswrapper[4853]: E1122 07:45:41.874815 4853 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:proxy-httpd,Image:registry.redhat.io/ubi9/httpd-24:latest,Command:[/usr/sbin/httpd],Args:[-DFOREGROUND],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:proxy-httpd,HostPort:0,ContainerPort:3000,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf/httpd.conf,SubPath:httpd.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf.d/ssl.conf,SubPath:ssl.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:run-httpd,ReadOnly:false,MountPath:/run/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:log-httpd,ReadOnly:false,MountPath:/var/log/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vl8zr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL 
MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(9f48ed7e-dbb8-4588-9cb7-4f0850757027): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 22 07:45:41 crc kubenswrapper[4853]: E1122 07:45:41.876584 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"proxy-httpd\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"]" pod="openstack/ceilometer-0" podUID="9f48ed7e-dbb8-4588-9cb7-4f0850757027" Nov 22 07:45:42 crc kubenswrapper[4853]: I1122 07:45:42.158393 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9f48ed7e-dbb8-4588-9cb7-4f0850757027" containerName="ceilometer-notification-agent" containerID="cri-o://fba47a045d54f7b183dc408933a4d29f1532d89a46eaae83dd7c2cdb604ffdf3" gracePeriod=30 Nov 22 07:45:42 crc kubenswrapper[4853]: I1122 07:45:42.158441 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9f48ed7e-dbb8-4588-9cb7-4f0850757027" containerName="sg-core" containerID="cri-o://ac52082768e71a2e8b3aa503a0cf09689e662f574b9e55e7aa9f8b6b4e607f34" gracePeriod=30 Nov 22 07:45:43 crc kubenswrapper[4853]: I1122 07:45:43.173401 4853 generic.go:334] "Generic (PLEG): container finished" podID="9f48ed7e-dbb8-4588-9cb7-4f0850757027" containerID="ac52082768e71a2e8b3aa503a0cf09689e662f574b9e55e7aa9f8b6b4e607f34" exitCode=2 Nov 22 07:45:43 crc kubenswrapper[4853]: I1122 07:45:43.173491 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9f48ed7e-dbb8-4588-9cb7-4f0850757027","Type":"ContainerDied","Data":"ac52082768e71a2e8b3aa503a0cf09689e662f574b9e55e7aa9f8b6b4e607f34"} Nov 22 07:45:47 crc kubenswrapper[4853]: I1122 07:45:47.221976 4853 generic.go:334] "Generic (PLEG): container finished" podID="9f48ed7e-dbb8-4588-9cb7-4f0850757027" containerID="fba47a045d54f7b183dc408933a4d29f1532d89a46eaae83dd7c2cdb604ffdf3" exitCode=0 Nov 22 07:45:47 crc kubenswrapper[4853]: I1122 07:45:47.222065 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9f48ed7e-dbb8-4588-9cb7-4f0850757027","Type":"ContainerDied","Data":"fba47a045d54f7b183dc408933a4d29f1532d89a46eaae83dd7c2cdb604ffdf3"} Nov 22 07:45:48 crc kubenswrapper[4853]: I1122 07:45:48.021530 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 22 07:45:48 crc kubenswrapper[4853]: I1122 07:45:48.100664 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9f48ed7e-dbb8-4588-9cb7-4f0850757027-sg-core-conf-yaml\") pod \"9f48ed7e-dbb8-4588-9cb7-4f0850757027\" (UID: \"9f48ed7e-dbb8-4588-9cb7-4f0850757027\") " Nov 22 07:45:48 crc kubenswrapper[4853]: I1122 07:45:48.100739 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f48ed7e-dbb8-4588-9cb7-4f0850757027-config-data\") pod \"9f48ed7e-dbb8-4588-9cb7-4f0850757027\" (UID: \"9f48ed7e-dbb8-4588-9cb7-4f0850757027\") " Nov 22 07:45:48 crc kubenswrapper[4853]: I1122 07:45:48.100924 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9f48ed7e-dbb8-4588-9cb7-4f0850757027-run-httpd\") pod \"9f48ed7e-dbb8-4588-9cb7-4f0850757027\" (UID: \"9f48ed7e-dbb8-4588-9cb7-4f0850757027\") " Nov 22 07:45:48 crc kubenswrapper[4853]: I1122 07:45:48.100950 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f48ed7e-dbb8-4588-9cb7-4f0850757027-combined-ca-bundle\") pod \"9f48ed7e-dbb8-4588-9cb7-4f0850757027\" (UID: \"9f48ed7e-dbb8-4588-9cb7-4f0850757027\") " Nov 22 07:45:48 crc kubenswrapper[4853]: I1122 07:45:48.100996 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9f48ed7e-dbb8-4588-9cb7-4f0850757027-scripts\") pod \"9f48ed7e-dbb8-4588-9cb7-4f0850757027\" (UID: \"9f48ed7e-dbb8-4588-9cb7-4f0850757027\") " Nov 22 07:45:48 crc kubenswrapper[4853]: I1122 07:45:48.101239 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9f48ed7e-dbb8-4588-9cb7-4f0850757027-log-httpd\") pod \"9f48ed7e-dbb8-4588-9cb7-4f0850757027\" (UID: \"9f48ed7e-dbb8-4588-9cb7-4f0850757027\") " Nov 22 07:45:48 crc kubenswrapper[4853]: I1122 07:45:48.101286 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vl8zr\" (UniqueName: \"kubernetes.io/projected/9f48ed7e-dbb8-4588-9cb7-4f0850757027-kube-api-access-vl8zr\") pod \"9f48ed7e-dbb8-4588-9cb7-4f0850757027\" (UID: \"9f48ed7e-dbb8-4588-9cb7-4f0850757027\") " Nov 22 07:45:48 crc kubenswrapper[4853]: I1122 07:45:48.101535 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9f48ed7e-dbb8-4588-9cb7-4f0850757027-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "9f48ed7e-dbb8-4588-9cb7-4f0850757027" (UID: "9f48ed7e-dbb8-4588-9cb7-4f0850757027"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:45:48 crc kubenswrapper[4853]: I1122 07:45:48.101653 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9f48ed7e-dbb8-4588-9cb7-4f0850757027-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "9f48ed7e-dbb8-4588-9cb7-4f0850757027" (UID: "9f48ed7e-dbb8-4588-9cb7-4f0850757027"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:45:48 crc kubenswrapper[4853]: I1122 07:45:48.102102 4853 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9f48ed7e-dbb8-4588-9cb7-4f0850757027-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 22 07:45:48 crc kubenswrapper[4853]: I1122 07:45:48.102122 4853 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9f48ed7e-dbb8-4588-9cb7-4f0850757027-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 22 07:45:48 crc kubenswrapper[4853]: I1122 07:45:48.108208 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f48ed7e-dbb8-4588-9cb7-4f0850757027-scripts" (OuterVolumeSpecName: "scripts") pod "9f48ed7e-dbb8-4588-9cb7-4f0850757027" (UID: "9f48ed7e-dbb8-4588-9cb7-4f0850757027"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:45:48 crc kubenswrapper[4853]: I1122 07:45:48.117235 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f48ed7e-dbb8-4588-9cb7-4f0850757027-kube-api-access-vl8zr" (OuterVolumeSpecName: "kube-api-access-vl8zr") pod "9f48ed7e-dbb8-4588-9cb7-4f0850757027" (UID: "9f48ed7e-dbb8-4588-9cb7-4f0850757027"). InnerVolumeSpecName "kube-api-access-vl8zr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:45:48 crc kubenswrapper[4853]: I1122 07:45:48.145208 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f48ed7e-dbb8-4588-9cb7-4f0850757027-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9f48ed7e-dbb8-4588-9cb7-4f0850757027" (UID: "9f48ed7e-dbb8-4588-9cb7-4f0850757027"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:45:48 crc kubenswrapper[4853]: I1122 07:45:48.155016 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f48ed7e-dbb8-4588-9cb7-4f0850757027-config-data" (OuterVolumeSpecName: "config-data") pod "9f48ed7e-dbb8-4588-9cb7-4f0850757027" (UID: "9f48ed7e-dbb8-4588-9cb7-4f0850757027"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:45:48 crc kubenswrapper[4853]: I1122 07:45:48.204795 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vl8zr\" (UniqueName: \"kubernetes.io/projected/9f48ed7e-dbb8-4588-9cb7-4f0850757027-kube-api-access-vl8zr\") on node \"crc\" DevicePath \"\"" Nov 22 07:45:48 crc kubenswrapper[4853]: I1122 07:45:48.204840 4853 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f48ed7e-dbb8-4588-9cb7-4f0850757027-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 07:45:48 crc kubenswrapper[4853]: I1122 07:45:48.204851 4853 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f48ed7e-dbb8-4588-9cb7-4f0850757027-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:45:48 crc kubenswrapper[4853]: I1122 07:45:48.204861 4853 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9f48ed7e-dbb8-4588-9cb7-4f0850757027-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 07:45:48 crc kubenswrapper[4853]: I1122 07:45:48.249977 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9f48ed7e-dbb8-4588-9cb7-4f0850757027","Type":"ContainerDied","Data":"60e08ffb22d6eab2ab75226408cb6488af7c0930422a785dfa02b1e19b6860a0"} Nov 22 07:45:48 crc kubenswrapper[4853]: I1122 07:45:48.250074 4853 scope.go:117] "RemoveContainer" containerID="ac52082768e71a2e8b3aa503a0cf09689e662f574b9e55e7aa9f8b6b4e607f34" Nov 22 07:45:48 crc kubenswrapper[4853]: I1122 07:45:48.250185 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 22 07:45:48 crc kubenswrapper[4853]: I1122 07:45:48.344828 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f48ed7e-dbb8-4588-9cb7-4f0850757027-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "9f48ed7e-dbb8-4588-9cb7-4f0850757027" (UID: "9f48ed7e-dbb8-4588-9cb7-4f0850757027"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:45:48 crc kubenswrapper[4853]: I1122 07:45:48.412255 4853 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9f48ed7e-dbb8-4588-9cb7-4f0850757027-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 22 07:45:48 crc kubenswrapper[4853]: I1122 07:45:48.494383 4853 scope.go:117] "RemoveContainer" containerID="fba47a045d54f7b183dc408933a4d29f1532d89a46eaae83dd7c2cdb604ffdf3" Nov 22 07:45:48 crc kubenswrapper[4853]: I1122 07:45:48.626027 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 22 07:45:48 crc kubenswrapper[4853]: I1122 07:45:48.653148 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 22 07:45:48 crc kubenswrapper[4853]: I1122 07:45:48.667347 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 22 07:45:48 crc kubenswrapper[4853]: E1122 07:45:48.668108 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f48ed7e-dbb8-4588-9cb7-4f0850757027" containerName="sg-core" Nov 22 07:45:48 crc kubenswrapper[4853]: I1122 07:45:48.668137 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f48ed7e-dbb8-4588-9cb7-4f0850757027" containerName="sg-core" Nov 22 07:45:48 crc kubenswrapper[4853]: E1122 07:45:48.668158 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f48ed7e-dbb8-4588-9cb7-4f0850757027" containerName="ceilometer-notification-agent" Nov 22 07:45:48 crc kubenswrapper[4853]: I1122 07:45:48.668167 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f48ed7e-dbb8-4588-9cb7-4f0850757027" containerName="ceilometer-notification-agent" Nov 22 07:45:48 crc kubenswrapper[4853]: E1122 07:45:48.668203 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="213f3a9e-0f60-423e-90d6-cbb193eadff1" containerName="init" Nov 22 07:45:48 crc kubenswrapper[4853]: I1122 07:45:48.668212 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="213f3a9e-0f60-423e-90d6-cbb193eadff1" containerName="init" Nov 22 07:45:48 crc kubenswrapper[4853]: E1122 07:45:48.668246 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="213f3a9e-0f60-423e-90d6-cbb193eadff1" containerName="dnsmasq-dns" Nov 22 07:45:48 crc kubenswrapper[4853]: I1122 07:45:48.668253 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="213f3a9e-0f60-423e-90d6-cbb193eadff1" containerName="dnsmasq-dns" Nov 22 07:45:48 crc kubenswrapper[4853]: I1122 07:45:48.668476 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f48ed7e-dbb8-4588-9cb7-4f0850757027" containerName="ceilometer-notification-agent" Nov 22 07:45:48 crc kubenswrapper[4853]: I1122 07:45:48.668503 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f48ed7e-dbb8-4588-9cb7-4f0850757027" containerName="sg-core" Nov 22 07:45:48 crc kubenswrapper[4853]: I1122 07:45:48.668516 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="213f3a9e-0f60-423e-90d6-cbb193eadff1" containerName="dnsmasq-dns" Nov 22 07:45:48 crc kubenswrapper[4853]: I1122 07:45:48.670830 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 22 07:45:48 crc kubenswrapper[4853]: I1122 07:45:48.677210 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 22 07:45:48 crc kubenswrapper[4853]: I1122 07:45:48.677352 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 22 07:45:48 crc kubenswrapper[4853]: I1122 07:45:48.688444 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 22 07:45:48 crc kubenswrapper[4853]: I1122 07:45:48.725391 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/adb9d004-7149-44b2-8f2b-ee6da0680491-config-data\") pod \"ceilometer-0\" (UID: \"adb9d004-7149-44b2-8f2b-ee6da0680491\") " pod="openstack/ceilometer-0" Nov 22 07:45:48 crc kubenswrapper[4853]: I1122 07:45:48.725483 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/adb9d004-7149-44b2-8f2b-ee6da0680491-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"adb9d004-7149-44b2-8f2b-ee6da0680491\") " pod="openstack/ceilometer-0" Nov 22 07:45:48 crc kubenswrapper[4853]: I1122 07:45:48.725573 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/adb9d004-7149-44b2-8f2b-ee6da0680491-scripts\") pod \"ceilometer-0\" (UID: \"adb9d004-7149-44b2-8f2b-ee6da0680491\") " pod="openstack/ceilometer-0" Nov 22 07:45:48 crc kubenswrapper[4853]: I1122 07:45:48.725592 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/adb9d004-7149-44b2-8f2b-ee6da0680491-run-httpd\") pod \"ceilometer-0\" (UID: \"adb9d004-7149-44b2-8f2b-ee6da0680491\") " pod="openstack/ceilometer-0" Nov 22 07:45:48 crc kubenswrapper[4853]: I1122 07:45:48.725692 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tlh75\" (UniqueName: \"kubernetes.io/projected/adb9d004-7149-44b2-8f2b-ee6da0680491-kube-api-access-tlh75\") pod \"ceilometer-0\" (UID: \"adb9d004-7149-44b2-8f2b-ee6da0680491\") " pod="openstack/ceilometer-0" Nov 22 07:45:48 crc kubenswrapper[4853]: I1122 07:45:48.725708 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/adb9d004-7149-44b2-8f2b-ee6da0680491-log-httpd\") pod \"ceilometer-0\" (UID: \"adb9d004-7149-44b2-8f2b-ee6da0680491\") " pod="openstack/ceilometer-0" Nov 22 07:45:48 crc kubenswrapper[4853]: I1122 07:45:48.725770 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/adb9d004-7149-44b2-8f2b-ee6da0680491-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"adb9d004-7149-44b2-8f2b-ee6da0680491\") " pod="openstack/ceilometer-0" Nov 22 07:45:48 crc kubenswrapper[4853]: I1122 07:45:48.828214 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/adb9d004-7149-44b2-8f2b-ee6da0680491-config-data\") pod \"ceilometer-0\" (UID: \"adb9d004-7149-44b2-8f2b-ee6da0680491\") " pod="openstack/ceilometer-0" Nov 22 07:45:48 crc kubenswrapper[4853]: I1122 07:45:48.828474 
4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/adb9d004-7149-44b2-8f2b-ee6da0680491-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"adb9d004-7149-44b2-8f2b-ee6da0680491\") " pod="openstack/ceilometer-0" Nov 22 07:45:48 crc kubenswrapper[4853]: I1122 07:45:48.828592 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/adb9d004-7149-44b2-8f2b-ee6da0680491-scripts\") pod \"ceilometer-0\" (UID: \"adb9d004-7149-44b2-8f2b-ee6da0680491\") " pod="openstack/ceilometer-0" Nov 22 07:45:48 crc kubenswrapper[4853]: I1122 07:45:48.828620 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/adb9d004-7149-44b2-8f2b-ee6da0680491-run-httpd\") pod \"ceilometer-0\" (UID: \"adb9d004-7149-44b2-8f2b-ee6da0680491\") " pod="openstack/ceilometer-0" Nov 22 07:45:48 crc kubenswrapper[4853]: I1122 07:45:48.828721 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tlh75\" (UniqueName: \"kubernetes.io/projected/adb9d004-7149-44b2-8f2b-ee6da0680491-kube-api-access-tlh75\") pod \"ceilometer-0\" (UID: \"adb9d004-7149-44b2-8f2b-ee6da0680491\") " pod="openstack/ceilometer-0" Nov 22 07:45:48 crc kubenswrapper[4853]: I1122 07:45:48.828795 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/adb9d004-7149-44b2-8f2b-ee6da0680491-log-httpd\") pod \"ceilometer-0\" (UID: \"adb9d004-7149-44b2-8f2b-ee6da0680491\") " pod="openstack/ceilometer-0" Nov 22 07:45:48 crc kubenswrapper[4853]: I1122 07:45:48.828913 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/adb9d004-7149-44b2-8f2b-ee6da0680491-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"adb9d004-7149-44b2-8f2b-ee6da0680491\") " pod="openstack/ceilometer-0" Nov 22 07:45:48 crc kubenswrapper[4853]: I1122 07:45:48.831626 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/adb9d004-7149-44b2-8f2b-ee6da0680491-run-httpd\") pod \"ceilometer-0\" (UID: \"adb9d004-7149-44b2-8f2b-ee6da0680491\") " pod="openstack/ceilometer-0" Nov 22 07:45:48 crc kubenswrapper[4853]: I1122 07:45:48.831653 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/adb9d004-7149-44b2-8f2b-ee6da0680491-log-httpd\") pod \"ceilometer-0\" (UID: \"adb9d004-7149-44b2-8f2b-ee6da0680491\") " pod="openstack/ceilometer-0" Nov 22 07:45:48 crc kubenswrapper[4853]: I1122 07:45:48.836700 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/adb9d004-7149-44b2-8f2b-ee6da0680491-config-data\") pod \"ceilometer-0\" (UID: \"adb9d004-7149-44b2-8f2b-ee6da0680491\") " pod="openstack/ceilometer-0" Nov 22 07:45:48 crc kubenswrapper[4853]: I1122 07:45:48.837403 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/adb9d004-7149-44b2-8f2b-ee6da0680491-scripts\") pod \"ceilometer-0\" (UID: \"adb9d004-7149-44b2-8f2b-ee6da0680491\") " pod="openstack/ceilometer-0" Nov 22 07:45:48 crc kubenswrapper[4853]: I1122 07:45:48.841622 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/adb9d004-7149-44b2-8f2b-ee6da0680491-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"adb9d004-7149-44b2-8f2b-ee6da0680491\") " pod="openstack/ceilometer-0" Nov 22 07:45:48 crc kubenswrapper[4853]: I1122 07:45:48.846897 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/adb9d004-7149-44b2-8f2b-ee6da0680491-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"adb9d004-7149-44b2-8f2b-ee6da0680491\") " pod="openstack/ceilometer-0" Nov 22 07:45:48 crc kubenswrapper[4853]: I1122 07:45:48.857907 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tlh75\" (UniqueName: \"kubernetes.io/projected/adb9d004-7149-44b2-8f2b-ee6da0680491-kube-api-access-tlh75\") pod \"ceilometer-0\" (UID: \"adb9d004-7149-44b2-8f2b-ee6da0680491\") " pod="openstack/ceilometer-0" Nov 22 07:45:48 crc kubenswrapper[4853]: I1122 07:45:48.996505 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 22 07:45:49 crc kubenswrapper[4853]: I1122 07:45:49.219917 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Nov 22 07:45:49 crc kubenswrapper[4853]: I1122 07:45:49.220117 4853 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 22 07:45:49 crc kubenswrapper[4853]: I1122 07:45:49.254606 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Nov 22 07:45:49 crc kubenswrapper[4853]: I1122 07:45:49.577313 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Nov 22 07:45:49 crc kubenswrapper[4853]: I1122 07:45:49.578199 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Nov 22 07:45:49 crc kubenswrapper[4853]: I1122 07:45:49.776923 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f48ed7e-dbb8-4588-9cb7-4f0850757027" path="/var/lib/kubelet/pods/9f48ed7e-dbb8-4588-9cb7-4f0850757027/volumes" Nov 22 07:45:49 crc kubenswrapper[4853]: I1122 07:45:49.812810 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 22 07:45:49 crc kubenswrapper[4853]: I1122 07:45:49.814822 4853 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 22 07:45:50 crc kubenswrapper[4853]: I1122 07:45:50.307639 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"adb9d004-7149-44b2-8f2b-ee6da0680491","Type":"ContainerStarted","Data":"28c4544f0e3fa0a25a069313d250de042394e86752de696f9c517bebef25d364"} Nov 22 07:45:51 crc kubenswrapper[4853]: I1122 07:45:51.321948 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"adb9d004-7149-44b2-8f2b-ee6da0680491","Type":"ContainerStarted","Data":"3729a3524a625f8fa705d3f68685bb4896e992c7de961de6594920469a6eeeb1"} Nov 22 07:45:54 crc kubenswrapper[4853]: I1122 07:45:54.360937 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"adb9d004-7149-44b2-8f2b-ee6da0680491","Type":"ContainerStarted","Data":"79a2f3d7811a6a8536b8346f47094e116a287bfbe16db2a9eecd6d58c902c893"} Nov 22 07:45:57 crc kubenswrapper[4853]: I1122 07:45:57.398881 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ceilometer-0" event={"ID":"adb9d004-7149-44b2-8f2b-ee6da0680491","Type":"ContainerStarted","Data":"93380c932415821aba0b0a700e4a87b32f5c0d08e60d4a5106dd097d5cf430ce"} Nov 22 07:46:00 crc kubenswrapper[4853]: I1122 07:46:00.435971 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"adb9d004-7149-44b2-8f2b-ee6da0680491","Type":"ContainerStarted","Data":"85e37ed7e0206ce97496ea3d7d54785d358e8755a0ba01ff41c2f62d860943ec"} Nov 22 07:46:00 crc kubenswrapper[4853]: I1122 07:46:00.437194 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 22 07:46:00 crc kubenswrapper[4853]: I1122 07:46:00.493309 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.609031357 podStartE2EDuration="12.493269907s" podCreationTimestamp="2025-11-22 07:45:48 +0000 UTC" firstStartedPulling="2025-11-22 07:45:49.814442188 +0000 UTC m=+2148.655064814" lastFinishedPulling="2025-11-22 07:45:59.698680738 +0000 UTC m=+2158.539303364" observedRunningTime="2025-11-22 07:46:00.461094553 +0000 UTC m=+2159.301717179" watchObservedRunningTime="2025-11-22 07:46:00.493269907 +0000 UTC m=+2159.333892543" Nov 22 07:46:02 crc kubenswrapper[4853]: I1122 07:46:02.439535 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-fb8dfc99b-xcccg" Nov 22 07:46:03 crc kubenswrapper[4853]: I1122 07:46:03.716253 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Nov 22 07:46:03 crc kubenswrapper[4853]: I1122 07:46:03.718327 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Nov 22 07:46:03 crc kubenswrapper[4853]: I1122 07:46:03.726330 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Nov 22 07:46:03 crc kubenswrapper[4853]: I1122 07:46:03.726370 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Nov 22 07:46:03 crc kubenswrapper[4853]: I1122 07:46:03.726503 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-d9dgj" Nov 22 07:46:03 crc kubenswrapper[4853]: I1122 07:46:03.743521 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Nov 22 07:46:03 crc kubenswrapper[4853]: I1122 07:46:03.850070 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/fa95ca8f-6cef-4cbc-bd08-f693a09770dc-openstack-config-secret\") pod \"openstackclient\" (UID: \"fa95ca8f-6cef-4cbc-bd08-f693a09770dc\") " pod="openstack/openstackclient" Nov 22 07:46:03 crc kubenswrapper[4853]: I1122 07:46:03.850549 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5hhm5\" (UniqueName: \"kubernetes.io/projected/fa95ca8f-6cef-4cbc-bd08-f693a09770dc-kube-api-access-5hhm5\") pod \"openstackclient\" (UID: \"fa95ca8f-6cef-4cbc-bd08-f693a09770dc\") " pod="openstack/openstackclient" Nov 22 07:46:03 crc kubenswrapper[4853]: I1122 07:46:03.850786 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/fa95ca8f-6cef-4cbc-bd08-f693a09770dc-openstack-config\") pod \"openstackclient\" (UID: 
\"fa95ca8f-6cef-4cbc-bd08-f693a09770dc\") " pod="openstack/openstackclient" Nov 22 07:46:03 crc kubenswrapper[4853]: I1122 07:46:03.850823 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa95ca8f-6cef-4cbc-bd08-f693a09770dc-combined-ca-bundle\") pod \"openstackclient\" (UID: \"fa95ca8f-6cef-4cbc-bd08-f693a09770dc\") " pod="openstack/openstackclient" Nov 22 07:46:03 crc kubenswrapper[4853]: I1122 07:46:03.953153 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5hhm5\" (UniqueName: \"kubernetes.io/projected/fa95ca8f-6cef-4cbc-bd08-f693a09770dc-kube-api-access-5hhm5\") pod \"openstackclient\" (UID: \"fa95ca8f-6cef-4cbc-bd08-f693a09770dc\") " pod="openstack/openstackclient" Nov 22 07:46:03 crc kubenswrapper[4853]: I1122 07:46:03.953272 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/fa95ca8f-6cef-4cbc-bd08-f693a09770dc-openstack-config\") pod \"openstackclient\" (UID: \"fa95ca8f-6cef-4cbc-bd08-f693a09770dc\") " pod="openstack/openstackclient" Nov 22 07:46:03 crc kubenswrapper[4853]: I1122 07:46:03.953299 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa95ca8f-6cef-4cbc-bd08-f693a09770dc-combined-ca-bundle\") pod \"openstackclient\" (UID: \"fa95ca8f-6cef-4cbc-bd08-f693a09770dc\") " pod="openstack/openstackclient" Nov 22 07:46:03 crc kubenswrapper[4853]: I1122 07:46:03.953367 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/fa95ca8f-6cef-4cbc-bd08-f693a09770dc-openstack-config-secret\") pod \"openstackclient\" (UID: \"fa95ca8f-6cef-4cbc-bd08-f693a09770dc\") " pod="openstack/openstackclient" Nov 22 07:46:03 crc kubenswrapper[4853]: I1122 07:46:03.955272 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/fa95ca8f-6cef-4cbc-bd08-f693a09770dc-openstack-config\") pod \"openstackclient\" (UID: \"fa95ca8f-6cef-4cbc-bd08-f693a09770dc\") " pod="openstack/openstackclient" Nov 22 07:46:04 crc kubenswrapper[4853]: I1122 07:46:04.028943 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa95ca8f-6cef-4cbc-bd08-f693a09770dc-combined-ca-bundle\") pod \"openstackclient\" (UID: \"fa95ca8f-6cef-4cbc-bd08-f693a09770dc\") " pod="openstack/openstackclient" Nov 22 07:46:04 crc kubenswrapper[4853]: I1122 07:46:04.030907 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5hhm5\" (UniqueName: \"kubernetes.io/projected/fa95ca8f-6cef-4cbc-bd08-f693a09770dc-kube-api-access-5hhm5\") pod \"openstackclient\" (UID: \"fa95ca8f-6cef-4cbc-bd08-f693a09770dc\") " pod="openstack/openstackclient" Nov 22 07:46:04 crc kubenswrapper[4853]: I1122 07:46:04.031733 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/fa95ca8f-6cef-4cbc-bd08-f693a09770dc-openstack-config-secret\") pod \"openstackclient\" (UID: \"fa95ca8f-6cef-4cbc-bd08-f693a09770dc\") " pod="openstack/openstackclient" Nov 22 07:46:04 crc kubenswrapper[4853]: I1122 07:46:04.047216 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Nov 22 07:46:06 crc kubenswrapper[4853]: I1122 07:46:06.728031 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Nov 22 07:46:06 crc kubenswrapper[4853]: W1122 07:46:06.735020 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfa95ca8f_6cef_4cbc_bd08_f693a09770dc.slice/crio-290a4ddf4b96cde9e49313b2772236605ef9277e6f98b8f26ee30bcf241a7157 WatchSource:0}: Error finding container 290a4ddf4b96cde9e49313b2772236605ef9277e6f98b8f26ee30bcf241a7157: Status 404 returned error can't find the container with id 290a4ddf4b96cde9e49313b2772236605ef9277e6f98b8f26ee30bcf241a7157 Nov 22 07:46:07 crc kubenswrapper[4853]: I1122 07:46:07.517898 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"fa95ca8f-6cef-4cbc-bd08-f693a09770dc","Type":"ContainerStarted","Data":"290a4ddf4b96cde9e49313b2772236605ef9277e6f98b8f26ee30bcf241a7157"} Nov 22 07:46:19 crc kubenswrapper[4853]: I1122 07:46:19.010164 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Nov 22 07:46:22 crc kubenswrapper[4853]: E1122 07:46:22.870173 4853 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified" Nov 22 07:46:22 crc kubenswrapper[4853]: E1122 07:46:22.871477 4853 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:openstackclient,Image:quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified,Command:[/bin/sleep],Args:[infinity],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5b8h546h99hffh555h5d6h5dfh64dh66bh5ddh7ch55dh58ch649hb5h4hd9h78h87h56hdch96h659hb4h658h6fh85h699h669h67ch679h55cq,ValueFrom:nil,},EnvVar{Name:OS_CLOUD,Value:default,ValueFrom:nil,},EnvVar{Name:PROMETHEUS_CA_CERT,Value:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,ValueFrom:nil,},EnvVar{Name:PROMETHEUS_HOST,Value:metric-storage-prometheus.openstack.svc,ValueFrom:nil,},EnvVar{Name:PROMETHEUS_PORT,Value:9090,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:openstack-config,ReadOnly:false,MountPath:/home/cloud-admin/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/home/cloud-admin/.config/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/home/cloud-admin/cloudrc,SubPath:cloudrc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5hhm5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL 
MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42401,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42401,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstackclient_openstack(fa95ca8f-6cef-4cbc-bd08-f693a09770dc): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 22 07:46:22 crc kubenswrapper[4853]: E1122 07:46:22.872724 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstackclient\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstackclient" podUID="fa95ca8f-6cef-4cbc-bd08-f693a09770dc" Nov 22 07:46:23 crc kubenswrapper[4853]: I1122 07:46:23.062327 4853 scope.go:117] "RemoveContainer" containerID="3a950439bcaa64345b6de77d8957b914c37655297cd2e5c8f9d29e7dbc2896c4" Nov 22 07:46:23 crc kubenswrapper[4853]: E1122 07:46:23.776256 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstackclient\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified\\\"\"" pod="openstack/openstackclient" podUID="fa95ca8f-6cef-4cbc-bd08-f693a09770dc"
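Both ErrImagePull stacks in this capture (ceilometer-0 at 07:45:41, openstackclient at 07:46:22) bottom out in the stock Go context error "context canceled": the CRI pull was aborted mid-copy, kubelet logged the full Container spec as an unhandled error, and the pod worker then moved the container to ImagePullBackOff. A stdlib-only sketch of that failure shape; pullImage is a hypothetical stand-in, not the CRI call:

package main

import (
	"context"
	"errors"
	"fmt"
)

// pullImage stands in for a long-running image pull that honours cancellation.
func pullImage(ctx context.Context) error {
	<-ctx.Done() // block until the caller gives up
	return fmt.Errorf("copying config: %w", ctx.Err())
}

func main() {
	ctx, cancel := context.WithCancel(context.Background())
	cancel() // cancellation arrives while the pull is still in flight
	err := pullImage(ctx)
	fmt.Println(err)                              // copying config: context canceled
	fmt.Println(errors.Is(err, context.Canceled)) // true
}

Nov 22 07:46:28 crc kubenswrapper[4853]: I1122 07:46:28.661912 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-jrg7l"] Nov 22 07:46:28 crc kubenswrapper[4853]: I1122 07:46:28.666279 4853 util.go:30] "No sandbox for pod can be found.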
Nov 22 07:46:28 crc kubenswrapper[4853]: I1122 07:46:28.666279 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jrg7l"
Nov 22 07:46:28 crc kubenswrapper[4853]: I1122 07:46:28.677378 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jrg7l"]
Nov 22 07:46:28 crc kubenswrapper[4853]: I1122 07:46:28.697160 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff697a20-c1a6-486d-8e8e-a03902c30e6b-catalog-content\") pod \"redhat-operators-jrg7l\" (UID: \"ff697a20-c1a6-486d-8e8e-a03902c30e6b\") " pod="openshift-marketplace/redhat-operators-jrg7l"
Nov 22 07:46:28 crc kubenswrapper[4853]: I1122 07:46:28.697252 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rtccr\" (UniqueName: \"kubernetes.io/projected/ff697a20-c1a6-486d-8e8e-a03902c30e6b-kube-api-access-rtccr\") pod \"redhat-operators-jrg7l\" (UID: \"ff697a20-c1a6-486d-8e8e-a03902c30e6b\") " pod="openshift-marketplace/redhat-operators-jrg7l"
Nov 22 07:46:28 crc kubenswrapper[4853]: I1122 07:46:28.697303 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff697a20-c1a6-486d-8e8e-a03902c30e6b-utilities\") pod \"redhat-operators-jrg7l\" (UID: \"ff697a20-c1a6-486d-8e8e-a03902c30e6b\") " pod="openshift-marketplace/redhat-operators-jrg7l"
Nov 22 07:46:28 crc kubenswrapper[4853]: I1122 07:46:28.799508 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rtccr\" (UniqueName: \"kubernetes.io/projected/ff697a20-c1a6-486d-8e8e-a03902c30e6b-kube-api-access-rtccr\") pod \"redhat-operators-jrg7l\" (UID: \"ff697a20-c1a6-486d-8e8e-a03902c30e6b\") " pod="openshift-marketplace/redhat-operators-jrg7l"
Nov 22 07:46:28 crc kubenswrapper[4853]: I1122 07:46:28.799609 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff697a20-c1a6-486d-8e8e-a03902c30e6b-utilities\") pod \"redhat-operators-jrg7l\" (UID: \"ff697a20-c1a6-486d-8e8e-a03902c30e6b\") " pod="openshift-marketplace/redhat-operators-jrg7l"
Nov 22 07:46:28 crc kubenswrapper[4853]: I1122 07:46:28.799842 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff697a20-c1a6-486d-8e8e-a03902c30e6b-catalog-content\") pod \"redhat-operators-jrg7l\" (UID: \"ff697a20-c1a6-486d-8e8e-a03902c30e6b\") " pod="openshift-marketplace/redhat-operators-jrg7l"
Nov 22 07:46:28 crc kubenswrapper[4853]: I1122 07:46:28.800506 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff697a20-c1a6-486d-8e8e-a03902c30e6b-catalog-content\") pod \"redhat-operators-jrg7l\" (UID: \"ff697a20-c1a6-486d-8e8e-a03902c30e6b\") " pod="openshift-marketplace/redhat-operators-jrg7l"
Nov 22 07:46:28 crc kubenswrapper[4853]: I1122 07:46:28.802230 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff697a20-c1a6-486d-8e8e-a03902c30e6b-utilities\") pod \"redhat-operators-jrg7l\" (UID: \"ff697a20-c1a6-486d-8e8e-a03902c30e6b\") " pod="openshift-marketplace/redhat-operators-jrg7l"
Nov 22 07:46:28 crc kubenswrapper[4853]: I1122 07:46:28.823237 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rtccr\" (UniqueName: \"kubernetes.io/projected/ff697a20-c1a6-486d-8e8e-a03902c30e6b-kube-api-access-rtccr\") pod \"redhat-operators-jrg7l\" (UID: \"ff697a20-c1a6-486d-8e8e-a03902c30e6b\") " pod="openshift-marketplace/redhat-operators-jrg7l"
Nov 22 07:46:29 crc kubenswrapper[4853]: I1122 07:46:29.014576 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jrg7l"
Nov 22 07:46:29 crc kubenswrapper[4853]: I1122 07:46:29.548471 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jrg7l"]
Nov 22 07:46:29 crc kubenswrapper[4853]: I1122 07:46:29.841292 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jrg7l" event={"ID":"ff697a20-c1a6-486d-8e8e-a03902c30e6b","Type":"ContainerStarted","Data":"2c6b5c9875c84d9109e1ee2978ec1631d8213ddda125e8e4cb06ff7d30c9490e"}
Nov 22 07:46:30 crc kubenswrapper[4853]: I1122 07:46:30.201638 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-6f4b8c7cc5-lxts4"]
Nov 22 07:46:30 crc kubenswrapper[4853]: I1122 07:46:30.204721 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-6f4b8c7cc5-lxts4"
Nov 22 07:46:30 crc kubenswrapper[4853]: I1122 07:46:30.216890 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc"
Nov 22 07:46:30 crc kubenswrapper[4853]: I1122 07:46:30.217043 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data"
Nov 22 07:46:30 crc kubenswrapper[4853]: I1122 07:46:30.217138 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc"
Nov 22 07:46:30 crc kubenswrapper[4853]: I1122 07:46:30.217930 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-6f4b8c7cc5-lxts4"]
Nov 22 07:46:30 crc kubenswrapper[4853]: I1122 07:46:30.247063 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/759aa807-9e0a-4af1-bfec-8a04df8a8928-run-httpd\") pod \"swift-proxy-6f4b8c7cc5-lxts4\" (UID: \"759aa807-9e0a-4af1-bfec-8a04df8a8928\") " pod="openstack/swift-proxy-6f4b8c7cc5-lxts4"
Nov 22 07:46:30 crc kubenswrapper[4853]: I1122 07:46:30.247157 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-svbn2\" (UniqueName: \"kubernetes.io/projected/759aa807-9e0a-4af1-bfec-8a04df8a8928-kube-api-access-svbn2\") pod \"swift-proxy-6f4b8c7cc5-lxts4\" (UID: \"759aa807-9e0a-4af1-bfec-8a04df8a8928\") " pod="openstack/swift-proxy-6f4b8c7cc5-lxts4"
Nov 22 07:46:30 crc kubenswrapper[4853]: I1122 07:46:30.247242 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/759aa807-9e0a-4af1-bfec-8a04df8a8928-combined-ca-bundle\") pod \"swift-proxy-6f4b8c7cc5-lxts4\" (UID: \"759aa807-9e0a-4af1-bfec-8a04df8a8928\") " pod="openstack/swift-proxy-6f4b8c7cc5-lxts4"
Nov 22 07:46:30 crc kubenswrapper[4853]: I1122 07:46:30.247286 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/759aa807-9e0a-4af1-bfec-8a04df8a8928-etc-swift\") pod \"swift-proxy-6f4b8c7cc5-lxts4\" (UID: \"759aa807-9e0a-4af1-bfec-8a04df8a8928\") " pod="openstack/swift-proxy-6f4b8c7cc5-lxts4"
Nov 22 07:46:30 crc kubenswrapper[4853]: I1122 07:46:30.247347 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/759aa807-9e0a-4af1-bfec-8a04df8a8928-log-httpd\") pod \"swift-proxy-6f4b8c7cc5-lxts4\" (UID: \"759aa807-9e0a-4af1-bfec-8a04df8a8928\") " pod="openstack/swift-proxy-6f4b8c7cc5-lxts4"
Nov 22 07:46:30 crc kubenswrapper[4853]: I1122 07:46:30.247431 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/759aa807-9e0a-4af1-bfec-8a04df8a8928-public-tls-certs\") pod \"swift-proxy-6f4b8c7cc5-lxts4\" (UID: \"759aa807-9e0a-4af1-bfec-8a04df8a8928\") " pod="openstack/swift-proxy-6f4b8c7cc5-lxts4"
Nov 22 07:46:30 crc kubenswrapper[4853]: I1122 07:46:30.247511 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/759aa807-9e0a-4af1-bfec-8a04df8a8928-config-data\") pod \"swift-proxy-6f4b8c7cc5-lxts4\" (UID: \"759aa807-9e0a-4af1-bfec-8a04df8a8928\") " pod="openstack/swift-proxy-6f4b8c7cc5-lxts4"
Nov 22 07:46:30 crc kubenswrapper[4853]: I1122 07:46:30.247586 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/759aa807-9e0a-4af1-bfec-8a04df8a8928-internal-tls-certs\") pod \"swift-proxy-6f4b8c7cc5-lxts4\" (UID: \"759aa807-9e0a-4af1-bfec-8a04df8a8928\") " pod="openstack/swift-proxy-6f4b8c7cc5-lxts4"
Nov 22 07:46:30 crc kubenswrapper[4853]: I1122 07:46:30.350677 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/759aa807-9e0a-4af1-bfec-8a04df8a8928-combined-ca-bundle\") pod \"swift-proxy-6f4b8c7cc5-lxts4\" (UID: \"759aa807-9e0a-4af1-bfec-8a04df8a8928\") " pod="openstack/swift-proxy-6f4b8c7cc5-lxts4"
Nov 22 07:46:30 crc kubenswrapper[4853]: I1122 07:46:30.351113 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/759aa807-9e0a-4af1-bfec-8a04df8a8928-etc-swift\") pod \"swift-proxy-6f4b8c7cc5-lxts4\" (UID: \"759aa807-9e0a-4af1-bfec-8a04df8a8928\") " pod="openstack/swift-proxy-6f4b8c7cc5-lxts4"
Nov 22 07:46:30 crc kubenswrapper[4853]: I1122 07:46:30.351182 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/759aa807-9e0a-4af1-bfec-8a04df8a8928-log-httpd\") pod \"swift-proxy-6f4b8c7cc5-lxts4\" (UID: \"759aa807-9e0a-4af1-bfec-8a04df8a8928\") " pod="openstack/swift-proxy-6f4b8c7cc5-lxts4"
Nov 22 07:46:30 crc kubenswrapper[4853]: I1122 07:46:30.351261 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/759aa807-9e0a-4af1-bfec-8a04df8a8928-public-tls-certs\") pod \"swift-proxy-6f4b8c7cc5-lxts4\" (UID: \"759aa807-9e0a-4af1-bfec-8a04df8a8928\") " pod="openstack/swift-proxy-6f4b8c7cc5-lxts4"
Nov 22 07:46:30 crc kubenswrapper[4853]: I1122 07:46:30.351329 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/759aa807-9e0a-4af1-bfec-8a04df8a8928-config-data\") pod \"swift-proxy-6f4b8c7cc5-lxts4\" (UID: \"759aa807-9e0a-4af1-bfec-8a04df8a8928\") " pod="openstack/swift-proxy-6f4b8c7cc5-lxts4"
Nov 22 07:46:30 crc kubenswrapper[4853]: I1122 07:46:30.351384 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/759aa807-9e0a-4af1-bfec-8a04df8a8928-internal-tls-certs\") pod \"swift-proxy-6f4b8c7cc5-lxts4\" (UID: \"759aa807-9e0a-4af1-bfec-8a04df8a8928\") " pod="openstack/swift-proxy-6f4b8c7cc5-lxts4"
Nov 22 07:46:30 crc kubenswrapper[4853]: I1122 07:46:30.351417 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/759aa807-9e0a-4af1-bfec-8a04df8a8928-run-httpd\") pod \"swift-proxy-6f4b8c7cc5-lxts4\" (UID: \"759aa807-9e0a-4af1-bfec-8a04df8a8928\") " pod="openstack/swift-proxy-6f4b8c7cc5-lxts4"
Nov 22 07:46:30 crc kubenswrapper[4853]: I1122 07:46:30.351449 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-svbn2\" (UniqueName: \"kubernetes.io/projected/759aa807-9e0a-4af1-bfec-8a04df8a8928-kube-api-access-svbn2\") pod \"swift-proxy-6f4b8c7cc5-lxts4\" (UID: \"759aa807-9e0a-4af1-bfec-8a04df8a8928\") " pod="openstack/swift-proxy-6f4b8c7cc5-lxts4"
Nov 22 07:46:30 crc kubenswrapper[4853]: I1122 07:46:30.353169 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/759aa807-9e0a-4af1-bfec-8a04df8a8928-run-httpd\") pod \"swift-proxy-6f4b8c7cc5-lxts4\" (UID: \"759aa807-9e0a-4af1-bfec-8a04df8a8928\") " pod="openstack/swift-proxy-6f4b8c7cc5-lxts4"
Nov 22 07:46:30 crc kubenswrapper[4853]: I1122 07:46:30.353234 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/759aa807-9e0a-4af1-bfec-8a04df8a8928-log-httpd\") pod \"swift-proxy-6f4b8c7cc5-lxts4\" (UID: \"759aa807-9e0a-4af1-bfec-8a04df8a8928\") " pod="openstack/swift-proxy-6f4b8c7cc5-lxts4"
Nov 22 07:46:30 crc kubenswrapper[4853]: I1122 07:46:30.359916 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/759aa807-9e0a-4af1-bfec-8a04df8a8928-public-tls-certs\") pod \"swift-proxy-6f4b8c7cc5-lxts4\" (UID: \"759aa807-9e0a-4af1-bfec-8a04df8a8928\") " pod="openstack/swift-proxy-6f4b8c7cc5-lxts4"
Nov 22 07:46:30 crc kubenswrapper[4853]: I1122 07:46:30.361695 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/759aa807-9e0a-4af1-bfec-8a04df8a8928-internal-tls-certs\") pod \"swift-proxy-6f4b8c7cc5-lxts4\" (UID: \"759aa807-9e0a-4af1-bfec-8a04df8a8928\") " pod="openstack/swift-proxy-6f4b8c7cc5-lxts4"
Nov 22 07:46:30 crc kubenswrapper[4853]: I1122 07:46:30.361871 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/759aa807-9e0a-4af1-bfec-8a04df8a8928-etc-swift\") pod \"swift-proxy-6f4b8c7cc5-lxts4\" (UID: \"759aa807-9e0a-4af1-bfec-8a04df8a8928\") " pod="openstack/swift-proxy-6f4b8c7cc5-lxts4"
Nov 22 07:46:30 crc kubenswrapper[4853]: I1122 07:46:30.369982 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/759aa807-9e0a-4af1-bfec-8a04df8a8928-config-data\") pod \"swift-proxy-6f4b8c7cc5-lxts4\" (UID: \"759aa807-9e0a-4af1-bfec-8a04df8a8928\") " pod="openstack/swift-proxy-6f4b8c7cc5-lxts4"
Nov 22 07:46:30 crc kubenswrapper[4853]: I1122 07:46:30.372147 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-svbn2\" (UniqueName: \"kubernetes.io/projected/759aa807-9e0a-4af1-bfec-8a04df8a8928-kube-api-access-svbn2\") pod \"swift-proxy-6f4b8c7cc5-lxts4\" (UID: \"759aa807-9e0a-4af1-bfec-8a04df8a8928\") " pod="openstack/swift-proxy-6f4b8c7cc5-lxts4"
Nov 22 07:46:30 crc kubenswrapper[4853]: I1122 07:46:30.372474 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/759aa807-9e0a-4af1-bfec-8a04df8a8928-combined-ca-bundle\") pod \"swift-proxy-6f4b8c7cc5-lxts4\" (UID: \"759aa807-9e0a-4af1-bfec-8a04df8a8928\") " pod="openstack/swift-proxy-6f4b8c7cc5-lxts4"
Nov 22 07:46:30 crc kubenswrapper[4853]: I1122 07:46:30.530406 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-6f4b8c7cc5-lxts4"
Nov 22 07:46:30 crc kubenswrapper[4853]: I1122 07:46:30.855683 4853 generic.go:334] "Generic (PLEG): container finished" podID="ff697a20-c1a6-486d-8e8e-a03902c30e6b" containerID="6b281d359746839c9dfcb6e3569d9d47a062c21da2d72734e8a2aaf959ac099d" exitCode=0
Nov 22 07:46:30 crc kubenswrapper[4853]: I1122 07:46:30.855871 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jrg7l" event={"ID":"ff697a20-c1a6-486d-8e8e-a03902c30e6b","Type":"ContainerDied","Data":"6b281d359746839c9dfcb6e3569d9d47a062c21da2d72734e8a2aaf959ac099d"}
Nov 22 07:46:31 crc kubenswrapper[4853]: I1122 07:46:31.189253 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-6f4b8c7cc5-lxts4"]
Nov 22 07:46:31 crc kubenswrapper[4853]: I1122 07:46:31.867976 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6f4b8c7cc5-lxts4" event={"ID":"759aa807-9e0a-4af1-bfec-8a04df8a8928","Type":"ContainerStarted","Data":"e10cd662dd8473c6826d70a012f277eb340525d3b2d686167f94da5dfb035b99"}
Nov 22 07:46:31 crc kubenswrapper[4853]: I1122 07:46:31.868368 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6f4b8c7cc5-lxts4" event={"ID":"759aa807-9e0a-4af1-bfec-8a04df8a8928","Type":"ContainerStarted","Data":"ea1d72d719eb4b5d0884db46e8a77947bbe39cd753236f21d9d266ae290cff7c"}
Nov 22 07:46:32 crc kubenswrapper[4853]: I1122 07:46:32.881711 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6f4b8c7cc5-lxts4" event={"ID":"759aa807-9e0a-4af1-bfec-8a04df8a8928","Type":"ContainerStarted","Data":"0066cc27f181d6d979ff940d4291990fab2158b60981adf6a3bd894a8804784e"}
Nov 22 07:46:32 crc kubenswrapper[4853]: I1122 07:46:32.882399 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-6f4b8c7cc5-lxts4"
Nov 22 07:46:32 crc kubenswrapper[4853]: I1122 07:46:32.882422 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-6f4b8c7cc5-lxts4"
Nov 22 07:46:32 crc kubenswrapper[4853]: I1122 07:46:32.918335 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-6f4b8c7cc5-lxts4" podStartSLOduration=2.918309577 podStartE2EDuration="2.918309577s" podCreationTimestamp="2025-11-22 07:46:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:46:32.910336833 +0000 UTC m=+2191.750959469" watchObservedRunningTime="2025-11-22 07:46:32.918309577 +0000 UTC m=+2191.758932223"
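# NOTE: The pod_startup_latency_tracker entry above reports podStartSLOduration for
# swift-proxy: creation to running took about 2.9s, and the zeroed
# firstStartedPulling/lastFinishedPulling timestamps ("0001-01-01 00:00:00") indicate the
# images were already cached on the node, so no pull time counted against the SLO. An
# illustrative way to watch the same transition from the API side (assuming oc access):
#
#     $ oc -n openstack get pod swift-proxy-6f4b8c7cc5-lxts4 \
#         -o jsonpath='{range .status.conditions[*]}{.type}{"\t"}{.lastTransitionTime}{"\n"}{end}'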
Nov 22 07:46:35 crc kubenswrapper[4853]: I1122 07:46:35.956073 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jrg7l" event={"ID":"ff697a20-c1a6-486d-8e8e-a03902c30e6b","Type":"ContainerStarted","Data":"1bb6d8cab139483d13f3efc3cec60a8fc9944d4c66c0d76f6ec7a1f86a732ded"}
Nov 22 07:46:40 crc kubenswrapper[4853]: I1122 07:46:40.536376 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-6f4b8c7cc5-lxts4"
Nov 22 07:46:40 crc kubenswrapper[4853]: I1122 07:46:40.537143 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-6f4b8c7cc5-lxts4"
Nov 22 07:46:45 crc kubenswrapper[4853]: I1122 07:46:45.109358 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"fa95ca8f-6cef-4cbc-bd08-f693a09770dc","Type":"ContainerStarted","Data":"c8ef3f136e819353ed9fb7348299e82ca61bbd50ea8bf3cca9458a3c726be6ea"}
Nov 22 07:46:45 crc kubenswrapper[4853]: I1122 07:46:45.130523 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=5.659948633 podStartE2EDuration="42.130493255s" podCreationTimestamp="2025-11-22 07:46:03 +0000 UTC" firstStartedPulling="2025-11-22 07:46:06.739828671 +0000 UTC m=+2165.580451297" lastFinishedPulling="2025-11-22 07:46:43.210373253 +0000 UTC m=+2202.050995919" observedRunningTime="2025-11-22 07:46:45.126448402 +0000 UTC m=+2203.967071018" watchObservedRunningTime="2025-11-22 07:46:45.130493255 +0000 UTC m=+2203.971115901"
Nov 22 07:46:47 crc kubenswrapper[4853]: I1122 07:46:47.161090 4853 generic.go:334] "Generic (PLEG): container finished" podID="ff697a20-c1a6-486d-8e8e-a03902c30e6b" containerID="1bb6d8cab139483d13f3efc3cec60a8fc9944d4c66c0d76f6ec7a1f86a732ded" exitCode=0
Nov 22 07:46:47 crc kubenswrapper[4853]: I1122 07:46:47.161191 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jrg7l" event={"ID":"ff697a20-c1a6-486d-8e8e-a03902c30e6b","Type":"ContainerDied","Data":"1bb6d8cab139483d13f3efc3cec60a8fc9944d4c66c0d76f6ec7a1f86a732ded"}
Nov 22 07:46:47 crc kubenswrapper[4853]: I1122 07:46:47.437238 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Nov 22 07:46:47 crc kubenswrapper[4853]: I1122 07:46:47.437584 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="adb9d004-7149-44b2-8f2b-ee6da0680491" containerName="ceilometer-central-agent" containerID="cri-o://3729a3524a625f8fa705d3f68685bb4896e992c7de961de6594920469a6eeeb1" gracePeriod=30
Nov 22 07:46:47 crc kubenswrapper[4853]: I1122 07:46:47.437814 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="adb9d004-7149-44b2-8f2b-ee6da0680491" containerName="ceilometer-notification-agent" containerID="cri-o://79a2f3d7811a6a8536b8346f47094e116a287bfbe16db2a9eecd6d58c902c893" gracePeriod=30
Nov 22 07:46:47 crc kubenswrapper[4853]: I1122 07:46:47.437912 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="adb9d004-7149-44b2-8f2b-ee6da0680491" containerName="proxy-httpd" containerID="cri-o://85e37ed7e0206ce97496ea3d7d54785d358e8755a0ba01ff41c2f62d860943ec" gracePeriod=30
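# NOTE: The "SyncLoop DELETE" for openstack/ceilometer-0 fans out into one "Killing
# container with a grace period" entry per container (central-agent, notification-agent,
# proxy-httpd, and sg-core below), each with gracePeriod=30: the runtime delivers SIGTERM
# and escalates to SIGKILL only if a container outlives the 30s grace period. An
# illustrative filter for these entries in the node's journal (the kubelet logs under the
# kubenswrapper identifier):
#
#     $ journalctl -t kubenswrapper --since 07:46:47 | grep 'Killing container'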
containerName="sg-core" containerID="cri-o://93380c932415821aba0b0a700e4a87b32f5c0d08e60d4a5106dd097d5cf430ce" gracePeriod=30 Nov 22 07:46:48 crc kubenswrapper[4853]: I1122 07:46:48.175576 4853 generic.go:334] "Generic (PLEG): container finished" podID="adb9d004-7149-44b2-8f2b-ee6da0680491" containerID="85e37ed7e0206ce97496ea3d7d54785d358e8755a0ba01ff41c2f62d860943ec" exitCode=0 Nov 22 07:46:48 crc kubenswrapper[4853]: I1122 07:46:48.175952 4853 generic.go:334] "Generic (PLEG): container finished" podID="adb9d004-7149-44b2-8f2b-ee6da0680491" containerID="93380c932415821aba0b0a700e4a87b32f5c0d08e60d4a5106dd097d5cf430ce" exitCode=2 Nov 22 07:46:48 crc kubenswrapper[4853]: I1122 07:46:48.175961 4853 generic.go:334] "Generic (PLEG): container finished" podID="adb9d004-7149-44b2-8f2b-ee6da0680491" containerID="3729a3524a625f8fa705d3f68685bb4896e992c7de961de6594920469a6eeeb1" exitCode=0 Nov 22 07:46:48 crc kubenswrapper[4853]: I1122 07:46:48.175986 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"adb9d004-7149-44b2-8f2b-ee6da0680491","Type":"ContainerDied","Data":"85e37ed7e0206ce97496ea3d7d54785d358e8755a0ba01ff41c2f62d860943ec"} Nov 22 07:46:48 crc kubenswrapper[4853]: I1122 07:46:48.176018 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"adb9d004-7149-44b2-8f2b-ee6da0680491","Type":"ContainerDied","Data":"93380c932415821aba0b0a700e4a87b32f5c0d08e60d4a5106dd097d5cf430ce"} Nov 22 07:46:48 crc kubenswrapper[4853]: I1122 07:46:48.176030 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"adb9d004-7149-44b2-8f2b-ee6da0680491","Type":"ContainerDied","Data":"3729a3524a625f8fa705d3f68685bb4896e992c7de961de6594920469a6eeeb1"} Nov 22 07:46:48 crc kubenswrapper[4853]: I1122 07:46:48.999349 4853 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="adb9d004-7149-44b2-8f2b-ee6da0680491" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.193:3000/\": dial tcp 10.217.0.193:3000: connect: connection refused" Nov 22 07:46:49 crc kubenswrapper[4853]: I1122 07:46:49.191758 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jrg7l" event={"ID":"ff697a20-c1a6-486d-8e8e-a03902c30e6b","Type":"ContainerStarted","Data":"cd99fdd8969c4aa6e86e4f722006c0f8c023e9d036f841eb0504cc1f110f1df0"} Nov 22 07:46:49 crc kubenswrapper[4853]: I1122 07:46:49.218078 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-jrg7l" podStartSLOduration=3.568754541 podStartE2EDuration="21.218054487s" podCreationTimestamp="2025-11-22 07:46:28 +0000 UTC" firstStartedPulling="2025-11-22 07:46:30.858669066 +0000 UTC m=+2189.699291692" lastFinishedPulling="2025-11-22 07:46:48.507969012 +0000 UTC m=+2207.348591638" observedRunningTime="2025-11-22 07:46:49.214822037 +0000 UTC m=+2208.055444663" watchObservedRunningTime="2025-11-22 07:46:49.218054487 +0000 UTC m=+2208.058677113" Nov 22 07:46:50 crc kubenswrapper[4853]: I1122 07:46:50.205813 4853 generic.go:334] "Generic (PLEG): container finished" podID="f1598c90-266c-4607-b491-e9927d76469c" containerID="12fd72d7205251492e634a8695f8737b21dea1378b41aed6d23d4c5fedf9533c" exitCode=0 Nov 22 07:46:50 crc kubenswrapper[4853]: I1122 07:46:50.205900 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-qdbdm" 
event={"ID":"f1598c90-266c-4607-b491-e9927d76469c","Type":"ContainerDied","Data":"12fd72d7205251492e634a8695f8737b21dea1378b41aed6d23d4c5fedf9533c"} Nov 22 07:46:51 crc kubenswrapper[4853]: I1122 07:46:51.660743 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-qdbdm" Nov 22 07:46:51 crc kubenswrapper[4853]: I1122 07:46:51.839230 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f1598c90-266c-4607-b491-e9927d76469c-scripts\") pod \"f1598c90-266c-4607-b491-e9927d76469c\" (UID: \"f1598c90-266c-4607-b491-e9927d76469c\") " Nov 22 07:46:51 crc kubenswrapper[4853]: I1122 07:46:51.839418 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1598c90-266c-4607-b491-e9927d76469c-config-data\") pod \"f1598c90-266c-4607-b491-e9927d76469c\" (UID: \"f1598c90-266c-4607-b491-e9927d76469c\") " Nov 22 07:46:51 crc kubenswrapper[4853]: I1122 07:46:51.839510 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1598c90-266c-4607-b491-e9927d76469c-combined-ca-bundle\") pod \"f1598c90-266c-4607-b491-e9927d76469c\" (UID: \"f1598c90-266c-4607-b491-e9927d76469c\") " Nov 22 07:46:51 crc kubenswrapper[4853]: I1122 07:46:51.839536 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f1598c90-266c-4607-b491-e9927d76469c-logs\") pod \"f1598c90-266c-4607-b491-e9927d76469c\" (UID: \"f1598c90-266c-4607-b491-e9927d76469c\") " Nov 22 07:46:51 crc kubenswrapper[4853]: I1122 07:46:51.839587 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p8nvj\" (UniqueName: \"kubernetes.io/projected/f1598c90-266c-4607-b491-e9927d76469c-kube-api-access-p8nvj\") pod \"f1598c90-266c-4607-b491-e9927d76469c\" (UID: \"f1598c90-266c-4607-b491-e9927d76469c\") " Nov 22 07:46:51 crc kubenswrapper[4853]: I1122 07:46:51.840045 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f1598c90-266c-4607-b491-e9927d76469c-logs" (OuterVolumeSpecName: "logs") pod "f1598c90-266c-4607-b491-e9927d76469c" (UID: "f1598c90-266c-4607-b491-e9927d76469c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:46:51 crc kubenswrapper[4853]: I1122 07:46:51.842049 4853 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f1598c90-266c-4607-b491-e9927d76469c-logs\") on node \"crc\" DevicePath \"\"" Nov 22 07:46:51 crc kubenswrapper[4853]: I1122 07:46:51.848900 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1598c90-266c-4607-b491-e9927d76469c-scripts" (OuterVolumeSpecName: "scripts") pod "f1598c90-266c-4607-b491-e9927d76469c" (UID: "f1598c90-266c-4607-b491-e9927d76469c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:46:51 crc kubenswrapper[4853]: I1122 07:46:51.851318 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1598c90-266c-4607-b491-e9927d76469c-kube-api-access-p8nvj" (OuterVolumeSpecName: "kube-api-access-p8nvj") pod "f1598c90-266c-4607-b491-e9927d76469c" (UID: "f1598c90-266c-4607-b491-e9927d76469c"). 
InnerVolumeSpecName "kube-api-access-p8nvj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:46:51 crc kubenswrapper[4853]: I1122 07:46:51.886192 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1598c90-266c-4607-b491-e9927d76469c-config-data" (OuterVolumeSpecName: "config-data") pod "f1598c90-266c-4607-b491-e9927d76469c" (UID: "f1598c90-266c-4607-b491-e9927d76469c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:46:51 crc kubenswrapper[4853]: I1122 07:46:51.909267 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1598c90-266c-4607-b491-e9927d76469c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f1598c90-266c-4607-b491-e9927d76469c" (UID: "f1598c90-266c-4607-b491-e9927d76469c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:46:51 crc kubenswrapper[4853]: I1122 07:46:51.945563 4853 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f1598c90-266c-4607-b491-e9927d76469c-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 07:46:51 crc kubenswrapper[4853]: I1122 07:46:51.945912 4853 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1598c90-266c-4607-b491-e9927d76469c-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 07:46:51 crc kubenswrapper[4853]: I1122 07:46:51.946020 4853 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1598c90-266c-4607-b491-e9927d76469c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:46:51 crc kubenswrapper[4853]: I1122 07:46:51.946164 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p8nvj\" (UniqueName: \"kubernetes.io/projected/f1598c90-266c-4607-b491-e9927d76469c-kube-api-access-p8nvj\") on node \"crc\" DevicePath \"\"" Nov 22 07:46:52 crc kubenswrapper[4853]: I1122 07:46:52.246089 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-qdbdm" event={"ID":"f1598c90-266c-4607-b491-e9927d76469c","Type":"ContainerDied","Data":"267f7013f4c79ac236265a8dfca7972a2e51eb59f89c6b638d25656a9f3236d4"} Nov 22 07:46:52 crc kubenswrapper[4853]: I1122 07:46:52.246597 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="267f7013f4c79ac236265a8dfca7972a2e51eb59f89c6b638d25656a9f3236d4" Nov 22 07:46:52 crc kubenswrapper[4853]: I1122 07:46:52.246742 4853 util.go:48] "No ready sandbox for pod can be found. 
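# NOTE: Teardown for the completed placement-db-sync-qdbdm pod runs in a fixed order per
# volume: operationExecutor.UnmountVolume starts, UnmountVolume.TearDown reports success,
# then the reconciler records "Volume detached ... DevicePath \"\"". The empty DevicePath
# is normal here: empty-dir, secret, and projected volumes are node-local constructs, so
# there is no block device to detach. On the node, teardown removes the pod's volume
# directories under the kubelet state dir; an illustrative check (pod UID from above):
#
#     $ sudo ls /var/lib/kubelet/pods/f1598c90-266c-4607-b491-e9927d76469c/volumes \
#         2>/dev/null || echo "volumes already cleaned up"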
Nov 22 07:46:52 crc kubenswrapper[4853]: I1122 07:46:52.246742 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-qdbdm"
Nov 22 07:46:52 crc kubenswrapper[4853]: I1122 07:46:52.373186 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-8478cc79fb-ggl8b"]
Nov 22 07:46:52 crc kubenswrapper[4853]: E1122 07:46:52.373893 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1598c90-266c-4607-b491-e9927d76469c" containerName="placement-db-sync"
Nov 22 07:46:52 crc kubenswrapper[4853]: I1122 07:46:52.373915 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1598c90-266c-4607-b491-e9927d76469c" containerName="placement-db-sync"
Nov 22 07:46:52 crc kubenswrapper[4853]: I1122 07:46:52.374210 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1598c90-266c-4607-b491-e9927d76469c" containerName="placement-db-sync"
Nov 22 07:46:52 crc kubenswrapper[4853]: I1122 07:46:52.379041 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-8478cc79fb-ggl8b"
Nov 22 07:46:52 crc kubenswrapper[4853]: I1122 07:46:52.383415 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts"
Nov 22 07:46:52 crc kubenswrapper[4853]: I1122 07:46:52.383641 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc"
Nov 22 07:46:52 crc kubenswrapper[4853]: I1122 07:46:52.383858 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-2tc6t"
Nov 22 07:46:52 crc kubenswrapper[4853]: I1122 07:46:52.384061 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data"
Nov 22 07:46:52 crc kubenswrapper[4853]: I1122 07:46:52.387183 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc"
Nov 22 07:46:52 crc kubenswrapper[4853]: I1122 07:46:52.393253 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-8478cc79fb-ggl8b"]
Nov 22 07:46:52 crc kubenswrapper[4853]: I1122 07:46:52.562976 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9d809e7-9dbc-4c65-96e3-f8d025e97dc4-combined-ca-bundle\") pod \"placement-8478cc79fb-ggl8b\" (UID: \"a9d809e7-9dbc-4c65-96e3-f8d025e97dc4\") " pod="openstack/placement-8478cc79fb-ggl8b"
Nov 22 07:46:52 crc kubenswrapper[4853]: I1122 07:46:52.563092 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a9d809e7-9dbc-4c65-96e3-f8d025e97dc4-scripts\") pod \"placement-8478cc79fb-ggl8b\" (UID: \"a9d809e7-9dbc-4c65-96e3-f8d025e97dc4\") " pod="openstack/placement-8478cc79fb-ggl8b"
Nov 22 07:46:52 crc kubenswrapper[4853]: I1122 07:46:52.563472 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhbkd\" (UniqueName: \"kubernetes.io/projected/a9d809e7-9dbc-4c65-96e3-f8d025e97dc4-kube-api-access-fhbkd\") pod \"placement-8478cc79fb-ggl8b\" (UID: \"a9d809e7-9dbc-4c65-96e3-f8d025e97dc4\") " pod="openstack/placement-8478cc79fb-ggl8b"
Nov 22 07:46:52 crc kubenswrapper[4853]: I1122 07:46:52.563926 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a9d809e7-9dbc-4c65-96e3-f8d025e97dc4-logs\") pod \"placement-8478cc79fb-ggl8b\" (UID: \"a9d809e7-9dbc-4c65-96e3-f8d025e97dc4\") " pod="openstack/placement-8478cc79fb-ggl8b"
Nov 22 07:46:52 crc kubenswrapper[4853]: I1122 07:46:52.564138 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a9d809e7-9dbc-4c65-96e3-f8d025e97dc4-config-data\") pod \"placement-8478cc79fb-ggl8b\" (UID: \"a9d809e7-9dbc-4c65-96e3-f8d025e97dc4\") " pod="openstack/placement-8478cc79fb-ggl8b"
Nov 22 07:46:52 crc kubenswrapper[4853]: I1122 07:46:52.564407 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a9d809e7-9dbc-4c65-96e3-f8d025e97dc4-public-tls-certs\") pod \"placement-8478cc79fb-ggl8b\" (UID: \"a9d809e7-9dbc-4c65-96e3-f8d025e97dc4\") " pod="openstack/placement-8478cc79fb-ggl8b"
Nov 22 07:46:52 crc kubenswrapper[4853]: I1122 07:46:52.564444 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a9d809e7-9dbc-4c65-96e3-f8d025e97dc4-internal-tls-certs\") pod \"placement-8478cc79fb-ggl8b\" (UID: \"a9d809e7-9dbc-4c65-96e3-f8d025e97dc4\") " pod="openstack/placement-8478cc79fb-ggl8b"
Nov 22 07:46:52 crc kubenswrapper[4853]: I1122 07:46:52.667053 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a9d809e7-9dbc-4c65-96e3-f8d025e97dc4-public-tls-certs\") pod \"placement-8478cc79fb-ggl8b\" (UID: \"a9d809e7-9dbc-4c65-96e3-f8d025e97dc4\") " pod="openstack/placement-8478cc79fb-ggl8b"
Nov 22 07:46:52 crc kubenswrapper[4853]: I1122 07:46:52.667119 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a9d809e7-9dbc-4c65-96e3-f8d025e97dc4-internal-tls-certs\") pod \"placement-8478cc79fb-ggl8b\" (UID: \"a9d809e7-9dbc-4c65-96e3-f8d025e97dc4\") " pod="openstack/placement-8478cc79fb-ggl8b"
Nov 22 07:46:52 crc kubenswrapper[4853]: I1122 07:46:52.667160 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9d809e7-9dbc-4c65-96e3-f8d025e97dc4-combined-ca-bundle\") pod \"placement-8478cc79fb-ggl8b\" (UID: \"a9d809e7-9dbc-4c65-96e3-f8d025e97dc4\") " pod="openstack/placement-8478cc79fb-ggl8b"
Nov 22 07:46:52 crc kubenswrapper[4853]: I1122 07:46:52.667260 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a9d809e7-9dbc-4c65-96e3-f8d025e97dc4-scripts\") pod \"placement-8478cc79fb-ggl8b\" (UID: \"a9d809e7-9dbc-4c65-96e3-f8d025e97dc4\") " pod="openstack/placement-8478cc79fb-ggl8b"
Nov 22 07:46:52 crc kubenswrapper[4853]: I1122 07:46:52.667999 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fhbkd\" (UniqueName: \"kubernetes.io/projected/a9d809e7-9dbc-4c65-96e3-f8d025e97dc4-kube-api-access-fhbkd\") pod \"placement-8478cc79fb-ggl8b\" (UID: \"a9d809e7-9dbc-4c65-96e3-f8d025e97dc4\") " pod="openstack/placement-8478cc79fb-ggl8b"
Nov 22 07:46:52 crc kubenswrapper[4853]: I1122 07:46:52.668102 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a9d809e7-9dbc-4c65-96e3-f8d025e97dc4-logs\") pod \"placement-8478cc79fb-ggl8b\" (UID: \"a9d809e7-9dbc-4c65-96e3-f8d025e97dc4\") " pod="openstack/placement-8478cc79fb-ggl8b"
Nov 22 07:46:52 crc kubenswrapper[4853]: I1122 07:46:52.668181 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a9d809e7-9dbc-4c65-96e3-f8d025e97dc4-config-data\") pod \"placement-8478cc79fb-ggl8b\" (UID: \"a9d809e7-9dbc-4c65-96e3-f8d025e97dc4\") " pod="openstack/placement-8478cc79fb-ggl8b"
Nov 22 07:46:52 crc kubenswrapper[4853]: I1122 07:46:52.669055 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a9d809e7-9dbc-4c65-96e3-f8d025e97dc4-logs\") pod \"placement-8478cc79fb-ggl8b\" (UID: \"a9d809e7-9dbc-4c65-96e3-f8d025e97dc4\") " pod="openstack/placement-8478cc79fb-ggl8b"
Nov 22 07:46:52 crc kubenswrapper[4853]: I1122 07:46:52.672554 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a9d809e7-9dbc-4c65-96e3-f8d025e97dc4-scripts\") pod \"placement-8478cc79fb-ggl8b\" (UID: \"a9d809e7-9dbc-4c65-96e3-f8d025e97dc4\") " pod="openstack/placement-8478cc79fb-ggl8b"
Nov 22 07:46:52 crc kubenswrapper[4853]: I1122 07:46:52.674685 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a9d809e7-9dbc-4c65-96e3-f8d025e97dc4-config-data\") pod \"placement-8478cc79fb-ggl8b\" (UID: \"a9d809e7-9dbc-4c65-96e3-f8d025e97dc4\") " pod="openstack/placement-8478cc79fb-ggl8b"
Nov 22 07:46:52 crc kubenswrapper[4853]: I1122 07:46:52.675015 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a9d809e7-9dbc-4c65-96e3-f8d025e97dc4-public-tls-certs\") pod \"placement-8478cc79fb-ggl8b\" (UID: \"a9d809e7-9dbc-4c65-96e3-f8d025e97dc4\") " pod="openstack/placement-8478cc79fb-ggl8b"
Nov 22 07:46:52 crc kubenswrapper[4853]: I1122 07:46:52.675341 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9d809e7-9dbc-4c65-96e3-f8d025e97dc4-combined-ca-bundle\") pod \"placement-8478cc79fb-ggl8b\" (UID: \"a9d809e7-9dbc-4c65-96e3-f8d025e97dc4\") " pod="openstack/placement-8478cc79fb-ggl8b"
Nov 22 07:46:52 crc kubenswrapper[4853]: I1122 07:46:52.683414 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a9d809e7-9dbc-4c65-96e3-f8d025e97dc4-internal-tls-certs\") pod \"placement-8478cc79fb-ggl8b\" (UID: \"a9d809e7-9dbc-4c65-96e3-f8d025e97dc4\") " pod="openstack/placement-8478cc79fb-ggl8b"
Nov 22 07:46:52 crc kubenswrapper[4853]: I1122 07:46:52.692135 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fhbkd\" (UniqueName: \"kubernetes.io/projected/a9d809e7-9dbc-4c65-96e3-f8d025e97dc4-kube-api-access-fhbkd\") pod \"placement-8478cc79fb-ggl8b\" (UID: \"a9d809e7-9dbc-4c65-96e3-f8d025e97dc4\") " pod="openstack/placement-8478cc79fb-ggl8b"
Nov 22 07:46:52 crc kubenswrapper[4853]: I1122 07:46:52.726409 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-8478cc79fb-ggl8b"
Nov 22 07:46:52 crc kubenswrapper[4853]: I1122 07:46:52.913853 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-sn986"]
Nov 22 07:46:52 crc kubenswrapper[4853]: I1122 07:46:52.923005 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-sn986"
Nov 22 07:46:52 crc kubenswrapper[4853]: I1122 07:46:52.957276 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-sn986"]
Nov 22 07:46:53 crc kubenswrapper[4853]: I1122 07:46:53.039287 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-fsblj"]
Nov 22 07:46:53 crc kubenswrapper[4853]: I1122 07:46:53.041872 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-fsblj"
Nov 22 07:46:53 crc kubenswrapper[4853]: I1122 07:46:53.049435 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-fsblj"]
Nov 22 07:46:53 crc kubenswrapper[4853]: I1122 07:46:53.063250 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-46aa-account-create-vvxzs"]
Nov 22 07:46:53 crc kubenswrapper[4853]: I1122 07:46:53.065669 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-46aa-account-create-vvxzs"
Nov 22 07:46:53 crc kubenswrapper[4853]: I1122 07:46:53.073546 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret"
Nov 22 07:46:53 crc kubenswrapper[4853]: I1122 07:46:53.073832 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-46aa-account-create-vvxzs"]
Nov 22 07:46:53 crc kubenswrapper[4853]: I1122 07:46:53.086259 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/10eb3c0c-487e-4c7c-b422-fc41587f2b3e-operator-scripts\") pod \"nova-api-db-create-sn986\" (UID: \"10eb3c0c-487e-4c7c-b422-fc41587f2b3e\") " pod="openstack/nova-api-db-create-sn986"
Nov 22 07:46:53 crc kubenswrapper[4853]: I1122 07:46:53.086481 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-czmvs\" (UniqueName: \"kubernetes.io/projected/10eb3c0c-487e-4c7c-b422-fc41587f2b3e-kube-api-access-czmvs\") pod \"nova-api-db-create-sn986\" (UID: \"10eb3c0c-487e-4c7c-b422-fc41587f2b3e\") " pod="openstack/nova-api-db-create-sn986"
Nov 22 07:46:53 crc kubenswrapper[4853]: I1122 07:46:53.149982 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-s6dqt"]
Nov 22 07:46:53 crc kubenswrapper[4853]: I1122 07:46:53.154353 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-s6dqt"
Nov 22 07:46:53 crc kubenswrapper[4853]: I1122 07:46:53.209339 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-klp5p\" (UniqueName: \"kubernetes.io/projected/267c0415-28fa-43de-a7e0-c64254b85fee-kube-api-access-klp5p\") pod \"nova-api-46aa-account-create-vvxzs\" (UID: \"267c0415-28fa-43de-a7e0-c64254b85fee\") " pod="openstack/nova-api-46aa-account-create-vvxzs"
Nov 22 07:46:53 crc kubenswrapper[4853]: I1122 07:46:53.209563 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/267c0415-28fa-43de-a7e0-c64254b85fee-operator-scripts\") pod \"nova-api-46aa-account-create-vvxzs\" (UID: \"267c0415-28fa-43de-a7e0-c64254b85fee\") " pod="openstack/nova-api-46aa-account-create-vvxzs"
Nov 22 07:46:53 crc kubenswrapper[4853]: I1122 07:46:53.210045 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1a7997d1-57a6-4a25-a55c-e56a641573e3-operator-scripts\") pod \"nova-cell0-db-create-fsblj\" (UID: \"1a7997d1-57a6-4a25-a55c-e56a641573e3\") " pod="openstack/nova-cell0-db-create-fsblj"
Nov 22 07:46:53 crc kubenswrapper[4853]: I1122 07:46:53.210233 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/10eb3c0c-487e-4c7c-b422-fc41587f2b3e-operator-scripts\") pod \"nova-api-db-create-sn986\" (UID: \"10eb3c0c-487e-4c7c-b422-fc41587f2b3e\") " pod="openstack/nova-api-db-create-sn986"
Nov 22 07:46:53 crc kubenswrapper[4853]: I1122 07:46:53.210521 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6f99l\" (UniqueName: \"kubernetes.io/projected/1a7997d1-57a6-4a25-a55c-e56a641573e3-kube-api-access-6f99l\") pod \"nova-cell0-db-create-fsblj\" (UID: \"1a7997d1-57a6-4a25-a55c-e56a641573e3\") " pod="openstack/nova-cell0-db-create-fsblj"
Nov 22 07:46:53 crc kubenswrapper[4853]: I1122 07:46:53.211072 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-czmvs\" (UniqueName: \"kubernetes.io/projected/10eb3c0c-487e-4c7c-b422-fc41587f2b3e-kube-api-access-czmvs\") pod \"nova-api-db-create-sn986\" (UID: \"10eb3c0c-487e-4c7c-b422-fc41587f2b3e\") " pod="openstack/nova-api-db-create-sn986"
Nov 22 07:46:53 crc kubenswrapper[4853]: I1122 07:46:53.212130 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/10eb3c0c-487e-4c7c-b422-fc41587f2b3e-operator-scripts\") pod \"nova-api-db-create-sn986\" (UID: \"10eb3c0c-487e-4c7c-b422-fc41587f2b3e\") " pod="openstack/nova-api-db-create-sn986"
Nov 22 07:46:53 crc kubenswrapper[4853]: I1122 07:46:53.219282 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-s6dqt"]
Nov 22 07:46:53 crc kubenswrapper[4853]: I1122 07:46:53.257079 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-czmvs\" (UniqueName: \"kubernetes.io/projected/10eb3c0c-487e-4c7c-b422-fc41587f2b3e-kube-api-access-czmvs\") pod \"nova-api-db-create-sn986\" (UID: \"10eb3c0c-487e-4c7c-b422-fc41587f2b3e\") " pod="openstack/nova-api-db-create-sn986"
Nov 22 07:46:53 crc kubenswrapper[4853]: I1122 07:46:53.270300 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-sn986"
Nov 22 07:46:53 crc kubenswrapper[4853]: I1122 07:46:53.308712 4853 generic.go:334] "Generic (PLEG): container finished" podID="adb9d004-7149-44b2-8f2b-ee6da0680491" containerID="79a2f3d7811a6a8536b8346f47094e116a287bfbe16db2a9eecd6d58c902c893" exitCode=0
Nov 22 07:46:53 crc kubenswrapper[4853]: I1122 07:46:53.311903 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"adb9d004-7149-44b2-8f2b-ee6da0680491","Type":"ContainerDied","Data":"79a2f3d7811a6a8536b8346f47094e116a287bfbe16db2a9eecd6d58c902c893"}
Nov 22 07:46:53 crc kubenswrapper[4853]: I1122 07:46:53.325268 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-klp5p\" (UniqueName: \"kubernetes.io/projected/267c0415-28fa-43de-a7e0-c64254b85fee-kube-api-access-klp5p\") pod \"nova-api-46aa-account-create-vvxzs\" (UID: \"267c0415-28fa-43de-a7e0-c64254b85fee\") " pod="openstack/nova-api-46aa-account-create-vvxzs"
Nov 22 07:46:53 crc kubenswrapper[4853]: I1122 07:46:53.325331 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/267c0415-28fa-43de-a7e0-c64254b85fee-operator-scripts\") pod \"nova-api-46aa-account-create-vvxzs\" (UID: \"267c0415-28fa-43de-a7e0-c64254b85fee\") " pod="openstack/nova-api-46aa-account-create-vvxzs"
Nov 22 07:46:53 crc kubenswrapper[4853]: I1122 07:46:53.325409 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1a7997d1-57a6-4a25-a55c-e56a641573e3-operator-scripts\") pod \"nova-cell0-db-create-fsblj\" (UID: \"1a7997d1-57a6-4a25-a55c-e56a641573e3\") " pod="openstack/nova-cell0-db-create-fsblj"
Nov 22 07:46:53 crc kubenswrapper[4853]: I1122 07:46:53.325484 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b2beefde-5354-4376-8cf2-5f3bd9cde859-operator-scripts\") pod \"nova-cell1-db-create-s6dqt\" (UID: \"b2beefde-5354-4376-8cf2-5f3bd9cde859\") " pod="openstack/nova-cell1-db-create-s6dqt"
Nov 22 07:46:53 crc kubenswrapper[4853]: I1122 07:46:53.325527 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zmkrw\" (UniqueName: \"kubernetes.io/projected/b2beefde-5354-4376-8cf2-5f3bd9cde859-kube-api-access-zmkrw\") pod \"nova-cell1-db-create-s6dqt\" (UID: \"b2beefde-5354-4376-8cf2-5f3bd9cde859\") " pod="openstack/nova-cell1-db-create-s6dqt"
Nov 22 07:46:53 crc kubenswrapper[4853]: I1122 07:46:53.325556 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6f99l\" (UniqueName: \"kubernetes.io/projected/1a7997d1-57a6-4a25-a55c-e56a641573e3-kube-api-access-6f99l\") pod \"nova-cell0-db-create-fsblj\" (UID: \"1a7997d1-57a6-4a25-a55c-e56a641573e3\") " pod="openstack/nova-cell0-db-create-fsblj"
Nov 22 07:46:53 crc kubenswrapper[4853]: I1122 07:46:53.347367 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/267c0415-28fa-43de-a7e0-c64254b85fee-operator-scripts\") pod \"nova-api-46aa-account-create-vvxzs\" (UID: \"267c0415-28fa-43de-a7e0-c64254b85fee\") " pod="openstack/nova-api-46aa-account-create-vvxzs"
Nov 22 07:46:53 crc kubenswrapper[4853]: I1122 07:46:53.381313 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-klp5p\" (UniqueName: \"kubernetes.io/projected/267c0415-28fa-43de-a7e0-c64254b85fee-kube-api-access-klp5p\") pod \"nova-api-46aa-account-create-vvxzs\" (UID: \"267c0415-28fa-43de-a7e0-c64254b85fee\") " pod="openstack/nova-api-46aa-account-create-vvxzs"
Nov 22 07:46:53 crc kubenswrapper[4853]: I1122 07:46:53.384257 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1a7997d1-57a6-4a25-a55c-e56a641573e3-operator-scripts\") pod \"nova-cell0-db-create-fsblj\" (UID: \"1a7997d1-57a6-4a25-a55c-e56a641573e3\") " pod="openstack/nova-cell0-db-create-fsblj"
Nov 22 07:46:53 crc kubenswrapper[4853]: I1122 07:46:53.398957 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6f99l\" (UniqueName: \"kubernetes.io/projected/1a7997d1-57a6-4a25-a55c-e56a641573e3-kube-api-access-6f99l\") pod \"nova-cell0-db-create-fsblj\" (UID: \"1a7997d1-57a6-4a25-a55c-e56a641573e3\") " pod="openstack/nova-cell0-db-create-fsblj"
Nov 22 07:46:53 crc kubenswrapper[4853]: I1122 07:46:53.433606 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b2beefde-5354-4376-8cf2-5f3bd9cde859-operator-scripts\") pod \"nova-cell1-db-create-s6dqt\" (UID: \"b2beefde-5354-4376-8cf2-5f3bd9cde859\") " pod="openstack/nova-cell1-db-create-s6dqt"
Nov 22 07:46:53 crc kubenswrapper[4853]: I1122 07:46:53.433702 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zmkrw\" (UniqueName: \"kubernetes.io/projected/b2beefde-5354-4376-8cf2-5f3bd9cde859-kube-api-access-zmkrw\") pod \"nova-cell1-db-create-s6dqt\" (UID: \"b2beefde-5354-4376-8cf2-5f3bd9cde859\") " pod="openstack/nova-cell1-db-create-s6dqt"
Nov 22 07:46:53 crc kubenswrapper[4853]: I1122 07:46:53.445440 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-fsblj"
Nov 22 07:46:53 crc kubenswrapper[4853]: I1122 07:46:53.448375 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b2beefde-5354-4376-8cf2-5f3bd9cde859-operator-scripts\") pod \"nova-cell1-db-create-s6dqt\" (UID: \"b2beefde-5354-4376-8cf2-5f3bd9cde859\") " pod="openstack/nova-cell1-db-create-s6dqt"
Nov 22 07:46:53 crc kubenswrapper[4853]: I1122 07:46:53.469444 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-46aa-account-create-vvxzs"
Nov 22 07:46:53 crc kubenswrapper[4853]: I1122 07:46:53.472023 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zmkrw\" (UniqueName: \"kubernetes.io/projected/b2beefde-5354-4376-8cf2-5f3bd9cde859-kube-api-access-zmkrw\") pod \"nova-cell1-db-create-s6dqt\" (UID: \"b2beefde-5354-4376-8cf2-5f3bd9cde859\") " pod="openstack/nova-cell1-db-create-s6dqt"
Nov 22 07:46:53 crc kubenswrapper[4853]: I1122 07:46:53.501332 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-s6dqt"
Nov 22 07:46:53 crc kubenswrapper[4853]: I1122 07:46:53.514966 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-0ebe-account-create-9lcbc"]
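# NOTE: From 07:46:52 onward the nova operator creates several short-lived jobs at once
# (api/cell0/cell1 db-create and account-create pods), so their ADD/UPDATE, volume, and
# sandbox entries interleave. When reading a burst like this, it is usually easier to
# follow a single pod by filtering the journal on its name or UID; illustrative examples:
#
#     $ journalctl -t kubenswrapper | grep 'nova-cell1-db-create-s6dqt'
#     $ journalctl -t kubenswrapper | grep 'b2beefde-5354-4376-8cf2-5f3bd9cde859'   # same pod, by UID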
Nov 22 07:46:53 crc kubenswrapper[4853]: I1122 07:46:53.516861 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-0ebe-account-create-9lcbc"
Nov 22 07:46:53 crc kubenswrapper[4853]: I1122 07:46:53.526489 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret"
Nov 22 07:46:53 crc kubenswrapper[4853]: I1122 07:46:53.552428 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/74817a4c-27ab-46b7-8ec8-5663379dc5f8-operator-scripts\") pod \"nova-cell0-0ebe-account-create-9lcbc\" (UID: \"74817a4c-27ab-46b7-8ec8-5663379dc5f8\") " pod="openstack/nova-cell0-0ebe-account-create-9lcbc"
Nov 22 07:46:53 crc kubenswrapper[4853]: I1122 07:46:53.552996 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v76zk\" (UniqueName: \"kubernetes.io/projected/74817a4c-27ab-46b7-8ec8-5663379dc5f8-kube-api-access-v76zk\") pod \"nova-cell0-0ebe-account-create-9lcbc\" (UID: \"74817a4c-27ab-46b7-8ec8-5663379dc5f8\") " pod="openstack/nova-cell0-0ebe-account-create-9lcbc"
Nov 22 07:46:53 crc kubenswrapper[4853]: I1122 07:46:53.554009 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-0ebe-account-create-9lcbc"]
Nov 22 07:46:53 crc kubenswrapper[4853]: I1122 07:46:53.569486 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-4ca2-account-create-twtvw"]
Nov 22 07:46:53 crc kubenswrapper[4853]: I1122 07:46:53.572653 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-4ca2-account-create-twtvw"
Nov 22 07:46:53 crc kubenswrapper[4853]: I1122 07:46:53.577693 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret"
Nov 22 07:46:53 crc kubenswrapper[4853]: I1122 07:46:53.594827 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Nov 22 07:46:53 crc kubenswrapper[4853]: I1122 07:46:53.602579 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-4ca2-account-create-twtvw"]
Nov 22 07:46:53 crc kubenswrapper[4853]: I1122 07:46:53.674864 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/adb9d004-7149-44b2-8f2b-ee6da0680491-run-httpd\") pod \"adb9d004-7149-44b2-8f2b-ee6da0680491\" (UID: \"adb9d004-7149-44b2-8f2b-ee6da0680491\") "
Nov 22 07:46:53 crc kubenswrapper[4853]: I1122 07:46:53.674945 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/adb9d004-7149-44b2-8f2b-ee6da0680491-scripts\") pod \"adb9d004-7149-44b2-8f2b-ee6da0680491\" (UID: \"adb9d004-7149-44b2-8f2b-ee6da0680491\") "
Nov 22 07:46:53 crc kubenswrapper[4853]: I1122 07:46:53.675168 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/adb9d004-7149-44b2-8f2b-ee6da0680491-sg-core-conf-yaml\") pod \"adb9d004-7149-44b2-8f2b-ee6da0680491\" (UID: \"adb9d004-7149-44b2-8f2b-ee6da0680491\") "
Nov 22 07:46:53 crc kubenswrapper[4853]: I1122 07:46:53.675326 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/adb9d004-7149-44b2-8f2b-ee6da0680491-config-data\") pod \"adb9d004-7149-44b2-8f2b-ee6da0680491\" (UID: \"adb9d004-7149-44b2-8f2b-ee6da0680491\") "
Nov 22 07:46:53 crc kubenswrapper[4853]: I1122 07:46:53.675372 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/adb9d004-7149-44b2-8f2b-ee6da0680491-log-httpd\") pod \"adb9d004-7149-44b2-8f2b-ee6da0680491\" (UID: \"adb9d004-7149-44b2-8f2b-ee6da0680491\") "
Nov 22 07:46:53 crc kubenswrapper[4853]: I1122 07:46:53.675485 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tlh75\" (UniqueName: \"kubernetes.io/projected/adb9d004-7149-44b2-8f2b-ee6da0680491-kube-api-access-tlh75\") pod \"adb9d004-7149-44b2-8f2b-ee6da0680491\" (UID: \"adb9d004-7149-44b2-8f2b-ee6da0680491\") "
Nov 22 07:46:53 crc kubenswrapper[4853]: I1122 07:46:53.675525 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/adb9d004-7149-44b2-8f2b-ee6da0680491-combined-ca-bundle\") pod \"adb9d004-7149-44b2-8f2b-ee6da0680491\" (UID: \"adb9d004-7149-44b2-8f2b-ee6da0680491\") "
Nov 22 07:46:53 crc kubenswrapper[4853]: I1122 07:46:53.675902 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/74817a4c-27ab-46b7-8ec8-5663379dc5f8-operator-scripts\") pod \"nova-cell0-0ebe-account-create-9lcbc\" (UID: \"74817a4c-27ab-46b7-8ec8-5663379dc5f8\") " pod="openstack/nova-cell0-0ebe-account-create-9lcbc"
Nov 22 07:46:53 crc kubenswrapper[4853]: I1122 07:46:53.675942 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e0fa86b0-ac83-432d-884c-c906c2b47a12-operator-scripts\") pod \"nova-cell1-4ca2-account-create-twtvw\" (UID: \"e0fa86b0-ac83-432d-884c-c906c2b47a12\") " pod="openstack/nova-cell1-4ca2-account-create-twtvw"
Nov 22 07:46:53 crc kubenswrapper[4853]: I1122 07:46:53.676003 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t5ssv\" (UniqueName: \"kubernetes.io/projected/e0fa86b0-ac83-432d-884c-c906c2b47a12-kube-api-access-t5ssv\") pod \"nova-cell1-4ca2-account-create-twtvw\" (UID: \"e0fa86b0-ac83-432d-884c-c906c2b47a12\") " pod="openstack/nova-cell1-4ca2-account-create-twtvw"
Nov 22 07:46:53 crc kubenswrapper[4853]: I1122 07:46:53.676119 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v76zk\" (UniqueName: \"kubernetes.io/projected/74817a4c-27ab-46b7-8ec8-5663379dc5f8-kube-api-access-v76zk\") pod \"nova-cell0-0ebe-account-create-9lcbc\" (UID: \"74817a4c-27ab-46b7-8ec8-5663379dc5f8\") " pod="openstack/nova-cell0-0ebe-account-create-9lcbc"
Nov 22 07:46:53 crc kubenswrapper[4853]: I1122 07:46:53.683670 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/74817a4c-27ab-46b7-8ec8-5663379dc5f8-operator-scripts\") pod \"nova-cell0-0ebe-account-create-9lcbc\" (UID: \"74817a4c-27ab-46b7-8ec8-5663379dc5f8\") " pod="openstack/nova-cell0-0ebe-account-create-9lcbc"
Nov 22 07:46:53 crc kubenswrapper[4853]: I1122 07:46:53.686206 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/adb9d004-7149-44b2-8f2b-ee6da0680491-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "adb9d004-7149-44b2-8f2b-ee6da0680491" (UID: "adb9d004-7149-44b2-8f2b-ee6da0680491"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 22 07:46:53 crc kubenswrapper[4853]: I1122 07:46:53.702145 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/adb9d004-7149-44b2-8f2b-ee6da0680491-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "adb9d004-7149-44b2-8f2b-ee6da0680491" (UID: "adb9d004-7149-44b2-8f2b-ee6da0680491"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 22 07:46:53 crc kubenswrapper[4853]: I1122 07:46:53.714956 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v76zk\" (UniqueName: \"kubernetes.io/projected/74817a4c-27ab-46b7-8ec8-5663379dc5f8-kube-api-access-v76zk\") pod \"nova-cell0-0ebe-account-create-9lcbc\" (UID: \"74817a4c-27ab-46b7-8ec8-5663379dc5f8\") " pod="openstack/nova-cell0-0ebe-account-create-9lcbc"
Nov 22 07:46:53 crc kubenswrapper[4853]: I1122 07:46:53.740120 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/adb9d004-7149-44b2-8f2b-ee6da0680491-kube-api-access-tlh75" (OuterVolumeSpecName: "kube-api-access-tlh75") pod "adb9d004-7149-44b2-8f2b-ee6da0680491" (UID: "adb9d004-7149-44b2-8f2b-ee6da0680491"). InnerVolumeSpecName "kube-api-access-tlh75". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 22 07:46:53 crc kubenswrapper[4853]: I1122 07:46:53.758332 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/adb9d004-7149-44b2-8f2b-ee6da0680491-scripts" (OuterVolumeSpecName: "scripts") pod "adb9d004-7149-44b2-8f2b-ee6da0680491" (UID: "adb9d004-7149-44b2-8f2b-ee6da0680491"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 22 07:46:53 crc kubenswrapper[4853]: I1122 07:46:53.780729 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e0fa86b0-ac83-432d-884c-c906c2b47a12-operator-scripts\") pod \"nova-cell1-4ca2-account-create-twtvw\" (UID: \"e0fa86b0-ac83-432d-884c-c906c2b47a12\") " pod="openstack/nova-cell1-4ca2-account-create-twtvw"
Nov 22 07:46:53 crc kubenswrapper[4853]: I1122 07:46:53.780838 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t5ssv\" (UniqueName: \"kubernetes.io/projected/e0fa86b0-ac83-432d-884c-c906c2b47a12-kube-api-access-t5ssv\") pod \"nova-cell1-4ca2-account-create-twtvw\" (UID: \"e0fa86b0-ac83-432d-884c-c906c2b47a12\") " pod="openstack/nova-cell1-4ca2-account-create-twtvw"
Nov 22 07:46:53 crc kubenswrapper[4853]: I1122 07:46:53.781015 4853 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/adb9d004-7149-44b2-8f2b-ee6da0680491-log-httpd\") on node \"crc\" DevicePath \"\""
Nov 22 07:46:53 crc kubenswrapper[4853]: I1122 07:46:53.781028 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tlh75\" (UniqueName: \"kubernetes.io/projected/adb9d004-7149-44b2-8f2b-ee6da0680491-kube-api-access-tlh75\") on node \"crc\" DevicePath \"\""
Nov 22 07:46:53 crc kubenswrapper[4853]: I1122 07:46:53.781039 4853 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/adb9d004-7149-44b2-8f2b-ee6da0680491-run-httpd\") on node \"crc\" DevicePath \"\""
Nov 22 07:46:53 crc kubenswrapper[4853]: I1122 07:46:53.781047 4853 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/adb9d004-7149-44b2-8f2b-ee6da0680491-scripts\") on node \"crc\" DevicePath \"\""
Nov 22 07:46:53 crc kubenswrapper[4853]: I1122 07:46:53.792868 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e0fa86b0-ac83-432d-884c-c906c2b47a12-operator-scripts\") pod \"nova-cell1-4ca2-account-create-twtvw\" (UID: \"e0fa86b0-ac83-432d-884c-c906c2b47a12\") " pod="openstack/nova-cell1-4ca2-account-create-twtvw"
Nov 22 07:46:53 crc kubenswrapper[4853]: I1122 07:46:53.832495 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t5ssv\" (UniqueName: \"kubernetes.io/projected/e0fa86b0-ac83-432d-884c-c906c2b47a12-kube-api-access-t5ssv\") pod \"nova-cell1-4ca2-account-create-twtvw\" (UID: \"e0fa86b0-ac83-432d-884c-c906c2b47a12\") " pod="openstack/nova-cell1-4ca2-account-create-twtvw"
Nov 22 07:46:53 crc kubenswrapper[4853]: I1122 07:46:53.890833 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-0ebe-account-create-9lcbc"
Nov 22 07:46:53 crc kubenswrapper[4853]: I1122 07:46:53.923240 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/adb9d004-7149-44b2-8f2b-ee6da0680491-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "adb9d004-7149-44b2-8f2b-ee6da0680491" (UID: "adb9d004-7149-44b2-8f2b-ee6da0680491"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 22 07:46:53 crc kubenswrapper[4853]: I1122 07:46:53.932239 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-4ca2-account-create-twtvw"
Nov 22 07:46:54 crc kubenswrapper[4853]: I1122 07:46:54.005855 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-8478cc79fb-ggl8b"]
Nov 22 07:46:54 crc kubenswrapper[4853]: I1122 07:46:54.038674 4853 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/adb9d004-7149-44b2-8f2b-ee6da0680491-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Nov 22 07:46:54 crc kubenswrapper[4853]: I1122 07:46:54.194229 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/adb9d004-7149-44b2-8f2b-ee6da0680491-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "adb9d004-7149-44b2-8f2b-ee6da0680491" (UID: "adb9d004-7149-44b2-8f2b-ee6da0680491"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 22 07:46:54 crc kubenswrapper[4853]: I1122 07:46:54.266492 4853 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/adb9d004-7149-44b2-8f2b-ee6da0680491-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 22 07:46:54 crc kubenswrapper[4853]: I1122 07:46:54.301659 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/adb9d004-7149-44b2-8f2b-ee6da0680491-config-data" (OuterVolumeSpecName: "config-data") pod "adb9d004-7149-44b2-8f2b-ee6da0680491" (UID: "adb9d004-7149-44b2-8f2b-ee6da0680491"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 22 07:46:54 crc kubenswrapper[4853]: I1122 07:46:54.389443 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"adb9d004-7149-44b2-8f2b-ee6da0680491","Type":"ContainerDied","Data":"28c4544f0e3fa0a25a069313d250de042394e86752de696f9c517bebef25d364"}
Nov 22 07:46:54 crc kubenswrapper[4853]: I1122 07:46:54.389510 4853 scope.go:117] "RemoveContainer" containerID="85e37ed7e0206ce97496ea3d7d54785d358e8755a0ba01ff41c2f62d860943ec"
Nov 22 07:46:54 crc kubenswrapper[4853]: I1122 07:46:54.389538 4853 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openstack/ceilometer-0" Nov 22 07:46:54 crc kubenswrapper[4853]: I1122 07:46:54.394142 4853 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/adb9d004-7149-44b2-8f2b-ee6da0680491-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 07:46:54 crc kubenswrapper[4853]: I1122 07:46:54.418473 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-8478cc79fb-ggl8b" event={"ID":"a9d809e7-9dbc-4c65-96e3-f8d025e97dc4","Type":"ContainerStarted","Data":"c9567311dc210c2133d108a374239c2f665bed3ec3fc5a8074ddc1f2ae09f5e7"} Nov 22 07:46:54 crc kubenswrapper[4853]: I1122 07:46:54.473128 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 22 07:46:54 crc kubenswrapper[4853]: I1122 07:46:54.504431 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 22 07:46:54 crc kubenswrapper[4853]: I1122 07:46:54.511515 4853 scope.go:117] "RemoveContainer" containerID="93380c932415821aba0b0a700e4a87b32f5c0d08e60d4a5106dd097d5cf430ce" Nov 22 07:46:54 crc kubenswrapper[4853]: I1122 07:46:54.516043 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-sn986"] Nov 22 07:46:54 crc kubenswrapper[4853]: I1122 07:46:54.540074 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 22 07:46:54 crc kubenswrapper[4853]: E1122 07:46:54.540684 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="adb9d004-7149-44b2-8f2b-ee6da0680491" containerName="ceilometer-notification-agent" Nov 22 07:46:54 crc kubenswrapper[4853]: I1122 07:46:54.540726 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="adb9d004-7149-44b2-8f2b-ee6da0680491" containerName="ceilometer-notification-agent" Nov 22 07:46:54 crc kubenswrapper[4853]: E1122 07:46:54.540760 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="adb9d004-7149-44b2-8f2b-ee6da0680491" containerName="sg-core" Nov 22 07:46:54 crc kubenswrapper[4853]: I1122 07:46:54.540768 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="adb9d004-7149-44b2-8f2b-ee6da0680491" containerName="sg-core" Nov 22 07:46:54 crc kubenswrapper[4853]: E1122 07:46:54.540820 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="adb9d004-7149-44b2-8f2b-ee6da0680491" containerName="proxy-httpd" Nov 22 07:46:54 crc kubenswrapper[4853]: I1122 07:46:54.540826 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="adb9d004-7149-44b2-8f2b-ee6da0680491" containerName="proxy-httpd" Nov 22 07:46:54 crc kubenswrapper[4853]: E1122 07:46:54.540835 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="adb9d004-7149-44b2-8f2b-ee6da0680491" containerName="ceilometer-central-agent" Nov 22 07:46:54 crc kubenswrapper[4853]: I1122 07:46:54.540841 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="adb9d004-7149-44b2-8f2b-ee6da0680491" containerName="ceilometer-central-agent" Nov 22 07:46:54 crc kubenswrapper[4853]: I1122 07:46:54.541126 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="adb9d004-7149-44b2-8f2b-ee6da0680491" containerName="ceilometer-notification-agent" Nov 22 07:46:54 crc kubenswrapper[4853]: I1122 07:46:54.541156 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="adb9d004-7149-44b2-8f2b-ee6da0680491" containerName="ceilometer-central-agent" Nov 22 07:46:54 crc kubenswrapper[4853]: I1122 07:46:54.541169 4853 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="adb9d004-7149-44b2-8f2b-ee6da0680491" containerName="sg-core" Nov 22 07:46:54 crc kubenswrapper[4853]: I1122 07:46:54.541184 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="adb9d004-7149-44b2-8f2b-ee6da0680491" containerName="proxy-httpd" Nov 22 07:46:54 crc kubenswrapper[4853]: I1122 07:46:54.543517 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 22 07:46:54 crc kubenswrapper[4853]: I1122 07:46:54.550571 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 22 07:46:54 crc kubenswrapper[4853]: I1122 07:46:54.551077 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 22 07:46:54 crc kubenswrapper[4853]: I1122 07:46:54.563642 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 22 07:46:54 crc kubenswrapper[4853]: I1122 07:46:54.591397 4853 scope.go:117] "RemoveContainer" containerID="79a2f3d7811a6a8536b8346f47094e116a287bfbe16db2a9eecd6d58c902c893" Nov 22 07:46:54 crc kubenswrapper[4853]: I1122 07:46:54.609961 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9374c792-71a8-40cf-914f-e91d727ebd5e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9374c792-71a8-40cf-914f-e91d727ebd5e\") " pod="openstack/ceilometer-0" Nov 22 07:46:54 crc kubenswrapper[4853]: I1122 07:46:54.610173 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9374c792-71a8-40cf-914f-e91d727ebd5e-log-httpd\") pod \"ceilometer-0\" (UID: \"9374c792-71a8-40cf-914f-e91d727ebd5e\") " pod="openstack/ceilometer-0" Nov 22 07:46:54 crc kubenswrapper[4853]: I1122 07:46:54.610225 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9374c792-71a8-40cf-914f-e91d727ebd5e-scripts\") pod \"ceilometer-0\" (UID: \"9374c792-71a8-40cf-914f-e91d727ebd5e\") " pod="openstack/ceilometer-0" Nov 22 07:46:54 crc kubenswrapper[4853]: I1122 07:46:54.610338 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9374c792-71a8-40cf-914f-e91d727ebd5e-config-data\") pod \"ceilometer-0\" (UID: \"9374c792-71a8-40cf-914f-e91d727ebd5e\") " pod="openstack/ceilometer-0" Nov 22 07:46:54 crc kubenswrapper[4853]: I1122 07:46:54.610565 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9374c792-71a8-40cf-914f-e91d727ebd5e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9374c792-71a8-40cf-914f-e91d727ebd5e\") " pod="openstack/ceilometer-0" Nov 22 07:46:54 crc kubenswrapper[4853]: I1122 07:46:54.610628 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5cv5w\" (UniqueName: \"kubernetes.io/projected/9374c792-71a8-40cf-914f-e91d727ebd5e-kube-api-access-5cv5w\") pod \"ceilometer-0\" (UID: \"9374c792-71a8-40cf-914f-e91d727ebd5e\") " pod="openstack/ceilometer-0" Nov 22 07:46:54 crc kubenswrapper[4853]: I1122 07:46:54.610917 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/9374c792-71a8-40cf-914f-e91d727ebd5e-run-httpd\") pod \"ceilometer-0\" (UID: \"9374c792-71a8-40cf-914f-e91d727ebd5e\") " pod="openstack/ceilometer-0" Nov 22 07:46:54 crc kubenswrapper[4853]: I1122 07:46:54.638714 4853 scope.go:117] "RemoveContainer" containerID="3729a3524a625f8fa705d3f68685bb4896e992c7de961de6594920469a6eeeb1" Nov 22 07:46:54 crc kubenswrapper[4853]: I1122 07:46:54.718649 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9374c792-71a8-40cf-914f-e91d727ebd5e-run-httpd\") pod \"ceilometer-0\" (UID: \"9374c792-71a8-40cf-914f-e91d727ebd5e\") " pod="openstack/ceilometer-0" Nov 22 07:46:54 crc kubenswrapper[4853]: I1122 07:46:54.718886 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9374c792-71a8-40cf-914f-e91d727ebd5e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9374c792-71a8-40cf-914f-e91d727ebd5e\") " pod="openstack/ceilometer-0" Nov 22 07:46:54 crc kubenswrapper[4853]: I1122 07:46:54.719028 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9374c792-71a8-40cf-914f-e91d727ebd5e-log-httpd\") pod \"ceilometer-0\" (UID: \"9374c792-71a8-40cf-914f-e91d727ebd5e\") " pod="openstack/ceilometer-0" Nov 22 07:46:54 crc kubenswrapper[4853]: I1122 07:46:54.719061 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9374c792-71a8-40cf-914f-e91d727ebd5e-scripts\") pod \"ceilometer-0\" (UID: \"9374c792-71a8-40cf-914f-e91d727ebd5e\") " pod="openstack/ceilometer-0" Nov 22 07:46:54 crc kubenswrapper[4853]: I1122 07:46:54.719170 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9374c792-71a8-40cf-914f-e91d727ebd5e-config-data\") pod \"ceilometer-0\" (UID: \"9374c792-71a8-40cf-914f-e91d727ebd5e\") " pod="openstack/ceilometer-0" Nov 22 07:46:54 crc kubenswrapper[4853]: I1122 07:46:54.719333 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9374c792-71a8-40cf-914f-e91d727ebd5e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9374c792-71a8-40cf-914f-e91d727ebd5e\") " pod="openstack/ceilometer-0" Nov 22 07:46:54 crc kubenswrapper[4853]: I1122 07:46:54.719380 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5cv5w\" (UniqueName: \"kubernetes.io/projected/9374c792-71a8-40cf-914f-e91d727ebd5e-kube-api-access-5cv5w\") pod \"ceilometer-0\" (UID: \"9374c792-71a8-40cf-914f-e91d727ebd5e\") " pod="openstack/ceilometer-0" Nov 22 07:46:54 crc kubenswrapper[4853]: I1122 07:46:54.722367 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9374c792-71a8-40cf-914f-e91d727ebd5e-run-httpd\") pod \"ceilometer-0\" (UID: \"9374c792-71a8-40cf-914f-e91d727ebd5e\") " pod="openstack/ceilometer-0" Nov 22 07:46:54 crc kubenswrapper[4853]: I1122 07:46:54.723120 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9374c792-71a8-40cf-914f-e91d727ebd5e-log-httpd\") pod \"ceilometer-0\" (UID: \"9374c792-71a8-40cf-914f-e91d727ebd5e\") " pod="openstack/ceilometer-0" Nov 22 07:46:54 crc kubenswrapper[4853]: I1122 
07:46:54.735061 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9374c792-71a8-40cf-914f-e91d727ebd5e-config-data\") pod \"ceilometer-0\" (UID: \"9374c792-71a8-40cf-914f-e91d727ebd5e\") " pod="openstack/ceilometer-0" Nov 22 07:46:54 crc kubenswrapper[4853]: I1122 07:46:54.735439 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9374c792-71a8-40cf-914f-e91d727ebd5e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9374c792-71a8-40cf-914f-e91d727ebd5e\") " pod="openstack/ceilometer-0" Nov 22 07:46:54 crc kubenswrapper[4853]: I1122 07:46:54.750958 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9374c792-71a8-40cf-914f-e91d727ebd5e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9374c792-71a8-40cf-914f-e91d727ebd5e\") " pod="openstack/ceilometer-0" Nov 22 07:46:54 crc kubenswrapper[4853]: I1122 07:46:54.756971 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9374c792-71a8-40cf-914f-e91d727ebd5e-scripts\") pod \"ceilometer-0\" (UID: \"9374c792-71a8-40cf-914f-e91d727ebd5e\") " pod="openstack/ceilometer-0" Nov 22 07:46:54 crc kubenswrapper[4853]: I1122 07:46:54.781803 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5cv5w\" (UniqueName: \"kubernetes.io/projected/9374c792-71a8-40cf-914f-e91d727ebd5e-kube-api-access-5cv5w\") pod \"ceilometer-0\" (UID: \"9374c792-71a8-40cf-914f-e91d727ebd5e\") " pod="openstack/ceilometer-0" Nov 22 07:46:54 crc kubenswrapper[4853]: I1122 07:46:54.805828 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-fsblj"] Nov 22 07:46:54 crc kubenswrapper[4853]: W1122 07:46:54.818282 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1a7997d1_57a6_4a25_a55c_e56a641573e3.slice/crio-b449d0f07138796ac72f8a862919acaf247098fac4ee0ec1d7fc756d3428baf7 WatchSource:0}: Error finding container b449d0f07138796ac72f8a862919acaf247098fac4ee0ec1d7fc756d3428baf7: Status 404 returned error can't find the container with id b449d0f07138796ac72f8a862919acaf247098fac4ee0ec1d7fc756d3428baf7 Nov 22 07:46:54 crc kubenswrapper[4853]: I1122 07:46:54.837805 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-46aa-account-create-vvxzs"] Nov 22 07:46:54 crc kubenswrapper[4853]: I1122 07:46:54.955648 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 22 07:46:55 crc kubenswrapper[4853]: I1122 07:46:55.408332 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-0ebe-account-create-9lcbc"] Nov 22 07:46:55 crc kubenswrapper[4853]: W1122 07:46:55.445687 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode0fa86b0_ac83_432d_884c_c906c2b47a12.slice/crio-b43950464004dd0da37d15f163169d4c797fe987638b5e5321d3fdb05e348033 WatchSource:0}: Error finding container b43950464004dd0da37d15f163169d4c797fe987638b5e5321d3fdb05e348033: Status 404 returned error can't find the container with id b43950464004dd0da37d15f163169d4c797fe987638b5e5321d3fdb05e348033 Nov 22 07:46:55 crc kubenswrapper[4853]: I1122 07:46:55.462204 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-46aa-account-create-vvxzs" event={"ID":"267c0415-28fa-43de-a7e0-c64254b85fee","Type":"ContainerStarted","Data":"c5b36fa6f15a112a76e94c86c8342bbbd27860888fb54186837d4e9374bc42ca"} Nov 22 07:46:55 crc kubenswrapper[4853]: I1122 07:46:55.465231 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-fsblj" event={"ID":"1a7997d1-57a6-4a25-a55c-e56a641573e3","Type":"ContainerStarted","Data":"b449d0f07138796ac72f8a862919acaf247098fac4ee0ec1d7fc756d3428baf7"} Nov 22 07:46:55 crc kubenswrapper[4853]: I1122 07:46:55.471295 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-sn986" event={"ID":"10eb3c0c-487e-4c7c-b422-fc41587f2b3e","Type":"ContainerStarted","Data":"6cff1f5936d0fea29172df00a815aa34ef080ab9363f370e6ca160c4f901945f"} Nov 22 07:46:55 crc kubenswrapper[4853]: I1122 07:46:55.532792 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-4ca2-account-create-twtvw"] Nov 22 07:46:55 crc kubenswrapper[4853]: I1122 07:46:55.560088 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-s6dqt"] Nov 22 07:46:55 crc kubenswrapper[4853]: I1122 07:46:55.726933 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 22 07:46:55 crc kubenswrapper[4853]: I1122 07:46:55.779492 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="adb9d004-7149-44b2-8f2b-ee6da0680491" path="/var/lib/kubelet/pods/adb9d004-7149-44b2-8f2b-ee6da0680491/volumes" Nov 22 07:46:56 crc kubenswrapper[4853]: I1122 07:46:56.485073 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-0ebe-account-create-9lcbc" event={"ID":"74817a4c-27ab-46b7-8ec8-5663379dc5f8","Type":"ContainerStarted","Data":"b9be95a7708f8d0980296a0ab791c5dbfff5e3999bc3d58292b999c49ac4cbe7"} Nov 22 07:46:56 crc kubenswrapper[4853]: I1122 07:46:56.485656 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-0ebe-account-create-9lcbc" event={"ID":"74817a4c-27ab-46b7-8ec8-5663379dc5f8","Type":"ContainerStarted","Data":"2fca57a40aaf4f54c013c38c2f793c9f67e60418acb726d0d63796e745c21488"} Nov 22 07:46:56 crc kubenswrapper[4853]: I1122 07:46:56.490006 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9374c792-71a8-40cf-914f-e91d727ebd5e","Type":"ContainerStarted","Data":"a1e8bdb61b5da9f51134cff76e1fb45935b9ec67db4db16f6892883dc7a84ac2"} Nov 22 07:46:56 crc kubenswrapper[4853]: I1122 07:46:56.492326 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-cell1-4ca2-account-create-twtvw" event={"ID":"e0fa86b0-ac83-432d-884c-c906c2b47a12","Type":"ContainerStarted","Data":"b43950464004dd0da37d15f163169d4c797fe987638b5e5321d3fdb05e348033"} Nov 22 07:46:56 crc kubenswrapper[4853]: I1122 07:46:56.495381 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-46aa-account-create-vvxzs" event={"ID":"267c0415-28fa-43de-a7e0-c64254b85fee","Type":"ContainerStarted","Data":"855251b0bb1e555239ae2e2a9f138ea4ee1ac41896cd63eea37e083431d83617"} Nov 22 07:46:56 crc kubenswrapper[4853]: I1122 07:46:56.498110 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-s6dqt" event={"ID":"b2beefde-5354-4376-8cf2-5f3bd9cde859","Type":"ContainerStarted","Data":"3400beebc62e88f0fd2fdb3332e84f7592594b0fd8d192336346aadab17ad13e"} Nov 22 07:46:56 crc kubenswrapper[4853]: I1122 07:46:56.500827 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-fsblj" event={"ID":"1a7997d1-57a6-4a25-a55c-e56a641573e3","Type":"ContainerStarted","Data":"3ea1c8cba2063368c899e188ed94d06adf47ec3a97c4c306cfc1629fe0b5bf25"} Nov 22 07:46:56 crc kubenswrapper[4853]: I1122 07:46:56.503363 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-sn986" event={"ID":"10eb3c0c-487e-4c7c-b422-fc41587f2b3e","Type":"ContainerStarted","Data":"80eb8ff1e48c44f0475a6f267423257298028dc682231762a19f30d7b3f88196"} Nov 22 07:46:56 crc kubenswrapper[4853]: I1122 07:46:56.506338 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-8478cc79fb-ggl8b" event={"ID":"a9d809e7-9dbc-4c65-96e3-f8d025e97dc4","Type":"ContainerStarted","Data":"ba2eb4a8c81b42cf67ae75c8b29021feee10eaa9d32690c4ee2e00c1428d7cde"} Nov 22 07:46:57 crc kubenswrapper[4853]: I1122 07:46:57.525098 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-s6dqt" event={"ID":"b2beefde-5354-4376-8cf2-5f3bd9cde859","Type":"ContainerStarted","Data":"b3825da8513fcf522abc07c633c3157bee2e3fc873e526dee20a07d81b83340e"} Nov 22 07:46:57 crc kubenswrapper[4853]: I1122 07:46:57.528947 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-4ca2-account-create-twtvw" event={"ID":"e0fa86b0-ac83-432d-884c-c906c2b47a12","Type":"ContainerStarted","Data":"98fbd7ad3838c1218d32364d752993489d8cd741011e0960648e5f8b5cac6738"} Nov 22 07:46:57 crc kubenswrapper[4853]: I1122 07:46:57.550121 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-db-create-s6dqt" podStartSLOduration=4.55009228 podStartE2EDuration="4.55009228s" podCreationTimestamp="2025-11-22 07:46:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:46:57.548221307 +0000 UTC m=+2216.388843933" watchObservedRunningTime="2025-11-22 07:46:57.55009228 +0000 UTC m=+2216.390714906" Nov 22 07:46:57 crc kubenswrapper[4853]: I1122 07:46:57.573778 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-db-create-sn986" podStartSLOduration=5.573736074 podStartE2EDuration="5.573736074s" podCreationTimestamp="2025-11-22 07:46:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:46:57.572024876 +0000 UTC m=+2216.412647502" watchObservedRunningTime="2025-11-22 07:46:57.573736074 +0000 UTC m=+2216.414358700" Nov 22 
07:46:57 crc kubenswrapper[4853]: I1122 07:46:57.598875 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-db-create-fsblj" podStartSLOduration=5.598850919 podStartE2EDuration="5.598850919s" podCreationTimestamp="2025-11-22 07:46:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:46:57.589139636 +0000 UTC m=+2216.429762262" watchObservedRunningTime="2025-11-22 07:46:57.598850919 +0000 UTC m=+2216.439473545" Nov 22 07:46:57 crc kubenswrapper[4853]: I1122 07:46:57.622096 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-46aa-account-create-vvxzs" podStartSLOduration=5.6220715519999995 podStartE2EDuration="5.622071552s" podCreationTimestamp="2025-11-22 07:46:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:46:57.610533757 +0000 UTC m=+2216.451156383" watchObservedRunningTime="2025-11-22 07:46:57.622071552 +0000 UTC m=+2216.462694178" Nov 22 07:46:57 crc kubenswrapper[4853]: I1122 07:46:57.638521 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-0ebe-account-create-9lcbc" podStartSLOduration=4.638498143 podStartE2EDuration="4.638498143s" podCreationTimestamp="2025-11-22 07:46:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:46:57.632233257 +0000 UTC m=+2216.472855893" watchObservedRunningTime="2025-11-22 07:46:57.638498143 +0000 UTC m=+2216.479120759" Nov 22 07:46:58 crc kubenswrapper[4853]: I1122 07:46:58.563565 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-8478cc79fb-ggl8b" event={"ID":"a9d809e7-9dbc-4c65-96e3-f8d025e97dc4","Type":"ContainerStarted","Data":"2b392b91a44eb189f84acb64d88a3fa2a61a9dd16ff3abc92d955e74402c8f1d"} Nov 22 07:46:58 crc kubenswrapper[4853]: I1122 07:46:58.564674 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-8478cc79fb-ggl8b" Nov 22 07:46:58 crc kubenswrapper[4853]: I1122 07:46:58.564728 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-8478cc79fb-ggl8b" Nov 22 07:46:58 crc kubenswrapper[4853]: I1122 07:46:58.584135 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-4ca2-account-create-twtvw" podStartSLOduration=5.584098114 podStartE2EDuration="5.584098114s" podCreationTimestamp="2025-11-22 07:46:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:46:58.582262741 +0000 UTC m=+2217.422885387" watchObservedRunningTime="2025-11-22 07:46:58.584098114 +0000 UTC m=+2217.424720740" Nov 22 07:46:58 crc kubenswrapper[4853]: I1122 07:46:58.605805 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-8478cc79fb-ggl8b" podStartSLOduration=6.605740411 podStartE2EDuration="6.605740411s" podCreationTimestamp="2025-11-22 07:46:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:46:58.601960725 +0000 UTC m=+2217.442583371" watchObservedRunningTime="2025-11-22 07:46:58.605740411 +0000 UTC m=+2217.446363047" Nov 22 07:46:59 crc 
kubenswrapper[4853]: I1122 07:46:59.015648 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-jrg7l" Nov 22 07:46:59 crc kubenswrapper[4853]: I1122 07:46:59.015951 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-jrg7l" Nov 22 07:46:59 crc kubenswrapper[4853]: I1122 07:46:59.068428 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-jrg7l" Nov 22 07:46:59 crc kubenswrapper[4853]: I1122 07:46:59.638839 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-jrg7l" Nov 22 07:46:59 crc kubenswrapper[4853]: I1122 07:46:59.867297 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jrg7l"] Nov 22 07:47:00 crc kubenswrapper[4853]: I1122 07:47:00.591180 4853 generic.go:334] "Generic (PLEG): container finished" podID="e0fa86b0-ac83-432d-884c-c906c2b47a12" containerID="98fbd7ad3838c1218d32364d752993489d8cd741011e0960648e5f8b5cac6738" exitCode=0 Nov 22 07:47:00 crc kubenswrapper[4853]: I1122 07:47:00.591231 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-4ca2-account-create-twtvw" event={"ID":"e0fa86b0-ac83-432d-884c-c906c2b47a12","Type":"ContainerDied","Data":"98fbd7ad3838c1218d32364d752993489d8cd741011e0960648e5f8b5cac6738"} Nov 22 07:47:00 crc kubenswrapper[4853]: I1122 07:47:00.596825 4853 generic.go:334] "Generic (PLEG): container finished" podID="b2beefde-5354-4376-8cf2-5f3bd9cde859" containerID="b3825da8513fcf522abc07c633c3157bee2e3fc873e526dee20a07d81b83340e" exitCode=0 Nov 22 07:47:00 crc kubenswrapper[4853]: I1122 07:47:00.596950 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-s6dqt" event={"ID":"b2beefde-5354-4376-8cf2-5f3bd9cde859","Type":"ContainerDied","Data":"b3825da8513fcf522abc07c633c3157bee2e3fc873e526dee20a07d81b83340e"} Nov 22 07:47:01 crc kubenswrapper[4853]: I1122 07:47:01.297895 4853 patch_prober.go:28] interesting pod/machine-config-daemon-fflvd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 07:47:01 crc kubenswrapper[4853]: I1122 07:47:01.298019 4853 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 07:47:01 crc kubenswrapper[4853]: I1122 07:47:01.611489 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9374c792-71a8-40cf-914f-e91d727ebd5e","Type":"ContainerStarted","Data":"5d7bf3d5c94c3b021db2cb4502857185135736b3ad4af1f53484e510b845cc99"} Nov 22 07:47:01 crc kubenswrapper[4853]: I1122 07:47:01.612071 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-jrg7l" podUID="ff697a20-c1a6-486d-8e8e-a03902c30e6b" containerName="registry-server" containerID="cri-o://cd99fdd8969c4aa6e86e4f722006c0f8c023e9d036f841eb0504cc1f110f1df0" gracePeriod=2 Nov 22 07:47:02 crc kubenswrapper[4853]: I1122 07:47:02.259061 4853 util.go:48] "No ready sandbox 
for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-s6dqt" Nov 22 07:47:02 crc kubenswrapper[4853]: I1122 07:47:02.266003 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-4ca2-account-create-twtvw" Nov 22 07:47:02 crc kubenswrapper[4853]: I1122 07:47:02.382645 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b2beefde-5354-4376-8cf2-5f3bd9cde859-operator-scripts\") pod \"b2beefde-5354-4376-8cf2-5f3bd9cde859\" (UID: \"b2beefde-5354-4376-8cf2-5f3bd9cde859\") " Nov 22 07:47:02 crc kubenswrapper[4853]: I1122 07:47:02.382701 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t5ssv\" (UniqueName: \"kubernetes.io/projected/e0fa86b0-ac83-432d-884c-c906c2b47a12-kube-api-access-t5ssv\") pod \"e0fa86b0-ac83-432d-884c-c906c2b47a12\" (UID: \"e0fa86b0-ac83-432d-884c-c906c2b47a12\") " Nov 22 07:47:02 crc kubenswrapper[4853]: I1122 07:47:02.382982 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e0fa86b0-ac83-432d-884c-c906c2b47a12-operator-scripts\") pod \"e0fa86b0-ac83-432d-884c-c906c2b47a12\" (UID: \"e0fa86b0-ac83-432d-884c-c906c2b47a12\") " Nov 22 07:47:02 crc kubenswrapper[4853]: I1122 07:47:02.383153 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zmkrw\" (UniqueName: \"kubernetes.io/projected/b2beefde-5354-4376-8cf2-5f3bd9cde859-kube-api-access-zmkrw\") pod \"b2beefde-5354-4376-8cf2-5f3bd9cde859\" (UID: \"b2beefde-5354-4376-8cf2-5f3bd9cde859\") " Nov 22 07:47:02 crc kubenswrapper[4853]: I1122 07:47:02.383513 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b2beefde-5354-4376-8cf2-5f3bd9cde859-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b2beefde-5354-4376-8cf2-5f3bd9cde859" (UID: "b2beefde-5354-4376-8cf2-5f3bd9cde859"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:47:02 crc kubenswrapper[4853]: I1122 07:47:02.384486 4853 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b2beefde-5354-4376-8cf2-5f3bd9cde859-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 07:47:02 crc kubenswrapper[4853]: I1122 07:47:02.387150 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e0fa86b0-ac83-432d-884c-c906c2b47a12-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e0fa86b0-ac83-432d-884c-c906c2b47a12" (UID: "e0fa86b0-ac83-432d-884c-c906c2b47a12"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:47:02 crc kubenswrapper[4853]: I1122 07:47:02.391836 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b2beefde-5354-4376-8cf2-5f3bd9cde859-kube-api-access-zmkrw" (OuterVolumeSpecName: "kube-api-access-zmkrw") pod "b2beefde-5354-4376-8cf2-5f3bd9cde859" (UID: "b2beefde-5354-4376-8cf2-5f3bd9cde859"). InnerVolumeSpecName "kube-api-access-zmkrw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:47:02 crc kubenswrapper[4853]: I1122 07:47:02.394914 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0fa86b0-ac83-432d-884c-c906c2b47a12-kube-api-access-t5ssv" (OuterVolumeSpecName: "kube-api-access-t5ssv") pod "e0fa86b0-ac83-432d-884c-c906c2b47a12" (UID: "e0fa86b0-ac83-432d-884c-c906c2b47a12"). InnerVolumeSpecName "kube-api-access-t5ssv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:47:02 crc kubenswrapper[4853]: I1122 07:47:02.487081 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zmkrw\" (UniqueName: \"kubernetes.io/projected/b2beefde-5354-4376-8cf2-5f3bd9cde859-kube-api-access-zmkrw\") on node \"crc\" DevicePath \"\"" Nov 22 07:47:02 crc kubenswrapper[4853]: I1122 07:47:02.487154 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t5ssv\" (UniqueName: \"kubernetes.io/projected/e0fa86b0-ac83-432d-884c-c906c2b47a12-kube-api-access-t5ssv\") on node \"crc\" DevicePath \"\"" Nov 22 07:47:02 crc kubenswrapper[4853]: I1122 07:47:02.487169 4853 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e0fa86b0-ac83-432d-884c-c906c2b47a12-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 07:47:02 crc kubenswrapper[4853]: I1122 07:47:02.638105 4853 generic.go:334] "Generic (PLEG): container finished" podID="ff697a20-c1a6-486d-8e8e-a03902c30e6b" containerID="cd99fdd8969c4aa6e86e4f722006c0f8c023e9d036f841eb0504cc1f110f1df0" exitCode=0 Nov 22 07:47:02 crc kubenswrapper[4853]: I1122 07:47:02.638205 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jrg7l" event={"ID":"ff697a20-c1a6-486d-8e8e-a03902c30e6b","Type":"ContainerDied","Data":"cd99fdd8969c4aa6e86e4f722006c0f8c023e9d036f841eb0504cc1f110f1df0"} Nov 22 07:47:02 crc kubenswrapper[4853]: I1122 07:47:02.641085 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-4ca2-account-create-twtvw" event={"ID":"e0fa86b0-ac83-432d-884c-c906c2b47a12","Type":"ContainerDied","Data":"b43950464004dd0da37d15f163169d4c797fe987638b5e5321d3fdb05e348033"} Nov 22 07:47:02 crc kubenswrapper[4853]: I1122 07:47:02.641141 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b43950464004dd0da37d15f163169d4c797fe987638b5e5321d3fdb05e348033" Nov 22 07:47:02 crc kubenswrapper[4853]: I1122 07:47:02.641206 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-4ca2-account-create-twtvw" Nov 22 07:47:02 crc kubenswrapper[4853]: I1122 07:47:02.643492 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-s6dqt" event={"ID":"b2beefde-5354-4376-8cf2-5f3bd9cde859","Type":"ContainerDied","Data":"3400beebc62e88f0fd2fdb3332e84f7592594b0fd8d192336346aadab17ad13e"} Nov 22 07:47:02 crc kubenswrapper[4853]: I1122 07:47:02.643538 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3400beebc62e88f0fd2fdb3332e84f7592594b0fd8d192336346aadab17ad13e" Nov 22 07:47:02 crc kubenswrapper[4853]: I1122 07:47:02.643581 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-s6dqt" Nov 22 07:47:03 crc kubenswrapper[4853]: I1122 07:47:03.328314 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jrg7l" Nov 22 07:47:03 crc kubenswrapper[4853]: I1122 07:47:03.411408 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rtccr\" (UniqueName: \"kubernetes.io/projected/ff697a20-c1a6-486d-8e8e-a03902c30e6b-kube-api-access-rtccr\") pod \"ff697a20-c1a6-486d-8e8e-a03902c30e6b\" (UID: \"ff697a20-c1a6-486d-8e8e-a03902c30e6b\") " Nov 22 07:47:03 crc kubenswrapper[4853]: I1122 07:47:03.411624 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff697a20-c1a6-486d-8e8e-a03902c30e6b-utilities\") pod \"ff697a20-c1a6-486d-8e8e-a03902c30e6b\" (UID: \"ff697a20-c1a6-486d-8e8e-a03902c30e6b\") " Nov 22 07:47:03 crc kubenswrapper[4853]: I1122 07:47:03.411855 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff697a20-c1a6-486d-8e8e-a03902c30e6b-catalog-content\") pod \"ff697a20-c1a6-486d-8e8e-a03902c30e6b\" (UID: \"ff697a20-c1a6-486d-8e8e-a03902c30e6b\") " Nov 22 07:47:03 crc kubenswrapper[4853]: I1122 07:47:03.414811 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ff697a20-c1a6-486d-8e8e-a03902c30e6b-utilities" (OuterVolumeSpecName: "utilities") pod "ff697a20-c1a6-486d-8e8e-a03902c30e6b" (UID: "ff697a20-c1a6-486d-8e8e-a03902c30e6b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:47:03 crc kubenswrapper[4853]: I1122 07:47:03.425275 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff697a20-c1a6-486d-8e8e-a03902c30e6b-kube-api-access-rtccr" (OuterVolumeSpecName: "kube-api-access-rtccr") pod "ff697a20-c1a6-486d-8e8e-a03902c30e6b" (UID: "ff697a20-c1a6-486d-8e8e-a03902c30e6b"). InnerVolumeSpecName "kube-api-access-rtccr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:47:03 crc kubenswrapper[4853]: I1122 07:47:03.428766 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rtccr\" (UniqueName: \"kubernetes.io/projected/ff697a20-c1a6-486d-8e8e-a03902c30e6b-kube-api-access-rtccr\") on node \"crc\" DevicePath \"\"" Nov 22 07:47:03 crc kubenswrapper[4853]: I1122 07:47:03.428892 4853 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff697a20-c1a6-486d-8e8e-a03902c30e6b-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 07:47:03 crc kubenswrapper[4853]: I1122 07:47:03.561377 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ff697a20-c1a6-486d-8e8e-a03902c30e6b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ff697a20-c1a6-486d-8e8e-a03902c30e6b" (UID: "ff697a20-c1a6-486d-8e8e-a03902c30e6b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:47:03 crc kubenswrapper[4853]: I1122 07:47:03.633512 4853 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff697a20-c1a6-486d-8e8e-a03902c30e6b-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 07:47:03 crc kubenswrapper[4853]: I1122 07:47:03.658939 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jrg7l" event={"ID":"ff697a20-c1a6-486d-8e8e-a03902c30e6b","Type":"ContainerDied","Data":"2c6b5c9875c84d9109e1ee2978ec1631d8213ddda125e8e4cb06ff7d30c9490e"} Nov 22 07:47:03 crc kubenswrapper[4853]: I1122 07:47:03.659035 4853 scope.go:117] "RemoveContainer" containerID="cd99fdd8969c4aa6e86e4f722006c0f8c023e9d036f841eb0504cc1f110f1df0" Nov 22 07:47:03 crc kubenswrapper[4853]: I1122 07:47:03.659077 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jrg7l" Nov 22 07:47:03 crc kubenswrapper[4853]: I1122 07:47:03.716381 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jrg7l"] Nov 22 07:47:03 crc kubenswrapper[4853]: I1122 07:47:03.726048 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-jrg7l"] Nov 22 07:47:03 crc kubenswrapper[4853]: I1122 07:47:03.772651 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff697a20-c1a6-486d-8e8e-a03902c30e6b" path="/var/lib/kubelet/pods/ff697a20-c1a6-486d-8e8e-a03902c30e6b/volumes" Nov 22 07:47:04 crc kubenswrapper[4853]: I1122 07:47:04.383403 4853 scope.go:117] "RemoveContainer" containerID="1bb6d8cab139483d13f3efc3cec60a8fc9944d4c66c0d76f6ec7a1f86a732ded" Nov 22 07:47:04 crc kubenswrapper[4853]: I1122 07:47:04.438964 4853 scope.go:117] "RemoveContainer" containerID="6b281d359746839c9dfcb6e3569d9d47a062c21da2d72734e8a2aaf959ac099d" Nov 22 07:47:05 crc kubenswrapper[4853]: I1122 07:47:05.688205 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9374c792-71a8-40cf-914f-e91d727ebd5e","Type":"ContainerStarted","Data":"883f301a3c89f8cdc661ac418a0316aecb20c3b253b3b1e5421be1f8735085c2"} Nov 22 07:47:10 crc kubenswrapper[4853]: E1122 07:47:10.206768 4853 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod10eb3c0c_487e_4c7c_b422_fc41587f2b3e.slice/crio-conmon-80eb8ff1e48c44f0475a6f267423257298028dc682231762a19f30d7b3f88196.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod267c0415_28fa_43de_a7e0_c64254b85fee.slice/crio-855251b0bb1e555239ae2e2a9f138ea4ee1ac41896cd63eea37e083431d83617.scope\": RecentStats: unable to find data in memory cache]" Nov 22 07:47:10 crc kubenswrapper[4853]: I1122 07:47:10.753628 4853 generic.go:334] "Generic (PLEG): container finished" podID="267c0415-28fa-43de-a7e0-c64254b85fee" containerID="855251b0bb1e555239ae2e2a9f138ea4ee1ac41896cd63eea37e083431d83617" exitCode=0 Nov 22 07:47:10 crc kubenswrapper[4853]: I1122 07:47:10.754148 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-46aa-account-create-vvxzs" event={"ID":"267c0415-28fa-43de-a7e0-c64254b85fee","Type":"ContainerDied","Data":"855251b0bb1e555239ae2e2a9f138ea4ee1ac41896cd63eea37e083431d83617"} Nov 22 07:47:10 crc kubenswrapper[4853]: I1122 
07:47:10.761207 4853 generic.go:334] "Generic (PLEG): container finished" podID="1a7997d1-57a6-4a25-a55c-e56a641573e3" containerID="3ea1c8cba2063368c899e188ed94d06adf47ec3a97c4c306cfc1629fe0b5bf25" exitCode=0 Nov 22 07:47:10 crc kubenswrapper[4853]: I1122 07:47:10.761303 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-fsblj" event={"ID":"1a7997d1-57a6-4a25-a55c-e56a641573e3","Type":"ContainerDied","Data":"3ea1c8cba2063368c899e188ed94d06adf47ec3a97c4c306cfc1629fe0b5bf25"} Nov 22 07:47:10 crc kubenswrapper[4853]: I1122 07:47:10.766876 4853 generic.go:334] "Generic (PLEG): container finished" podID="10eb3c0c-487e-4c7c-b422-fc41587f2b3e" containerID="80eb8ff1e48c44f0475a6f267423257298028dc682231762a19f30d7b3f88196" exitCode=0 Nov 22 07:47:10 crc kubenswrapper[4853]: I1122 07:47:10.766959 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-sn986" event={"ID":"10eb3c0c-487e-4c7c-b422-fc41587f2b3e","Type":"ContainerDied","Data":"80eb8ff1e48c44f0475a6f267423257298028dc682231762a19f30d7b3f88196"} Nov 22 07:47:10 crc kubenswrapper[4853]: I1122 07:47:10.769294 4853 generic.go:334] "Generic (PLEG): container finished" podID="74817a4c-27ab-46b7-8ec8-5663379dc5f8" containerID="b9be95a7708f8d0980296a0ab791c5dbfff5e3999bc3d58292b999c49ac4cbe7" exitCode=0 Nov 22 07:47:10 crc kubenswrapper[4853]: I1122 07:47:10.769345 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-0ebe-account-create-9lcbc" event={"ID":"74817a4c-27ab-46b7-8ec8-5663379dc5f8","Type":"ContainerDied","Data":"b9be95a7708f8d0980296a0ab791c5dbfff5e3999bc3d58292b999c49ac4cbe7"} Nov 22 07:47:10 crc kubenswrapper[4853]: I1122 07:47:10.772417 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9374c792-71a8-40cf-914f-e91d727ebd5e","Type":"ContainerStarted","Data":"f45338c375aac7bd47ae8f10587a84cfee4101df7966d61a5db72d4da96f6179"} Nov 22 07:47:11 crc kubenswrapper[4853]: I1122 07:47:11.982244 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-8478cc79fb-ggl8b" Nov 22 07:47:12 crc kubenswrapper[4853]: I1122 07:47:12.403870 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-46aa-account-create-vvxzs" Nov 22 07:47:12 crc kubenswrapper[4853]: I1122 07:47:12.419874 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-0ebe-account-create-9lcbc" Nov 22 07:47:12 crc kubenswrapper[4853]: I1122 07:47:12.434901 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-sn986" Nov 22 07:47:12 crc kubenswrapper[4853]: I1122 07:47:12.480021 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-fsblj" Nov 22 07:47:12 crc kubenswrapper[4853]: I1122 07:47:12.504947 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/74817a4c-27ab-46b7-8ec8-5663379dc5f8-operator-scripts\") pod \"74817a4c-27ab-46b7-8ec8-5663379dc5f8\" (UID: \"74817a4c-27ab-46b7-8ec8-5663379dc5f8\") " Nov 22 07:47:12 crc kubenswrapper[4853]: I1122 07:47:12.505152 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/267c0415-28fa-43de-a7e0-c64254b85fee-operator-scripts\") pod \"267c0415-28fa-43de-a7e0-c64254b85fee\" (UID: \"267c0415-28fa-43de-a7e0-c64254b85fee\") " Nov 22 07:47:12 crc kubenswrapper[4853]: I1122 07:47:12.505191 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/10eb3c0c-487e-4c7c-b422-fc41587f2b3e-operator-scripts\") pod \"10eb3c0c-487e-4c7c-b422-fc41587f2b3e\" (UID: \"10eb3c0c-487e-4c7c-b422-fc41587f2b3e\") " Nov 22 07:47:12 crc kubenswrapper[4853]: I1122 07:47:12.505219 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-czmvs\" (UniqueName: \"kubernetes.io/projected/10eb3c0c-487e-4c7c-b422-fc41587f2b3e-kube-api-access-czmvs\") pod \"10eb3c0c-487e-4c7c-b422-fc41587f2b3e\" (UID: \"10eb3c0c-487e-4c7c-b422-fc41587f2b3e\") " Nov 22 07:47:12 crc kubenswrapper[4853]: I1122 07:47:12.505366 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-klp5p\" (UniqueName: \"kubernetes.io/projected/267c0415-28fa-43de-a7e0-c64254b85fee-kube-api-access-klp5p\") pod \"267c0415-28fa-43de-a7e0-c64254b85fee\" (UID: \"267c0415-28fa-43de-a7e0-c64254b85fee\") " Nov 22 07:47:12 crc kubenswrapper[4853]: I1122 07:47:12.505460 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v76zk\" (UniqueName: \"kubernetes.io/projected/74817a4c-27ab-46b7-8ec8-5663379dc5f8-kube-api-access-v76zk\") pod \"74817a4c-27ab-46b7-8ec8-5663379dc5f8\" (UID: \"74817a4c-27ab-46b7-8ec8-5663379dc5f8\") " Nov 22 07:47:12 crc kubenswrapper[4853]: I1122 07:47:12.509210 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/74817a4c-27ab-46b7-8ec8-5663379dc5f8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "74817a4c-27ab-46b7-8ec8-5663379dc5f8" (UID: "74817a4c-27ab-46b7-8ec8-5663379dc5f8"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:47:12 crc kubenswrapper[4853]: I1122 07:47:12.509263 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/267c0415-28fa-43de-a7e0-c64254b85fee-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "267c0415-28fa-43de-a7e0-c64254b85fee" (UID: "267c0415-28fa-43de-a7e0-c64254b85fee"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:47:12 crc kubenswrapper[4853]: I1122 07:47:12.513181 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/10eb3c0c-487e-4c7c-b422-fc41587f2b3e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "10eb3c0c-487e-4c7c-b422-fc41587f2b3e" (UID: "10eb3c0c-487e-4c7c-b422-fc41587f2b3e"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:47:12 crc kubenswrapper[4853]: I1122 07:47:12.513598 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/10eb3c0c-487e-4c7c-b422-fc41587f2b3e-kube-api-access-czmvs" (OuterVolumeSpecName: "kube-api-access-czmvs") pod "10eb3c0c-487e-4c7c-b422-fc41587f2b3e" (UID: "10eb3c0c-487e-4c7c-b422-fc41587f2b3e"). InnerVolumeSpecName "kube-api-access-czmvs". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:47:12 crc kubenswrapper[4853]: I1122 07:47:12.513654 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74817a4c-27ab-46b7-8ec8-5663379dc5f8-kube-api-access-v76zk" (OuterVolumeSpecName: "kube-api-access-v76zk") pod "74817a4c-27ab-46b7-8ec8-5663379dc5f8" (UID: "74817a4c-27ab-46b7-8ec8-5663379dc5f8"). InnerVolumeSpecName "kube-api-access-v76zk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:47:12 crc kubenswrapper[4853]: I1122 07:47:12.515093 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/267c0415-28fa-43de-a7e0-c64254b85fee-kube-api-access-klp5p" (OuterVolumeSpecName: "kube-api-access-klp5p") pod "267c0415-28fa-43de-a7e0-c64254b85fee" (UID: "267c0415-28fa-43de-a7e0-c64254b85fee"). InnerVolumeSpecName "kube-api-access-klp5p". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:47:12 crc kubenswrapper[4853]: I1122 07:47:12.608354 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1a7997d1-57a6-4a25-a55c-e56a641573e3-operator-scripts\") pod \"1a7997d1-57a6-4a25-a55c-e56a641573e3\" (UID: \"1a7997d1-57a6-4a25-a55c-e56a641573e3\") " Nov 22 07:47:12 crc kubenswrapper[4853]: I1122 07:47:12.608633 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6f99l\" (UniqueName: \"kubernetes.io/projected/1a7997d1-57a6-4a25-a55c-e56a641573e3-kube-api-access-6f99l\") pod \"1a7997d1-57a6-4a25-a55c-e56a641573e3\" (UID: \"1a7997d1-57a6-4a25-a55c-e56a641573e3\") " Nov 22 07:47:12 crc kubenswrapper[4853]: I1122 07:47:12.609141 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1a7997d1-57a6-4a25-a55c-e56a641573e3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1a7997d1-57a6-4a25-a55c-e56a641573e3" (UID: "1a7997d1-57a6-4a25-a55c-e56a641573e3"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:47:12 crc kubenswrapper[4853]: I1122 07:47:12.609965 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v76zk\" (UniqueName: \"kubernetes.io/projected/74817a4c-27ab-46b7-8ec8-5663379dc5f8-kube-api-access-v76zk\") on node \"crc\" DevicePath \"\"" Nov 22 07:47:12 crc kubenswrapper[4853]: I1122 07:47:12.609997 4853 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/74817a4c-27ab-46b7-8ec8-5663379dc5f8-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 07:47:12 crc kubenswrapper[4853]: I1122 07:47:12.610011 4853 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1a7997d1-57a6-4a25-a55c-e56a641573e3-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 07:47:12 crc kubenswrapper[4853]: I1122 07:47:12.610023 4853 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/267c0415-28fa-43de-a7e0-c64254b85fee-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 07:47:12 crc kubenswrapper[4853]: I1122 07:47:12.610035 4853 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/10eb3c0c-487e-4c7c-b422-fc41587f2b3e-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 07:47:12 crc kubenswrapper[4853]: I1122 07:47:12.610045 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-czmvs\" (UniqueName: \"kubernetes.io/projected/10eb3c0c-487e-4c7c-b422-fc41587f2b3e-kube-api-access-czmvs\") on node \"crc\" DevicePath \"\"" Nov 22 07:47:12 crc kubenswrapper[4853]: I1122 07:47:12.610056 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-klp5p\" (UniqueName: \"kubernetes.io/projected/267c0415-28fa-43de-a7e0-c64254b85fee-kube-api-access-klp5p\") on node \"crc\" DevicePath \"\"" Nov 22 07:47:12 crc kubenswrapper[4853]: I1122 07:47:12.612813 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a7997d1-57a6-4a25-a55c-e56a641573e3-kube-api-access-6f99l" (OuterVolumeSpecName: "kube-api-access-6f99l") pod "1a7997d1-57a6-4a25-a55c-e56a641573e3" (UID: "1a7997d1-57a6-4a25-a55c-e56a641573e3"). InnerVolumeSpecName "kube-api-access-6f99l". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:47:12 crc kubenswrapper[4853]: I1122 07:47:12.712184 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6f99l\" (UniqueName: \"kubernetes.io/projected/1a7997d1-57a6-4a25-a55c-e56a641573e3-kube-api-access-6f99l\") on node \"crc\" DevicePath \"\"" Nov 22 07:47:12 crc kubenswrapper[4853]: I1122 07:47:12.810149 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-fsblj" Nov 22 07:47:12 crc kubenswrapper[4853]: I1122 07:47:12.810340 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-fsblj" event={"ID":"1a7997d1-57a6-4a25-a55c-e56a641573e3","Type":"ContainerDied","Data":"b449d0f07138796ac72f8a862919acaf247098fac4ee0ec1d7fc756d3428baf7"} Nov 22 07:47:12 crc kubenswrapper[4853]: I1122 07:47:12.810441 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b449d0f07138796ac72f8a862919acaf247098fac4ee0ec1d7fc756d3428baf7" Nov 22 07:47:12 crc kubenswrapper[4853]: I1122 07:47:12.812898 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-sn986" event={"ID":"10eb3c0c-487e-4c7c-b422-fc41587f2b3e","Type":"ContainerDied","Data":"6cff1f5936d0fea29172df00a815aa34ef080ab9363f370e6ca160c4f901945f"} Nov 22 07:47:12 crc kubenswrapper[4853]: I1122 07:47:12.812982 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6cff1f5936d0fea29172df00a815aa34ef080ab9363f370e6ca160c4f901945f" Nov 22 07:47:12 crc kubenswrapper[4853]: I1122 07:47:12.813347 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-sn986" Nov 22 07:47:12 crc kubenswrapper[4853]: I1122 07:47:12.815708 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-0ebe-account-create-9lcbc" event={"ID":"74817a4c-27ab-46b7-8ec8-5663379dc5f8","Type":"ContainerDied","Data":"2fca57a40aaf4f54c013c38c2f793c9f67e60418acb726d0d63796e745c21488"} Nov 22 07:47:12 crc kubenswrapper[4853]: I1122 07:47:12.815799 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2fca57a40aaf4f54c013c38c2f793c9f67e60418acb726d0d63796e745c21488" Nov 22 07:47:12 crc kubenswrapper[4853]: I1122 07:47:12.816003 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-0ebe-account-create-9lcbc" Nov 22 07:47:12 crc kubenswrapper[4853]: I1122 07:47:12.821670 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9374c792-71a8-40cf-914f-e91d727ebd5e","Type":"ContainerStarted","Data":"af40682eaa1c1718e621306f5ca644fcab9c0e7fe6cace56875144a26f984a09"} Nov 22 07:47:12 crc kubenswrapper[4853]: I1122 07:47:12.822330 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 22 07:47:12 crc kubenswrapper[4853]: I1122 07:47:12.825076 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-46aa-account-create-vvxzs" event={"ID":"267c0415-28fa-43de-a7e0-c64254b85fee","Type":"ContainerDied","Data":"c5b36fa6f15a112a76e94c86c8342bbbd27860888fb54186837d4e9374bc42ca"} Nov 22 07:47:12 crc kubenswrapper[4853]: I1122 07:47:12.825116 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c5b36fa6f15a112a76e94c86c8342bbbd27860888fb54186837d4e9374bc42ca" Nov 22 07:47:12 crc kubenswrapper[4853]: I1122 07:47:12.825197 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-46aa-account-create-vvxzs" Nov 22 07:47:12 crc kubenswrapper[4853]: I1122 07:47:12.920497 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.656930284 podStartE2EDuration="18.920467057s" podCreationTimestamp="2025-11-22 07:46:54 +0000 UTC" firstStartedPulling="2025-11-22 07:46:55.77817005 +0000 UTC m=+2214.618792676" lastFinishedPulling="2025-11-22 07:47:12.041706823 +0000 UTC m=+2230.882329449" observedRunningTime="2025-11-22 07:47:12.857783321 +0000 UTC m=+2231.698405947" watchObservedRunningTime="2025-11-22 07:47:12.920467057 +0000 UTC m=+2231.761089683" Nov 22 07:47:13 crc kubenswrapper[4853]: I1122 07:47:13.657937 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-c5tjs"] Nov 22 07:47:13 crc kubenswrapper[4853]: E1122 07:47:13.658489 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10eb3c0c-487e-4c7c-b422-fc41587f2b3e" containerName="mariadb-database-create" Nov 22 07:47:13 crc kubenswrapper[4853]: I1122 07:47:13.658509 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="10eb3c0c-487e-4c7c-b422-fc41587f2b3e" containerName="mariadb-database-create" Nov 22 07:47:13 crc kubenswrapper[4853]: E1122 07:47:13.658521 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="267c0415-28fa-43de-a7e0-c64254b85fee" containerName="mariadb-account-create" Nov 22 07:47:13 crc kubenswrapper[4853]: I1122 07:47:13.658529 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="267c0415-28fa-43de-a7e0-c64254b85fee" containerName="mariadb-account-create" Nov 22 07:47:13 crc kubenswrapper[4853]: E1122 07:47:13.658549 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0fa86b0-ac83-432d-884c-c906c2b47a12" containerName="mariadb-account-create" Nov 22 07:47:13 crc kubenswrapper[4853]: I1122 07:47:13.658557 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0fa86b0-ac83-432d-884c-c906c2b47a12" containerName="mariadb-account-create" Nov 22 07:47:13 crc kubenswrapper[4853]: E1122 07:47:13.658578 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a7997d1-57a6-4a25-a55c-e56a641573e3" containerName="mariadb-database-create" Nov 22 07:47:13 crc kubenswrapper[4853]: I1122 07:47:13.658584 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a7997d1-57a6-4a25-a55c-e56a641573e3" containerName="mariadb-database-create" Nov 22 07:47:13 crc kubenswrapper[4853]: E1122 07:47:13.658609 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff697a20-c1a6-486d-8e8e-a03902c30e6b" containerName="extract-content" Nov 22 07:47:13 crc kubenswrapper[4853]: I1122 07:47:13.658618 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff697a20-c1a6-486d-8e8e-a03902c30e6b" containerName="extract-content" Nov 22 07:47:13 crc kubenswrapper[4853]: E1122 07:47:13.658630 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff697a20-c1a6-486d-8e8e-a03902c30e6b" containerName="extract-utilities" Nov 22 07:47:13 crc kubenswrapper[4853]: I1122 07:47:13.658636 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff697a20-c1a6-486d-8e8e-a03902c30e6b" containerName="extract-utilities" Nov 22 07:47:13 crc kubenswrapper[4853]: E1122 07:47:13.658657 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff697a20-c1a6-486d-8e8e-a03902c30e6b" containerName="registry-server" Nov 22 07:47:13 crc kubenswrapper[4853]: I1122 
Nov 22 07:47:13 crc kubenswrapper[4853]: E1122 07:47:13.658684 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74817a4c-27ab-46b7-8ec8-5663379dc5f8" containerName="mariadb-account-create"
Nov 22 07:47:13 crc kubenswrapper[4853]: I1122 07:47:13.658690 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="74817a4c-27ab-46b7-8ec8-5663379dc5f8" containerName="mariadb-account-create"
Nov 22 07:47:13 crc kubenswrapper[4853]: E1122 07:47:13.658702 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2beefde-5354-4376-8cf2-5f3bd9cde859" containerName="mariadb-database-create"
Nov 22 07:47:13 crc kubenswrapper[4853]: I1122 07:47:13.658709 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2beefde-5354-4376-8cf2-5f3bd9cde859" containerName="mariadb-database-create"
Nov 22 07:47:13 crc kubenswrapper[4853]: I1122 07:47:13.658936 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="74817a4c-27ab-46b7-8ec8-5663379dc5f8" containerName="mariadb-account-create"
Nov 22 07:47:13 crc kubenswrapper[4853]: I1122 07:47:13.658950 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff697a20-c1a6-486d-8e8e-a03902c30e6b" containerName="registry-server"
Nov 22 07:47:13 crc kubenswrapper[4853]: I1122 07:47:13.658969 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0fa86b0-ac83-432d-884c-c906c2b47a12" containerName="mariadb-account-create"
Nov 22 07:47:13 crc kubenswrapper[4853]: I1122 07:47:13.658985 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2beefde-5354-4376-8cf2-5f3bd9cde859" containerName="mariadb-database-create"
Nov 22 07:47:13 crc kubenswrapper[4853]: I1122 07:47:13.658991 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="267c0415-28fa-43de-a7e0-c64254b85fee" containerName="mariadb-account-create"
Nov 22 07:47:13 crc kubenswrapper[4853]: I1122 07:47:13.659003 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="10eb3c0c-487e-4c7c-b422-fc41587f2b3e" containerName="mariadb-database-create"
Nov 22 07:47:13 crc kubenswrapper[4853]: I1122 07:47:13.659012 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a7997d1-57a6-4a25-a55c-e56a641573e3" containerName="mariadb-database-create"
Nov 22 07:47:13 crc kubenswrapper[4853]: I1122 07:47:13.659959 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-c5tjs"
Nov 22 07:47:13 crc kubenswrapper[4853]: I1122 07:47:13.662219 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts"
Nov 22 07:47:13 crc kubenswrapper[4853]: I1122 07:47:13.662576 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data"
Nov 22 07:47:13 crc kubenswrapper[4853]: I1122 07:47:13.669269 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-c5tjs"]
Nov 22 07:47:13 crc kubenswrapper[4853]: I1122 07:47:13.680059 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-qtghl"
Nov 22 07:47:13 crc kubenswrapper[4853]: I1122 07:47:13.741244 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c7bb7e8f-c36e-4027-b953-384bff85680b-scripts\") pod \"nova-cell0-conductor-db-sync-c5tjs\" (UID: \"c7bb7e8f-c36e-4027-b953-384bff85680b\") " pod="openstack/nova-cell0-conductor-db-sync-c5tjs"
Nov 22 07:47:13 crc kubenswrapper[4853]: I1122 07:47:13.742149 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tzrbd\" (UniqueName: \"kubernetes.io/projected/c7bb7e8f-c36e-4027-b953-384bff85680b-kube-api-access-tzrbd\") pod \"nova-cell0-conductor-db-sync-c5tjs\" (UID: \"c7bb7e8f-c36e-4027-b953-384bff85680b\") " pod="openstack/nova-cell0-conductor-db-sync-c5tjs"
Nov 22 07:47:13 crc kubenswrapper[4853]: I1122 07:47:13.742279 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7bb7e8f-c36e-4027-b953-384bff85680b-config-data\") pod \"nova-cell0-conductor-db-sync-c5tjs\" (UID: \"c7bb7e8f-c36e-4027-b953-384bff85680b\") " pod="openstack/nova-cell0-conductor-db-sync-c5tjs"
Nov 22 07:47:13 crc kubenswrapper[4853]: I1122 07:47:13.742560 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7bb7e8f-c36e-4027-b953-384bff85680b-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-c5tjs\" (UID: \"c7bb7e8f-c36e-4027-b953-384bff85680b\") " pod="openstack/nova-cell0-conductor-db-sync-c5tjs"
Nov 22 07:47:13 crc kubenswrapper[4853]: I1122 07:47:13.802466 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-8478cc79fb-ggl8b"
Nov 22 07:47:13 crc kubenswrapper[4853]: I1122 07:47:13.844668 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7bb7e8f-c36e-4027-b953-384bff85680b-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-c5tjs\" (UID: \"c7bb7e8f-c36e-4027-b953-384bff85680b\") " pod="openstack/nova-cell0-conductor-db-sync-c5tjs"
Nov 22 07:47:13 crc kubenswrapper[4853]: I1122 07:47:13.844739 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c7bb7e8f-c36e-4027-b953-384bff85680b-scripts\") pod \"nova-cell0-conductor-db-sync-c5tjs\" (UID: \"c7bb7e8f-c36e-4027-b953-384bff85680b\") " pod="openstack/nova-cell0-conductor-db-sync-c5tjs"
Nov 22 07:47:13 crc kubenswrapper[4853]: I1122 07:47:13.845059 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tzrbd\" (UniqueName: \"kubernetes.io/projected/c7bb7e8f-c36e-4027-b953-384bff85680b-kube-api-access-tzrbd\") pod \"nova-cell0-conductor-db-sync-c5tjs\" (UID: \"c7bb7e8f-c36e-4027-b953-384bff85680b\") " pod="openstack/nova-cell0-conductor-db-sync-c5tjs"
\"kube-api-access-tzrbd\" (UniqueName: \"kubernetes.io/projected/c7bb7e8f-c36e-4027-b953-384bff85680b-kube-api-access-tzrbd\") pod \"nova-cell0-conductor-db-sync-c5tjs\" (UID: \"c7bb7e8f-c36e-4027-b953-384bff85680b\") " pod="openstack/nova-cell0-conductor-db-sync-c5tjs" Nov 22 07:47:13 crc kubenswrapper[4853]: I1122 07:47:13.845094 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7bb7e8f-c36e-4027-b953-384bff85680b-config-data\") pod \"nova-cell0-conductor-db-sync-c5tjs\" (UID: \"c7bb7e8f-c36e-4027-b953-384bff85680b\") " pod="openstack/nova-cell0-conductor-db-sync-c5tjs" Nov 22 07:47:13 crc kubenswrapper[4853]: I1122 07:47:13.855483 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7bb7e8f-c36e-4027-b953-384bff85680b-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-c5tjs\" (UID: \"c7bb7e8f-c36e-4027-b953-384bff85680b\") " pod="openstack/nova-cell0-conductor-db-sync-c5tjs" Nov 22 07:47:13 crc kubenswrapper[4853]: I1122 07:47:13.865800 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7bb7e8f-c36e-4027-b953-384bff85680b-config-data\") pod \"nova-cell0-conductor-db-sync-c5tjs\" (UID: \"c7bb7e8f-c36e-4027-b953-384bff85680b\") " pod="openstack/nova-cell0-conductor-db-sync-c5tjs" Nov 22 07:47:13 crc kubenswrapper[4853]: I1122 07:47:13.868383 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c7bb7e8f-c36e-4027-b953-384bff85680b-scripts\") pod \"nova-cell0-conductor-db-sync-c5tjs\" (UID: \"c7bb7e8f-c36e-4027-b953-384bff85680b\") " pod="openstack/nova-cell0-conductor-db-sync-c5tjs" Nov 22 07:47:13 crc kubenswrapper[4853]: I1122 07:47:13.878820 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tzrbd\" (UniqueName: \"kubernetes.io/projected/c7bb7e8f-c36e-4027-b953-384bff85680b-kube-api-access-tzrbd\") pod \"nova-cell0-conductor-db-sync-c5tjs\" (UID: \"c7bb7e8f-c36e-4027-b953-384bff85680b\") " pod="openstack/nova-cell0-conductor-db-sync-c5tjs" Nov 22 07:47:13 crc kubenswrapper[4853]: I1122 07:47:13.990276 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-c5tjs" Nov 22 07:47:14 crc kubenswrapper[4853]: I1122 07:47:14.759197 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-c5tjs"] Nov 22 07:47:14 crc kubenswrapper[4853]: W1122 07:47:14.802559 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc7bb7e8f_c36e_4027_b953_384bff85680b.slice/crio-c28cff82e8010169a4d0376216a3c1fa1484dcf808c6010639cc3a6910e8b2d5 WatchSource:0}: Error finding container c28cff82e8010169a4d0376216a3c1fa1484dcf808c6010639cc3a6910e8b2d5: Status 404 returned error can't find the container with id c28cff82e8010169a4d0376216a3c1fa1484dcf808c6010639cc3a6910e8b2d5 Nov 22 07:47:14 crc kubenswrapper[4853]: I1122 07:47:14.853513 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-c5tjs" event={"ID":"c7bb7e8f-c36e-4027-b953-384bff85680b","Type":"ContainerStarted","Data":"c28cff82e8010169a4d0376216a3c1fa1484dcf808c6010639cc3a6910e8b2d5"} Nov 22 07:47:24 crc kubenswrapper[4853]: I1122 07:47:24.962989 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Nov 22 07:47:31 crc kubenswrapper[4853]: I1122 07:47:31.297490 4853 patch_prober.go:28] interesting pod/machine-config-daemon-fflvd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 07:47:31 crc kubenswrapper[4853]: I1122 07:47:31.298209 4853 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 07:47:34 crc kubenswrapper[4853]: I1122 07:47:34.626265 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 22 07:47:34 crc kubenswrapper[4853]: I1122 07:47:34.627164 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="573160d1-5593-42ee-906a-44b4fbc5abe4" containerName="kube-state-metrics" containerID="cri-o://67599a7a5981d6d4054a2c3fb6d72a75ee4653bef9ac1b3f2df7845a30f145ae" gracePeriod=30 Nov 22 07:47:34 crc kubenswrapper[4853]: I1122 07:47:34.771415 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-0"] Nov 22 07:47:34 crc kubenswrapper[4853]: I1122 07:47:34.772307 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/mysqld-exporter-0" podUID="41a12382-0497-4150-b1bb-002d4df97f20" containerName="mysqld-exporter" containerID="cri-o://1ddf0290074287948ab8425e368fb8d4158583b7821046c8f62b935c219ccd0f" gracePeriod=30 Nov 22 07:47:36 crc kubenswrapper[4853]: I1122 07:47:36.151532 4853 generic.go:334] "Generic (PLEG): container finished" podID="41a12382-0497-4150-b1bb-002d4df97f20" containerID="1ddf0290074287948ab8425e368fb8d4158583b7821046c8f62b935c219ccd0f" exitCode=2 Nov 22 07:47:36 crc kubenswrapper[4853]: I1122 07:47:36.151618 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" 
event={"ID":"41a12382-0497-4150-b1bb-002d4df97f20","Type":"ContainerDied","Data":"1ddf0290074287948ab8425e368fb8d4158583b7821046c8f62b935c219ccd0f"} Nov 22 07:47:36 crc kubenswrapper[4853]: I1122 07:47:36.155544 4853 generic.go:334] "Generic (PLEG): container finished" podID="573160d1-5593-42ee-906a-44b4fbc5abe4" containerID="67599a7a5981d6d4054a2c3fb6d72a75ee4653bef9ac1b3f2df7845a30f145ae" exitCode=2 Nov 22 07:47:36 crc kubenswrapper[4853]: I1122 07:47:36.155602 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"573160d1-5593-42ee-906a-44b4fbc5abe4","Type":"ContainerDied","Data":"67599a7a5981d6d4054a2c3fb6d72a75ee4653bef9ac1b3f2df7845a30f145ae"} Nov 22 07:47:38 crc kubenswrapper[4853]: I1122 07:47:38.184234 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"573160d1-5593-42ee-906a-44b4fbc5abe4","Type":"ContainerDied","Data":"3f71e9075bc0138e28238a9ddb1f2c7ce635150ee8887685056c0c5440b33ef7"} Nov 22 07:47:38 crc kubenswrapper[4853]: I1122 07:47:38.185087 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3f71e9075bc0138e28238a9ddb1f2c7ce635150ee8887685056c0c5440b33ef7" Nov 22 07:47:38 crc kubenswrapper[4853]: I1122 07:47:38.299652 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 22 07:47:38 crc kubenswrapper[4853]: I1122 07:47:38.305496 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0" Nov 22 07:47:38 crc kubenswrapper[4853]: I1122 07:47:38.335872 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41a12382-0497-4150-b1bb-002d4df97f20-combined-ca-bundle\") pod \"41a12382-0497-4150-b1bb-002d4df97f20\" (UID: \"41a12382-0497-4150-b1bb-002d4df97f20\") " Nov 22 07:47:38 crc kubenswrapper[4853]: I1122 07:47:38.336104 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6j5jx\" (UniqueName: \"kubernetes.io/projected/41a12382-0497-4150-b1bb-002d4df97f20-kube-api-access-6j5jx\") pod \"41a12382-0497-4150-b1bb-002d4df97f20\" (UID: \"41a12382-0497-4150-b1bb-002d4df97f20\") " Nov 22 07:47:38 crc kubenswrapper[4853]: I1122 07:47:38.336275 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41a12382-0497-4150-b1bb-002d4df97f20-config-data\") pod \"41a12382-0497-4150-b1bb-002d4df97f20\" (UID: \"41a12382-0497-4150-b1bb-002d4df97f20\") " Nov 22 07:47:38 crc kubenswrapper[4853]: I1122 07:47:38.336419 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lmlcz\" (UniqueName: \"kubernetes.io/projected/573160d1-5593-42ee-906a-44b4fbc5abe4-kube-api-access-lmlcz\") pod \"573160d1-5593-42ee-906a-44b4fbc5abe4\" (UID: \"573160d1-5593-42ee-906a-44b4fbc5abe4\") " Nov 22 07:47:38 crc kubenswrapper[4853]: I1122 07:47:38.344420 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41a12382-0497-4150-b1bb-002d4df97f20-kube-api-access-6j5jx" (OuterVolumeSpecName: "kube-api-access-6j5jx") pod "41a12382-0497-4150-b1bb-002d4df97f20" (UID: "41a12382-0497-4150-b1bb-002d4df97f20"). InnerVolumeSpecName "kube-api-access-6j5jx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:47:38 crc kubenswrapper[4853]: I1122 07:47:38.352320 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/573160d1-5593-42ee-906a-44b4fbc5abe4-kube-api-access-lmlcz" (OuterVolumeSpecName: "kube-api-access-lmlcz") pod "573160d1-5593-42ee-906a-44b4fbc5abe4" (UID: "573160d1-5593-42ee-906a-44b4fbc5abe4"). InnerVolumeSpecName "kube-api-access-lmlcz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:47:38 crc kubenswrapper[4853]: I1122 07:47:38.381047 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41a12382-0497-4150-b1bb-002d4df97f20-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "41a12382-0497-4150-b1bb-002d4df97f20" (UID: "41a12382-0497-4150-b1bb-002d4df97f20"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:47:38 crc kubenswrapper[4853]: I1122 07:47:38.427308 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41a12382-0497-4150-b1bb-002d4df97f20-config-data" (OuterVolumeSpecName: "config-data") pod "41a12382-0497-4150-b1bb-002d4df97f20" (UID: "41a12382-0497-4150-b1bb-002d4df97f20"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:47:38 crc kubenswrapper[4853]: I1122 07:47:38.439401 4853 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41a12382-0497-4150-b1bb-002d4df97f20-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 07:47:38 crc kubenswrapper[4853]: I1122 07:47:38.439446 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lmlcz\" (UniqueName: \"kubernetes.io/projected/573160d1-5593-42ee-906a-44b4fbc5abe4-kube-api-access-lmlcz\") on node \"crc\" DevicePath \"\"" Nov 22 07:47:38 crc kubenswrapper[4853]: I1122 07:47:38.439459 4853 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41a12382-0497-4150-b1bb-002d4df97f20-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:47:38 crc kubenswrapper[4853]: I1122 07:47:38.439468 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6j5jx\" (UniqueName: \"kubernetes.io/projected/41a12382-0497-4150-b1bb-002d4df97f20-kube-api-access-6j5jx\") on node \"crc\" DevicePath \"\"" Nov 22 07:47:39 crc kubenswrapper[4853]: I1122 07:47:39.199249 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 22 07:47:39 crc kubenswrapper[4853]: I1122 07:47:39.199248 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"41a12382-0497-4150-b1bb-002d4df97f20","Type":"ContainerDied","Data":"9e0bcdf8fc60bdaa6cc9ba4824aac4a7eb554c43ba44e4608533d1f2a446cb71"} Nov 22 07:47:39 crc kubenswrapper[4853]: I1122 07:47:39.199315 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-0" Nov 22 07:47:39 crc kubenswrapper[4853]: I1122 07:47:39.199937 4853 scope.go:117] "RemoveContainer" containerID="1ddf0290074287948ab8425e368fb8d4158583b7821046c8f62b935c219ccd0f" Nov 22 07:47:39 crc kubenswrapper[4853]: I1122 07:47:39.242671 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 22 07:47:39 crc kubenswrapper[4853]: I1122 07:47:39.268483 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 22 07:47:39 crc kubenswrapper[4853]: I1122 07:47:39.283998 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-0"] Nov 22 07:47:39 crc kubenswrapper[4853]: I1122 07:47:39.307339 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-0"] Nov 22 07:47:39 crc kubenswrapper[4853]: I1122 07:47:39.314154 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Nov 22 07:47:39 crc kubenswrapper[4853]: E1122 07:47:39.314858 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="573160d1-5593-42ee-906a-44b4fbc5abe4" containerName="kube-state-metrics" Nov 22 07:47:39 crc kubenswrapper[4853]: I1122 07:47:39.314874 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="573160d1-5593-42ee-906a-44b4fbc5abe4" containerName="kube-state-metrics" Nov 22 07:47:39 crc kubenswrapper[4853]: E1122 07:47:39.314903 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41a12382-0497-4150-b1bb-002d4df97f20" containerName="mysqld-exporter" Nov 22 07:47:39 crc kubenswrapper[4853]: I1122 07:47:39.314911 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="41a12382-0497-4150-b1bb-002d4df97f20" containerName="mysqld-exporter" Nov 22 07:47:39 crc kubenswrapper[4853]: I1122 07:47:39.315134 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="41a12382-0497-4150-b1bb-002d4df97f20" containerName="mysqld-exporter" Nov 22 07:47:39 crc kubenswrapper[4853]: I1122 07:47:39.315152 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="573160d1-5593-42ee-906a-44b4fbc5abe4" containerName="kube-state-metrics" Nov 22 07:47:39 crc kubenswrapper[4853]: I1122 07:47:39.316182 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 22 07:47:39 crc kubenswrapper[4853]: I1122 07:47:39.319147 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Nov 22 07:47:39 crc kubenswrapper[4853]: I1122 07:47:39.319580 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Nov 22 07:47:39 crc kubenswrapper[4853]: I1122 07:47:39.327822 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 22 07:47:39 crc kubenswrapper[4853]: I1122 07:47:39.350519 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-0"] Nov 22 07:47:39 crc kubenswrapper[4853]: I1122 07:47:39.353083 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-0" Nov 22 07:47:39 crc kubenswrapper[4853]: I1122 07:47:39.356377 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-mysqld-exporter-svc" Nov 22 07:47:39 crc kubenswrapper[4853]: I1122 07:47:39.365220 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-config-data" Nov 22 07:47:39 crc kubenswrapper[4853]: I1122 07:47:39.368320 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dvzvh\" (UniqueName: \"kubernetes.io/projected/00c18e6e-23ef-45c1-b7ce-5efb6d47f001-kube-api-access-dvzvh\") pod \"kube-state-metrics-0\" (UID: \"00c18e6e-23ef-45c1-b7ce-5efb6d47f001\") " pod="openstack/kube-state-metrics-0" Nov 22 07:47:39 crc kubenswrapper[4853]: I1122 07:47:39.368422 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/00c18e6e-23ef-45c1-b7ce-5efb6d47f001-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"00c18e6e-23ef-45c1-b7ce-5efb6d47f001\") " pod="openstack/kube-state-metrics-0" Nov 22 07:47:39 crc kubenswrapper[4853]: I1122 07:47:39.368481 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00c18e6e-23ef-45c1-b7ce-5efb6d47f001-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"00c18e6e-23ef-45c1-b7ce-5efb6d47f001\") " pod="openstack/kube-state-metrics-0" Nov 22 07:47:39 crc kubenswrapper[4853]: I1122 07:47:39.368521 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/00c18e6e-23ef-45c1-b7ce-5efb6d47f001-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"00c18e6e-23ef-45c1-b7ce-5efb6d47f001\") " pod="openstack/kube-state-metrics-0" Nov 22 07:47:39 crc kubenswrapper[4853]: I1122 07:47:39.376224 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Nov 22 07:47:39 crc kubenswrapper[4853]: I1122 07:47:39.471876 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b06a0baf-5cef-4893-b81c-55aa5930bdf0-config-data\") pod \"mysqld-exporter-0\" (UID: \"b06a0baf-5cef-4893-b81c-55aa5930bdf0\") " pod="openstack/mysqld-exporter-0" Nov 22 07:47:39 crc kubenswrapper[4853]: I1122 07:47:39.471934 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b06a0baf-5cef-4893-b81c-55aa5930bdf0-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"b06a0baf-5cef-4893-b81c-55aa5930bdf0\") " pod="openstack/mysqld-exporter-0" Nov 22 07:47:39 crc kubenswrapper[4853]: I1122 07:47:39.471970 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lc4qx\" (UniqueName: \"kubernetes.io/projected/b06a0baf-5cef-4893-b81c-55aa5930bdf0-kube-api-access-lc4qx\") pod \"mysqld-exporter-0\" (UID: \"b06a0baf-5cef-4893-b81c-55aa5930bdf0\") " pod="openstack/mysqld-exporter-0" Nov 22 07:47:39 crc kubenswrapper[4853]: I1122 07:47:39.472094 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mysqld-exporter-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/b06a0baf-5cef-4893-b81c-55aa5930bdf0-mysqld-exporter-tls-certs\") pod \"mysqld-exporter-0\" (UID: \"b06a0baf-5cef-4893-b81c-55aa5930bdf0\") " pod="openstack/mysqld-exporter-0" Nov 22 07:47:39 crc kubenswrapper[4853]: I1122 07:47:39.472191 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dvzvh\" (UniqueName: \"kubernetes.io/projected/00c18e6e-23ef-45c1-b7ce-5efb6d47f001-kube-api-access-dvzvh\") pod \"kube-state-metrics-0\" (UID: \"00c18e6e-23ef-45c1-b7ce-5efb6d47f001\") " pod="openstack/kube-state-metrics-0" Nov 22 07:47:39 crc kubenswrapper[4853]: I1122 07:47:39.472286 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/00c18e6e-23ef-45c1-b7ce-5efb6d47f001-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"00c18e6e-23ef-45c1-b7ce-5efb6d47f001\") " pod="openstack/kube-state-metrics-0" Nov 22 07:47:39 crc kubenswrapper[4853]: I1122 07:47:39.472355 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00c18e6e-23ef-45c1-b7ce-5efb6d47f001-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"00c18e6e-23ef-45c1-b7ce-5efb6d47f001\") " pod="openstack/kube-state-metrics-0" Nov 22 07:47:39 crc kubenswrapper[4853]: I1122 07:47:39.472400 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/00c18e6e-23ef-45c1-b7ce-5efb6d47f001-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"00c18e6e-23ef-45c1-b7ce-5efb6d47f001\") " pod="openstack/kube-state-metrics-0" Nov 22 07:47:39 crc kubenswrapper[4853]: I1122 07:47:39.477985 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/00c18e6e-23ef-45c1-b7ce-5efb6d47f001-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"00c18e6e-23ef-45c1-b7ce-5efb6d47f001\") " pod="openstack/kube-state-metrics-0" Nov 22 07:47:39 crc kubenswrapper[4853]: I1122 07:47:39.478347 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/00c18e6e-23ef-45c1-b7ce-5efb6d47f001-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"00c18e6e-23ef-45c1-b7ce-5efb6d47f001\") " pod="openstack/kube-state-metrics-0" Nov 22 07:47:39 crc kubenswrapper[4853]: I1122 07:47:39.479189 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00c18e6e-23ef-45c1-b7ce-5efb6d47f001-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"00c18e6e-23ef-45c1-b7ce-5efb6d47f001\") " pod="openstack/kube-state-metrics-0" Nov 22 07:47:39 crc kubenswrapper[4853]: I1122 07:47:39.494694 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dvzvh\" (UniqueName: \"kubernetes.io/projected/00c18e6e-23ef-45c1-b7ce-5efb6d47f001-kube-api-access-dvzvh\") pod \"kube-state-metrics-0\" (UID: \"00c18e6e-23ef-45c1-b7ce-5efb6d47f001\") " pod="openstack/kube-state-metrics-0" Nov 22 07:47:39 crc kubenswrapper[4853]: I1122 07:47:39.575187 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b06a0baf-5cef-4893-b81c-55aa5930bdf0-config-data\") 
pod \"mysqld-exporter-0\" (UID: \"b06a0baf-5cef-4893-b81c-55aa5930bdf0\") " pod="openstack/mysqld-exporter-0" Nov 22 07:47:39 crc kubenswrapper[4853]: I1122 07:47:39.575253 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b06a0baf-5cef-4893-b81c-55aa5930bdf0-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"b06a0baf-5cef-4893-b81c-55aa5930bdf0\") " pod="openstack/mysqld-exporter-0" Nov 22 07:47:39 crc kubenswrapper[4853]: I1122 07:47:39.575291 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lc4qx\" (UniqueName: \"kubernetes.io/projected/b06a0baf-5cef-4893-b81c-55aa5930bdf0-kube-api-access-lc4qx\") pod \"mysqld-exporter-0\" (UID: \"b06a0baf-5cef-4893-b81c-55aa5930bdf0\") " pod="openstack/mysqld-exporter-0" Nov 22 07:47:39 crc kubenswrapper[4853]: I1122 07:47:39.575327 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mysqld-exporter-tls-certs\" (UniqueName: \"kubernetes.io/secret/b06a0baf-5cef-4893-b81c-55aa5930bdf0-mysqld-exporter-tls-certs\") pod \"mysqld-exporter-0\" (UID: \"b06a0baf-5cef-4893-b81c-55aa5930bdf0\") " pod="openstack/mysqld-exporter-0" Nov 22 07:47:39 crc kubenswrapper[4853]: I1122 07:47:39.593143 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mysqld-exporter-tls-certs\" (UniqueName: \"kubernetes.io/secret/b06a0baf-5cef-4893-b81c-55aa5930bdf0-mysqld-exporter-tls-certs\") pod \"mysqld-exporter-0\" (UID: \"b06a0baf-5cef-4893-b81c-55aa5930bdf0\") " pod="openstack/mysqld-exporter-0" Nov 22 07:47:39 crc kubenswrapper[4853]: I1122 07:47:39.593234 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b06a0baf-5cef-4893-b81c-55aa5930bdf0-config-data\") pod \"mysqld-exporter-0\" (UID: \"b06a0baf-5cef-4893-b81c-55aa5930bdf0\") " pod="openstack/mysqld-exporter-0" Nov 22 07:47:39 crc kubenswrapper[4853]: I1122 07:47:39.593726 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b06a0baf-5cef-4893-b81c-55aa5930bdf0-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"b06a0baf-5cef-4893-b81c-55aa5930bdf0\") " pod="openstack/mysqld-exporter-0" Nov 22 07:47:39 crc kubenswrapper[4853]: I1122 07:47:39.596819 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lc4qx\" (UniqueName: \"kubernetes.io/projected/b06a0baf-5cef-4893-b81c-55aa5930bdf0-kube-api-access-lc4qx\") pod \"mysqld-exporter-0\" (UID: \"b06a0baf-5cef-4893-b81c-55aa5930bdf0\") " pod="openstack/mysqld-exporter-0" Nov 22 07:47:39 crc kubenswrapper[4853]: I1122 07:47:39.657263 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 22 07:47:39 crc kubenswrapper[4853]: I1122 07:47:39.678943 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-0" Nov 22 07:47:39 crc kubenswrapper[4853]: I1122 07:47:39.776807 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="41a12382-0497-4150-b1bb-002d4df97f20" path="/var/lib/kubelet/pods/41a12382-0497-4150-b1bb-002d4df97f20/volumes" Nov 22 07:47:39 crc kubenswrapper[4853]: I1122 07:47:39.777930 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="573160d1-5593-42ee-906a-44b4fbc5abe4" path="/var/lib/kubelet/pods/573160d1-5593-42ee-906a-44b4fbc5abe4/volumes" Nov 22 07:47:39 crc kubenswrapper[4853]: E1122 07:47:39.890286 4853 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-nova-conductor:current-podified" Nov 22 07:47:39 crc kubenswrapper[4853]: E1122 07:47:39.890880 4853 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:nova-cell0-conductor-db-sync,Image:quay.io/podified-antelope-centos9/openstack-nova-conductor:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CELL_NAME,Value:cell0,ValueFrom:nil,},EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:false,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/kolla/config_files/config.json,SubPath:nova-conductor-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tzrbd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42436,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-cell0-conductor-db-sync-c5tjs_openstack(c7bb7e8f-c36e-4027-b953-384bff85680b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 22 07:47:39 crc kubenswrapper[4853]: E1122 07:47:39.892502 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-cell0-conductor-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/nova-cell0-conductor-db-sync-c5tjs" 
podUID="c7bb7e8f-c36e-4027-b953-384bff85680b" Nov 22 07:47:40 crc kubenswrapper[4853]: E1122 07:47:40.221843 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-cell0-conductor-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-nova-conductor:current-podified\\\"\"" pod="openstack/nova-cell0-conductor-db-sync-c5tjs" podUID="c7bb7e8f-c36e-4027-b953-384bff85680b" Nov 22 07:47:40 crc kubenswrapper[4853]: I1122 07:47:40.296817 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 22 07:47:40 crc kubenswrapper[4853]: I1122 07:47:40.447787 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Nov 22 07:47:41 crc kubenswrapper[4853]: I1122 07:47:41.238568 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"b06a0baf-5cef-4893-b81c-55aa5930bdf0","Type":"ContainerStarted","Data":"a0276a9eb1fdb3470feeb9e8830adb1db1c7a9a0d224935065aa7c138eb8551a"} Nov 22 07:47:41 crc kubenswrapper[4853]: I1122 07:47:41.240616 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"00c18e6e-23ef-45c1-b7ce-5efb6d47f001","Type":"ContainerStarted","Data":"ca5caa9c574cfa67cffc77e8cf771086f0d74779a10bedc472bbdcd744fcd9b9"} Nov 22 07:47:45 crc kubenswrapper[4853]: I1122 07:47:45.256004 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 22 07:47:45 crc kubenswrapper[4853]: I1122 07:47:45.256363 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9374c792-71a8-40cf-914f-e91d727ebd5e" containerName="ceilometer-central-agent" containerID="cri-o://5d7bf3d5c94c3b021db2cb4502857185135736b3ad4af1f53484e510b845cc99" gracePeriod=30 Nov 22 07:47:45 crc kubenswrapper[4853]: I1122 07:47:45.256527 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9374c792-71a8-40cf-914f-e91d727ebd5e" containerName="sg-core" containerID="cri-o://f45338c375aac7bd47ae8f10587a84cfee4101df7966d61a5db72d4da96f6179" gracePeriod=30 Nov 22 07:47:45 crc kubenswrapper[4853]: I1122 07:47:45.256579 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9374c792-71a8-40cf-914f-e91d727ebd5e" containerName="ceilometer-notification-agent" containerID="cri-o://883f301a3c89f8cdc661ac418a0316aecb20c3b253b3b1e5421be1f8735085c2" gracePeriod=30 Nov 22 07:47:45 crc kubenswrapper[4853]: I1122 07:47:45.256618 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9374c792-71a8-40cf-914f-e91d727ebd5e" containerName="proxy-httpd" containerID="cri-o://af40682eaa1c1718e621306f5ca644fcab9c0e7fe6cace56875144a26f984a09" gracePeriod=30 Nov 22 07:47:45 crc kubenswrapper[4853]: I1122 07:47:45.335106 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"00c18e6e-23ef-45c1-b7ce-5efb6d47f001","Type":"ContainerStarted","Data":"f9587b0bd89d56cf62796c2bd1d579b9d2fe0a8690c90ce226db6e4f105a6b83"} Nov 22 07:47:45 crc kubenswrapper[4853]: I1122 07:47:45.335705 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Nov 22 07:47:45 crc kubenswrapper[4853]: I1122 07:47:45.367825 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/kube-state-metrics-0" podStartSLOduration=2.40929356 podStartE2EDuration="6.367799008s" podCreationTimestamp="2025-11-22 07:47:39 +0000 UTC" firstStartedPulling="2025-11-22 07:47:40.303002052 +0000 UTC m=+2259.143624678" lastFinishedPulling="2025-11-22 07:47:44.2615075 +0000 UTC m=+2263.102130126" observedRunningTime="2025-11-22 07:47:45.355159877 +0000 UTC m=+2264.195782503" watchObservedRunningTime="2025-11-22 07:47:45.367799008 +0000 UTC m=+2264.208421634" Nov 22 07:47:46 crc kubenswrapper[4853]: I1122 07:47:46.353544 4853 generic.go:334] "Generic (PLEG): container finished" podID="9374c792-71a8-40cf-914f-e91d727ebd5e" containerID="af40682eaa1c1718e621306f5ca644fcab9c0e7fe6cace56875144a26f984a09" exitCode=0 Nov 22 07:47:46 crc kubenswrapper[4853]: I1122 07:47:46.353951 4853 generic.go:334] "Generic (PLEG): container finished" podID="9374c792-71a8-40cf-914f-e91d727ebd5e" containerID="f45338c375aac7bd47ae8f10587a84cfee4101df7966d61a5db72d4da96f6179" exitCode=2 Nov 22 07:47:46 crc kubenswrapper[4853]: I1122 07:47:46.353964 4853 generic.go:334] "Generic (PLEG): container finished" podID="9374c792-71a8-40cf-914f-e91d727ebd5e" containerID="5d7bf3d5c94c3b021db2cb4502857185135736b3ad4af1f53484e510b845cc99" exitCode=0 Nov 22 07:47:46 crc kubenswrapper[4853]: I1122 07:47:46.354239 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9374c792-71a8-40cf-914f-e91d727ebd5e","Type":"ContainerDied","Data":"af40682eaa1c1718e621306f5ca644fcab9c0e7fe6cace56875144a26f984a09"} Nov 22 07:47:46 crc kubenswrapper[4853]: I1122 07:47:46.354297 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9374c792-71a8-40cf-914f-e91d727ebd5e","Type":"ContainerDied","Data":"f45338c375aac7bd47ae8f10587a84cfee4101df7966d61a5db72d4da96f6179"} Nov 22 07:47:46 crc kubenswrapper[4853]: I1122 07:47:46.354308 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9374c792-71a8-40cf-914f-e91d727ebd5e","Type":"ContainerDied","Data":"5d7bf3d5c94c3b021db2cb4502857185135736b3ad4af1f53484e510b845cc99"} Nov 22 07:47:47 crc kubenswrapper[4853]: I1122 07:47:47.373007 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"b06a0baf-5cef-4893-b81c-55aa5930bdf0","Type":"ContainerStarted","Data":"3bd689bf373d95ea0a800b1ba014ab8b8e2c9e1dfff65ff054e64e9f34ae2496"} Nov 22 07:47:47 crc kubenswrapper[4853]: I1122 07:47:47.400274 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-0" podStartSLOduration=2.342499604 podStartE2EDuration="8.400250596s" podCreationTimestamp="2025-11-22 07:47:39 +0000 UTC" firstStartedPulling="2025-11-22 07:47:40.487043881 +0000 UTC m=+2259.327666507" lastFinishedPulling="2025-11-22 07:47:46.544794873 +0000 UTC m=+2265.385417499" observedRunningTime="2025-11-22 07:47:47.394275895 +0000 UTC m=+2266.234898541" watchObservedRunningTime="2025-11-22 07:47:47.400250596 +0000 UTC m=+2266.240873222" Nov 22 07:47:49 crc kubenswrapper[4853]: I1122 07:47:49.198799 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 22 07:47:49 crc kubenswrapper[4853]: I1122 07:47:49.199783 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="e0f23fba-f7c9-48db-a522-d225352bae0b" containerName="glance-log" 
containerID="cri-o://39be800d9e160d536435953354f6bb5e505e86c01d79fb3d3d39867b398ec4d2" gracePeriod=30 Nov 22 07:47:49 crc kubenswrapper[4853]: I1122 07:47:49.199943 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="e0f23fba-f7c9-48db-a522-d225352bae0b" containerName="glance-httpd" containerID="cri-o://367bdb90dd591ec6cd7977726078f8b9a8655aa6b87bba48685684fede46119f" gracePeriod=30 Nov 22 07:47:49 crc kubenswrapper[4853]: I1122 07:47:49.413777 4853 generic.go:334] "Generic (PLEG): container finished" podID="9374c792-71a8-40cf-914f-e91d727ebd5e" containerID="883f301a3c89f8cdc661ac418a0316aecb20c3b253b3b1e5421be1f8735085c2" exitCode=0 Nov 22 07:47:49 crc kubenswrapper[4853]: I1122 07:47:49.413858 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9374c792-71a8-40cf-914f-e91d727ebd5e","Type":"ContainerDied","Data":"883f301a3c89f8cdc661ac418a0316aecb20c3b253b3b1e5421be1f8735085c2"} Nov 22 07:47:49 crc kubenswrapper[4853]: I1122 07:47:49.419473 4853 generic.go:334] "Generic (PLEG): container finished" podID="e0f23fba-f7c9-48db-a522-d225352bae0b" containerID="39be800d9e160d536435953354f6bb5e505e86c01d79fb3d3d39867b398ec4d2" exitCode=143 Nov 22 07:47:49 crc kubenswrapper[4853]: I1122 07:47:49.419533 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"e0f23fba-f7c9-48db-a522-d225352bae0b","Type":"ContainerDied","Data":"39be800d9e160d536435953354f6bb5e505e86c01d79fb3d3d39867b398ec4d2"} Nov 22 07:47:49 crc kubenswrapper[4853]: I1122 07:47:49.709046 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Nov 22 07:47:49 crc kubenswrapper[4853]: I1122 07:47:49.855255 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 22 07:47:50 crc kubenswrapper[4853]: I1122 07:47:50.007658 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9374c792-71a8-40cf-914f-e91d727ebd5e-run-httpd\") pod \"9374c792-71a8-40cf-914f-e91d727ebd5e\" (UID: \"9374c792-71a8-40cf-914f-e91d727ebd5e\") " Nov 22 07:47:50 crc kubenswrapper[4853]: I1122 07:47:50.007853 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9374c792-71a8-40cf-914f-e91d727ebd5e-config-data\") pod \"9374c792-71a8-40cf-914f-e91d727ebd5e\" (UID: \"9374c792-71a8-40cf-914f-e91d727ebd5e\") " Nov 22 07:47:50 crc kubenswrapper[4853]: I1122 07:47:50.008069 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9374c792-71a8-40cf-914f-e91d727ebd5e-scripts\") pod \"9374c792-71a8-40cf-914f-e91d727ebd5e\" (UID: \"9374c792-71a8-40cf-914f-e91d727ebd5e\") " Nov 22 07:47:50 crc kubenswrapper[4853]: I1122 07:47:50.008139 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9374c792-71a8-40cf-914f-e91d727ebd5e-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "9374c792-71a8-40cf-914f-e91d727ebd5e" (UID: "9374c792-71a8-40cf-914f-e91d727ebd5e"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:47:50 crc kubenswrapper[4853]: I1122 07:47:50.008185 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9374c792-71a8-40cf-914f-e91d727ebd5e-combined-ca-bundle\") pod \"9374c792-71a8-40cf-914f-e91d727ebd5e\" (UID: \"9374c792-71a8-40cf-914f-e91d727ebd5e\") " Nov 22 07:47:50 crc kubenswrapper[4853]: I1122 07:47:50.008231 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5cv5w\" (UniqueName: \"kubernetes.io/projected/9374c792-71a8-40cf-914f-e91d727ebd5e-kube-api-access-5cv5w\") pod \"9374c792-71a8-40cf-914f-e91d727ebd5e\" (UID: \"9374c792-71a8-40cf-914f-e91d727ebd5e\") " Nov 22 07:47:50 crc kubenswrapper[4853]: I1122 07:47:50.008326 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9374c792-71a8-40cf-914f-e91d727ebd5e-sg-core-conf-yaml\") pod \"9374c792-71a8-40cf-914f-e91d727ebd5e\" (UID: \"9374c792-71a8-40cf-914f-e91d727ebd5e\") " Nov 22 07:47:50 crc kubenswrapper[4853]: I1122 07:47:50.008569 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9374c792-71a8-40cf-914f-e91d727ebd5e-log-httpd\") pod \"9374c792-71a8-40cf-914f-e91d727ebd5e\" (UID: \"9374c792-71a8-40cf-914f-e91d727ebd5e\") " Nov 22 07:47:50 crc kubenswrapper[4853]: I1122 07:47:50.009077 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9374c792-71a8-40cf-914f-e91d727ebd5e-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "9374c792-71a8-40cf-914f-e91d727ebd5e" (UID: "9374c792-71a8-40cf-914f-e91d727ebd5e"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:47:50 crc kubenswrapper[4853]: I1122 07:47:50.009436 4853 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9374c792-71a8-40cf-914f-e91d727ebd5e-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 22 07:47:50 crc kubenswrapper[4853]: I1122 07:47:50.009457 4853 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9374c792-71a8-40cf-914f-e91d727ebd5e-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 22 07:47:50 crc kubenswrapper[4853]: I1122 07:47:50.017048 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9374c792-71a8-40cf-914f-e91d727ebd5e-kube-api-access-5cv5w" (OuterVolumeSpecName: "kube-api-access-5cv5w") pod "9374c792-71a8-40cf-914f-e91d727ebd5e" (UID: "9374c792-71a8-40cf-914f-e91d727ebd5e"). InnerVolumeSpecName "kube-api-access-5cv5w". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:47:50 crc kubenswrapper[4853]: I1122 07:47:50.030410 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9374c792-71a8-40cf-914f-e91d727ebd5e-scripts" (OuterVolumeSpecName: "scripts") pod "9374c792-71a8-40cf-914f-e91d727ebd5e" (UID: "9374c792-71a8-40cf-914f-e91d727ebd5e"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:47:50 crc kubenswrapper[4853]: I1122 07:47:50.102365 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9374c792-71a8-40cf-914f-e91d727ebd5e-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "9374c792-71a8-40cf-914f-e91d727ebd5e" (UID: "9374c792-71a8-40cf-914f-e91d727ebd5e"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:47:50 crc kubenswrapper[4853]: I1122 07:47:50.111807 4853 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9374c792-71a8-40cf-914f-e91d727ebd5e-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 07:47:50 crc kubenswrapper[4853]: I1122 07:47:50.111853 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5cv5w\" (UniqueName: \"kubernetes.io/projected/9374c792-71a8-40cf-914f-e91d727ebd5e-kube-api-access-5cv5w\") on node \"crc\" DevicePath \"\"" Nov 22 07:47:50 crc kubenswrapper[4853]: I1122 07:47:50.111873 4853 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9374c792-71a8-40cf-914f-e91d727ebd5e-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 22 07:47:50 crc kubenswrapper[4853]: I1122 07:47:50.130213 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9374c792-71a8-40cf-914f-e91d727ebd5e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9374c792-71a8-40cf-914f-e91d727ebd5e" (UID: "9374c792-71a8-40cf-914f-e91d727ebd5e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:47:50 crc kubenswrapper[4853]: I1122 07:47:50.189875 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9374c792-71a8-40cf-914f-e91d727ebd5e-config-data" (OuterVolumeSpecName: "config-data") pod "9374c792-71a8-40cf-914f-e91d727ebd5e" (UID: "9374c792-71a8-40cf-914f-e91d727ebd5e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:47:50 crc kubenswrapper[4853]: I1122 07:47:50.214840 4853 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9374c792-71a8-40cf-914f-e91d727ebd5e-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 07:47:50 crc kubenswrapper[4853]: I1122 07:47:50.214909 4853 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9374c792-71a8-40cf-914f-e91d727ebd5e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:47:50 crc kubenswrapper[4853]: I1122 07:47:50.436125 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9374c792-71a8-40cf-914f-e91d727ebd5e","Type":"ContainerDied","Data":"a1e8bdb61b5da9f51134cff76e1fb45935b9ec67db4db16f6892883dc7a84ac2"} Nov 22 07:47:50 crc kubenswrapper[4853]: I1122 07:47:50.436217 4853 scope.go:117] "RemoveContainer" containerID="af40682eaa1c1718e621306f5ca644fcab9c0e7fe6cace56875144a26f984a09" Nov 22 07:47:50 crc kubenswrapper[4853]: I1122 07:47:50.436478 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 22 07:47:50 crc kubenswrapper[4853]: I1122 07:47:50.491312 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 22 07:47:50 crc kubenswrapper[4853]: I1122 07:47:50.509685 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 22 07:47:50 crc kubenswrapper[4853]: I1122 07:47:50.529409 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 22 07:47:50 crc kubenswrapper[4853]: E1122 07:47:50.530103 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9374c792-71a8-40cf-914f-e91d727ebd5e" containerName="sg-core" Nov 22 07:47:50 crc kubenswrapper[4853]: I1122 07:47:50.530135 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="9374c792-71a8-40cf-914f-e91d727ebd5e" containerName="sg-core" Nov 22 07:47:50 crc kubenswrapper[4853]: E1122 07:47:50.530169 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9374c792-71a8-40cf-914f-e91d727ebd5e" containerName="ceilometer-notification-agent" Nov 22 07:47:50 crc kubenswrapper[4853]: I1122 07:47:50.530179 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="9374c792-71a8-40cf-914f-e91d727ebd5e" containerName="ceilometer-notification-agent" Nov 22 07:47:50 crc kubenswrapper[4853]: E1122 07:47:50.530192 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9374c792-71a8-40cf-914f-e91d727ebd5e" containerName="proxy-httpd" Nov 22 07:47:50 crc kubenswrapper[4853]: I1122 07:47:50.530202 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="9374c792-71a8-40cf-914f-e91d727ebd5e" containerName="proxy-httpd" Nov 22 07:47:50 crc kubenswrapper[4853]: E1122 07:47:50.530230 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9374c792-71a8-40cf-914f-e91d727ebd5e" containerName="ceilometer-central-agent" Nov 22 07:47:50 crc kubenswrapper[4853]: I1122 07:47:50.530238 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="9374c792-71a8-40cf-914f-e91d727ebd5e" containerName="ceilometer-central-agent" Nov 22 07:47:50 crc kubenswrapper[4853]: I1122 07:47:50.530503 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="9374c792-71a8-40cf-914f-e91d727ebd5e" containerName="proxy-httpd" Nov 22 07:47:50 crc kubenswrapper[4853]: I1122 07:47:50.530542 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="9374c792-71a8-40cf-914f-e91d727ebd5e" containerName="ceilometer-central-agent" Nov 22 07:47:50 crc kubenswrapper[4853]: I1122 07:47:50.530557 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="9374c792-71a8-40cf-914f-e91d727ebd5e" containerName="sg-core" Nov 22 07:47:50 crc kubenswrapper[4853]: I1122 07:47:50.530571 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="9374c792-71a8-40cf-914f-e91d727ebd5e" containerName="ceilometer-notification-agent" Nov 22 07:47:50 crc kubenswrapper[4853]: I1122 07:47:50.533583 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 22 07:47:50 crc kubenswrapper[4853]: I1122 07:47:50.536394 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 22 07:47:50 crc kubenswrapper[4853]: I1122 07:47:50.536900 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 22 07:47:50 crc kubenswrapper[4853]: I1122 07:47:50.537665 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Nov 22 07:47:50 crc kubenswrapper[4853]: I1122 07:47:50.543174 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 22 07:47:50 crc kubenswrapper[4853]: I1122 07:47:50.581720 4853 scope.go:117] "RemoveContainer" containerID="f45338c375aac7bd47ae8f10587a84cfee4101df7966d61a5db72d4da96f6179" Nov 22 07:47:50 crc kubenswrapper[4853]: I1122 07:47:50.612485 4853 scope.go:117] "RemoveContainer" containerID="883f301a3c89f8cdc661ac418a0316aecb20c3b253b3b1e5421be1f8735085c2" Nov 22 07:47:50 crc kubenswrapper[4853]: I1122 07:47:50.628540 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nfzbg\" (UniqueName: \"kubernetes.io/projected/7be38dfa-2557-43c7-83e8-f554a64db353-kube-api-access-nfzbg\") pod \"ceilometer-0\" (UID: \"7be38dfa-2557-43c7-83e8-f554a64db353\") " pod="openstack/ceilometer-0" Nov 22 07:47:50 crc kubenswrapper[4853]: I1122 07:47:50.628663 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7be38dfa-2557-43c7-83e8-f554a64db353-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7be38dfa-2557-43c7-83e8-f554a64db353\") " pod="openstack/ceilometer-0" Nov 22 07:47:50 crc kubenswrapper[4853]: I1122 07:47:50.628717 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7be38dfa-2557-43c7-83e8-f554a64db353-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7be38dfa-2557-43c7-83e8-f554a64db353\") " pod="openstack/ceilometer-0" Nov 22 07:47:50 crc kubenswrapper[4853]: I1122 07:47:50.628973 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7be38dfa-2557-43c7-83e8-f554a64db353-log-httpd\") pod \"ceilometer-0\" (UID: \"7be38dfa-2557-43c7-83e8-f554a64db353\") " pod="openstack/ceilometer-0" Nov 22 07:47:50 crc kubenswrapper[4853]: I1122 07:47:50.629056 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7be38dfa-2557-43c7-83e8-f554a64db353-config-data\") pod \"ceilometer-0\" (UID: \"7be38dfa-2557-43c7-83e8-f554a64db353\") " pod="openstack/ceilometer-0" Nov 22 07:47:50 crc kubenswrapper[4853]: I1122 07:47:50.629093 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7be38dfa-2557-43c7-83e8-f554a64db353-run-httpd\") pod \"ceilometer-0\" (UID: \"7be38dfa-2557-43c7-83e8-f554a64db353\") " pod="openstack/ceilometer-0" Nov 22 07:47:50 crc kubenswrapper[4853]: I1122 07:47:50.629225 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/7be38dfa-2557-43c7-83e8-f554a64db353-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"7be38dfa-2557-43c7-83e8-f554a64db353\") " pod="openstack/ceilometer-0" Nov 22 07:47:50 crc kubenswrapper[4853]: I1122 07:47:50.629268 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7be38dfa-2557-43c7-83e8-f554a64db353-scripts\") pod \"ceilometer-0\" (UID: \"7be38dfa-2557-43c7-83e8-f554a64db353\") " pod="openstack/ceilometer-0" Nov 22 07:47:50 crc kubenswrapper[4853]: I1122 07:47:50.651715 4853 scope.go:117] "RemoveContainer" containerID="5d7bf3d5c94c3b021db2cb4502857185135736b3ad4af1f53484e510b845cc99" Nov 22 07:47:50 crc kubenswrapper[4853]: I1122 07:47:50.731605 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7be38dfa-2557-43c7-83e8-f554a64db353-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7be38dfa-2557-43c7-83e8-f554a64db353\") " pod="openstack/ceilometer-0" Nov 22 07:47:50 crc kubenswrapper[4853]: I1122 07:47:50.732185 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7be38dfa-2557-43c7-83e8-f554a64db353-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7be38dfa-2557-43c7-83e8-f554a64db353\") " pod="openstack/ceilometer-0" Nov 22 07:47:50 crc kubenswrapper[4853]: I1122 07:47:50.732334 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7be38dfa-2557-43c7-83e8-f554a64db353-log-httpd\") pod \"ceilometer-0\" (UID: \"7be38dfa-2557-43c7-83e8-f554a64db353\") " pod="openstack/ceilometer-0" Nov 22 07:47:50 crc kubenswrapper[4853]: I1122 07:47:50.732392 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7be38dfa-2557-43c7-83e8-f554a64db353-config-data\") pod \"ceilometer-0\" (UID: \"7be38dfa-2557-43c7-83e8-f554a64db353\") " pod="openstack/ceilometer-0" Nov 22 07:47:50 crc kubenswrapper[4853]: I1122 07:47:50.732421 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7be38dfa-2557-43c7-83e8-f554a64db353-run-httpd\") pod \"ceilometer-0\" (UID: \"7be38dfa-2557-43c7-83e8-f554a64db353\") " pod="openstack/ceilometer-0" Nov 22 07:47:50 crc kubenswrapper[4853]: I1122 07:47:50.732498 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/7be38dfa-2557-43c7-83e8-f554a64db353-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"7be38dfa-2557-43c7-83e8-f554a64db353\") " pod="openstack/ceilometer-0" Nov 22 07:47:50 crc kubenswrapper[4853]: I1122 07:47:50.732544 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7be38dfa-2557-43c7-83e8-f554a64db353-scripts\") pod \"ceilometer-0\" (UID: \"7be38dfa-2557-43c7-83e8-f554a64db353\") " pod="openstack/ceilometer-0" Nov 22 07:47:50 crc kubenswrapper[4853]: I1122 07:47:50.732622 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nfzbg\" (UniqueName: \"kubernetes.io/projected/7be38dfa-2557-43c7-83e8-f554a64db353-kube-api-access-nfzbg\") pod \"ceilometer-0\" (UID: \"7be38dfa-2557-43c7-83e8-f554a64db353\") " 
pod="openstack/ceilometer-0" Nov 22 07:47:50 crc kubenswrapper[4853]: I1122 07:47:50.733706 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7be38dfa-2557-43c7-83e8-f554a64db353-run-httpd\") pod \"ceilometer-0\" (UID: \"7be38dfa-2557-43c7-83e8-f554a64db353\") " pod="openstack/ceilometer-0" Nov 22 07:47:50 crc kubenswrapper[4853]: I1122 07:47:50.734318 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7be38dfa-2557-43c7-83e8-f554a64db353-log-httpd\") pod \"ceilometer-0\" (UID: \"7be38dfa-2557-43c7-83e8-f554a64db353\") " pod="openstack/ceilometer-0" Nov 22 07:47:50 crc kubenswrapper[4853]: I1122 07:47:50.736983 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7be38dfa-2557-43c7-83e8-f554a64db353-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7be38dfa-2557-43c7-83e8-f554a64db353\") " pod="openstack/ceilometer-0" Nov 22 07:47:50 crc kubenswrapper[4853]: I1122 07:47:50.737617 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/7be38dfa-2557-43c7-83e8-f554a64db353-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"7be38dfa-2557-43c7-83e8-f554a64db353\") " pod="openstack/ceilometer-0" Nov 22 07:47:50 crc kubenswrapper[4853]: I1122 07:47:50.737621 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7be38dfa-2557-43c7-83e8-f554a64db353-config-data\") pod \"ceilometer-0\" (UID: \"7be38dfa-2557-43c7-83e8-f554a64db353\") " pod="openstack/ceilometer-0" Nov 22 07:47:50 crc kubenswrapper[4853]: I1122 07:47:50.738833 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7be38dfa-2557-43c7-83e8-f554a64db353-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7be38dfa-2557-43c7-83e8-f554a64db353\") " pod="openstack/ceilometer-0" Nov 22 07:47:50 crc kubenswrapper[4853]: I1122 07:47:50.739380 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7be38dfa-2557-43c7-83e8-f554a64db353-scripts\") pod \"ceilometer-0\" (UID: \"7be38dfa-2557-43c7-83e8-f554a64db353\") " pod="openstack/ceilometer-0" Nov 22 07:47:50 crc kubenswrapper[4853]: I1122 07:47:50.752237 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nfzbg\" (UniqueName: \"kubernetes.io/projected/7be38dfa-2557-43c7-83e8-f554a64db353-kube-api-access-nfzbg\") pod \"ceilometer-0\" (UID: \"7be38dfa-2557-43c7-83e8-f554a64db353\") " pod="openstack/ceilometer-0" Nov 22 07:47:50 crc kubenswrapper[4853]: I1122 07:47:50.869717 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 22 07:47:51 crc kubenswrapper[4853]: I1122 07:47:51.404853 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 22 07:47:51 crc kubenswrapper[4853]: W1122 07:47:51.405198 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7be38dfa_2557_43c7_83e8_f554a64db353.slice/crio-7d96b9cfaee3db8786120c37b466a3d7e44696dbba219f3f38cec0acd5eb5128 WatchSource:0}: Error finding container 7d96b9cfaee3db8786120c37b466a3d7e44696dbba219f3f38cec0acd5eb5128: Status 404 returned error can't find the container with id 7d96b9cfaee3db8786120c37b466a3d7e44696dbba219f3f38cec0acd5eb5128 Nov 22 07:47:51 crc kubenswrapper[4853]: I1122 07:47:51.451011 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7be38dfa-2557-43c7-83e8-f554a64db353","Type":"ContainerStarted","Data":"7d96b9cfaee3db8786120c37b466a3d7e44696dbba219f3f38cec0acd5eb5128"} Nov 22 07:47:51 crc kubenswrapper[4853]: I1122 07:47:51.641453 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 22 07:47:51 crc kubenswrapper[4853]: I1122 07:47:51.762736 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9374c792-71a8-40cf-914f-e91d727ebd5e" path="/var/lib/kubelet/pods/9374c792-71a8-40cf-914f-e91d727ebd5e/volumes" Nov 22 07:47:52 crc kubenswrapper[4853]: I1122 07:47:52.465142 4853 generic.go:334] "Generic (PLEG): container finished" podID="e0f23fba-f7c9-48db-a522-d225352bae0b" containerID="367bdb90dd591ec6cd7977726078f8b9a8655aa6b87bba48685684fede46119f" exitCode=0 Nov 22 07:47:52 crc kubenswrapper[4853]: I1122 07:47:52.465386 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"e0f23fba-f7c9-48db-a522-d225352bae0b","Type":"ContainerDied","Data":"367bdb90dd591ec6cd7977726078f8b9a8655aa6b87bba48685684fede46119f"} Nov 22 07:47:52 crc kubenswrapper[4853]: I1122 07:47:52.907421 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 22 07:47:52 crc kubenswrapper[4853]: I1122 07:47:52.998112 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e0f23fba-f7c9-48db-a522-d225352bae0b-internal-tls-certs\") pod \"e0f23fba-f7c9-48db-a522-d225352bae0b\" (UID: \"e0f23fba-f7c9-48db-a522-d225352bae0b\") " Nov 22 07:47:52 crc kubenswrapper[4853]: I1122 07:47:52.998246 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e0f23fba-f7c9-48db-a522-d225352bae0b-httpd-run\") pod \"e0f23fba-f7c9-48db-a522-d225352bae0b\" (UID: \"e0f23fba-f7c9-48db-a522-d225352bae0b\") " Nov 22 07:47:52 crc kubenswrapper[4853]: I1122 07:47:52.998302 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zlcrr\" (UniqueName: \"kubernetes.io/projected/e0f23fba-f7c9-48db-a522-d225352bae0b-kube-api-access-zlcrr\") pod \"e0f23fba-f7c9-48db-a522-d225352bae0b\" (UID: \"e0f23fba-f7c9-48db-a522-d225352bae0b\") " Nov 22 07:47:52 crc kubenswrapper[4853]: I1122 07:47:52.998369 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0f23fba-f7c9-48db-a522-d225352bae0b-combined-ca-bundle\") pod \"e0f23fba-f7c9-48db-a522-d225352bae0b\" (UID: \"e0f23fba-f7c9-48db-a522-d225352bae0b\") " Nov 22 07:47:52 crc kubenswrapper[4853]: I1122 07:47:52.998402 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"e0f23fba-f7c9-48db-a522-d225352bae0b\" (UID: \"e0f23fba-f7c9-48db-a522-d225352bae0b\") " Nov 22 07:47:52 crc kubenswrapper[4853]: I1122 07:47:52.998473 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0f23fba-f7c9-48db-a522-d225352bae0b-config-data\") pod \"e0f23fba-f7c9-48db-a522-d225352bae0b\" (UID: \"e0f23fba-f7c9-48db-a522-d225352bae0b\") " Nov 22 07:47:52 crc kubenswrapper[4853]: I1122 07:47:52.998588 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e0f23fba-f7c9-48db-a522-d225352bae0b-logs\") pod \"e0f23fba-f7c9-48db-a522-d225352bae0b\" (UID: \"e0f23fba-f7c9-48db-a522-d225352bae0b\") " Nov 22 07:47:52 crc kubenswrapper[4853]: I1122 07:47:52.998631 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e0f23fba-f7c9-48db-a522-d225352bae0b-scripts\") pod \"e0f23fba-f7c9-48db-a522-d225352bae0b\" (UID: \"e0f23fba-f7c9-48db-a522-d225352bae0b\") " Nov 22 07:47:52 crc kubenswrapper[4853]: I1122 07:47:52.998806 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e0f23fba-f7c9-48db-a522-d225352bae0b-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "e0f23fba-f7c9-48db-a522-d225352bae0b" (UID: "e0f23fba-f7c9-48db-a522-d225352bae0b"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:47:52 crc kubenswrapper[4853]: I1122 07:47:52.999124 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e0f23fba-f7c9-48db-a522-d225352bae0b-logs" (OuterVolumeSpecName: "logs") pod "e0f23fba-f7c9-48db-a522-d225352bae0b" (UID: "e0f23fba-f7c9-48db-a522-d225352bae0b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:47:53 crc kubenswrapper[4853]: I1122 07:47:52.999780 4853 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e0f23fba-f7c9-48db-a522-d225352bae0b-logs\") on node \"crc\" DevicePath \"\"" Nov 22 07:47:53 crc kubenswrapper[4853]: I1122 07:47:52.999804 4853 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e0f23fba-f7c9-48db-a522-d225352bae0b-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 22 07:47:53 crc kubenswrapper[4853]: I1122 07:47:53.014715 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0f23fba-f7c9-48db-a522-d225352bae0b-kube-api-access-zlcrr" (OuterVolumeSpecName: "kube-api-access-zlcrr") pod "e0f23fba-f7c9-48db-a522-d225352bae0b" (UID: "e0f23fba-f7c9-48db-a522-d225352bae0b"). InnerVolumeSpecName "kube-api-access-zlcrr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:47:53 crc kubenswrapper[4853]: I1122 07:47:53.017206 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "glance") pod "e0f23fba-f7c9-48db-a522-d225352bae0b" (UID: "e0f23fba-f7c9-48db-a522-d225352bae0b"). InnerVolumeSpecName "local-storage08-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 22 07:47:53 crc kubenswrapper[4853]: I1122 07:47:53.018045 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0f23fba-f7c9-48db-a522-d225352bae0b-scripts" (OuterVolumeSpecName: "scripts") pod "e0f23fba-f7c9-48db-a522-d225352bae0b" (UID: "e0f23fba-f7c9-48db-a522-d225352bae0b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:47:53 crc kubenswrapper[4853]: I1122 07:47:53.052777 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0f23fba-f7c9-48db-a522-d225352bae0b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e0f23fba-f7c9-48db-a522-d225352bae0b" (UID: "e0f23fba-f7c9-48db-a522-d225352bae0b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:47:53 crc kubenswrapper[4853]: I1122 07:47:53.098399 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0f23fba-f7c9-48db-a522-d225352bae0b-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "e0f23fba-f7c9-48db-a522-d225352bae0b" (UID: "e0f23fba-f7c9-48db-a522-d225352bae0b"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:47:53 crc kubenswrapper[4853]: I1122 07:47:53.102127 4853 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e0f23fba-f7c9-48db-a522-d225352bae0b-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 07:47:53 crc kubenswrapper[4853]: I1122 07:47:53.102512 4853 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e0f23fba-f7c9-48db-a522-d225352bae0b-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 22 07:47:53 crc kubenswrapper[4853]: I1122 07:47:53.102531 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zlcrr\" (UniqueName: \"kubernetes.io/projected/e0f23fba-f7c9-48db-a522-d225352bae0b-kube-api-access-zlcrr\") on node \"crc\" DevicePath \"\"" Nov 22 07:47:53 crc kubenswrapper[4853]: I1122 07:47:53.102543 4853 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0f23fba-f7c9-48db-a522-d225352bae0b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:47:53 crc kubenswrapper[4853]: I1122 07:47:53.102575 4853 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Nov 22 07:47:53 crc kubenswrapper[4853]: I1122 07:47:53.108391 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0f23fba-f7c9-48db-a522-d225352bae0b-config-data" (OuterVolumeSpecName: "config-data") pod "e0f23fba-f7c9-48db-a522-d225352bae0b" (UID: "e0f23fba-f7c9-48db-a522-d225352bae0b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:47:53 crc kubenswrapper[4853]: I1122 07:47:53.141435 4853 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Nov 22 07:47:53 crc kubenswrapper[4853]: I1122 07:47:53.204937 4853 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Nov 22 07:47:53 crc kubenswrapper[4853]: I1122 07:47:53.204980 4853 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0f23fba-f7c9-48db-a522-d225352bae0b-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 07:47:53 crc kubenswrapper[4853]: I1122 07:47:53.490646 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 22 07:47:53 crc kubenswrapper[4853]: I1122 07:47:53.492380 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"e0f23fba-f7c9-48db-a522-d225352bae0b","Type":"ContainerDied","Data":"dd0121dc8b18d87a5833b3fded55cad0bd3c3b1acad232e76320b5bec6181b21"} Nov 22 07:47:53 crc kubenswrapper[4853]: I1122 07:47:53.492483 4853 scope.go:117] "RemoveContainer" containerID="367bdb90dd591ec6cd7977726078f8b9a8655aa6b87bba48685684fede46119f" Nov 22 07:47:53 crc kubenswrapper[4853]: I1122 07:47:53.503205 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-c5tjs" event={"ID":"c7bb7e8f-c36e-4027-b953-384bff85680b","Type":"ContainerStarted","Data":"a00fb8d47d57f5167eb191ed1e61f773c885900c90935674ac55ac783b8af83d"} Nov 22 07:47:53 crc kubenswrapper[4853]: I1122 07:47:53.517358 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7be38dfa-2557-43c7-83e8-f554a64db353","Type":"ContainerStarted","Data":"2b9f93c2d4a31a06215ba9cfc32f7192812fa6c3bfd73fb30cff005830ac1d36"} Nov 22 07:47:53 crc kubenswrapper[4853]: I1122 07:47:53.537168 4853 scope.go:117] "RemoveContainer" containerID="39be800d9e160d536435953354f6bb5e505e86c01d79fb3d3d39867b398ec4d2" Nov 22 07:47:53 crc kubenswrapper[4853]: I1122 07:47:53.562158 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-c5tjs" podStartSLOduration=3.035117989 podStartE2EDuration="40.562124228s" podCreationTimestamp="2025-11-22 07:47:13 +0000 UTC" firstStartedPulling="2025-11-22 07:47:14.805469393 +0000 UTC m=+2233.646092019" lastFinishedPulling="2025-11-22 07:47:52.332475632 +0000 UTC m=+2271.173098258" observedRunningTime="2025-11-22 07:47:53.527498874 +0000 UTC m=+2272.368121500" watchObservedRunningTime="2025-11-22 07:47:53.562124228 +0000 UTC m=+2272.402746874" Nov 22 07:47:53 crc kubenswrapper[4853]: I1122 07:47:53.625646 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 22 07:47:53 crc kubenswrapper[4853]: I1122 07:47:53.695029 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 22 07:47:53 crc kubenswrapper[4853]: I1122 07:47:53.741372 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 22 07:47:53 crc kubenswrapper[4853]: E1122 07:47:53.742244 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0f23fba-f7c9-48db-a522-d225352bae0b" containerName="glance-httpd" Nov 22 07:47:53 crc kubenswrapper[4853]: I1122 07:47:53.742288 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0f23fba-f7c9-48db-a522-d225352bae0b" containerName="glance-httpd" Nov 22 07:47:53 crc kubenswrapper[4853]: E1122 07:47:53.742300 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0f23fba-f7c9-48db-a522-d225352bae0b" containerName="glance-log" Nov 22 07:47:53 crc kubenswrapper[4853]: I1122 07:47:53.742307 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0f23fba-f7c9-48db-a522-d225352bae0b" containerName="glance-log" Nov 22 07:47:53 crc kubenswrapper[4853]: I1122 07:47:53.742596 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0f23fba-f7c9-48db-a522-d225352bae0b" containerName="glance-httpd" Nov 22 07:47:53 crc kubenswrapper[4853]: I1122 07:47:53.742638 4853 
memory_manager.go:354] "RemoveStaleState removing state" podUID="e0f23fba-f7c9-48db-a522-d225352bae0b" containerName="glance-log" Nov 22 07:47:53 crc kubenswrapper[4853]: I1122 07:47:53.744600 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 22 07:47:53 crc kubenswrapper[4853]: I1122 07:47:53.750655 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Nov 22 07:47:53 crc kubenswrapper[4853]: I1122 07:47:53.751200 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Nov 22 07:47:53 crc kubenswrapper[4853]: I1122 07:47:53.790090 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e0f23fba-f7c9-48db-a522-d225352bae0b" path="/var/lib/kubelet/pods/e0f23fba-f7c9-48db-a522-d225352bae0b/volumes" Nov 22 07:47:53 crc kubenswrapper[4853]: I1122 07:47:53.816914 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 22 07:47:53 crc kubenswrapper[4853]: I1122 07:47:53.855089 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/92c28892-dc0c-4bf5-bd5f-1ed4b702852f-scripts\") pod \"glance-default-internal-api-0\" (UID: \"92c28892-dc0c-4bf5-bd5f-1ed4b702852f\") " pod="openstack/glance-default-internal-api-0" Nov 22 07:47:53 crc kubenswrapper[4853]: I1122 07:47:53.855383 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"92c28892-dc0c-4bf5-bd5f-1ed4b702852f\") " pod="openstack/glance-default-internal-api-0" Nov 22 07:47:53 crc kubenswrapper[4853]: I1122 07:47:53.855468 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92c28892-dc0c-4bf5-bd5f-1ed4b702852f-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"92c28892-dc0c-4bf5-bd5f-1ed4b702852f\") " pod="openstack/glance-default-internal-api-0" Nov 22 07:47:53 crc kubenswrapper[4853]: I1122 07:47:53.855530 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/92c28892-dc0c-4bf5-bd5f-1ed4b702852f-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"92c28892-dc0c-4bf5-bd5f-1ed4b702852f\") " pod="openstack/glance-default-internal-api-0" Nov 22 07:47:53 crc kubenswrapper[4853]: I1122 07:47:53.855710 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hm9dj\" (UniqueName: \"kubernetes.io/projected/92c28892-dc0c-4bf5-bd5f-1ed4b702852f-kube-api-access-hm9dj\") pod \"glance-default-internal-api-0\" (UID: \"92c28892-dc0c-4bf5-bd5f-1ed4b702852f\") " pod="openstack/glance-default-internal-api-0" Nov 22 07:47:53 crc kubenswrapper[4853]: I1122 07:47:53.855763 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92c28892-dc0c-4bf5-bd5f-1ed4b702852f-config-data\") pod \"glance-default-internal-api-0\" (UID: \"92c28892-dc0c-4bf5-bd5f-1ed4b702852f\") " pod="openstack/glance-default-internal-api-0" Nov 22 07:47:53 crc kubenswrapper[4853]: I1122 
07:47:53.855850 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/92c28892-dc0c-4bf5-bd5f-1ed4b702852f-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"92c28892-dc0c-4bf5-bd5f-1ed4b702852f\") " pod="openstack/glance-default-internal-api-0" Nov 22 07:47:53 crc kubenswrapper[4853]: I1122 07:47:53.856068 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/92c28892-dc0c-4bf5-bd5f-1ed4b702852f-logs\") pod \"glance-default-internal-api-0\" (UID: \"92c28892-dc0c-4bf5-bd5f-1ed4b702852f\") " pod="openstack/glance-default-internal-api-0" Nov 22 07:47:53 crc kubenswrapper[4853]: I1122 07:47:53.960118 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/92c28892-dc0c-4bf5-bd5f-1ed4b702852f-logs\") pod \"glance-default-internal-api-0\" (UID: \"92c28892-dc0c-4bf5-bd5f-1ed4b702852f\") " pod="openstack/glance-default-internal-api-0" Nov 22 07:47:53 crc kubenswrapper[4853]: I1122 07:47:53.960330 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/92c28892-dc0c-4bf5-bd5f-1ed4b702852f-scripts\") pod \"glance-default-internal-api-0\" (UID: \"92c28892-dc0c-4bf5-bd5f-1ed4b702852f\") " pod="openstack/glance-default-internal-api-0" Nov 22 07:47:53 crc kubenswrapper[4853]: I1122 07:47:53.960558 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"92c28892-dc0c-4bf5-bd5f-1ed4b702852f\") " pod="openstack/glance-default-internal-api-0" Nov 22 07:47:53 crc kubenswrapper[4853]: I1122 07:47:53.960727 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/92c28892-dc0c-4bf5-bd5f-1ed4b702852f-logs\") pod \"glance-default-internal-api-0\" (UID: \"92c28892-dc0c-4bf5-bd5f-1ed4b702852f\") " pod="openstack/glance-default-internal-api-0" Nov 22 07:47:53 crc kubenswrapper[4853]: I1122 07:47:53.960858 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92c28892-dc0c-4bf5-bd5f-1ed4b702852f-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"92c28892-dc0c-4bf5-bd5f-1ed4b702852f\") " pod="openstack/glance-default-internal-api-0" Nov 22 07:47:53 crc kubenswrapper[4853]: I1122 07:47:53.960982 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/92c28892-dc0c-4bf5-bd5f-1ed4b702852f-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"92c28892-dc0c-4bf5-bd5f-1ed4b702852f\") " pod="openstack/glance-default-internal-api-0" Nov 22 07:47:53 crc kubenswrapper[4853]: I1122 07:47:53.961289 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hm9dj\" (UniqueName: \"kubernetes.io/projected/92c28892-dc0c-4bf5-bd5f-1ed4b702852f-kube-api-access-hm9dj\") pod \"glance-default-internal-api-0\" (UID: \"92c28892-dc0c-4bf5-bd5f-1ed4b702852f\") " pod="openstack/glance-default-internal-api-0" Nov 22 07:47:53 crc kubenswrapper[4853]: I1122 07:47:53.961394 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92c28892-dc0c-4bf5-bd5f-1ed4b702852f-config-data\") pod \"glance-default-internal-api-0\" (UID: \"92c28892-dc0c-4bf5-bd5f-1ed4b702852f\") " pod="openstack/glance-default-internal-api-0" Nov 22 07:47:53 crc kubenswrapper[4853]: I1122 07:47:53.961506 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/92c28892-dc0c-4bf5-bd5f-1ed4b702852f-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"92c28892-dc0c-4bf5-bd5f-1ed4b702852f\") " pod="openstack/glance-default-internal-api-0" Nov 22 07:47:53 crc kubenswrapper[4853]: I1122 07:47:53.961596 4853 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"92c28892-dc0c-4bf5-bd5f-1ed4b702852f\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/glance-default-internal-api-0" Nov 22 07:47:53 crc kubenswrapper[4853]: I1122 07:47:53.962549 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/92c28892-dc0c-4bf5-bd5f-1ed4b702852f-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"92c28892-dc0c-4bf5-bd5f-1ed4b702852f\") " pod="openstack/glance-default-internal-api-0" Nov 22 07:47:53 crc kubenswrapper[4853]: I1122 07:47:53.967209 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/92c28892-dc0c-4bf5-bd5f-1ed4b702852f-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"92c28892-dc0c-4bf5-bd5f-1ed4b702852f\") " pod="openstack/glance-default-internal-api-0" Nov 22 07:47:53 crc kubenswrapper[4853]: I1122 07:47:53.974018 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92c28892-dc0c-4bf5-bd5f-1ed4b702852f-config-data\") pod \"glance-default-internal-api-0\" (UID: \"92c28892-dc0c-4bf5-bd5f-1ed4b702852f\") " pod="openstack/glance-default-internal-api-0" Nov 22 07:47:53 crc kubenswrapper[4853]: I1122 07:47:53.977575 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92c28892-dc0c-4bf5-bd5f-1ed4b702852f-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"92c28892-dc0c-4bf5-bd5f-1ed4b702852f\") " pod="openstack/glance-default-internal-api-0" Nov 22 07:47:53 crc kubenswrapper[4853]: I1122 07:47:53.981858 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/92c28892-dc0c-4bf5-bd5f-1ed4b702852f-scripts\") pod \"glance-default-internal-api-0\" (UID: \"92c28892-dc0c-4bf5-bd5f-1ed4b702852f\") " pod="openstack/glance-default-internal-api-0" Nov 22 07:47:53 crc kubenswrapper[4853]: I1122 07:47:53.992498 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hm9dj\" (UniqueName: \"kubernetes.io/projected/92c28892-dc0c-4bf5-bd5f-1ed4b702852f-kube-api-access-hm9dj\") pod \"glance-default-internal-api-0\" (UID: \"92c28892-dc0c-4bf5-bd5f-1ed4b702852f\") " pod="openstack/glance-default-internal-api-0" Nov 22 07:47:54 crc kubenswrapper[4853]: I1122 07:47:54.070808 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"92c28892-dc0c-4bf5-bd5f-1ed4b702852f\") " pod="openstack/glance-default-internal-api-0" Nov 22 07:47:54 crc kubenswrapper[4853]: I1122 07:47:54.088606 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 22 07:47:54 crc kubenswrapper[4853]: I1122 07:47:54.535931 4853 generic.go:334] "Generic (PLEG): container finished" podID="289fadd4-7721-4d8e-b33e-35606c18eedb" containerID="b468857845241d2a97ac6d4a96ce7db29071c3d8dac09d53fe2f6aa71460f5dd" exitCode=0 Nov 22 07:47:54 crc kubenswrapper[4853]: I1122 07:47:54.536042 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-dzsj4" event={"ID":"289fadd4-7721-4d8e-b33e-35606c18eedb","Type":"ContainerDied","Data":"b468857845241d2a97ac6d4a96ce7db29071c3d8dac09d53fe2f6aa71460f5dd"} Nov 22 07:47:54 crc kubenswrapper[4853]: I1122 07:47:54.545629 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7be38dfa-2557-43c7-83e8-f554a64db353","Type":"ContainerStarted","Data":"f020ee3caf61620863b2e1acbf7c561cce14516687e91d2a670a0d7f041324b8"} Nov 22 07:47:54 crc kubenswrapper[4853]: I1122 07:47:54.545679 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7be38dfa-2557-43c7-83e8-f554a64db353","Type":"ContainerStarted","Data":"d84e35f92d245ee5516f88e2c02a5a04e0d7668b66b88b8379344b66aaca4198"} Nov 22 07:47:54 crc kubenswrapper[4853]: I1122 07:47:54.729349 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 22 07:47:55 crc kubenswrapper[4853]: I1122 07:47:55.558277 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"92c28892-dc0c-4bf5-bd5f-1ed4b702852f","Type":"ContainerStarted","Data":"c73d30c8a75dd8e9e21f32087e306e0a25ae0d17e7bab59ab97d4ec06528c42c"} Nov 22 07:47:56 crc kubenswrapper[4853]: I1122 07:47:56.363225 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-dzsj4" Nov 22 07:47:56 crc kubenswrapper[4853]: I1122 07:47:56.436093 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6x7k2\" (UniqueName: \"kubernetes.io/projected/289fadd4-7721-4d8e-b33e-35606c18eedb-kube-api-access-6x7k2\") pod \"289fadd4-7721-4d8e-b33e-35606c18eedb\" (UID: \"289fadd4-7721-4d8e-b33e-35606c18eedb\") " Nov 22 07:47:56 crc kubenswrapper[4853]: I1122 07:47:56.436428 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/289fadd4-7721-4d8e-b33e-35606c18eedb-combined-ca-bundle\") pod \"289fadd4-7721-4d8e-b33e-35606c18eedb\" (UID: \"289fadd4-7721-4d8e-b33e-35606c18eedb\") " Nov 22 07:47:56 crc kubenswrapper[4853]: I1122 07:47:56.436499 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/289fadd4-7721-4d8e-b33e-35606c18eedb-db-sync-config-data\") pod \"289fadd4-7721-4d8e-b33e-35606c18eedb\" (UID: \"289fadd4-7721-4d8e-b33e-35606c18eedb\") " Nov 22 07:47:56 crc kubenswrapper[4853]: I1122 07:47:56.445100 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/289fadd4-7721-4d8e-b33e-35606c18eedb-kube-api-access-6x7k2" (OuterVolumeSpecName: "kube-api-access-6x7k2") pod "289fadd4-7721-4d8e-b33e-35606c18eedb" (UID: "289fadd4-7721-4d8e-b33e-35606c18eedb"). InnerVolumeSpecName "kube-api-access-6x7k2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:47:56 crc kubenswrapper[4853]: I1122 07:47:56.453053 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/289fadd4-7721-4d8e-b33e-35606c18eedb-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "289fadd4-7721-4d8e-b33e-35606c18eedb" (UID: "289fadd4-7721-4d8e-b33e-35606c18eedb"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:47:56 crc kubenswrapper[4853]: I1122 07:47:56.500815 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/289fadd4-7721-4d8e-b33e-35606c18eedb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "289fadd4-7721-4d8e-b33e-35606c18eedb" (UID: "289fadd4-7721-4d8e-b33e-35606c18eedb"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:47:56 crc kubenswrapper[4853]: I1122 07:47:56.539952 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6x7k2\" (UniqueName: \"kubernetes.io/projected/289fadd4-7721-4d8e-b33e-35606c18eedb-kube-api-access-6x7k2\") on node \"crc\" DevicePath \"\"" Nov 22 07:47:56 crc kubenswrapper[4853]: I1122 07:47:56.539996 4853 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/289fadd4-7721-4d8e-b33e-35606c18eedb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:47:56 crc kubenswrapper[4853]: I1122 07:47:56.540009 4853 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/289fadd4-7721-4d8e-b33e-35606c18eedb-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 07:47:56 crc kubenswrapper[4853]: I1122 07:47:56.572423 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-dzsj4" event={"ID":"289fadd4-7721-4d8e-b33e-35606c18eedb","Type":"ContainerDied","Data":"adb193d85390461d2f5fdd9eaba68b2a61a648931ae5cdb4ce66623a69685122"} Nov 22 07:47:56 crc kubenswrapper[4853]: I1122 07:47:56.572477 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="adb193d85390461d2f5fdd9eaba68b2a61a648931ae5cdb4ce66623a69685122" Nov 22 07:47:56 crc kubenswrapper[4853]: I1122 07:47:56.572558 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-dzsj4" Nov 22 07:47:56 crc kubenswrapper[4853]: I1122 07:47:56.578047 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"92c28892-dc0c-4bf5-bd5f-1ed4b702852f","Type":"ContainerStarted","Data":"149bf0bad5e4e1faa85ac2ab97bce2c65c144851c388a0fb4f73100142a74dfc"} Nov 22 07:47:57 crc kubenswrapper[4853]: I1122 07:47:57.398098 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6d66f584d7-6lnjx"] Nov 22 07:47:57 crc kubenswrapper[4853]: E1122 07:47:57.399212 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="289fadd4-7721-4d8e-b33e-35606c18eedb" containerName="barbican-db-sync" Nov 22 07:47:57 crc kubenswrapper[4853]: I1122 07:47:57.399238 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="289fadd4-7721-4d8e-b33e-35606c18eedb" containerName="barbican-db-sync" Nov 22 07:47:57 crc kubenswrapper[4853]: I1122 07:47:57.399493 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="289fadd4-7721-4d8e-b33e-35606c18eedb" containerName="barbican-db-sync" Nov 22 07:47:57 crc kubenswrapper[4853]: I1122 07:47:57.401329 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6d66f584d7-6lnjx" Nov 22 07:47:57 crc kubenswrapper[4853]: I1122 07:47:57.413284 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6d66f584d7-6lnjx"] Nov 22 07:47:57 crc kubenswrapper[4853]: I1122 07:47:57.469597 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-78dfc746b4-t4frx"] Nov 22 07:47:57 crc kubenswrapper[4853]: I1122 07:47:57.472086 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-78dfc746b4-t4frx" Nov 22 07:47:57 crc kubenswrapper[4853]: I1122 07:47:57.482332 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Nov 22 07:47:57 crc kubenswrapper[4853]: I1122 07:47:57.482367 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-kx9vl" Nov 22 07:47:57 crc kubenswrapper[4853]: I1122 07:47:57.482544 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Nov 22 07:47:57 crc kubenswrapper[4853]: I1122 07:47:57.495904 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-78dfc746b4-t4frx"] Nov 22 07:47:57 crc kubenswrapper[4853]: I1122 07:47:57.497621 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/25c8cdf7-96d5-43cf-b52b-a3b1d985b7a8-ovsdbserver-sb\") pod \"dnsmasq-dns-6d66f584d7-6lnjx\" (UID: \"25c8cdf7-96d5-43cf-b52b-a3b1d985b7a8\") " pod="openstack/dnsmasq-dns-6d66f584d7-6lnjx" Nov 22 07:47:57 crc kubenswrapper[4853]: I1122 07:47:57.497686 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/25c8cdf7-96d5-43cf-b52b-a3b1d985b7a8-config\") pod \"dnsmasq-dns-6d66f584d7-6lnjx\" (UID: \"25c8cdf7-96d5-43cf-b52b-a3b1d985b7a8\") " pod="openstack/dnsmasq-dns-6d66f584d7-6lnjx" Nov 22 07:47:57 crc kubenswrapper[4853]: I1122 07:47:57.497918 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/25c8cdf7-96d5-43cf-b52b-a3b1d985b7a8-dns-swift-storage-0\") pod \"dnsmasq-dns-6d66f584d7-6lnjx\" (UID: \"25c8cdf7-96d5-43cf-b52b-a3b1d985b7a8\") " pod="openstack/dnsmasq-dns-6d66f584d7-6lnjx" Nov 22 07:47:57 crc kubenswrapper[4853]: I1122 07:47:57.498030 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/25c8cdf7-96d5-43cf-b52b-a3b1d985b7a8-dns-svc\") pod \"dnsmasq-dns-6d66f584d7-6lnjx\" (UID: \"25c8cdf7-96d5-43cf-b52b-a3b1d985b7a8\") " pod="openstack/dnsmasq-dns-6d66f584d7-6lnjx" Nov 22 07:47:57 crc kubenswrapper[4853]: I1122 07:47:57.498085 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwjk7\" (UniqueName: \"kubernetes.io/projected/25c8cdf7-96d5-43cf-b52b-a3b1d985b7a8-kube-api-access-fwjk7\") pod \"dnsmasq-dns-6d66f584d7-6lnjx\" (UID: \"25c8cdf7-96d5-43cf-b52b-a3b1d985b7a8\") " pod="openstack/dnsmasq-dns-6d66f584d7-6lnjx" Nov 22 07:47:57 crc kubenswrapper[4853]: I1122 07:47:57.498274 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/25c8cdf7-96d5-43cf-b52b-a3b1d985b7a8-ovsdbserver-nb\") pod \"dnsmasq-dns-6d66f584d7-6lnjx\" (UID: \"25c8cdf7-96d5-43cf-b52b-a3b1d985b7a8\") " pod="openstack/dnsmasq-dns-6d66f584d7-6lnjx" Nov 22 07:47:57 crc kubenswrapper[4853]: I1122 07:47:57.601812 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/25c8cdf7-96d5-43cf-b52b-a3b1d985b7a8-dns-svc\") pod \"dnsmasq-dns-6d66f584d7-6lnjx\" (UID: \"25c8cdf7-96d5-43cf-b52b-a3b1d985b7a8\") " 
pod="openstack/dnsmasq-dns-6d66f584d7-6lnjx" Nov 22 07:47:57 crc kubenswrapper[4853]: I1122 07:47:57.601884 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fwjk7\" (UniqueName: \"kubernetes.io/projected/25c8cdf7-96d5-43cf-b52b-a3b1d985b7a8-kube-api-access-fwjk7\") pod \"dnsmasq-dns-6d66f584d7-6lnjx\" (UID: \"25c8cdf7-96d5-43cf-b52b-a3b1d985b7a8\") " pod="openstack/dnsmasq-dns-6d66f584d7-6lnjx" Nov 22 07:47:57 crc kubenswrapper[4853]: I1122 07:47:57.601962 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f8928af6-f48e-4697-a1d4-44880b78c43c-config-data-custom\") pod \"barbican-api-78dfc746b4-t4frx\" (UID: \"f8928af6-f48e-4697-a1d4-44880b78c43c\") " pod="openstack/barbican-api-78dfc746b4-t4frx" Nov 22 07:47:57 crc kubenswrapper[4853]: I1122 07:47:57.602032 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zjt4k\" (UniqueName: \"kubernetes.io/projected/f8928af6-f48e-4697-a1d4-44880b78c43c-kube-api-access-zjt4k\") pod \"barbican-api-78dfc746b4-t4frx\" (UID: \"f8928af6-f48e-4697-a1d4-44880b78c43c\") " pod="openstack/barbican-api-78dfc746b4-t4frx" Nov 22 07:47:57 crc kubenswrapper[4853]: I1122 07:47:57.602082 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/25c8cdf7-96d5-43cf-b52b-a3b1d985b7a8-ovsdbserver-nb\") pod \"dnsmasq-dns-6d66f584d7-6lnjx\" (UID: \"25c8cdf7-96d5-43cf-b52b-a3b1d985b7a8\") " pod="openstack/dnsmasq-dns-6d66f584d7-6lnjx" Nov 22 07:47:57 crc kubenswrapper[4853]: I1122 07:47:57.602132 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f8928af6-f48e-4697-a1d4-44880b78c43c-logs\") pod \"barbican-api-78dfc746b4-t4frx\" (UID: \"f8928af6-f48e-4697-a1d4-44880b78c43c\") " pod="openstack/barbican-api-78dfc746b4-t4frx" Nov 22 07:47:57 crc kubenswrapper[4853]: I1122 07:47:57.602334 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8928af6-f48e-4697-a1d4-44880b78c43c-combined-ca-bundle\") pod \"barbican-api-78dfc746b4-t4frx\" (UID: \"f8928af6-f48e-4697-a1d4-44880b78c43c\") " pod="openstack/barbican-api-78dfc746b4-t4frx" Nov 22 07:47:57 crc kubenswrapper[4853]: I1122 07:47:57.602427 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/25c8cdf7-96d5-43cf-b52b-a3b1d985b7a8-ovsdbserver-sb\") pod \"dnsmasq-dns-6d66f584d7-6lnjx\" (UID: \"25c8cdf7-96d5-43cf-b52b-a3b1d985b7a8\") " pod="openstack/dnsmasq-dns-6d66f584d7-6lnjx" Nov 22 07:47:57 crc kubenswrapper[4853]: I1122 07:47:57.602454 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8928af6-f48e-4697-a1d4-44880b78c43c-config-data\") pod \"barbican-api-78dfc746b4-t4frx\" (UID: \"f8928af6-f48e-4697-a1d4-44880b78c43c\") " pod="openstack/barbican-api-78dfc746b4-t4frx" Nov 22 07:47:57 crc kubenswrapper[4853]: I1122 07:47:57.602488 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/25c8cdf7-96d5-43cf-b52b-a3b1d985b7a8-config\") pod \"dnsmasq-dns-6d66f584d7-6lnjx\" (UID: 
\"25c8cdf7-96d5-43cf-b52b-a3b1d985b7a8\") " pod="openstack/dnsmasq-dns-6d66f584d7-6lnjx" Nov 22 07:47:57 crc kubenswrapper[4853]: I1122 07:47:57.602621 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/25c8cdf7-96d5-43cf-b52b-a3b1d985b7a8-dns-swift-storage-0\") pod \"dnsmasq-dns-6d66f584d7-6lnjx\" (UID: \"25c8cdf7-96d5-43cf-b52b-a3b1d985b7a8\") " pod="openstack/dnsmasq-dns-6d66f584d7-6lnjx" Nov 22 07:47:57 crc kubenswrapper[4853]: I1122 07:47:57.604173 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/25c8cdf7-96d5-43cf-b52b-a3b1d985b7a8-config\") pod \"dnsmasq-dns-6d66f584d7-6lnjx\" (UID: \"25c8cdf7-96d5-43cf-b52b-a3b1d985b7a8\") " pod="openstack/dnsmasq-dns-6d66f584d7-6lnjx" Nov 22 07:47:57 crc kubenswrapper[4853]: I1122 07:47:57.604179 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/25c8cdf7-96d5-43cf-b52b-a3b1d985b7a8-ovsdbserver-sb\") pod \"dnsmasq-dns-6d66f584d7-6lnjx\" (UID: \"25c8cdf7-96d5-43cf-b52b-a3b1d985b7a8\") " pod="openstack/dnsmasq-dns-6d66f584d7-6lnjx" Nov 22 07:47:57 crc kubenswrapper[4853]: I1122 07:47:57.604177 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/25c8cdf7-96d5-43cf-b52b-a3b1d985b7a8-ovsdbserver-nb\") pod \"dnsmasq-dns-6d66f584d7-6lnjx\" (UID: \"25c8cdf7-96d5-43cf-b52b-a3b1d985b7a8\") " pod="openstack/dnsmasq-dns-6d66f584d7-6lnjx" Nov 22 07:47:57 crc kubenswrapper[4853]: I1122 07:47:57.604435 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/25c8cdf7-96d5-43cf-b52b-a3b1d985b7a8-dns-swift-storage-0\") pod \"dnsmasq-dns-6d66f584d7-6lnjx\" (UID: \"25c8cdf7-96d5-43cf-b52b-a3b1d985b7a8\") " pod="openstack/dnsmasq-dns-6d66f584d7-6lnjx" Nov 22 07:47:57 crc kubenswrapper[4853]: I1122 07:47:57.605341 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/25c8cdf7-96d5-43cf-b52b-a3b1d985b7a8-dns-svc\") pod \"dnsmasq-dns-6d66f584d7-6lnjx\" (UID: \"25c8cdf7-96d5-43cf-b52b-a3b1d985b7a8\") " pod="openstack/dnsmasq-dns-6d66f584d7-6lnjx" Nov 22 07:47:57 crc kubenswrapper[4853]: I1122 07:47:57.635125 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fwjk7\" (UniqueName: \"kubernetes.io/projected/25c8cdf7-96d5-43cf-b52b-a3b1d985b7a8-kube-api-access-fwjk7\") pod \"dnsmasq-dns-6d66f584d7-6lnjx\" (UID: \"25c8cdf7-96d5-43cf-b52b-a3b1d985b7a8\") " pod="openstack/dnsmasq-dns-6d66f584d7-6lnjx" Nov 22 07:47:57 crc kubenswrapper[4853]: I1122 07:47:57.708819 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-5f67678855-5vc2g"] Nov 22 07:47:57 crc kubenswrapper[4853]: I1122 07:47:57.711373 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-5f67678855-5vc2g" Nov 22 07:47:57 crc kubenswrapper[4853]: I1122 07:47:57.723462 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Nov 22 07:47:57 crc kubenswrapper[4853]: I1122 07:47:57.731897 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6d66f584d7-6lnjx" Nov 22 07:47:57 crc kubenswrapper[4853]: I1122 07:47:57.732871 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f8928af6-f48e-4697-a1d4-44880b78c43c-logs\") pod \"barbican-api-78dfc746b4-t4frx\" (UID: \"f8928af6-f48e-4697-a1d4-44880b78c43c\") " pod="openstack/barbican-api-78dfc746b4-t4frx" Nov 22 07:47:57 crc kubenswrapper[4853]: I1122 07:47:57.733036 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8928af6-f48e-4697-a1d4-44880b78c43c-combined-ca-bundle\") pod \"barbican-api-78dfc746b4-t4frx\" (UID: \"f8928af6-f48e-4697-a1d4-44880b78c43c\") " pod="openstack/barbican-api-78dfc746b4-t4frx" Nov 22 07:47:57 crc kubenswrapper[4853]: I1122 07:47:57.733266 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8928af6-f48e-4697-a1d4-44880b78c43c-config-data\") pod \"barbican-api-78dfc746b4-t4frx\" (UID: \"f8928af6-f48e-4697-a1d4-44880b78c43c\") " pod="openstack/barbican-api-78dfc746b4-t4frx" Nov 22 07:47:57 crc kubenswrapper[4853]: I1122 07:47:57.733705 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f8928af6-f48e-4697-a1d4-44880b78c43c-logs\") pod \"barbican-api-78dfc746b4-t4frx\" (UID: \"f8928af6-f48e-4697-a1d4-44880b78c43c\") " pod="openstack/barbican-api-78dfc746b4-t4frx" Nov 22 07:47:57 crc kubenswrapper[4853]: I1122 07:47:57.733932 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f8928af6-f48e-4697-a1d4-44880b78c43c-config-data-custom\") pod \"barbican-api-78dfc746b4-t4frx\" (UID: \"f8928af6-f48e-4697-a1d4-44880b78c43c\") " pod="openstack/barbican-api-78dfc746b4-t4frx" Nov 22 07:47:57 crc kubenswrapper[4853]: I1122 07:47:57.734056 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zjt4k\" (UniqueName: \"kubernetes.io/projected/f8928af6-f48e-4697-a1d4-44880b78c43c-kube-api-access-zjt4k\") pod \"barbican-api-78dfc746b4-t4frx\" (UID: \"f8928af6-f48e-4697-a1d4-44880b78c43c\") " pod="openstack/barbican-api-78dfc746b4-t4frx" Nov 22 07:47:57 crc kubenswrapper[4853]: I1122 07:47:57.741691 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8928af6-f48e-4697-a1d4-44880b78c43c-config-data\") pod \"barbican-api-78dfc746b4-t4frx\" (UID: \"f8928af6-f48e-4697-a1d4-44880b78c43c\") " pod="openstack/barbican-api-78dfc746b4-t4frx" Nov 22 07:47:57 crc kubenswrapper[4853]: I1122 07:47:57.744065 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-5f67678855-5vc2g"] Nov 22 07:47:57 crc kubenswrapper[4853]: I1122 07:47:57.758916 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8928af6-f48e-4697-a1d4-44880b78c43c-combined-ca-bundle\") pod \"barbican-api-78dfc746b4-t4frx\" (UID: \"f8928af6-f48e-4697-a1d4-44880b78c43c\") " pod="openstack/barbican-api-78dfc746b4-t4frx" Nov 22 07:47:57 crc kubenswrapper[4853]: I1122 07:47:57.782760 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/f8928af6-f48e-4697-a1d4-44880b78c43c-config-data-custom\") pod \"barbican-api-78dfc746b4-t4frx\" (UID: \"f8928af6-f48e-4697-a1d4-44880b78c43c\") " pod="openstack/barbican-api-78dfc746b4-t4frx" Nov 22 07:47:57 crc kubenswrapper[4853]: I1122 07:47:57.797049 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zjt4k\" (UniqueName: \"kubernetes.io/projected/f8928af6-f48e-4697-a1d4-44880b78c43c-kube-api-access-zjt4k\") pod \"barbican-api-78dfc746b4-t4frx\" (UID: \"f8928af6-f48e-4697-a1d4-44880b78c43c\") " pod="openstack/barbican-api-78dfc746b4-t4frx" Nov 22 07:47:57 crc kubenswrapper[4853]: I1122 07:47:57.812268 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-78dfc746b4-t4frx" Nov 22 07:47:57 crc kubenswrapper[4853]: I1122 07:47:57.837051 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b9146abf-7a18-4ae8-a1e8-df3456597edf-logs\") pod \"barbican-worker-5f67678855-5vc2g\" (UID: \"b9146abf-7a18-4ae8-a1e8-df3456597edf\") " pod="openstack/barbican-worker-5f67678855-5vc2g" Nov 22 07:47:57 crc kubenswrapper[4853]: I1122 07:47:57.837322 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tcrt7\" (UniqueName: \"kubernetes.io/projected/b9146abf-7a18-4ae8-a1e8-df3456597edf-kube-api-access-tcrt7\") pod \"barbican-worker-5f67678855-5vc2g\" (UID: \"b9146abf-7a18-4ae8-a1e8-df3456597edf\") " pod="openstack/barbican-worker-5f67678855-5vc2g" Nov 22 07:47:57 crc kubenswrapper[4853]: I1122 07:47:57.837433 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9146abf-7a18-4ae8-a1e8-df3456597edf-config-data\") pod \"barbican-worker-5f67678855-5vc2g\" (UID: \"b9146abf-7a18-4ae8-a1e8-df3456597edf\") " pod="openstack/barbican-worker-5f67678855-5vc2g" Nov 22 07:47:57 crc kubenswrapper[4853]: I1122 07:47:57.837456 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9146abf-7a18-4ae8-a1e8-df3456597edf-combined-ca-bundle\") pod \"barbican-worker-5f67678855-5vc2g\" (UID: \"b9146abf-7a18-4ae8-a1e8-df3456597edf\") " pod="openstack/barbican-worker-5f67678855-5vc2g" Nov 22 07:47:57 crc kubenswrapper[4853]: I1122 07:47:57.837521 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b9146abf-7a18-4ae8-a1e8-df3456597edf-config-data-custom\") pod \"barbican-worker-5f67678855-5vc2g\" (UID: \"b9146abf-7a18-4ae8-a1e8-df3456597edf\") " pod="openstack/barbican-worker-5f67678855-5vc2g" Nov 22 07:47:57 crc kubenswrapper[4853]: I1122 07:47:57.837701 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-57684d7498-b46f9"] Nov 22 07:47:57 crc kubenswrapper[4853]: I1122 07:47:57.843997 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-57684d7498-b46f9" Nov 22 07:47:57 crc kubenswrapper[4853]: I1122 07:47:57.860672 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-57684d7498-b46f9"] Nov 22 07:47:57 crc kubenswrapper[4853]: I1122 07:47:57.861480 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Nov 22 07:47:57 crc kubenswrapper[4853]: I1122 07:47:57.941004 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0afdca33-fd60-4480-b1f7-29ec0199998e-logs\") pod \"barbican-keystone-listener-57684d7498-b46f9\" (UID: \"0afdca33-fd60-4480-b1f7-29ec0199998e\") " pod="openstack/barbican-keystone-listener-57684d7498-b46f9" Nov 22 07:47:57 crc kubenswrapper[4853]: I1122 07:47:57.941148 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b9146abf-7a18-4ae8-a1e8-df3456597edf-logs\") pod \"barbican-worker-5f67678855-5vc2g\" (UID: \"b9146abf-7a18-4ae8-a1e8-df3456597edf\") " pod="openstack/barbican-worker-5f67678855-5vc2g" Nov 22 07:47:57 crc kubenswrapper[4853]: I1122 07:47:57.941534 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0afdca33-fd60-4480-b1f7-29ec0199998e-config-data\") pod \"barbican-keystone-listener-57684d7498-b46f9\" (UID: \"0afdca33-fd60-4480-b1f7-29ec0199998e\") " pod="openstack/barbican-keystone-listener-57684d7498-b46f9" Nov 22 07:47:57 crc kubenswrapper[4853]: I1122 07:47:57.941585 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0afdca33-fd60-4480-b1f7-29ec0199998e-combined-ca-bundle\") pod \"barbican-keystone-listener-57684d7498-b46f9\" (UID: \"0afdca33-fd60-4480-b1f7-29ec0199998e\") " pod="openstack/barbican-keystone-listener-57684d7498-b46f9" Nov 22 07:47:57 crc kubenswrapper[4853]: I1122 07:47:57.941647 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tcrt7\" (UniqueName: \"kubernetes.io/projected/b9146abf-7a18-4ae8-a1e8-df3456597edf-kube-api-access-tcrt7\") pod \"barbican-worker-5f67678855-5vc2g\" (UID: \"b9146abf-7a18-4ae8-a1e8-df3456597edf\") " pod="openstack/barbican-worker-5f67678855-5vc2g" Nov 22 07:47:57 crc kubenswrapper[4853]: I1122 07:47:57.941953 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9146abf-7a18-4ae8-a1e8-df3456597edf-config-data\") pod \"barbican-worker-5f67678855-5vc2g\" (UID: \"b9146abf-7a18-4ae8-a1e8-df3456597edf\") " pod="openstack/barbican-worker-5f67678855-5vc2g" Nov 22 07:47:57 crc kubenswrapper[4853]: I1122 07:47:57.941980 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9146abf-7a18-4ae8-a1e8-df3456597edf-combined-ca-bundle\") pod \"barbican-worker-5f67678855-5vc2g\" (UID: \"b9146abf-7a18-4ae8-a1e8-df3456597edf\") " pod="openstack/barbican-worker-5f67678855-5vc2g" Nov 22 07:47:57 crc kubenswrapper[4853]: I1122 07:47:57.942054 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/b9146abf-7a18-4ae8-a1e8-df3456597edf-config-data-custom\") pod \"barbican-worker-5f67678855-5vc2g\" (UID: \"b9146abf-7a18-4ae8-a1e8-df3456597edf\") " pod="openstack/barbican-worker-5f67678855-5vc2g" Nov 22 07:47:57 crc kubenswrapper[4853]: I1122 07:47:57.942084 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sd2l4\" (UniqueName: \"kubernetes.io/projected/0afdca33-fd60-4480-b1f7-29ec0199998e-kube-api-access-sd2l4\") pod \"barbican-keystone-listener-57684d7498-b46f9\" (UID: \"0afdca33-fd60-4480-b1f7-29ec0199998e\") " pod="openstack/barbican-keystone-listener-57684d7498-b46f9" Nov 22 07:47:57 crc kubenswrapper[4853]: I1122 07:47:57.942144 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0afdca33-fd60-4480-b1f7-29ec0199998e-config-data-custom\") pod \"barbican-keystone-listener-57684d7498-b46f9\" (UID: \"0afdca33-fd60-4480-b1f7-29ec0199998e\") " pod="openstack/barbican-keystone-listener-57684d7498-b46f9" Nov 22 07:47:57 crc kubenswrapper[4853]: I1122 07:47:57.942435 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b9146abf-7a18-4ae8-a1e8-df3456597edf-logs\") pod \"barbican-worker-5f67678855-5vc2g\" (UID: \"b9146abf-7a18-4ae8-a1e8-df3456597edf\") " pod="openstack/barbican-worker-5f67678855-5vc2g" Nov 22 07:47:57 crc kubenswrapper[4853]: I1122 07:47:57.950467 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b9146abf-7a18-4ae8-a1e8-df3456597edf-config-data-custom\") pod \"barbican-worker-5f67678855-5vc2g\" (UID: \"b9146abf-7a18-4ae8-a1e8-df3456597edf\") " pod="openstack/barbican-worker-5f67678855-5vc2g" Nov 22 07:47:57 crc kubenswrapper[4853]: I1122 07:47:57.951740 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9146abf-7a18-4ae8-a1e8-df3456597edf-config-data\") pod \"barbican-worker-5f67678855-5vc2g\" (UID: \"b9146abf-7a18-4ae8-a1e8-df3456597edf\") " pod="openstack/barbican-worker-5f67678855-5vc2g" Nov 22 07:47:57 crc kubenswrapper[4853]: I1122 07:47:57.951858 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9146abf-7a18-4ae8-a1e8-df3456597edf-combined-ca-bundle\") pod \"barbican-worker-5f67678855-5vc2g\" (UID: \"b9146abf-7a18-4ae8-a1e8-df3456597edf\") " pod="openstack/barbican-worker-5f67678855-5vc2g" Nov 22 07:47:57 crc kubenswrapper[4853]: I1122 07:47:57.970557 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tcrt7\" (UniqueName: \"kubernetes.io/projected/b9146abf-7a18-4ae8-a1e8-df3456597edf-kube-api-access-tcrt7\") pod \"barbican-worker-5f67678855-5vc2g\" (UID: \"b9146abf-7a18-4ae8-a1e8-df3456597edf\") " pod="openstack/barbican-worker-5f67678855-5vc2g" Nov 22 07:47:57 crc kubenswrapper[4853]: I1122 07:47:57.997888 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-5f67678855-5vc2g" Nov 22 07:47:58 crc kubenswrapper[4853]: I1122 07:47:58.045596 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sd2l4\" (UniqueName: \"kubernetes.io/projected/0afdca33-fd60-4480-b1f7-29ec0199998e-kube-api-access-sd2l4\") pod \"barbican-keystone-listener-57684d7498-b46f9\" (UID: \"0afdca33-fd60-4480-b1f7-29ec0199998e\") " pod="openstack/barbican-keystone-listener-57684d7498-b46f9" Nov 22 07:47:58 crc kubenswrapper[4853]: I1122 07:47:58.045674 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0afdca33-fd60-4480-b1f7-29ec0199998e-config-data-custom\") pod \"barbican-keystone-listener-57684d7498-b46f9\" (UID: \"0afdca33-fd60-4480-b1f7-29ec0199998e\") " pod="openstack/barbican-keystone-listener-57684d7498-b46f9" Nov 22 07:47:58 crc kubenswrapper[4853]: I1122 07:47:58.045803 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0afdca33-fd60-4480-b1f7-29ec0199998e-logs\") pod \"barbican-keystone-listener-57684d7498-b46f9\" (UID: \"0afdca33-fd60-4480-b1f7-29ec0199998e\") " pod="openstack/barbican-keystone-listener-57684d7498-b46f9" Nov 22 07:47:58 crc kubenswrapper[4853]: I1122 07:47:58.045964 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0afdca33-fd60-4480-b1f7-29ec0199998e-config-data\") pod \"barbican-keystone-listener-57684d7498-b46f9\" (UID: \"0afdca33-fd60-4480-b1f7-29ec0199998e\") " pod="openstack/barbican-keystone-listener-57684d7498-b46f9" Nov 22 07:47:58 crc kubenswrapper[4853]: I1122 07:47:58.046011 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0afdca33-fd60-4480-b1f7-29ec0199998e-combined-ca-bundle\") pod \"barbican-keystone-listener-57684d7498-b46f9\" (UID: \"0afdca33-fd60-4480-b1f7-29ec0199998e\") " pod="openstack/barbican-keystone-listener-57684d7498-b46f9" Nov 22 07:47:58 crc kubenswrapper[4853]: I1122 07:47:58.047356 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0afdca33-fd60-4480-b1f7-29ec0199998e-logs\") pod \"barbican-keystone-listener-57684d7498-b46f9\" (UID: \"0afdca33-fd60-4480-b1f7-29ec0199998e\") " pod="openstack/barbican-keystone-listener-57684d7498-b46f9" Nov 22 07:47:58 crc kubenswrapper[4853]: I1122 07:47:58.050975 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0afdca33-fd60-4480-b1f7-29ec0199998e-combined-ca-bundle\") pod \"barbican-keystone-listener-57684d7498-b46f9\" (UID: \"0afdca33-fd60-4480-b1f7-29ec0199998e\") " pod="openstack/barbican-keystone-listener-57684d7498-b46f9" Nov 22 07:47:58 crc kubenswrapper[4853]: I1122 07:47:58.051062 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0afdca33-fd60-4480-b1f7-29ec0199998e-config-data-custom\") pod \"barbican-keystone-listener-57684d7498-b46f9\" (UID: \"0afdca33-fd60-4480-b1f7-29ec0199998e\") " pod="openstack/barbican-keystone-listener-57684d7498-b46f9" Nov 22 07:47:58 crc kubenswrapper[4853]: I1122 07:47:58.052326 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/0afdca33-fd60-4480-b1f7-29ec0199998e-config-data\") pod \"barbican-keystone-listener-57684d7498-b46f9\" (UID: \"0afdca33-fd60-4480-b1f7-29ec0199998e\") " pod="openstack/barbican-keystone-listener-57684d7498-b46f9" Nov 22 07:47:58 crc kubenswrapper[4853]: I1122 07:47:58.071143 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sd2l4\" (UniqueName: \"kubernetes.io/projected/0afdca33-fd60-4480-b1f7-29ec0199998e-kube-api-access-sd2l4\") pod \"barbican-keystone-listener-57684d7498-b46f9\" (UID: \"0afdca33-fd60-4480-b1f7-29ec0199998e\") " pod="openstack/barbican-keystone-listener-57684d7498-b46f9" Nov 22 07:47:58 crc kubenswrapper[4853]: I1122 07:47:58.326637 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-57684d7498-b46f9" Nov 22 07:47:58 crc kubenswrapper[4853]: I1122 07:47:58.510829 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6d66f584d7-6lnjx"] Nov 22 07:47:58 crc kubenswrapper[4853]: W1122 07:47:58.572913 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod25c8cdf7_96d5_43cf_b52b_a3b1d985b7a8.slice/crio-2307fc8e6071eef6c01165f890c47afb9e7851e847f37125b3102b07edf44ea9 WatchSource:0}: Error finding container 2307fc8e6071eef6c01165f890c47afb9e7851e847f37125b3102b07edf44ea9: Status 404 returned error can't find the container with id 2307fc8e6071eef6c01165f890c47afb9e7851e847f37125b3102b07edf44ea9 Nov 22 07:47:58 crc kubenswrapper[4853]: I1122 07:47:58.608230 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d66f584d7-6lnjx" event={"ID":"25c8cdf7-96d5-43cf-b52b-a3b1d985b7a8","Type":"ContainerStarted","Data":"2307fc8e6071eef6c01165f890c47afb9e7851e847f37125b3102b07edf44ea9"} Nov 22 07:47:59 crc kubenswrapper[4853]: I1122 07:47:59.598168 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-78dfc746b4-t4frx"] Nov 22 07:47:59 crc kubenswrapper[4853]: I1122 07:47:59.622644 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-78dfc746b4-t4frx" event={"ID":"f8928af6-f48e-4697-a1d4-44880b78c43c","Type":"ContainerStarted","Data":"7f414b005f1d4798cddbc1d19673ce8a10d6aca33db0c9bf18db6fb7b588bc75"} Nov 22 07:48:00 crc kubenswrapper[4853]: I1122 07:48:00.469046 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-5f67678855-5vc2g"] Nov 22 07:48:00 crc kubenswrapper[4853]: I1122 07:48:00.591571 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-b4ffw"] Nov 22 07:48:00 crc kubenswrapper[4853]: I1122 07:48:00.594889 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-b4ffw" Nov 22 07:48:00 crc kubenswrapper[4853]: I1122 07:48:00.606595 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-b4ffw"] Nov 22 07:48:00 crc kubenswrapper[4853]: I1122 07:48:00.674367 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-57684d7498-b46f9"] Nov 22 07:48:00 crc kubenswrapper[4853]: I1122 07:48:00.698319 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5f67678855-5vc2g" event={"ID":"b9146abf-7a18-4ae8-a1e8-df3456597edf","Type":"ContainerStarted","Data":"f906ed01d5f9dc2a00bf6b244b3414f4ab7c0b831d970c1da1b0ed83c7c0ea1a"} Nov 22 07:48:00 crc kubenswrapper[4853]: I1122 07:48:00.722387 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/863918c2-c760-4c96-888f-a778bcbb018b-catalog-content\") pod \"community-operators-b4ffw\" (UID: \"863918c2-c760-4c96-888f-a778bcbb018b\") " pod="openshift-marketplace/community-operators-b4ffw" Nov 22 07:48:00 crc kubenswrapper[4853]: I1122 07:48:00.722547 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/863918c2-c760-4c96-888f-a778bcbb018b-utilities\") pod \"community-operators-b4ffw\" (UID: \"863918c2-c760-4c96-888f-a778bcbb018b\") " pod="openshift-marketplace/community-operators-b4ffw" Nov 22 07:48:00 crc kubenswrapper[4853]: I1122 07:48:00.722647 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hcg87\" (UniqueName: \"kubernetes.io/projected/863918c2-c760-4c96-888f-a778bcbb018b-kube-api-access-hcg87\") pod \"community-operators-b4ffw\" (UID: \"863918c2-c760-4c96-888f-a778bcbb018b\") " pod="openshift-marketplace/community-operators-b4ffw" Nov 22 07:48:00 crc kubenswrapper[4853]: I1122 07:48:00.825657 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/863918c2-c760-4c96-888f-a778bcbb018b-catalog-content\") pod \"community-operators-b4ffw\" (UID: \"863918c2-c760-4c96-888f-a778bcbb018b\") " pod="openshift-marketplace/community-operators-b4ffw" Nov 22 07:48:00 crc kubenswrapper[4853]: I1122 07:48:00.825826 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/863918c2-c760-4c96-888f-a778bcbb018b-utilities\") pod \"community-operators-b4ffw\" (UID: \"863918c2-c760-4c96-888f-a778bcbb018b\") " pod="openshift-marketplace/community-operators-b4ffw" Nov 22 07:48:00 crc kubenswrapper[4853]: I1122 07:48:00.825911 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hcg87\" (UniqueName: \"kubernetes.io/projected/863918c2-c760-4c96-888f-a778bcbb018b-kube-api-access-hcg87\") pod \"community-operators-b4ffw\" (UID: \"863918c2-c760-4c96-888f-a778bcbb018b\") " pod="openshift-marketplace/community-operators-b4ffw" Nov 22 07:48:00 crc kubenswrapper[4853]: I1122 07:48:00.826911 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/863918c2-c760-4c96-888f-a778bcbb018b-catalog-content\") pod \"community-operators-b4ffw\" (UID: \"863918c2-c760-4c96-888f-a778bcbb018b\") " 
pod="openshift-marketplace/community-operators-b4ffw" Nov 22 07:48:00 crc kubenswrapper[4853]: I1122 07:48:00.829508 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/863918c2-c760-4c96-888f-a778bcbb018b-utilities\") pod \"community-operators-b4ffw\" (UID: \"863918c2-c760-4c96-888f-a778bcbb018b\") " pod="openshift-marketplace/community-operators-b4ffw" Nov 22 07:48:00 crc kubenswrapper[4853]: I1122 07:48:00.853614 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hcg87\" (UniqueName: \"kubernetes.io/projected/863918c2-c760-4c96-888f-a778bcbb018b-kube-api-access-hcg87\") pod \"community-operators-b4ffw\" (UID: \"863918c2-c760-4c96-888f-a778bcbb018b\") " pod="openshift-marketplace/community-operators-b4ffw" Nov 22 07:48:00 crc kubenswrapper[4853]: I1122 07:48:00.928686 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-b4ffw" Nov 22 07:48:01 crc kubenswrapper[4853]: I1122 07:48:01.297255 4853 patch_prober.go:28] interesting pod/machine-config-daemon-fflvd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 07:48:01 crc kubenswrapper[4853]: I1122 07:48:01.297844 4853 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 07:48:01 crc kubenswrapper[4853]: I1122 07:48:01.297911 4853 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" Nov 22 07:48:01 crc kubenswrapper[4853]: I1122 07:48:01.308416 4853 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"93242edc98369aed066eebfb95cc23d28e71df7ebef2302dd5a716d3fb81aedd"} pod="openshift-machine-config-operator/machine-config-daemon-fflvd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 22 07:48:01 crc kubenswrapper[4853]: I1122 07:48:01.308517 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" containerName="machine-config-daemon" containerID="cri-o://93242edc98369aed066eebfb95cc23d28e71df7ebef2302dd5a716d3fb81aedd" gracePeriod=600 Nov 22 07:48:01 crc kubenswrapper[4853]: E1122 07:48:01.612608 4853 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod476c875a_2b87_419a_8042_0ba059620fd8.slice/crio-93242edc98369aed066eebfb95cc23d28e71df7ebef2302dd5a716d3fb81aedd.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod476c875a_2b87_419a_8042_0ba059620fd8.slice/crio-conmon-93242edc98369aed066eebfb95cc23d28e71df7ebef2302dd5a716d3fb81aedd.scope\": RecentStats: unable to find data in memory cache]" Nov 22 07:48:01 crc kubenswrapper[4853]: I1122 07:48:01.609072 4853 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-b4ffw"] Nov 22 07:48:01 crc kubenswrapper[4853]: W1122 07:48:01.637550 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod863918c2_c760_4c96_888f_a778bcbb018b.slice/crio-bde6935f3894d331461ef6321f0eb277fa58a3d4a54f531fbbb81ab1202a246f WatchSource:0}: Error finding container bde6935f3894d331461ef6321f0eb277fa58a3d4a54f531fbbb81ab1202a246f: Status 404 returned error can't find the container with id bde6935f3894d331461ef6321f0eb277fa58a3d4a54f531fbbb81ab1202a246f Nov 22 07:48:01 crc kubenswrapper[4853]: I1122 07:48:01.720412 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 22 07:48:01 crc kubenswrapper[4853]: I1122 07:48:01.721144 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="39d11b3b-9490-41d8-87ad-542cddb9cc6b" containerName="glance-log" containerID="cri-o://739b31c91720f2ec0951dab78f0a956c3fd5e6b021ba0ebea0f5224904573651" gracePeriod=30 Nov 22 07:48:01 crc kubenswrapper[4853]: I1122 07:48:01.721277 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="39d11b3b-9490-41d8-87ad-542cddb9cc6b" containerName="glance-httpd" containerID="cri-o://2fef05ea5e3d441fe9fb192e15b5ac4bfacf586bae220c5364d42b62e3be6f8f" gracePeriod=30 Nov 22 07:48:01 crc kubenswrapper[4853]: I1122 07:48:01.741513 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7be38dfa-2557-43c7-83e8-f554a64db353","Type":"ContainerStarted","Data":"1c962b5888e6fb853b324a4ba5aec4534e6eb702c77706ed94c453eabfd768ea"} Nov 22 07:48:01 crc kubenswrapper[4853]: I1122 07:48:01.741707 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 22 07:48:01 crc kubenswrapper[4853]: I1122 07:48:01.741666 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7be38dfa-2557-43c7-83e8-f554a64db353" containerName="ceilometer-central-agent" containerID="cri-o://2b9f93c2d4a31a06215ba9cfc32f7192812fa6c3bfd73fb30cff005830ac1d36" gracePeriod=30 Nov 22 07:48:01 crc kubenswrapper[4853]: I1122 07:48:01.742128 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7be38dfa-2557-43c7-83e8-f554a64db353" containerName="proxy-httpd" containerID="cri-o://1c962b5888e6fb853b324a4ba5aec4534e6eb702c77706ed94c453eabfd768ea" gracePeriod=30 Nov 22 07:48:01 crc kubenswrapper[4853]: I1122 07:48:01.742287 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7be38dfa-2557-43c7-83e8-f554a64db353" containerName="sg-core" containerID="cri-o://f020ee3caf61620863b2e1acbf7c561cce14516687e91d2a670a0d7f041324b8" gracePeriod=30 Nov 22 07:48:01 crc kubenswrapper[4853]: I1122 07:48:01.742351 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7be38dfa-2557-43c7-83e8-f554a64db353" containerName="ceilometer-notification-agent" containerID="cri-o://d84e35f92d245ee5516f88e2c02a5a04e0d7668b66b88b8379344b66aaca4198" gracePeriod=30 Nov 22 07:48:01 crc kubenswrapper[4853]: I1122 07:48:01.745611 4853 generic.go:334] "Generic (PLEG): container finished" podID="25c8cdf7-96d5-43cf-b52b-a3b1d985b7a8" 
containerID="d918d370ed863051395cd254e6a85946d7ea846df44e4c12a1bb09910d401cf5" exitCode=0 Nov 22 07:48:01 crc kubenswrapper[4853]: I1122 07:48:01.745709 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d66f584d7-6lnjx" event={"ID":"25c8cdf7-96d5-43cf-b52b-a3b1d985b7a8","Type":"ContainerDied","Data":"d918d370ed863051395cd254e6a85946d7ea846df44e4c12a1bb09910d401cf5"} Nov 22 07:48:01 crc kubenswrapper[4853]: I1122 07:48:01.797946 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-78dfc746b4-t4frx" event={"ID":"f8928af6-f48e-4697-a1d4-44880b78c43c","Type":"ContainerStarted","Data":"5154f2248032d3b204066e6d3d4b29c26f050e50ddfa355693e793433a3a28e6"} Nov 22 07:48:01 crc kubenswrapper[4853]: I1122 07:48:01.798302 4853 generic.go:334] "Generic (PLEG): container finished" podID="29d503fd-37f2-453c-aba9-5d2fb2c6aad0" containerID="9b54259d55869e27ba8f9e308f53955791c1604ec2b276eeb471d9425fefad38" exitCode=0 Nov 22 07:48:01 crc kubenswrapper[4853]: I1122 07:48:01.798379 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-nnfsq" event={"ID":"29d503fd-37f2-453c-aba9-5d2fb2c6aad0","Type":"ContainerDied","Data":"9b54259d55869e27ba8f9e308f53955791c1604ec2b276eeb471d9425fefad38"} Nov 22 07:48:01 crc kubenswrapper[4853]: I1122 07:48:01.860065 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"92c28892-dc0c-4bf5-bd5f-1ed4b702852f","Type":"ContainerStarted","Data":"965874ecc4b273a3b73e0be7e5feda0222973024930fe5930c4398cf3e625ced"} Nov 22 07:48:01 crc kubenswrapper[4853]: I1122 07:48:01.875271 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b4ffw" event={"ID":"863918c2-c760-4c96-888f-a778bcbb018b","Type":"ContainerStarted","Data":"bde6935f3894d331461ef6321f0eb277fa58a3d4a54f531fbbb81ab1202a246f"} Nov 22 07:48:01 crc kubenswrapper[4853]: I1122 07:48:01.890949 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.534458581 podStartE2EDuration="11.890910404s" podCreationTimestamp="2025-11-22 07:47:50 +0000 UTC" firstStartedPulling="2025-11-22 07:47:51.408163373 +0000 UTC m=+2270.248785999" lastFinishedPulling="2025-11-22 07:47:59.764615196 +0000 UTC m=+2278.605237822" observedRunningTime="2025-11-22 07:48:01.779559931 +0000 UTC m=+2280.620182547" watchObservedRunningTime="2025-11-22 07:48:01.890910404 +0000 UTC m=+2280.731533030" Nov 22 07:48:01 crc kubenswrapper[4853]: I1122 07:48:01.905967 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-57684d7498-b46f9" event={"ID":"0afdca33-fd60-4480-b1f7-29ec0199998e","Type":"ContainerStarted","Data":"b7e7ae3fe9e2d56090b232049ce92a212e0f3c9befa39f8e9f7f7a0ef2f62ff0"} Nov 22 07:48:01 crc kubenswrapper[4853]: I1122 07:48:01.946530 4853 generic.go:334] "Generic (PLEG): container finished" podID="476c875a-2b87-419a-8042-0ba059620fd8" containerID="93242edc98369aed066eebfb95cc23d28e71df7ebef2302dd5a716d3fb81aedd" exitCode=0 Nov 22 07:48:01 crc kubenswrapper[4853]: I1122 07:48:01.946933 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" event={"ID":"476c875a-2b87-419a-8042-0ba059620fd8","Type":"ContainerDied","Data":"93242edc98369aed066eebfb95cc23d28e71df7ebef2302dd5a716d3fb81aedd"} Nov 22 07:48:01 crc kubenswrapper[4853]: I1122 07:48:01.947055 4853 scope.go:117] "RemoveContainer" 
containerID="1e441baa32afe9d9e98e192afda6a47d949c68ceb8edb814ef148dd3d84f45b1" Nov 22 07:48:01 crc kubenswrapper[4853]: I1122 07:48:01.961819 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=8.961795456 podStartE2EDuration="8.961795456s" podCreationTimestamp="2025-11-22 07:47:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:48:01.908836047 +0000 UTC m=+2280.749458673" watchObservedRunningTime="2025-11-22 07:48:01.961795456 +0000 UTC m=+2280.802418082" Nov 22 07:48:02 crc kubenswrapper[4853]: I1122 07:48:02.127867 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-d58855874-6hg9r"] Nov 22 07:48:02 crc kubenswrapper[4853]: I1122 07:48:02.130329 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-d58855874-6hg9r" Nov 22 07:48:02 crc kubenswrapper[4853]: I1122 07:48:02.133041 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Nov 22 07:48:02 crc kubenswrapper[4853]: I1122 07:48:02.133124 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Nov 22 07:48:02 crc kubenswrapper[4853]: I1122 07:48:02.140408 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-d58855874-6hg9r"] Nov 22 07:48:02 crc kubenswrapper[4853]: I1122 07:48:02.174921 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w85nm\" (UniqueName: \"kubernetes.io/projected/1ea82711-6541-4717-8711-16a13f6ce28c-kube-api-access-w85nm\") pod \"barbican-api-d58855874-6hg9r\" (UID: \"1ea82711-6541-4717-8711-16a13f6ce28c\") " pod="openstack/barbican-api-d58855874-6hg9r" Nov 22 07:48:02 crc kubenswrapper[4853]: I1122 07:48:02.175015 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1ea82711-6541-4717-8711-16a13f6ce28c-logs\") pod \"barbican-api-d58855874-6hg9r\" (UID: \"1ea82711-6541-4717-8711-16a13f6ce28c\") " pod="openstack/barbican-api-d58855874-6hg9r" Nov 22 07:48:02 crc kubenswrapper[4853]: I1122 07:48:02.175056 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ea82711-6541-4717-8711-16a13f6ce28c-config-data\") pod \"barbican-api-d58855874-6hg9r\" (UID: \"1ea82711-6541-4717-8711-16a13f6ce28c\") " pod="openstack/barbican-api-d58855874-6hg9r" Nov 22 07:48:02 crc kubenswrapper[4853]: I1122 07:48:02.175075 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1ea82711-6541-4717-8711-16a13f6ce28c-public-tls-certs\") pod \"barbican-api-d58855874-6hg9r\" (UID: \"1ea82711-6541-4717-8711-16a13f6ce28c\") " pod="openstack/barbican-api-d58855874-6hg9r" Nov 22 07:48:02 crc kubenswrapper[4853]: I1122 07:48:02.175135 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1ea82711-6541-4717-8711-16a13f6ce28c-config-data-custom\") pod \"barbican-api-d58855874-6hg9r\" (UID: \"1ea82711-6541-4717-8711-16a13f6ce28c\") " pod="openstack/barbican-api-d58855874-6hg9r" Nov 22 07:48:02 crc 
kubenswrapper[4853]: I1122 07:48:02.175185 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ea82711-6541-4717-8711-16a13f6ce28c-combined-ca-bundle\") pod \"barbican-api-d58855874-6hg9r\" (UID: \"1ea82711-6541-4717-8711-16a13f6ce28c\") " pod="openstack/barbican-api-d58855874-6hg9r" Nov 22 07:48:02 crc kubenswrapper[4853]: I1122 07:48:02.175224 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1ea82711-6541-4717-8711-16a13f6ce28c-internal-tls-certs\") pod \"barbican-api-d58855874-6hg9r\" (UID: \"1ea82711-6541-4717-8711-16a13f6ce28c\") " pod="openstack/barbican-api-d58855874-6hg9r" Nov 22 07:48:02 crc kubenswrapper[4853]: I1122 07:48:02.281703 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ea82711-6541-4717-8711-16a13f6ce28c-combined-ca-bundle\") pod \"barbican-api-d58855874-6hg9r\" (UID: \"1ea82711-6541-4717-8711-16a13f6ce28c\") " pod="openstack/barbican-api-d58855874-6hg9r" Nov 22 07:48:02 crc kubenswrapper[4853]: I1122 07:48:02.281948 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1ea82711-6541-4717-8711-16a13f6ce28c-internal-tls-certs\") pod \"barbican-api-d58855874-6hg9r\" (UID: \"1ea82711-6541-4717-8711-16a13f6ce28c\") " pod="openstack/barbican-api-d58855874-6hg9r" Nov 22 07:48:02 crc kubenswrapper[4853]: I1122 07:48:02.282113 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w85nm\" (UniqueName: \"kubernetes.io/projected/1ea82711-6541-4717-8711-16a13f6ce28c-kube-api-access-w85nm\") pod \"barbican-api-d58855874-6hg9r\" (UID: \"1ea82711-6541-4717-8711-16a13f6ce28c\") " pod="openstack/barbican-api-d58855874-6hg9r" Nov 22 07:48:02 crc kubenswrapper[4853]: I1122 07:48:02.282194 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1ea82711-6541-4717-8711-16a13f6ce28c-logs\") pod \"barbican-api-d58855874-6hg9r\" (UID: \"1ea82711-6541-4717-8711-16a13f6ce28c\") " pod="openstack/barbican-api-d58855874-6hg9r" Nov 22 07:48:02 crc kubenswrapper[4853]: I1122 07:48:02.282233 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ea82711-6541-4717-8711-16a13f6ce28c-config-data\") pod \"barbican-api-d58855874-6hg9r\" (UID: \"1ea82711-6541-4717-8711-16a13f6ce28c\") " pod="openstack/barbican-api-d58855874-6hg9r" Nov 22 07:48:02 crc kubenswrapper[4853]: I1122 07:48:02.282250 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1ea82711-6541-4717-8711-16a13f6ce28c-public-tls-certs\") pod \"barbican-api-d58855874-6hg9r\" (UID: \"1ea82711-6541-4717-8711-16a13f6ce28c\") " pod="openstack/barbican-api-d58855874-6hg9r" Nov 22 07:48:02 crc kubenswrapper[4853]: I1122 07:48:02.282325 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1ea82711-6541-4717-8711-16a13f6ce28c-config-data-custom\") pod \"barbican-api-d58855874-6hg9r\" (UID: \"1ea82711-6541-4717-8711-16a13f6ce28c\") " pod="openstack/barbican-api-d58855874-6hg9r" Nov 22 07:48:02 crc 
kubenswrapper[4853]: I1122 07:48:02.283319 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1ea82711-6541-4717-8711-16a13f6ce28c-logs\") pod \"barbican-api-d58855874-6hg9r\" (UID: \"1ea82711-6541-4717-8711-16a13f6ce28c\") " pod="openstack/barbican-api-d58855874-6hg9r" Nov 22 07:48:02 crc kubenswrapper[4853]: I1122 07:48:02.287875 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1ea82711-6541-4717-8711-16a13f6ce28c-internal-tls-certs\") pod \"barbican-api-d58855874-6hg9r\" (UID: \"1ea82711-6541-4717-8711-16a13f6ce28c\") " pod="openstack/barbican-api-d58855874-6hg9r" Nov 22 07:48:02 crc kubenswrapper[4853]: I1122 07:48:02.288562 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1ea82711-6541-4717-8711-16a13f6ce28c-public-tls-certs\") pod \"barbican-api-d58855874-6hg9r\" (UID: \"1ea82711-6541-4717-8711-16a13f6ce28c\") " pod="openstack/barbican-api-d58855874-6hg9r" Nov 22 07:48:02 crc kubenswrapper[4853]: I1122 07:48:02.288848 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1ea82711-6541-4717-8711-16a13f6ce28c-config-data-custom\") pod \"barbican-api-d58855874-6hg9r\" (UID: \"1ea82711-6541-4717-8711-16a13f6ce28c\") " pod="openstack/barbican-api-d58855874-6hg9r" Nov 22 07:48:02 crc kubenswrapper[4853]: I1122 07:48:02.290407 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ea82711-6541-4717-8711-16a13f6ce28c-combined-ca-bundle\") pod \"barbican-api-d58855874-6hg9r\" (UID: \"1ea82711-6541-4717-8711-16a13f6ce28c\") " pod="openstack/barbican-api-d58855874-6hg9r" Nov 22 07:48:02 crc kubenswrapper[4853]: I1122 07:48:02.291465 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ea82711-6541-4717-8711-16a13f6ce28c-config-data\") pod \"barbican-api-d58855874-6hg9r\" (UID: \"1ea82711-6541-4717-8711-16a13f6ce28c\") " pod="openstack/barbican-api-d58855874-6hg9r" Nov 22 07:48:02 crc kubenswrapper[4853]: I1122 07:48:02.303928 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w85nm\" (UniqueName: \"kubernetes.io/projected/1ea82711-6541-4717-8711-16a13f6ce28c-kube-api-access-w85nm\") pod \"barbican-api-d58855874-6hg9r\" (UID: \"1ea82711-6541-4717-8711-16a13f6ce28c\") " pod="openstack/barbican-api-d58855874-6hg9r" Nov 22 07:48:02 crc kubenswrapper[4853]: I1122 07:48:02.506001 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-d58855874-6hg9r" Nov 22 07:48:02 crc kubenswrapper[4853]: I1122 07:48:02.977168 4853 generic.go:334] "Generic (PLEG): container finished" podID="39d11b3b-9490-41d8-87ad-542cddb9cc6b" containerID="739b31c91720f2ec0951dab78f0a956c3fd5e6b021ba0ebea0f5224904573651" exitCode=143 Nov 22 07:48:02 crc kubenswrapper[4853]: I1122 07:48:02.977660 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"39d11b3b-9490-41d8-87ad-542cddb9cc6b","Type":"ContainerDied","Data":"739b31c91720f2ec0951dab78f0a956c3fd5e6b021ba0ebea0f5224904573651"} Nov 22 07:48:02 crc kubenswrapper[4853]: I1122 07:48:02.985405 4853 generic.go:334] "Generic (PLEG): container finished" podID="7be38dfa-2557-43c7-83e8-f554a64db353" containerID="1c962b5888e6fb853b324a4ba5aec4534e6eb702c77706ed94c453eabfd768ea" exitCode=0 Nov 22 07:48:02 crc kubenswrapper[4853]: I1122 07:48:02.985445 4853 generic.go:334] "Generic (PLEG): container finished" podID="7be38dfa-2557-43c7-83e8-f554a64db353" containerID="f020ee3caf61620863b2e1acbf7c561cce14516687e91d2a670a0d7f041324b8" exitCode=2 Nov 22 07:48:02 crc kubenswrapper[4853]: I1122 07:48:02.985703 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7be38dfa-2557-43c7-83e8-f554a64db353","Type":"ContainerDied","Data":"1c962b5888e6fb853b324a4ba5aec4534e6eb702c77706ed94c453eabfd768ea"} Nov 22 07:48:02 crc kubenswrapper[4853]: I1122 07:48:02.985737 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7be38dfa-2557-43c7-83e8-f554a64db353","Type":"ContainerDied","Data":"f020ee3caf61620863b2e1acbf7c561cce14516687e91d2a670a0d7f041324b8"} Nov 22 07:48:03 crc kubenswrapper[4853]: I1122 07:48:03.028499 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-d58855874-6hg9r"] Nov 22 07:48:03 crc kubenswrapper[4853]: I1122 07:48:03.743799 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-nnfsq"
Nov 22 07:48:03 crc kubenswrapper[4853]: I1122 07:48:03.774533 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s887s\" (UniqueName: \"kubernetes.io/projected/29d503fd-37f2-453c-aba9-5d2fb2c6aad0-kube-api-access-s887s\") pod \"29d503fd-37f2-453c-aba9-5d2fb2c6aad0\" (UID: \"29d503fd-37f2-453c-aba9-5d2fb2c6aad0\") "
Nov 22 07:48:03 crc kubenswrapper[4853]: I1122 07:48:03.776001 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/29d503fd-37f2-453c-aba9-5d2fb2c6aad0-db-sync-config-data\") pod \"29d503fd-37f2-453c-aba9-5d2fb2c6aad0\" (UID: \"29d503fd-37f2-453c-aba9-5d2fb2c6aad0\") "
Nov 22 07:48:03 crc kubenswrapper[4853]: I1122 07:48:03.776077 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29d503fd-37f2-453c-aba9-5d2fb2c6aad0-config-data\") pod \"29d503fd-37f2-453c-aba9-5d2fb2c6aad0\" (UID: \"29d503fd-37f2-453c-aba9-5d2fb2c6aad0\") "
Nov 22 07:48:03 crc kubenswrapper[4853]: I1122 07:48:03.776601 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29d503fd-37f2-453c-aba9-5d2fb2c6aad0-combined-ca-bundle\") pod \"29d503fd-37f2-453c-aba9-5d2fb2c6aad0\" (UID: \"29d503fd-37f2-453c-aba9-5d2fb2c6aad0\") "
Nov 22 07:48:03 crc kubenswrapper[4853]: I1122 07:48:03.777087 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/29d503fd-37f2-453c-aba9-5d2fb2c6aad0-scripts\") pod \"29d503fd-37f2-453c-aba9-5d2fb2c6aad0\" (UID: \"29d503fd-37f2-453c-aba9-5d2fb2c6aad0\") "
Nov 22 07:48:03 crc kubenswrapper[4853]: I1122 07:48:03.777140 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/29d503fd-37f2-453c-aba9-5d2fb2c6aad0-etc-machine-id\") pod \"29d503fd-37f2-453c-aba9-5d2fb2c6aad0\" (UID: \"29d503fd-37f2-453c-aba9-5d2fb2c6aad0\") "
Nov 22 07:48:03 crc kubenswrapper[4853]: I1122 07:48:03.782825 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/29d503fd-37f2-453c-aba9-5d2fb2c6aad0-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "29d503fd-37f2-453c-aba9-5d2fb2c6aad0" (UID: "29d503fd-37f2-453c-aba9-5d2fb2c6aad0"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 22 07:48:03 crc kubenswrapper[4853]: I1122 07:48:03.787031 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29d503fd-37f2-453c-aba9-5d2fb2c6aad0-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "29d503fd-37f2-453c-aba9-5d2fb2c6aad0" (UID: "29d503fd-37f2-453c-aba9-5d2fb2c6aad0"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
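[Editor's note] The UnmountVolume/TearDown entries here, and the "Volume detached" lines that follow, are the volume manager draining its actual state once cinder-db-sync-nnfsq leaves the desired state; the mirror-image VerifyControllerAttachedVolume/MountVolume.SetUp entries earlier in the log populate it. A minimal, hypothetical sketch of that desired-versus-actual reconcile loop; the types and names are illustrative, not the kubelet's:

```go
// Hedged sketch of a desired-vs-actual volume reconcile loop, modelled on
// the reconciler_common.go entries in this log.
package main

import "fmt"

type volumeKey struct{ podUID, volume string }

func reconcile(desired, actual map[volumeKey]bool,
	mount, unmount func(volumeKey) error) {
	// Mount everything desired but not yet actual
	// ("operationExecutor.MountVolume started ... SetUp succeeded").
	for v := range desired {
		if !actual[v] {
			if err := mount(v); err == nil {
				actual[v] = true
			}
		}
	}
	// Unmount everything still mounted for pods no longer desired
	// ("operationExecutor.UnmountVolume started ... TearDown succeeded",
	// then "Volume detached ... DevicePath \"\"").
	for v := range actual {
		if !desired[v] {
			if err := unmount(v); err == nil {
				delete(actual, v)
			}
		}
	}
}

func main() {
	actual := map[volumeKey]bool{
		{"29d503fd-37f2-453c-aba9-5d2fb2c6aad0", "config-data"}: true,
		{"29d503fd-37f2-453c-aba9-5d2fb2c6aad0", "scripts"}:     true,
	}
	desired := map[volumeKey]bool{} // pod deleted: nothing desired anymore
	logOp := func(op string) func(volumeKey) error {
		return func(v volumeKey) error { fmt.Println(op, v.volume, v.podUID); return nil }
	}
	reconcile(desired, actual, logOp("mount"), logOp("unmount"))
}
```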
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:48:03 crc kubenswrapper[4853]: I1122 07:48:03.800058 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29d503fd-37f2-453c-aba9-5d2fb2c6aad0-scripts" (OuterVolumeSpecName: "scripts") pod "29d503fd-37f2-453c-aba9-5d2fb2c6aad0" (UID: "29d503fd-37f2-453c-aba9-5d2fb2c6aad0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:48:03 crc kubenswrapper[4853]: I1122 07:48:03.865949 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29d503fd-37f2-453c-aba9-5d2fb2c6aad0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "29d503fd-37f2-453c-aba9-5d2fb2c6aad0" (UID: "29d503fd-37f2-453c-aba9-5d2fb2c6aad0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:48:03 crc kubenswrapper[4853]: I1122 07:48:03.881610 4853 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29d503fd-37f2-453c-aba9-5d2fb2c6aad0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:48:03 crc kubenswrapper[4853]: I1122 07:48:03.881647 4853 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/29d503fd-37f2-453c-aba9-5d2fb2c6aad0-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 07:48:03 crc kubenswrapper[4853]: I1122 07:48:03.881663 4853 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/29d503fd-37f2-453c-aba9-5d2fb2c6aad0-etc-machine-id\") on node \"crc\" DevicePath \"\"" Nov 22 07:48:03 crc kubenswrapper[4853]: I1122 07:48:03.881672 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s887s\" (UniqueName: \"kubernetes.io/projected/29d503fd-37f2-453c-aba9-5d2fb2c6aad0-kube-api-access-s887s\") on node \"crc\" DevicePath \"\"" Nov 22 07:48:03 crc kubenswrapper[4853]: I1122 07:48:03.881686 4853 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/29d503fd-37f2-453c-aba9-5d2fb2c6aad0-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 07:48:03 crc kubenswrapper[4853]: I1122 07:48:03.909895 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29d503fd-37f2-453c-aba9-5d2fb2c6aad0-config-data" (OuterVolumeSpecName: "config-data") pod "29d503fd-37f2-453c-aba9-5d2fb2c6aad0" (UID: "29d503fd-37f2-453c-aba9-5d2fb2c6aad0"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:48:03 crc kubenswrapper[4853]: I1122 07:48:03.984587 4853 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29d503fd-37f2-453c-aba9-5d2fb2c6aad0-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 07:48:04 crc kubenswrapper[4853]: I1122 07:48:04.002974 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-78dfc746b4-t4frx" event={"ID":"f8928af6-f48e-4697-a1d4-44880b78c43c","Type":"ContainerStarted","Data":"ace2b700e3d3a46d1d7ea675ab99d8a413032344f6da2811f7ecc40159e7e333"} Nov 22 07:48:04 crc kubenswrapper[4853]: I1122 07:48:04.005269 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-nnfsq" event={"ID":"29d503fd-37f2-453c-aba9-5d2fb2c6aad0","Type":"ContainerDied","Data":"1b6c52b8644e6c1d011e16f74cecda19da93422401ceaf90e46d7ac2ffb7d1de"} Nov 22 07:48:04 crc kubenswrapper[4853]: I1122 07:48:04.005320 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1b6c52b8644e6c1d011e16f74cecda19da93422401ceaf90e46d7ac2ffb7d1de" Nov 22 07:48:04 crc kubenswrapper[4853]: I1122 07:48:04.005408 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-nnfsq" Nov 22 07:48:04 crc kubenswrapper[4853]: I1122 07:48:04.008956 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-d58855874-6hg9r" event={"ID":"1ea82711-6541-4717-8711-16a13f6ce28c","Type":"ContainerStarted","Data":"e90cd961695004451e9f2f26fe73bc6c6e8a748ab4f1b20c9fd6f423ce20d65e"} Nov 22 07:48:04 crc kubenswrapper[4853]: I1122 07:48:04.091279 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Nov 22 07:48:04 crc kubenswrapper[4853]: I1122 07:48:04.091820 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Nov 22 07:48:04 crc kubenswrapper[4853]: I1122 07:48:04.157207 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Nov 22 07:48:04 crc kubenswrapper[4853]: I1122 07:48:04.171133 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Nov 22 07:48:04 crc kubenswrapper[4853]: I1122 07:48:04.261905 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Nov 22 07:48:04 crc kubenswrapper[4853]: E1122 07:48:04.262780 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29d503fd-37f2-453c-aba9-5d2fb2c6aad0" containerName="cinder-db-sync" Nov 22 07:48:04 crc kubenswrapper[4853]: I1122 07:48:04.262800 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="29d503fd-37f2-453c-aba9-5d2fb2c6aad0" containerName="cinder-db-sync" Nov 22 07:48:04 crc kubenswrapper[4853]: I1122 07:48:04.263098 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="29d503fd-37f2-453c-aba9-5d2fb2c6aad0" containerName="cinder-db-sync" Nov 22 07:48:04 crc kubenswrapper[4853]: I1122 07:48:04.279917 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 22 07:48:04 crc kubenswrapper[4853]: I1122 07:48:04.280052 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 22 07:48:04 crc kubenswrapper[4853]: I1122 07:48:04.284272 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Nov 22 07:48:04 crc kubenswrapper[4853]: I1122 07:48:04.284348 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-lmnjv" Nov 22 07:48:04 crc kubenswrapper[4853]: I1122 07:48:04.284531 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Nov 22 07:48:04 crc kubenswrapper[4853]: I1122 07:48:04.288445 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Nov 22 07:48:04 crc kubenswrapper[4853]: I1122 07:48:04.299180 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe59dfbf-2b13-4067-9d40-3d0d372f0f77-config-data\") pod \"cinder-scheduler-0\" (UID: \"fe59dfbf-2b13-4067-9d40-3d0d372f0f77\") " pod="openstack/cinder-scheduler-0" Nov 22 07:48:04 crc kubenswrapper[4853]: I1122 07:48:04.299231 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bxb94\" (UniqueName: \"kubernetes.io/projected/fe59dfbf-2b13-4067-9d40-3d0d372f0f77-kube-api-access-bxb94\") pod \"cinder-scheduler-0\" (UID: \"fe59dfbf-2b13-4067-9d40-3d0d372f0f77\") " pod="openstack/cinder-scheduler-0" Nov 22 07:48:04 crc kubenswrapper[4853]: I1122 07:48:04.299325 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe59dfbf-2b13-4067-9d40-3d0d372f0f77-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"fe59dfbf-2b13-4067-9d40-3d0d372f0f77\") " pod="openstack/cinder-scheduler-0" Nov 22 07:48:04 crc kubenswrapper[4853]: I1122 07:48:04.299379 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fe59dfbf-2b13-4067-9d40-3d0d372f0f77-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"fe59dfbf-2b13-4067-9d40-3d0d372f0f77\") " pod="openstack/cinder-scheduler-0" Nov 22 07:48:04 crc kubenswrapper[4853]: I1122 07:48:04.299396 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe59dfbf-2b13-4067-9d40-3d0d372f0f77-scripts\") pod \"cinder-scheduler-0\" (UID: \"fe59dfbf-2b13-4067-9d40-3d0d372f0f77\") " pod="openstack/cinder-scheduler-0" Nov 22 07:48:04 crc kubenswrapper[4853]: I1122 07:48:04.299504 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fe59dfbf-2b13-4067-9d40-3d0d372f0f77-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"fe59dfbf-2b13-4067-9d40-3d0d372f0f77\") " pod="openstack/cinder-scheduler-0" Nov 22 07:48:04 crc kubenswrapper[4853]: I1122 07:48:04.404948 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe59dfbf-2b13-4067-9d40-3d0d372f0f77-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"fe59dfbf-2b13-4067-9d40-3d0d372f0f77\") " 
pod="openstack/cinder-scheduler-0" Nov 22 07:48:04 crc kubenswrapper[4853]: I1122 07:48:04.405037 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fe59dfbf-2b13-4067-9d40-3d0d372f0f77-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"fe59dfbf-2b13-4067-9d40-3d0d372f0f77\") " pod="openstack/cinder-scheduler-0" Nov 22 07:48:04 crc kubenswrapper[4853]: I1122 07:48:04.405057 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe59dfbf-2b13-4067-9d40-3d0d372f0f77-scripts\") pod \"cinder-scheduler-0\" (UID: \"fe59dfbf-2b13-4067-9d40-3d0d372f0f77\") " pod="openstack/cinder-scheduler-0" Nov 22 07:48:04 crc kubenswrapper[4853]: I1122 07:48:04.405114 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fe59dfbf-2b13-4067-9d40-3d0d372f0f77-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"fe59dfbf-2b13-4067-9d40-3d0d372f0f77\") " pod="openstack/cinder-scheduler-0" Nov 22 07:48:04 crc kubenswrapper[4853]: I1122 07:48:04.405276 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe59dfbf-2b13-4067-9d40-3d0d372f0f77-config-data\") pod \"cinder-scheduler-0\" (UID: \"fe59dfbf-2b13-4067-9d40-3d0d372f0f77\") " pod="openstack/cinder-scheduler-0" Nov 22 07:48:04 crc kubenswrapper[4853]: I1122 07:48:04.405303 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bxb94\" (UniqueName: \"kubernetes.io/projected/fe59dfbf-2b13-4067-9d40-3d0d372f0f77-kube-api-access-bxb94\") pod \"cinder-scheduler-0\" (UID: \"fe59dfbf-2b13-4067-9d40-3d0d372f0f77\") " pod="openstack/cinder-scheduler-0" Nov 22 07:48:04 crc kubenswrapper[4853]: I1122 07:48:04.408205 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6d66f584d7-6lnjx"] Nov 22 07:48:04 crc kubenswrapper[4853]: I1122 07:48:04.409317 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fe59dfbf-2b13-4067-9d40-3d0d372f0f77-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"fe59dfbf-2b13-4067-9d40-3d0d372f0f77\") " pod="openstack/cinder-scheduler-0" Nov 22 07:48:04 crc kubenswrapper[4853]: I1122 07:48:04.422589 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe59dfbf-2b13-4067-9d40-3d0d372f0f77-scripts\") pod \"cinder-scheduler-0\" (UID: \"fe59dfbf-2b13-4067-9d40-3d0d372f0f77\") " pod="openstack/cinder-scheduler-0" Nov 22 07:48:04 crc kubenswrapper[4853]: I1122 07:48:04.424660 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fe59dfbf-2b13-4067-9d40-3d0d372f0f77-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"fe59dfbf-2b13-4067-9d40-3d0d372f0f77\") " pod="openstack/cinder-scheduler-0" Nov 22 07:48:04 crc kubenswrapper[4853]: I1122 07:48:04.425251 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe59dfbf-2b13-4067-9d40-3d0d372f0f77-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"fe59dfbf-2b13-4067-9d40-3d0d372f0f77\") " pod="openstack/cinder-scheduler-0" Nov 22 07:48:04 crc kubenswrapper[4853]: I1122 07:48:04.430717 4853 
Nov 22 07:48:04 crc kubenswrapper[4853]: I1122 07:48:04.430717 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe59dfbf-2b13-4067-9d40-3d0d372f0f77-config-data\") pod \"cinder-scheduler-0\" (UID: \"fe59dfbf-2b13-4067-9d40-3d0d372f0f77\") " pod="openstack/cinder-scheduler-0"
Nov 22 07:48:04 crc kubenswrapper[4853]: I1122 07:48:04.459927 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bxb94\" (UniqueName: \"kubernetes.io/projected/fe59dfbf-2b13-4067-9d40-3d0d372f0f77-kube-api-access-bxb94\") pod \"cinder-scheduler-0\" (UID: \"fe59dfbf-2b13-4067-9d40-3d0d372f0f77\") " pod="openstack/cinder-scheduler-0"
Nov 22 07:48:04 crc kubenswrapper[4853]: I1122 07:48:04.501376 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-674b76c99f-c5t2f"]
Nov 22 07:48:04 crc kubenswrapper[4853]: I1122 07:48:04.504311 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-674b76c99f-c5t2f"
Nov 22 07:48:04 crc kubenswrapper[4853]: I1122 07:48:04.516390 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0dd1e1e8-e796-4ad0-96de-526e8b847c61-ovsdbserver-sb\") pod \"dnsmasq-dns-674b76c99f-c5t2f\" (UID: \"0dd1e1e8-e796-4ad0-96de-526e8b847c61\") " pod="openstack/dnsmasq-dns-674b76c99f-c5t2f"
Nov 22 07:48:04 crc kubenswrapper[4853]: I1122 07:48:04.516442 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0dd1e1e8-e796-4ad0-96de-526e8b847c61-ovsdbserver-nb\") pod \"dnsmasq-dns-674b76c99f-c5t2f\" (UID: \"0dd1e1e8-e796-4ad0-96de-526e8b847c61\") " pod="openstack/dnsmasq-dns-674b76c99f-c5t2f"
Nov 22 07:48:04 crc kubenswrapper[4853]: I1122 07:48:04.516474 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0dd1e1e8-e796-4ad0-96de-526e8b847c61-dns-svc\") pod \"dnsmasq-dns-674b76c99f-c5t2f\" (UID: \"0dd1e1e8-e796-4ad0-96de-526e8b847c61\") " pod="openstack/dnsmasq-dns-674b76c99f-c5t2f"
Nov 22 07:48:04 crc kubenswrapper[4853]: I1122 07:48:04.516579 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0dd1e1e8-e796-4ad0-96de-526e8b847c61-dns-swift-storage-0\") pod \"dnsmasq-dns-674b76c99f-c5t2f\" (UID: \"0dd1e1e8-e796-4ad0-96de-526e8b847c61\") " pod="openstack/dnsmasq-dns-674b76c99f-c5t2f"
Nov 22 07:48:04 crc kubenswrapper[4853]: I1122 07:48:04.516685 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0dd1e1e8-e796-4ad0-96de-526e8b847c61-config\") pod \"dnsmasq-dns-674b76c99f-c5t2f\" (UID: \"0dd1e1e8-e796-4ad0-96de-526e8b847c61\") " pod="openstack/dnsmasq-dns-674b76c99f-c5t2f"
Nov 22 07:48:04 crc kubenswrapper[4853]: I1122 07:48:04.516711 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-twlvc\" (UniqueName: \"kubernetes.io/projected/0dd1e1e8-e796-4ad0-96de-526e8b847c61-kube-api-access-twlvc\") pod \"dnsmasq-dns-674b76c99f-c5t2f\" (UID: \"0dd1e1e8-e796-4ad0-96de-526e8b847c61\") " pod="openstack/dnsmasq-dns-674b76c99f-c5t2f"
Nov 22 07:48:04 crc kubenswrapper[4853]: I1122 07:48:04.573738 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-674b76c99f-c5t2f"]
Nov 22 07:48:04 crc kubenswrapper[4853]: I1122 07:48:04.616312 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Nov 22 07:48:04 crc kubenswrapper[4853]: I1122 07:48:04.626615 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0dd1e1e8-e796-4ad0-96de-526e8b847c61-dns-swift-storage-0\") pod \"dnsmasq-dns-674b76c99f-c5t2f\" (UID: \"0dd1e1e8-e796-4ad0-96de-526e8b847c61\") " pod="openstack/dnsmasq-dns-674b76c99f-c5t2f"
Nov 22 07:48:04 crc kubenswrapper[4853]: I1122 07:48:04.626872 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0dd1e1e8-e796-4ad0-96de-526e8b847c61-config\") pod \"dnsmasq-dns-674b76c99f-c5t2f\" (UID: \"0dd1e1e8-e796-4ad0-96de-526e8b847c61\") " pod="openstack/dnsmasq-dns-674b76c99f-c5t2f"
Nov 22 07:48:04 crc kubenswrapper[4853]: I1122 07:48:04.626932 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-twlvc\" (UniqueName: \"kubernetes.io/projected/0dd1e1e8-e796-4ad0-96de-526e8b847c61-kube-api-access-twlvc\") pod \"dnsmasq-dns-674b76c99f-c5t2f\" (UID: \"0dd1e1e8-e796-4ad0-96de-526e8b847c61\") " pod="openstack/dnsmasq-dns-674b76c99f-c5t2f"
Nov 22 07:48:04 crc kubenswrapper[4853]: I1122 07:48:04.627243 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0dd1e1e8-e796-4ad0-96de-526e8b847c61-ovsdbserver-sb\") pod \"dnsmasq-dns-674b76c99f-c5t2f\" (UID: \"0dd1e1e8-e796-4ad0-96de-526e8b847c61\") " pod="openstack/dnsmasq-dns-674b76c99f-c5t2f"
Nov 22 07:48:04 crc kubenswrapper[4853]: I1122 07:48:04.627269 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0dd1e1e8-e796-4ad0-96de-526e8b847c61-ovsdbserver-nb\") pod \"dnsmasq-dns-674b76c99f-c5t2f\" (UID: \"0dd1e1e8-e796-4ad0-96de-526e8b847c61\") " pod="openstack/dnsmasq-dns-674b76c99f-c5t2f"
Nov 22 07:48:04 crc kubenswrapper[4853]: I1122 07:48:04.627322 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0dd1e1e8-e796-4ad0-96de-526e8b847c61-dns-svc\") pod \"dnsmasq-dns-674b76c99f-c5t2f\" (UID: \"0dd1e1e8-e796-4ad0-96de-526e8b847c61\") " pod="openstack/dnsmasq-dns-674b76c99f-c5t2f"
Nov 22 07:48:04 crc kubenswrapper[4853]: I1122 07:48:04.628636 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0dd1e1e8-e796-4ad0-96de-526e8b847c61-dns-svc\") pod \"dnsmasq-dns-674b76c99f-c5t2f\" (UID: \"0dd1e1e8-e796-4ad0-96de-526e8b847c61\") " pod="openstack/dnsmasq-dns-674b76c99f-c5t2f"
Nov 22 07:48:04 crc kubenswrapper[4853]: I1122 07:48:04.629245 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0dd1e1e8-e796-4ad0-96de-526e8b847c61-dns-swift-storage-0\") pod \"dnsmasq-dns-674b76c99f-c5t2f\" (UID: \"0dd1e1e8-e796-4ad0-96de-526e8b847c61\") " pod="openstack/dnsmasq-dns-674b76c99f-c5t2f"
Nov 22 07:48:04 crc kubenswrapper[4853]: I1122 07:48:04.629804 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0dd1e1e8-e796-4ad0-96de-526e8b847c61-config\") pod \"dnsmasq-dns-674b76c99f-c5t2f\" (UID: \"0dd1e1e8-e796-4ad0-96de-526e8b847c61\") " pod="openstack/dnsmasq-dns-674b76c99f-c5t2f"
Nov 22 07:48:04 crc kubenswrapper[4853]: I1122 07:48:04.630852 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0dd1e1e8-e796-4ad0-96de-526e8b847c61-ovsdbserver-nb\") pod \"dnsmasq-dns-674b76c99f-c5t2f\" (UID: \"0dd1e1e8-e796-4ad0-96de-526e8b847c61\") " pod="openstack/dnsmasq-dns-674b76c99f-c5t2f"
Nov 22 07:48:04 crc kubenswrapper[4853]: I1122 07:48:04.630907 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0dd1e1e8-e796-4ad0-96de-526e8b847c61-ovsdbserver-sb\") pod \"dnsmasq-dns-674b76c99f-c5t2f\" (UID: \"0dd1e1e8-e796-4ad0-96de-526e8b847c61\") " pod="openstack/dnsmasq-dns-674b76c99f-c5t2f"
Nov 22 07:48:04 crc kubenswrapper[4853]: I1122 07:48:04.660454 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-twlvc\" (UniqueName: \"kubernetes.io/projected/0dd1e1e8-e796-4ad0-96de-526e8b847c61-kube-api-access-twlvc\") pod \"dnsmasq-dns-674b76c99f-c5t2f\" (UID: \"0dd1e1e8-e796-4ad0-96de-526e8b847c61\") " pod="openstack/dnsmasq-dns-674b76c99f-c5t2f"
Nov 22 07:48:04 crc kubenswrapper[4853]: I1122 07:48:04.723057 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"]
Nov 22 07:48:04 crc kubenswrapper[4853]: I1122 07:48:04.725771 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Nov 22 07:48:04 crc kubenswrapper[4853]: I1122 07:48:04.732476 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data"
Nov 22 07:48:04 crc kubenswrapper[4853]: I1122 07:48:04.737286 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"]
Nov 22 07:48:04 crc kubenswrapper[4853]: I1122 07:48:04.834355 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eb91c81d-f604-490f-8397-3f7e5b24236f-scripts\") pod \"cinder-api-0\" (UID: \"eb91c81d-f604-490f-8397-3f7e5b24236f\") " pod="openstack/cinder-api-0"
Nov 22 07:48:04 crc kubenswrapper[4853]: I1122 07:48:04.835474 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb91c81d-f604-490f-8397-3f7e5b24236f-config-data\") pod \"cinder-api-0\" (UID: \"eb91c81d-f604-490f-8397-3f7e5b24236f\") " pod="openstack/cinder-api-0"
Nov 22 07:48:04 crc kubenswrapper[4853]: I1122 07:48:04.835608 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb91c81d-f604-490f-8397-3f7e5b24236f-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"eb91c81d-f604-490f-8397-3f7e5b24236f\") " pod="openstack/cinder-api-0"
Nov 22 07:48:04 crc kubenswrapper[4853]: I1122 07:48:04.835940 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/eb91c81d-f604-490f-8397-3f7e5b24236f-etc-machine-id\") pod \"cinder-api-0\" (UID: \"eb91c81d-f604-490f-8397-3f7e5b24236f\") " pod="openstack/cinder-api-0"
Nov 22 07:48:04 crc kubenswrapper[4853]: I1122 07:48:04.836253 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g8t6b\" (UniqueName: \"kubernetes.io/projected/eb91c81d-f604-490f-8397-3f7e5b24236f-kube-api-access-g8t6b\") pod \"cinder-api-0\" (UID: \"eb91c81d-f604-490f-8397-3f7e5b24236f\") " pod="openstack/cinder-api-0"
Nov 22 07:48:04 crc kubenswrapper[4853]: I1122 07:48:04.837539 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eb91c81d-f604-490f-8397-3f7e5b24236f-logs\") pod \"cinder-api-0\" (UID: \"eb91c81d-f604-490f-8397-3f7e5b24236f\") " pod="openstack/cinder-api-0"
Nov 22 07:48:04 crc kubenswrapper[4853]: I1122 07:48:04.837589 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/eb91c81d-f604-490f-8397-3f7e5b24236f-config-data-custom\") pod \"cinder-api-0\" (UID: \"eb91c81d-f604-490f-8397-3f7e5b24236f\") " pod="openstack/cinder-api-0"
Nov 22 07:48:04 crc kubenswrapper[4853]: I1122 07:48:04.911532 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-674b76c99f-c5t2f"
Nov 22 07:48:04 crc kubenswrapper[4853]: I1122 07:48:04.949071 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eb91c81d-f604-490f-8397-3f7e5b24236f-scripts\") pod \"cinder-api-0\" (UID: \"eb91c81d-f604-490f-8397-3f7e5b24236f\") " pod="openstack/cinder-api-0"
Nov 22 07:48:04 crc kubenswrapper[4853]: I1122 07:48:04.950897 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb91c81d-f604-490f-8397-3f7e5b24236f-config-data\") pod \"cinder-api-0\" (UID: \"eb91c81d-f604-490f-8397-3f7e5b24236f\") " pod="openstack/cinder-api-0"
Nov 22 07:48:04 crc kubenswrapper[4853]: I1122 07:48:04.951011 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb91c81d-f604-490f-8397-3f7e5b24236f-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"eb91c81d-f604-490f-8397-3f7e5b24236f\") " pod="openstack/cinder-api-0"
Nov 22 07:48:04 crc kubenswrapper[4853]: I1122 07:48:04.951263 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/eb91c81d-f604-490f-8397-3f7e5b24236f-etc-machine-id\") pod \"cinder-api-0\" (UID: \"eb91c81d-f604-490f-8397-3f7e5b24236f\") " pod="openstack/cinder-api-0"
Nov 22 07:48:04 crc kubenswrapper[4853]: I1122 07:48:04.951478 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g8t6b\" (UniqueName: \"kubernetes.io/projected/eb91c81d-f604-490f-8397-3f7e5b24236f-kube-api-access-g8t6b\") pod \"cinder-api-0\" (UID: \"eb91c81d-f604-490f-8397-3f7e5b24236f\") " pod="openstack/cinder-api-0"
Nov 22 07:48:04 crc kubenswrapper[4853]: I1122 07:48:04.951649 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eb91c81d-f604-490f-8397-3f7e5b24236f-logs\") pod \"cinder-api-0\" (UID: \"eb91c81d-f604-490f-8397-3f7e5b24236f\") " pod="openstack/cinder-api-0"
Nov 22 07:48:04 crc kubenswrapper[4853]: I1122 07:48:04.951695 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/eb91c81d-f604-490f-8397-3f7e5b24236f-config-data-custom\") pod \"cinder-api-0\" (UID: \"eb91c81d-f604-490f-8397-3f7e5b24236f\") " pod="openstack/cinder-api-0"
Nov 22 07:48:04 crc kubenswrapper[4853]: I1122 07:48:04.952064 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/eb91c81d-f604-490f-8397-3f7e5b24236f-etc-machine-id\") pod \"cinder-api-0\" (UID: \"eb91c81d-f604-490f-8397-3f7e5b24236f\") " pod="openstack/cinder-api-0"
Nov 22 07:48:04 crc kubenswrapper[4853]: I1122 07:48:04.953901 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eb91c81d-f604-490f-8397-3f7e5b24236f-logs\") pod \"cinder-api-0\" (UID: \"eb91c81d-f604-490f-8397-3f7e5b24236f\") " pod="openstack/cinder-api-0"
Nov 22 07:48:05 crc kubenswrapper[4853]: I1122 07:48:05.008303 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb91c81d-f604-490f-8397-3f7e5b24236f-config-data\") pod \"cinder-api-0\" (UID: \"eb91c81d-f604-490f-8397-3f7e5b24236f\") " pod="openstack/cinder-api-0"
Nov 22 07:48:05 crc kubenswrapper[4853]: I1122 07:48:05.017572 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb91c81d-f604-490f-8397-3f7e5b24236f-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"eb91c81d-f604-490f-8397-3f7e5b24236f\") " pod="openstack/cinder-api-0"
Nov 22 07:48:05 crc kubenswrapper[4853]: I1122 07:48:05.017611 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eb91c81d-f604-490f-8397-3f7e5b24236f-scripts\") pod \"cinder-api-0\" (UID: \"eb91c81d-f604-490f-8397-3f7e5b24236f\") " pod="openstack/cinder-api-0"
Nov 22 07:48:05 crc kubenswrapper[4853]: I1122 07:48:05.020582 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/eb91c81d-f604-490f-8397-3f7e5b24236f-config-data-custom\") pod \"cinder-api-0\" (UID: \"eb91c81d-f604-490f-8397-3f7e5b24236f\") " pod="openstack/cinder-api-0"
Nov 22 07:48:05 crc kubenswrapper[4853]: I1122 07:48:05.050668 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g8t6b\" (UniqueName: \"kubernetes.io/projected/eb91c81d-f604-490f-8397-3f7e5b24236f-kube-api-access-g8t6b\") pod \"cinder-api-0\" (UID: \"eb91c81d-f604-490f-8397-3f7e5b24236f\") " pod="openstack/cinder-api-0"
Nov 22 07:48:05 crc kubenswrapper[4853]: I1122 07:48:05.097751 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Nov 22 07:48:05 crc kubenswrapper[4853]: I1122 07:48:05.139743 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b4ffw" event={"ID":"863918c2-c760-4c96-888f-a778bcbb018b","Type":"ContainerStarted","Data":"eb48bec721ed10b57f756d146b62cc3cca0429039b2b396723241552af1886a7"}
Nov 22 07:48:05 crc kubenswrapper[4853]: I1122 07:48:05.207273 4853 generic.go:334] "Generic (PLEG): container finished" podID="7be38dfa-2557-43c7-83e8-f554a64db353" containerID="d84e35f92d245ee5516f88e2c02a5a04e0d7668b66b88b8379344b66aaca4198" exitCode=0
Nov 22 07:48:05 crc kubenswrapper[4853]: I1122 07:48:05.209185 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7be38dfa-2557-43c7-83e8-f554a64db353","Type":"ContainerDied","Data":"d84e35f92d245ee5516f88e2c02a5a04e0d7668b66b88b8379344b66aaca4198"}
Nov 22 07:48:05 crc kubenswrapper[4853]: I1122 07:48:05.209252 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Nov 22 07:48:05 crc kubenswrapper[4853]: I1122 07:48:05.209269 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Nov 22 07:48:05 crc kubenswrapper[4853]: I1122 07:48:05.209280 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-78dfc746b4-t4frx"
Nov 22 07:48:05 crc kubenswrapper[4853]: I1122 07:48:05.209946 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-78dfc746b4-t4frx"
Nov 22 07:48:05 crc kubenswrapper[4853]: I1122 07:48:05.266342 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-78dfc746b4-t4frx" podStartSLOduration=8.266314902 podStartE2EDuration="8.266314902s" podCreationTimestamp="2025-11-22 07:47:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:48:05.238676126 +0000 UTC m=+2284.079298752" watchObservedRunningTime="2025-11-22 07:48:05.266314902 +0000 UTC m=+2284.106937528"
Nov 22 07:48:05 crc kubenswrapper[4853]: I1122 07:48:05.429438 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Nov 22 07:48:05 crc kubenswrapper[4853]: I1122 07:48:05.969580 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"]
Nov 22 07:48:06 crc kubenswrapper[4853]: I1122 07:48:06.232735 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"eb91c81d-f604-490f-8397-3f7e5b24236f","Type":"ContainerStarted","Data":"2a54fd78884c087d847d83b4519e5be3fd405055dc8ffb8b5c0586a08a7bfe31"}
Nov 22 07:48:06 crc kubenswrapper[4853]: I1122 07:48:06.242421 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"fe59dfbf-2b13-4067-9d40-3d0d372f0f77","Type":"ContainerStarted","Data":"8ccee1cf7904a68665db710a4796c04a18f0765ee61175b3fe4a268d6d7d5d6c"}
Nov 22 07:48:06 crc kubenswrapper[4853]: I1122 07:48:06.248312 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d66f584d7-6lnjx" event={"ID":"25c8cdf7-96d5-43cf-b52b-a3b1d985b7a8","Type":"ContainerStarted","Data":"6687c913de7d55c949e6989f4e689e8419afbc7e2e9a9c1870d27fdcc48c5932"}
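The "Observed pod startup duration" entry above reports podStartSLOduration=8.266314902 for barbican-api-78dfc746b4-t4frx: podCreationTimestamp 07:47:57 plus 8.266314902s equals watchObservedRunningTime 07:48:05.266314902, and with both image-pull timestamps at the zero time ("0001-01-01 00:00:00 +0000 UTC") nothing is excluded. A small sketch reproducing that arithmetic; the exclusion rule is an assumption inferred from the printed values, not the tracker's source:

// Illustrative only: relating the startup-latency fields logged above.
package main

import (
	"fmt"
	"time"
)

func startupSLO(created, running, firstPull, lastPull time.Time) time.Duration {
	d := running.Sub(created)
	if !firstPull.IsZero() && !lastPull.IsZero() {
		d -= lastPull.Sub(firstPull) // exclude image-pull time when it was recorded
	}
	return d
}

func main() {
	created := time.Date(2025, 11, 22, 7, 47, 57, 0, time.UTC)
	running := time.Date(2025, 11, 22, 7, 48, 5, 266314902, time.UTC)
	// Both pull timestamps are the zero time in the log, so nothing is subtracted.
	fmt.Println(startupSLO(created, running, time.Time{}, time.Time{})) // 8.266314902s
}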
pods=["openstack/dnsmasq-dns-674b76c99f-c5t2f"] Nov 22 07:48:06 crc kubenswrapper[4853]: I1122 07:48:06.695235 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Nov 22 07:48:07 crc kubenswrapper[4853]: I1122 07:48:07.272107 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b4ffw" event={"ID":"863918c2-c760-4c96-888f-a778bcbb018b","Type":"ContainerDied","Data":"eb48bec721ed10b57f756d146b62cc3cca0429039b2b396723241552af1886a7"} Nov 22 07:48:07 crc kubenswrapper[4853]: I1122 07:48:07.271909 4853 generic.go:334] "Generic (PLEG): container finished" podID="863918c2-c760-4c96-888f-a778bcbb018b" containerID="eb48bec721ed10b57f756d146b62cc3cca0429039b2b396723241552af1886a7" exitCode=0 Nov 22 07:48:07 crc kubenswrapper[4853]: I1122 07:48:07.274920 4853 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 22 07:48:07 crc kubenswrapper[4853]: I1122 07:48:07.274951 4853 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 22 07:48:07 crc kubenswrapper[4853]: I1122 07:48:07.276244 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-674b76c99f-c5t2f" event={"ID":"0dd1e1e8-e796-4ad0-96de-526e8b847c61","Type":"ContainerStarted","Data":"e286a9544c5740f7e1fa8be101d55ddba3a4dd625c6da76876436d8ccdd4b747"} Nov 22 07:48:08 crc kubenswrapper[4853]: I1122 07:48:08.330163 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-d58855874-6hg9r" event={"ID":"1ea82711-6541-4717-8711-16a13f6ce28c","Type":"ContainerStarted","Data":"d51bf3671ea7c8a7a9e8e6b7b01df6dd3718849434646d79a9ce07532bab06bb"} Nov 22 07:48:08 crc kubenswrapper[4853]: I1122 07:48:08.341011 4853 generic.go:334] "Generic (PLEG): container finished" podID="39d11b3b-9490-41d8-87ad-542cddb9cc6b" containerID="2fef05ea5e3d441fe9fb192e15b5ac4bfacf586bae220c5364d42b62e3be6f8f" exitCode=0 Nov 22 07:48:08 crc kubenswrapper[4853]: I1122 07:48:08.341135 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"39d11b3b-9490-41d8-87ad-542cddb9cc6b","Type":"ContainerDied","Data":"2fef05ea5e3d441fe9fb192e15b5ac4bfacf586bae220c5364d42b62e3be6f8f"} Nov 22 07:48:08 crc kubenswrapper[4853]: I1122 07:48:08.353913 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" event={"ID":"476c875a-2b87-419a-8042-0ba059620fd8","Type":"ContainerStarted","Data":"f1c0ac83a6a857bb9ac64bb048075d436003939992592a9a9b2d45101ef2a2de"} Nov 22 07:48:08 crc kubenswrapper[4853]: I1122 07:48:08.358188 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"eb91c81d-f604-490f-8397-3f7e5b24236f","Type":"ContainerStarted","Data":"151cef4835ee16ee65aa991da3fb752907af13b30ebe8f99644e65ab26befcf2"} Nov 22 07:48:08 crc kubenswrapper[4853]: I1122 07:48:08.362000 4853 generic.go:334] "Generic (PLEG): container finished" podID="0dd1e1e8-e796-4ad0-96de-526e8b847c61" containerID="4564605c1db102a2ff7e6f53055a0f9c5f44ad04c2a559690c3e8e3c42c65a65" exitCode=0 Nov 22 07:48:08 crc kubenswrapper[4853]: I1122 07:48:08.362280 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-674b76c99f-c5t2f" event={"ID":"0dd1e1e8-e796-4ad0-96de-526e8b847c61","Type":"ContainerDied","Data":"4564605c1db102a2ff7e6f53055a0f9c5f44ad04c2a559690c3e8e3c42c65a65"} Nov 22 07:48:08 crc kubenswrapper[4853]: I1122 07:48:08.369483 4853 generic.go:334] "Generic 
(PLEG): container finished" podID="7be38dfa-2557-43c7-83e8-f554a64db353" containerID="2b9f93c2d4a31a06215ba9cfc32f7192812fa6c3bfd73fb30cff005830ac1d36" exitCode=0 Nov 22 07:48:08 crc kubenswrapper[4853]: I1122 07:48:08.369670 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6d66f584d7-6lnjx" podUID="25c8cdf7-96d5-43cf-b52b-a3b1d985b7a8" containerName="dnsmasq-dns" containerID="cri-o://6687c913de7d55c949e6989f4e689e8419afbc7e2e9a9c1870d27fdcc48c5932" gracePeriod=10 Nov 22 07:48:08 crc kubenswrapper[4853]: I1122 07:48:08.370003 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7be38dfa-2557-43c7-83e8-f554a64db353","Type":"ContainerDied","Data":"2b9f93c2d4a31a06215ba9cfc32f7192812fa6c3bfd73fb30cff005830ac1d36"} Nov 22 07:48:08 crc kubenswrapper[4853]: I1122 07:48:08.371361 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6d66f584d7-6lnjx" Nov 22 07:48:08 crc kubenswrapper[4853]: I1122 07:48:08.445009 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6d66f584d7-6lnjx" podStartSLOduration=11.444932373 podStartE2EDuration="11.444932373s" podCreationTimestamp="2025-11-22 07:47:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:48:08.438566161 +0000 UTC m=+2287.279188797" watchObservedRunningTime="2025-11-22 07:48:08.444932373 +0000 UTC m=+2287.285554999" Nov 22 07:48:08 crc kubenswrapper[4853]: I1122 07:48:08.751546 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-78dfc746b4-t4frx" Nov 22 07:48:08 crc kubenswrapper[4853]: I1122 07:48:08.764708 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Nov 22 07:48:08 crc kubenswrapper[4853]: I1122 07:48:08.764869 4853 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 22 07:48:08 crc kubenswrapper[4853]: I1122 07:48:08.792503 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Nov 22 07:48:09 crc kubenswrapper[4853]: I1122 07:48:09.252312 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 22 07:48:09 crc kubenswrapper[4853]: I1122 07:48:09.398151 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7be38dfa-2557-43c7-83e8-f554a64db353","Type":"ContainerDied","Data":"7d96b9cfaee3db8786120c37b466a3d7e44696dbba219f3f38cec0acd5eb5128"} Nov 22 07:48:09 crc kubenswrapper[4853]: I1122 07:48:09.398237 4853 scope.go:117] "RemoveContainer" containerID="1c962b5888e6fb853b324a4ba5aec4534e6eb702c77706ed94c453eabfd768ea" Nov 22 07:48:09 crc kubenswrapper[4853]: I1122 07:48:09.398464 4853 util.go:48] "No ready sandbox for pod can be found. 
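The paired "Generic (PLEG): container finished" / "SyncLoop (PLEG): event for pod ... ContainerDied" lines above come from the pod lifecycle event generator diffing container states between relists. An illustrative toy relist with assumed types, not the real PLEG code:

// Illustrative only: generating ContainerStarted/ContainerDied events by
// comparing the previously observed container states with the current ones.
package main

import "fmt"

type containerState string

const (
	running containerState = "running"
	exited  containerState = "exited"
)

type event struct{ typ, containerID string }

func relist(old, cur map[string]containerState) []event {
	var evs []event
	for id, st := range cur {
		switch {
		case old[id] != running && st == running:
			evs = append(evs, event{"ContainerStarted", id})
		case old[id] == running && st == exited:
			evs = append(evs, event{"ContainerDied", id})
		}
	}
	return evs
}

func main() {
	old := map[string]containerState{"6687c913": running}
	cur := map[string]containerState{"6687c913": exited}
	fmt.Println(relist(old, cur)) // [{ContainerDied 6687c913}]
}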
Nov 22 07:48:09 crc kubenswrapper[4853]: I1122 07:48:09.398464 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Nov 22 07:48:09 crc kubenswrapper[4853]: I1122 07:48:09.410862 4853 generic.go:334] "Generic (PLEG): container finished" podID="25c8cdf7-96d5-43cf-b52b-a3b1d985b7a8" containerID="6687c913de7d55c949e6989f4e689e8419afbc7e2e9a9c1870d27fdcc48c5932" exitCode=0
Nov 22 07:48:09 crc kubenswrapper[4853]: I1122 07:48:09.411228 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d66f584d7-6lnjx" event={"ID":"25c8cdf7-96d5-43cf-b52b-a3b1d985b7a8","Type":"ContainerDied","Data":"6687c913de7d55c949e6989f4e689e8419afbc7e2e9a9c1870d27fdcc48c5932"}
Nov 22 07:48:09 crc kubenswrapper[4853]: I1122 07:48:09.429845 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7be38dfa-2557-43c7-83e8-f554a64db353-sg-core-conf-yaml\") pod \"7be38dfa-2557-43c7-83e8-f554a64db353\" (UID: \"7be38dfa-2557-43c7-83e8-f554a64db353\") "
Nov 22 07:48:09 crc kubenswrapper[4853]: I1122 07:48:09.430020 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nfzbg\" (UniqueName: \"kubernetes.io/projected/7be38dfa-2557-43c7-83e8-f554a64db353-kube-api-access-nfzbg\") pod \"7be38dfa-2557-43c7-83e8-f554a64db353\" (UID: \"7be38dfa-2557-43c7-83e8-f554a64db353\") "
Nov 22 07:48:09 crc kubenswrapper[4853]: I1122 07:48:09.430103 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7be38dfa-2557-43c7-83e8-f554a64db353-config-data\") pod \"7be38dfa-2557-43c7-83e8-f554a64db353\" (UID: \"7be38dfa-2557-43c7-83e8-f554a64db353\") "
Nov 22 07:48:09 crc kubenswrapper[4853]: I1122 07:48:09.430246 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7be38dfa-2557-43c7-83e8-f554a64db353-scripts\") pod \"7be38dfa-2557-43c7-83e8-f554a64db353\" (UID: \"7be38dfa-2557-43c7-83e8-f554a64db353\") "
Nov 22 07:48:09 crc kubenswrapper[4853]: I1122 07:48:09.430311 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7be38dfa-2557-43c7-83e8-f554a64db353-run-httpd\") pod \"7be38dfa-2557-43c7-83e8-f554a64db353\" (UID: \"7be38dfa-2557-43c7-83e8-f554a64db353\") "
Nov 22 07:48:09 crc kubenswrapper[4853]: I1122 07:48:09.430411 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/7be38dfa-2557-43c7-83e8-f554a64db353-ceilometer-tls-certs\") pod \"7be38dfa-2557-43c7-83e8-f554a64db353\" (UID: \"7be38dfa-2557-43c7-83e8-f554a64db353\") "
Nov 22 07:48:09 crc kubenswrapper[4853]: I1122 07:48:09.430451 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7be38dfa-2557-43c7-83e8-f554a64db353-log-httpd\") pod \"7be38dfa-2557-43c7-83e8-f554a64db353\" (UID: \"7be38dfa-2557-43c7-83e8-f554a64db353\") "
Nov 22 07:48:09 crc kubenswrapper[4853]: I1122 07:48:09.430510 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7be38dfa-2557-43c7-83e8-f554a64db353-combined-ca-bundle\") pod \"7be38dfa-2557-43c7-83e8-f554a64db353\" (UID: \"7be38dfa-2557-43c7-83e8-f554a64db353\") "
Nov 22 07:48:09 crc kubenswrapper[4853]: I1122 07:48:09.434293 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7be38dfa-2557-43c7-83e8-f554a64db353-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "7be38dfa-2557-43c7-83e8-f554a64db353" (UID: "7be38dfa-2557-43c7-83e8-f554a64db353"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 22 07:48:09 crc kubenswrapper[4853]: I1122 07:48:09.441408 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7be38dfa-2557-43c7-83e8-f554a64db353-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "7be38dfa-2557-43c7-83e8-f554a64db353" (UID: "7be38dfa-2557-43c7-83e8-f554a64db353"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 22 07:48:09 crc kubenswrapper[4853]: I1122 07:48:09.446311 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7be38dfa-2557-43c7-83e8-f554a64db353-scripts" (OuterVolumeSpecName: "scripts") pod "7be38dfa-2557-43c7-83e8-f554a64db353" (UID: "7be38dfa-2557-43c7-83e8-f554a64db353"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 22 07:48:09 crc kubenswrapper[4853]: I1122 07:48:09.470678 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7be38dfa-2557-43c7-83e8-f554a64db353-kube-api-access-nfzbg" (OuterVolumeSpecName: "kube-api-access-nfzbg") pod "7be38dfa-2557-43c7-83e8-f554a64db353" (UID: "7be38dfa-2557-43c7-83e8-f554a64db353"). InnerVolumeSpecName "kube-api-access-nfzbg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 22 07:48:09 crc kubenswrapper[4853]: I1122 07:48:09.498459 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7be38dfa-2557-43c7-83e8-f554a64db353-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "7be38dfa-2557-43c7-83e8-f554a64db353" (UID: "7be38dfa-2557-43c7-83e8-f554a64db353"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 22 07:48:09 crc kubenswrapper[4853]: I1122 07:48:09.523110 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7be38dfa-2557-43c7-83e8-f554a64db353-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "7be38dfa-2557-43c7-83e8-f554a64db353" (UID: "7be38dfa-2557-43c7-83e8-f554a64db353"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 22 07:48:09 crc kubenswrapper[4853]: I1122 07:48:09.534482 4853 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7be38dfa-2557-43c7-83e8-f554a64db353-scripts\") on node \"crc\" DevicePath \"\""
Nov 22 07:48:09 crc kubenswrapper[4853]: I1122 07:48:09.534531 4853 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7be38dfa-2557-43c7-83e8-f554a64db353-run-httpd\") on node \"crc\" DevicePath \"\""
Nov 22 07:48:09 crc kubenswrapper[4853]: I1122 07:48:09.534547 4853 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/7be38dfa-2557-43c7-83e8-f554a64db353-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\""
Nov 22 07:48:09 crc kubenswrapper[4853]: I1122 07:48:09.534559 4853 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7be38dfa-2557-43c7-83e8-f554a64db353-log-httpd\") on node \"crc\" DevicePath \"\""
Nov 22 07:48:09 crc kubenswrapper[4853]: I1122 07:48:09.534574 4853 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7be38dfa-2557-43c7-83e8-f554a64db353-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Nov 22 07:48:09 crc kubenswrapper[4853]: I1122 07:48:09.534588 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nfzbg\" (UniqueName: \"kubernetes.io/projected/7be38dfa-2557-43c7-83e8-f554a64db353-kube-api-access-nfzbg\") on node \"crc\" DevicePath \"\""
Nov 22 07:48:09 crc kubenswrapper[4853]: I1122 07:48:09.614086 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7be38dfa-2557-43c7-83e8-f554a64db353-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7be38dfa-2557-43c7-83e8-f554a64db353" (UID: "7be38dfa-2557-43c7-83e8-f554a64db353"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 22 07:48:09 crc kubenswrapper[4853]: I1122 07:48:09.614575 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7be38dfa-2557-43c7-83e8-f554a64db353-config-data" (OuterVolumeSpecName: "config-data") pod "7be38dfa-2557-43c7-83e8-f554a64db353" (UID: "7be38dfa-2557-43c7-83e8-f554a64db353"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 22 07:48:09 crc kubenswrapper[4853]: I1122 07:48:09.637396 4853 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7be38dfa-2557-43c7-83e8-f554a64db353-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 22 07:48:09 crc kubenswrapper[4853]: I1122 07:48:09.637443 4853 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7be38dfa-2557-43c7-83e8-f554a64db353-config-data\") on node \"crc\" DevicePath \"\""
Nov 22 07:48:09 crc kubenswrapper[4853]: I1122 07:48:09.772062 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Nov 22 07:48:09 crc kubenswrapper[4853]: I1122 07:48:09.782459 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Nov 22 07:48:09 crc kubenswrapper[4853]: I1122 07:48:09.801422 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Nov 22 07:48:09 crc kubenswrapper[4853]: E1122 07:48:09.802193 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7be38dfa-2557-43c7-83e8-f554a64db353" containerName="proxy-httpd"
Nov 22 07:48:09 crc kubenswrapper[4853]: I1122 07:48:09.802234 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="7be38dfa-2557-43c7-83e8-f554a64db353" containerName="proxy-httpd"
Nov 22 07:48:09 crc kubenswrapper[4853]: E1122 07:48:09.802252 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7be38dfa-2557-43c7-83e8-f554a64db353" containerName="sg-core"
Nov 22 07:48:09 crc kubenswrapper[4853]: I1122 07:48:09.802258 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="7be38dfa-2557-43c7-83e8-f554a64db353" containerName="sg-core"
Nov 22 07:48:09 crc kubenswrapper[4853]: E1122 07:48:09.802286 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7be38dfa-2557-43c7-83e8-f554a64db353" containerName="ceilometer-central-agent"
Nov 22 07:48:09 crc kubenswrapper[4853]: I1122 07:48:09.802292 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="7be38dfa-2557-43c7-83e8-f554a64db353" containerName="ceilometer-central-agent"
Nov 22 07:48:09 crc kubenswrapper[4853]: E1122 07:48:09.802345 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7be38dfa-2557-43c7-83e8-f554a64db353" containerName="ceilometer-notification-agent"
Nov 22 07:48:09 crc kubenswrapper[4853]: I1122 07:48:09.802352 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="7be38dfa-2557-43c7-83e8-f554a64db353" containerName="ceilometer-notification-agent"
Nov 22 07:48:09 crc kubenswrapper[4853]: I1122 07:48:09.802598 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="7be38dfa-2557-43c7-83e8-f554a64db353" containerName="ceilometer-central-agent"
Nov 22 07:48:09 crc kubenswrapper[4853]: I1122 07:48:09.802618 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="7be38dfa-2557-43c7-83e8-f554a64db353" containerName="proxy-httpd"
Nov 22 07:48:09 crc kubenswrapper[4853]: I1122 07:48:09.802637 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="7be38dfa-2557-43c7-83e8-f554a64db353" containerName="ceilometer-notification-agent"
Nov 22 07:48:09 crc kubenswrapper[4853]: I1122 07:48:09.802660 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="7be38dfa-2557-43c7-83e8-f554a64db353" containerName="sg-core"
Nov 22 07:48:09 crc kubenswrapper[4853]: I1122 07:48:09.805727 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Nov 22 07:48:09 crc kubenswrapper[4853]: I1122 07:48:09.813465 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Nov 22 07:48:09 crc kubenswrapper[4853]: I1122 07:48:09.814299 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc"
Nov 22 07:48:09 crc kubenswrapper[4853]: I1122 07:48:09.821889 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Nov 22 07:48:09 crc kubenswrapper[4853]: I1122 07:48:09.826629 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Nov 22 07:48:09 crc kubenswrapper[4853]: I1122 07:48:09.958056 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd18e0e6-0aa8-480b-b279-ac3fa277c8b9-config-data\") pod \"ceilometer-0\" (UID: \"fd18e0e6-0aa8-480b-b279-ac3fa277c8b9\") " pod="openstack/ceilometer-0"
Nov 22 07:48:09 crc kubenswrapper[4853]: I1122 07:48:09.958174 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fd18e0e6-0aa8-480b-b279-ac3fa277c8b9-run-httpd\") pod \"ceilometer-0\" (UID: \"fd18e0e6-0aa8-480b-b279-ac3fa277c8b9\") " pod="openstack/ceilometer-0"
Nov 22 07:48:09 crc kubenswrapper[4853]: I1122 07:48:09.958215 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fd18e0e6-0aa8-480b-b279-ac3fa277c8b9-log-httpd\") pod \"ceilometer-0\" (UID: \"fd18e0e6-0aa8-480b-b279-ac3fa277c8b9\") " pod="openstack/ceilometer-0"
Nov 22 07:48:09 crc kubenswrapper[4853]: I1122 07:48:09.958359 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gnhpx\" (UniqueName: \"kubernetes.io/projected/fd18e0e6-0aa8-480b-b279-ac3fa277c8b9-kube-api-access-gnhpx\") pod \"ceilometer-0\" (UID: \"fd18e0e6-0aa8-480b-b279-ac3fa277c8b9\") " pod="openstack/ceilometer-0"
Nov 22 07:48:09 crc kubenswrapper[4853]: I1122 07:48:09.958395 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fd18e0e6-0aa8-480b-b279-ac3fa277c8b9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"fd18e0e6-0aa8-480b-b279-ac3fa277c8b9\") " pod="openstack/ceilometer-0"
Nov 22 07:48:09 crc kubenswrapper[4853]: I1122 07:48:09.958460 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd18e0e6-0aa8-480b-b279-ac3fa277c8b9-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"fd18e0e6-0aa8-480b-b279-ac3fa277c8b9\") " pod="openstack/ceilometer-0"
Nov 22 07:48:09 crc kubenswrapper[4853]: I1122 07:48:09.958546 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd18e0e6-0aa8-480b-b279-ac3fa277c8b9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"fd18e0e6-0aa8-480b-b279-ac3fa277c8b9\") " pod="openstack/ceilometer-0"
Nov 22 07:48:09 crc kubenswrapper[4853]: I1122 07:48:09.958655 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fd18e0e6-0aa8-480b-b279-ac3fa277c8b9-scripts\") pod \"ceilometer-0\" (UID: \"fd18e0e6-0aa8-480b-b279-ac3fa277c8b9\") " pod="openstack/ceilometer-0"
Nov 22 07:48:10 crc kubenswrapper[4853]: I1122 07:48:10.061807 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd18e0e6-0aa8-480b-b279-ac3fa277c8b9-config-data\") pod \"ceilometer-0\" (UID: \"fd18e0e6-0aa8-480b-b279-ac3fa277c8b9\") " pod="openstack/ceilometer-0"
Nov 22 07:48:10 crc kubenswrapper[4853]: I1122 07:48:10.062293 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fd18e0e6-0aa8-480b-b279-ac3fa277c8b9-run-httpd\") pod \"ceilometer-0\" (UID: \"fd18e0e6-0aa8-480b-b279-ac3fa277c8b9\") " pod="openstack/ceilometer-0"
Nov 22 07:48:10 crc kubenswrapper[4853]: I1122 07:48:10.062357 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fd18e0e6-0aa8-480b-b279-ac3fa277c8b9-log-httpd\") pod \"ceilometer-0\" (UID: \"fd18e0e6-0aa8-480b-b279-ac3fa277c8b9\") " pod="openstack/ceilometer-0"
Nov 22 07:48:10 crc kubenswrapper[4853]: I1122 07:48:10.062509 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gnhpx\" (UniqueName: \"kubernetes.io/projected/fd18e0e6-0aa8-480b-b279-ac3fa277c8b9-kube-api-access-gnhpx\") pod \"ceilometer-0\" (UID: \"fd18e0e6-0aa8-480b-b279-ac3fa277c8b9\") " pod="openstack/ceilometer-0"
Nov 22 07:48:10 crc kubenswrapper[4853]: I1122 07:48:10.062876 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fd18e0e6-0aa8-480b-b279-ac3fa277c8b9-run-httpd\") pod \"ceilometer-0\" (UID: \"fd18e0e6-0aa8-480b-b279-ac3fa277c8b9\") " pod="openstack/ceilometer-0"
Nov 22 07:48:10 crc kubenswrapper[4853]: I1122 07:48:10.063063 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fd18e0e6-0aa8-480b-b279-ac3fa277c8b9-log-httpd\") pod \"ceilometer-0\" (UID: \"fd18e0e6-0aa8-480b-b279-ac3fa277c8b9\") " pod="openstack/ceilometer-0"
Nov 22 07:48:10 crc kubenswrapper[4853]: I1122 07:48:10.062544 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fd18e0e6-0aa8-480b-b279-ac3fa277c8b9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"fd18e0e6-0aa8-480b-b279-ac3fa277c8b9\") " pod="openstack/ceilometer-0"
Nov 22 07:48:10 crc kubenswrapper[4853]: I1122 07:48:10.063187 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd18e0e6-0aa8-480b-b279-ac3fa277c8b9-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"fd18e0e6-0aa8-480b-b279-ac3fa277c8b9\") " pod="openstack/ceilometer-0"
Nov 22 07:48:10 crc kubenswrapper[4853]: I1122 07:48:10.063263 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd18e0e6-0aa8-480b-b279-ac3fa277c8b9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"fd18e0e6-0aa8-480b-b279-ac3fa277c8b9\") " pod="openstack/ceilometer-0"
Nov 22 07:48:10 crc kubenswrapper[4853]: I1122 07:48:10.063384 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fd18e0e6-0aa8-480b-b279-ac3fa277c8b9-scripts\") pod \"ceilometer-0\" (UID: \"fd18e0e6-0aa8-480b-b279-ac3fa277c8b9\") " pod="openstack/ceilometer-0"
Nov 22 07:48:10 crc kubenswrapper[4853]: I1122 07:48:10.071610 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fd18e0e6-0aa8-480b-b279-ac3fa277c8b9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"fd18e0e6-0aa8-480b-b279-ac3fa277c8b9\") " pod="openstack/ceilometer-0"
Nov 22 07:48:10 crc kubenswrapper[4853]: I1122 07:48:10.074473 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fd18e0e6-0aa8-480b-b279-ac3fa277c8b9-scripts\") pod \"ceilometer-0\" (UID: \"fd18e0e6-0aa8-480b-b279-ac3fa277c8b9\") " pod="openstack/ceilometer-0"
Nov 22 07:48:10 crc kubenswrapper[4853]: I1122 07:48:10.074602 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd18e0e6-0aa8-480b-b279-ac3fa277c8b9-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"fd18e0e6-0aa8-480b-b279-ac3fa277c8b9\") " pod="openstack/ceilometer-0"
Nov 22 07:48:10 crc kubenswrapper[4853]: I1122 07:48:10.082159 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd18e0e6-0aa8-480b-b279-ac3fa277c8b9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"fd18e0e6-0aa8-480b-b279-ac3fa277c8b9\") " pod="openstack/ceilometer-0"
Nov 22 07:48:10 crc kubenswrapper[4853]: I1122 07:48:10.083831 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd18e0e6-0aa8-480b-b279-ac3fa277c8b9-config-data\") pod \"ceilometer-0\" (UID: \"fd18e0e6-0aa8-480b-b279-ac3fa277c8b9\") " pod="openstack/ceilometer-0"
Nov 22 07:48:10 crc kubenswrapper[4853]: I1122 07:48:10.084575 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gnhpx\" (UniqueName: \"kubernetes.io/projected/fd18e0e6-0aa8-480b-b279-ac3fa277c8b9-kube-api-access-gnhpx\") pod \"ceilometer-0\" (UID: \"fd18e0e6-0aa8-480b-b279-ac3fa277c8b9\") " pod="openstack/ceilometer-0"
Nov 22 07:48:10 crc kubenswrapper[4853]: I1122 07:48:10.221316 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Nov 22 07:48:10 crc kubenswrapper[4853]: I1122 07:48:10.419539 4853 scope.go:117] "RemoveContainer" containerID="f020ee3caf61620863b2e1acbf7c561cce14516687e91d2a670a0d7f041324b8"
Nov 22 07:48:10 crc kubenswrapper[4853]: I1122 07:48:10.458915 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d66f584d7-6lnjx" event={"ID":"25c8cdf7-96d5-43cf-b52b-a3b1d985b7a8","Type":"ContainerDied","Data":"2307fc8e6071eef6c01165f890c47afb9e7851e847f37125b3102b07edf44ea9"}
Nov 22 07:48:10 crc kubenswrapper[4853]: I1122 07:48:10.459220 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2307fc8e6071eef6c01165f890c47afb9e7851e847f37125b3102b07edf44ea9"
Nov 22 07:48:10 crc kubenswrapper[4853]: I1122 07:48:10.463688 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"39d11b3b-9490-41d8-87ad-542cddb9cc6b","Type":"ContainerDied","Data":"7c54ef80b82e20adc18b4a8f2a07debc2cef5f80a2023402f557fccb076bfa46"}
Nov 22 07:48:10 crc kubenswrapper[4853]: I1122 07:48:10.463759 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7c54ef80b82e20adc18b4a8f2a07debc2cef5f80a2023402f557fccb076bfa46"
Nov 22 07:48:10 crc kubenswrapper[4853]: I1122 07:48:10.574953 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Nov 22 07:48:10 crc kubenswrapper[4853]: I1122 07:48:10.579036 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6d66f584d7-6lnjx"
Nov 22 07:48:10 crc kubenswrapper[4853]: I1122 07:48:10.679984 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/25c8cdf7-96d5-43cf-b52b-a3b1d985b7a8-ovsdbserver-nb\") pod \"25c8cdf7-96d5-43cf-b52b-a3b1d985b7a8\" (UID: \"25c8cdf7-96d5-43cf-b52b-a3b1d985b7a8\") "
Nov 22 07:48:10 crc kubenswrapper[4853]: I1122 07:48:10.680283 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"39d11b3b-9490-41d8-87ad-542cddb9cc6b\" (UID: \"39d11b3b-9490-41d8-87ad-542cddb9cc6b\") "
Nov 22 07:48:10 crc kubenswrapper[4853]: I1122 07:48:10.680369 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fwjk7\" (UniqueName: \"kubernetes.io/projected/25c8cdf7-96d5-43cf-b52b-a3b1d985b7a8-kube-api-access-fwjk7\") pod \"25c8cdf7-96d5-43cf-b52b-a3b1d985b7a8\" (UID: \"25c8cdf7-96d5-43cf-b52b-a3b1d985b7a8\") "
Nov 22 07:48:10 crc kubenswrapper[4853]: I1122 07:48:10.680480 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39d11b3b-9490-41d8-87ad-542cddb9cc6b-combined-ca-bundle\") pod \"39d11b3b-9490-41d8-87ad-542cddb9cc6b\" (UID: \"39d11b3b-9490-41d8-87ad-542cddb9cc6b\") "
Nov 22 07:48:10 crc kubenswrapper[4853]: I1122 07:48:10.680510 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/25c8cdf7-96d5-43cf-b52b-a3b1d985b7a8-ovsdbserver-sb\") pod \"25c8cdf7-96d5-43cf-b52b-a3b1d985b7a8\" (UID: \"25c8cdf7-96d5-43cf-b52b-a3b1d985b7a8\") "
Nov 22 07:48:10 crc kubenswrapper[4853]: I1122 07:48:10.680537 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/25c8cdf7-96d5-43cf-b52b-a3b1d985b7a8-dns-svc\") pod \"25c8cdf7-96d5-43cf-b52b-a3b1d985b7a8\" (UID: \"25c8cdf7-96d5-43cf-b52b-a3b1d985b7a8\") "
Nov 22 07:48:10 crc kubenswrapper[4853]: I1122 07:48:10.680588 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/25c8cdf7-96d5-43cf-b52b-a3b1d985b7a8-dns-swift-storage-0\") pod \"25c8cdf7-96d5-43cf-b52b-a3b1d985b7a8\" (UID: \"25c8cdf7-96d5-43cf-b52b-a3b1d985b7a8\") "
Nov 22 07:48:10 crc kubenswrapper[4853]: I1122 07:48:10.680653 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39d11b3b-9490-41d8-87ad-542cddb9cc6b-config-data\") pod \"39d11b3b-9490-41d8-87ad-542cddb9cc6b\" (UID: \"39d11b3b-9490-41d8-87ad-542cddb9cc6b\") "
Nov 22 07:48:10 crc kubenswrapper[4853]: I1122 07:48:10.680686 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/39d11b3b-9490-41d8-87ad-542cddb9cc6b-httpd-run\") pod \"39d11b3b-9490-41d8-87ad-542cddb9cc6b\" (UID: \"39d11b3b-9490-41d8-87ad-542cddb9cc6b\") "
Nov 22 07:48:10 crc kubenswrapper[4853]: I1122 07:48:10.680709 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/25c8cdf7-96d5-43cf-b52b-a3b1d985b7a8-config\") pod \"25c8cdf7-96d5-43cf-b52b-a3b1d985b7a8\" (UID: \"25c8cdf7-96d5-43cf-b52b-a3b1d985b7a8\") "
Nov 22 07:48:10 crc kubenswrapper[4853]: I1122 07:48:10.680801 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgplx\" (UniqueName: \"kubernetes.io/projected/39d11b3b-9490-41d8-87ad-542cddb9cc6b-kube-api-access-zgplx\") pod \"39d11b3b-9490-41d8-87ad-542cddb9cc6b\" (UID: \"39d11b3b-9490-41d8-87ad-542cddb9cc6b\") "
Nov 22 07:48:10 crc kubenswrapper[4853]: I1122 07:48:10.680829 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/39d11b3b-9490-41d8-87ad-542cddb9cc6b-public-tls-certs\") pod \"39d11b3b-9490-41d8-87ad-542cddb9cc6b\" (UID: \"39d11b3b-9490-41d8-87ad-542cddb9cc6b\") "
Nov 22 07:48:10 crc kubenswrapper[4853]: I1122 07:48:10.680857 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/39d11b3b-9490-41d8-87ad-542cddb9cc6b-scripts\") pod \"39d11b3b-9490-41d8-87ad-542cddb9cc6b\" (UID: \"39d11b3b-9490-41d8-87ad-542cddb9cc6b\") "
Nov 22 07:48:10 crc kubenswrapper[4853]: I1122 07:48:10.680880 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/39d11b3b-9490-41d8-87ad-542cddb9cc6b-logs\") pod \"39d11b3b-9490-41d8-87ad-542cddb9cc6b\" (UID: \"39d11b3b-9490-41d8-87ad-542cddb9cc6b\") "
Nov 22 07:48:10 crc kubenswrapper[4853]: I1122 07:48:10.681460 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/39d11b3b-9490-41d8-87ad-542cddb9cc6b-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "39d11b3b-9490-41d8-87ad-542cddb9cc6b" (UID: "39d11b3b-9490-41d8-87ad-542cddb9cc6b"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 22 07:48:10 crc kubenswrapper[4853]: I1122 07:48:10.681975 4853 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/39d11b3b-9490-41d8-87ad-542cddb9cc6b-httpd-run\") on node \"crc\" DevicePath \"\""
Nov 22 07:48:10 crc kubenswrapper[4853]: I1122 07:48:10.682808 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/39d11b3b-9490-41d8-87ad-542cddb9cc6b-logs" (OuterVolumeSpecName: "logs") pod "39d11b3b-9490-41d8-87ad-542cddb9cc6b" (UID: "39d11b3b-9490-41d8-87ad-542cddb9cc6b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 22 07:48:10 crc kubenswrapper[4853]: I1122 07:48:10.770158 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/39d11b3b-9490-41d8-87ad-542cddb9cc6b-kube-api-access-zgplx" (OuterVolumeSpecName: "kube-api-access-zgplx") pod "39d11b3b-9490-41d8-87ad-542cddb9cc6b" (UID: "39d11b3b-9490-41d8-87ad-542cddb9cc6b"). InnerVolumeSpecName "kube-api-access-zgplx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 22 07:48:10 crc kubenswrapper[4853]: I1122 07:48:10.770662 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "glance") pod "39d11b3b-9490-41d8-87ad-542cddb9cc6b" (UID: "39d11b3b-9490-41d8-87ad-542cddb9cc6b"). InnerVolumeSpecName "local-storage05-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Nov 22 07:48:10 crc kubenswrapper[4853]: I1122 07:48:10.770830 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25c8cdf7-96d5-43cf-b52b-a3b1d985b7a8-kube-api-access-fwjk7" (OuterVolumeSpecName: "kube-api-access-fwjk7") pod "25c8cdf7-96d5-43cf-b52b-a3b1d985b7a8" (UID: "25c8cdf7-96d5-43cf-b52b-a3b1d985b7a8"). InnerVolumeSpecName "kube-api-access-fwjk7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 22 07:48:10 crc kubenswrapper[4853]: I1122 07:48:10.786779 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39d11b3b-9490-41d8-87ad-542cddb9cc6b-scripts" (OuterVolumeSpecName: "scripts") pod "39d11b3b-9490-41d8-87ad-542cddb9cc6b" (UID: "39d11b3b-9490-41d8-87ad-542cddb9cc6b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 22 07:48:10 crc kubenswrapper[4853]: I1122 07:48:10.788345 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/39d11b3b-9490-41d8-87ad-542cddb9cc6b-scripts\") pod \"39d11b3b-9490-41d8-87ad-542cddb9cc6b\" (UID: \"39d11b3b-9490-41d8-87ad-542cddb9cc6b\") "
Nov 22 07:48:10 crc kubenswrapper[4853]: I1122 07:48:10.789063 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgplx\" (UniqueName: \"kubernetes.io/projected/39d11b3b-9490-41d8-87ad-542cddb9cc6b-kube-api-access-zgplx\") on node \"crc\" DevicePath \"\""
Nov 22 07:48:10 crc kubenswrapper[4853]: I1122 07:48:10.789092 4853 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/39d11b3b-9490-41d8-87ad-542cddb9cc6b-logs\") on node \"crc\" DevicePath \"\""
Nov 22 07:48:10 crc kubenswrapper[4853]: I1122 07:48:10.789122 4853 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" "
Nov 22 07:48:10 crc kubenswrapper[4853]: I1122 07:48:10.789138 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fwjk7\" (UniqueName: \"kubernetes.io/projected/25c8cdf7-96d5-43cf-b52b-a3b1d985b7a8-kube-api-access-fwjk7\") on node \"crc\" DevicePath \"\""
Nov 22 07:48:10 crc kubenswrapper[4853]: W1122 07:48:10.789388 4853 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/39d11b3b-9490-41d8-87ad-542cddb9cc6b/volumes/kubernetes.io~secret/scripts
Nov 22 07:48:10 crc kubenswrapper[4853]: I1122 07:48:10.789423 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39d11b3b-9490-41d8-87ad-542cddb9cc6b-scripts" (OuterVolumeSpecName: "scripts") pod "39d11b3b-9490-41d8-87ad-542cddb9cc6b" (UID: "39d11b3b-9490-41d8-87ad-542cddb9cc6b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 22 07:48:10 crc kubenswrapper[4853]: I1122 07:48:10.892297 4853 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/39d11b3b-9490-41d8-87ad-542cddb9cc6b-scripts\") on node \"crc\" DevicePath \"\""
Nov 22 07:48:10 crc kubenswrapper[4853]: I1122 07:48:10.951154 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25c8cdf7-96d5-43cf-b52b-a3b1d985b7a8-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "25c8cdf7-96d5-43cf-b52b-a3b1d985b7a8" (UID: "25c8cdf7-96d5-43cf-b52b-a3b1d985b7a8"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 22 07:48:10 crc kubenswrapper[4853]: I1122 07:48:10.973477 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39d11b3b-9490-41d8-87ad-542cddb9cc6b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "39d11b3b-9490-41d8-87ad-542cddb9cc6b" (UID: "39d11b3b-9490-41d8-87ad-542cddb9cc6b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 22 07:48:10 crc kubenswrapper[4853]: I1122 07:48:10.978980 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25c8cdf7-96d5-43cf-b52b-a3b1d985b7a8-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "25c8cdf7-96d5-43cf-b52b-a3b1d985b7a8" (UID: "25c8cdf7-96d5-43cf-b52b-a3b1d985b7a8"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 22 07:48:11 crc kubenswrapper[4853]: I1122 07:48:11.002780 4853 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39d11b3b-9490-41d8-87ad-542cddb9cc6b-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 22 07:48:11 crc kubenswrapper[4853]: I1122 07:48:11.002835 4853 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/25c8cdf7-96d5-43cf-b52b-a3b1d985b7a8-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Nov 22 07:48:11 crc kubenswrapper[4853]: I1122 07:48:11.002851 4853 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/25c8cdf7-96d5-43cf-b52b-a3b1d985b7a8-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Nov 22 07:48:11 crc kubenswrapper[4853]: I1122 07:48:11.006160 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25c8cdf7-96d5-43cf-b52b-a3b1d985b7a8-config" (OuterVolumeSpecName: "config") pod "25c8cdf7-96d5-43cf-b52b-a3b1d985b7a8" (UID: "25c8cdf7-96d5-43cf-b52b-a3b1d985b7a8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 22 07:48:11 crc kubenswrapper[4853]: I1122 07:48:11.019124 4853 scope.go:117] "RemoveContainer" containerID="d84e35f92d245ee5516f88e2c02a5a04e0d7668b66b88b8379344b66aaca4198"
Nov 22 07:48:11 crc kubenswrapper[4853]: I1122 07:48:11.025462 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25c8cdf7-96d5-43cf-b52b-a3b1d985b7a8-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "25c8cdf7-96d5-43cf-b52b-a3b1d985b7a8" (UID: "25c8cdf7-96d5-43cf-b52b-a3b1d985b7a8"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 22 07:48:11 crc kubenswrapper[4853]: I1122 07:48:11.029236 4853 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc"
Nov 22 07:48:11 crc kubenswrapper[4853]: I1122 07:48:11.055819 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39d11b3b-9490-41d8-87ad-542cddb9cc6b-config-data" (OuterVolumeSpecName: "config-data") pod "39d11b3b-9490-41d8-87ad-542cddb9cc6b" (UID: "39d11b3b-9490-41d8-87ad-542cddb9cc6b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 22 07:48:11 crc kubenswrapper[4853]: I1122 07:48:11.078091 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39d11b3b-9490-41d8-87ad-542cddb9cc6b-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "39d11b3b-9490-41d8-87ad-542cddb9cc6b" (UID: "39d11b3b-9490-41d8-87ad-542cddb9cc6b"). InnerVolumeSpecName "public-tls-certs".
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:48:11 crc kubenswrapper[4853]: I1122 07:48:11.092642 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25c8cdf7-96d5-43cf-b52b-a3b1d985b7a8-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "25c8cdf7-96d5-43cf-b52b-a3b1d985b7a8" (UID: "25c8cdf7-96d5-43cf-b52b-a3b1d985b7a8"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:48:11 crc kubenswrapper[4853]: I1122 07:48:11.105647 4853 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/25c8cdf7-96d5-43cf-b52b-a3b1d985b7a8-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 22 07:48:11 crc kubenswrapper[4853]: I1122 07:48:11.105691 4853 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/25c8cdf7-96d5-43cf-b52b-a3b1d985b7a8-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 22 07:48:11 crc kubenswrapper[4853]: I1122 07:48:11.105703 4853 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39d11b3b-9490-41d8-87ad-542cddb9cc6b-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 07:48:11 crc kubenswrapper[4853]: I1122 07:48:11.105713 4853 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/25c8cdf7-96d5-43cf-b52b-a3b1d985b7a8-config\") on node \"crc\" DevicePath \"\"" Nov 22 07:48:11 crc kubenswrapper[4853]: I1122 07:48:11.105722 4853 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/39d11b3b-9490-41d8-87ad-542cddb9cc6b-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 22 07:48:11 crc kubenswrapper[4853]: I1122 07:48:11.105734 4853 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\"" Nov 22 07:48:11 crc kubenswrapper[4853]: I1122 07:48:11.395339 4853 scope.go:117] "RemoveContainer" containerID="2b9f93c2d4a31a06215ba9cfc32f7192812fa6c3bfd73fb30cff005830ac1d36" Nov 22 07:48:11 crc kubenswrapper[4853]: I1122 07:48:11.621112 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 22 07:48:11 crc kubenswrapper[4853]: I1122 07:48:11.624529 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-d58855874-6hg9r" event={"ID":"1ea82711-6541-4717-8711-16a13f6ce28c","Type":"ContainerStarted","Data":"da63265b20fbed6353b7be103fb3944f30d4afe9f8a9c85ccc0208b258d867ec"} Nov 22 07:48:11 crc kubenswrapper[4853]: I1122 07:48:11.624582 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-d58855874-6hg9r" Nov 22 07:48:11 crc kubenswrapper[4853]: I1122 07:48:11.624658 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6d66f584d7-6lnjx" Nov 22 07:48:11 crc kubenswrapper[4853]: I1122 07:48:11.628661 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-d58855874-6hg9r" Nov 22 07:48:11 crc kubenswrapper[4853]: I1122 07:48:11.709071 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-78dfc746b4-t4frx" Nov 22 07:48:11 crc kubenswrapper[4853]: I1122 07:48:11.765060 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-d58855874-6hg9r" podStartSLOduration=9.765017069 podStartE2EDuration="9.765017069s" podCreationTimestamp="2025-11-22 07:48:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:48:11.665402062 +0000 UTC m=+2290.506024688" watchObservedRunningTime="2025-11-22 07:48:11.765017069 +0000 UTC m=+2290.605639695" Nov 22 07:48:12 crc kubenswrapper[4853]: I1122 07:48:11.984685 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7be38dfa-2557-43c7-83e8-f554a64db353" path="/var/lib/kubelet/pods/7be38dfa-2557-43c7-83e8-f554a64db353/volumes" Nov 22 07:48:12 crc kubenswrapper[4853]: I1122 07:48:11.999489 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 22 07:48:12 crc kubenswrapper[4853]: I1122 07:48:12.199826 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 22 07:48:12 crc kubenswrapper[4853]: I1122 07:48:12.277056 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 22 07:48:12 crc kubenswrapper[4853]: I1122 07:48:12.307885 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6d66f584d7-6lnjx"] Nov 22 07:48:12 crc kubenswrapper[4853]: I1122 07:48:12.326080 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6d66f584d7-6lnjx"] Nov 22 07:48:12 crc kubenswrapper[4853]: I1122 07:48:12.369393 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Nov 22 07:48:12 crc kubenswrapper[4853]: E1122 07:48:12.380819 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39d11b3b-9490-41d8-87ad-542cddb9cc6b" containerName="glance-httpd" Nov 22 07:48:12 crc kubenswrapper[4853]: I1122 07:48:12.380879 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="39d11b3b-9490-41d8-87ad-542cddb9cc6b" containerName="glance-httpd" Nov 22 07:48:12 crc kubenswrapper[4853]: E1122 07:48:12.380901 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25c8cdf7-96d5-43cf-b52b-a3b1d985b7a8" containerName="init" Nov 22 07:48:12 crc kubenswrapper[4853]: I1122 07:48:12.380911 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="25c8cdf7-96d5-43cf-b52b-a3b1d985b7a8" containerName="init" Nov 22 07:48:12 crc kubenswrapper[4853]: E1122 07:48:12.380960 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39d11b3b-9490-41d8-87ad-542cddb9cc6b" containerName="glance-log" Nov 22 07:48:12 crc kubenswrapper[4853]: I1122 07:48:12.380970 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="39d11b3b-9490-41d8-87ad-542cddb9cc6b" containerName="glance-log" Nov 22 07:48:12 crc kubenswrapper[4853]: E1122 07:48:12.381029 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25c8cdf7-96d5-43cf-b52b-a3b1d985b7a8" 
containerName="dnsmasq-dns" Nov 22 07:48:12 crc kubenswrapper[4853]: I1122 07:48:12.381039 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="25c8cdf7-96d5-43cf-b52b-a3b1d985b7a8" containerName="dnsmasq-dns" Nov 22 07:48:12 crc kubenswrapper[4853]: I1122 07:48:12.381559 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="25c8cdf7-96d5-43cf-b52b-a3b1d985b7a8" containerName="dnsmasq-dns" Nov 22 07:48:12 crc kubenswrapper[4853]: I1122 07:48:12.381592 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="39d11b3b-9490-41d8-87ad-542cddb9cc6b" containerName="glance-httpd" Nov 22 07:48:12 crc kubenswrapper[4853]: I1122 07:48:12.381623 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="39d11b3b-9490-41d8-87ad-542cddb9cc6b" containerName="glance-log" Nov 22 07:48:12 crc kubenswrapper[4853]: I1122 07:48:12.383852 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 22 07:48:12 crc kubenswrapper[4853]: I1122 07:48:12.394819 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 22 07:48:12 crc kubenswrapper[4853]: I1122 07:48:12.404998 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Nov 22 07:48:12 crc kubenswrapper[4853]: I1122 07:48:12.405384 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Nov 22 07:48:12 crc kubenswrapper[4853]: I1122 07:48:12.592597 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"5554d3b5-8219-4dc0-9f3e-cb1ee319ef72\") " pod="openstack/glance-default-external-api-0" Nov 22 07:48:12 crc kubenswrapper[4853]: I1122 07:48:12.593006 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5554d3b5-8219-4dc0-9f3e-cb1ee319ef72-logs\") pod \"glance-default-external-api-0\" (UID: \"5554d3b5-8219-4dc0-9f3e-cb1ee319ef72\") " pod="openstack/glance-default-external-api-0" Nov 22 07:48:12 crc kubenswrapper[4853]: I1122 07:48:12.593037 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5554d3b5-8219-4dc0-9f3e-cb1ee319ef72-config-data\") pod \"glance-default-external-api-0\" (UID: \"5554d3b5-8219-4dc0-9f3e-cb1ee319ef72\") " pod="openstack/glance-default-external-api-0" Nov 22 07:48:12 crc kubenswrapper[4853]: I1122 07:48:12.593059 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5554d3b5-8219-4dc0-9f3e-cb1ee319ef72-scripts\") pod \"glance-default-external-api-0\" (UID: \"5554d3b5-8219-4dc0-9f3e-cb1ee319ef72\") " pod="openstack/glance-default-external-api-0" Nov 22 07:48:12 crc kubenswrapper[4853]: I1122 07:48:12.593119 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5554d3b5-8219-4dc0-9f3e-cb1ee319ef72-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"5554d3b5-8219-4dc0-9f3e-cb1ee319ef72\") " pod="openstack/glance-default-external-api-0" Nov 22 07:48:12 crc kubenswrapper[4853]: I1122 07:48:12.593204 
4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9g486\" (UniqueName: \"kubernetes.io/projected/5554d3b5-8219-4dc0-9f3e-cb1ee319ef72-kube-api-access-9g486\") pod \"glance-default-external-api-0\" (UID: \"5554d3b5-8219-4dc0-9f3e-cb1ee319ef72\") " pod="openstack/glance-default-external-api-0" Nov 22 07:48:12 crc kubenswrapper[4853]: I1122 07:48:12.593250 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5554d3b5-8219-4dc0-9f3e-cb1ee319ef72-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"5554d3b5-8219-4dc0-9f3e-cb1ee319ef72\") " pod="openstack/glance-default-external-api-0" Nov 22 07:48:12 crc kubenswrapper[4853]: I1122 07:48:12.593424 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5554d3b5-8219-4dc0-9f3e-cb1ee319ef72-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"5554d3b5-8219-4dc0-9f3e-cb1ee319ef72\") " pod="openstack/glance-default-external-api-0" Nov 22 07:48:12 crc kubenswrapper[4853]: I1122 07:48:12.692952 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fd18e0e6-0aa8-480b-b279-ac3fa277c8b9","Type":"ContainerStarted","Data":"363c01c152e347ddb2f316c9a8dd3bc81ba22bce92f223743bd3b59eeb167bf7"} Nov 22 07:48:12 crc kubenswrapper[4853]: I1122 07:48:12.696910 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"5554d3b5-8219-4dc0-9f3e-cb1ee319ef72\") " pod="openstack/glance-default-external-api-0" Nov 22 07:48:12 crc kubenswrapper[4853]: I1122 07:48:12.697218 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5554d3b5-8219-4dc0-9f3e-cb1ee319ef72-logs\") pod \"glance-default-external-api-0\" (UID: \"5554d3b5-8219-4dc0-9f3e-cb1ee319ef72\") " pod="openstack/glance-default-external-api-0" Nov 22 07:48:12 crc kubenswrapper[4853]: I1122 07:48:12.697353 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5554d3b5-8219-4dc0-9f3e-cb1ee319ef72-config-data\") pod \"glance-default-external-api-0\" (UID: \"5554d3b5-8219-4dc0-9f3e-cb1ee319ef72\") " pod="openstack/glance-default-external-api-0" Nov 22 07:48:12 crc kubenswrapper[4853]: I1122 07:48:12.697377 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5554d3b5-8219-4dc0-9f3e-cb1ee319ef72-scripts\") pod \"glance-default-external-api-0\" (UID: \"5554d3b5-8219-4dc0-9f3e-cb1ee319ef72\") " pod="openstack/glance-default-external-api-0" Nov 22 07:48:12 crc kubenswrapper[4853]: I1122 07:48:12.697431 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5554d3b5-8219-4dc0-9f3e-cb1ee319ef72-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"5554d3b5-8219-4dc0-9f3e-cb1ee319ef72\") " pod="openstack/glance-default-external-api-0" Nov 22 07:48:12 crc kubenswrapper[4853]: I1122 07:48:12.697515 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9g486\" (UniqueName: 
\"kubernetes.io/projected/5554d3b5-8219-4dc0-9f3e-cb1ee319ef72-kube-api-access-9g486\") pod \"glance-default-external-api-0\" (UID: \"5554d3b5-8219-4dc0-9f3e-cb1ee319ef72\") " pod="openstack/glance-default-external-api-0" Nov 22 07:48:12 crc kubenswrapper[4853]: I1122 07:48:12.697562 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5554d3b5-8219-4dc0-9f3e-cb1ee319ef72-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"5554d3b5-8219-4dc0-9f3e-cb1ee319ef72\") " pod="openstack/glance-default-external-api-0" Nov 22 07:48:12 crc kubenswrapper[4853]: I1122 07:48:12.697605 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5554d3b5-8219-4dc0-9f3e-cb1ee319ef72-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"5554d3b5-8219-4dc0-9f3e-cb1ee319ef72\") " pod="openstack/glance-default-external-api-0" Nov 22 07:48:12 crc kubenswrapper[4853]: I1122 07:48:12.698408 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5554d3b5-8219-4dc0-9f3e-cb1ee319ef72-logs\") pod \"glance-default-external-api-0\" (UID: \"5554d3b5-8219-4dc0-9f3e-cb1ee319ef72\") " pod="openstack/glance-default-external-api-0" Nov 22 07:48:12 crc kubenswrapper[4853]: I1122 07:48:12.699069 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5554d3b5-8219-4dc0-9f3e-cb1ee319ef72-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"5554d3b5-8219-4dc0-9f3e-cb1ee319ef72\") " pod="openstack/glance-default-external-api-0" Nov 22 07:48:12 crc kubenswrapper[4853]: I1122 07:48:12.699568 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b4ffw" event={"ID":"863918c2-c760-4c96-888f-a778bcbb018b","Type":"ContainerStarted","Data":"e4b929b6b2c6ec6f2cec73d9b84f4a13113d088987c887b0762e34106a97eb58"} Nov 22 07:48:12 crc kubenswrapper[4853]: I1122 07:48:12.702606 4853 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"5554d3b5-8219-4dc0-9f3e-cb1ee319ef72\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/glance-default-external-api-0" Nov 22 07:48:12 crc kubenswrapper[4853]: I1122 07:48:12.735215 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5f67678855-5vc2g" event={"ID":"b9146abf-7a18-4ae8-a1e8-df3456597edf","Type":"ContainerStarted","Data":"9b75bd4dc311348dad5fa44e7dbdc4f868f6e4bb5c00e374888e2aefe99088bb"} Nov 22 07:48:12 crc kubenswrapper[4853]: I1122 07:48:12.735287 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5f67678855-5vc2g" event={"ID":"b9146abf-7a18-4ae8-a1e8-df3456597edf","Type":"ContainerStarted","Data":"06b923c2d23648f26b73d07a5549cddcbf8d46e106c3a08404da70dbf74b14d5"} Nov 22 07:48:12 crc kubenswrapper[4853]: I1122 07:48:12.740450 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5554d3b5-8219-4dc0-9f3e-cb1ee319ef72-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"5554d3b5-8219-4dc0-9f3e-cb1ee319ef72\") " pod="openstack/glance-default-external-api-0" Nov 22 07:48:12 crc kubenswrapper[4853]: I1122 
07:48:12.741423 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5554d3b5-8219-4dc0-9f3e-cb1ee319ef72-scripts\") pod \"glance-default-external-api-0\" (UID: \"5554d3b5-8219-4dc0-9f3e-cb1ee319ef72\") " pod="openstack/glance-default-external-api-0" Nov 22 07:48:12 crc kubenswrapper[4853]: I1122 07:48:12.746475 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"eb91c81d-f604-490f-8397-3f7e5b24236f","Type":"ContainerStarted","Data":"2a0ae2704879a3a68f2954be6b02bdf86ba699bf28df386ae8933fb182211f48"} Nov 22 07:48:12 crc kubenswrapper[4853]: I1122 07:48:12.746694 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="eb91c81d-f604-490f-8397-3f7e5b24236f" containerName="cinder-api-log" containerID="cri-o://151cef4835ee16ee65aa991da3fb752907af13b30ebe8f99644e65ab26befcf2" gracePeriod=30 Nov 22 07:48:12 crc kubenswrapper[4853]: I1122 07:48:12.747048 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Nov 22 07:48:12 crc kubenswrapper[4853]: I1122 07:48:12.747677 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9g486\" (UniqueName: \"kubernetes.io/projected/5554d3b5-8219-4dc0-9f3e-cb1ee319ef72-kube-api-access-9g486\") pod \"glance-default-external-api-0\" (UID: \"5554d3b5-8219-4dc0-9f3e-cb1ee319ef72\") " pod="openstack/glance-default-external-api-0" Nov 22 07:48:12 crc kubenswrapper[4853]: I1122 07:48:12.747818 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="eb91c81d-f604-490f-8397-3f7e5b24236f" containerName="cinder-api" containerID="cri-o://2a0ae2704879a3a68f2954be6b02bdf86ba699bf28df386ae8933fb182211f48" gracePeriod=30 Nov 22 07:48:12 crc kubenswrapper[4853]: I1122 07:48:12.750474 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5554d3b5-8219-4dc0-9f3e-cb1ee319ef72-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"5554d3b5-8219-4dc0-9f3e-cb1ee319ef72\") " pod="openstack/glance-default-external-api-0" Nov 22 07:48:12 crc kubenswrapper[4853]: I1122 07:48:12.750803 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5554d3b5-8219-4dc0-9f3e-cb1ee319ef72-config-data\") pod \"glance-default-external-api-0\" (UID: \"5554d3b5-8219-4dc0-9f3e-cb1ee319ef72\") " pod="openstack/glance-default-external-api-0" Nov 22 07:48:12 crc kubenswrapper[4853]: I1122 07:48:12.790960 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-674b76c99f-c5t2f" event={"ID":"0dd1e1e8-e796-4ad0-96de-526e8b847c61","Type":"ContainerStarted","Data":"ac3fc7ab9615f852a66a2955744831106f69c8a7a5577b53598a30e348490122"} Nov 22 07:48:12 crc kubenswrapper[4853]: I1122 07:48:12.791460 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-674b76c99f-c5t2f" Nov 22 07:48:12 crc kubenswrapper[4853]: I1122 07:48:12.799368 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=8.799347686 podStartE2EDuration="8.799347686s" podCreationTimestamp="2025-11-22 07:48:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 
07:48:12.790430236 +0000 UTC m=+2291.631052882" watchObservedRunningTime="2025-11-22 07:48:12.799347686 +0000 UTC m=+2291.639970312" Nov 22 07:48:12 crc kubenswrapper[4853]: I1122 07:48:12.814862 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"5554d3b5-8219-4dc0-9f3e-cb1ee319ef72\") " pod="openstack/glance-default-external-api-0" Nov 22 07:48:12 crc kubenswrapper[4853]: I1122 07:48:12.817493 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-674b76c99f-c5t2f" podStartSLOduration=8.817452384 podStartE2EDuration="8.817452384s" podCreationTimestamp="2025-11-22 07:48:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:48:12.816346144 +0000 UTC m=+2291.656968790" watchObservedRunningTime="2025-11-22 07:48:12.817452384 +0000 UTC m=+2291.658075030" Nov 22 07:48:12 crc kubenswrapper[4853]: I1122 07:48:12.822636 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-57684d7498-b46f9" event={"ID":"0afdca33-fd60-4480-b1f7-29ec0199998e","Type":"ContainerStarted","Data":"d985d5a0b500d9a8aec7740d99b3a8ab729a8191882b8936789a1f217f3b61cf"} Nov 22 07:48:12 crc kubenswrapper[4853]: I1122 07:48:12.822692 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-57684d7498-b46f9" event={"ID":"0afdca33-fd60-4480-b1f7-29ec0199998e","Type":"ContainerStarted","Data":"62ed877f1f7da18d4038226752237993fc08ed6c5cb8806545d352b23c1e7684"} Nov 22 07:48:13 crc kubenswrapper[4853]: I1122 07:48:13.120604 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 22 07:48:13 crc kubenswrapper[4853]: I1122 07:48:13.172264 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 22 07:48:13 crc kubenswrapper[4853]: I1122 07:48:13.814730 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25c8cdf7-96d5-43cf-b52b-a3b1d985b7a8" path="/var/lib/kubelet/pods/25c8cdf7-96d5-43cf-b52b-a3b1d985b7a8/volumes" Nov 22 07:48:13 crc kubenswrapper[4853]: I1122 07:48:13.830152 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="39d11b3b-9490-41d8-87ad-542cddb9cc6b" path="/var/lib/kubelet/pods/39d11b3b-9490-41d8-87ad-542cddb9cc6b/volumes" Nov 22 07:48:13 crc kubenswrapper[4853]: I1122 07:48:13.966565 4853 generic.go:334] "Generic (PLEG): container finished" podID="eb91c81d-f604-490f-8397-3f7e5b24236f" containerID="2a0ae2704879a3a68f2954be6b02bdf86ba699bf28df386ae8933fb182211f48" exitCode=0 Nov 22 07:48:13 crc kubenswrapper[4853]: I1122 07:48:13.966610 4853 generic.go:334] "Generic (PLEG): container finished" podID="eb91c81d-f604-490f-8397-3f7e5b24236f" containerID="151cef4835ee16ee65aa991da3fb752907af13b30ebe8f99644e65ab26befcf2" exitCode=143 Nov 22 07:48:13 crc kubenswrapper[4853]: I1122 07:48:13.966808 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"eb91c81d-f604-490f-8397-3f7e5b24236f","Type":"ContainerDied","Data":"2a0ae2704879a3a68f2954be6b02bdf86ba699bf28df386ae8933fb182211f48"} Nov 22 07:48:13 crc kubenswrapper[4853]: I1122 07:48:13.966858 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"eb91c81d-f604-490f-8397-3f7e5b24236f","Type":"ContainerDied","Data":"151cef4835ee16ee65aa991da3fb752907af13b30ebe8f99644e65ab26befcf2"} Nov 22 07:48:13 crc kubenswrapper[4853]: I1122 07:48:13.966870 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"eb91c81d-f604-490f-8397-3f7e5b24236f","Type":"ContainerDied","Data":"2a54fd78884c087d847d83b4519e5be3fd405055dc8ffb8b5c0586a08a7bfe31"} Nov 22 07:48:13 crc kubenswrapper[4853]: I1122 07:48:13.966880 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2a54fd78884c087d847d83b4519e5be3fd405055dc8ffb8b5c0586a08a7bfe31" Nov 22 07:48:13 crc kubenswrapper[4853]: I1122 07:48:13.983128 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"fe59dfbf-2b13-4067-9d40-3d0d372f0f77","Type":"ContainerStarted","Data":"2bf0cc77fb98b8b15644a8683515773bb6e68032be660da1deb3a644d7d5cf49"} Nov 22 07:48:14 crc kubenswrapper[4853]: I1122 07:48:14.003483 4853 generic.go:334] "Generic (PLEG): container finished" podID="5a08a523-61a0-4155-b389-0491bcd97e84" containerID="a3861ced43ef558639f77a20e56162a89c124dd3bbfd3b4e531cc643f8fdcea1" exitCode=0 Nov 22 07:48:14 crc kubenswrapper[4853]: I1122 07:48:14.003660 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-7xksh" event={"ID":"5a08a523-61a0-4155-b389-0491bcd97e84","Type":"ContainerDied","Data":"a3861ced43ef558639f77a20e56162a89c124dd3bbfd3b4e531cc643f8fdcea1"} Nov 22 07:48:14 crc kubenswrapper[4853]: I1122 07:48:14.006118 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Nov 22 07:48:14 crc kubenswrapper[4853]: I1122 07:48:14.011137 4853 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 22 07:48:14 crc kubenswrapper[4853]: I1122 07:48:14.011590 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fd18e0e6-0aa8-480b-b279-ac3fa277c8b9","Type":"ContainerStarted","Data":"c3bec8b912b07b45c0217eff4ae1d88a1901442f6b68c6ad4e6cdf0a17a44feb"} Nov 22 07:48:14 crc kubenswrapper[4853]: I1122 07:48:14.073313 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eb91c81d-f604-490f-8397-3f7e5b24236f-logs\") pod \"eb91c81d-f604-490f-8397-3f7e5b24236f\" (UID: \"eb91c81d-f604-490f-8397-3f7e5b24236f\") " Nov 22 07:48:14 crc kubenswrapper[4853]: I1122 07:48:14.073870 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g8t6b\" (UniqueName: \"kubernetes.io/projected/eb91c81d-f604-490f-8397-3f7e5b24236f-kube-api-access-g8t6b\") pod \"eb91c81d-f604-490f-8397-3f7e5b24236f\" (UID: \"eb91c81d-f604-490f-8397-3f7e5b24236f\") " Nov 22 07:48:14 crc kubenswrapper[4853]: I1122 07:48:14.074019 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/eb91c81d-f604-490f-8397-3f7e5b24236f-etc-machine-id\") pod \"eb91c81d-f604-490f-8397-3f7e5b24236f\" (UID: \"eb91c81d-f604-490f-8397-3f7e5b24236f\") " Nov 22 07:48:14 crc kubenswrapper[4853]: I1122 07:48:14.074135 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb91c81d-f604-490f-8397-3f7e5b24236f-combined-ca-bundle\") pod \"eb91c81d-f604-490f-8397-3f7e5b24236f\" (UID: \"eb91c81d-f604-490f-8397-3f7e5b24236f\") " Nov 22 07:48:14 crc kubenswrapper[4853]: I1122 07:48:14.074211 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eb91c81d-f604-490f-8397-3f7e5b24236f-scripts\") pod \"eb91c81d-f604-490f-8397-3f7e5b24236f\" (UID: \"eb91c81d-f604-490f-8397-3f7e5b24236f\") " Nov 22 07:48:14 crc kubenswrapper[4853]: I1122 07:48:14.074240 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/eb91c81d-f604-490f-8397-3f7e5b24236f-config-data-custom\") pod \"eb91c81d-f604-490f-8397-3f7e5b24236f\" (UID: \"eb91c81d-f604-490f-8397-3f7e5b24236f\") " Nov 22 07:48:14 crc kubenswrapper[4853]: I1122 07:48:14.074351 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb91c81d-f604-490f-8397-3f7e5b24236f-config-data\") pod \"eb91c81d-f604-490f-8397-3f7e5b24236f\" (UID: \"eb91c81d-f604-490f-8397-3f7e5b24236f\") " Nov 22 07:48:14 crc kubenswrapper[4853]: I1122 07:48:14.075556 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eb91c81d-f604-490f-8397-3f7e5b24236f-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "eb91c81d-f604-490f-8397-3f7e5b24236f" (UID: "eb91c81d-f604-490f-8397-3f7e5b24236f"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 22 07:48:14 crc kubenswrapper[4853]: I1122 07:48:14.081232 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eb91c81d-f604-490f-8397-3f7e5b24236f-logs" (OuterVolumeSpecName: "logs") pod "eb91c81d-f604-490f-8397-3f7e5b24236f" (UID: "eb91c81d-f604-490f-8397-3f7e5b24236f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:48:14 crc kubenswrapper[4853]: I1122 07:48:14.089617 4853 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eb91c81d-f604-490f-8397-3f7e5b24236f-logs\") on node \"crc\" DevicePath \"\"" Nov 22 07:48:14 crc kubenswrapper[4853]: I1122 07:48:14.089653 4853 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/eb91c81d-f604-490f-8397-3f7e5b24236f-etc-machine-id\") on node \"crc\" DevicePath \"\"" Nov 22 07:48:14 crc kubenswrapper[4853]: I1122 07:48:14.168873 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb91c81d-f604-490f-8397-3f7e5b24236f-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "eb91c81d-f604-490f-8397-3f7e5b24236f" (UID: "eb91c81d-f604-490f-8397-3f7e5b24236f"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:48:14 crc kubenswrapper[4853]: I1122 07:48:14.176799 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-5f67678855-5vc2g" podStartSLOduration=7.2147266 podStartE2EDuration="17.176770987s" podCreationTimestamp="2025-11-22 07:47:57 +0000 UTC" firstStartedPulling="2025-11-22 07:48:00.485682844 +0000 UTC m=+2279.326305470" lastFinishedPulling="2025-11-22 07:48:10.447727231 +0000 UTC m=+2289.288349857" observedRunningTime="2025-11-22 07:48:14.049506384 +0000 UTC m=+2292.890129010" watchObservedRunningTime="2025-11-22 07:48:14.176770987 +0000 UTC m=+2293.017393613" Nov 22 07:48:14 crc kubenswrapper[4853]: I1122 07:48:14.183091 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb91c81d-f604-490f-8397-3f7e5b24236f-kube-api-access-g8t6b" (OuterVolumeSpecName: "kube-api-access-g8t6b") pod "eb91c81d-f604-490f-8397-3f7e5b24236f" (UID: "eb91c81d-f604-490f-8397-3f7e5b24236f"). InnerVolumeSpecName "kube-api-access-g8t6b". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:48:14 crc kubenswrapper[4853]: I1122 07:48:14.184673 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb91c81d-f604-490f-8397-3f7e5b24236f-scripts" (OuterVolumeSpecName: "scripts") pod "eb91c81d-f604-490f-8397-3f7e5b24236f" (UID: "eb91c81d-f604-490f-8397-3f7e5b24236f"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:48:14 crc kubenswrapper[4853]: I1122 07:48:14.203909 4853 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eb91c81d-f604-490f-8397-3f7e5b24236f-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 07:48:14 crc kubenswrapper[4853]: I1122 07:48:14.203965 4853 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/eb91c81d-f604-490f-8397-3f7e5b24236f-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 22 07:48:14 crc kubenswrapper[4853]: I1122 07:48:14.203986 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g8t6b\" (UniqueName: \"kubernetes.io/projected/eb91c81d-f604-490f-8397-3f7e5b24236f-kube-api-access-g8t6b\") on node \"crc\" DevicePath \"\"" Nov 22 07:48:14 crc kubenswrapper[4853]: I1122 07:48:14.226437 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-57684d7498-b46f9" podStartSLOduration=6.8208049939999995 podStartE2EDuration="17.226411415s" podCreationTimestamp="2025-11-22 07:47:57 +0000 UTC" firstStartedPulling="2025-11-22 07:48:00.695333437 +0000 UTC m=+2279.535956063" lastFinishedPulling="2025-11-22 07:48:11.100939858 +0000 UTC m=+2289.941562484" observedRunningTime="2025-11-22 07:48:14.111308651 +0000 UTC m=+2292.951931287" watchObservedRunningTime="2025-11-22 07:48:14.226411415 +0000 UTC m=+2293.067034041" Nov 22 07:48:14 crc kubenswrapper[4853]: I1122 07:48:14.295019 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb91c81d-f604-490f-8397-3f7e5b24236f-config-data" (OuterVolumeSpecName: "config-data") pod "eb91c81d-f604-490f-8397-3f7e5b24236f" (UID: "eb91c81d-f604-490f-8397-3f7e5b24236f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:48:14 crc kubenswrapper[4853]: I1122 07:48:14.313939 4853 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb91c81d-f604-490f-8397-3f7e5b24236f-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 07:48:14 crc kubenswrapper[4853]: I1122 07:48:14.354969 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb91c81d-f604-490f-8397-3f7e5b24236f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "eb91c81d-f604-490f-8397-3f7e5b24236f" (UID: "eb91c81d-f604-490f-8397-3f7e5b24236f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:48:14 crc kubenswrapper[4853]: I1122 07:48:14.418429 4853 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb91c81d-f604-490f-8397-3f7e5b24236f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:48:14 crc kubenswrapper[4853]: I1122 07:48:14.422287 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 22 07:48:14 crc kubenswrapper[4853]: W1122 07:48:14.430551 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5554d3b5_8219_4dc0_9f3e_cb1ee319ef72.slice/crio-cd8c030120f2ff11be24b4d06a2885e3a64aec7750e4db51f0a52ae33c8e0897 WatchSource:0}: Error finding container cd8c030120f2ff11be24b4d06a2885e3a64aec7750e4db51f0a52ae33c8e0897: Status 404 returned error can't find the container with id cd8c030120f2ff11be24b4d06a2885e3a64aec7750e4db51f0a52ae33c8e0897 Nov 22 07:48:15 crc kubenswrapper[4853]: I1122 07:48:15.057966 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fd18e0e6-0aa8-480b-b279-ac3fa277c8b9","Type":"ContainerStarted","Data":"5cb43eed56a3ee85131d128b4f0c23114281f10b0f2c794124f82c80367a9968"} Nov 22 07:48:15 crc kubenswrapper[4853]: I1122 07:48:15.060764 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"5554d3b5-8219-4dc0-9f3e-cb1ee319ef72","Type":"ContainerStarted","Data":"cd8c030120f2ff11be24b4d06a2885e3a64aec7750e4db51f0a52ae33c8e0897"} Nov 22 07:48:15 crc kubenswrapper[4853]: I1122 07:48:15.060889 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Nov 22 07:48:15 crc kubenswrapper[4853]: I1122 07:48:15.126809 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Nov 22 07:48:15 crc kubenswrapper[4853]: I1122 07:48:15.147304 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Nov 22 07:48:15 crc kubenswrapper[4853]: I1122 07:48:15.160965 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Nov 22 07:48:15 crc kubenswrapper[4853]: E1122 07:48:15.161869 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb91c81d-f604-490f-8397-3f7e5b24236f" containerName="cinder-api" Nov 22 07:48:15 crc kubenswrapper[4853]: I1122 07:48:15.161888 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb91c81d-f604-490f-8397-3f7e5b24236f" containerName="cinder-api" Nov 22 07:48:15 crc kubenswrapper[4853]: E1122 07:48:15.161899 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb91c81d-f604-490f-8397-3f7e5b24236f" containerName="cinder-api-log" Nov 22 07:48:15 crc kubenswrapper[4853]: I1122 07:48:15.161907 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb91c81d-f604-490f-8397-3f7e5b24236f" containerName="cinder-api-log" Nov 22 07:48:15 crc kubenswrapper[4853]: I1122 07:48:15.162150 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb91c81d-f604-490f-8397-3f7e5b24236f" containerName="cinder-api-log" Nov 22 07:48:15 crc kubenswrapper[4853]: I1122 07:48:15.162184 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb91c81d-f604-490f-8397-3f7e5b24236f" containerName="cinder-api" Nov 22 07:48:15 crc kubenswrapper[4853]: I1122 07:48:15.163558 4853 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack/cinder-api-0" Nov 22 07:48:15 crc kubenswrapper[4853]: I1122 07:48:15.169442 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Nov 22 07:48:15 crc kubenswrapper[4853]: I1122 07:48:15.170399 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Nov 22 07:48:15 crc kubenswrapper[4853]: I1122 07:48:15.170580 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Nov 22 07:48:15 crc kubenswrapper[4853]: I1122 07:48:15.182894 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 22 07:48:15 crc kubenswrapper[4853]: I1122 07:48:15.242786 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/efb7e269-fe2e-45b4-949e-8f862ef94e3c-public-tls-certs\") pod \"cinder-api-0\" (UID: \"efb7e269-fe2e-45b4-949e-8f862ef94e3c\") " pod="openstack/cinder-api-0" Nov 22 07:48:15 crc kubenswrapper[4853]: I1122 07:48:15.242858 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/efb7e269-fe2e-45b4-949e-8f862ef94e3c-config-data\") pod \"cinder-api-0\" (UID: \"efb7e269-fe2e-45b4-949e-8f862ef94e3c\") " pod="openstack/cinder-api-0" Nov 22 07:48:15 crc kubenswrapper[4853]: I1122 07:48:15.242915 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2r29\" (UniqueName: \"kubernetes.io/projected/efb7e269-fe2e-45b4-949e-8f862ef94e3c-kube-api-access-d2r29\") pod \"cinder-api-0\" (UID: \"efb7e269-fe2e-45b4-949e-8f862ef94e3c\") " pod="openstack/cinder-api-0" Nov 22 07:48:15 crc kubenswrapper[4853]: I1122 07:48:15.243286 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/efb7e269-fe2e-45b4-949e-8f862ef94e3c-scripts\") pod \"cinder-api-0\" (UID: \"efb7e269-fe2e-45b4-949e-8f862ef94e3c\") " pod="openstack/cinder-api-0" Nov 22 07:48:15 crc kubenswrapper[4853]: I1122 07:48:15.243400 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/efb7e269-fe2e-45b4-949e-8f862ef94e3c-logs\") pod \"cinder-api-0\" (UID: \"efb7e269-fe2e-45b4-949e-8f862ef94e3c\") " pod="openstack/cinder-api-0" Nov 22 07:48:15 crc kubenswrapper[4853]: I1122 07:48:15.243720 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/efb7e269-fe2e-45b4-949e-8f862ef94e3c-etc-machine-id\") pod \"cinder-api-0\" (UID: \"efb7e269-fe2e-45b4-949e-8f862ef94e3c\") " pod="openstack/cinder-api-0" Nov 22 07:48:15 crc kubenswrapper[4853]: I1122 07:48:15.243783 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/efb7e269-fe2e-45b4-949e-8f862ef94e3c-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"efb7e269-fe2e-45b4-949e-8f862ef94e3c\") " pod="openstack/cinder-api-0" Nov 22 07:48:15 crc kubenswrapper[4853]: I1122 07:48:15.244166 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/efb7e269-fe2e-45b4-949e-8f862ef94e3c-config-data-custom\") pod \"cinder-api-0\" (UID: \"efb7e269-fe2e-45b4-949e-8f862ef94e3c\") " pod="openstack/cinder-api-0" Nov 22 07:48:15 crc kubenswrapper[4853]: I1122 07:48:15.244215 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/efb7e269-fe2e-45b4-949e-8f862ef94e3c-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"efb7e269-fe2e-45b4-949e-8f862ef94e3c\") " pod="openstack/cinder-api-0" Nov 22 07:48:15 crc kubenswrapper[4853]: I1122 07:48:15.346765 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/efb7e269-fe2e-45b4-949e-8f862ef94e3c-logs\") pod \"cinder-api-0\" (UID: \"efb7e269-fe2e-45b4-949e-8f862ef94e3c\") " pod="openstack/cinder-api-0" Nov 22 07:48:15 crc kubenswrapper[4853]: I1122 07:48:15.346945 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/efb7e269-fe2e-45b4-949e-8f862ef94e3c-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"efb7e269-fe2e-45b4-949e-8f862ef94e3c\") " pod="openstack/cinder-api-0" Nov 22 07:48:15 crc kubenswrapper[4853]: I1122 07:48:15.346981 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/efb7e269-fe2e-45b4-949e-8f862ef94e3c-etc-machine-id\") pod \"cinder-api-0\" (UID: \"efb7e269-fe2e-45b4-949e-8f862ef94e3c\") " pod="openstack/cinder-api-0" Nov 22 07:48:15 crc kubenswrapper[4853]: I1122 07:48:15.347140 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/efb7e269-fe2e-45b4-949e-8f862ef94e3c-config-data-custom\") pod \"cinder-api-0\" (UID: \"efb7e269-fe2e-45b4-949e-8f862ef94e3c\") " pod="openstack/cinder-api-0" Nov 22 07:48:15 crc kubenswrapper[4853]: I1122 07:48:15.347182 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/efb7e269-fe2e-45b4-949e-8f862ef94e3c-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"efb7e269-fe2e-45b4-949e-8f862ef94e3c\") " pod="openstack/cinder-api-0" Nov 22 07:48:15 crc kubenswrapper[4853]: I1122 07:48:15.347280 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/efb7e269-fe2e-45b4-949e-8f862ef94e3c-etc-machine-id\") pod \"cinder-api-0\" (UID: \"efb7e269-fe2e-45b4-949e-8f862ef94e3c\") " pod="openstack/cinder-api-0" Nov 22 07:48:15 crc kubenswrapper[4853]: I1122 07:48:15.347330 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/efb7e269-fe2e-45b4-949e-8f862ef94e3c-public-tls-certs\") pod \"cinder-api-0\" (UID: \"efb7e269-fe2e-45b4-949e-8f862ef94e3c\") " pod="openstack/cinder-api-0" Nov 22 07:48:15 crc kubenswrapper[4853]: I1122 07:48:15.347405 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/efb7e269-fe2e-45b4-949e-8f862ef94e3c-config-data\") pod \"cinder-api-0\" (UID: \"efb7e269-fe2e-45b4-949e-8f862ef94e3c\") " pod="openstack/cinder-api-0" Nov 22 07:48:15 crc kubenswrapper[4853]: I1122 07:48:15.347531 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-d2r29\" (UniqueName: \"kubernetes.io/projected/efb7e269-fe2e-45b4-949e-8f862ef94e3c-kube-api-access-d2r29\") pod \"cinder-api-0\" (UID: \"efb7e269-fe2e-45b4-949e-8f862ef94e3c\") " pod="openstack/cinder-api-0" Nov 22 07:48:15 crc kubenswrapper[4853]: I1122 07:48:15.347742 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/efb7e269-fe2e-45b4-949e-8f862ef94e3c-scripts\") pod \"cinder-api-0\" (UID: \"efb7e269-fe2e-45b4-949e-8f862ef94e3c\") " pod="openstack/cinder-api-0" Nov 22 07:48:15 crc kubenswrapper[4853]: I1122 07:48:15.347338 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/efb7e269-fe2e-45b4-949e-8f862ef94e3c-logs\") pod \"cinder-api-0\" (UID: \"efb7e269-fe2e-45b4-949e-8f862ef94e3c\") " pod="openstack/cinder-api-0" Nov 22 07:48:15 crc kubenswrapper[4853]: I1122 07:48:15.358744 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/efb7e269-fe2e-45b4-949e-8f862ef94e3c-scripts\") pod \"cinder-api-0\" (UID: \"efb7e269-fe2e-45b4-949e-8f862ef94e3c\") " pod="openstack/cinder-api-0" Nov 22 07:48:15 crc kubenswrapper[4853]: I1122 07:48:15.360227 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/efb7e269-fe2e-45b4-949e-8f862ef94e3c-public-tls-certs\") pod \"cinder-api-0\" (UID: \"efb7e269-fe2e-45b4-949e-8f862ef94e3c\") " pod="openstack/cinder-api-0" Nov 22 07:48:15 crc kubenswrapper[4853]: I1122 07:48:15.362834 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/efb7e269-fe2e-45b4-949e-8f862ef94e3c-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"efb7e269-fe2e-45b4-949e-8f862ef94e3c\") " pod="openstack/cinder-api-0" Nov 22 07:48:15 crc kubenswrapper[4853]: I1122 07:48:15.398476 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/efb7e269-fe2e-45b4-949e-8f862ef94e3c-config-data\") pod \"cinder-api-0\" (UID: \"efb7e269-fe2e-45b4-949e-8f862ef94e3c\") " pod="openstack/cinder-api-0" Nov 22 07:48:15 crc kubenswrapper[4853]: I1122 07:48:15.399061 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/efb7e269-fe2e-45b4-949e-8f862ef94e3c-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"efb7e269-fe2e-45b4-949e-8f862ef94e3c\") " pod="openstack/cinder-api-0" Nov 22 07:48:15 crc kubenswrapper[4853]: I1122 07:48:15.411190 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/efb7e269-fe2e-45b4-949e-8f862ef94e3c-config-data-custom\") pod \"cinder-api-0\" (UID: \"efb7e269-fe2e-45b4-949e-8f862ef94e3c\") " pod="openstack/cinder-api-0" Nov 22 07:48:15 crc kubenswrapper[4853]: I1122 07:48:15.424866 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d2r29\" (UniqueName: \"kubernetes.io/projected/efb7e269-fe2e-45b4-949e-8f862ef94e3c-kube-api-access-d2r29\") pod \"cinder-api-0\" (UID: \"efb7e269-fe2e-45b4-949e-8f862ef94e3c\") " pod="openstack/cinder-api-0" Nov 22 07:48:15 crc kubenswrapper[4853]: I1122 07:48:15.495127 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Nov 22 07:48:15 crc kubenswrapper[4853]: I1122 07:48:15.847308 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb91c81d-f604-490f-8397-3f7e5b24236f" path="/var/lib/kubelet/pods/eb91c81d-f604-490f-8397-3f7e5b24236f/volumes" Nov 22 07:48:15 crc kubenswrapper[4853]: I1122 07:48:15.953528 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-7xksh" Nov 22 07:48:16 crc kubenswrapper[4853]: I1122 07:48:16.092853 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bzw9z\" (UniqueName: \"kubernetes.io/projected/5a08a523-61a0-4155-b389-0491bcd97e84-kube-api-access-bzw9z\") pod \"5a08a523-61a0-4155-b389-0491bcd97e84\" (UID: \"5a08a523-61a0-4155-b389-0491bcd97e84\") " Nov 22 07:48:16 crc kubenswrapper[4853]: I1122 07:48:16.093352 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a08a523-61a0-4155-b389-0491bcd97e84-combined-ca-bundle\") pod \"5a08a523-61a0-4155-b389-0491bcd97e84\" (UID: \"5a08a523-61a0-4155-b389-0491bcd97e84\") " Nov 22 07:48:16 crc kubenswrapper[4853]: I1122 07:48:16.093567 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a08a523-61a0-4155-b389-0491bcd97e84-config-data\") pod \"5a08a523-61a0-4155-b389-0491bcd97e84\" (UID: \"5a08a523-61a0-4155-b389-0491bcd97e84\") " Nov 22 07:48:27 crc kubenswrapper[4853]: I1122 07:48:16.106200 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"5554d3b5-8219-4dc0-9f3e-cb1ee319ef72","Type":"ContainerStarted","Data":"48bd614c635de1f59e197d3844ced4b6870f1449247e4c03148de7de929a6e2e"} Nov 22 07:48:27 crc kubenswrapper[4853]: I1122 07:48:16.122078 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a08a523-61a0-4155-b389-0491bcd97e84-kube-api-access-bzw9z" (OuterVolumeSpecName: "kube-api-access-bzw9z") pod "5a08a523-61a0-4155-b389-0491bcd97e84" (UID: "5a08a523-61a0-4155-b389-0491bcd97e84"). InnerVolumeSpecName "kube-api-access-bzw9z". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:48:27 crc kubenswrapper[4853]: I1122 07:48:16.128605 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"fe59dfbf-2b13-4067-9d40-3d0d372f0f77","Type":"ContainerStarted","Data":"af91ce2533940e8384953db542379aa666858d68f2cf600fcd358bf7ce7b0b8a"} Nov 22 07:48:27 crc kubenswrapper[4853]: I1122 07:48:16.161680 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-7xksh" event={"ID":"5a08a523-61a0-4155-b389-0491bcd97e84","Type":"ContainerDied","Data":"067617a9b0bee0fa201dda123132256bd1cf576df75f879c9d0a0ec2ea823094"} Nov 22 07:48:27 crc kubenswrapper[4853]: I1122 07:48:16.161721 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="067617a9b0bee0fa201dda123132256bd1cf576df75f879c9d0a0ec2ea823094" Nov 22 07:48:27 crc kubenswrapper[4853]: I1122 07:48:16.162554 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-7xksh" Nov 22 07:48:27 crc kubenswrapper[4853]: I1122 07:48:16.168980 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a08a523-61a0-4155-b389-0491bcd97e84-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5a08a523-61a0-4155-b389-0491bcd97e84" (UID: "5a08a523-61a0-4155-b389-0491bcd97e84"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:48:27 crc kubenswrapper[4853]: I1122 07:48:16.177592 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=6.220450061 podStartE2EDuration="12.177553681s" podCreationTimestamp="2025-11-22 07:48:04 +0000 UTC" firstStartedPulling="2025-11-22 07:48:05.438878916 +0000 UTC m=+2284.279501542" lastFinishedPulling="2025-11-22 07:48:11.395982536 +0000 UTC m=+2290.236605162" observedRunningTime="2025-11-22 07:48:16.158880498 +0000 UTC m=+2294.999503144" watchObservedRunningTime="2025-11-22 07:48:16.177553681 +0000 UTC m=+2295.018176307" Nov 22 07:48:27 crc kubenswrapper[4853]: I1122 07:48:16.217324 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bzw9z\" (UniqueName: \"kubernetes.io/projected/5a08a523-61a0-4155-b389-0491bcd97e84-kube-api-access-bzw9z\") on node \"crc\" DevicePath \"\"" Nov 22 07:48:27 crc kubenswrapper[4853]: I1122 07:48:16.217359 4853 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a08a523-61a0-4155-b389-0491bcd97e84-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:48:27 crc kubenswrapper[4853]: I1122 07:48:16.326288 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-d58855874-6hg9r" Nov 22 07:48:27 crc kubenswrapper[4853]: I1122 07:48:16.380871 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a08a523-61a0-4155-b389-0491bcd97e84-config-data" (OuterVolumeSpecName: "config-data") pod "5a08a523-61a0-4155-b389-0491bcd97e84" (UID: "5a08a523-61a0-4155-b389-0491bcd97e84"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:48:27 crc kubenswrapper[4853]: I1122 07:48:16.416055 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 22 07:48:27 crc kubenswrapper[4853]: I1122 07:48:16.426472 4853 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a08a523-61a0-4155-b389-0491bcd97e84-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 07:48:27 crc kubenswrapper[4853]: I1122 07:48:17.196603 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"efb7e269-fe2e-45b4-949e-8f862ef94e3c","Type":"ContainerStarted","Data":"4593f9045ed9e130dba2f0999d8afe8036b87561fa32ee4dc1af0289bdbcf3a4"} Nov 22 07:48:27 crc kubenswrapper[4853]: I1122 07:48:18.210775 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"5554d3b5-8219-4dc0-9f3e-cb1ee319ef72","Type":"ContainerStarted","Data":"69da522852032533fea2d35556000de4489b62164648fdfa174fa8a5edc1838e"} Nov 22 07:48:27 crc kubenswrapper[4853]: I1122 07:48:18.215122 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"efb7e269-fe2e-45b4-949e-8f862ef94e3c","Type":"ContainerStarted","Data":"f1b235dfb69255bed6a50414ca35bafc1393f6ec5255516ce9b9df3948d0fa97"} Nov 22 07:48:27 crc kubenswrapper[4853]: I1122 07:48:18.218122 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fd18e0e6-0aa8-480b-b279-ac3fa277c8b9","Type":"ContainerStarted","Data":"33ab3761b9706827c4863af820d72c70c7768f42a172c34a7156c4fea4337fc1"} Nov 22 07:48:27 crc kubenswrapper[4853]: I1122 07:48:18.532220 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-d58855874-6hg9r" Nov 22 07:48:27 crc kubenswrapper[4853]: I1122 07:48:18.619769 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-78dfc746b4-t4frx"] Nov 22 07:48:27 crc kubenswrapper[4853]: I1122 07:48:18.620110 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-78dfc746b4-t4frx" podUID="f8928af6-f48e-4697-a1d4-44880b78c43c" containerName="barbican-api-log" containerID="cri-o://5154f2248032d3b204066e6d3d4b29c26f050e50ddfa355693e793433a3a28e6" gracePeriod=30 Nov 22 07:48:27 crc kubenswrapper[4853]: I1122 07:48:18.620303 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-78dfc746b4-t4frx" podUID="f8928af6-f48e-4697-a1d4-44880b78c43c" containerName="barbican-api" containerID="cri-o://ace2b700e3d3a46d1d7ea675ab99d8a413032344f6da2811f7ecc40159e7e333" gracePeriod=30 Nov 22 07:48:27 crc kubenswrapper[4853]: I1122 07:48:19.286918 4853 generic.go:334] "Generic (PLEG): container finished" podID="f8928af6-f48e-4697-a1d4-44880b78c43c" containerID="5154f2248032d3b204066e6d3d4b29c26f050e50ddfa355693e793433a3a28e6" exitCode=143 Nov 22 07:48:27 crc kubenswrapper[4853]: I1122 07:48:19.288742 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-78dfc746b4-t4frx" event={"ID":"f8928af6-f48e-4697-a1d4-44880b78c43c","Type":"ContainerDied","Data":"5154f2248032d3b204066e6d3d4b29c26f050e50ddfa355693e793433a3a28e6"} Nov 22 07:48:27 crc kubenswrapper[4853]: I1122 07:48:19.381343 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=8.38131061 
podStartE2EDuration="8.38131061s" podCreationTimestamp="2025-11-22 07:48:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:48:19.356853811 +0000 UTC m=+2298.197476457" watchObservedRunningTime="2025-11-22 07:48:19.38131061 +0000 UTC m=+2298.221933236" Nov 22 07:48:27 crc kubenswrapper[4853]: I1122 07:48:19.618250 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Nov 22 07:48:27 crc kubenswrapper[4853]: I1122 07:48:19.623022 4853 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="fe59dfbf-2b13-4067-9d40-3d0d372f0f77" containerName="cinder-scheduler" probeResult="failure" output="Get \"http://10.217.0.216:8080/\": dial tcp 10.217.0.216:8080: connect: connection refused" Nov 22 07:48:27 crc kubenswrapper[4853]: I1122 07:48:19.912967 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-674b76c99f-c5t2f" Nov 22 07:48:27 crc kubenswrapper[4853]: I1122 07:48:20.001394 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57c957c4ff-ndlll"] Nov 22 07:48:27 crc kubenswrapper[4853]: I1122 07:48:20.001696 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-57c957c4ff-ndlll" podUID="8df9438a-359b-4162-aa8d-24288f14a1fe" containerName="dnsmasq-dns" containerID="cri-o://00108b64f47311e178ddb6e7e375433d7cb453d064d5ecec370d5d6b297a11cb" gracePeriod=10 Nov 22 07:48:27 crc kubenswrapper[4853]: E1122 07:48:21.180126 4853 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8df9438a_359b_4162_aa8d_24288f14a1fe.slice/crio-conmon-00108b64f47311e178ddb6e7e375433d7cb453d064d5ecec370d5d6b297a11cb.scope\": RecentStats: unable to find data in memory cache]" Nov 22 07:48:27 crc kubenswrapper[4853]: I1122 07:48:21.314290 4853 generic.go:334] "Generic (PLEG): container finished" podID="8df9438a-359b-4162-aa8d-24288f14a1fe" containerID="00108b64f47311e178ddb6e7e375433d7cb453d064d5ecec370d5d6b297a11cb" exitCode=0 Nov 22 07:48:27 crc kubenswrapper[4853]: I1122 07:48:21.314484 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57c957c4ff-ndlll" event={"ID":"8df9438a-359b-4162-aa8d-24288f14a1fe","Type":"ContainerDied","Data":"00108b64f47311e178ddb6e7e375433d7cb453d064d5ecec370d5d6b297a11cb"} Nov 22 07:48:27 crc kubenswrapper[4853]: I1122 07:48:21.317848 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"efb7e269-fe2e-45b4-949e-8f862ef94e3c","Type":"ContainerStarted","Data":"f384146b4e864d288412900ef1031525bbf8b932577ba25443c091d82f70a835"} Nov 22 07:48:27 crc kubenswrapper[4853]: I1122 07:48:21.319884 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Nov 22 07:48:27 crc kubenswrapper[4853]: I1122 07:48:21.351935 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=6.351903608 podStartE2EDuration="6.351903608s" podCreationTimestamp="2025-11-22 07:48:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:48:21.342849745 +0000 UTC m=+2300.183472391" watchObservedRunningTime="2025-11-22 
07:48:21.351903608 +0000 UTC m=+2300.192526234" Nov 22 07:48:27 crc kubenswrapper[4853]: I1122 07:48:22.814018 4853 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-78dfc746b4-t4frx" podUID="f8928af6-f48e-4697-a1d4-44880b78c43c" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.211:9311/healthcheck\": dial tcp 10.217.0.211:9311: connect: connection refused" Nov 22 07:48:27 crc kubenswrapper[4853]: I1122 07:48:22.814183 4853 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-78dfc746b4-t4frx" podUID="f8928af6-f48e-4697-a1d4-44880b78c43c" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.211:9311/healthcheck\": dial tcp 10.217.0.211:9311: connect: connection refused" Nov 22 07:48:27 crc kubenswrapper[4853]: I1122 07:48:22.967047 4853 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-d58855874-6hg9r" podUID="1ea82711-6541-4717-8711-16a13f6ce28c" containerName="barbican-api-log" probeResult="failure" output="Get \"https://10.217.0.215:9311/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 22 07:48:27 crc kubenswrapper[4853]: I1122 07:48:23.122219 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Nov 22 07:48:27 crc kubenswrapper[4853]: I1122 07:48:23.122322 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Nov 22 07:48:27 crc kubenswrapper[4853]: I1122 07:48:23.346119 4853 generic.go:334] "Generic (PLEG): container finished" podID="f8928af6-f48e-4697-a1d4-44880b78c43c" containerID="ace2b700e3d3a46d1d7ea675ab99d8a413032344f6da2811f7ecc40159e7e333" exitCode=0 Nov 22 07:48:27 crc kubenswrapper[4853]: I1122 07:48:23.346194 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-78dfc746b4-t4frx" event={"ID":"f8928af6-f48e-4697-a1d4-44880b78c43c","Type":"ContainerDied","Data":"ace2b700e3d3a46d1d7ea675ab99d8a413032344f6da2811f7ecc40159e7e333"} Nov 22 07:48:27 crc kubenswrapper[4853]: I1122 07:48:23.350141 4853 generic.go:334] "Generic (PLEG): container finished" podID="863918c2-c760-4c96-888f-a778bcbb018b" containerID="e4b929b6b2c6ec6f2cec73d9b84f4a13113d088987c887b0762e34106a97eb58" exitCode=0 Nov 22 07:48:27 crc kubenswrapper[4853]: I1122 07:48:23.350248 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b4ffw" event={"ID":"863918c2-c760-4c96-888f-a778bcbb018b","Type":"ContainerDied","Data":"e4b929b6b2c6ec6f2cec73d9b84f4a13113d088987c887b0762e34106a97eb58"} Nov 22 07:48:27 crc kubenswrapper[4853]: I1122 07:48:23.406864 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Nov 22 07:48:27 crc kubenswrapper[4853]: I1122 07:48:23.407497 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Nov 22 07:48:27 crc kubenswrapper[4853]: I1122 07:48:23.409037 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Nov 22 07:48:27 crc kubenswrapper[4853]: I1122 07:48:23.530945 4853 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-d58855874-6hg9r" podUID="1ea82711-6541-4717-8711-16a13f6ce28c" containerName="barbican-api-log" probeResult="failure" output="Get 
\"https://10.217.0.215:9311/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 22 07:48:27 crc kubenswrapper[4853]: I1122 07:48:23.535889 4853 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-d58855874-6hg9r" podUID="1ea82711-6541-4717-8711-16a13f6ce28c" containerName="barbican-api" probeResult="failure" output="Get \"https://10.217.0.215:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 22 07:48:27 crc kubenswrapper[4853]: I1122 07:48:24.364632 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Nov 22 07:48:27 crc kubenswrapper[4853]: I1122 07:48:24.617616 4853 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="fe59dfbf-2b13-4067-9d40-3d0d372f0f77" containerName="cinder-scheduler" probeResult="failure" output="Get \"http://10.217.0.216:8080/\": dial tcp 10.217.0.216:8080: connect: connection refused" Nov 22 07:48:27 crc kubenswrapper[4853]: I1122 07:48:25.385918 4853 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 22 07:48:27 crc kubenswrapper[4853]: I1122 07:48:27.814392 4853 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-78dfc746b4-t4frx" podUID="f8928af6-f48e-4697-a1d4-44880b78c43c" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.211:9311/healthcheck\": dial tcp 10.217.0.211:9311: connect: connection refused" Nov 22 07:48:27 crc kubenswrapper[4853]: I1122 07:48:27.814393 4853 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-78dfc746b4-t4frx" podUID="f8928af6-f48e-4697-a1d4-44880b78c43c" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.211:9311/healthcheck\": dial tcp 10.217.0.211:9311: connect: connection refused" Nov 22 07:48:28 crc kubenswrapper[4853]: I1122 07:48:28.335685 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57c957c4ff-ndlll" Nov 22 07:48:28 crc kubenswrapper[4853]: I1122 07:48:28.453163 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8df9438a-359b-4162-aa8d-24288f14a1fe-config\") pod \"8df9438a-359b-4162-aa8d-24288f14a1fe\" (UID: \"8df9438a-359b-4162-aa8d-24288f14a1fe\") " Nov 22 07:48:28 crc kubenswrapper[4853]: I1122 07:48:28.453677 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8df9438a-359b-4162-aa8d-24288f14a1fe-ovsdbserver-sb\") pod \"8df9438a-359b-4162-aa8d-24288f14a1fe\" (UID: \"8df9438a-359b-4162-aa8d-24288f14a1fe\") " Nov 22 07:48:28 crc kubenswrapper[4853]: I1122 07:48:28.453714 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8df9438a-359b-4162-aa8d-24288f14a1fe-dns-swift-storage-0\") pod \"8df9438a-359b-4162-aa8d-24288f14a1fe\" (UID: \"8df9438a-359b-4162-aa8d-24288f14a1fe\") " Nov 22 07:48:28 crc kubenswrapper[4853]: I1122 07:48:28.453811 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b5cd9\" (UniqueName: \"kubernetes.io/projected/8df9438a-359b-4162-aa8d-24288f14a1fe-kube-api-access-b5cd9\") pod \"8df9438a-359b-4162-aa8d-24288f14a1fe\" (UID: \"8df9438a-359b-4162-aa8d-24288f14a1fe\") " Nov 22 07:48:28 crc kubenswrapper[4853]: I1122 07:48:28.454017 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8df9438a-359b-4162-aa8d-24288f14a1fe-dns-svc\") pod \"8df9438a-359b-4162-aa8d-24288f14a1fe\" (UID: \"8df9438a-359b-4162-aa8d-24288f14a1fe\") " Nov 22 07:48:28 crc kubenswrapper[4853]: I1122 07:48:28.454088 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8df9438a-359b-4162-aa8d-24288f14a1fe-ovsdbserver-nb\") pod \"8df9438a-359b-4162-aa8d-24288f14a1fe\" (UID: \"8df9438a-359b-4162-aa8d-24288f14a1fe\") " Nov 22 07:48:28 crc kubenswrapper[4853]: I1122 07:48:28.474230 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-78dfc746b4-t4frx" event={"ID":"f8928af6-f48e-4697-a1d4-44880b78c43c","Type":"ContainerDied","Data":"7f414b005f1d4798cddbc1d19673ce8a10d6aca33db0c9bf18db6fb7b588bc75"} Nov 22 07:48:28 crc kubenswrapper[4853]: I1122 07:48:28.474298 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7f414b005f1d4798cddbc1d19673ce8a10d6aca33db0c9bf18db6fb7b588bc75" Nov 22 07:48:28 crc kubenswrapper[4853]: I1122 07:48:28.478301 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-78dfc746b4-t4frx" Nov 22 07:48:28 crc kubenswrapper[4853]: I1122 07:48:28.478666 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fd18e0e6-0aa8-480b-b279-ac3fa277c8b9","Type":"ContainerStarted","Data":"f476cc203d1cf4a00b1c84c59c310f83c401df5fbf557891736427867e5dfe98"} Nov 22 07:48:28 crc kubenswrapper[4853]: I1122 07:48:28.478959 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="fd18e0e6-0aa8-480b-b279-ac3fa277c8b9" containerName="ceilometer-central-agent" containerID="cri-o://c3bec8b912b07b45c0217eff4ae1d88a1901442f6b68c6ad4e6cdf0a17a44feb" gracePeriod=30 Nov 22 07:48:28 crc kubenswrapper[4853]: I1122 07:48:28.479030 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 22 07:48:28 crc kubenswrapper[4853]: I1122 07:48:28.479061 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="fd18e0e6-0aa8-480b-b279-ac3fa277c8b9" containerName="proxy-httpd" containerID="cri-o://f476cc203d1cf4a00b1c84c59c310f83c401df5fbf557891736427867e5dfe98" gracePeriod=30 Nov 22 07:48:28 crc kubenswrapper[4853]: I1122 07:48:28.479113 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="fd18e0e6-0aa8-480b-b279-ac3fa277c8b9" containerName="ceilometer-notification-agent" containerID="cri-o://5cb43eed56a3ee85131d128b4f0c23114281f10b0f2c794124f82c80367a9968" gracePeriod=30 Nov 22 07:48:28 crc kubenswrapper[4853]: I1122 07:48:28.479201 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="fd18e0e6-0aa8-480b-b279-ac3fa277c8b9" containerName="sg-core" containerID="cri-o://33ab3761b9706827c4863af820d72c70c7768f42a172c34a7156c4fea4337fc1" gracePeriod=30 Nov 22 07:48:28 crc kubenswrapper[4853]: I1122 07:48:28.479986 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8df9438a-359b-4162-aa8d-24288f14a1fe-kube-api-access-b5cd9" (OuterVolumeSpecName: "kube-api-access-b5cd9") pod "8df9438a-359b-4162-aa8d-24288f14a1fe" (UID: "8df9438a-359b-4162-aa8d-24288f14a1fe"). InnerVolumeSpecName "kube-api-access-b5cd9". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:48:28 crc kubenswrapper[4853]: I1122 07:48:28.530418 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b4ffw" event={"ID":"863918c2-c760-4c96-888f-a778bcbb018b","Type":"ContainerStarted","Data":"a9a9263672f9a28bf870f11897f00cd833a8712fc79992462d3aad95065a783d"} Nov 22 07:48:28 crc kubenswrapper[4853]: I1122 07:48:28.552479 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8df9438a-359b-4162-aa8d-24288f14a1fe-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "8df9438a-359b-4162-aa8d-24288f14a1fe" (UID: "8df9438a-359b-4162-aa8d-24288f14a1fe"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:48:28 crc kubenswrapper[4853]: I1122 07:48:28.553521 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57c957c4ff-ndlll" event={"ID":"8df9438a-359b-4162-aa8d-24288f14a1fe","Type":"ContainerDied","Data":"3dfc382077f0b113c1004549e5cacf93a334aec9b4c6aef7c3cbea3441783711"} Nov 22 07:48:28 crc kubenswrapper[4853]: I1122 07:48:28.553679 4853 scope.go:117] "RemoveContainer" containerID="00108b64f47311e178ddb6e7e375433d7cb453d064d5ecec370d5d6b297a11cb" Nov 22 07:48:28 crc kubenswrapper[4853]: I1122 07:48:28.553611 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57c957c4ff-ndlll" Nov 22 07:48:28 crc kubenswrapper[4853]: I1122 07:48:28.559293 4853 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8df9438a-359b-4162-aa8d-24288f14a1fe-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 22 07:48:28 crc kubenswrapper[4853]: I1122 07:48:28.559330 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b5cd9\" (UniqueName: \"kubernetes.io/projected/8df9438a-359b-4162-aa8d-24288f14a1fe-kube-api-access-b5cd9\") on node \"crc\" DevicePath \"\"" Nov 22 07:48:28 crc kubenswrapper[4853]: I1122 07:48:28.576448 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8df9438a-359b-4162-aa8d-24288f14a1fe-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "8df9438a-359b-4162-aa8d-24288f14a1fe" (UID: "8df9438a-359b-4162-aa8d-24288f14a1fe"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:48:28 crc kubenswrapper[4853]: I1122 07:48:28.578739 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=4.194475963 podStartE2EDuration="19.578708944s" podCreationTimestamp="2025-11-22 07:48:09 +0000 UTC" firstStartedPulling="2025-11-22 07:48:11.942904686 +0000 UTC m=+2290.783527312" lastFinishedPulling="2025-11-22 07:48:27.327137667 +0000 UTC m=+2306.167760293" observedRunningTime="2025-11-22 07:48:28.539396823 +0000 UTC m=+2307.380019459" watchObservedRunningTime="2025-11-22 07:48:28.578708944 +0000 UTC m=+2307.419331570" Nov 22 07:48:28 crc kubenswrapper[4853]: I1122 07:48:28.584957 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-b4ffw" podStartSLOduration=10.027189028 podStartE2EDuration="28.584929861s" podCreationTimestamp="2025-11-22 07:48:00 +0000 UTC" firstStartedPulling="2025-11-22 07:48:08.816650839 +0000 UTC m=+2287.657273465" lastFinishedPulling="2025-11-22 07:48:27.374391662 +0000 UTC m=+2306.215014298" observedRunningTime="2025-11-22 07:48:28.563039291 +0000 UTC m=+2307.403661917" watchObservedRunningTime="2025-11-22 07:48:28.584929861 +0000 UTC m=+2307.425552487" Nov 22 07:48:28 crc kubenswrapper[4853]: I1122 07:48:28.612179 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8df9438a-359b-4162-aa8d-24288f14a1fe-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "8df9438a-359b-4162-aa8d-24288f14a1fe" (UID: "8df9438a-359b-4162-aa8d-24288f14a1fe"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:48:28 crc kubenswrapper[4853]: I1122 07:48:28.613259 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8df9438a-359b-4162-aa8d-24288f14a1fe-config" (OuterVolumeSpecName: "config") pod "8df9438a-359b-4162-aa8d-24288f14a1fe" (UID: "8df9438a-359b-4162-aa8d-24288f14a1fe"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:48:28 crc kubenswrapper[4853]: I1122 07:48:28.634310 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8df9438a-359b-4162-aa8d-24288f14a1fe-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8df9438a-359b-4162-aa8d-24288f14a1fe" (UID: "8df9438a-359b-4162-aa8d-24288f14a1fe"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:48:28 crc kubenswrapper[4853]: I1122 07:48:28.661268 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8928af6-f48e-4697-a1d4-44880b78c43c-combined-ca-bundle\") pod \"f8928af6-f48e-4697-a1d4-44880b78c43c\" (UID: \"f8928af6-f48e-4697-a1d4-44880b78c43c\") " Nov 22 07:48:28 crc kubenswrapper[4853]: I1122 07:48:28.662079 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f8928af6-f48e-4697-a1d4-44880b78c43c-logs\") pod \"f8928af6-f48e-4697-a1d4-44880b78c43c\" (UID: \"f8928af6-f48e-4697-a1d4-44880b78c43c\") " Nov 22 07:48:28 crc kubenswrapper[4853]: I1122 07:48:28.662455 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f8928af6-f48e-4697-a1d4-44880b78c43c-logs" (OuterVolumeSpecName: "logs") pod "f8928af6-f48e-4697-a1d4-44880b78c43c" (UID: "f8928af6-f48e-4697-a1d4-44880b78c43c"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:48:28 crc kubenswrapper[4853]: I1122 07:48:28.662666 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zjt4k\" (UniqueName: \"kubernetes.io/projected/f8928af6-f48e-4697-a1d4-44880b78c43c-kube-api-access-zjt4k\") pod \"f8928af6-f48e-4697-a1d4-44880b78c43c\" (UID: \"f8928af6-f48e-4697-a1d4-44880b78c43c\") " Nov 22 07:48:28 crc kubenswrapper[4853]: I1122 07:48:28.662827 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f8928af6-f48e-4697-a1d4-44880b78c43c-config-data-custom\") pod \"f8928af6-f48e-4697-a1d4-44880b78c43c\" (UID: \"f8928af6-f48e-4697-a1d4-44880b78c43c\") " Nov 22 07:48:28 crc kubenswrapper[4853]: I1122 07:48:28.662872 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8928af6-f48e-4697-a1d4-44880b78c43c-config-data\") pod \"f8928af6-f48e-4697-a1d4-44880b78c43c\" (UID: \"f8928af6-f48e-4697-a1d4-44880b78c43c\") " Nov 22 07:48:28 crc kubenswrapper[4853]: I1122 07:48:28.664268 4853 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8df9438a-359b-4162-aa8d-24288f14a1fe-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 22 07:48:28 crc kubenswrapper[4853]: I1122 07:48:28.664297 4853 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8df9438a-359b-4162-aa8d-24288f14a1fe-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 22 07:48:28 crc kubenswrapper[4853]: I1122 07:48:28.664340 4853 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8df9438a-359b-4162-aa8d-24288f14a1fe-config\") on node \"crc\" DevicePath \"\"" Nov 22 07:48:28 crc kubenswrapper[4853]: I1122 07:48:28.664352 4853 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8df9438a-359b-4162-aa8d-24288f14a1fe-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 22 07:48:28 crc kubenswrapper[4853]: I1122 07:48:28.664362 4853 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f8928af6-f48e-4697-a1d4-44880b78c43c-logs\") on node \"crc\" DevicePath \"\"" Nov 22 07:48:28 crc kubenswrapper[4853]: I1122 07:48:28.680654 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f8928af6-f48e-4697-a1d4-44880b78c43c-kube-api-access-zjt4k" (OuterVolumeSpecName: "kube-api-access-zjt4k") pod "f8928af6-f48e-4697-a1d4-44880b78c43c" (UID: "f8928af6-f48e-4697-a1d4-44880b78c43c"). InnerVolumeSpecName "kube-api-access-zjt4k". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:48:28 crc kubenswrapper[4853]: I1122 07:48:28.700086 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8928af6-f48e-4697-a1d4-44880b78c43c-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "f8928af6-f48e-4697-a1d4-44880b78c43c" (UID: "f8928af6-f48e-4697-a1d4-44880b78c43c"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:48:28 crc kubenswrapper[4853]: I1122 07:48:28.724651 4853 scope.go:117] "RemoveContainer" containerID="cb20dbbac851bb035c0d8d2e04e0757bd7fee553b0a8e2601d7a36aeb4735119" Nov 22 07:48:28 crc kubenswrapper[4853]: I1122 07:48:28.725309 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8928af6-f48e-4697-a1d4-44880b78c43c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f8928af6-f48e-4697-a1d4-44880b78c43c" (UID: "f8928af6-f48e-4697-a1d4-44880b78c43c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:48:28 crc kubenswrapper[4853]: I1122 07:48:28.767319 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zjt4k\" (UniqueName: \"kubernetes.io/projected/f8928af6-f48e-4697-a1d4-44880b78c43c-kube-api-access-zjt4k\") on node \"crc\" DevicePath \"\"" Nov 22 07:48:28 crc kubenswrapper[4853]: I1122 07:48:28.767471 4853 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f8928af6-f48e-4697-a1d4-44880b78c43c-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 22 07:48:28 crc kubenswrapper[4853]: I1122 07:48:28.767529 4853 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8928af6-f48e-4697-a1d4-44880b78c43c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:48:28 crc kubenswrapper[4853]: I1122 07:48:28.769926 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8928af6-f48e-4697-a1d4-44880b78c43c-config-data" (OuterVolumeSpecName: "config-data") pod "f8928af6-f48e-4697-a1d4-44880b78c43c" (UID: "f8928af6-f48e-4697-a1d4-44880b78c43c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:48:28 crc kubenswrapper[4853]: I1122 07:48:28.870155 4853 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8928af6-f48e-4697-a1d4-44880b78c43c-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 07:48:28 crc kubenswrapper[4853]: I1122 07:48:28.898787 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57c957c4ff-ndlll"] Nov 22 07:48:28 crc kubenswrapper[4853]: I1122 07:48:28.912056 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57c957c4ff-ndlll"] Nov 22 07:48:29 crc kubenswrapper[4853]: I1122 07:48:29.573095 4853 generic.go:334] "Generic (PLEG): container finished" podID="fd18e0e6-0aa8-480b-b279-ac3fa277c8b9" containerID="f476cc203d1cf4a00b1c84c59c310f83c401df5fbf557891736427867e5dfe98" exitCode=0 Nov 22 07:48:29 crc kubenswrapper[4853]: I1122 07:48:29.573449 4853 generic.go:334] "Generic (PLEG): container finished" podID="fd18e0e6-0aa8-480b-b279-ac3fa277c8b9" containerID="33ab3761b9706827c4863af820d72c70c7768f42a172c34a7156c4fea4337fc1" exitCode=2 Nov 22 07:48:29 crc kubenswrapper[4853]: I1122 07:48:29.573631 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-78dfc746b4-t4frx" Nov 22 07:48:29 crc kubenswrapper[4853]: I1122 07:48:29.573175 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fd18e0e6-0aa8-480b-b279-ac3fa277c8b9","Type":"ContainerDied","Data":"f476cc203d1cf4a00b1c84c59c310f83c401df5fbf557891736427867e5dfe98"} Nov 22 07:48:29 crc kubenswrapper[4853]: I1122 07:48:29.575673 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fd18e0e6-0aa8-480b-b279-ac3fa277c8b9","Type":"ContainerDied","Data":"33ab3761b9706827c4863af820d72c70c7768f42a172c34a7156c4fea4337fc1"} Nov 22 07:48:29 crc kubenswrapper[4853]: I1122 07:48:29.632085 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-78dfc746b4-t4frx"] Nov 22 07:48:29 crc kubenswrapper[4853]: I1122 07:48:29.660208 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-78dfc746b4-t4frx"] Nov 22 07:48:29 crc kubenswrapper[4853]: I1122 07:48:29.779157 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8df9438a-359b-4162-aa8d-24288f14a1fe" path="/var/lib/kubelet/pods/8df9438a-359b-4162-aa8d-24288f14a1fe/volumes" Nov 22 07:48:29 crc kubenswrapper[4853]: I1122 07:48:29.913067 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f8928af6-f48e-4697-a1d4-44880b78c43c" path="/var/lib/kubelet/pods/f8928af6-f48e-4697-a1d4-44880b78c43c/volumes" Nov 22 07:48:29 crc kubenswrapper[4853]: I1122 07:48:29.914467 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Nov 22 07:48:29 crc kubenswrapper[4853]: I1122 07:48:29.922862 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Nov 22 07:48:30 crc kubenswrapper[4853]: I1122 07:48:30.010945 4853 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-57c957c4ff-ndlll" podUID="8df9438a-359b-4162-aa8d-24288f14a1fe" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.187:5353: i/o timeout" Nov 22 07:48:30 crc kubenswrapper[4853]: I1122 07:48:30.038190 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 22 07:48:30 crc kubenswrapper[4853]: I1122 07:48:30.586905 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="fe59dfbf-2b13-4067-9d40-3d0d372f0f77" containerName="cinder-scheduler" containerID="cri-o://2bf0cc77fb98b8b15644a8683515773bb6e68032be660da1deb3a644d7d5cf49" gracePeriod=30 Nov 22 07:48:30 crc kubenswrapper[4853]: I1122 07:48:30.586919 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="fe59dfbf-2b13-4067-9d40-3d0d372f0f77" containerName="probe" containerID="cri-o://af91ce2533940e8384953db542379aa666858d68f2cf600fcd358bf7ce7b0b8a" gracePeriod=30 Nov 22 07:48:30 crc kubenswrapper[4853]: I1122 07:48:30.929912 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-b4ffw" Nov 22 07:48:30 crc kubenswrapper[4853]: I1122 07:48:30.931322 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-b4ffw" Nov 22 07:48:31 crc kubenswrapper[4853]: I1122 07:48:31.538915 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" 
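
The records in this stream all share one machine-parseable shape: a journald prefix ("Nov 22 07:48:31 crc kubenswrapper[4853]:"), a klog header (level + MMDD, hh:mm:ss.micros, PID, file:line), and a structured message with key="value" pairs. That makes excerpts like this one easy to summarize mechanically instead of scanning by eye. Below is a minimal sketch of one way to tally the probe failures and container exits seen above; the script name, regexes, and output format are illustrative assumptions for this log shape, not kubelet tooling.

    #!/usr/bin/env python3
    # probe_summary.py -- illustrative sketch, not a kubelet tool.
    # Reads journal lines on stdin and summarizes two record types
    # that appear throughout this excerpt.
    import re
    import sys
    from collections import Counter

    # Matches e.g.:
    #   prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" ...
    PROBE_FAILED = re.compile(
        r'prober\.go:\d+\] "Probe failed" probeType="(?P<type>\w+)" pod="(?P<pod>[^"]+)"'
    )
    # Matches e.g.:
    #   generic.go:334] "Generic (PLEG): container finished" podID="..." containerID="..." exitCode=143
    PLEG_FINISHED = re.compile(
        r'"Generic \(PLEG\): container finished" podID="(?P<pod>[^"]+)" '
        r'containerID="(?P<cid>\w+)" exitCode=(?P<code>-?\d+)'
    )

    failures = Counter()
    exits = []
    for line in sys.stdin:
        m = PROBE_FAILED.search(line)
        if m:
            failures[(m['pod'], m['type'])] += 1
        m = PLEG_FINISHED.search(line)
        if m:
            exits.append((m['pod'], m['cid'][:12], int(m['code'])))

    for (pod, ptype), n in failures.most_common():
        print(f'{n:3d} {ptype:<9} probe failures  {pod}')
    for pod, cid, code in exits:
        print(f'container {cid} in {pod} exited {code}')

Fed this excerpt (for example via journalctl -u kubelet --no-pager | python3 probe_summary.py, assuming the kubenswrapper lines live in the kubelet unit on this CRC node), it would surface the repeated Startup failures for openstack/cinder-scheduler-0, the Readiness failures for the two barbican-api pods, and the exit codes of the containers killed with a grace period earlier in the stream, which is usually a faster read than the raw record-by-record output that continues below.
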
Nov 22 07:48:31 crc kubenswrapper[4853]: I1122 07:48:31.539312 4853 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 22 07:48:31 crc kubenswrapper[4853]: I1122 07:48:31.572937 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Nov 22 07:48:31 crc kubenswrapper[4853]: I1122 07:48:31.605567 4853 generic.go:334] "Generic (PLEG): container finished" podID="fd18e0e6-0aa8-480b-b279-ac3fa277c8b9" containerID="5cb43eed56a3ee85131d128b4f0c23114281f10b0f2c794124f82c80367a9968" exitCode=0 Nov 22 07:48:31 crc kubenswrapper[4853]: I1122 07:48:31.607081 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fd18e0e6-0aa8-480b-b279-ac3fa277c8b9","Type":"ContainerDied","Data":"5cb43eed56a3ee85131d128b4f0c23114281f10b0f2c794124f82c80367a9968"} Nov 22 07:48:32 crc kubenswrapper[4853]: I1122 07:48:32.019856 4853 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-b4ffw" podUID="863918c2-c760-4c96-888f-a778bcbb018b" containerName="registry-server" probeResult="failure" output=< Nov 22 07:48:32 crc kubenswrapper[4853]: timeout: failed to connect service ":50051" within 1s Nov 22 07:48:32 crc kubenswrapper[4853]: > Nov 22 07:48:32 crc kubenswrapper[4853]: I1122 07:48:32.632226 4853 generic.go:334] "Generic (PLEG): container finished" podID="fe59dfbf-2b13-4067-9d40-3d0d372f0f77" containerID="af91ce2533940e8384953db542379aa666858d68f2cf600fcd358bf7ce7b0b8a" exitCode=0 Nov 22 07:48:32 crc kubenswrapper[4853]: I1122 07:48:32.632579 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"fe59dfbf-2b13-4067-9d40-3d0d372f0f77","Type":"ContainerDied","Data":"af91ce2533940e8384953db542379aa666858d68f2cf600fcd358bf7ce7b0b8a"} Nov 22 07:48:33 crc kubenswrapper[4853]: I1122 07:48:33.069447 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-5b96d96555-h7jqp"] Nov 22 07:48:33 crc kubenswrapper[4853]: E1122 07:48:33.070258 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8928af6-f48e-4697-a1d4-44880b78c43c" containerName="barbican-api-log" Nov 22 07:48:33 crc kubenswrapper[4853]: I1122 07:48:33.070277 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8928af6-f48e-4697-a1d4-44880b78c43c" containerName="barbican-api-log" Nov 22 07:48:33 crc kubenswrapper[4853]: E1122 07:48:33.070335 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8df9438a-359b-4162-aa8d-24288f14a1fe" containerName="init" Nov 22 07:48:33 crc kubenswrapper[4853]: I1122 07:48:33.070342 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="8df9438a-359b-4162-aa8d-24288f14a1fe" containerName="init" Nov 22 07:48:33 crc kubenswrapper[4853]: E1122 07:48:33.070363 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a08a523-61a0-4155-b389-0491bcd97e84" containerName="heat-db-sync" Nov 22 07:48:33 crc kubenswrapper[4853]: I1122 07:48:33.070377 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a08a523-61a0-4155-b389-0491bcd97e84" containerName="heat-db-sync" Nov 22 07:48:33 crc kubenswrapper[4853]: E1122 07:48:33.070404 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8928af6-f48e-4697-a1d4-44880b78c43c" containerName="barbican-api" Nov 22 07:48:33 crc kubenswrapper[4853]: I1122 07:48:33.070411 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8928af6-f48e-4697-a1d4-44880b78c43c" 
containerName="barbican-api" Nov 22 07:48:33 crc kubenswrapper[4853]: E1122 07:48:33.070423 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8df9438a-359b-4162-aa8d-24288f14a1fe" containerName="dnsmasq-dns" Nov 22 07:48:33 crc kubenswrapper[4853]: I1122 07:48:33.070429 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="8df9438a-359b-4162-aa8d-24288f14a1fe" containerName="dnsmasq-dns" Nov 22 07:48:33 crc kubenswrapper[4853]: I1122 07:48:33.070778 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a08a523-61a0-4155-b389-0491bcd97e84" containerName="heat-db-sync" Nov 22 07:48:33 crc kubenswrapper[4853]: I1122 07:48:33.070809 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="f8928af6-f48e-4697-a1d4-44880b78c43c" containerName="barbican-api-log" Nov 22 07:48:33 crc kubenswrapper[4853]: I1122 07:48:33.070839 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="8df9438a-359b-4162-aa8d-24288f14a1fe" containerName="dnsmasq-dns" Nov 22 07:48:33 crc kubenswrapper[4853]: I1122 07:48:33.070853 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="f8928af6-f48e-4697-a1d4-44880b78c43c" containerName="barbican-api" Nov 22 07:48:33 crc kubenswrapper[4853]: I1122 07:48:33.071986 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-5b96d96555-h7jqp" Nov 22 07:48:33 crc kubenswrapper[4853]: I1122 07:48:33.076648 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-engine-config-data" Nov 22 07:48:33 crc kubenswrapper[4853]: I1122 07:48:33.077011 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Nov 22 07:48:33 crc kubenswrapper[4853]: I1122 07:48:33.077166 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-htbfq" Nov 22 07:48:33 crc kubenswrapper[4853]: I1122 07:48:33.119586 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-5b96d96555-h7jqp"] Nov 22 07:48:33 crc kubenswrapper[4853]: I1122 07:48:33.201605 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f019708-ddfa-465c-850a-7b13a20a87f2-config-data\") pod \"heat-engine-5b96d96555-h7jqp\" (UID: \"9f019708-ddfa-465c-850a-7b13a20a87f2\") " pod="openstack/heat-engine-5b96d96555-h7jqp" Nov 22 07:48:33 crc kubenswrapper[4853]: I1122 07:48:33.202051 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kfssp\" (UniqueName: \"kubernetes.io/projected/9f019708-ddfa-465c-850a-7b13a20a87f2-kube-api-access-kfssp\") pod \"heat-engine-5b96d96555-h7jqp\" (UID: \"9f019708-ddfa-465c-850a-7b13a20a87f2\") " pod="openstack/heat-engine-5b96d96555-h7jqp" Nov 22 07:48:33 crc kubenswrapper[4853]: I1122 07:48:33.202086 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f019708-ddfa-465c-850a-7b13a20a87f2-combined-ca-bundle\") pod \"heat-engine-5b96d96555-h7jqp\" (UID: \"9f019708-ddfa-465c-850a-7b13a20a87f2\") " pod="openstack/heat-engine-5b96d96555-h7jqp" Nov 22 07:48:33 crc kubenswrapper[4853]: I1122 07:48:33.202126 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/9f019708-ddfa-465c-850a-7b13a20a87f2-config-data-custom\") pod \"heat-engine-5b96d96555-h7jqp\" (UID: \"9f019708-ddfa-465c-850a-7b13a20a87f2\") " pod="openstack/heat-engine-5b96d96555-h7jqp" Nov 22 07:48:33 crc kubenswrapper[4853]: I1122 07:48:33.248844 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5b6484d7cc-qgkhv"] Nov 22 07:48:33 crc kubenswrapper[4853]: I1122 07:48:33.251943 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b6484d7cc-qgkhv" Nov 22 07:48:33 crc kubenswrapper[4853]: I1122 07:48:33.302852 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b6484d7cc-qgkhv"] Nov 22 07:48:33 crc kubenswrapper[4853]: I1122 07:48:33.305583 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kfssp\" (UniqueName: \"kubernetes.io/projected/9f019708-ddfa-465c-850a-7b13a20a87f2-kube-api-access-kfssp\") pod \"heat-engine-5b96d96555-h7jqp\" (UID: \"9f019708-ddfa-465c-850a-7b13a20a87f2\") " pod="openstack/heat-engine-5b96d96555-h7jqp" Nov 22 07:48:33 crc kubenswrapper[4853]: I1122 07:48:33.305655 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f019708-ddfa-465c-850a-7b13a20a87f2-combined-ca-bundle\") pod \"heat-engine-5b96d96555-h7jqp\" (UID: \"9f019708-ddfa-465c-850a-7b13a20a87f2\") " pod="openstack/heat-engine-5b96d96555-h7jqp" Nov 22 07:48:33 crc kubenswrapper[4853]: I1122 07:48:33.305702 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9f019708-ddfa-465c-850a-7b13a20a87f2-config-data-custom\") pod \"heat-engine-5b96d96555-h7jqp\" (UID: \"9f019708-ddfa-465c-850a-7b13a20a87f2\") " pod="openstack/heat-engine-5b96d96555-h7jqp" Nov 22 07:48:33 crc kubenswrapper[4853]: I1122 07:48:33.305833 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f019708-ddfa-465c-850a-7b13a20a87f2-config-data\") pod \"heat-engine-5b96d96555-h7jqp\" (UID: \"9f019708-ddfa-465c-850a-7b13a20a87f2\") " pod="openstack/heat-engine-5b96d96555-h7jqp" Nov 22 07:48:33 crc kubenswrapper[4853]: I1122 07:48:33.316619 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9f019708-ddfa-465c-850a-7b13a20a87f2-config-data-custom\") pod \"heat-engine-5b96d96555-h7jqp\" (UID: \"9f019708-ddfa-465c-850a-7b13a20a87f2\") " pod="openstack/heat-engine-5b96d96555-h7jqp" Nov 22 07:48:33 crc kubenswrapper[4853]: I1122 07:48:33.317817 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f019708-ddfa-465c-850a-7b13a20a87f2-combined-ca-bundle\") pod \"heat-engine-5b96d96555-h7jqp\" (UID: \"9f019708-ddfa-465c-850a-7b13a20a87f2\") " pod="openstack/heat-engine-5b96d96555-h7jqp" Nov 22 07:48:33 crc kubenswrapper[4853]: I1122 07:48:33.332223 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f019708-ddfa-465c-850a-7b13a20a87f2-config-data\") pod \"heat-engine-5b96d96555-h7jqp\" (UID: \"9f019708-ddfa-465c-850a-7b13a20a87f2\") " pod="openstack/heat-engine-5b96d96555-h7jqp" Nov 22 07:48:33 crc kubenswrapper[4853]: I1122 07:48:33.369828 4853 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/heat-cfnapi-7f8c69f74-g9dcb"] Nov 22 07:48:33 crc kubenswrapper[4853]: I1122 07:48:33.371886 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-7f8c69f74-g9dcb" Nov 22 07:48:33 crc kubenswrapper[4853]: I1122 07:48:33.389237 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-cfnapi-config-data" Nov 22 07:48:33 crc kubenswrapper[4853]: I1122 07:48:33.409016 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3e4475d3-9059-4761-8a99-ad8e31d01947-ovsdbserver-sb\") pod \"dnsmasq-dns-5b6484d7cc-qgkhv\" (UID: \"3e4475d3-9059-4761-8a99-ad8e31d01947\") " pod="openstack/dnsmasq-dns-5b6484d7cc-qgkhv" Nov 22 07:48:33 crc kubenswrapper[4853]: I1122 07:48:33.409138 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3e4475d3-9059-4761-8a99-ad8e31d01947-config\") pod \"dnsmasq-dns-5b6484d7cc-qgkhv\" (UID: \"3e4475d3-9059-4761-8a99-ad8e31d01947\") " pod="openstack/dnsmasq-dns-5b6484d7cc-qgkhv" Nov 22 07:48:33 crc kubenswrapper[4853]: I1122 07:48:33.409223 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3e4475d3-9059-4761-8a99-ad8e31d01947-ovsdbserver-nb\") pod \"dnsmasq-dns-5b6484d7cc-qgkhv\" (UID: \"3e4475d3-9059-4761-8a99-ad8e31d01947\") " pod="openstack/dnsmasq-dns-5b6484d7cc-qgkhv" Nov 22 07:48:33 crc kubenswrapper[4853]: I1122 07:48:33.409242 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-chcs4\" (UniqueName: \"kubernetes.io/projected/3e4475d3-9059-4761-8a99-ad8e31d01947-kube-api-access-chcs4\") pod \"dnsmasq-dns-5b6484d7cc-qgkhv\" (UID: \"3e4475d3-9059-4761-8a99-ad8e31d01947\") " pod="openstack/dnsmasq-dns-5b6484d7cc-qgkhv" Nov 22 07:48:33 crc kubenswrapper[4853]: I1122 07:48:33.409322 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3e4475d3-9059-4761-8a99-ad8e31d01947-dns-swift-storage-0\") pod \"dnsmasq-dns-5b6484d7cc-qgkhv\" (UID: \"3e4475d3-9059-4761-8a99-ad8e31d01947\") " pod="openstack/dnsmasq-dns-5b6484d7cc-qgkhv" Nov 22 07:48:33 crc kubenswrapper[4853]: I1122 07:48:33.409340 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3e4475d3-9059-4761-8a99-ad8e31d01947-dns-svc\") pod \"dnsmasq-dns-5b6484d7cc-qgkhv\" (UID: \"3e4475d3-9059-4761-8a99-ad8e31d01947\") " pod="openstack/dnsmasq-dns-5b6484d7cc-qgkhv" Nov 22 07:48:33 crc kubenswrapper[4853]: I1122 07:48:33.419942 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kfssp\" (UniqueName: \"kubernetes.io/projected/9f019708-ddfa-465c-850a-7b13a20a87f2-kube-api-access-kfssp\") pod \"heat-engine-5b96d96555-h7jqp\" (UID: \"9f019708-ddfa-465c-850a-7b13a20a87f2\") " pod="openstack/heat-engine-5b96d96555-h7jqp" Nov 22 07:48:33 crc kubenswrapper[4853]: I1122 07:48:33.442649 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-7f8c69f74-g9dcb"] Nov 22 07:48:33 crc kubenswrapper[4853]: I1122 07:48:33.512766 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3e4475d3-9059-4761-8a99-ad8e31d01947-ovsdbserver-nb\") pod \"dnsmasq-dns-5b6484d7cc-qgkhv\" (UID: \"3e4475d3-9059-4761-8a99-ad8e31d01947\") " pod="openstack/dnsmasq-dns-5b6484d7cc-qgkhv" Nov 22 07:48:33 crc kubenswrapper[4853]: I1122 07:48:33.512821 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-chcs4\" (UniqueName: \"kubernetes.io/projected/3e4475d3-9059-4761-8a99-ad8e31d01947-kube-api-access-chcs4\") pod \"dnsmasq-dns-5b6484d7cc-qgkhv\" (UID: \"3e4475d3-9059-4761-8a99-ad8e31d01947\") " pod="openstack/dnsmasq-dns-5b6484d7cc-qgkhv" Nov 22 07:48:33 crc kubenswrapper[4853]: I1122 07:48:33.512848 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zht77\" (UniqueName: \"kubernetes.io/projected/dca4c1fd-7d31-4a9f-a6ad-aa037a5b8126-kube-api-access-zht77\") pod \"heat-cfnapi-7f8c69f74-g9dcb\" (UID: \"dca4c1fd-7d31-4a9f-a6ad-aa037a5b8126\") " pod="openstack/heat-cfnapi-7f8c69f74-g9dcb" Nov 22 07:48:33 crc kubenswrapper[4853]: I1122 07:48:33.512922 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dca4c1fd-7d31-4a9f-a6ad-aa037a5b8126-config-data\") pod \"heat-cfnapi-7f8c69f74-g9dcb\" (UID: \"dca4c1fd-7d31-4a9f-a6ad-aa037a5b8126\") " pod="openstack/heat-cfnapi-7f8c69f74-g9dcb" Nov 22 07:48:33 crc kubenswrapper[4853]: I1122 07:48:33.512976 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3e4475d3-9059-4761-8a99-ad8e31d01947-dns-swift-storage-0\") pod \"dnsmasq-dns-5b6484d7cc-qgkhv\" (UID: \"3e4475d3-9059-4761-8a99-ad8e31d01947\") " pod="openstack/dnsmasq-dns-5b6484d7cc-qgkhv" Nov 22 07:48:33 crc kubenswrapper[4853]: I1122 07:48:33.513002 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3e4475d3-9059-4761-8a99-ad8e31d01947-dns-svc\") pod \"dnsmasq-dns-5b6484d7cc-qgkhv\" (UID: \"3e4475d3-9059-4761-8a99-ad8e31d01947\") " pod="openstack/dnsmasq-dns-5b6484d7cc-qgkhv" Nov 22 07:48:33 crc kubenswrapper[4853]: I1122 07:48:33.513088 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dca4c1fd-7d31-4a9f-a6ad-aa037a5b8126-combined-ca-bundle\") pod \"heat-cfnapi-7f8c69f74-g9dcb\" (UID: \"dca4c1fd-7d31-4a9f-a6ad-aa037a5b8126\") " pod="openstack/heat-cfnapi-7f8c69f74-g9dcb" Nov 22 07:48:33 crc kubenswrapper[4853]: I1122 07:48:33.513119 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3e4475d3-9059-4761-8a99-ad8e31d01947-ovsdbserver-sb\") pod \"dnsmasq-dns-5b6484d7cc-qgkhv\" (UID: \"3e4475d3-9059-4761-8a99-ad8e31d01947\") " pod="openstack/dnsmasq-dns-5b6484d7cc-qgkhv" Nov 22 07:48:33 crc kubenswrapper[4853]: I1122 07:48:33.513194 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3e4475d3-9059-4761-8a99-ad8e31d01947-config\") pod \"dnsmasq-dns-5b6484d7cc-qgkhv\" (UID: \"3e4475d3-9059-4761-8a99-ad8e31d01947\") " pod="openstack/dnsmasq-dns-5b6484d7cc-qgkhv" Nov 22 07:48:33 crc kubenswrapper[4853]: I1122 07:48:33.513242 4853 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/dca4c1fd-7d31-4a9f-a6ad-aa037a5b8126-config-data-custom\") pod \"heat-cfnapi-7f8c69f74-g9dcb\" (UID: \"dca4c1fd-7d31-4a9f-a6ad-aa037a5b8126\") " pod="openstack/heat-cfnapi-7f8c69f74-g9dcb" Nov 22 07:48:33 crc kubenswrapper[4853]: I1122 07:48:33.514427 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3e4475d3-9059-4761-8a99-ad8e31d01947-ovsdbserver-nb\") pod \"dnsmasq-dns-5b6484d7cc-qgkhv\" (UID: \"3e4475d3-9059-4761-8a99-ad8e31d01947\") " pod="openstack/dnsmasq-dns-5b6484d7cc-qgkhv" Nov 22 07:48:33 crc kubenswrapper[4853]: I1122 07:48:33.515375 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3e4475d3-9059-4761-8a99-ad8e31d01947-dns-swift-storage-0\") pod \"dnsmasq-dns-5b6484d7cc-qgkhv\" (UID: \"3e4475d3-9059-4761-8a99-ad8e31d01947\") " pod="openstack/dnsmasq-dns-5b6484d7cc-qgkhv" Nov 22 07:48:33 crc kubenswrapper[4853]: I1122 07:48:33.515927 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3e4475d3-9059-4761-8a99-ad8e31d01947-dns-svc\") pod \"dnsmasq-dns-5b6484d7cc-qgkhv\" (UID: \"3e4475d3-9059-4761-8a99-ad8e31d01947\") " pod="openstack/dnsmasq-dns-5b6484d7cc-qgkhv" Nov 22 07:48:33 crc kubenswrapper[4853]: I1122 07:48:33.516586 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3e4475d3-9059-4761-8a99-ad8e31d01947-ovsdbserver-sb\") pod \"dnsmasq-dns-5b6484d7cc-qgkhv\" (UID: \"3e4475d3-9059-4761-8a99-ad8e31d01947\") " pod="openstack/dnsmasq-dns-5b6484d7cc-qgkhv" Nov 22 07:48:33 crc kubenswrapper[4853]: I1122 07:48:33.517126 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3e4475d3-9059-4761-8a99-ad8e31d01947-config\") pod \"dnsmasq-dns-5b6484d7cc-qgkhv\" (UID: \"3e4475d3-9059-4761-8a99-ad8e31d01947\") " pod="openstack/dnsmasq-dns-5b6484d7cc-qgkhv" Nov 22 07:48:33 crc kubenswrapper[4853]: I1122 07:48:33.528413 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-5d747bdcd7-w5l5q"] Nov 22 07:48:33 crc kubenswrapper[4853]: I1122 07:48:33.532975 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-5d747bdcd7-w5l5q" Nov 22 07:48:33 crc kubenswrapper[4853]: I1122 07:48:33.536428 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-api-config-data" Nov 22 07:48:33 crc kubenswrapper[4853]: I1122 07:48:33.552015 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-5d747bdcd7-w5l5q"] Nov 22 07:48:33 crc kubenswrapper[4853]: I1122 07:48:33.570824 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-chcs4\" (UniqueName: \"kubernetes.io/projected/3e4475d3-9059-4761-8a99-ad8e31d01947-kube-api-access-chcs4\") pod \"dnsmasq-dns-5b6484d7cc-qgkhv\" (UID: \"3e4475d3-9059-4761-8a99-ad8e31d01947\") " pod="openstack/dnsmasq-dns-5b6484d7cc-qgkhv" Nov 22 07:48:33 crc kubenswrapper[4853]: I1122 07:48:33.628593 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zht77\" (UniqueName: \"kubernetes.io/projected/dca4c1fd-7d31-4a9f-a6ad-aa037a5b8126-kube-api-access-zht77\") pod \"heat-cfnapi-7f8c69f74-g9dcb\" (UID: \"dca4c1fd-7d31-4a9f-a6ad-aa037a5b8126\") " pod="openstack/heat-cfnapi-7f8c69f74-g9dcb" Nov 22 07:48:33 crc kubenswrapper[4853]: I1122 07:48:33.629036 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f55a45f4-7912-4390-b078-7f97a864762d-config-data-custom\") pod \"heat-api-5d747bdcd7-w5l5q\" (UID: \"f55a45f4-7912-4390-b078-7f97a864762d\") " pod="openstack/heat-api-5d747bdcd7-w5l5q" Nov 22 07:48:33 crc kubenswrapper[4853]: I1122 07:48:33.629157 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dca4c1fd-7d31-4a9f-a6ad-aa037a5b8126-config-data\") pod \"heat-cfnapi-7f8c69f74-g9dcb\" (UID: \"dca4c1fd-7d31-4a9f-a6ad-aa037a5b8126\") " pod="openstack/heat-cfnapi-7f8c69f74-g9dcb" Nov 22 07:48:33 crc kubenswrapper[4853]: I1122 07:48:33.629240 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f55a45f4-7912-4390-b078-7f97a864762d-config-data\") pod \"heat-api-5d747bdcd7-w5l5q\" (UID: \"f55a45f4-7912-4390-b078-7f97a864762d\") " pod="openstack/heat-api-5d747bdcd7-w5l5q" Nov 22 07:48:33 crc kubenswrapper[4853]: I1122 07:48:33.629380 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f55a45f4-7912-4390-b078-7f97a864762d-combined-ca-bundle\") pod \"heat-api-5d747bdcd7-w5l5q\" (UID: \"f55a45f4-7912-4390-b078-7f97a864762d\") " pod="openstack/heat-api-5d747bdcd7-w5l5q" Nov 22 07:48:33 crc kubenswrapper[4853]: I1122 07:48:33.629506 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dca4c1fd-7d31-4a9f-a6ad-aa037a5b8126-combined-ca-bundle\") pod \"heat-cfnapi-7f8c69f74-g9dcb\" (UID: \"dca4c1fd-7d31-4a9f-a6ad-aa037a5b8126\") " pod="openstack/heat-cfnapi-7f8c69f74-g9dcb" Nov 22 07:48:33 crc kubenswrapper[4853]: I1122 07:48:33.629713 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/dca4c1fd-7d31-4a9f-a6ad-aa037a5b8126-config-data-custom\") pod \"heat-cfnapi-7f8c69f74-g9dcb\" (UID: \"dca4c1fd-7d31-4a9f-a6ad-aa037a5b8126\") " 
pod="openstack/heat-cfnapi-7f8c69f74-g9dcb" Nov 22 07:48:33 crc kubenswrapper[4853]: I1122 07:48:33.629841 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9mc2\" (UniqueName: \"kubernetes.io/projected/f55a45f4-7912-4390-b078-7f97a864762d-kube-api-access-k9mc2\") pod \"heat-api-5d747bdcd7-w5l5q\" (UID: \"f55a45f4-7912-4390-b078-7f97a864762d\") " pod="openstack/heat-api-5d747bdcd7-w5l5q" Nov 22 07:48:33 crc kubenswrapper[4853]: I1122 07:48:33.635557 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dca4c1fd-7d31-4a9f-a6ad-aa037a5b8126-config-data\") pod \"heat-cfnapi-7f8c69f74-g9dcb\" (UID: \"dca4c1fd-7d31-4a9f-a6ad-aa037a5b8126\") " pod="openstack/heat-cfnapi-7f8c69f74-g9dcb" Nov 22 07:48:33 crc kubenswrapper[4853]: I1122 07:48:33.638786 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/dca4c1fd-7d31-4a9f-a6ad-aa037a5b8126-config-data-custom\") pod \"heat-cfnapi-7f8c69f74-g9dcb\" (UID: \"dca4c1fd-7d31-4a9f-a6ad-aa037a5b8126\") " pod="openstack/heat-cfnapi-7f8c69f74-g9dcb" Nov 22 07:48:33 crc kubenswrapper[4853]: I1122 07:48:33.641158 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dca4c1fd-7d31-4a9f-a6ad-aa037a5b8126-combined-ca-bundle\") pod \"heat-cfnapi-7f8c69f74-g9dcb\" (UID: \"dca4c1fd-7d31-4a9f-a6ad-aa037a5b8126\") " pod="openstack/heat-cfnapi-7f8c69f74-g9dcb" Nov 22 07:48:33 crc kubenswrapper[4853]: I1122 07:48:33.674099 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zht77\" (UniqueName: \"kubernetes.io/projected/dca4c1fd-7d31-4a9f-a6ad-aa037a5b8126-kube-api-access-zht77\") pod \"heat-cfnapi-7f8c69f74-g9dcb\" (UID: \"dca4c1fd-7d31-4a9f-a6ad-aa037a5b8126\") " pod="openstack/heat-cfnapi-7f8c69f74-g9dcb" Nov 22 07:48:33 crc kubenswrapper[4853]: I1122 07:48:33.712580 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-7f8c69f74-g9dcb" Nov 22 07:48:33 crc kubenswrapper[4853]: I1122 07:48:33.713647 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-5b96d96555-h7jqp" Nov 22 07:48:33 crc kubenswrapper[4853]: I1122 07:48:33.733631 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k9mc2\" (UniqueName: \"kubernetes.io/projected/f55a45f4-7912-4390-b078-7f97a864762d-kube-api-access-k9mc2\") pod \"heat-api-5d747bdcd7-w5l5q\" (UID: \"f55a45f4-7912-4390-b078-7f97a864762d\") " pod="openstack/heat-api-5d747bdcd7-w5l5q" Nov 22 07:48:33 crc kubenswrapper[4853]: I1122 07:48:33.733772 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f55a45f4-7912-4390-b078-7f97a864762d-config-data-custom\") pod \"heat-api-5d747bdcd7-w5l5q\" (UID: \"f55a45f4-7912-4390-b078-7f97a864762d\") " pod="openstack/heat-api-5d747bdcd7-w5l5q" Nov 22 07:48:33 crc kubenswrapper[4853]: I1122 07:48:33.733854 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f55a45f4-7912-4390-b078-7f97a864762d-config-data\") pod \"heat-api-5d747bdcd7-w5l5q\" (UID: \"f55a45f4-7912-4390-b078-7f97a864762d\") " pod="openstack/heat-api-5d747bdcd7-w5l5q" Nov 22 07:48:33 crc kubenswrapper[4853]: I1122 07:48:33.733893 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f55a45f4-7912-4390-b078-7f97a864762d-combined-ca-bundle\") pod \"heat-api-5d747bdcd7-w5l5q\" (UID: \"f55a45f4-7912-4390-b078-7f97a864762d\") " pod="openstack/heat-api-5d747bdcd7-w5l5q" Nov 22 07:48:33 crc kubenswrapper[4853]: I1122 07:48:33.739602 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f55a45f4-7912-4390-b078-7f97a864762d-config-data-custom\") pod \"heat-api-5d747bdcd7-w5l5q\" (UID: \"f55a45f4-7912-4390-b078-7f97a864762d\") " pod="openstack/heat-api-5d747bdcd7-w5l5q" Nov 22 07:48:33 crc kubenswrapper[4853]: I1122 07:48:33.740906 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f55a45f4-7912-4390-b078-7f97a864762d-config-data\") pod \"heat-api-5d747bdcd7-w5l5q\" (UID: \"f55a45f4-7912-4390-b078-7f97a864762d\") " pod="openstack/heat-api-5d747bdcd7-w5l5q" Nov 22 07:48:33 crc kubenswrapper[4853]: I1122 07:48:33.749879 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f55a45f4-7912-4390-b078-7f97a864762d-combined-ca-bundle\") pod \"heat-api-5d747bdcd7-w5l5q\" (UID: \"f55a45f4-7912-4390-b078-7f97a864762d\") " pod="openstack/heat-api-5d747bdcd7-w5l5q" Nov 22 07:48:33 crc kubenswrapper[4853]: I1122 07:48:33.801348 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k9mc2\" (UniqueName: \"kubernetes.io/projected/f55a45f4-7912-4390-b078-7f97a864762d-kube-api-access-k9mc2\") pod \"heat-api-5d747bdcd7-w5l5q\" (UID: \"f55a45f4-7912-4390-b078-7f97a864762d\") " pod="openstack/heat-api-5d747bdcd7-w5l5q" Nov 22 07:48:34 crc kubenswrapper[4853]: I1122 07:48:33.857863 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b6484d7cc-qgkhv" Nov 22 07:48:34 crc kubenswrapper[4853]: I1122 07:48:33.908559 4853 scope.go:117] "RemoveContainer" containerID="67599a7a5981d6d4054a2c3fb6d72a75ee4653bef9ac1b3f2df7845a30f145ae" Nov 22 07:48:34 crc kubenswrapper[4853]: I1122 07:48:34.039630 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-5d747bdcd7-w5l5q" Nov 22 07:48:34 crc kubenswrapper[4853]: I1122 07:48:34.753037 4853 generic.go:334] "Generic (PLEG): container finished" podID="fe59dfbf-2b13-4067-9d40-3d0d372f0f77" containerID="2bf0cc77fb98b8b15644a8683515773bb6e68032be660da1deb3a644d7d5cf49" exitCode=0 Nov 22 07:48:34 crc kubenswrapper[4853]: I1122 07:48:34.753417 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"fe59dfbf-2b13-4067-9d40-3d0d372f0f77","Type":"ContainerDied","Data":"2bf0cc77fb98b8b15644a8683515773bb6e68032be660da1deb3a644d7d5cf49"} Nov 22 07:48:34 crc kubenswrapper[4853]: I1122 07:48:34.846552 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-7f8c69f74-g9dcb"] Nov 22 07:48:34 crc kubenswrapper[4853]: W1122 07:48:34.922678 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddca4c1fd_7d31_4a9f_a6ad_aa037a5b8126.slice/crio-df1c28c4649743d1ed2e73bfeaa112023d5718c7f5469e5783d87f4a11a2d4a7 WatchSource:0}: Error finding container df1c28c4649743d1ed2e73bfeaa112023d5718c7f5469e5783d87f4a11a2d4a7: Status 404 returned error can't find the container with id df1c28c4649743d1ed2e73bfeaa112023d5718c7f5469e5783d87f4a11a2d4a7 Nov 22 07:48:35 crc kubenswrapper[4853]: I1122 07:48:35.138860 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 22 07:48:35 crc kubenswrapper[4853]: I1122 07:48:35.254403 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bxb94\" (UniqueName: \"kubernetes.io/projected/fe59dfbf-2b13-4067-9d40-3d0d372f0f77-kube-api-access-bxb94\") pod \"fe59dfbf-2b13-4067-9d40-3d0d372f0f77\" (UID: \"fe59dfbf-2b13-4067-9d40-3d0d372f0f77\") " Nov 22 07:48:35 crc kubenswrapper[4853]: I1122 07:48:35.254597 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fe59dfbf-2b13-4067-9d40-3d0d372f0f77-config-data-custom\") pod \"fe59dfbf-2b13-4067-9d40-3d0d372f0f77\" (UID: \"fe59dfbf-2b13-4067-9d40-3d0d372f0f77\") " Nov 22 07:48:35 crc kubenswrapper[4853]: I1122 07:48:35.255001 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe59dfbf-2b13-4067-9d40-3d0d372f0f77-config-data\") pod \"fe59dfbf-2b13-4067-9d40-3d0d372f0f77\" (UID: \"fe59dfbf-2b13-4067-9d40-3d0d372f0f77\") " Nov 22 07:48:35 crc kubenswrapper[4853]: I1122 07:48:35.255150 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe59dfbf-2b13-4067-9d40-3d0d372f0f77-scripts\") pod \"fe59dfbf-2b13-4067-9d40-3d0d372f0f77\" (UID: \"fe59dfbf-2b13-4067-9d40-3d0d372f0f77\") " Nov 22 07:48:35 crc kubenswrapper[4853]: I1122 07:48:35.255212 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fe59dfbf-2b13-4067-9d40-3d0d372f0f77-etc-machine-id\") pod \"fe59dfbf-2b13-4067-9d40-3d0d372f0f77\" (UID: \"fe59dfbf-2b13-4067-9d40-3d0d372f0f77\") " Nov 22 07:48:35 crc kubenswrapper[4853]: I1122 07:48:35.255251 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe59dfbf-2b13-4067-9d40-3d0d372f0f77-combined-ca-bundle\") pod \"fe59dfbf-2b13-4067-9d40-3d0d372f0f77\" (UID: \"fe59dfbf-2b13-4067-9d40-3d0d372f0f77\") " Nov 22 07:48:35 crc kubenswrapper[4853]: I1122 07:48:35.255622 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fe59dfbf-2b13-4067-9d40-3d0d372f0f77-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "fe59dfbf-2b13-4067-9d40-3d0d372f0f77" (UID: "fe59dfbf-2b13-4067-9d40-3d0d372f0f77"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 22 07:48:35 crc kubenswrapper[4853]: I1122 07:48:35.260228 4853 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fe59dfbf-2b13-4067-9d40-3d0d372f0f77-etc-machine-id\") on node \"crc\" DevicePath \"\"" Nov 22 07:48:35 crc kubenswrapper[4853]: I1122 07:48:35.262587 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe59dfbf-2b13-4067-9d40-3d0d372f0f77-kube-api-access-bxb94" (OuterVolumeSpecName: "kube-api-access-bxb94") pod "fe59dfbf-2b13-4067-9d40-3d0d372f0f77" (UID: "fe59dfbf-2b13-4067-9d40-3d0d372f0f77"). InnerVolumeSpecName "kube-api-access-bxb94". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:48:35 crc kubenswrapper[4853]: I1122 07:48:35.269173 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe59dfbf-2b13-4067-9d40-3d0d372f0f77-scripts" (OuterVolumeSpecName: "scripts") pod "fe59dfbf-2b13-4067-9d40-3d0d372f0f77" (UID: "fe59dfbf-2b13-4067-9d40-3d0d372f0f77"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:48:35 crc kubenswrapper[4853]: I1122 07:48:35.271218 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe59dfbf-2b13-4067-9d40-3d0d372f0f77-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "fe59dfbf-2b13-4067-9d40-3d0d372f0f77" (UID: "fe59dfbf-2b13-4067-9d40-3d0d372f0f77"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:48:35 crc kubenswrapper[4853]: I1122 07:48:35.318638 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe59dfbf-2b13-4067-9d40-3d0d372f0f77-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fe59dfbf-2b13-4067-9d40-3d0d372f0f77" (UID: "fe59dfbf-2b13-4067-9d40-3d0d372f0f77"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:48:35 crc kubenswrapper[4853]: I1122 07:48:35.363647 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bxb94\" (UniqueName: \"kubernetes.io/projected/fe59dfbf-2b13-4067-9d40-3d0d372f0f77-kube-api-access-bxb94\") on node \"crc\" DevicePath \"\"" Nov 22 07:48:35 crc kubenswrapper[4853]: I1122 07:48:35.363713 4853 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fe59dfbf-2b13-4067-9d40-3d0d372f0f77-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 22 07:48:35 crc kubenswrapper[4853]: I1122 07:48:35.363725 4853 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe59dfbf-2b13-4067-9d40-3d0d372f0f77-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 07:48:35 crc kubenswrapper[4853]: I1122 07:48:35.363733 4853 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe59dfbf-2b13-4067-9d40-3d0d372f0f77-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:48:35 crc kubenswrapper[4853]: I1122 07:48:35.398788 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe59dfbf-2b13-4067-9d40-3d0d372f0f77-config-data" (OuterVolumeSpecName: "config-data") pod "fe59dfbf-2b13-4067-9d40-3d0d372f0f77" (UID: "fe59dfbf-2b13-4067-9d40-3d0d372f0f77"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:48:35 crc kubenswrapper[4853]: I1122 07:48:35.440183 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-5b96d96555-h7jqp"] Nov 22 07:48:35 crc kubenswrapper[4853]: W1122 07:48:35.451625 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf55a45f4_7912_4390_b078_7f97a864762d.slice/crio-be2684db53a5027d357ed7a6731428e712205ee9c05614aefa55411b0dfc52f7 WatchSource:0}: Error finding container be2684db53a5027d357ed7a6731428e712205ee9c05614aefa55411b0dfc52f7: Status 404 returned error can't find the container with id be2684db53a5027d357ed7a6731428e712205ee9c05614aefa55411b0dfc52f7 Nov 22 07:48:35 crc kubenswrapper[4853]: I1122 07:48:35.466202 4853 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe59dfbf-2b13-4067-9d40-3d0d372f0f77-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 07:48:35 crc kubenswrapper[4853]: I1122 07:48:35.467896 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-5d747bdcd7-w5l5q"] Nov 22 07:48:35 crc kubenswrapper[4853]: I1122 07:48:35.488492 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b6484d7cc-qgkhv"] Nov 22 07:48:35 crc kubenswrapper[4853]: I1122 07:48:35.791875 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-5d747bdcd7-w5l5q" event={"ID":"f55a45f4-7912-4390-b078-7f97a864762d","Type":"ContainerStarted","Data":"be2684db53a5027d357ed7a6731428e712205ee9c05614aefa55411b0dfc52f7"} Nov 22 07:48:35 crc kubenswrapper[4853]: I1122 07:48:35.807885 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-5b96d96555-h7jqp" event={"ID":"9f019708-ddfa-465c-850a-7b13a20a87f2","Type":"ContainerStarted","Data":"dc78592dc7ecea4bc7e1d74ac2f7ea045e0baf6eba3818bd1b43c51935f93b34"} Nov 22 07:48:35 crc kubenswrapper[4853]: I1122 07:48:35.833130 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"fe59dfbf-2b13-4067-9d40-3d0d372f0f77","Type":"ContainerDied","Data":"8ccee1cf7904a68665db710a4796c04a18f0765ee61175b3fe4a268d6d7d5d6c"} Nov 22 07:48:35 crc kubenswrapper[4853]: I1122 07:48:35.833206 4853 scope.go:117] "RemoveContainer" containerID="af91ce2533940e8384953db542379aa666858d68f2cf600fcd358bf7ce7b0b8a" Nov 22 07:48:35 crc kubenswrapper[4853]: I1122 07:48:35.833449 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 22 07:48:35 crc kubenswrapper[4853]: I1122 07:48:35.841962 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-7f8c69f74-g9dcb" event={"ID":"dca4c1fd-7d31-4a9f-a6ad-aa037a5b8126","Type":"ContainerStarted","Data":"df1c28c4649743d1ed2e73bfeaa112023d5718c7f5469e5783d87f4a11a2d4a7"} Nov 22 07:48:35 crc kubenswrapper[4853]: I1122 07:48:35.850078 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b6484d7cc-qgkhv" event={"ID":"3e4475d3-9059-4761-8a99-ad8e31d01947","Type":"ContainerStarted","Data":"719506d983c33debf4630f5d37df276f5fbfd79fd69ee92fbca7ec6b802525ec"} Nov 22 07:48:35 crc kubenswrapper[4853]: I1122 07:48:35.879434 4853 scope.go:117] "RemoveContainer" containerID="2bf0cc77fb98b8b15644a8683515773bb6e68032be660da1deb3a644d7d5cf49" Nov 22 07:48:35 crc kubenswrapper[4853]: I1122 07:48:35.923561 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 22 07:48:35 crc kubenswrapper[4853]: I1122 07:48:35.944877 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 22 07:48:35 crc kubenswrapper[4853]: I1122 07:48:35.981386 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Nov 22 07:48:35 crc kubenswrapper[4853]: E1122 07:48:35.982182 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe59dfbf-2b13-4067-9d40-3d0d372f0f77" containerName="probe" Nov 22 07:48:35 crc kubenswrapper[4853]: I1122 07:48:35.982212 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe59dfbf-2b13-4067-9d40-3d0d372f0f77" containerName="probe" Nov 22 07:48:35 crc kubenswrapper[4853]: E1122 07:48:35.982232 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe59dfbf-2b13-4067-9d40-3d0d372f0f77" containerName="cinder-scheduler" Nov 22 07:48:35 crc kubenswrapper[4853]: I1122 07:48:35.982242 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe59dfbf-2b13-4067-9d40-3d0d372f0f77" containerName="cinder-scheduler" Nov 22 07:48:35 crc kubenswrapper[4853]: I1122 07:48:35.982517 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe59dfbf-2b13-4067-9d40-3d0d372f0f77" containerName="probe" Nov 22 07:48:35 crc kubenswrapper[4853]: I1122 07:48:35.982535 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe59dfbf-2b13-4067-9d40-3d0d372f0f77" containerName="cinder-scheduler" Nov 22 07:48:35 crc kubenswrapper[4853]: I1122 07:48:35.984413 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 22 07:48:35 crc kubenswrapper[4853]: I1122 07:48:35.990254 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Nov 22 07:48:36 crc kubenswrapper[4853]: I1122 07:48:36.003907 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 22 07:48:36 crc kubenswrapper[4853]: I1122 07:48:36.111858 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ab28049-d7dd-41b2-ae06-95c5a283266a-scripts\") pod \"cinder-scheduler-0\" (UID: \"2ab28049-d7dd-41b2-ae06-95c5a283266a\") " pod="openstack/cinder-scheduler-0" Nov 22 07:48:36 crc kubenswrapper[4853]: I1122 07:48:36.112010 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2ab28049-d7dd-41b2-ae06-95c5a283266a-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"2ab28049-d7dd-41b2-ae06-95c5a283266a\") " pod="openstack/cinder-scheduler-0" Nov 22 07:48:36 crc kubenswrapper[4853]: I1122 07:48:36.112110 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2ab28049-d7dd-41b2-ae06-95c5a283266a-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"2ab28049-d7dd-41b2-ae06-95c5a283266a\") " pod="openstack/cinder-scheduler-0" Nov 22 07:48:36 crc kubenswrapper[4853]: I1122 07:48:36.112198 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ab28049-d7dd-41b2-ae06-95c5a283266a-config-data\") pod \"cinder-scheduler-0\" (UID: \"2ab28049-d7dd-41b2-ae06-95c5a283266a\") " pod="openstack/cinder-scheduler-0" Nov 22 07:48:36 crc kubenswrapper[4853]: I1122 07:48:36.112281 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ab28049-d7dd-41b2-ae06-95c5a283266a-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"2ab28049-d7dd-41b2-ae06-95c5a283266a\") " pod="openstack/cinder-scheduler-0" Nov 22 07:48:36 crc kubenswrapper[4853]: I1122 07:48:36.112317 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mptzf\" (UniqueName: \"kubernetes.io/projected/2ab28049-d7dd-41b2-ae06-95c5a283266a-kube-api-access-mptzf\") pod \"cinder-scheduler-0\" (UID: \"2ab28049-d7dd-41b2-ae06-95c5a283266a\") " pod="openstack/cinder-scheduler-0" Nov 22 07:48:36 crc kubenswrapper[4853]: I1122 07:48:36.214448 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2ab28049-d7dd-41b2-ae06-95c5a283266a-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"2ab28049-d7dd-41b2-ae06-95c5a283266a\") " pod="openstack/cinder-scheduler-0" Nov 22 07:48:36 crc kubenswrapper[4853]: I1122 07:48:36.216017 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2ab28049-d7dd-41b2-ae06-95c5a283266a-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"2ab28049-d7dd-41b2-ae06-95c5a283266a\") " pod="openstack/cinder-scheduler-0" Nov 22 07:48:36 crc kubenswrapper[4853]: I1122 07:48:36.216201 4853 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ab28049-d7dd-41b2-ae06-95c5a283266a-config-data\") pod \"cinder-scheduler-0\" (UID: \"2ab28049-d7dd-41b2-ae06-95c5a283266a\") " pod="openstack/cinder-scheduler-0" Nov 22 07:48:36 crc kubenswrapper[4853]: I1122 07:48:36.216433 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ab28049-d7dd-41b2-ae06-95c5a283266a-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"2ab28049-d7dd-41b2-ae06-95c5a283266a\") " pod="openstack/cinder-scheduler-0" Nov 22 07:48:36 crc kubenswrapper[4853]: I1122 07:48:36.216531 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mptzf\" (UniqueName: \"kubernetes.io/projected/2ab28049-d7dd-41b2-ae06-95c5a283266a-kube-api-access-mptzf\") pod \"cinder-scheduler-0\" (UID: \"2ab28049-d7dd-41b2-ae06-95c5a283266a\") " pod="openstack/cinder-scheduler-0" Nov 22 07:48:36 crc kubenswrapper[4853]: I1122 07:48:36.216615 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ab28049-d7dd-41b2-ae06-95c5a283266a-scripts\") pod \"cinder-scheduler-0\" (UID: \"2ab28049-d7dd-41b2-ae06-95c5a283266a\") " pod="openstack/cinder-scheduler-0" Nov 22 07:48:36 crc kubenswrapper[4853]: I1122 07:48:36.217923 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2ab28049-d7dd-41b2-ae06-95c5a283266a-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"2ab28049-d7dd-41b2-ae06-95c5a283266a\") " pod="openstack/cinder-scheduler-0" Nov 22 07:48:36 crc kubenswrapper[4853]: I1122 07:48:36.221454 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2ab28049-d7dd-41b2-ae06-95c5a283266a-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"2ab28049-d7dd-41b2-ae06-95c5a283266a\") " pod="openstack/cinder-scheduler-0" Nov 22 07:48:36 crc kubenswrapper[4853]: I1122 07:48:36.223981 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ab28049-d7dd-41b2-ae06-95c5a283266a-config-data\") pod \"cinder-scheduler-0\" (UID: \"2ab28049-d7dd-41b2-ae06-95c5a283266a\") " pod="openstack/cinder-scheduler-0" Nov 22 07:48:36 crc kubenswrapper[4853]: I1122 07:48:36.224719 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ab28049-d7dd-41b2-ae06-95c5a283266a-scripts\") pod \"cinder-scheduler-0\" (UID: \"2ab28049-d7dd-41b2-ae06-95c5a283266a\") " pod="openstack/cinder-scheduler-0" Nov 22 07:48:36 crc kubenswrapper[4853]: I1122 07:48:36.225457 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ab28049-d7dd-41b2-ae06-95c5a283266a-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"2ab28049-d7dd-41b2-ae06-95c5a283266a\") " pod="openstack/cinder-scheduler-0" Nov 22 07:48:36 crc kubenswrapper[4853]: I1122 07:48:36.241341 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mptzf\" (UniqueName: \"kubernetes.io/projected/2ab28049-d7dd-41b2-ae06-95c5a283266a-kube-api-access-mptzf\") pod \"cinder-scheduler-0\" (UID: \"2ab28049-d7dd-41b2-ae06-95c5a283266a\") " pod="openstack/cinder-scheduler-0" Nov 22 07:48:36 
crc kubenswrapper[4853]: I1122 07:48:36.325925 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 22 07:48:36 crc kubenswrapper[4853]: I1122 07:48:36.866640 4853 generic.go:334] "Generic (PLEG): container finished" podID="3e4475d3-9059-4761-8a99-ad8e31d01947" containerID="69c1a1537094a06b0e1898039c33ab49ca3e2187370b00cda9e43195bdaa1cc0" exitCode=0 Nov 22 07:48:36 crc kubenswrapper[4853]: I1122 07:48:36.866868 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b6484d7cc-qgkhv" event={"ID":"3e4475d3-9059-4761-8a99-ad8e31d01947","Type":"ContainerDied","Data":"69c1a1537094a06b0e1898039c33ab49ca3e2187370b00cda9e43195bdaa1cc0"} Nov 22 07:48:36 crc kubenswrapper[4853]: I1122 07:48:36.876724 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-5b96d96555-h7jqp" event={"ID":"9f019708-ddfa-465c-850a-7b13a20a87f2","Type":"ContainerStarted","Data":"2e26072ba72281cc34fd01c470e7381419acf03b04f60311406a45f61202e917"} Nov 22 07:48:36 crc kubenswrapper[4853]: I1122 07:48:36.877162 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-5b96d96555-h7jqp" Nov 22 07:48:36 crc kubenswrapper[4853]: I1122 07:48:36.978228 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-5b96d96555-h7jqp" podStartSLOduration=3.978201417 podStartE2EDuration="3.978201417s" podCreationTimestamp="2025-11-22 07:48:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:48:36.922860134 +0000 UTC m=+2315.763482780" watchObservedRunningTime="2025-11-22 07:48:36.978201417 +0000 UTC m=+2315.818824043" Nov 22 07:48:37 crc kubenswrapper[4853]: I1122 07:48:37.015737 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 22 07:48:37 crc kubenswrapper[4853]: I1122 07:48:37.776705 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fe59dfbf-2b13-4067-9d40-3d0d372f0f77" path="/var/lib/kubelet/pods/fe59dfbf-2b13-4067-9d40-3d0d372f0f77/volumes" Nov 22 07:48:37 crc kubenswrapper[4853]: I1122 07:48:37.894365 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"2ab28049-d7dd-41b2-ae06-95c5a283266a","Type":"ContainerStarted","Data":"9d953c927680bbc5781bd48cad491b5dd4654e8d93852c0e8e87ed1b997e3ea7"} Nov 22 07:48:38 crc kubenswrapper[4853]: I1122 07:48:38.924135 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b6484d7cc-qgkhv" event={"ID":"3e4475d3-9059-4761-8a99-ad8e31d01947","Type":"ContainerStarted","Data":"58560fa99378d76cae1cfb758a089593a15ad59a0b3f97c6f5e4bac473b2baae"} Nov 22 07:48:38 crc kubenswrapper[4853]: I1122 07:48:38.924772 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5b6484d7cc-qgkhv" Nov 22 07:48:38 crc kubenswrapper[4853]: I1122 07:48:38.952479 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5b6484d7cc-qgkhv" podStartSLOduration=5.952450744 podStartE2EDuration="5.952450744s" podCreationTimestamp="2025-11-22 07:48:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:48:38.945043195 +0000 UTC m=+2317.785665821" watchObservedRunningTime="2025-11-22 07:48:38.952450744 +0000 UTC 
m=+2317.793073370" Nov 22 07:48:39 crc kubenswrapper[4853]: I1122 07:48:39.941008 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"2ab28049-d7dd-41b2-ae06-95c5a283266a","Type":"ContainerStarted","Data":"78007e135b8b9926c9e400494a580a5ac8e26476f50e37e25e258ac01790ad38"} Nov 22 07:48:40 crc kubenswrapper[4853]: I1122 07:48:40.224213 4853 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="fd18e0e6-0aa8-480b-b279-ac3fa277c8b9" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.219:3000/\": dial tcp 10.217.0.219:3000: connect: connection refused" Nov 22 07:48:40 crc kubenswrapper[4853]: I1122 07:48:40.955886 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"2ab28049-d7dd-41b2-ae06-95c5a283266a","Type":"ContainerStarted","Data":"83cefb1f142514318a2f09b40e310fb088741f76cbbb809ed562812b3946e89a"} Nov 22 07:48:40 crc kubenswrapper[4853]: I1122 07:48:40.967470 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-7f8c69f74-g9dcb" event={"ID":"dca4c1fd-7d31-4a9f-a6ad-aa037a5b8126","Type":"ContainerStarted","Data":"fe2c5b79bc2b97b5efa701ddb69ef00a2ba5e2882b0c1b25b88622c9918d85b6"} Nov 22 07:48:40 crc kubenswrapper[4853]: I1122 07:48:40.969874 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-7f8c69f74-g9dcb" Nov 22 07:48:40 crc kubenswrapper[4853]: I1122 07:48:40.981655 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=5.981624143 podStartE2EDuration="5.981624143s" podCreationTimestamp="2025-11-22 07:48:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:48:40.979864096 +0000 UTC m=+2319.820486742" watchObservedRunningTime="2025-11-22 07:48:40.981624143 +0000 UTC m=+2319.822246769" Nov 22 07:48:40 crc kubenswrapper[4853]: I1122 07:48:40.984230 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-5d747bdcd7-w5l5q" event={"ID":"f55a45f4-7912-4390-b078-7f97a864762d","Type":"ContainerStarted","Data":"294afdbff8ed4a3f4a4afc09aba032fa21d5cf6b51ce8749a8fb3442d53e90d9"} Nov 22 07:48:40 crc kubenswrapper[4853]: I1122 07:48:40.985576 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-5d747bdcd7-w5l5q" Nov 22 07:48:41 crc kubenswrapper[4853]: I1122 07:48:41.015971 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-7f8c69f74-g9dcb" podStartSLOduration=3.703800197 podStartE2EDuration="8.015935039s" podCreationTimestamp="2025-11-22 07:48:33 +0000 UTC" firstStartedPulling="2025-11-22 07:48:34.941999179 +0000 UTC m=+2313.782621805" lastFinishedPulling="2025-11-22 07:48:39.254134021 +0000 UTC m=+2318.094756647" observedRunningTime="2025-11-22 07:48:41.005975781 +0000 UTC m=+2319.846598427" watchObservedRunningTime="2025-11-22 07:48:41.015935039 +0000 UTC m=+2319.856557675" Nov 22 07:48:41 crc kubenswrapper[4853]: I1122 07:48:41.040316 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-5d747bdcd7-w5l5q" podStartSLOduration=4.239602527 podStartE2EDuration="8.040283005s" podCreationTimestamp="2025-11-22 07:48:33 +0000 UTC" firstStartedPulling="2025-11-22 07:48:35.456228378 +0000 UTC m=+2314.296851004" lastFinishedPulling="2025-11-22 
07:48:39.256908856 +0000 UTC m=+2318.097531482" observedRunningTime="2025-11-22 07:48:41.024413328 +0000 UTC m=+2319.865035954" watchObservedRunningTime="2025-11-22 07:48:41.040283005 +0000 UTC m=+2319.880905631" Nov 22 07:48:41 crc kubenswrapper[4853]: I1122 07:48:41.326989 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Nov 22 07:48:42 crc kubenswrapper[4853]: I1122 07:48:42.031575 4853 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-b4ffw" podUID="863918c2-c760-4c96-888f-a778bcbb018b" containerName="registry-server" probeResult="failure" output=< Nov 22 07:48:42 crc kubenswrapper[4853]: timeout: failed to connect service ":50051" within 1s Nov 22 07:48:42 crc kubenswrapper[4853]: > Nov 22 07:48:43 crc kubenswrapper[4853]: I1122 07:48:43.457191 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-846454b756-2r7vp"] Nov 22 07:48:43 crc kubenswrapper[4853]: I1122 07:48:43.459657 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-846454b756-2r7vp" Nov 22 07:48:43 crc kubenswrapper[4853]: I1122 07:48:43.469065 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4194e8cf-31be-421c-9cac-b89a8a47f004-config-data\") pod \"heat-engine-846454b756-2r7vp\" (UID: \"4194e8cf-31be-421c-9cac-b89a8a47f004\") " pod="openstack/heat-engine-846454b756-2r7vp" Nov 22 07:48:43 crc kubenswrapper[4853]: I1122 07:48:43.469256 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4194e8cf-31be-421c-9cac-b89a8a47f004-combined-ca-bundle\") pod \"heat-engine-846454b756-2r7vp\" (UID: \"4194e8cf-31be-421c-9cac-b89a8a47f004\") " pod="openstack/heat-engine-846454b756-2r7vp" Nov 22 07:48:43 crc kubenswrapper[4853]: I1122 07:48:43.469491 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zpkbd\" (UniqueName: \"kubernetes.io/projected/4194e8cf-31be-421c-9cac-b89a8a47f004-kube-api-access-zpkbd\") pod \"heat-engine-846454b756-2r7vp\" (UID: \"4194e8cf-31be-421c-9cac-b89a8a47f004\") " pod="openstack/heat-engine-846454b756-2r7vp" Nov 22 07:48:43 crc kubenswrapper[4853]: I1122 07:48:43.469545 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4194e8cf-31be-421c-9cac-b89a8a47f004-config-data-custom\") pod \"heat-engine-846454b756-2r7vp\" (UID: \"4194e8cf-31be-421c-9cac-b89a8a47f004\") " pod="openstack/heat-engine-846454b756-2r7vp" Nov 22 07:48:43 crc kubenswrapper[4853]: I1122 07:48:43.511391 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-6d68fcb995-t8k7p"] Nov 22 07:48:43 crc kubenswrapper[4853]: I1122 07:48:43.513465 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-6d68fcb995-t8k7p" Nov 22 07:48:43 crc kubenswrapper[4853]: I1122 07:48:43.549083 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-c4c8b4969-hxtqh"] Nov 22 07:48:43 crc kubenswrapper[4853]: I1122 07:48:43.551843 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-c4c8b4969-hxtqh" Nov 22 07:48:43 crc kubenswrapper[4853]: I1122 07:48:43.572027 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5c88004-5a16-4f0f-bb38-08d44ee6e0fb-config-data\") pod \"heat-api-6d68fcb995-t8k7p\" (UID: \"e5c88004-5a16-4f0f-bb38-08d44ee6e0fb\") " pod="openstack/heat-api-6d68fcb995-t8k7p" Nov 22 07:48:43 crc kubenswrapper[4853]: I1122 07:48:43.572188 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zpkbd\" (UniqueName: \"kubernetes.io/projected/4194e8cf-31be-421c-9cac-b89a8a47f004-kube-api-access-zpkbd\") pod \"heat-engine-846454b756-2r7vp\" (UID: \"4194e8cf-31be-421c-9cac-b89a8a47f004\") " pod="openstack/heat-engine-846454b756-2r7vp" Nov 22 07:48:43 crc kubenswrapper[4853]: I1122 07:48:43.572244 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4194e8cf-31be-421c-9cac-b89a8a47f004-config-data-custom\") pod \"heat-engine-846454b756-2r7vp\" (UID: \"4194e8cf-31be-421c-9cac-b89a8a47f004\") " pod="openstack/heat-engine-846454b756-2r7vp" Nov 22 07:48:43 crc kubenswrapper[4853]: I1122 07:48:43.572282 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5c88004-5a16-4f0f-bb38-08d44ee6e0fb-combined-ca-bundle\") pod \"heat-api-6d68fcb995-t8k7p\" (UID: \"e5c88004-5a16-4f0f-bb38-08d44ee6e0fb\") " pod="openstack/heat-api-6d68fcb995-t8k7p" Nov 22 07:48:43 crc kubenswrapper[4853]: I1122 07:48:43.572341 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4194e8cf-31be-421c-9cac-b89a8a47f004-config-data\") pod \"heat-engine-846454b756-2r7vp\" (UID: \"4194e8cf-31be-421c-9cac-b89a8a47f004\") " pod="openstack/heat-engine-846454b756-2r7vp" Nov 22 07:48:43 crc kubenswrapper[4853]: I1122 07:48:43.572419 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jkzh\" (UniqueName: \"kubernetes.io/projected/e5c88004-5a16-4f0f-bb38-08d44ee6e0fb-kube-api-access-6jkzh\") pod \"heat-api-6d68fcb995-t8k7p\" (UID: \"e5c88004-5a16-4f0f-bb38-08d44ee6e0fb\") " pod="openstack/heat-api-6d68fcb995-t8k7p" Nov 22 07:48:43 crc kubenswrapper[4853]: I1122 07:48:43.572473 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4194e8cf-31be-421c-9cac-b89a8a47f004-combined-ca-bundle\") pod \"heat-engine-846454b756-2r7vp\" (UID: \"4194e8cf-31be-421c-9cac-b89a8a47f004\") " pod="openstack/heat-engine-846454b756-2r7vp" Nov 22 07:48:43 crc kubenswrapper[4853]: I1122 07:48:43.572553 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e5c88004-5a16-4f0f-bb38-08d44ee6e0fb-config-data-custom\") pod \"heat-api-6d68fcb995-t8k7p\" (UID: \"e5c88004-5a16-4f0f-bb38-08d44ee6e0fb\") " pod="openstack/heat-api-6d68fcb995-t8k7p" Nov 22 07:48:43 crc kubenswrapper[4853]: I1122 07:48:43.577247 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-846454b756-2r7vp"] Nov 22 07:48:43 crc kubenswrapper[4853]: I1122 07:48:43.591089 4853 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4194e8cf-31be-421c-9cac-b89a8a47f004-config-data\") pod \"heat-engine-846454b756-2r7vp\" (UID: \"4194e8cf-31be-421c-9cac-b89a8a47f004\") " pod="openstack/heat-engine-846454b756-2r7vp" Nov 22 07:48:43 crc kubenswrapper[4853]: I1122 07:48:43.593503 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zpkbd\" (UniqueName: \"kubernetes.io/projected/4194e8cf-31be-421c-9cac-b89a8a47f004-kube-api-access-zpkbd\") pod \"heat-engine-846454b756-2r7vp\" (UID: \"4194e8cf-31be-421c-9cac-b89a8a47f004\") " pod="openstack/heat-engine-846454b756-2r7vp" Nov 22 07:48:43 crc kubenswrapper[4853]: I1122 07:48:43.595553 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4194e8cf-31be-421c-9cac-b89a8a47f004-combined-ca-bundle\") pod \"heat-engine-846454b756-2r7vp\" (UID: \"4194e8cf-31be-421c-9cac-b89a8a47f004\") " pod="openstack/heat-engine-846454b756-2r7vp" Nov 22 07:48:43 crc kubenswrapper[4853]: I1122 07:48:43.597875 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4194e8cf-31be-421c-9cac-b89a8a47f004-config-data-custom\") pod \"heat-engine-846454b756-2r7vp\" (UID: \"4194e8cf-31be-421c-9cac-b89a8a47f004\") " pod="openstack/heat-engine-846454b756-2r7vp" Nov 22 07:48:43 crc kubenswrapper[4853]: I1122 07:48:43.606697 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-6d68fcb995-t8k7p"] Nov 22 07:48:43 crc kubenswrapper[4853]: I1122 07:48:43.626119 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-c4c8b4969-hxtqh"] Nov 22 07:48:43 crc kubenswrapper[4853]: I1122 07:48:43.674785 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5c88004-5a16-4f0f-bb38-08d44ee6e0fb-combined-ca-bundle\") pod \"heat-api-6d68fcb995-t8k7p\" (UID: \"e5c88004-5a16-4f0f-bb38-08d44ee6e0fb\") " pod="openstack/heat-api-6d68fcb995-t8k7p" Nov 22 07:48:43 crc kubenswrapper[4853]: I1122 07:48:43.674876 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a7714a6-22d8-449a-98bd-b145c7a8d19e-config-data\") pod \"heat-cfnapi-c4c8b4969-hxtqh\" (UID: \"8a7714a6-22d8-449a-98bd-b145c7a8d19e\") " pod="openstack/heat-cfnapi-c4c8b4969-hxtqh" Nov 22 07:48:43 crc kubenswrapper[4853]: I1122 07:48:43.674963 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6jkzh\" (UniqueName: \"kubernetes.io/projected/e5c88004-5a16-4f0f-bb38-08d44ee6e0fb-kube-api-access-6jkzh\") pod \"heat-api-6d68fcb995-t8k7p\" (UID: \"e5c88004-5a16-4f0f-bb38-08d44ee6e0fb\") " pod="openstack/heat-api-6d68fcb995-t8k7p" Nov 22 07:48:43 crc kubenswrapper[4853]: I1122 07:48:43.675004 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pjs4v\" (UniqueName: \"kubernetes.io/projected/8a7714a6-22d8-449a-98bd-b145c7a8d19e-kube-api-access-pjs4v\") pod \"heat-cfnapi-c4c8b4969-hxtqh\" (UID: \"8a7714a6-22d8-449a-98bd-b145c7a8d19e\") " pod="openstack/heat-cfnapi-c4c8b4969-hxtqh" Nov 22 07:48:43 crc kubenswrapper[4853]: I1122 07:48:43.676148 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/e5c88004-5a16-4f0f-bb38-08d44ee6e0fb-config-data-custom\") pod \"heat-api-6d68fcb995-t8k7p\" (UID: \"e5c88004-5a16-4f0f-bb38-08d44ee6e0fb\") " pod="openstack/heat-api-6d68fcb995-t8k7p" Nov 22 07:48:43 crc kubenswrapper[4853]: I1122 07:48:43.676263 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5c88004-5a16-4f0f-bb38-08d44ee6e0fb-config-data\") pod \"heat-api-6d68fcb995-t8k7p\" (UID: \"e5c88004-5a16-4f0f-bb38-08d44ee6e0fb\") " pod="openstack/heat-api-6d68fcb995-t8k7p" Nov 22 07:48:43 crc kubenswrapper[4853]: I1122 07:48:43.676327 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8a7714a6-22d8-449a-98bd-b145c7a8d19e-config-data-custom\") pod \"heat-cfnapi-c4c8b4969-hxtqh\" (UID: \"8a7714a6-22d8-449a-98bd-b145c7a8d19e\") " pod="openstack/heat-cfnapi-c4c8b4969-hxtqh" Nov 22 07:48:43 crc kubenswrapper[4853]: I1122 07:48:43.676466 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a7714a6-22d8-449a-98bd-b145c7a8d19e-combined-ca-bundle\") pod \"heat-cfnapi-c4c8b4969-hxtqh\" (UID: \"8a7714a6-22d8-449a-98bd-b145c7a8d19e\") " pod="openstack/heat-cfnapi-c4c8b4969-hxtqh" Nov 22 07:48:43 crc kubenswrapper[4853]: I1122 07:48:43.682034 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e5c88004-5a16-4f0f-bb38-08d44ee6e0fb-config-data-custom\") pod \"heat-api-6d68fcb995-t8k7p\" (UID: \"e5c88004-5a16-4f0f-bb38-08d44ee6e0fb\") " pod="openstack/heat-api-6d68fcb995-t8k7p" Nov 22 07:48:43 crc kubenswrapper[4853]: I1122 07:48:43.683309 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5c88004-5a16-4f0f-bb38-08d44ee6e0fb-config-data\") pod \"heat-api-6d68fcb995-t8k7p\" (UID: \"e5c88004-5a16-4f0f-bb38-08d44ee6e0fb\") " pod="openstack/heat-api-6d68fcb995-t8k7p" Nov 22 07:48:43 crc kubenswrapper[4853]: I1122 07:48:43.685256 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5c88004-5a16-4f0f-bb38-08d44ee6e0fb-combined-ca-bundle\") pod \"heat-api-6d68fcb995-t8k7p\" (UID: \"e5c88004-5a16-4f0f-bb38-08d44ee6e0fb\") " pod="openstack/heat-api-6d68fcb995-t8k7p" Nov 22 07:48:43 crc kubenswrapper[4853]: I1122 07:48:43.696706 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6jkzh\" (UniqueName: \"kubernetes.io/projected/e5c88004-5a16-4f0f-bb38-08d44ee6e0fb-kube-api-access-6jkzh\") pod \"heat-api-6d68fcb995-t8k7p\" (UID: \"e5c88004-5a16-4f0f-bb38-08d44ee6e0fb\") " pod="openstack/heat-api-6d68fcb995-t8k7p" Nov 22 07:48:43 crc kubenswrapper[4853]: I1122 07:48:43.779255 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a7714a6-22d8-449a-98bd-b145c7a8d19e-config-data\") pod \"heat-cfnapi-c4c8b4969-hxtqh\" (UID: \"8a7714a6-22d8-449a-98bd-b145c7a8d19e\") " pod="openstack/heat-cfnapi-c4c8b4969-hxtqh" Nov 22 07:48:43 crc kubenswrapper[4853]: I1122 07:48:43.780609 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pjs4v\" (UniqueName: 
\"kubernetes.io/projected/8a7714a6-22d8-449a-98bd-b145c7a8d19e-kube-api-access-pjs4v\") pod \"heat-cfnapi-c4c8b4969-hxtqh\" (UID: \"8a7714a6-22d8-449a-98bd-b145c7a8d19e\") " pod="openstack/heat-cfnapi-c4c8b4969-hxtqh" Nov 22 07:48:43 crc kubenswrapper[4853]: I1122 07:48:43.780952 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8a7714a6-22d8-449a-98bd-b145c7a8d19e-config-data-custom\") pod \"heat-cfnapi-c4c8b4969-hxtqh\" (UID: \"8a7714a6-22d8-449a-98bd-b145c7a8d19e\") " pod="openstack/heat-cfnapi-c4c8b4969-hxtqh" Nov 22 07:48:43 crc kubenswrapper[4853]: I1122 07:48:43.781090 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a7714a6-22d8-449a-98bd-b145c7a8d19e-combined-ca-bundle\") pod \"heat-cfnapi-c4c8b4969-hxtqh\" (UID: \"8a7714a6-22d8-449a-98bd-b145c7a8d19e\") " pod="openstack/heat-cfnapi-c4c8b4969-hxtqh" Nov 22 07:48:43 crc kubenswrapper[4853]: I1122 07:48:43.785717 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a7714a6-22d8-449a-98bd-b145c7a8d19e-config-data\") pod \"heat-cfnapi-c4c8b4969-hxtqh\" (UID: \"8a7714a6-22d8-449a-98bd-b145c7a8d19e\") " pod="openstack/heat-cfnapi-c4c8b4969-hxtqh" Nov 22 07:48:43 crc kubenswrapper[4853]: I1122 07:48:43.786553 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-846454b756-2r7vp" Nov 22 07:48:43 crc kubenswrapper[4853]: I1122 07:48:43.787380 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a7714a6-22d8-449a-98bd-b145c7a8d19e-combined-ca-bundle\") pod \"heat-cfnapi-c4c8b4969-hxtqh\" (UID: \"8a7714a6-22d8-449a-98bd-b145c7a8d19e\") " pod="openstack/heat-cfnapi-c4c8b4969-hxtqh" Nov 22 07:48:43 crc kubenswrapper[4853]: I1122 07:48:43.790815 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8a7714a6-22d8-449a-98bd-b145c7a8d19e-config-data-custom\") pod \"heat-cfnapi-c4c8b4969-hxtqh\" (UID: \"8a7714a6-22d8-449a-98bd-b145c7a8d19e\") " pod="openstack/heat-cfnapi-c4c8b4969-hxtqh" Nov 22 07:48:43 crc kubenswrapper[4853]: I1122 07:48:43.808869 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pjs4v\" (UniqueName: \"kubernetes.io/projected/8a7714a6-22d8-449a-98bd-b145c7a8d19e-kube-api-access-pjs4v\") pod \"heat-cfnapi-c4c8b4969-hxtqh\" (UID: \"8a7714a6-22d8-449a-98bd-b145c7a8d19e\") " pod="openstack/heat-cfnapi-c4c8b4969-hxtqh" Nov 22 07:48:43 crc kubenswrapper[4853]: I1122 07:48:43.843459 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-6d68fcb995-t8k7p" Nov 22 07:48:43 crc kubenswrapper[4853]: I1122 07:48:43.859933 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5b6484d7cc-qgkhv" Nov 22 07:48:43 crc kubenswrapper[4853]: I1122 07:48:43.902004 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-c4c8b4969-hxtqh" Nov 22 07:48:43 crc kubenswrapper[4853]: I1122 07:48:43.962854 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-674b76c99f-c5t2f"] Nov 22 07:48:43 crc kubenswrapper[4853]: I1122 07:48:43.963273 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-674b76c99f-c5t2f" podUID="0dd1e1e8-e796-4ad0-96de-526e8b847c61" containerName="dnsmasq-dns" containerID="cri-o://ac3fc7ab9615f852a66a2955744831106f69c8a7a5577b53598a30e348490122" gracePeriod=10 Nov 22 07:48:44 crc kubenswrapper[4853]: I1122 07:48:44.678965 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-674b76c99f-c5t2f" Nov 22 07:48:44 crc kubenswrapper[4853]: I1122 07:48:44.814267 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-twlvc\" (UniqueName: \"kubernetes.io/projected/0dd1e1e8-e796-4ad0-96de-526e8b847c61-kube-api-access-twlvc\") pod \"0dd1e1e8-e796-4ad0-96de-526e8b847c61\" (UID: \"0dd1e1e8-e796-4ad0-96de-526e8b847c61\") " Nov 22 07:48:44 crc kubenswrapper[4853]: I1122 07:48:44.814348 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0dd1e1e8-e796-4ad0-96de-526e8b847c61-dns-swift-storage-0\") pod \"0dd1e1e8-e796-4ad0-96de-526e8b847c61\" (UID: \"0dd1e1e8-e796-4ad0-96de-526e8b847c61\") " Nov 22 07:48:44 crc kubenswrapper[4853]: I1122 07:48:44.815303 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0dd1e1e8-e796-4ad0-96de-526e8b847c61-dns-svc\") pod \"0dd1e1e8-e796-4ad0-96de-526e8b847c61\" (UID: \"0dd1e1e8-e796-4ad0-96de-526e8b847c61\") " Nov 22 07:48:44 crc kubenswrapper[4853]: I1122 07:48:44.815404 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0dd1e1e8-e796-4ad0-96de-526e8b847c61-config\") pod \"0dd1e1e8-e796-4ad0-96de-526e8b847c61\" (UID: \"0dd1e1e8-e796-4ad0-96de-526e8b847c61\") " Nov 22 07:48:44 crc kubenswrapper[4853]: I1122 07:48:44.815439 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0dd1e1e8-e796-4ad0-96de-526e8b847c61-ovsdbserver-nb\") pod \"0dd1e1e8-e796-4ad0-96de-526e8b847c61\" (UID: \"0dd1e1e8-e796-4ad0-96de-526e8b847c61\") " Nov 22 07:48:44 crc kubenswrapper[4853]: I1122 07:48:44.815474 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0dd1e1e8-e796-4ad0-96de-526e8b847c61-ovsdbserver-sb\") pod \"0dd1e1e8-e796-4ad0-96de-526e8b847c61\" (UID: \"0dd1e1e8-e796-4ad0-96de-526e8b847c61\") " Nov 22 07:48:44 crc kubenswrapper[4853]: I1122 07:48:44.854479 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0dd1e1e8-e796-4ad0-96de-526e8b847c61-kube-api-access-twlvc" (OuterVolumeSpecName: "kube-api-access-twlvc") pod "0dd1e1e8-e796-4ad0-96de-526e8b847c61" (UID: "0dd1e1e8-e796-4ad0-96de-526e8b847c61"). InnerVolumeSpecName "kube-api-access-twlvc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:48:44 crc kubenswrapper[4853]: I1122 07:48:44.935475 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0dd1e1e8-e796-4ad0-96de-526e8b847c61-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "0dd1e1e8-e796-4ad0-96de-526e8b847c61" (UID: "0dd1e1e8-e796-4ad0-96de-526e8b847c61"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:48:44 crc kubenswrapper[4853]: I1122 07:48:44.971077 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-twlvc\" (UniqueName: \"kubernetes.io/projected/0dd1e1e8-e796-4ad0-96de-526e8b847c61-kube-api-access-twlvc\") on node \"crc\" DevicePath \"\"" Nov 22 07:48:44 crc kubenswrapper[4853]: I1122 07:48:44.971129 4853 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0dd1e1e8-e796-4ad0-96de-526e8b847c61-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 22 07:48:45 crc kubenswrapper[4853]: I1122 07:48:45.026396 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-c4c8b4969-hxtqh"] Nov 22 07:48:45 crc kubenswrapper[4853]: I1122 07:48:45.043925 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0dd1e1e8-e796-4ad0-96de-526e8b847c61-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "0dd1e1e8-e796-4ad0-96de-526e8b847c61" (UID: "0dd1e1e8-e796-4ad0-96de-526e8b847c61"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:48:45 crc kubenswrapper[4853]: I1122 07:48:45.049646 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0dd1e1e8-e796-4ad0-96de-526e8b847c61-config" (OuterVolumeSpecName: "config") pod "0dd1e1e8-e796-4ad0-96de-526e8b847c61" (UID: "0dd1e1e8-e796-4ad0-96de-526e8b847c61"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:48:45 crc kubenswrapper[4853]: I1122 07:48:45.050285 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0dd1e1e8-e796-4ad0-96de-526e8b847c61-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "0dd1e1e8-e796-4ad0-96de-526e8b847c61" (UID: "0dd1e1e8-e796-4ad0-96de-526e8b847c61"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:48:45 crc kubenswrapper[4853]: I1122 07:48:45.051032 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0dd1e1e8-e796-4ad0-96de-526e8b847c61-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "0dd1e1e8-e796-4ad0-96de-526e8b847c61" (UID: "0dd1e1e8-e796-4ad0-96de-526e8b847c61"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:48:45 crc kubenswrapper[4853]: I1122 07:48:45.055816 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-6d68fcb995-t8k7p"] Nov 22 07:48:45 crc kubenswrapper[4853]: I1122 07:48:45.073551 4853 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0dd1e1e8-e796-4ad0-96de-526e8b847c61-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 22 07:48:45 crc kubenswrapper[4853]: I1122 07:48:45.073593 4853 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0dd1e1e8-e796-4ad0-96de-526e8b847c61-config\") on node \"crc\" DevicePath \"\"" Nov 22 07:48:45 crc kubenswrapper[4853]: I1122 07:48:45.073608 4853 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0dd1e1e8-e796-4ad0-96de-526e8b847c61-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 22 07:48:45 crc kubenswrapper[4853]: I1122 07:48:45.073627 4853 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0dd1e1e8-e796-4ad0-96de-526e8b847c61-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 22 07:48:45 crc kubenswrapper[4853]: I1122 07:48:45.090137 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-846454b756-2r7vp"] Nov 22 07:48:45 crc kubenswrapper[4853]: I1122 07:48:45.147619 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-c4c8b4969-hxtqh" event={"ID":"8a7714a6-22d8-449a-98bd-b145c7a8d19e","Type":"ContainerStarted","Data":"5e52a87e14445f006587fcc2a9a8ffdb168435bbe7aac1d7de3e30ec66f58e94"} Nov 22 07:48:45 crc kubenswrapper[4853]: I1122 07:48:45.150726 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-846454b756-2r7vp" event={"ID":"4194e8cf-31be-421c-9cac-b89a8a47f004","Type":"ContainerStarted","Data":"c1f15078a54bd909b74dec593fe583b2aaad33e01d03599a9431a3d6ab268787"} Nov 22 07:48:45 crc kubenswrapper[4853]: I1122 07:48:45.154076 4853 generic.go:334] "Generic (PLEG): container finished" podID="0dd1e1e8-e796-4ad0-96de-526e8b847c61" containerID="ac3fc7ab9615f852a66a2955744831106f69c8a7a5577b53598a30e348490122" exitCode=0 Nov 22 07:48:45 crc kubenswrapper[4853]: I1122 07:48:45.154136 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-674b76c99f-c5t2f" event={"ID":"0dd1e1e8-e796-4ad0-96de-526e8b847c61","Type":"ContainerDied","Data":"ac3fc7ab9615f852a66a2955744831106f69c8a7a5577b53598a30e348490122"} Nov 22 07:48:45 crc kubenswrapper[4853]: I1122 07:48:45.154172 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-674b76c99f-c5t2f" event={"ID":"0dd1e1e8-e796-4ad0-96de-526e8b847c61","Type":"ContainerDied","Data":"e286a9544c5740f7e1fa8be101d55ddba3a4dd625c6da76876436d8ccdd4b747"} Nov 22 07:48:45 crc kubenswrapper[4853]: I1122 07:48:45.154836 4853 util.go:48] "No ready sandbox for pod can be found. 
Nov 22 07:48:45 crc kubenswrapper[4853]: I1122 07:48:45.161677 4853 scope.go:117] "RemoveContainer" containerID="ac3fc7ab9615f852a66a2955744831106f69c8a7a5577b53598a30e348490122"
Nov 22 07:48:45 crc kubenswrapper[4853]: I1122 07:48:45.223006 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-674b76c99f-c5t2f"]
Nov 22 07:48:45 crc kubenswrapper[4853]: I1122 07:48:45.242458 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-674b76c99f-c5t2f"]
Nov 22 07:48:45 crc kubenswrapper[4853]: I1122 07:48:45.251921 4853 scope.go:117] "RemoveContainer" containerID="4564605c1db102a2ff7e6f53055a0f9c5f44ad04c2a559690c3e8e3c42c65a65"
Nov 22 07:48:45 crc kubenswrapper[4853]: I1122 07:48:45.336703 4853 scope.go:117] "RemoveContainer" containerID="ac3fc7ab9615f852a66a2955744831106f69c8a7a5577b53598a30e348490122"
Nov 22 07:48:45 crc kubenswrapper[4853]: E1122 07:48:45.340697 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ac3fc7ab9615f852a66a2955744831106f69c8a7a5577b53598a30e348490122\": container with ID starting with ac3fc7ab9615f852a66a2955744831106f69c8a7a5577b53598a30e348490122 not found: ID does not exist" containerID="ac3fc7ab9615f852a66a2955744831106f69c8a7a5577b53598a30e348490122"
Nov 22 07:48:45 crc kubenswrapper[4853]: I1122 07:48:45.340773 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ac3fc7ab9615f852a66a2955744831106f69c8a7a5577b53598a30e348490122"} err="failed to get container status \"ac3fc7ab9615f852a66a2955744831106f69c8a7a5577b53598a30e348490122\": rpc error: code = NotFound desc = could not find container \"ac3fc7ab9615f852a66a2955744831106f69c8a7a5577b53598a30e348490122\": container with ID starting with ac3fc7ab9615f852a66a2955744831106f69c8a7a5577b53598a30e348490122 not found: ID does not exist"
Nov 22 07:48:45 crc kubenswrapper[4853]: I1122 07:48:45.340810 4853 scope.go:117] "RemoveContainer" containerID="4564605c1db102a2ff7e6f53055a0f9c5f44ad04c2a559690c3e8e3c42c65a65"
Nov 22 07:48:45 crc kubenswrapper[4853]: E1122 07:48:45.341435 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4564605c1db102a2ff7e6f53055a0f9c5f44ad04c2a559690c3e8e3c42c65a65\": container with ID starting with 4564605c1db102a2ff7e6f53055a0f9c5f44ad04c2a559690c3e8e3c42c65a65 not found: ID does not exist" containerID="4564605c1db102a2ff7e6f53055a0f9c5f44ad04c2a559690c3e8e3c42c65a65"
Nov 22 07:48:45 crc kubenswrapper[4853]: I1122 07:48:45.341498 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4564605c1db102a2ff7e6f53055a0f9c5f44ad04c2a559690c3e8e3c42c65a65"} err="failed to get container status \"4564605c1db102a2ff7e6f53055a0f9c5f44ad04c2a559690c3e8e3c42c65a65\": rpc error: code = NotFound desc = could not find container \"4564605c1db102a2ff7e6f53055a0f9c5f44ad04c2a559690c3e8e3c42c65a65\": container with ID starting with 4564605c1db102a2ff7e6f53055a0f9c5f44ad04c2a559690c3e8e3c42c65a65 not found: ID does not exist"
Nov 22 07:48:45 crc kubenswrapper[4853]: I1122 07:48:45.766781 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0dd1e1e8-e796-4ad0-96de-526e8b847c61" path="/var/lib/kubelet/pods/0dd1e1e8-e796-4ad0-96de-526e8b847c61/volumes"
Nov 22 07:48:46 crc kubenswrapper[4853]: I1122 07:48:46.186019 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-846454b756-2r7vp" event={"ID":"4194e8cf-31be-421c-9cac-b89a8a47f004","Type":"ContainerStarted","Data":"303907ed9d345885bb4cacf1ce533287bbc02acd9d0628cee6d8b5726c9dceee"}
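The E1122 NotFound errors above are the benign tail of container removal: the kubelet issues RemoveContainer after CRI-O has already purged the container, so the follow-up ContainerStatus lookup fails with NotFound, and "DeleteContainer returned error" here does not indicate a stuck pod. An illustrative filter that drops exactly that pattern when scanning a capture like this one (assumed line format; "E1122" is simply klog's error-severity prefix for this date):

# Sketch: hide the benign delete-race, keep every other error-level record.
import sys

for line in sys.stdin:
    if "DeleteContainer returned error" in line and "code = NotFound" in line:
        continue  # container already gone; the delete is effectively idempotent
    if "returned error" in line or " E1122 " in line:
        print(line, end="")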
Nov 22 07:48:46 crc kubenswrapper[4853]: I1122 07:48:46.186392 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-846454b756-2r7vp"
Nov 22 07:48:46 crc kubenswrapper[4853]: I1122 07:48:46.192770 4853 generic.go:334] "Generic (PLEG): container finished" podID="fd18e0e6-0aa8-480b-b279-ac3fa277c8b9" containerID="c3bec8b912b07b45c0217eff4ae1d88a1901442f6b68c6ad4e6cdf0a17a44feb" exitCode=0
Nov 22 07:48:46 crc kubenswrapper[4853]: I1122 07:48:46.192874 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fd18e0e6-0aa8-480b-b279-ac3fa277c8b9","Type":"ContainerDied","Data":"c3bec8b912b07b45c0217eff4ae1d88a1901442f6b68c6ad4e6cdf0a17a44feb"}
Nov 22 07:48:46 crc kubenswrapper[4853]: I1122 07:48:46.196078 4853 generic.go:334] "Generic (PLEG): container finished" podID="e5c88004-5a16-4f0f-bb38-08d44ee6e0fb" containerID="dd96aa0768dec1b6a6ae586b75746c2941acc09ba8a4db9ffc5e8f3bdc5fee2c" exitCode=1
Nov 22 07:48:46 crc kubenswrapper[4853]: I1122 07:48:46.196157 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6d68fcb995-t8k7p" event={"ID":"e5c88004-5a16-4f0f-bb38-08d44ee6e0fb","Type":"ContainerDied","Data":"dd96aa0768dec1b6a6ae586b75746c2941acc09ba8a4db9ffc5e8f3bdc5fee2c"}
Nov 22 07:48:46 crc kubenswrapper[4853]: I1122 07:48:46.196196 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6d68fcb995-t8k7p" event={"ID":"e5c88004-5a16-4f0f-bb38-08d44ee6e0fb","Type":"ContainerStarted","Data":"38b03e297ec0c1810bd330b357392d050a218e2aa78063a883fff5d326698ab2"}
Nov 22 07:48:46 crc kubenswrapper[4853]: I1122 07:48:46.197150 4853 scope.go:117] "RemoveContainer" containerID="dd96aa0768dec1b6a6ae586b75746c2941acc09ba8a4db9ffc5e8f3bdc5fee2c"
Nov 22 07:48:46 crc kubenswrapper[4853]: I1122 07:48:46.228901 4853 generic.go:334] "Generic (PLEG): container finished" podID="8a7714a6-22d8-449a-98bd-b145c7a8d19e" containerID="77a80dda18c195c967d4a21a7ebbfe9225089cb8bc706ef23e3d8cd93b90cf48" exitCode=1
Nov 22 07:48:46 crc kubenswrapper[4853]: I1122 07:48:46.228971 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-c4c8b4969-hxtqh" event={"ID":"8a7714a6-22d8-449a-98bd-b145c7a8d19e","Type":"ContainerDied","Data":"77a80dda18c195c967d4a21a7ebbfe9225089cb8bc706ef23e3d8cd93b90cf48"}
Nov 22 07:48:46 crc kubenswrapper[4853]: I1122 07:48:46.229918 4853 scope.go:117] "RemoveContainer" containerID="77a80dda18c195c967d4a21a7ebbfe9225089cb8bc706ef23e3d8cd93b90cf48"
Nov 22 07:48:46 crc kubenswrapper[4853]: I1122 07:48:46.268884 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-846454b756-2r7vp" podStartSLOduration=3.268856375 podStartE2EDuration="3.268856375s" podCreationTimestamp="2025-11-22 07:48:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:48:46.210399409 +0000 UTC m=+2325.051022035" watchObservedRunningTime="2025-11-22 07:48:46.268856375 +0000 UTC m=+2325.109479001"
Nov 22 07:48:46 crc kubenswrapper[4853]: I1122 07:48:46.706122 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
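The startup-latency record above is internally consistent: no image pull happened (both pulling timestamps are Go's zero time, 0001-01-01), so podStartSLOduration is simply watchObservedRunningTime minus podCreationTimestamp. A quick check of the arithmetic (illustrative only; values copied from the entry, truncated to microseconds):

# Sketch: verify podStartSLOduration for heat-engine-846454b756-2r7vp.
from datetime import datetime, timezone

created  = datetime(2025, 11, 22, 7, 48, 43, tzinfo=timezone.utc)
observed = datetime(2025, 11, 22, 7, 48, 46, 268856, tzinfo=timezone.utc)  # 07:48:46.268856375, truncated
print((observed - created).total_seconds())  # ~3.268856 s, matching podStartSLOduration=3.268856375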
Nov 22 07:48:46 crc kubenswrapper[4853]: I1122 07:48:46.842952 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd18e0e6-0aa8-480b-b279-ac3fa277c8b9-ceilometer-tls-certs\") pod \"fd18e0e6-0aa8-480b-b279-ac3fa277c8b9\" (UID: \"fd18e0e6-0aa8-480b-b279-ac3fa277c8b9\") "
Nov 22 07:48:46 crc kubenswrapper[4853]: I1122 07:48:46.843024 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gnhpx\" (UniqueName: \"kubernetes.io/projected/fd18e0e6-0aa8-480b-b279-ac3fa277c8b9-kube-api-access-gnhpx\") pod \"fd18e0e6-0aa8-480b-b279-ac3fa277c8b9\" (UID: \"fd18e0e6-0aa8-480b-b279-ac3fa277c8b9\") "
Nov 22 07:48:46 crc kubenswrapper[4853]: I1122 07:48:46.843052 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fd18e0e6-0aa8-480b-b279-ac3fa277c8b9-sg-core-conf-yaml\") pod \"fd18e0e6-0aa8-480b-b279-ac3fa277c8b9\" (UID: \"fd18e0e6-0aa8-480b-b279-ac3fa277c8b9\") "
Nov 22 07:48:46 crc kubenswrapper[4853]: I1122 07:48:46.843108 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fd18e0e6-0aa8-480b-b279-ac3fa277c8b9-scripts\") pod \"fd18e0e6-0aa8-480b-b279-ac3fa277c8b9\" (UID: \"fd18e0e6-0aa8-480b-b279-ac3fa277c8b9\") "
Nov 22 07:48:46 crc kubenswrapper[4853]: I1122 07:48:46.843207 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd18e0e6-0aa8-480b-b279-ac3fa277c8b9-config-data\") pod \"fd18e0e6-0aa8-480b-b279-ac3fa277c8b9\" (UID: \"fd18e0e6-0aa8-480b-b279-ac3fa277c8b9\") "
Nov 22 07:48:46 crc kubenswrapper[4853]: I1122 07:48:46.843469 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fd18e0e6-0aa8-480b-b279-ac3fa277c8b9-log-httpd\") pod \"fd18e0e6-0aa8-480b-b279-ac3fa277c8b9\" (UID: \"fd18e0e6-0aa8-480b-b279-ac3fa277c8b9\") "
Nov 22 07:48:46 crc kubenswrapper[4853]: I1122 07:48:46.843562 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fd18e0e6-0aa8-480b-b279-ac3fa277c8b9-run-httpd\") pod \"fd18e0e6-0aa8-480b-b279-ac3fa277c8b9\" (UID: \"fd18e0e6-0aa8-480b-b279-ac3fa277c8b9\") "
Nov 22 07:48:46 crc kubenswrapper[4853]: I1122 07:48:46.843605 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd18e0e6-0aa8-480b-b279-ac3fa277c8b9-combined-ca-bundle\") pod \"fd18e0e6-0aa8-480b-b279-ac3fa277c8b9\" (UID: \"fd18e0e6-0aa8-480b-b279-ac3fa277c8b9\") "
Nov 22 07:48:46 crc kubenswrapper[4853]: I1122 07:48:46.844686 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fd18e0e6-0aa8-480b-b279-ac3fa277c8b9-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "fd18e0e6-0aa8-480b-b279-ac3fa277c8b9" (UID: "fd18e0e6-0aa8-480b-b279-ac3fa277c8b9"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 22 07:48:46 crc kubenswrapper[4853]: I1122 07:48:46.869215 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fd18e0e6-0aa8-480b-b279-ac3fa277c8b9-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "fd18e0e6-0aa8-480b-b279-ac3fa277c8b9" (UID: "fd18e0e6-0aa8-480b-b279-ac3fa277c8b9"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 22 07:48:46 crc kubenswrapper[4853]: I1122 07:48:46.894959 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd18e0e6-0aa8-480b-b279-ac3fa277c8b9-scripts" (OuterVolumeSpecName: "scripts") pod "fd18e0e6-0aa8-480b-b279-ac3fa277c8b9" (UID: "fd18e0e6-0aa8-480b-b279-ac3fa277c8b9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 22 07:48:46 crc kubenswrapper[4853]: I1122 07:48:46.895493 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd18e0e6-0aa8-480b-b279-ac3fa277c8b9-kube-api-access-gnhpx" (OuterVolumeSpecName: "kube-api-access-gnhpx") pod "fd18e0e6-0aa8-480b-b279-ac3fa277c8b9" (UID: "fd18e0e6-0aa8-480b-b279-ac3fa277c8b9"). InnerVolumeSpecName "kube-api-access-gnhpx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 22 07:48:46 crc kubenswrapper[4853]: I1122 07:48:46.954192 4853 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fd18e0e6-0aa8-480b-b279-ac3fa277c8b9-log-httpd\") on node \"crc\" DevicePath \"\""
Nov 22 07:48:46 crc kubenswrapper[4853]: I1122 07:48:46.954229 4853 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fd18e0e6-0aa8-480b-b279-ac3fa277c8b9-run-httpd\") on node \"crc\" DevicePath \"\""
Nov 22 07:48:46 crc kubenswrapper[4853]: I1122 07:48:46.954241 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gnhpx\" (UniqueName: \"kubernetes.io/projected/fd18e0e6-0aa8-480b-b279-ac3fa277c8b9-kube-api-access-gnhpx\") on node \"crc\" DevicePath \"\""
Nov 22 07:48:46 crc kubenswrapper[4853]: I1122 07:48:46.954254 4853 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fd18e0e6-0aa8-480b-b279-ac3fa277c8b9-scripts\") on node \"crc\" DevicePath \"\""
Nov 22 07:48:47 crc kubenswrapper[4853]: I1122 07:48:47.058034 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd18e0e6-0aa8-480b-b279-ac3fa277c8b9-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "fd18e0e6-0aa8-480b-b279-ac3fa277c8b9" (UID: "fd18e0e6-0aa8-480b-b279-ac3fa277c8b9"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 22 07:48:47 crc kubenswrapper[4853]: I1122 07:48:47.087978 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd18e0e6-0aa8-480b-b279-ac3fa277c8b9-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "fd18e0e6-0aa8-480b-b279-ac3fa277c8b9" (UID: "fd18e0e6-0aa8-480b-b279-ac3fa277c8b9"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:48:47 crc kubenswrapper[4853]: I1122 07:48:47.119058 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Nov 22 07:48:47 crc kubenswrapper[4853]: I1122 07:48:47.126234 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd18e0e6-0aa8-480b-b279-ac3fa277c8b9-config-data" (OuterVolumeSpecName: "config-data") pod "fd18e0e6-0aa8-480b-b279-ac3fa277c8b9" (UID: "fd18e0e6-0aa8-480b-b279-ac3fa277c8b9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:48:47 crc kubenswrapper[4853]: I1122 07:48:47.164073 4853 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd18e0e6-0aa8-480b-b279-ac3fa277c8b9-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 07:48:47 crc kubenswrapper[4853]: I1122 07:48:47.164510 4853 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd18e0e6-0aa8-480b-b279-ac3fa277c8b9-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 22 07:48:47 crc kubenswrapper[4853]: I1122 07:48:47.164526 4853 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fd18e0e6-0aa8-480b-b279-ac3fa277c8b9-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 22 07:48:47 crc kubenswrapper[4853]: I1122 07:48:47.174460 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd18e0e6-0aa8-480b-b279-ac3fa277c8b9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fd18e0e6-0aa8-480b-b279-ac3fa277c8b9" (UID: "fd18e0e6-0aa8-480b-b279-ac3fa277c8b9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:48:47 crc kubenswrapper[4853]: I1122 07:48:47.248428 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6d68fcb995-t8k7p" event={"ID":"e5c88004-5a16-4f0f-bb38-08d44ee6e0fb","Type":"ContainerStarted","Data":"8583b467014d7bd55f24f7f6810f51d002f0f5d5a46067966bccedebdbbf71b1"} Nov 22 07:48:47 crc kubenswrapper[4853]: I1122 07:48:47.248586 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-6d68fcb995-t8k7p" Nov 22 07:48:47 crc kubenswrapper[4853]: I1122 07:48:47.258247 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-c4c8b4969-hxtqh" event={"ID":"8a7714a6-22d8-449a-98bd-b145c7a8d19e","Type":"ContainerStarted","Data":"387a8e22dca8f29137324282e9c8d66d6cff5fd42b8778a48a636a4a72083364"} Nov 22 07:48:47 crc kubenswrapper[4853]: I1122 07:48:47.259719 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-c4c8b4969-hxtqh" Nov 22 07:48:47 crc kubenswrapper[4853]: I1122 07:48:47.268723 4853 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd18e0e6-0aa8-480b-b279-ac3fa277c8b9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:48:47 crc kubenswrapper[4853]: I1122 07:48:47.273220 4853 util.go:48] "No ready sandbox for pod can be found. 
Nov 22 07:48:47 crc kubenswrapper[4853]: I1122 07:48:47.279770 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fd18e0e6-0aa8-480b-b279-ac3fa277c8b9","Type":"ContainerDied","Data":"363c01c152e347ddb2f316c9a8dd3bc81ba22bce92f223743bd3b59eeb167bf7"}
Nov 22 07:48:47 crc kubenswrapper[4853]: I1122 07:48:47.279883 4853 scope.go:117] "RemoveContainer" containerID="f476cc203d1cf4a00b1c84c59c310f83c401df5fbf557891736427867e5dfe98"
Nov 22 07:48:47 crc kubenswrapper[4853]: I1122 07:48:47.303697 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-6d68fcb995-t8k7p" podStartSLOduration=4.303672706 podStartE2EDuration="4.303672706s" podCreationTimestamp="2025-11-22 07:48:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:48:47.272347371 +0000 UTC m=+2326.112969997" watchObservedRunningTime="2025-11-22 07:48:47.303672706 +0000 UTC m=+2326.144295332"
Nov 22 07:48:47 crc kubenswrapper[4853]: I1122 07:48:47.368678 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-c4c8b4969-hxtqh" podStartSLOduration=4.368641078 podStartE2EDuration="4.368641078s" podCreationTimestamp="2025-11-22 07:48:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:48:47.301047065 +0000 UTC m=+2326.141669691" watchObservedRunningTime="2025-11-22 07:48:47.368641078 +0000 UTC m=+2326.209263704"
Nov 22 07:48:47 crc kubenswrapper[4853]: I1122 07:48:47.385848 4853 scope.go:117] "RemoveContainer" containerID="33ab3761b9706827c4863af820d72c70c7768f42a172c34a7156c4fea4337fc1"
Nov 22 07:48:47 crc kubenswrapper[4853]: I1122 07:48:47.437142 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-5d747bdcd7-w5l5q"]
Nov 22 07:48:47 crc kubenswrapper[4853]: I1122 07:48:47.438126 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-api-5d747bdcd7-w5l5q" podUID="f55a45f4-7912-4390-b078-7f97a864762d" containerName="heat-api" containerID="cri-o://294afdbff8ed4a3f4a4afc09aba032fa21d5cf6b51ce8749a8fb3442d53e90d9" gracePeriod=60
Nov 22 07:48:47 crc kubenswrapper[4853]: I1122 07:48:47.482132 4853 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-api-5d747bdcd7-w5l5q" podUID="f55a45f4-7912-4390-b078-7f97a864762d" containerName="heat-api" probeResult="failure" output="Get \"http://10.217.0.225:8004/healthcheck\": EOF"
Nov 22 07:48:47 crc kubenswrapper[4853]: I1122 07:48:47.517887 4853 scope.go:117] "RemoveContainer" containerID="5cb43eed56a3ee85131d128b4f0c23114281f10b0f2c794124f82c80367a9968"
Nov 22 07:48:47 crc kubenswrapper[4853]: I1122 07:48:47.549775 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Nov 22 07:48:47 crc kubenswrapper[4853]: I1122 07:48:47.571560 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Nov 22 07:48:47 crc kubenswrapper[4853]: I1122 07:48:47.571816 4853 scope.go:117] "RemoveContainer" containerID="c3bec8b912b07b45c0217eff4ae1d88a1901442f6b68c6ad4e6cdf0a17a44feb"
Nov 22 07:48:47 crc kubenswrapper[4853]: I1122 07:48:47.600630 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-7f8c69f74-g9dcb"]
Nov 22 07:48:47 crc kubenswrapper[4853]: I1122 07:48:47.600931 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-cfnapi-7f8c69f74-g9dcb" podUID="dca4c1fd-7d31-4a9f-a6ad-aa037a5b8126" containerName="heat-cfnapi" containerID="cri-o://fe2c5b79bc2b97b5efa701ddb69ef00a2ba5e2882b0c1b25b88622c9918d85b6" gracePeriod=60
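Both heat pods above are stopped with gracePeriod=60: the kubelet asks the runtime to stop the container and only escalates to SIGKILL if it outlives that deadline. The readiness-probe EOF logged for heat-api-5d747bdcd7-w5l5q during this window is expected, since the server closes its listener while shutting down. As the later ContainerDied events show, heat-cfnapi's container (fe2c5b79…) exited with code 0 at 07:48:48.313309, well inside the grace period. Illustrative arithmetic from the timestamps:

# Sketch: how much of the 60 s grace period heat-cfnapi actually used.
kill_at = 47.600931  # seconds past 07:48, from the "Killing container" entry
died_at = 48.313309  # from the later "container finished" entry (exitCode=0)
print(f"clean shutdown took {died_at - kill_at:.3f}s of a 60s grace period")  # ~0.712s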
Nov 22 07:48:47 crc kubenswrapper[4853]: I1122 07:48:47.612881 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Nov 22 07:48:47 crc kubenswrapper[4853]: E1122 07:48:47.613697 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd18e0e6-0aa8-480b-b279-ac3fa277c8b9" containerName="proxy-httpd"
Nov 22 07:48:47 crc kubenswrapper[4853]: I1122 07:48:47.613717 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd18e0e6-0aa8-480b-b279-ac3fa277c8b9" containerName="proxy-httpd"
Nov 22 07:48:47 crc kubenswrapper[4853]: E1122 07:48:47.613732 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0dd1e1e8-e796-4ad0-96de-526e8b847c61" containerName="dnsmasq-dns"
Nov 22 07:48:47 crc kubenswrapper[4853]: I1122 07:48:47.613740 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="0dd1e1e8-e796-4ad0-96de-526e8b847c61" containerName="dnsmasq-dns"
Nov 22 07:48:47 crc kubenswrapper[4853]: E1122 07:48:47.613779 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd18e0e6-0aa8-480b-b279-ac3fa277c8b9" containerName="sg-core"
Nov 22 07:48:47 crc kubenswrapper[4853]: I1122 07:48:47.613795 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd18e0e6-0aa8-480b-b279-ac3fa277c8b9" containerName="sg-core"
Nov 22 07:48:47 crc kubenswrapper[4853]: E1122 07:48:47.613856 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd18e0e6-0aa8-480b-b279-ac3fa277c8b9" containerName="ceilometer-central-agent"
Nov 22 07:48:47 crc kubenswrapper[4853]: I1122 07:48:47.613870 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd18e0e6-0aa8-480b-b279-ac3fa277c8b9" containerName="ceilometer-central-agent"
Nov 22 07:48:47 crc kubenswrapper[4853]: E1122 07:48:47.613886 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0dd1e1e8-e796-4ad0-96de-526e8b847c61" containerName="init"
Nov 22 07:48:47 crc kubenswrapper[4853]: I1122 07:48:47.613895 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="0dd1e1e8-e796-4ad0-96de-526e8b847c61" containerName="init"
Nov 22 07:48:47 crc kubenswrapper[4853]: E1122 07:48:47.613909 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd18e0e6-0aa8-480b-b279-ac3fa277c8b9" containerName="ceilometer-notification-agent"
Nov 22 07:48:47 crc kubenswrapper[4853]: I1122 07:48:47.613917 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd18e0e6-0aa8-480b-b279-ac3fa277c8b9" containerName="ceilometer-notification-agent"
Nov 22 07:48:47 crc kubenswrapper[4853]: I1122 07:48:47.614211 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd18e0e6-0aa8-480b-b279-ac3fa277c8b9" containerName="ceilometer-notification-agent"
Nov 22 07:48:47 crc kubenswrapper[4853]: I1122 07:48:47.614241 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd18e0e6-0aa8-480b-b279-ac3fa277c8b9" containerName="proxy-httpd"
Nov 22 07:48:47 crc kubenswrapper[4853]: I1122 07:48:47.614260 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd18e0e6-0aa8-480b-b279-ac3fa277c8b9" containerName="sg-core"
Nov 22 07:48:47 crc kubenswrapper[4853]: I1122 07:48:47.614278 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd18e0e6-0aa8-480b-b279-ac3fa277c8b9" containerName="ceilometer-central-agent"
Nov 22 07:48:47 crc kubenswrapper[4853]: I1122 07:48:47.614290 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="0dd1e1e8-e796-4ad0-96de-526e8b847c61" containerName="dnsmasq-dns"
Nov 22 07:48:47 crc kubenswrapper[4853]: I1122 07:48:47.617481 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Nov 22 07:48:47 crc kubenswrapper[4853]: I1122 07:48:47.620788 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Nov 22 07:48:47 crc kubenswrapper[4853]: I1122 07:48:47.621569 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Nov 22 07:48:47 crc kubenswrapper[4853]: I1122 07:48:47.622917 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc"
Nov 22 07:48:47 crc kubenswrapper[4853]: I1122 07:48:47.688145 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-57b59697c4-2frrp"]
Nov 22 07:48:47 crc kubenswrapper[4853]: I1122 07:48:47.688770 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d2c2ec6b-cfbc-4027-84e7-7545a1414ec9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d2c2ec6b-cfbc-4027-84e7-7545a1414ec9\") " pod="openstack/ceilometer-0"
Nov 22 07:48:47 crc kubenswrapper[4853]: I1122 07:48:47.689623 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d2c2ec6b-cfbc-4027-84e7-7545a1414ec9-scripts\") pod \"ceilometer-0\" (UID: \"d2c2ec6b-cfbc-4027-84e7-7545a1414ec9\") " pod="openstack/ceilometer-0"
Nov 22 07:48:47 crc kubenswrapper[4853]: I1122 07:48:47.689718 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2c2ec6b-cfbc-4027-84e7-7545a1414ec9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d2c2ec6b-cfbc-4027-84e7-7545a1414ec9\") " pod="openstack/ceilometer-0"
Nov 22 07:48:47 crc kubenswrapper[4853]: I1122 07:48:47.690097 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cpxfz\" (UniqueName: \"kubernetes.io/projected/d2c2ec6b-cfbc-4027-84e7-7545a1414ec9-kube-api-access-cpxfz\") pod \"ceilometer-0\" (UID: \"d2c2ec6b-cfbc-4027-84e7-7545a1414ec9\") " pod="openstack/ceilometer-0"
Nov 22 07:48:47 crc kubenswrapper[4853]: I1122 07:48:47.690161 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/d2c2ec6b-cfbc-4027-84e7-7545a1414ec9-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"d2c2ec6b-cfbc-4027-84e7-7545a1414ec9\") " pod="openstack/ceilometer-0"
Nov 22 07:48:47 crc kubenswrapper[4853]: I1122 07:48:47.690215 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2c2ec6b-cfbc-4027-84e7-7545a1414ec9-config-data\") pod \"ceilometer-0\" (UID: \"d2c2ec6b-cfbc-4027-84e7-7545a1414ec9\") " pod="openstack/ceilometer-0"
Nov 22 07:48:47 crc kubenswrapper[4853]: I1122 07:48:47.690462 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d2c2ec6b-cfbc-4027-84e7-7545a1414ec9-log-httpd\") pod \"ceilometer-0\" (UID: \"d2c2ec6b-cfbc-4027-84e7-7545a1414ec9\") " pod="openstack/ceilometer-0"
(UniqueName: \"kubernetes.io/empty-dir/d2c2ec6b-cfbc-4027-84e7-7545a1414ec9-log-httpd\") pod \"ceilometer-0\" (UID: \"d2c2ec6b-cfbc-4027-84e7-7545a1414ec9\") " pod="openstack/ceilometer-0" Nov 22 07:48:47 crc kubenswrapper[4853]: I1122 07:48:47.690552 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d2c2ec6b-cfbc-4027-84e7-7545a1414ec9-run-httpd\") pod \"ceilometer-0\" (UID: \"d2c2ec6b-cfbc-4027-84e7-7545a1414ec9\") " pod="openstack/ceilometer-0" Nov 22 07:48:47 crc kubenswrapper[4853]: I1122 07:48:47.690835 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-57b59697c4-2frrp" Nov 22 07:48:47 crc kubenswrapper[4853]: I1122 07:48:47.694654 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-public-svc" Nov 22 07:48:47 crc kubenswrapper[4853]: I1122 07:48:47.694782 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-internal-svc" Nov 22 07:48:47 crc kubenswrapper[4853]: I1122 07:48:47.736706 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 22 07:48:47 crc kubenswrapper[4853]: I1122 07:48:47.770626 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fd18e0e6-0aa8-480b-b279-ac3fa277c8b9" path="/var/lib/kubelet/pods/fd18e0e6-0aa8-480b-b279-ac3fa277c8b9/volumes" Nov 22 07:48:47 crc kubenswrapper[4853]: I1122 07:48:47.772158 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-94775ccf-w92qr"] Nov 22 07:48:47 crc kubenswrapper[4853]: I1122 07:48:47.779140 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-57b59697c4-2frrp"] Nov 22 07:48:47 crc kubenswrapper[4853]: I1122 07:48:47.779289 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-94775ccf-w92qr" Nov 22 07:48:47 crc kubenswrapper[4853]: I1122 07:48:47.783949 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-internal-svc" Nov 22 07:48:47 crc kubenswrapper[4853]: I1122 07:48:47.784234 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-public-svc" Nov 22 07:48:47 crc kubenswrapper[4853]: I1122 07:48:47.785848 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-94775ccf-w92qr"] Nov 22 07:48:47 crc kubenswrapper[4853]: I1122 07:48:47.794269 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2c2ec6b-cfbc-4027-84e7-7545a1414ec9-config-data\") pod \"ceilometer-0\" (UID: \"d2c2ec6b-cfbc-4027-84e7-7545a1414ec9\") " pod="openstack/ceilometer-0" Nov 22 07:48:47 crc kubenswrapper[4853]: I1122 07:48:47.794424 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5a4daab2-d15b-4492-9eea-05a2f6b753ef-internal-tls-certs\") pod \"heat-api-57b59697c4-2frrp\" (UID: \"5a4daab2-d15b-4492-9eea-05a2f6b753ef\") " pod="openstack/heat-api-57b59697c4-2frrp" Nov 22 07:48:47 crc kubenswrapper[4853]: I1122 07:48:47.794537 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d2c2ec6b-cfbc-4027-84e7-7545a1414ec9-log-httpd\") pod \"ceilometer-0\" (UID: \"d2c2ec6b-cfbc-4027-84e7-7545a1414ec9\") " pod="openstack/ceilometer-0" Nov 22 07:48:47 crc kubenswrapper[4853]: I1122 07:48:47.794575 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d2c2ec6b-cfbc-4027-84e7-7545a1414ec9-run-httpd\") pod \"ceilometer-0\" (UID: \"d2c2ec6b-cfbc-4027-84e7-7545a1414ec9\") " pod="openstack/ceilometer-0" Nov 22 07:48:47 crc kubenswrapper[4853]: I1122 07:48:47.794971 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d2c2ec6b-cfbc-4027-84e7-7545a1414ec9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d2c2ec6b-cfbc-4027-84e7-7545a1414ec9\") " pod="openstack/ceilometer-0" Nov 22 07:48:47 crc kubenswrapper[4853]: I1122 07:48:47.795012 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a4daab2-d15b-4492-9eea-05a2f6b753ef-config-data\") pod \"heat-api-57b59697c4-2frrp\" (UID: \"5a4daab2-d15b-4492-9eea-05a2f6b753ef\") " pod="openstack/heat-api-57b59697c4-2frrp" Nov 22 07:48:47 crc kubenswrapper[4853]: I1122 07:48:47.795082 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5a4daab2-d15b-4492-9eea-05a2f6b753ef-config-data-custom\") pod \"heat-api-57b59697c4-2frrp\" (UID: \"5a4daab2-d15b-4492-9eea-05a2f6b753ef\") " pod="openstack/heat-api-57b59697c4-2frrp" Nov 22 07:48:47 crc kubenswrapper[4853]: I1122 07:48:47.795124 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d2c2ec6b-cfbc-4027-84e7-7545a1414ec9-scripts\") pod \"ceilometer-0\" (UID: \"d2c2ec6b-cfbc-4027-84e7-7545a1414ec9\") " pod="openstack/ceilometer-0" Nov 22 07:48:47 crc 
Nov 22 07:48:47 crc kubenswrapper[4853]: I1122 07:48:47.795238 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a4daab2-d15b-4492-9eea-05a2f6b753ef-combined-ca-bundle\") pod \"heat-api-57b59697c4-2frrp\" (UID: \"5a4daab2-d15b-4492-9eea-05a2f6b753ef\") " pod="openstack/heat-api-57b59697c4-2frrp"
Nov 22 07:48:47 crc kubenswrapper[4853]: I1122 07:48:47.795275 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cpxfz\" (UniqueName: \"kubernetes.io/projected/d2c2ec6b-cfbc-4027-84e7-7545a1414ec9-kube-api-access-cpxfz\") pod \"ceilometer-0\" (UID: \"d2c2ec6b-cfbc-4027-84e7-7545a1414ec9\") " pod="openstack/ceilometer-0"
Nov 22 07:48:47 crc kubenswrapper[4853]: I1122 07:48:47.795322 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/d2c2ec6b-cfbc-4027-84e7-7545a1414ec9-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"d2c2ec6b-cfbc-4027-84e7-7545a1414ec9\") " pod="openstack/ceilometer-0"
Nov 22 07:48:47 crc kubenswrapper[4853]: I1122 07:48:47.795363 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5a4daab2-d15b-4492-9eea-05a2f6b753ef-public-tls-certs\") pod \"heat-api-57b59697c4-2frrp\" (UID: \"5a4daab2-d15b-4492-9eea-05a2f6b753ef\") " pod="openstack/heat-api-57b59697c4-2frrp"
Nov 22 07:48:47 crc kubenswrapper[4853]: I1122 07:48:47.795397 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhzrq\" (UniqueName: \"kubernetes.io/projected/5a4daab2-d15b-4492-9eea-05a2f6b753ef-kube-api-access-fhzrq\") pod \"heat-api-57b59697c4-2frrp\" (UID: \"5a4daab2-d15b-4492-9eea-05a2f6b753ef\") " pod="openstack/heat-api-57b59697c4-2frrp"
Nov 22 07:48:47 crc kubenswrapper[4853]: I1122 07:48:47.796373 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d2c2ec6b-cfbc-4027-84e7-7545a1414ec9-log-httpd\") pod \"ceilometer-0\" (UID: \"d2c2ec6b-cfbc-4027-84e7-7545a1414ec9\") " pod="openstack/ceilometer-0"
Nov 22 07:48:47 crc kubenswrapper[4853]: I1122 07:48:47.796793 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d2c2ec6b-cfbc-4027-84e7-7545a1414ec9-run-httpd\") pod \"ceilometer-0\" (UID: \"d2c2ec6b-cfbc-4027-84e7-7545a1414ec9\") " pod="openstack/ceilometer-0"
Nov 22 07:48:47 crc kubenswrapper[4853]: I1122 07:48:47.809275 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d2c2ec6b-cfbc-4027-84e7-7545a1414ec9-scripts\") pod \"ceilometer-0\" (UID: \"d2c2ec6b-cfbc-4027-84e7-7545a1414ec9\") " pod="openstack/ceilometer-0"
Nov 22 07:48:47 crc kubenswrapper[4853]: I1122 07:48:47.813669 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d2c2ec6b-cfbc-4027-84e7-7545a1414ec9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d2c2ec6b-cfbc-4027-84e7-7545a1414ec9\") " pod="openstack/ceilometer-0"
Nov 22 07:48:47 crc kubenswrapper[4853]: I1122 07:48:47.814547 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2c2ec6b-cfbc-4027-84e7-7545a1414ec9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d2c2ec6b-cfbc-4027-84e7-7545a1414ec9\") " pod="openstack/ceilometer-0"
Nov 22 07:48:47 crc kubenswrapper[4853]: I1122 07:48:47.815143 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/d2c2ec6b-cfbc-4027-84e7-7545a1414ec9-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"d2c2ec6b-cfbc-4027-84e7-7545a1414ec9\") " pod="openstack/ceilometer-0"
Nov 22 07:48:47 crc kubenswrapper[4853]: I1122 07:48:47.816141 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2c2ec6b-cfbc-4027-84e7-7545a1414ec9-config-data\") pod \"ceilometer-0\" (UID: \"d2c2ec6b-cfbc-4027-84e7-7545a1414ec9\") " pod="openstack/ceilometer-0"
Nov 22 07:48:47 crc kubenswrapper[4853]: I1122 07:48:47.821635 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cpxfz\" (UniqueName: \"kubernetes.io/projected/d2c2ec6b-cfbc-4027-84e7-7545a1414ec9-kube-api-access-cpxfz\") pod \"ceilometer-0\" (UID: \"d2c2ec6b-cfbc-4027-84e7-7545a1414ec9\") " pod="openstack/ceilometer-0"
Nov 22 07:48:47 crc kubenswrapper[4853]: I1122 07:48:47.895370 4853 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-cfnapi-7f8c69f74-g9dcb" podUID="dca4c1fd-7d31-4a9f-a6ad-aa037a5b8126" containerName="heat-cfnapi" probeResult="failure" output="Get \"http://10.217.0.224:8000/healthcheck\": read tcp 10.217.0.2:35276->10.217.0.224:8000: read: connection reset by peer"
Nov 22 07:48:47 crc kubenswrapper[4853]: I1122 07:48:47.897490 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a4daab2-d15b-4492-9eea-05a2f6b753ef-combined-ca-bundle\") pod \"heat-api-57b59697c4-2frrp\" (UID: \"5a4daab2-d15b-4492-9eea-05a2f6b753ef\") " pod="openstack/heat-api-57b59697c4-2frrp"
Nov 22 07:48:47 crc kubenswrapper[4853]: I1122 07:48:47.897581 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ecb0aa1-3b8f-4b40-88ce-4bf61cb8ef3a-internal-tls-certs\") pod \"heat-cfnapi-94775ccf-w92qr\" (UID: \"8ecb0aa1-3b8f-4b40-88ce-4bf61cb8ef3a\") " pod="openstack/heat-cfnapi-94775ccf-w92qr"
Nov 22 07:48:47 crc kubenswrapper[4853]: I1122 07:48:47.897677 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5a4daab2-d15b-4492-9eea-05a2f6b753ef-public-tls-certs\") pod \"heat-api-57b59697c4-2frrp\" (UID: \"5a4daab2-d15b-4492-9eea-05a2f6b753ef\") " pod="openstack/heat-api-57b59697c4-2frrp"
Nov 22 07:48:47 crc kubenswrapper[4853]: I1122 07:48:47.897712 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ecb0aa1-3b8f-4b40-88ce-4bf61cb8ef3a-config-data\") pod \"heat-cfnapi-94775ccf-w92qr\" (UID: \"8ecb0aa1-3b8f-4b40-88ce-4bf61cb8ef3a\") " pod="openstack/heat-cfnapi-94775ccf-w92qr"
pod="openstack/heat-cfnapi-94775ccf-w92qr" Nov 22 07:48:47 crc kubenswrapper[4853]: I1122 07:48:47.897784 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fhzrq\" (UniqueName: \"kubernetes.io/projected/5a4daab2-d15b-4492-9eea-05a2f6b753ef-kube-api-access-fhzrq\") pod \"heat-api-57b59697c4-2frrp\" (UID: \"5a4daab2-d15b-4492-9eea-05a2f6b753ef\") " pod="openstack/heat-api-57b59697c4-2frrp" Nov 22 07:48:47 crc kubenswrapper[4853]: I1122 07:48:47.897815 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ecb0aa1-3b8f-4b40-88ce-4bf61cb8ef3a-combined-ca-bundle\") pod \"heat-cfnapi-94775ccf-w92qr\" (UID: \"8ecb0aa1-3b8f-4b40-88ce-4bf61cb8ef3a\") " pod="openstack/heat-cfnapi-94775ccf-w92qr" Nov 22 07:48:47 crc kubenswrapper[4853]: I1122 07:48:47.897921 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5a4daab2-d15b-4492-9eea-05a2f6b753ef-internal-tls-certs\") pod \"heat-api-57b59697c4-2frrp\" (UID: \"5a4daab2-d15b-4492-9eea-05a2f6b753ef\") " pod="openstack/heat-api-57b59697c4-2frrp" Nov 22 07:48:47 crc kubenswrapper[4853]: I1122 07:48:47.898022 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dnw84\" (UniqueName: \"kubernetes.io/projected/8ecb0aa1-3b8f-4b40-88ce-4bf61cb8ef3a-kube-api-access-dnw84\") pod \"heat-cfnapi-94775ccf-w92qr\" (UID: \"8ecb0aa1-3b8f-4b40-88ce-4bf61cb8ef3a\") " pod="openstack/heat-cfnapi-94775ccf-w92qr" Nov 22 07:48:47 crc kubenswrapper[4853]: I1122 07:48:47.904795 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ecb0aa1-3b8f-4b40-88ce-4bf61cb8ef3a-public-tls-certs\") pod \"heat-cfnapi-94775ccf-w92qr\" (UID: \"8ecb0aa1-3b8f-4b40-88ce-4bf61cb8ef3a\") " pod="openstack/heat-cfnapi-94775ccf-w92qr" Nov 22 07:48:47 crc kubenswrapper[4853]: I1122 07:48:47.905049 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8ecb0aa1-3b8f-4b40-88ce-4bf61cb8ef3a-config-data-custom\") pod \"heat-cfnapi-94775ccf-w92qr\" (UID: \"8ecb0aa1-3b8f-4b40-88ce-4bf61cb8ef3a\") " pod="openstack/heat-cfnapi-94775ccf-w92qr" Nov 22 07:48:47 crc kubenswrapper[4853]: I1122 07:48:47.905162 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a4daab2-d15b-4492-9eea-05a2f6b753ef-config-data\") pod \"heat-api-57b59697c4-2frrp\" (UID: \"5a4daab2-d15b-4492-9eea-05a2f6b753ef\") " pod="openstack/heat-api-57b59697c4-2frrp" Nov 22 07:48:47 crc kubenswrapper[4853]: I1122 07:48:47.905283 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5a4daab2-d15b-4492-9eea-05a2f6b753ef-config-data-custom\") pod \"heat-api-57b59697c4-2frrp\" (UID: \"5a4daab2-d15b-4492-9eea-05a2f6b753ef\") " pod="openstack/heat-api-57b59697c4-2frrp" Nov 22 07:48:47 crc kubenswrapper[4853]: I1122 07:48:47.910985 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5a4daab2-d15b-4492-9eea-05a2f6b753ef-config-data-custom\") pod \"heat-api-57b59697c4-2frrp\" (UID: 
\"5a4daab2-d15b-4492-9eea-05a2f6b753ef\") " pod="openstack/heat-api-57b59697c4-2frrp" Nov 22 07:48:47 crc kubenswrapper[4853]: I1122 07:48:47.916228 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a4daab2-d15b-4492-9eea-05a2f6b753ef-combined-ca-bundle\") pod \"heat-api-57b59697c4-2frrp\" (UID: \"5a4daab2-d15b-4492-9eea-05a2f6b753ef\") " pod="openstack/heat-api-57b59697c4-2frrp" Nov 22 07:48:47 crc kubenswrapper[4853]: I1122 07:48:47.916300 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5a4daab2-d15b-4492-9eea-05a2f6b753ef-internal-tls-certs\") pod \"heat-api-57b59697c4-2frrp\" (UID: \"5a4daab2-d15b-4492-9eea-05a2f6b753ef\") " pod="openstack/heat-api-57b59697c4-2frrp" Nov 22 07:48:47 crc kubenswrapper[4853]: I1122 07:48:47.919527 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a4daab2-d15b-4492-9eea-05a2f6b753ef-config-data\") pod \"heat-api-57b59697c4-2frrp\" (UID: \"5a4daab2-d15b-4492-9eea-05a2f6b753ef\") " pod="openstack/heat-api-57b59697c4-2frrp" Nov 22 07:48:47 crc kubenswrapper[4853]: I1122 07:48:47.922768 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5a4daab2-d15b-4492-9eea-05a2f6b753ef-public-tls-certs\") pod \"heat-api-57b59697c4-2frrp\" (UID: \"5a4daab2-d15b-4492-9eea-05a2f6b753ef\") " pod="openstack/heat-api-57b59697c4-2frrp" Nov 22 07:48:47 crc kubenswrapper[4853]: I1122 07:48:47.931010 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fhzrq\" (UniqueName: \"kubernetes.io/projected/5a4daab2-d15b-4492-9eea-05a2f6b753ef-kube-api-access-fhzrq\") pod \"heat-api-57b59697c4-2frrp\" (UID: \"5a4daab2-d15b-4492-9eea-05a2f6b753ef\") " pod="openstack/heat-api-57b59697c4-2frrp" Nov 22 07:48:48 crc kubenswrapper[4853]: I1122 07:48:48.008179 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ecb0aa1-3b8f-4b40-88ce-4bf61cb8ef3a-internal-tls-certs\") pod \"heat-cfnapi-94775ccf-w92qr\" (UID: \"8ecb0aa1-3b8f-4b40-88ce-4bf61cb8ef3a\") " pod="openstack/heat-cfnapi-94775ccf-w92qr" Nov 22 07:48:48 crc kubenswrapper[4853]: I1122 07:48:48.008260 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ecb0aa1-3b8f-4b40-88ce-4bf61cb8ef3a-config-data\") pod \"heat-cfnapi-94775ccf-w92qr\" (UID: \"8ecb0aa1-3b8f-4b40-88ce-4bf61cb8ef3a\") " pod="openstack/heat-cfnapi-94775ccf-w92qr" Nov 22 07:48:48 crc kubenswrapper[4853]: I1122 07:48:48.008285 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ecb0aa1-3b8f-4b40-88ce-4bf61cb8ef3a-combined-ca-bundle\") pod \"heat-cfnapi-94775ccf-w92qr\" (UID: \"8ecb0aa1-3b8f-4b40-88ce-4bf61cb8ef3a\") " pod="openstack/heat-cfnapi-94775ccf-w92qr" Nov 22 07:48:48 crc kubenswrapper[4853]: I1122 07:48:48.008329 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dnw84\" (UniqueName: \"kubernetes.io/projected/8ecb0aa1-3b8f-4b40-88ce-4bf61cb8ef3a-kube-api-access-dnw84\") pod \"heat-cfnapi-94775ccf-w92qr\" (UID: \"8ecb0aa1-3b8f-4b40-88ce-4bf61cb8ef3a\") " pod="openstack/heat-cfnapi-94775ccf-w92qr" Nov 22 07:48:48 crc 
Nov 22 07:48:48 crc kubenswrapper[4853]: I1122 07:48:48.008501 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8ecb0aa1-3b8f-4b40-88ce-4bf61cb8ef3a-config-data-custom\") pod \"heat-cfnapi-94775ccf-w92qr\" (UID: \"8ecb0aa1-3b8f-4b40-88ce-4bf61cb8ef3a\") " pod="openstack/heat-cfnapi-94775ccf-w92qr"
Nov 22 07:48:48 crc kubenswrapper[4853]: I1122 07:48:48.014403 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8ecb0aa1-3b8f-4b40-88ce-4bf61cb8ef3a-config-data-custom\") pod \"heat-cfnapi-94775ccf-w92qr\" (UID: \"8ecb0aa1-3b8f-4b40-88ce-4bf61cb8ef3a\") " pod="openstack/heat-cfnapi-94775ccf-w92qr"
Nov 22 07:48:48 crc kubenswrapper[4853]: I1122 07:48:48.015563 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ecb0aa1-3b8f-4b40-88ce-4bf61cb8ef3a-internal-tls-certs\") pod \"heat-cfnapi-94775ccf-w92qr\" (UID: \"8ecb0aa1-3b8f-4b40-88ce-4bf61cb8ef3a\") " pod="openstack/heat-cfnapi-94775ccf-w92qr"
Nov 22 07:48:48 crc kubenswrapper[4853]: I1122 07:48:48.018108 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ecb0aa1-3b8f-4b40-88ce-4bf61cb8ef3a-combined-ca-bundle\") pod \"heat-cfnapi-94775ccf-w92qr\" (UID: \"8ecb0aa1-3b8f-4b40-88ce-4bf61cb8ef3a\") " pod="openstack/heat-cfnapi-94775ccf-w92qr"
Nov 22 07:48:48 crc kubenswrapper[4853]: I1122 07:48:48.025349 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ecb0aa1-3b8f-4b40-88ce-4bf61cb8ef3a-config-data\") pod \"heat-cfnapi-94775ccf-w92qr\" (UID: \"8ecb0aa1-3b8f-4b40-88ce-4bf61cb8ef3a\") " pod="openstack/heat-cfnapi-94775ccf-w92qr"
Nov 22 07:48:48 crc kubenswrapper[4853]: I1122 07:48:48.025412 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ecb0aa1-3b8f-4b40-88ce-4bf61cb8ef3a-public-tls-certs\") pod \"heat-cfnapi-94775ccf-w92qr\" (UID: \"8ecb0aa1-3b8f-4b40-88ce-4bf61cb8ef3a\") " pod="openstack/heat-cfnapi-94775ccf-w92qr"
Nov 22 07:48:48 crc kubenswrapper[4853]: I1122 07:48:48.038732 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dnw84\" (UniqueName: \"kubernetes.io/projected/8ecb0aa1-3b8f-4b40-88ce-4bf61cb8ef3a-kube-api-access-dnw84\") pod \"heat-cfnapi-94775ccf-w92qr\" (UID: \"8ecb0aa1-3b8f-4b40-88ce-4bf61cb8ef3a\") " pod="openstack/heat-cfnapi-94775ccf-w92qr"
Nov 22 07:48:48 crc kubenswrapper[4853]: I1122 07:48:48.086550 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Nov 22 07:48:48 crc kubenswrapper[4853]: I1122 07:48:48.113176 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-57b59697c4-2frrp"
Nov 22 07:48:48 crc kubenswrapper[4853]: I1122 07:48:48.132284 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-94775ccf-w92qr"
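Each volume of the replacement pods passes through the same three stages recorded above: VerifyControllerAttachedVolume (reconciler_common.go:245), MountVolume started (reconciler_common.go:218), and MountVolume.SetUp succeeded (operation_generator.go:637). A small illustrative tally per pod, useful for spotting a volume that never reaches the final stage (not part of the log; assumes this journal format on stdin):

# Sketch: count how many volume events of each mount stage each pod logged.
import re
import sys
from collections import defaultdict

stage_of = [
    ("VerifyControllerAttachedVolume started", "attached"),
    ("operationExecutor.MountVolume started", "mount_started"),
    ("MountVolume.SetUp succeeded", "setup_ok"),
]
pod_re = re.compile(r'pod="([^"]+)"')

counts = defaultdict(lambda: defaultdict(int))
for line in sys.stdin:
    for needle, stage in stage_of:
        if needle in line:
            m = pod_re.search(line)
            if m:
                counts[m.group(1)][stage] += 1
            break

for pod, stages in sorted(counts.items()):
    print(pod, dict(stages))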
Nov 22 07:48:48 crc kubenswrapper[4853]: I1122 07:48:48.305099 4853 generic.go:334] "Generic (PLEG): container finished" podID="e5c88004-5a16-4f0f-bb38-08d44ee6e0fb" containerID="8583b467014d7bd55f24f7f6810f51d002f0f5d5a46067966bccedebdbbf71b1" exitCode=1
Nov 22 07:48:48 crc kubenswrapper[4853]: I1122 07:48:48.305346 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6d68fcb995-t8k7p" event={"ID":"e5c88004-5a16-4f0f-bb38-08d44ee6e0fb","Type":"ContainerDied","Data":"8583b467014d7bd55f24f7f6810f51d002f0f5d5a46067966bccedebdbbf71b1"}
Nov 22 07:48:48 crc kubenswrapper[4853]: I1122 07:48:48.305586 4853 scope.go:117] "RemoveContainer" containerID="dd96aa0768dec1b6a6ae586b75746c2941acc09ba8a4db9ffc5e8f3bdc5fee2c"
Nov 22 07:48:48 crc kubenswrapper[4853]: I1122 07:48:48.306443 4853 scope.go:117] "RemoveContainer" containerID="8583b467014d7bd55f24f7f6810f51d002f0f5d5a46067966bccedebdbbf71b1"
Nov 22 07:48:48 crc kubenswrapper[4853]: E1122 07:48:48.306778 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-6d68fcb995-t8k7p_openstack(e5c88004-5a16-4f0f-bb38-08d44ee6e0fb)\"" pod="openstack/heat-api-6d68fcb995-t8k7p" podUID="e5c88004-5a16-4f0f-bb38-08d44ee6e0fb"
Nov 22 07:48:48 crc kubenswrapper[4853]: I1122 07:48:48.313309 4853 generic.go:334] "Generic (PLEG): container finished" podID="dca4c1fd-7d31-4a9f-a6ad-aa037a5b8126" containerID="fe2c5b79bc2b97b5efa701ddb69ef00a2ba5e2882b0c1b25b88622c9918d85b6" exitCode=0
Nov 22 07:48:48 crc kubenswrapper[4853]: I1122 07:48:48.313382 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-7f8c69f74-g9dcb" event={"ID":"dca4c1fd-7d31-4a9f-a6ad-aa037a5b8126","Type":"ContainerDied","Data":"fe2c5b79bc2b97b5efa701ddb69ef00a2ba5e2882b0c1b25b88622c9918d85b6"}
Nov 22 07:48:48 crc kubenswrapper[4853]: I1122 07:48:48.367479 4853 generic.go:334] "Generic (PLEG): container finished" podID="8a7714a6-22d8-449a-98bd-b145c7a8d19e" containerID="387a8e22dca8f29137324282e9c8d66d6cff5fd42b8778a48a636a4a72083364" exitCode=1
Nov 22 07:48:48 crc kubenswrapper[4853]: I1122 07:48:48.367558 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-c4c8b4969-hxtqh" event={"ID":"8a7714a6-22d8-449a-98bd-b145c7a8d19e","Type":"ContainerDied","Data":"387a8e22dca8f29137324282e9c8d66d6cff5fd42b8778a48a636a4a72083364"}
Nov 22 07:48:48 crc kubenswrapper[4853]: I1122 07:48:48.368602 4853 scope.go:117] "RemoveContainer" containerID="387a8e22dca8f29137324282e9c8d66d6cff5fd42b8778a48a636a4a72083364"
Nov 22 07:48:48 crc kubenswrapper[4853]: E1122 07:48:48.369042 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-c4c8b4969-hxtqh_openstack(8a7714a6-22d8-449a-98bd-b145c7a8d19e)\"" pod="openstack/heat-cfnapi-c4c8b4969-hxtqh" podUID="8a7714a6-22d8-449a-98bd-b145c7a8d19e"
Nov 22 07:48:48 crc kubenswrapper[4853]: I1122 07:48:48.410020 4853 scope.go:117] "RemoveContainer" containerID="77a80dda18c195c967d4a21a7ebbfe9225089cb8bc706ef23e3d8cd93b90cf48"
Nov 22 07:48:48 crc kubenswrapper[4853]: I1122 07:48:48.455087 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-7f8c69f74-g9dcb"
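The CrashLoopBackOff entries above show the kubelet throttling restarts of heat-api and heat-cfnapi. Beyond the initial "back-off 10s" actually recorded here, the growth pattern below reflects kubelet defaults rather than anything in this log: the back-off doubles on each subsequent failure and is capped at five minutes. An illustrative sequence:

# Sketch (assumed kubelet defaults): restart back-off per consecutive failure.
def backoff_seconds(failures, base=10, cap=300):
    return min(base * 2 ** max(failures - 1, 0), cap)

print([backoff_seconds(n) for n in range(1, 8)])  # [10, 20, 40, 80, 160, 300, 300]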
Nov 22 07:48:48 crc kubenswrapper[4853]: I1122 07:48:48.641599 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dca4c1fd-7d31-4a9f-a6ad-aa037a5b8126-config-data\") pod \"dca4c1fd-7d31-4a9f-a6ad-aa037a5b8126\" (UID: \"dca4c1fd-7d31-4a9f-a6ad-aa037a5b8126\") "
Nov 22 07:48:48 crc kubenswrapper[4853]: I1122 07:48:48.642236 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dca4c1fd-7d31-4a9f-a6ad-aa037a5b8126-combined-ca-bundle\") pod \"dca4c1fd-7d31-4a9f-a6ad-aa037a5b8126\" (UID: \"dca4c1fd-7d31-4a9f-a6ad-aa037a5b8126\") "
Nov 22 07:48:48 crc kubenswrapper[4853]: I1122 07:48:48.642362 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zht77\" (UniqueName: \"kubernetes.io/projected/dca4c1fd-7d31-4a9f-a6ad-aa037a5b8126-kube-api-access-zht77\") pod \"dca4c1fd-7d31-4a9f-a6ad-aa037a5b8126\" (UID: \"dca4c1fd-7d31-4a9f-a6ad-aa037a5b8126\") "
Nov 22 07:48:48 crc kubenswrapper[4853]: I1122 07:48:48.642589 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/dca4c1fd-7d31-4a9f-a6ad-aa037a5b8126-config-data-custom\") pod \"dca4c1fd-7d31-4a9f-a6ad-aa037a5b8126\" (UID: \"dca4c1fd-7d31-4a9f-a6ad-aa037a5b8126\") "
Nov 22 07:48:48 crc kubenswrapper[4853]: I1122 07:48:48.651368 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dca4c1fd-7d31-4a9f-a6ad-aa037a5b8126-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "dca4c1fd-7d31-4a9f-a6ad-aa037a5b8126" (UID: "dca4c1fd-7d31-4a9f-a6ad-aa037a5b8126"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 22 07:48:48 crc kubenswrapper[4853]: I1122 07:48:48.654787 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dca4c1fd-7d31-4a9f-a6ad-aa037a5b8126-kube-api-access-zht77" (OuterVolumeSpecName: "kube-api-access-zht77") pod "dca4c1fd-7d31-4a9f-a6ad-aa037a5b8126" (UID: "dca4c1fd-7d31-4a9f-a6ad-aa037a5b8126"). InnerVolumeSpecName "kube-api-access-zht77". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 22 07:48:48 crc kubenswrapper[4853]: I1122 07:48:48.700624 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dca4c1fd-7d31-4a9f-a6ad-aa037a5b8126-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dca4c1fd-7d31-4a9f-a6ad-aa037a5b8126" (UID: "dca4c1fd-7d31-4a9f-a6ad-aa037a5b8126"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 22 07:48:48 crc kubenswrapper[4853]: I1122 07:48:48.747050 4853 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dca4c1fd-7d31-4a9f-a6ad-aa037a5b8126-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 22 07:48:48 crc kubenswrapper[4853]: I1122 07:48:48.747133 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zht77\" (UniqueName: \"kubernetes.io/projected/dca4c1fd-7d31-4a9f-a6ad-aa037a5b8126-kube-api-access-zht77\") on node \"crc\" DevicePath \"\""
Nov 22 07:48:48 crc kubenswrapper[4853]: I1122 07:48:48.747148 4853 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/dca4c1fd-7d31-4a9f-a6ad-aa037a5b8126-config-data-custom\") on node \"crc\" DevicePath \"\""
Nov 22 07:48:48 crc kubenswrapper[4853]: I1122 07:48:48.752548 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dca4c1fd-7d31-4a9f-a6ad-aa037a5b8126-config-data" (OuterVolumeSpecName: "config-data") pod "dca4c1fd-7d31-4a9f-a6ad-aa037a5b8126" (UID: "dca4c1fd-7d31-4a9f-a6ad-aa037a5b8126"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 22 07:48:48 crc kubenswrapper[4853]: I1122 07:48:48.844861 4853 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-api-6d68fcb995-t8k7p"
Nov 22 07:48:48 crc kubenswrapper[4853]: I1122 07:48:48.851099 4853 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dca4c1fd-7d31-4a9f-a6ad-aa037a5b8126-config-data\") on node \"crc\" DevicePath \"\""
Nov 22 07:48:48 crc kubenswrapper[4853]: I1122 07:48:48.868094 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Nov 22 07:48:48 crc kubenswrapper[4853]: I1122 07:48:48.879553 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-94775ccf-w92qr"]
Nov 22 07:48:48 crc kubenswrapper[4853]: I1122 07:48:48.902911 4853 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-cfnapi-c4c8b4969-hxtqh"
Nov 22 07:48:49 crc kubenswrapper[4853]: I1122 07:48:49.075397 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-57b59697c4-2frrp"]
Nov 22 07:48:49 crc kubenswrapper[4853]: I1122 07:48:49.403370 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-94775ccf-w92qr" event={"ID":"8ecb0aa1-3b8f-4b40-88ce-4bf61cb8ef3a","Type":"ContainerStarted","Data":"a0a2240efd83bffb731360afe41c6667be021904b9096fcf4ba2d21449c5b662"}
Nov 22 07:48:49 crc kubenswrapper[4853]: I1122 07:48:49.403852 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-94775ccf-w92qr" event={"ID":"8ecb0aa1-3b8f-4b40-88ce-4bf61cb8ef3a","Type":"ContainerStarted","Data":"c13ae05eb09b76a3627bcd988f1ac698bebb16365a5f90fb06bde132c7ad8ab0"}
Nov 22 07:48:49 crc kubenswrapper[4853]: I1122 07:48:49.403961 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-94775ccf-w92qr"
Nov 22 07:48:49 crc kubenswrapper[4853]: I1122 07:48:49.439331 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-94775ccf-w92qr" podStartSLOduration=2.439263305 podStartE2EDuration="2.439263305s" podCreationTimestamp="2025-11-22 07:48:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:48:49.42834201 +0000 UTC m=+2328.268964636" watchObservedRunningTime="2025-11-22 07:48:49.439263305 +0000 UTC m=+2328.279885931"
UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:48:49.42834201 +0000 UTC m=+2328.268964636" watchObservedRunningTime="2025-11-22 07:48:49.439263305 +0000 UTC m=+2328.279885931" Nov 22 07:48:49 crc kubenswrapper[4853]: I1122 07:48:49.442155 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-7f8c69f74-g9dcb" Nov 22 07:48:49 crc kubenswrapper[4853]: I1122 07:48:49.442155 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-7f8c69f74-g9dcb" event={"ID":"dca4c1fd-7d31-4a9f-a6ad-aa037a5b8126","Type":"ContainerDied","Data":"df1c28c4649743d1ed2e73bfeaa112023d5718c7f5469e5783d87f4a11a2d4a7"} Nov 22 07:48:49 crc kubenswrapper[4853]: I1122 07:48:49.442248 4853 scope.go:117] "RemoveContainer" containerID="fe2c5b79bc2b97b5efa701ddb69ef00a2ba5e2882b0c1b25b88622c9918d85b6" Nov 22 07:48:49 crc kubenswrapper[4853]: I1122 07:48:49.446147 4853 scope.go:117] "RemoveContainer" containerID="8583b467014d7bd55f24f7f6810f51d002f0f5d5a46067966bccedebdbbf71b1" Nov 22 07:48:49 crc kubenswrapper[4853]: E1122 07:48:49.446974 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-6d68fcb995-t8k7p_openstack(e5c88004-5a16-4f0f-bb38-08d44ee6e0fb)\"" pod="openstack/heat-api-6d68fcb995-t8k7p" podUID="e5c88004-5a16-4f0f-bb38-08d44ee6e0fb" Nov 22 07:48:49 crc kubenswrapper[4853]: I1122 07:48:49.457049 4853 scope.go:117] "RemoveContainer" containerID="387a8e22dca8f29137324282e9c8d66d6cff5fd42b8778a48a636a4a72083364" Nov 22 07:48:49 crc kubenswrapper[4853]: E1122 07:48:49.457690 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-c4c8b4969-hxtqh_openstack(8a7714a6-22d8-449a-98bd-b145c7a8d19e)\"" pod="openstack/heat-cfnapi-c4c8b4969-hxtqh" podUID="8a7714a6-22d8-449a-98bd-b145c7a8d19e" Nov 22 07:48:49 crc kubenswrapper[4853]: I1122 07:48:49.462652 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-57b59697c4-2frrp" event={"ID":"5a4daab2-d15b-4492-9eea-05a2f6b753ef","Type":"ContainerStarted","Data":"13fe258265ef315905c6a6d3db3461379141e7b6166ccca2a2182f6017bab6cc"} Nov 22 07:48:49 crc kubenswrapper[4853]: I1122 07:48:49.470972 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d2c2ec6b-cfbc-4027-84e7-7545a1414ec9","Type":"ContainerStarted","Data":"07e78a069b33a6406e3ec206fdd5b715188b917725251b47508b6122b91d7c55"} Nov 22 07:48:49 crc kubenswrapper[4853]: I1122 07:48:49.534361 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-7f8c69f74-g9dcb"] Nov 22 07:48:49 crc kubenswrapper[4853]: I1122 07:48:49.551074 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-7f8c69f74-g9dcb"] Nov 22 07:48:49 crc kubenswrapper[4853]: I1122 07:48:49.771693 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dca4c1fd-7d31-4a9f-a6ad-aa037a5b8126" path="/var/lib/kubelet/pods/dca4c1fd-7d31-4a9f-a6ad-aa037a5b8126/volumes" Nov 22 07:48:50 crc kubenswrapper[4853]: I1122 07:48:50.537428 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-57b59697c4-2frrp" 
event={"ID":"5a4daab2-d15b-4492-9eea-05a2f6b753ef","Type":"ContainerStarted","Data":"77895d132867c7e5e6a8436ef2372ee3f5927332df11fad5474e7068d2d9768e"} Nov 22 07:48:50 crc kubenswrapper[4853]: I1122 07:48:50.545470 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-57b59697c4-2frrp" Nov 22 07:48:50 crc kubenswrapper[4853]: I1122 07:48:50.555827 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d2c2ec6b-cfbc-4027-84e7-7545a1414ec9","Type":"ContainerStarted","Data":"cce5304a03650e16db88c732986fa91fa2ab52e3e5184aca9b2cc9e3e8bb8153"} Nov 22 07:48:50 crc kubenswrapper[4853]: I1122 07:48:50.556082 4853 scope.go:117] "RemoveContainer" containerID="8583b467014d7bd55f24f7f6810f51d002f0f5d5a46067966bccedebdbbf71b1" Nov 22 07:48:50 crc kubenswrapper[4853]: E1122 07:48:50.556476 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-6d68fcb995-t8k7p_openstack(e5c88004-5a16-4f0f-bb38-08d44ee6e0fb)\"" pod="openstack/heat-api-6d68fcb995-t8k7p" podUID="e5c88004-5a16-4f0f-bb38-08d44ee6e0fb" Nov 22 07:48:50 crc kubenswrapper[4853]: I1122 07:48:50.557602 4853 scope.go:117] "RemoveContainer" containerID="387a8e22dca8f29137324282e9c8d66d6cff5fd42b8778a48a636a4a72083364" Nov 22 07:48:50 crc kubenswrapper[4853]: E1122 07:48:50.557989 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-c4c8b4969-hxtqh_openstack(8a7714a6-22d8-449a-98bd-b145c7a8d19e)\"" pod="openstack/heat-cfnapi-c4c8b4969-hxtqh" podUID="8a7714a6-22d8-449a-98bd-b145c7a8d19e" Nov 22 07:48:50 crc kubenswrapper[4853]: I1122 07:48:50.586932 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-57b59697c4-2frrp" podStartSLOduration=3.586905468 podStartE2EDuration="3.586905468s" podCreationTimestamp="2025-11-22 07:48:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:48:50.580911266 +0000 UTC m=+2329.421533892" watchObservedRunningTime="2025-11-22 07:48:50.586905468 +0000 UTC m=+2329.427528094" Nov 22 07:48:51 crc kubenswrapper[4853]: I1122 07:48:51.575266 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d2c2ec6b-cfbc-4027-84e7-7545a1414ec9","Type":"ContainerStarted","Data":"930086d9c8af0a468a3d96096d238105a075f905c3d11cf002d75861f6617858"} Nov 22 07:48:51 crc kubenswrapper[4853]: I1122 07:48:51.989254 4853 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-b4ffw" podUID="863918c2-c760-4c96-888f-a778bcbb018b" containerName="registry-server" probeResult="failure" output=< Nov 22 07:48:51 crc kubenswrapper[4853]: timeout: failed to connect service ":50051" within 1s Nov 22 07:48:51 crc kubenswrapper[4853]: > Nov 22 07:48:52 crc kubenswrapper[4853]: I1122 07:48:52.593241 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d2c2ec6b-cfbc-4027-84e7-7545a1414ec9","Type":"ContainerStarted","Data":"87e45c20392e9c98b991021a08ada968d3ebf76981ce5a66372b73c6b9f4a522"} Nov 22 07:48:53 crc kubenswrapper[4853]: I1122 07:48:53.610401 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"d2c2ec6b-cfbc-4027-84e7-7545a1414ec9","Type":"ContainerStarted","Data":"a1bfe41f63aa3a5c901b7fcef7909d361777e0244345855b57e97542dd394d7d"} Nov 22 07:48:53 crc kubenswrapper[4853]: I1122 07:48:53.611182 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 22 07:48:53 crc kubenswrapper[4853]: I1122 07:48:53.641515 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.706352 podStartE2EDuration="6.641490024s" podCreationTimestamp="2025-11-22 07:48:47 +0000 UTC" firstStartedPulling="2025-11-22 07:48:48.879135678 +0000 UTC m=+2327.719758304" lastFinishedPulling="2025-11-22 07:48:52.814273692 +0000 UTC m=+2331.654896328" observedRunningTime="2025-11-22 07:48:53.637029373 +0000 UTC m=+2332.477652029" watchObservedRunningTime="2025-11-22 07:48:53.641490024 +0000 UTC m=+2332.482112660" Nov 22 07:48:53 crc kubenswrapper[4853]: I1122 07:48:53.790108 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-5b96d96555-h7jqp" Nov 22 07:48:53 crc kubenswrapper[4853]: I1122 07:48:53.882355 4853 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-api-5d747bdcd7-w5l5q" podUID="f55a45f4-7912-4390-b078-7f97a864762d" containerName="heat-api" probeResult="failure" output="Get \"http://10.217.0.225:8004/healthcheck\": read tcp 10.217.0.2:49460->10.217.0.225:8004: read: connection reset by peer" Nov 22 07:48:54 crc kubenswrapper[4853]: I1122 07:48:54.040491 4853 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-api-5d747bdcd7-w5l5q" podUID="f55a45f4-7912-4390-b078-7f97a864762d" containerName="heat-api" probeResult="failure" output="Get \"http://10.217.0.225:8004/healthcheck\": dial tcp 10.217.0.225:8004: connect: connection refused" Nov 22 07:48:54 crc kubenswrapper[4853]: I1122 07:48:54.560408 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-5d747bdcd7-w5l5q" Nov 22 07:48:54 crc kubenswrapper[4853]: I1122 07:48:54.644471 4853 generic.go:334] "Generic (PLEG): container finished" podID="f55a45f4-7912-4390-b078-7f97a864762d" containerID="294afdbff8ed4a3f4a4afc09aba032fa21d5cf6b51ce8749a8fb3442d53e90d9" exitCode=0 Nov 22 07:48:54 crc kubenswrapper[4853]: I1122 07:48:54.644558 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-5d747bdcd7-w5l5q" Nov 22 07:48:54 crc kubenswrapper[4853]: I1122 07:48:54.644590 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-5d747bdcd7-w5l5q" event={"ID":"f55a45f4-7912-4390-b078-7f97a864762d","Type":"ContainerDied","Data":"294afdbff8ed4a3f4a4afc09aba032fa21d5cf6b51ce8749a8fb3442d53e90d9"} Nov 22 07:48:54 crc kubenswrapper[4853]: I1122 07:48:54.645589 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-5d747bdcd7-w5l5q" event={"ID":"f55a45f4-7912-4390-b078-7f97a864762d","Type":"ContainerDied","Data":"be2684db53a5027d357ed7a6731428e712205ee9c05614aefa55411b0dfc52f7"} Nov 22 07:48:54 crc kubenswrapper[4853]: I1122 07:48:54.645695 4853 scope.go:117] "RemoveContainer" containerID="294afdbff8ed4a3f4a4afc09aba032fa21d5cf6b51ce8749a8fb3442d53e90d9" Nov 22 07:48:54 crc kubenswrapper[4853]: I1122 07:48:54.661150 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f55a45f4-7912-4390-b078-7f97a864762d-config-data\") pod \"f55a45f4-7912-4390-b078-7f97a864762d\" (UID: \"f55a45f4-7912-4390-b078-7f97a864762d\") " Nov 22 07:48:54 crc kubenswrapper[4853]: I1122 07:48:54.661263 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f55a45f4-7912-4390-b078-7f97a864762d-combined-ca-bundle\") pod \"f55a45f4-7912-4390-b078-7f97a864762d\" (UID: \"f55a45f4-7912-4390-b078-7f97a864762d\") " Nov 22 07:48:54 crc kubenswrapper[4853]: I1122 07:48:54.662555 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k9mc2\" (UniqueName: \"kubernetes.io/projected/f55a45f4-7912-4390-b078-7f97a864762d-kube-api-access-k9mc2\") pod \"f55a45f4-7912-4390-b078-7f97a864762d\" (UID: \"f55a45f4-7912-4390-b078-7f97a864762d\") " Nov 22 07:48:54 crc kubenswrapper[4853]: I1122 07:48:54.682491 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f55a45f4-7912-4390-b078-7f97a864762d-kube-api-access-k9mc2" (OuterVolumeSpecName: "kube-api-access-k9mc2") pod "f55a45f4-7912-4390-b078-7f97a864762d" (UID: "f55a45f4-7912-4390-b078-7f97a864762d"). InnerVolumeSpecName "kube-api-access-k9mc2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:48:54 crc kubenswrapper[4853]: I1122 07:48:54.705319 4853 scope.go:117] "RemoveContainer" containerID="294afdbff8ed4a3f4a4afc09aba032fa21d5cf6b51ce8749a8fb3442d53e90d9" Nov 22 07:48:54 crc kubenswrapper[4853]: E1122 07:48:54.714151 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"294afdbff8ed4a3f4a4afc09aba032fa21d5cf6b51ce8749a8fb3442d53e90d9\": container with ID starting with 294afdbff8ed4a3f4a4afc09aba032fa21d5cf6b51ce8749a8fb3442d53e90d9 not found: ID does not exist" containerID="294afdbff8ed4a3f4a4afc09aba032fa21d5cf6b51ce8749a8fb3442d53e90d9" Nov 22 07:48:54 crc kubenswrapper[4853]: I1122 07:48:54.714247 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"294afdbff8ed4a3f4a4afc09aba032fa21d5cf6b51ce8749a8fb3442d53e90d9"} err="failed to get container status \"294afdbff8ed4a3f4a4afc09aba032fa21d5cf6b51ce8749a8fb3442d53e90d9\": rpc error: code = NotFound desc = could not find container \"294afdbff8ed4a3f4a4afc09aba032fa21d5cf6b51ce8749a8fb3442d53e90d9\": container with ID starting with 294afdbff8ed4a3f4a4afc09aba032fa21d5cf6b51ce8749a8fb3442d53e90d9 not found: ID does not exist" Nov 22 07:48:54 crc kubenswrapper[4853]: I1122 07:48:54.738551 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f55a45f4-7912-4390-b078-7f97a864762d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f55a45f4-7912-4390-b078-7f97a864762d" (UID: "f55a45f4-7912-4390-b078-7f97a864762d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:48:54 crc kubenswrapper[4853]: I1122 07:48:54.764922 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f55a45f4-7912-4390-b078-7f97a864762d-config-data-custom\") pod \"f55a45f4-7912-4390-b078-7f97a864762d\" (UID: \"f55a45f4-7912-4390-b078-7f97a864762d\") " Nov 22 07:48:54 crc kubenswrapper[4853]: I1122 07:48:54.765447 4853 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f55a45f4-7912-4390-b078-7f97a864762d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:48:54 crc kubenswrapper[4853]: I1122 07:48:54.765472 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k9mc2\" (UniqueName: \"kubernetes.io/projected/f55a45f4-7912-4390-b078-7f97a864762d-kube-api-access-k9mc2\") on node \"crc\" DevicePath \"\"" Nov 22 07:48:54 crc kubenswrapper[4853]: I1122 07:48:54.775541 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f55a45f4-7912-4390-b078-7f97a864762d-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "f55a45f4-7912-4390-b078-7f97a864762d" (UID: "f55a45f4-7912-4390-b078-7f97a864762d"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:48:54 crc kubenswrapper[4853]: I1122 07:48:54.868204 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f55a45f4-7912-4390-b078-7f97a864762d-config-data" (OuterVolumeSpecName: "config-data") pod "f55a45f4-7912-4390-b078-7f97a864762d" (UID: "f55a45f4-7912-4390-b078-7f97a864762d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:48:54 crc kubenswrapper[4853]: I1122 07:48:54.869222 4853 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f55a45f4-7912-4390-b078-7f97a864762d-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 07:48:54 crc kubenswrapper[4853]: I1122 07:48:54.869249 4853 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f55a45f4-7912-4390-b078-7f97a864762d-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 22 07:48:55 crc kubenswrapper[4853]: I1122 07:48:55.069213 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-5d747bdcd7-w5l5q"] Nov 22 07:48:55 crc kubenswrapper[4853]: I1122 07:48:55.100851 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-5d747bdcd7-w5l5q"] Nov 22 07:48:55 crc kubenswrapper[4853]: I1122 07:48:55.762727 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f55a45f4-7912-4390-b078-7f97a864762d" path="/var/lib/kubelet/pods/f55a45f4-7912-4390-b078-7f97a864762d/volumes" Nov 22 07:48:58 crc kubenswrapper[4853]: I1122 07:48:58.700393 4853 generic.go:334] "Generic (PLEG): container finished" podID="c7bb7e8f-c36e-4027-b953-384bff85680b" containerID="a00fb8d47d57f5167eb191ed1e61f773c885900c90935674ac55ac783b8af83d" exitCode=0 Nov 22 07:48:58 crc kubenswrapper[4853]: I1122 07:48:58.700478 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-c5tjs" event={"ID":"c7bb7e8f-c36e-4027-b953-384bff85680b","Type":"ContainerDied","Data":"a00fb8d47d57f5167eb191ed1e61f773c885900c90935674ac55ac783b8af83d"} Nov 22 07:49:00 crc kubenswrapper[4853]: I1122 07:49:00.304956 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-c5tjs" Nov 22 07:49:00 crc kubenswrapper[4853]: I1122 07:49:00.451330 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7bb7e8f-c36e-4027-b953-384bff85680b-config-data\") pod \"c7bb7e8f-c36e-4027-b953-384bff85680b\" (UID: \"c7bb7e8f-c36e-4027-b953-384bff85680b\") " Nov 22 07:49:00 crc kubenswrapper[4853]: I1122 07:49:00.452115 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7bb7e8f-c36e-4027-b953-384bff85680b-combined-ca-bundle\") pod \"c7bb7e8f-c36e-4027-b953-384bff85680b\" (UID: \"c7bb7e8f-c36e-4027-b953-384bff85680b\") " Nov 22 07:49:00 crc kubenswrapper[4853]: I1122 07:49:00.452203 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tzrbd\" (UniqueName: \"kubernetes.io/projected/c7bb7e8f-c36e-4027-b953-384bff85680b-kube-api-access-tzrbd\") pod \"c7bb7e8f-c36e-4027-b953-384bff85680b\" (UID: \"c7bb7e8f-c36e-4027-b953-384bff85680b\") " Nov 22 07:49:00 crc kubenswrapper[4853]: I1122 07:49:00.452272 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c7bb7e8f-c36e-4027-b953-384bff85680b-scripts\") pod \"c7bb7e8f-c36e-4027-b953-384bff85680b\" (UID: \"c7bb7e8f-c36e-4027-b953-384bff85680b\") " Nov 22 07:49:00 crc kubenswrapper[4853]: I1122 07:49:00.468172 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7bb7e8f-c36e-4027-b953-384bff85680b-kube-api-access-tzrbd" (OuterVolumeSpecName: "kube-api-access-tzrbd") pod "c7bb7e8f-c36e-4027-b953-384bff85680b" (UID: "c7bb7e8f-c36e-4027-b953-384bff85680b"). InnerVolumeSpecName "kube-api-access-tzrbd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:49:00 crc kubenswrapper[4853]: I1122 07:49:00.470271 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7bb7e8f-c36e-4027-b953-384bff85680b-scripts" (OuterVolumeSpecName: "scripts") pod "c7bb7e8f-c36e-4027-b953-384bff85680b" (UID: "c7bb7e8f-c36e-4027-b953-384bff85680b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:49:00 crc kubenswrapper[4853]: I1122 07:49:00.502002 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7bb7e8f-c36e-4027-b953-384bff85680b-config-data" (OuterVolumeSpecName: "config-data") pod "c7bb7e8f-c36e-4027-b953-384bff85680b" (UID: "c7bb7e8f-c36e-4027-b953-384bff85680b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:49:00 crc kubenswrapper[4853]: I1122 07:49:00.506174 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7bb7e8f-c36e-4027-b953-384bff85680b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c7bb7e8f-c36e-4027-b953-384bff85680b" (UID: "c7bb7e8f-c36e-4027-b953-384bff85680b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:49:00 crc kubenswrapper[4853]: I1122 07:49:00.555816 4853 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7bb7e8f-c36e-4027-b953-384bff85680b-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:00 crc kubenswrapper[4853]: I1122 07:49:00.555870 4853 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7bb7e8f-c36e-4027-b953-384bff85680b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:00 crc kubenswrapper[4853]: I1122 07:49:00.555887 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tzrbd\" (UniqueName: \"kubernetes.io/projected/c7bb7e8f-c36e-4027-b953-384bff85680b-kube-api-access-tzrbd\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:00 crc kubenswrapper[4853]: I1122 07:49:00.555902 4853 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c7bb7e8f-c36e-4027-b953-384bff85680b-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:00 crc kubenswrapper[4853]: I1122 07:49:00.739846 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-c5tjs" event={"ID":"c7bb7e8f-c36e-4027-b953-384bff85680b","Type":"ContainerDied","Data":"c28cff82e8010169a4d0376216a3c1fa1484dcf808c6010639cc3a6910e8b2d5"} Nov 22 07:49:00 crc kubenswrapper[4853]: I1122 07:49:00.739919 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c28cff82e8010169a4d0376216a3c1fa1484dcf808c6010639cc3a6910e8b2d5" Nov 22 07:49:00 crc kubenswrapper[4853]: I1122 07:49:00.740059 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-c5tjs" Nov 22 07:49:00 crc kubenswrapper[4853]: I1122 07:49:00.983113 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 22 07:49:00 crc kubenswrapper[4853]: E1122 07:49:00.983918 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dca4c1fd-7d31-4a9f-a6ad-aa037a5b8126" containerName="heat-cfnapi" Nov 22 07:49:00 crc kubenswrapper[4853]: I1122 07:49:00.983945 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="dca4c1fd-7d31-4a9f-a6ad-aa037a5b8126" containerName="heat-cfnapi" Nov 22 07:49:00 crc kubenswrapper[4853]: E1122 07:49:00.983982 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f55a45f4-7912-4390-b078-7f97a864762d" containerName="heat-api" Nov 22 07:49:00 crc kubenswrapper[4853]: I1122 07:49:00.983992 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="f55a45f4-7912-4390-b078-7f97a864762d" containerName="heat-api" Nov 22 07:49:00 crc kubenswrapper[4853]: E1122 07:49:00.984032 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7bb7e8f-c36e-4027-b953-384bff85680b" containerName="nova-cell0-conductor-db-sync" Nov 22 07:49:00 crc kubenswrapper[4853]: I1122 07:49:00.984042 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7bb7e8f-c36e-4027-b953-384bff85680b" containerName="nova-cell0-conductor-db-sync" Nov 22 07:49:00 crc kubenswrapper[4853]: I1122 07:49:00.984291 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7bb7e8f-c36e-4027-b953-384bff85680b" containerName="nova-cell0-conductor-db-sync" Nov 22 07:49:00 crc kubenswrapper[4853]: I1122 07:49:00.984316 4853 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="f55a45f4-7912-4390-b078-7f97a864762d" containerName="heat-api" Nov 22 07:49:00 crc kubenswrapper[4853]: I1122 07:49:00.984338 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="dca4c1fd-7d31-4a9f-a6ad-aa037a5b8126" containerName="heat-cfnapi" Nov 22 07:49:00 crc kubenswrapper[4853]: I1122 07:49:00.985418 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Nov 22 07:49:00 crc kubenswrapper[4853]: I1122 07:49:00.989800 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-qtghl" Nov 22 07:49:00 crc kubenswrapper[4853]: I1122 07:49:00.994386 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Nov 22 07:49:01 crc kubenswrapper[4853]: I1122 07:49:01.006168 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 22 07:49:01 crc kubenswrapper[4853]: I1122 07:49:01.020207 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrrnb\" (UniqueName: \"kubernetes.io/projected/df14bfb5-652f-4e60-a709-e3ed7348d00a-kube-api-access-wrrnb\") pod \"nova-cell0-conductor-0\" (UID: \"df14bfb5-652f-4e60-a709-e3ed7348d00a\") " pod="openstack/nova-cell0-conductor-0" Nov 22 07:49:01 crc kubenswrapper[4853]: I1122 07:49:01.020312 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df14bfb5-652f-4e60-a709-e3ed7348d00a-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"df14bfb5-652f-4e60-a709-e3ed7348d00a\") " pod="openstack/nova-cell0-conductor-0" Nov 22 07:49:01 crc kubenswrapper[4853]: I1122 07:49:01.020357 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df14bfb5-652f-4e60-a709-e3ed7348d00a-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"df14bfb5-652f-4e60-a709-e3ed7348d00a\") " pod="openstack/nova-cell0-conductor-0" Nov 22 07:49:01 crc kubenswrapper[4853]: I1122 07:49:01.122854 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wrrnb\" (UniqueName: \"kubernetes.io/projected/df14bfb5-652f-4e60-a709-e3ed7348d00a-kube-api-access-wrrnb\") pod \"nova-cell0-conductor-0\" (UID: \"df14bfb5-652f-4e60-a709-e3ed7348d00a\") " pod="openstack/nova-cell0-conductor-0" Nov 22 07:49:01 crc kubenswrapper[4853]: I1122 07:49:01.123053 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df14bfb5-652f-4e60-a709-e3ed7348d00a-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"df14bfb5-652f-4e60-a709-e3ed7348d00a\") " pod="openstack/nova-cell0-conductor-0" Nov 22 07:49:01 crc kubenswrapper[4853]: I1122 07:49:01.123135 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df14bfb5-652f-4e60-a709-e3ed7348d00a-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"df14bfb5-652f-4e60-a709-e3ed7348d00a\") " pod="openstack/nova-cell0-conductor-0" Nov 22 07:49:01 crc kubenswrapper[4853]: I1122 07:49:01.129617 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df14bfb5-652f-4e60-a709-e3ed7348d00a-config-data\") pod 
\"nova-cell0-conductor-0\" (UID: \"df14bfb5-652f-4e60-a709-e3ed7348d00a\") " pod="openstack/nova-cell0-conductor-0" Nov 22 07:49:01 crc kubenswrapper[4853]: I1122 07:49:01.130782 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df14bfb5-652f-4e60-a709-e3ed7348d00a-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"df14bfb5-652f-4e60-a709-e3ed7348d00a\") " pod="openstack/nova-cell0-conductor-0" Nov 22 07:49:01 crc kubenswrapper[4853]: I1122 07:49:01.166503 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wrrnb\" (UniqueName: \"kubernetes.io/projected/df14bfb5-652f-4e60-a709-e3ed7348d00a-kube-api-access-wrrnb\") pod \"nova-cell0-conductor-0\" (UID: \"df14bfb5-652f-4e60-a709-e3ed7348d00a\") " pod="openstack/nova-cell0-conductor-0" Nov 22 07:49:01 crc kubenswrapper[4853]: I1122 07:49:01.341856 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Nov 22 07:49:01 crc kubenswrapper[4853]: I1122 07:49:01.850796 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-cfnapi-94775ccf-w92qr" Nov 22 07:49:01 crc kubenswrapper[4853]: I1122 07:49:01.950383 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-c4c8b4969-hxtqh"] Nov 22 07:49:01 crc kubenswrapper[4853]: I1122 07:49:01.978618 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-api-57b59697c4-2frrp" Nov 22 07:49:02 crc kubenswrapper[4853]: I1122 07:49:02.030121 4853 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-b4ffw" podUID="863918c2-c760-4c96-888f-a778bcbb018b" containerName="registry-server" probeResult="failure" output=< Nov 22 07:49:02 crc kubenswrapper[4853]: timeout: failed to connect service ":50051" within 1s Nov 22 07:49:02 crc kubenswrapper[4853]: > Nov 22 07:49:02 crc kubenswrapper[4853]: I1122 07:49:02.090588 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-6d68fcb995-t8k7p"] Nov 22 07:49:02 crc kubenswrapper[4853]: I1122 07:49:02.115062 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 22 07:49:02 crc kubenswrapper[4853]: I1122 07:49:02.769654 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-c4c8b4969-hxtqh" Nov 22 07:49:02 crc kubenswrapper[4853]: I1122 07:49:02.792546 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-6d68fcb995-t8k7p" Nov 22 07:49:02 crc kubenswrapper[4853]: I1122 07:49:02.809138 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a7714a6-22d8-449a-98bd-b145c7a8d19e-config-data\") pod \"8a7714a6-22d8-449a-98bd-b145c7a8d19e\" (UID: \"8a7714a6-22d8-449a-98bd-b145c7a8d19e\") " Nov 22 07:49:02 crc kubenswrapper[4853]: I1122 07:49:02.809214 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a7714a6-22d8-449a-98bd-b145c7a8d19e-combined-ca-bundle\") pod \"8a7714a6-22d8-449a-98bd-b145c7a8d19e\" (UID: \"8a7714a6-22d8-449a-98bd-b145c7a8d19e\") " Nov 22 07:49:02 crc kubenswrapper[4853]: I1122 07:49:02.810549 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8a7714a6-22d8-449a-98bd-b145c7a8d19e-config-data-custom\") pod \"8a7714a6-22d8-449a-98bd-b145c7a8d19e\" (UID: \"8a7714a6-22d8-449a-98bd-b145c7a8d19e\") " Nov 22 07:49:02 crc kubenswrapper[4853]: I1122 07:49:02.810650 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjs4v\" (UniqueName: \"kubernetes.io/projected/8a7714a6-22d8-449a-98bd-b145c7a8d19e-kube-api-access-pjs4v\") pod \"8a7714a6-22d8-449a-98bd-b145c7a8d19e\" (UID: \"8a7714a6-22d8-449a-98bd-b145c7a8d19e\") " Nov 22 07:49:02 crc kubenswrapper[4853]: I1122 07:49:02.820260 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-6d68fcb995-t8k7p" Nov 22 07:49:02 crc kubenswrapper[4853]: I1122 07:49:02.820457 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a7714a6-22d8-449a-98bd-b145c7a8d19e-kube-api-access-pjs4v" (OuterVolumeSpecName: "kube-api-access-pjs4v") pod "8a7714a6-22d8-449a-98bd-b145c7a8d19e" (UID: "8a7714a6-22d8-449a-98bd-b145c7a8d19e"). InnerVolumeSpecName "kube-api-access-pjs4v". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:49:02 crc kubenswrapper[4853]: I1122 07:49:02.820558 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6d68fcb995-t8k7p" event={"ID":"e5c88004-5a16-4f0f-bb38-08d44ee6e0fb","Type":"ContainerDied","Data":"38b03e297ec0c1810bd330b357392d050a218e2aa78063a883fff5d326698ab2"} Nov 22 07:49:02 crc kubenswrapper[4853]: I1122 07:49:02.820625 4853 scope.go:117] "RemoveContainer" containerID="8583b467014d7bd55f24f7f6810f51d002f0f5d5a46067966bccedebdbbf71b1" Nov 22 07:49:02 crc kubenswrapper[4853]: I1122 07:49:02.834928 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a7714a6-22d8-449a-98bd-b145c7a8d19e-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "8a7714a6-22d8-449a-98bd-b145c7a8d19e" (UID: "8a7714a6-22d8-449a-98bd-b145c7a8d19e"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:49:02 crc kubenswrapper[4853]: I1122 07:49:02.841324 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-c4c8b4969-hxtqh" event={"ID":"8a7714a6-22d8-449a-98bd-b145c7a8d19e","Type":"ContainerDied","Data":"5e52a87e14445f006587fcc2a9a8ffdb168435bbe7aac1d7de3e30ec66f58e94"} Nov 22 07:49:02 crc kubenswrapper[4853]: I1122 07:49:02.841666 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-c4c8b4969-hxtqh" Nov 22 07:49:02 crc kubenswrapper[4853]: I1122 07:49:02.851134 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"df14bfb5-652f-4e60-a709-e3ed7348d00a","Type":"ContainerStarted","Data":"dc710994837cfb60b29ae2d2d75f810962975614729cd3fc2ed54dd1067f34ef"} Nov 22 07:49:02 crc kubenswrapper[4853]: I1122 07:49:02.851216 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"df14bfb5-652f-4e60-a709-e3ed7348d00a","Type":"ContainerStarted","Data":"9d4664a2cf9511eb48195a674b303a4560f45846df8891a5b73a804170cbb0af"} Nov 22 07:49:02 crc kubenswrapper[4853]: I1122 07:49:02.855204 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Nov 22 07:49:02 crc kubenswrapper[4853]: I1122 07:49:02.877766 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.877721176 podStartE2EDuration="2.877721176s" podCreationTimestamp="2025-11-22 07:49:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:49:02.876264227 +0000 UTC m=+2341.716886863" watchObservedRunningTime="2025-11-22 07:49:02.877721176 +0000 UTC m=+2341.718343802" Nov 22 07:49:02 crc kubenswrapper[4853]: I1122 07:49:02.885556 4853 scope.go:117] "RemoveContainer" containerID="387a8e22dca8f29137324282e9c8d66d6cff5fd42b8778a48a636a4a72083364" Nov 22 07:49:02 crc kubenswrapper[4853]: I1122 07:49:02.914875 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e5c88004-5a16-4f0f-bb38-08d44ee6e0fb-config-data-custom\") pod \"e5c88004-5a16-4f0f-bb38-08d44ee6e0fb\" (UID: \"e5c88004-5a16-4f0f-bb38-08d44ee6e0fb\") " Nov 22 07:49:02 crc kubenswrapper[4853]: I1122 07:49:02.915190 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5c88004-5a16-4f0f-bb38-08d44ee6e0fb-combined-ca-bundle\") pod \"e5c88004-5a16-4f0f-bb38-08d44ee6e0fb\" (UID: \"e5c88004-5a16-4f0f-bb38-08d44ee6e0fb\") " Nov 22 07:49:02 crc kubenswrapper[4853]: I1122 07:49:02.915250 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5c88004-5a16-4f0f-bb38-08d44ee6e0fb-config-data\") pod \"e5c88004-5a16-4f0f-bb38-08d44ee6e0fb\" (UID: \"e5c88004-5a16-4f0f-bb38-08d44ee6e0fb\") " Nov 22 07:49:02 crc kubenswrapper[4853]: I1122 07:49:02.915330 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6jkzh\" (UniqueName: \"kubernetes.io/projected/e5c88004-5a16-4f0f-bb38-08d44ee6e0fb-kube-api-access-6jkzh\") pod \"e5c88004-5a16-4f0f-bb38-08d44ee6e0fb\" (UID: \"e5c88004-5a16-4f0f-bb38-08d44ee6e0fb\") " Nov 22 07:49:02 crc kubenswrapper[4853]: I1122 07:49:02.916331 4853 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8a7714a6-22d8-449a-98bd-b145c7a8d19e-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:02 crc kubenswrapper[4853]: I1122 07:49:02.916360 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjs4v\" (UniqueName: \"kubernetes.io/projected/8a7714a6-22d8-449a-98bd-b145c7a8d19e-kube-api-access-pjs4v\") 
on node \"crc\" DevicePath \"\"" Nov 22 07:49:02 crc kubenswrapper[4853]: I1122 07:49:02.922078 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a7714a6-22d8-449a-98bd-b145c7a8d19e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8a7714a6-22d8-449a-98bd-b145c7a8d19e" (UID: "8a7714a6-22d8-449a-98bd-b145c7a8d19e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:49:02 crc kubenswrapper[4853]: I1122 07:49:02.937632 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5c88004-5a16-4f0f-bb38-08d44ee6e0fb-kube-api-access-6jkzh" (OuterVolumeSpecName: "kube-api-access-6jkzh") pod "e5c88004-5a16-4f0f-bb38-08d44ee6e0fb" (UID: "e5c88004-5a16-4f0f-bb38-08d44ee6e0fb"). InnerVolumeSpecName "kube-api-access-6jkzh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:49:02 crc kubenswrapper[4853]: I1122 07:49:02.937940 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5c88004-5a16-4f0f-bb38-08d44ee6e0fb-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "e5c88004-5a16-4f0f-bb38-08d44ee6e0fb" (UID: "e5c88004-5a16-4f0f-bb38-08d44ee6e0fb"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:49:02 crc kubenswrapper[4853]: I1122 07:49:02.952393 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5c88004-5a16-4f0f-bb38-08d44ee6e0fb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e5c88004-5a16-4f0f-bb38-08d44ee6e0fb" (UID: "e5c88004-5a16-4f0f-bb38-08d44ee6e0fb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:49:02 crc kubenswrapper[4853]: I1122 07:49:02.982351 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a7714a6-22d8-449a-98bd-b145c7a8d19e-config-data" (OuterVolumeSpecName: "config-data") pod "8a7714a6-22d8-449a-98bd-b145c7a8d19e" (UID: "8a7714a6-22d8-449a-98bd-b145c7a8d19e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:49:03 crc kubenswrapper[4853]: I1122 07:49:03.024650 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5c88004-5a16-4f0f-bb38-08d44ee6e0fb-config-data" (OuterVolumeSpecName: "config-data") pod "e5c88004-5a16-4f0f-bb38-08d44ee6e0fb" (UID: "e5c88004-5a16-4f0f-bb38-08d44ee6e0fb"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:49:03 crc kubenswrapper[4853]: I1122 07:49:03.025444 4853 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5c88004-5a16-4f0f-bb38-08d44ee6e0fb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:03 crc kubenswrapper[4853]: I1122 07:49:03.025628 4853 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5c88004-5a16-4f0f-bb38-08d44ee6e0fb-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:03 crc kubenswrapper[4853]: I1122 07:49:03.025812 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6jkzh\" (UniqueName: \"kubernetes.io/projected/e5c88004-5a16-4f0f-bb38-08d44ee6e0fb-kube-api-access-6jkzh\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:03 crc kubenswrapper[4853]: I1122 07:49:03.025882 4853 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a7714a6-22d8-449a-98bd-b145c7a8d19e-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:03 crc kubenswrapper[4853]: I1122 07:49:03.025955 4853 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a7714a6-22d8-449a-98bd-b145c7a8d19e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:03 crc kubenswrapper[4853]: I1122 07:49:03.026008 4853 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e5c88004-5a16-4f0f-bb38-08d44ee6e0fb-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:03 crc kubenswrapper[4853]: I1122 07:49:03.475137 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-6d68fcb995-t8k7p"] Nov 22 07:49:03 crc kubenswrapper[4853]: I1122 07:49:03.534649 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-6d68fcb995-t8k7p"] Nov 22 07:49:03 crc kubenswrapper[4853]: I1122 07:49:03.569815 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-c4c8b4969-hxtqh"] Nov 22 07:49:03 crc kubenswrapper[4853]: I1122 07:49:03.581926 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-c4c8b4969-hxtqh"] Nov 22 07:49:03 crc kubenswrapper[4853]: I1122 07:49:03.775966 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8a7714a6-22d8-449a-98bd-b145c7a8d19e" path="/var/lib/kubelet/pods/8a7714a6-22d8-449a-98bd-b145c7a8d19e/volumes" Nov 22 07:49:03 crc kubenswrapper[4853]: I1122 07:49:03.776863 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e5c88004-5a16-4f0f-bb38-08d44ee6e0fb" path="/var/lib/kubelet/pods/e5c88004-5a16-4f0f-bb38-08d44ee6e0fb/volumes" Nov 22 07:49:03 crc kubenswrapper[4853]: I1122 07:49:03.828534 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-846454b756-2r7vp" Nov 22 07:49:03 crc kubenswrapper[4853]: I1122 07:49:03.904166 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-5b96d96555-h7jqp"] Nov 22 07:49:03 crc kubenswrapper[4853]: I1122 07:49:03.904467 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-engine-5b96d96555-h7jqp" podUID="9f019708-ddfa-465c-850a-7b13a20a87f2" containerName="heat-engine" containerID="cri-o://2e26072ba72281cc34fd01c470e7381419acf03b04f60311406a45f61202e917" gracePeriod=60 Nov 22 07:49:04 crc 
kubenswrapper[4853]: I1122 07:49:04.689167 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 22 07:49:04 crc kubenswrapper[4853]: I1122 07:49:04.690137 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d2c2ec6b-cfbc-4027-84e7-7545a1414ec9" containerName="ceilometer-central-agent" containerID="cri-o://cce5304a03650e16db88c732986fa91fa2ab52e3e5184aca9b2cc9e3e8bb8153" gracePeriod=30 Nov 22 07:49:04 crc kubenswrapper[4853]: I1122 07:49:04.690845 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d2c2ec6b-cfbc-4027-84e7-7545a1414ec9" containerName="ceilometer-notification-agent" containerID="cri-o://930086d9c8af0a468a3d96096d238105a075f905c3d11cf002d75861f6617858" gracePeriod=30 Nov 22 07:49:04 crc kubenswrapper[4853]: I1122 07:49:04.690857 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d2c2ec6b-cfbc-4027-84e7-7545a1414ec9" containerName="sg-core" containerID="cri-o://87e45c20392e9c98b991021a08ada968d3ebf76981ce5a66372b73c6b9f4a522" gracePeriod=30 Nov 22 07:49:04 crc kubenswrapper[4853]: I1122 07:49:04.690967 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d2c2ec6b-cfbc-4027-84e7-7545a1414ec9" containerName="proxy-httpd" containerID="cri-o://a1bfe41f63aa3a5c901b7fcef7909d361777e0244345855b57e97542dd394d7d" gracePeriod=30 Nov 22 07:49:04 crc kubenswrapper[4853]: I1122 07:49:04.728072 4853 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="d2c2ec6b-cfbc-4027-84e7-7545a1414ec9" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Nov 22 07:49:04 crc kubenswrapper[4853]: I1122 07:49:04.943036 4853 generic.go:334] "Generic (PLEG): container finished" podID="d2c2ec6b-cfbc-4027-84e7-7545a1414ec9" containerID="87e45c20392e9c98b991021a08ada968d3ebf76981ce5a66372b73c6b9f4a522" exitCode=2 Nov 22 07:49:04 crc kubenswrapper[4853]: I1122 07:49:04.944801 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d2c2ec6b-cfbc-4027-84e7-7545a1414ec9","Type":"ContainerDied","Data":"87e45c20392e9c98b991021a08ada968d3ebf76981ce5a66372b73c6b9f4a522"} Nov 22 07:49:05 crc kubenswrapper[4853]: I1122 07:49:05.960487 4853 generic.go:334] "Generic (PLEG): container finished" podID="d2c2ec6b-cfbc-4027-84e7-7545a1414ec9" containerID="cce5304a03650e16db88c732986fa91fa2ab52e3e5184aca9b2cc9e3e8bb8153" exitCode=0 Nov 22 07:49:05 crc kubenswrapper[4853]: I1122 07:49:05.960579 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d2c2ec6b-cfbc-4027-84e7-7545a1414ec9","Type":"ContainerDied","Data":"cce5304a03650e16db88c732986fa91fa2ab52e3e5184aca9b2cc9e3e8bb8153"} Nov 22 07:49:06 crc kubenswrapper[4853]: I1122 07:49:06.995844 4853 generic.go:334] "Generic (PLEG): container finished" podID="297f89ac-14c3-4918-bd7e-776cc229298c" containerID="a4fce85f953a48f363537a181cfb1a4384fb876c100e2e32d58fa35ad92b866b" exitCode=0 Nov 22 07:49:06 crc kubenswrapper[4853]: I1122 07:49:06.995907 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-4htd6" event={"ID":"297f89ac-14c3-4918-bd7e-776cc229298c","Type":"ContainerDied","Data":"a4fce85f953a48f363537a181cfb1a4384fb876c100e2e32d58fa35ad92b866b"} Nov 22 07:49:07 crc kubenswrapper[4853]: I1122 07:49:07.006094 4853 
generic.go:334] "Generic (PLEG): container finished" podID="d2c2ec6b-cfbc-4027-84e7-7545a1414ec9" containerID="a1bfe41f63aa3a5c901b7fcef7909d361777e0244345855b57e97542dd394d7d" exitCode=0 Nov 22 07:49:07 crc kubenswrapper[4853]: I1122 07:49:07.006129 4853 generic.go:334] "Generic (PLEG): container finished" podID="d2c2ec6b-cfbc-4027-84e7-7545a1414ec9" containerID="930086d9c8af0a468a3d96096d238105a075f905c3d11cf002d75861f6617858" exitCode=0 Nov 22 07:49:07 crc kubenswrapper[4853]: I1122 07:49:07.006156 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d2c2ec6b-cfbc-4027-84e7-7545a1414ec9","Type":"ContainerDied","Data":"a1bfe41f63aa3a5c901b7fcef7909d361777e0244345855b57e97542dd394d7d"} Nov 22 07:49:07 crc kubenswrapper[4853]: I1122 07:49:07.006191 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d2c2ec6b-cfbc-4027-84e7-7545a1414ec9","Type":"ContainerDied","Data":"930086d9c8af0a468a3d96096d238105a075f905c3d11cf002d75861f6617858"} Nov 22 07:49:07 crc kubenswrapper[4853]: I1122 07:49:07.385179 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 22 07:49:07 crc kubenswrapper[4853]: I1122 07:49:07.561542 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d2c2ec6b-cfbc-4027-84e7-7545a1414ec9-scripts\") pod \"d2c2ec6b-cfbc-4027-84e7-7545a1414ec9\" (UID: \"d2c2ec6b-cfbc-4027-84e7-7545a1414ec9\") " Nov 22 07:49:07 crc kubenswrapper[4853]: I1122 07:49:07.563086 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2c2ec6b-cfbc-4027-84e7-7545a1414ec9-combined-ca-bundle\") pod \"d2c2ec6b-cfbc-4027-84e7-7545a1414ec9\" (UID: \"d2c2ec6b-cfbc-4027-84e7-7545a1414ec9\") " Nov 22 07:49:07 crc kubenswrapper[4853]: I1122 07:49:07.563163 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cpxfz\" (UniqueName: \"kubernetes.io/projected/d2c2ec6b-cfbc-4027-84e7-7545a1414ec9-kube-api-access-cpxfz\") pod \"d2c2ec6b-cfbc-4027-84e7-7545a1414ec9\" (UID: \"d2c2ec6b-cfbc-4027-84e7-7545a1414ec9\") " Nov 22 07:49:07 crc kubenswrapper[4853]: I1122 07:49:07.563225 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d2c2ec6b-cfbc-4027-84e7-7545a1414ec9-log-httpd\") pod \"d2c2ec6b-cfbc-4027-84e7-7545a1414ec9\" (UID: \"d2c2ec6b-cfbc-4027-84e7-7545a1414ec9\") " Nov 22 07:49:07 crc kubenswrapper[4853]: I1122 07:49:07.563320 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/d2c2ec6b-cfbc-4027-84e7-7545a1414ec9-ceilometer-tls-certs\") pod \"d2c2ec6b-cfbc-4027-84e7-7545a1414ec9\" (UID: \"d2c2ec6b-cfbc-4027-84e7-7545a1414ec9\") " Nov 22 07:49:07 crc kubenswrapper[4853]: I1122 07:49:07.563440 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2c2ec6b-cfbc-4027-84e7-7545a1414ec9-config-data\") pod \"d2c2ec6b-cfbc-4027-84e7-7545a1414ec9\" (UID: \"d2c2ec6b-cfbc-4027-84e7-7545a1414ec9\") " Nov 22 07:49:07 crc kubenswrapper[4853]: I1122 07:49:07.563533 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/d2c2ec6b-cfbc-4027-84e7-7545a1414ec9-run-httpd\") pod \"d2c2ec6b-cfbc-4027-84e7-7545a1414ec9\" (UID: \"d2c2ec6b-cfbc-4027-84e7-7545a1414ec9\") " Nov 22 07:49:07 crc kubenswrapper[4853]: I1122 07:49:07.563682 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d2c2ec6b-cfbc-4027-84e7-7545a1414ec9-sg-core-conf-yaml\") pod \"d2c2ec6b-cfbc-4027-84e7-7545a1414ec9\" (UID: \"d2c2ec6b-cfbc-4027-84e7-7545a1414ec9\") " Nov 22 07:49:07 crc kubenswrapper[4853]: I1122 07:49:07.569668 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d2c2ec6b-cfbc-4027-84e7-7545a1414ec9-scripts" (OuterVolumeSpecName: "scripts") pod "d2c2ec6b-cfbc-4027-84e7-7545a1414ec9" (UID: "d2c2ec6b-cfbc-4027-84e7-7545a1414ec9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:49:07 crc kubenswrapper[4853]: I1122 07:49:07.570532 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d2c2ec6b-cfbc-4027-84e7-7545a1414ec9-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "d2c2ec6b-cfbc-4027-84e7-7545a1414ec9" (UID: "d2c2ec6b-cfbc-4027-84e7-7545a1414ec9"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:49:07 crc kubenswrapper[4853]: I1122 07:49:07.574800 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d2c2ec6b-cfbc-4027-84e7-7545a1414ec9-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "d2c2ec6b-cfbc-4027-84e7-7545a1414ec9" (UID: "d2c2ec6b-cfbc-4027-84e7-7545a1414ec9"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:49:07 crc kubenswrapper[4853]: I1122 07:49:07.583202 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d2c2ec6b-cfbc-4027-84e7-7545a1414ec9-kube-api-access-cpxfz" (OuterVolumeSpecName: "kube-api-access-cpxfz") pod "d2c2ec6b-cfbc-4027-84e7-7545a1414ec9" (UID: "d2c2ec6b-cfbc-4027-84e7-7545a1414ec9"). InnerVolumeSpecName "kube-api-access-cpxfz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:49:07 crc kubenswrapper[4853]: I1122 07:49:07.621123 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d2c2ec6b-cfbc-4027-84e7-7545a1414ec9-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "d2c2ec6b-cfbc-4027-84e7-7545a1414ec9" (UID: "d2c2ec6b-cfbc-4027-84e7-7545a1414ec9"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:49:07 crc kubenswrapper[4853]: I1122 07:49:07.669671 4853 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d2c2ec6b-cfbc-4027-84e7-7545a1414ec9-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:07 crc kubenswrapper[4853]: I1122 07:49:07.669720 4853 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d2c2ec6b-cfbc-4027-84e7-7545a1414ec9-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:07 crc kubenswrapper[4853]: I1122 07:49:07.669739 4853 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d2c2ec6b-cfbc-4027-84e7-7545a1414ec9-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:07 crc kubenswrapper[4853]: I1122 07:49:07.669772 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cpxfz\" (UniqueName: \"kubernetes.io/projected/d2c2ec6b-cfbc-4027-84e7-7545a1414ec9-kube-api-access-cpxfz\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:07 crc kubenswrapper[4853]: I1122 07:49:07.669785 4853 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d2c2ec6b-cfbc-4027-84e7-7545a1414ec9-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:07 crc kubenswrapper[4853]: I1122 07:49:07.677094 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d2c2ec6b-cfbc-4027-84e7-7545a1414ec9-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "d2c2ec6b-cfbc-4027-84e7-7545a1414ec9" (UID: "d2c2ec6b-cfbc-4027-84e7-7545a1414ec9"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:49:07 crc kubenswrapper[4853]: I1122 07:49:07.708861 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d2c2ec6b-cfbc-4027-84e7-7545a1414ec9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d2c2ec6b-cfbc-4027-84e7-7545a1414ec9" (UID: "d2c2ec6b-cfbc-4027-84e7-7545a1414ec9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:49:07 crc kubenswrapper[4853]: I1122 07:49:07.772403 4853 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2c2ec6b-cfbc-4027-84e7-7545a1414ec9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:07 crc kubenswrapper[4853]: I1122 07:49:07.772438 4853 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/d2c2ec6b-cfbc-4027-84e7-7545a1414ec9-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:07 crc kubenswrapper[4853]: I1122 07:49:07.798079 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d2c2ec6b-cfbc-4027-84e7-7545a1414ec9-config-data" (OuterVolumeSpecName: "config-data") pod "d2c2ec6b-cfbc-4027-84e7-7545a1414ec9" (UID: "d2c2ec6b-cfbc-4027-84e7-7545a1414ec9"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:49:07 crc kubenswrapper[4853]: I1122 07:49:07.875007 4853 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2c2ec6b-cfbc-4027-84e7-7545a1414ec9-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:08 crc kubenswrapper[4853]: I1122 07:49:08.022452 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d2c2ec6b-cfbc-4027-84e7-7545a1414ec9","Type":"ContainerDied","Data":"07e78a069b33a6406e3ec206fdd5b715188b917725251b47508b6122b91d7c55"} Nov 22 07:49:08 crc kubenswrapper[4853]: I1122 07:49:08.022492 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 22 07:49:08 crc kubenswrapper[4853]: I1122 07:49:08.022539 4853 scope.go:117] "RemoveContainer" containerID="a1bfe41f63aa3a5c901b7fcef7909d361777e0244345855b57e97542dd394d7d" Nov 22 07:49:08 crc kubenswrapper[4853]: I1122 07:49:08.075179 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 22 07:49:08 crc kubenswrapper[4853]: I1122 07:49:08.096105 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 22 07:49:08 crc kubenswrapper[4853]: I1122 07:49:08.102104 4853 scope.go:117] "RemoveContainer" containerID="87e45c20392e9c98b991021a08ada968d3ebf76981ce5a66372b73c6b9f4a522" Nov 22 07:49:08 crc kubenswrapper[4853]: I1122 07:49:08.113663 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 22 07:49:08 crc kubenswrapper[4853]: E1122 07:49:08.116033 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5c88004-5a16-4f0f-bb38-08d44ee6e0fb" containerName="heat-api" Nov 22 07:49:08 crc kubenswrapper[4853]: I1122 07:49:08.116185 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5c88004-5a16-4f0f-bb38-08d44ee6e0fb" containerName="heat-api" Nov 22 07:49:08 crc kubenswrapper[4853]: E1122 07:49:08.116265 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2c2ec6b-cfbc-4027-84e7-7545a1414ec9" containerName="ceilometer-central-agent" Nov 22 07:49:08 crc kubenswrapper[4853]: I1122 07:49:08.116328 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2c2ec6b-cfbc-4027-84e7-7545a1414ec9" containerName="ceilometer-central-agent" Nov 22 07:49:08 crc kubenswrapper[4853]: E1122 07:49:08.116528 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a7714a6-22d8-449a-98bd-b145c7a8d19e" containerName="heat-cfnapi" Nov 22 07:49:08 crc kubenswrapper[4853]: I1122 07:49:08.116603 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a7714a6-22d8-449a-98bd-b145c7a8d19e" containerName="heat-cfnapi" Nov 22 07:49:08 crc kubenswrapper[4853]: E1122 07:49:08.116686 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2c2ec6b-cfbc-4027-84e7-7545a1414ec9" containerName="sg-core" Nov 22 07:49:08 crc kubenswrapper[4853]: I1122 07:49:08.116771 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2c2ec6b-cfbc-4027-84e7-7545a1414ec9" containerName="sg-core" Nov 22 07:49:08 crc kubenswrapper[4853]: E1122 07:49:08.116848 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2c2ec6b-cfbc-4027-84e7-7545a1414ec9" containerName="ceilometer-notification-agent" Nov 22 07:49:08 crc kubenswrapper[4853]: I1122 07:49:08.116921 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2c2ec6b-cfbc-4027-84e7-7545a1414ec9" 
containerName="ceilometer-notification-agent" Nov 22 07:49:08 crc kubenswrapper[4853]: E1122 07:49:08.116993 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2c2ec6b-cfbc-4027-84e7-7545a1414ec9" containerName="proxy-httpd" Nov 22 07:49:08 crc kubenswrapper[4853]: I1122 07:49:08.117051 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2c2ec6b-cfbc-4027-84e7-7545a1414ec9" containerName="proxy-httpd" Nov 22 07:49:08 crc kubenswrapper[4853]: I1122 07:49:08.117727 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5c88004-5a16-4f0f-bb38-08d44ee6e0fb" containerName="heat-api" Nov 22 07:49:08 crc kubenswrapper[4853]: I1122 07:49:08.117854 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="d2c2ec6b-cfbc-4027-84e7-7545a1414ec9" containerName="ceilometer-notification-agent" Nov 22 07:49:08 crc kubenswrapper[4853]: I1122 07:49:08.117943 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="d2c2ec6b-cfbc-4027-84e7-7545a1414ec9" containerName="proxy-httpd" Nov 22 07:49:08 crc kubenswrapper[4853]: I1122 07:49:08.118031 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a7714a6-22d8-449a-98bd-b145c7a8d19e" containerName="heat-cfnapi" Nov 22 07:49:08 crc kubenswrapper[4853]: I1122 07:49:08.118138 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a7714a6-22d8-449a-98bd-b145c7a8d19e" containerName="heat-cfnapi" Nov 22 07:49:08 crc kubenswrapper[4853]: I1122 07:49:08.118227 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="d2c2ec6b-cfbc-4027-84e7-7545a1414ec9" containerName="ceilometer-central-agent" Nov 22 07:49:08 crc kubenswrapper[4853]: I1122 07:49:08.118305 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5c88004-5a16-4f0f-bb38-08d44ee6e0fb" containerName="heat-api" Nov 22 07:49:08 crc kubenswrapper[4853]: I1122 07:49:08.118386 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="d2c2ec6b-cfbc-4027-84e7-7545a1414ec9" containerName="sg-core" Nov 22 07:49:08 crc kubenswrapper[4853]: E1122 07:49:08.118807 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5c88004-5a16-4f0f-bb38-08d44ee6e0fb" containerName="heat-api" Nov 22 07:49:08 crc kubenswrapper[4853]: I1122 07:49:08.118899 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5c88004-5a16-4f0f-bb38-08d44ee6e0fb" containerName="heat-api" Nov 22 07:49:08 crc kubenswrapper[4853]: E1122 07:49:08.118992 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a7714a6-22d8-449a-98bd-b145c7a8d19e" containerName="heat-cfnapi" Nov 22 07:49:08 crc kubenswrapper[4853]: I1122 07:49:08.119060 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a7714a6-22d8-449a-98bd-b145c7a8d19e" containerName="heat-cfnapi" Nov 22 07:49:08 crc kubenswrapper[4853]: I1122 07:49:08.122984 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 22 07:49:08 crc kubenswrapper[4853]: I1122 07:49:08.137638 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Nov 22 07:49:08 crc kubenswrapper[4853]: I1122 07:49:08.137972 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 22 07:49:08 crc kubenswrapper[4853]: I1122 07:49:08.137881 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 22 07:49:08 crc kubenswrapper[4853]: I1122 07:49:08.150482 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 22 07:49:08 crc kubenswrapper[4853]: I1122 07:49:08.165774 4853 scope.go:117] "RemoveContainer" containerID="930086d9c8af0a468a3d96096d238105a075f905c3d11cf002d75861f6617858" Nov 22 07:49:08 crc kubenswrapper[4853]: I1122 07:49:08.211110 4853 scope.go:117] "RemoveContainer" containerID="cce5304a03650e16db88c732986fa91fa2ab52e3e5184aca9b2cc9e3e8bb8153" Nov 22 07:49:08 crc kubenswrapper[4853]: I1122 07:49:08.291728 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/01047ee7-2bc8-487e-a7f2-8696bd86fd13-log-httpd\") pod \"ceilometer-0\" (UID: \"01047ee7-2bc8-487e-a7f2-8696bd86fd13\") " pod="openstack/ceilometer-0" Nov 22 07:49:08 crc kubenswrapper[4853]: I1122 07:49:08.291812 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vv2cj\" (UniqueName: \"kubernetes.io/projected/01047ee7-2bc8-487e-a7f2-8696bd86fd13-kube-api-access-vv2cj\") pod \"ceilometer-0\" (UID: \"01047ee7-2bc8-487e-a7f2-8696bd86fd13\") " pod="openstack/ceilometer-0" Nov 22 07:49:08 crc kubenswrapper[4853]: I1122 07:49:08.291863 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/01047ee7-2bc8-487e-a7f2-8696bd86fd13-scripts\") pod \"ceilometer-0\" (UID: \"01047ee7-2bc8-487e-a7f2-8696bd86fd13\") " pod="openstack/ceilometer-0" Nov 22 07:49:08 crc kubenswrapper[4853]: I1122 07:49:08.291907 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/01047ee7-2bc8-487e-a7f2-8696bd86fd13-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"01047ee7-2bc8-487e-a7f2-8696bd86fd13\") " pod="openstack/ceilometer-0" Nov 22 07:49:08 crc kubenswrapper[4853]: I1122 07:49:08.291939 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/01047ee7-2bc8-487e-a7f2-8696bd86fd13-run-httpd\") pod \"ceilometer-0\" (UID: \"01047ee7-2bc8-487e-a7f2-8696bd86fd13\") " pod="openstack/ceilometer-0" Nov 22 07:49:08 crc kubenswrapper[4853]: I1122 07:49:08.291991 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01047ee7-2bc8-487e-a7f2-8696bd86fd13-config-data\") pod \"ceilometer-0\" (UID: \"01047ee7-2bc8-487e-a7f2-8696bd86fd13\") " pod="openstack/ceilometer-0" Nov 22 07:49:08 crc kubenswrapper[4853]: I1122 07:49:08.292026 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/01047ee7-2bc8-487e-a7f2-8696bd86fd13-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"01047ee7-2bc8-487e-a7f2-8696bd86fd13\") " pod="openstack/ceilometer-0" Nov 22 07:49:08 crc kubenswrapper[4853]: I1122 07:49:08.292045 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/01047ee7-2bc8-487e-a7f2-8696bd86fd13-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"01047ee7-2bc8-487e-a7f2-8696bd86fd13\") " pod="openstack/ceilometer-0" Nov 22 07:49:08 crc kubenswrapper[4853]: I1122 07:49:08.397388 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/01047ee7-2bc8-487e-a7f2-8696bd86fd13-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"01047ee7-2bc8-487e-a7f2-8696bd86fd13\") " pod="openstack/ceilometer-0" Nov 22 07:49:08 crc kubenswrapper[4853]: I1122 07:49:08.397516 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/01047ee7-2bc8-487e-a7f2-8696bd86fd13-run-httpd\") pod \"ceilometer-0\" (UID: \"01047ee7-2bc8-487e-a7f2-8696bd86fd13\") " pod="openstack/ceilometer-0" Nov 22 07:49:08 crc kubenswrapper[4853]: I1122 07:49:08.397666 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01047ee7-2bc8-487e-a7f2-8696bd86fd13-config-data\") pod \"ceilometer-0\" (UID: \"01047ee7-2bc8-487e-a7f2-8696bd86fd13\") " pod="openstack/ceilometer-0" Nov 22 07:49:08 crc kubenswrapper[4853]: I1122 07:49:08.397775 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01047ee7-2bc8-487e-a7f2-8696bd86fd13-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"01047ee7-2bc8-487e-a7f2-8696bd86fd13\") " pod="openstack/ceilometer-0" Nov 22 07:49:08 crc kubenswrapper[4853]: I1122 07:49:08.397800 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/01047ee7-2bc8-487e-a7f2-8696bd86fd13-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"01047ee7-2bc8-487e-a7f2-8696bd86fd13\") " pod="openstack/ceilometer-0" Nov 22 07:49:08 crc kubenswrapper[4853]: I1122 07:49:08.398063 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/01047ee7-2bc8-487e-a7f2-8696bd86fd13-log-httpd\") pod \"ceilometer-0\" (UID: \"01047ee7-2bc8-487e-a7f2-8696bd86fd13\") " pod="openstack/ceilometer-0" Nov 22 07:49:08 crc kubenswrapper[4853]: I1122 07:49:08.398106 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vv2cj\" (UniqueName: \"kubernetes.io/projected/01047ee7-2bc8-487e-a7f2-8696bd86fd13-kube-api-access-vv2cj\") pod \"ceilometer-0\" (UID: \"01047ee7-2bc8-487e-a7f2-8696bd86fd13\") " pod="openstack/ceilometer-0" Nov 22 07:49:08 crc kubenswrapper[4853]: I1122 07:49:08.398220 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/01047ee7-2bc8-487e-a7f2-8696bd86fd13-scripts\") pod \"ceilometer-0\" (UID: \"01047ee7-2bc8-487e-a7f2-8696bd86fd13\") " pod="openstack/ceilometer-0" Nov 22 07:49:08 crc kubenswrapper[4853]: I1122 07:49:08.400507 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" 
(UniqueName: \"kubernetes.io/empty-dir/01047ee7-2bc8-487e-a7f2-8696bd86fd13-log-httpd\") pod \"ceilometer-0\" (UID: \"01047ee7-2bc8-487e-a7f2-8696bd86fd13\") " pod="openstack/ceilometer-0" Nov 22 07:49:08 crc kubenswrapper[4853]: I1122 07:49:08.400883 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/01047ee7-2bc8-487e-a7f2-8696bd86fd13-run-httpd\") pod \"ceilometer-0\" (UID: \"01047ee7-2bc8-487e-a7f2-8696bd86fd13\") " pod="openstack/ceilometer-0" Nov 22 07:49:08 crc kubenswrapper[4853]: I1122 07:49:08.410526 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/01047ee7-2bc8-487e-a7f2-8696bd86fd13-scripts\") pod \"ceilometer-0\" (UID: \"01047ee7-2bc8-487e-a7f2-8696bd86fd13\") " pod="openstack/ceilometer-0" Nov 22 07:49:08 crc kubenswrapper[4853]: I1122 07:49:08.411447 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/01047ee7-2bc8-487e-a7f2-8696bd86fd13-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"01047ee7-2bc8-487e-a7f2-8696bd86fd13\") " pod="openstack/ceilometer-0" Nov 22 07:49:08 crc kubenswrapper[4853]: I1122 07:49:08.411951 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01047ee7-2bc8-487e-a7f2-8696bd86fd13-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"01047ee7-2bc8-487e-a7f2-8696bd86fd13\") " pod="openstack/ceilometer-0" Nov 22 07:49:08 crc kubenswrapper[4853]: I1122 07:49:08.415704 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01047ee7-2bc8-487e-a7f2-8696bd86fd13-config-data\") pod \"ceilometer-0\" (UID: \"01047ee7-2bc8-487e-a7f2-8696bd86fd13\") " pod="openstack/ceilometer-0" Nov 22 07:49:08 crc kubenswrapper[4853]: I1122 07:49:08.432591 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vv2cj\" (UniqueName: \"kubernetes.io/projected/01047ee7-2bc8-487e-a7f2-8696bd86fd13-kube-api-access-vv2cj\") pod \"ceilometer-0\" (UID: \"01047ee7-2bc8-487e-a7f2-8696bd86fd13\") " pod="openstack/ceilometer-0" Nov 22 07:49:08 crc kubenswrapper[4853]: I1122 07:49:08.442608 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/01047ee7-2bc8-487e-a7f2-8696bd86fd13-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"01047ee7-2bc8-487e-a7f2-8696bd86fd13\") " pod="openstack/ceilometer-0" Nov 22 07:49:08 crc kubenswrapper[4853]: I1122 07:49:08.472398 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 22 07:49:08 crc kubenswrapper[4853]: I1122 07:49:08.485058 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-4htd6" Nov 22 07:49:08 crc kubenswrapper[4853]: I1122 07:49:08.504583 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bhz5n\" (UniqueName: \"kubernetes.io/projected/297f89ac-14c3-4918-bd7e-776cc229298c-kube-api-access-bhz5n\") pod \"297f89ac-14c3-4918-bd7e-776cc229298c\" (UID: \"297f89ac-14c3-4918-bd7e-776cc229298c\") " Nov 22 07:49:08 crc kubenswrapper[4853]: I1122 07:49:08.504699 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/297f89ac-14c3-4918-bd7e-776cc229298c-config\") pod \"297f89ac-14c3-4918-bd7e-776cc229298c\" (UID: \"297f89ac-14c3-4918-bd7e-776cc229298c\") " Nov 22 07:49:08 crc kubenswrapper[4853]: I1122 07:49:08.504733 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/297f89ac-14c3-4918-bd7e-776cc229298c-combined-ca-bundle\") pod \"297f89ac-14c3-4918-bd7e-776cc229298c\" (UID: \"297f89ac-14c3-4918-bd7e-776cc229298c\") " Nov 22 07:49:08 crc kubenswrapper[4853]: I1122 07:49:08.513979 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/297f89ac-14c3-4918-bd7e-776cc229298c-kube-api-access-bhz5n" (OuterVolumeSpecName: "kube-api-access-bhz5n") pod "297f89ac-14c3-4918-bd7e-776cc229298c" (UID: "297f89ac-14c3-4918-bd7e-776cc229298c"). InnerVolumeSpecName "kube-api-access-bhz5n". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:49:08 crc kubenswrapper[4853]: I1122 07:49:08.593530 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/297f89ac-14c3-4918-bd7e-776cc229298c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "297f89ac-14c3-4918-bd7e-776cc229298c" (UID: "297f89ac-14c3-4918-bd7e-776cc229298c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:49:08 crc kubenswrapper[4853]: I1122 07:49:08.611374 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bhz5n\" (UniqueName: \"kubernetes.io/projected/297f89ac-14c3-4918-bd7e-776cc229298c-kube-api-access-bhz5n\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:08 crc kubenswrapper[4853]: I1122 07:49:08.611418 4853 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/297f89ac-14c3-4918-bd7e-776cc229298c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:08 crc kubenswrapper[4853]: I1122 07:49:08.612277 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/297f89ac-14c3-4918-bd7e-776cc229298c-config" (OuterVolumeSpecName: "config") pod "297f89ac-14c3-4918-bd7e-776cc229298c" (UID: "297f89ac-14c3-4918-bd7e-776cc229298c"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:49:08 crc kubenswrapper[4853]: I1122 07:49:08.719875 4853 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/297f89ac-14c3-4918-bd7e-776cc229298c-config\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:09 crc kubenswrapper[4853]: I1122 07:49:09.121088 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-4htd6" event={"ID":"297f89ac-14c3-4918-bd7e-776cc229298c","Type":"ContainerDied","Data":"b226a6f48db6a0762dd8b953a6cf55169576efbe3cae00cd29b090b25d53e196"} Nov 22 07:49:09 crc kubenswrapper[4853]: I1122 07:49:09.121457 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b226a6f48db6a0762dd8b953a6cf55169576efbe3cae00cd29b090b25d53e196" Nov 22 07:49:09 crc kubenswrapper[4853]: I1122 07:49:09.121558 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-4htd6" Nov 22 07:49:09 crc kubenswrapper[4853]: I1122 07:49:09.300793 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 22 07:49:09 crc kubenswrapper[4853]: I1122 07:49:09.508981 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7d978555f9-h6mvt"] Nov 22 07:49:09 crc kubenswrapper[4853]: E1122 07:49:09.510299 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="297f89ac-14c3-4918-bd7e-776cc229298c" containerName="neutron-db-sync" Nov 22 07:49:09 crc kubenswrapper[4853]: I1122 07:49:09.510325 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="297f89ac-14c3-4918-bd7e-776cc229298c" containerName="neutron-db-sync" Nov 22 07:49:09 crc kubenswrapper[4853]: I1122 07:49:09.510662 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="297f89ac-14c3-4918-bd7e-776cc229298c" containerName="neutron-db-sync" Nov 22 07:49:09 crc kubenswrapper[4853]: I1122 07:49:09.516334 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7d978555f9-h6mvt" Nov 22 07:49:09 crc kubenswrapper[4853]: I1122 07:49:09.546828 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7d978555f9-h6mvt"] Nov 22 07:49:09 crc kubenswrapper[4853]: I1122 07:49:09.593290 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c7aba99-05bf-4e98-824d-0a2b56ac555d-config\") pod \"dnsmasq-dns-7d978555f9-h6mvt\" (UID: \"3c7aba99-05bf-4e98-824d-0a2b56ac555d\") " pod="openstack/dnsmasq-dns-7d978555f9-h6mvt" Nov 22 07:49:09 crc kubenswrapper[4853]: I1122 07:49:09.593462 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-glzzr\" (UniqueName: \"kubernetes.io/projected/3c7aba99-05bf-4e98-824d-0a2b56ac555d-kube-api-access-glzzr\") pod \"dnsmasq-dns-7d978555f9-h6mvt\" (UID: \"3c7aba99-05bf-4e98-824d-0a2b56ac555d\") " pod="openstack/dnsmasq-dns-7d978555f9-h6mvt" Nov 22 07:49:09 crc kubenswrapper[4853]: I1122 07:49:09.593504 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3c7aba99-05bf-4e98-824d-0a2b56ac555d-ovsdbserver-nb\") pod \"dnsmasq-dns-7d978555f9-h6mvt\" (UID: \"3c7aba99-05bf-4e98-824d-0a2b56ac555d\") " pod="openstack/dnsmasq-dns-7d978555f9-h6mvt" Nov 22 07:49:09 crc kubenswrapper[4853]: I1122 07:49:09.594780 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3c7aba99-05bf-4e98-824d-0a2b56ac555d-dns-swift-storage-0\") pod \"dnsmasq-dns-7d978555f9-h6mvt\" (UID: \"3c7aba99-05bf-4e98-824d-0a2b56ac555d\") " pod="openstack/dnsmasq-dns-7d978555f9-h6mvt" Nov 22 07:49:09 crc kubenswrapper[4853]: I1122 07:49:09.594904 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3c7aba99-05bf-4e98-824d-0a2b56ac555d-dns-svc\") pod \"dnsmasq-dns-7d978555f9-h6mvt\" (UID: \"3c7aba99-05bf-4e98-824d-0a2b56ac555d\") " pod="openstack/dnsmasq-dns-7d978555f9-h6mvt" Nov 22 07:49:09 crc kubenswrapper[4853]: I1122 07:49:09.595447 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3c7aba99-05bf-4e98-824d-0a2b56ac555d-ovsdbserver-sb\") pod \"dnsmasq-dns-7d978555f9-h6mvt\" (UID: \"3c7aba99-05bf-4e98-824d-0a2b56ac555d\") " pod="openstack/dnsmasq-dns-7d978555f9-h6mvt" Nov 22 07:49:09 crc kubenswrapper[4853]: I1122 07:49:09.634705 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-788f5f9d9b-xptsh"] Nov 22 07:49:09 crc kubenswrapper[4853]: I1122 07:49:09.651569 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-788f5f9d9b-xptsh" Nov 22 07:49:09 crc kubenswrapper[4853]: I1122 07:49:09.675513 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Nov 22 07:49:09 crc kubenswrapper[4853]: I1122 07:49:09.680678 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-65bpr" Nov 22 07:49:09 crc kubenswrapper[4853]: I1122 07:49:09.680790 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Nov 22 07:49:09 crc kubenswrapper[4853]: I1122 07:49:09.680857 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Nov 22 07:49:09 crc kubenswrapper[4853]: I1122 07:49:09.709383 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-788f5f9d9b-xptsh"] Nov 22 07:49:09 crc kubenswrapper[4853]: I1122 07:49:09.725939 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/7115134e-ff99-44c2-b331-325661bf93a5-httpd-config\") pod \"neutron-788f5f9d9b-xptsh\" (UID: \"7115134e-ff99-44c2-b331-325661bf93a5\") " pod="openstack/neutron-788f5f9d9b-xptsh" Nov 22 07:49:09 crc kubenswrapper[4853]: I1122 07:49:09.727019 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c7aba99-05bf-4e98-824d-0a2b56ac555d-config\") pod \"dnsmasq-dns-7d978555f9-h6mvt\" (UID: \"3c7aba99-05bf-4e98-824d-0a2b56ac555d\") " pod="openstack/dnsmasq-dns-7d978555f9-h6mvt" Nov 22 07:49:09 crc kubenswrapper[4853]: I1122 07:49:09.727186 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7115134e-ff99-44c2-b331-325661bf93a5-ovndb-tls-certs\") pod \"neutron-788f5f9d9b-xptsh\" (UID: \"7115134e-ff99-44c2-b331-325661bf93a5\") " pod="openstack/neutron-788f5f9d9b-xptsh" Nov 22 07:49:09 crc kubenswrapper[4853]: I1122 07:49:09.727383 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7115134e-ff99-44c2-b331-325661bf93a5-combined-ca-bundle\") pod \"neutron-788f5f9d9b-xptsh\" (UID: \"7115134e-ff99-44c2-b331-325661bf93a5\") " pod="openstack/neutron-788f5f9d9b-xptsh" Nov 22 07:49:09 crc kubenswrapper[4853]: I1122 07:49:09.727456 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-glzzr\" (UniqueName: \"kubernetes.io/projected/3c7aba99-05bf-4e98-824d-0a2b56ac555d-kube-api-access-glzzr\") pod \"dnsmasq-dns-7d978555f9-h6mvt\" (UID: \"3c7aba99-05bf-4e98-824d-0a2b56ac555d\") " pod="openstack/dnsmasq-dns-7d978555f9-h6mvt" Nov 22 07:49:09 crc kubenswrapper[4853]: I1122 07:49:09.727507 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/7115134e-ff99-44c2-b331-325661bf93a5-config\") pod \"neutron-788f5f9d9b-xptsh\" (UID: \"7115134e-ff99-44c2-b331-325661bf93a5\") " pod="openstack/neutron-788f5f9d9b-xptsh" Nov 22 07:49:09 crc kubenswrapper[4853]: I1122 07:49:09.727573 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3c7aba99-05bf-4e98-824d-0a2b56ac555d-ovsdbserver-nb\") pod \"dnsmasq-dns-7d978555f9-h6mvt\" (UID: 
\"3c7aba99-05bf-4e98-824d-0a2b56ac555d\") " pod="openstack/dnsmasq-dns-7d978555f9-h6mvt" Nov 22 07:49:09 crc kubenswrapper[4853]: I1122 07:49:09.727623 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3c7aba99-05bf-4e98-824d-0a2b56ac555d-dns-swift-storage-0\") pod \"dnsmasq-dns-7d978555f9-h6mvt\" (UID: \"3c7aba99-05bf-4e98-824d-0a2b56ac555d\") " pod="openstack/dnsmasq-dns-7d978555f9-h6mvt" Nov 22 07:49:09 crc kubenswrapper[4853]: I1122 07:49:09.727726 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9snwq\" (UniqueName: \"kubernetes.io/projected/7115134e-ff99-44c2-b331-325661bf93a5-kube-api-access-9snwq\") pod \"neutron-788f5f9d9b-xptsh\" (UID: \"7115134e-ff99-44c2-b331-325661bf93a5\") " pod="openstack/neutron-788f5f9d9b-xptsh" Nov 22 07:49:09 crc kubenswrapper[4853]: I1122 07:49:09.727933 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3c7aba99-05bf-4e98-824d-0a2b56ac555d-dns-svc\") pod \"dnsmasq-dns-7d978555f9-h6mvt\" (UID: \"3c7aba99-05bf-4e98-824d-0a2b56ac555d\") " pod="openstack/dnsmasq-dns-7d978555f9-h6mvt" Nov 22 07:49:09 crc kubenswrapper[4853]: I1122 07:49:09.728724 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c7aba99-05bf-4e98-824d-0a2b56ac555d-config\") pod \"dnsmasq-dns-7d978555f9-h6mvt\" (UID: \"3c7aba99-05bf-4e98-824d-0a2b56ac555d\") " pod="openstack/dnsmasq-dns-7d978555f9-h6mvt" Nov 22 07:49:09 crc kubenswrapper[4853]: I1122 07:49:09.729260 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3c7aba99-05bf-4e98-824d-0a2b56ac555d-ovsdbserver-sb\") pod \"dnsmasq-dns-7d978555f9-h6mvt\" (UID: \"3c7aba99-05bf-4e98-824d-0a2b56ac555d\") " pod="openstack/dnsmasq-dns-7d978555f9-h6mvt" Nov 22 07:49:09 crc kubenswrapper[4853]: I1122 07:49:09.729555 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3c7aba99-05bf-4e98-824d-0a2b56ac555d-dns-swift-storage-0\") pod \"dnsmasq-dns-7d978555f9-h6mvt\" (UID: \"3c7aba99-05bf-4e98-824d-0a2b56ac555d\") " pod="openstack/dnsmasq-dns-7d978555f9-h6mvt" Nov 22 07:49:09 crc kubenswrapper[4853]: I1122 07:49:09.729702 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3c7aba99-05bf-4e98-824d-0a2b56ac555d-dns-svc\") pod \"dnsmasq-dns-7d978555f9-h6mvt\" (UID: \"3c7aba99-05bf-4e98-824d-0a2b56ac555d\") " pod="openstack/dnsmasq-dns-7d978555f9-h6mvt" Nov 22 07:49:09 crc kubenswrapper[4853]: I1122 07:49:09.731213 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3c7aba99-05bf-4e98-824d-0a2b56ac555d-ovsdbserver-sb\") pod \"dnsmasq-dns-7d978555f9-h6mvt\" (UID: \"3c7aba99-05bf-4e98-824d-0a2b56ac555d\") " pod="openstack/dnsmasq-dns-7d978555f9-h6mvt" Nov 22 07:49:09 crc kubenswrapper[4853]: I1122 07:49:09.736346 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3c7aba99-05bf-4e98-824d-0a2b56ac555d-ovsdbserver-nb\") pod \"dnsmasq-dns-7d978555f9-h6mvt\" (UID: \"3c7aba99-05bf-4e98-824d-0a2b56ac555d\") " pod="openstack/dnsmasq-dns-7d978555f9-h6mvt" Nov 
22 07:49:09 crc kubenswrapper[4853]: I1122 07:49:09.776860 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d2c2ec6b-cfbc-4027-84e7-7545a1414ec9" path="/var/lib/kubelet/pods/d2c2ec6b-cfbc-4027-84e7-7545a1414ec9/volumes" Nov 22 07:49:09 crc kubenswrapper[4853]: I1122 07:49:09.784320 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-glzzr\" (UniqueName: \"kubernetes.io/projected/3c7aba99-05bf-4e98-824d-0a2b56ac555d-kube-api-access-glzzr\") pod \"dnsmasq-dns-7d978555f9-h6mvt\" (UID: \"3c7aba99-05bf-4e98-824d-0a2b56ac555d\") " pod="openstack/dnsmasq-dns-7d978555f9-h6mvt" Nov 22 07:49:09 crc kubenswrapper[4853]: I1122 07:49:09.834415 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7115134e-ff99-44c2-b331-325661bf93a5-ovndb-tls-certs\") pod \"neutron-788f5f9d9b-xptsh\" (UID: \"7115134e-ff99-44c2-b331-325661bf93a5\") " pod="openstack/neutron-788f5f9d9b-xptsh" Nov 22 07:49:09 crc kubenswrapper[4853]: I1122 07:49:09.834634 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7115134e-ff99-44c2-b331-325661bf93a5-combined-ca-bundle\") pod \"neutron-788f5f9d9b-xptsh\" (UID: \"7115134e-ff99-44c2-b331-325661bf93a5\") " pod="openstack/neutron-788f5f9d9b-xptsh" Nov 22 07:49:09 crc kubenswrapper[4853]: I1122 07:49:09.834847 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/7115134e-ff99-44c2-b331-325661bf93a5-config\") pod \"neutron-788f5f9d9b-xptsh\" (UID: \"7115134e-ff99-44c2-b331-325661bf93a5\") " pod="openstack/neutron-788f5f9d9b-xptsh" Nov 22 07:49:09 crc kubenswrapper[4853]: I1122 07:49:09.837439 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9snwq\" (UniqueName: \"kubernetes.io/projected/7115134e-ff99-44c2-b331-325661bf93a5-kube-api-access-9snwq\") pod \"neutron-788f5f9d9b-xptsh\" (UID: \"7115134e-ff99-44c2-b331-325661bf93a5\") " pod="openstack/neutron-788f5f9d9b-xptsh" Nov 22 07:49:09 crc kubenswrapper[4853]: I1122 07:49:09.841255 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/7115134e-ff99-44c2-b331-325661bf93a5-httpd-config\") pod \"neutron-788f5f9d9b-xptsh\" (UID: \"7115134e-ff99-44c2-b331-325661bf93a5\") " pod="openstack/neutron-788f5f9d9b-xptsh" Nov 22 07:49:09 crc kubenswrapper[4853]: I1122 07:49:09.851584 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7115134e-ff99-44c2-b331-325661bf93a5-combined-ca-bundle\") pod \"neutron-788f5f9d9b-xptsh\" (UID: \"7115134e-ff99-44c2-b331-325661bf93a5\") " pod="openstack/neutron-788f5f9d9b-xptsh" Nov 22 07:49:09 crc kubenswrapper[4853]: I1122 07:49:09.852362 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/7115134e-ff99-44c2-b331-325661bf93a5-config\") pod \"neutron-788f5f9d9b-xptsh\" (UID: \"7115134e-ff99-44c2-b331-325661bf93a5\") " pod="openstack/neutron-788f5f9d9b-xptsh" Nov 22 07:49:09 crc kubenswrapper[4853]: I1122 07:49:09.866575 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7d978555f9-h6mvt" Nov 22 07:49:09 crc kubenswrapper[4853]: I1122 07:49:09.878887 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7115134e-ff99-44c2-b331-325661bf93a5-ovndb-tls-certs\") pod \"neutron-788f5f9d9b-xptsh\" (UID: \"7115134e-ff99-44c2-b331-325661bf93a5\") " pod="openstack/neutron-788f5f9d9b-xptsh" Nov 22 07:49:09 crc kubenswrapper[4853]: I1122 07:49:09.900599 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9snwq\" (UniqueName: \"kubernetes.io/projected/7115134e-ff99-44c2-b331-325661bf93a5-kube-api-access-9snwq\") pod \"neutron-788f5f9d9b-xptsh\" (UID: \"7115134e-ff99-44c2-b331-325661bf93a5\") " pod="openstack/neutron-788f5f9d9b-xptsh" Nov 22 07:49:09 crc kubenswrapper[4853]: I1122 07:49:09.903735 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/7115134e-ff99-44c2-b331-325661bf93a5-httpd-config\") pod \"neutron-788f5f9d9b-xptsh\" (UID: \"7115134e-ff99-44c2-b331-325661bf93a5\") " pod="openstack/neutron-788f5f9d9b-xptsh" Nov 22 07:49:10 crc kubenswrapper[4853]: I1122 07:49:10.018629 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-788f5f9d9b-xptsh" Nov 22 07:49:10 crc kubenswrapper[4853]: I1122 07:49:10.167650 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"01047ee7-2bc8-487e-a7f2-8696bd86fd13","Type":"ContainerStarted","Data":"b57e6f590b9b761006158a9cd9ea2f5719e9cdf2e156a0e3fcec171bb63e3cfc"} Nov 22 07:49:10 crc kubenswrapper[4853]: I1122 07:49:10.652540 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7d978555f9-h6mvt"] Nov 22 07:49:11 crc kubenswrapper[4853]: I1122 07:49:11.024611 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-b4ffw" Nov 22 07:49:11 crc kubenswrapper[4853]: I1122 07:49:11.131495 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-b4ffw" Nov 22 07:49:11 crc kubenswrapper[4853]: I1122 07:49:11.143116 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-788f5f9d9b-xptsh"] Nov 22 07:49:11 crc kubenswrapper[4853]: I1122 07:49:11.196681 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-788f5f9d9b-xptsh" event={"ID":"7115134e-ff99-44c2-b331-325661bf93a5","Type":"ContainerStarted","Data":"ea0c29c81af604c2806aa540e24fd88cc29515ac93f686dfc82ad0a3d28e9772"} Nov 22 07:49:11 crc kubenswrapper[4853]: I1122 07:49:11.210890 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d978555f9-h6mvt" event={"ID":"3c7aba99-05bf-4e98-824d-0a2b56ac555d","Type":"ContainerStarted","Data":"8b5ec68587b93d5da8e8a8727d4171e3a5cea9df806f5029c37cf362f6bb499d"} Nov 22 07:49:11 crc kubenswrapper[4853]: I1122 07:49:11.225945 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"01047ee7-2bc8-487e-a7f2-8696bd86fd13","Type":"ContainerStarted","Data":"85850fb25d0eca8bf7256b7f96e332e4972c97ff52be82de4a7d8e1f6e918c46"} Nov 22 07:49:11 crc kubenswrapper[4853]: I1122 07:49:11.299620 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-b4ffw"] Nov 22 07:49:11 crc kubenswrapper[4853]: I1122 07:49:11.485324 4853 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Nov 22 07:49:12 crc kubenswrapper[4853]: I1122 07:49:12.244809 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"01047ee7-2bc8-487e-a7f2-8696bd86fd13","Type":"ContainerStarted","Data":"6bb6e672f873c915b40264e896e2d0777b8b5d9bce9f067caa9aed1b90fd8d84"} Nov 22 07:49:12 crc kubenswrapper[4853]: I1122 07:49:12.248705 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-788f5f9d9b-xptsh" event={"ID":"7115134e-ff99-44c2-b331-325661bf93a5","Type":"ContainerStarted","Data":"8e402bba5063452336a420c74fd7026f9c23745dbe2a7f14a8f11f4f18d9b651"} Nov 22 07:49:12 crc kubenswrapper[4853]: I1122 07:49:12.248829 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-788f5f9d9b-xptsh" event={"ID":"7115134e-ff99-44c2-b331-325661bf93a5","Type":"ContainerStarted","Data":"9802cd6c826da0ea11aa1ae79ac99b721e6b1b46faba4d37eab52a33a3957907"} Nov 22 07:49:12 crc kubenswrapper[4853]: I1122 07:49:12.253304 4853 generic.go:334] "Generic (PLEG): container finished" podID="3c7aba99-05bf-4e98-824d-0a2b56ac555d" containerID="85da24309510a612ecd052fb661006e8fefb4199197a1e08e2e69eb40d9d14f3" exitCode=0 Nov 22 07:49:12 crc kubenswrapper[4853]: I1122 07:49:12.253470 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d978555f9-h6mvt" event={"ID":"3c7aba99-05bf-4e98-824d-0a2b56ac555d","Type":"ContainerDied","Data":"85da24309510a612ecd052fb661006e8fefb4199197a1e08e2e69eb40d9d14f3"} Nov 22 07:49:12 crc kubenswrapper[4853]: I1122 07:49:12.253586 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-b4ffw" podUID="863918c2-c760-4c96-888f-a778bcbb018b" containerName="registry-server" containerID="cri-o://a9a9263672f9a28bf870f11897f00cd833a8712fc79992462d3aad95065a783d" gracePeriod=2 Nov 22 07:49:12 crc kubenswrapper[4853]: I1122 07:49:12.295320 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-788f5f9d9b-xptsh" podStartSLOduration=3.295291457 podStartE2EDuration="3.295291457s" podCreationTimestamp="2025-11-22 07:49:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:49:12.291268199 +0000 UTC m=+2351.131890845" watchObservedRunningTime="2025-11-22 07:49:12.295291457 +0000 UTC m=+2351.135914083" Nov 22 07:49:13 crc kubenswrapper[4853]: I1122 07:49:13.314537 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d978555f9-h6mvt" event={"ID":"3c7aba99-05bf-4e98-824d-0a2b56ac555d","Type":"ContainerStarted","Data":"90ba218be1e3018d518f38850c5f0e46d956b83ef8198535c2eb232ab15d6bbf"} Nov 22 07:49:13 crc kubenswrapper[4853]: I1122 07:49:13.315297 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7d978555f9-h6mvt" Nov 22 07:49:13 crc kubenswrapper[4853]: I1122 07:49:13.344521 4853 generic.go:334] "Generic (PLEG): container finished" podID="863918c2-c760-4c96-888f-a778bcbb018b" containerID="a9a9263672f9a28bf870f11897f00cd833a8712fc79992462d3aad95065a783d" exitCode=0 Nov 22 07:49:13 crc kubenswrapper[4853]: I1122 07:49:13.344617 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b4ffw" 
event={"ID":"863918c2-c760-4c96-888f-a778bcbb018b","Type":"ContainerDied","Data":"a9a9263672f9a28bf870f11897f00cd833a8712fc79992462d3aad95065a783d"} Nov 22 07:49:13 crc kubenswrapper[4853]: I1122 07:49:13.360351 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7d978555f9-h6mvt" podStartSLOduration=4.360322463 podStartE2EDuration="4.360322463s" podCreationTimestamp="2025-11-22 07:49:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:49:13.353595181 +0000 UTC m=+2352.194217807" watchObservedRunningTime="2025-11-22 07:49:13.360322463 +0000 UTC m=+2352.200945089" Nov 22 07:49:13 crc kubenswrapper[4853]: I1122 07:49:13.397683 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"01047ee7-2bc8-487e-a7f2-8696bd86fd13","Type":"ContainerStarted","Data":"c460888c4d820ddce3ffa21d8afe1821af0c334d4de03a59ebe861143ba7fda5"} Nov 22 07:49:13 crc kubenswrapper[4853]: I1122 07:49:13.397737 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-788f5f9d9b-xptsh" Nov 22 07:49:13 crc kubenswrapper[4853]: I1122 07:49:13.505829 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-7c78d4ccd7-pvf4q"] Nov 22 07:49:13 crc kubenswrapper[4853]: I1122 07:49:13.518232 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7c78d4ccd7-pvf4q" Nov 22 07:49:13 crc kubenswrapper[4853]: I1122 07:49:13.523907 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7c78d4ccd7-pvf4q"] Nov 22 07:49:13 crc kubenswrapper[4853]: I1122 07:49:13.531327 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Nov 22 07:49:13 crc kubenswrapper[4853]: I1122 07:49:13.531641 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Nov 22 07:49:13 crc kubenswrapper[4853]: I1122 07:49:13.642469 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47723ce1-f48e-4d1d-a0a8-4f49dfce7070-combined-ca-bundle\") pod \"neutron-7c78d4ccd7-pvf4q\" (UID: \"47723ce1-f48e-4d1d-a0a8-4f49dfce7070\") " pod="openstack/neutron-7c78d4ccd7-pvf4q" Nov 22 07:49:13 crc kubenswrapper[4853]: I1122 07:49:13.642639 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tzrjw\" (UniqueName: \"kubernetes.io/projected/47723ce1-f48e-4d1d-a0a8-4f49dfce7070-kube-api-access-tzrjw\") pod \"neutron-7c78d4ccd7-pvf4q\" (UID: \"47723ce1-f48e-4d1d-a0a8-4f49dfce7070\") " pod="openstack/neutron-7c78d4ccd7-pvf4q" Nov 22 07:49:13 crc kubenswrapper[4853]: I1122 07:49:13.642679 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/47723ce1-f48e-4d1d-a0a8-4f49dfce7070-internal-tls-certs\") pod \"neutron-7c78d4ccd7-pvf4q\" (UID: \"47723ce1-f48e-4d1d-a0a8-4f49dfce7070\") " pod="openstack/neutron-7c78d4ccd7-pvf4q" Nov 22 07:49:13 crc kubenswrapper[4853]: I1122 07:49:13.642705 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/47723ce1-f48e-4d1d-a0a8-4f49dfce7070-ovndb-tls-certs\") pod 
\"neutron-7c78d4ccd7-pvf4q\" (UID: \"47723ce1-f48e-4d1d-a0a8-4f49dfce7070\") " pod="openstack/neutron-7c78d4ccd7-pvf4q" Nov 22 07:49:13 crc kubenswrapper[4853]: I1122 07:49:13.642763 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/47723ce1-f48e-4d1d-a0a8-4f49dfce7070-httpd-config\") pod \"neutron-7c78d4ccd7-pvf4q\" (UID: \"47723ce1-f48e-4d1d-a0a8-4f49dfce7070\") " pod="openstack/neutron-7c78d4ccd7-pvf4q" Nov 22 07:49:13 crc kubenswrapper[4853]: I1122 07:49:13.642842 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/47723ce1-f48e-4d1d-a0a8-4f49dfce7070-config\") pod \"neutron-7c78d4ccd7-pvf4q\" (UID: \"47723ce1-f48e-4d1d-a0a8-4f49dfce7070\") " pod="openstack/neutron-7c78d4ccd7-pvf4q" Nov 22 07:49:13 crc kubenswrapper[4853]: I1122 07:49:13.642940 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/47723ce1-f48e-4d1d-a0a8-4f49dfce7070-public-tls-certs\") pod \"neutron-7c78d4ccd7-pvf4q\" (UID: \"47723ce1-f48e-4d1d-a0a8-4f49dfce7070\") " pod="openstack/neutron-7c78d4ccd7-pvf4q" Nov 22 07:49:13 crc kubenswrapper[4853]: I1122 07:49:13.704529 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-b4ffw" Nov 22 07:49:13 crc kubenswrapper[4853]: E1122 07:49:13.724019 4853 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2e26072ba72281cc34fd01c470e7381419acf03b04f60311406a45f61202e917" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Nov 22 07:49:13 crc kubenswrapper[4853]: E1122 07:49:13.731941 4853 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2e26072ba72281cc34fd01c470e7381419acf03b04f60311406a45f61202e917" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Nov 22 07:49:13 crc kubenswrapper[4853]: E1122 07:49:13.744858 4853 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2e26072ba72281cc34fd01c470e7381419acf03b04f60311406a45f61202e917" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Nov 22 07:49:13 crc kubenswrapper[4853]: E1122 07:49:13.744978 4853 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-5b96d96555-h7jqp" podUID="9f019708-ddfa-465c-850a-7b13a20a87f2" containerName="heat-engine" Nov 22 07:49:13 crc kubenswrapper[4853]: I1122 07:49:13.747130 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47723ce1-f48e-4d1d-a0a8-4f49dfce7070-combined-ca-bundle\") pod \"neutron-7c78d4ccd7-pvf4q\" (UID: \"47723ce1-f48e-4d1d-a0a8-4f49dfce7070\") " pod="openstack/neutron-7c78d4ccd7-pvf4q" Nov 22 07:49:13 crc kubenswrapper[4853]: I1122 07:49:13.747243 4853 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-tzrjw\" (UniqueName: \"kubernetes.io/projected/47723ce1-f48e-4d1d-a0a8-4f49dfce7070-kube-api-access-tzrjw\") pod \"neutron-7c78d4ccd7-pvf4q\" (UID: \"47723ce1-f48e-4d1d-a0a8-4f49dfce7070\") " pod="openstack/neutron-7c78d4ccd7-pvf4q" Nov 22 07:49:13 crc kubenswrapper[4853]: I1122 07:49:13.747285 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/47723ce1-f48e-4d1d-a0a8-4f49dfce7070-internal-tls-certs\") pod \"neutron-7c78d4ccd7-pvf4q\" (UID: \"47723ce1-f48e-4d1d-a0a8-4f49dfce7070\") " pod="openstack/neutron-7c78d4ccd7-pvf4q" Nov 22 07:49:13 crc kubenswrapper[4853]: I1122 07:49:13.747307 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/47723ce1-f48e-4d1d-a0a8-4f49dfce7070-ovndb-tls-certs\") pod \"neutron-7c78d4ccd7-pvf4q\" (UID: \"47723ce1-f48e-4d1d-a0a8-4f49dfce7070\") " pod="openstack/neutron-7c78d4ccd7-pvf4q" Nov 22 07:49:13 crc kubenswrapper[4853]: I1122 07:49:13.747334 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/47723ce1-f48e-4d1d-a0a8-4f49dfce7070-httpd-config\") pod \"neutron-7c78d4ccd7-pvf4q\" (UID: \"47723ce1-f48e-4d1d-a0a8-4f49dfce7070\") " pod="openstack/neutron-7c78d4ccd7-pvf4q" Nov 22 07:49:13 crc kubenswrapper[4853]: I1122 07:49:13.747382 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/47723ce1-f48e-4d1d-a0a8-4f49dfce7070-config\") pod \"neutron-7c78d4ccd7-pvf4q\" (UID: \"47723ce1-f48e-4d1d-a0a8-4f49dfce7070\") " pod="openstack/neutron-7c78d4ccd7-pvf4q" Nov 22 07:49:13 crc kubenswrapper[4853]: I1122 07:49:13.747453 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/47723ce1-f48e-4d1d-a0a8-4f49dfce7070-public-tls-certs\") pod \"neutron-7c78d4ccd7-pvf4q\" (UID: \"47723ce1-f48e-4d1d-a0a8-4f49dfce7070\") " pod="openstack/neutron-7c78d4ccd7-pvf4q" Nov 22 07:49:13 crc kubenswrapper[4853]: I1122 07:49:13.757702 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/47723ce1-f48e-4d1d-a0a8-4f49dfce7070-internal-tls-certs\") pod \"neutron-7c78d4ccd7-pvf4q\" (UID: \"47723ce1-f48e-4d1d-a0a8-4f49dfce7070\") " pod="openstack/neutron-7c78d4ccd7-pvf4q" Nov 22 07:49:13 crc kubenswrapper[4853]: I1122 07:49:13.771988 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/47723ce1-f48e-4d1d-a0a8-4f49dfce7070-ovndb-tls-certs\") pod \"neutron-7c78d4ccd7-pvf4q\" (UID: \"47723ce1-f48e-4d1d-a0a8-4f49dfce7070\") " pod="openstack/neutron-7c78d4ccd7-pvf4q" Nov 22 07:49:13 crc kubenswrapper[4853]: I1122 07:49:13.774634 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47723ce1-f48e-4d1d-a0a8-4f49dfce7070-combined-ca-bundle\") pod \"neutron-7c78d4ccd7-pvf4q\" (UID: \"47723ce1-f48e-4d1d-a0a8-4f49dfce7070\") " pod="openstack/neutron-7c78d4ccd7-pvf4q" Nov 22 07:49:13 crc kubenswrapper[4853]: I1122 07:49:13.778314 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/47723ce1-f48e-4d1d-a0a8-4f49dfce7070-public-tls-certs\") pod 
\"neutron-7c78d4ccd7-pvf4q\" (UID: \"47723ce1-f48e-4d1d-a0a8-4f49dfce7070\") " pod="openstack/neutron-7c78d4ccd7-pvf4q" Nov 22 07:49:13 crc kubenswrapper[4853]: I1122 07:49:13.783413 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/47723ce1-f48e-4d1d-a0a8-4f49dfce7070-httpd-config\") pod \"neutron-7c78d4ccd7-pvf4q\" (UID: \"47723ce1-f48e-4d1d-a0a8-4f49dfce7070\") " pod="openstack/neutron-7c78d4ccd7-pvf4q" Nov 22 07:49:13 crc kubenswrapper[4853]: I1122 07:49:13.807846 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/47723ce1-f48e-4d1d-a0a8-4f49dfce7070-config\") pod \"neutron-7c78d4ccd7-pvf4q\" (UID: \"47723ce1-f48e-4d1d-a0a8-4f49dfce7070\") " pod="openstack/neutron-7c78d4ccd7-pvf4q" Nov 22 07:49:13 crc kubenswrapper[4853]: I1122 07:49:13.847316 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tzrjw\" (UniqueName: \"kubernetes.io/projected/47723ce1-f48e-4d1d-a0a8-4f49dfce7070-kube-api-access-tzrjw\") pod \"neutron-7c78d4ccd7-pvf4q\" (UID: \"47723ce1-f48e-4d1d-a0a8-4f49dfce7070\") " pod="openstack/neutron-7c78d4ccd7-pvf4q" Nov 22 07:49:13 crc kubenswrapper[4853]: I1122 07:49:13.849798 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hcg87\" (UniqueName: \"kubernetes.io/projected/863918c2-c760-4c96-888f-a778bcbb018b-kube-api-access-hcg87\") pod \"863918c2-c760-4c96-888f-a778bcbb018b\" (UID: \"863918c2-c760-4c96-888f-a778bcbb018b\") " Nov 22 07:49:13 crc kubenswrapper[4853]: I1122 07:49:13.849982 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/863918c2-c760-4c96-888f-a778bcbb018b-catalog-content\") pod \"863918c2-c760-4c96-888f-a778bcbb018b\" (UID: \"863918c2-c760-4c96-888f-a778bcbb018b\") " Nov 22 07:49:13 crc kubenswrapper[4853]: I1122 07:49:13.850114 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/863918c2-c760-4c96-888f-a778bcbb018b-utilities\") pod \"863918c2-c760-4c96-888f-a778bcbb018b\" (UID: \"863918c2-c760-4c96-888f-a778bcbb018b\") " Nov 22 07:49:13 crc kubenswrapper[4853]: I1122 07:49:13.878133 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/863918c2-c760-4c96-888f-a778bcbb018b-utilities" (OuterVolumeSpecName: "utilities") pod "863918c2-c760-4c96-888f-a778bcbb018b" (UID: "863918c2-c760-4c96-888f-a778bcbb018b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:49:13 crc kubenswrapper[4853]: I1122 07:49:13.892303 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/863918c2-c760-4c96-888f-a778bcbb018b-kube-api-access-hcg87" (OuterVolumeSpecName: "kube-api-access-hcg87") pod "863918c2-c760-4c96-888f-a778bcbb018b" (UID: "863918c2-c760-4c96-888f-a778bcbb018b"). InnerVolumeSpecName "kube-api-access-hcg87". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:49:13 crc kubenswrapper[4853]: I1122 07:49:13.965661 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hcg87\" (UniqueName: \"kubernetes.io/projected/863918c2-c760-4c96-888f-a778bcbb018b-kube-api-access-hcg87\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:13 crc kubenswrapper[4853]: I1122 07:49:13.974128 4853 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/863918c2-c760-4c96-888f-a778bcbb018b-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:13 crc kubenswrapper[4853]: I1122 07:49:13.996548 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7c78d4ccd7-pvf4q" Nov 22 07:49:14 crc kubenswrapper[4853]: I1122 07:49:13.998365 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-cbsz9"] Nov 22 07:49:14 crc kubenswrapper[4853]: E1122 07:49:14.012704 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="863918c2-c760-4c96-888f-a778bcbb018b" containerName="registry-server" Nov 22 07:49:14 crc kubenswrapper[4853]: I1122 07:49:14.012762 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="863918c2-c760-4c96-888f-a778bcbb018b" containerName="registry-server" Nov 22 07:49:14 crc kubenswrapper[4853]: E1122 07:49:14.012833 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="863918c2-c760-4c96-888f-a778bcbb018b" containerName="extract-utilities" Nov 22 07:49:14 crc kubenswrapper[4853]: I1122 07:49:14.012844 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="863918c2-c760-4c96-888f-a778bcbb018b" containerName="extract-utilities" Nov 22 07:49:14 crc kubenswrapper[4853]: E1122 07:49:14.012871 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="863918c2-c760-4c96-888f-a778bcbb018b" containerName="extract-content" Nov 22 07:49:14 crc kubenswrapper[4853]: I1122 07:49:14.012879 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="863918c2-c760-4c96-888f-a778bcbb018b" containerName="extract-content" Nov 22 07:49:14 crc kubenswrapper[4853]: I1122 07:49:14.013393 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="863918c2-c760-4c96-888f-a778bcbb018b" containerName="registry-server" Nov 22 07:49:14 crc kubenswrapper[4853]: I1122 07:49:14.045154 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-cbsz9"] Nov 22 07:49:14 crc kubenswrapper[4853]: I1122 07:49:14.045320 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-cbsz9" Nov 22 07:49:14 crc kubenswrapper[4853]: I1122 07:49:14.066527 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Nov 22 07:49:14 crc kubenswrapper[4853]: I1122 07:49:14.066537 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Nov 22 07:49:14 crc kubenswrapper[4853]: I1122 07:49:14.067295 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/863918c2-c760-4c96-888f-a778bcbb018b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "863918c2-c760-4c96-888f-a778bcbb018b" (UID: "863918c2-c760-4c96-888f-a778bcbb018b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:49:14 crc kubenswrapper[4853]: I1122 07:49:14.084475 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42ee627d-63e1-4a7f-9da3-aca02dcd4cec-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-cbsz9\" (UID: \"42ee627d-63e1-4a7f-9da3-aca02dcd4cec\") " pod="openstack/nova-cell0-cell-mapping-cbsz9" Nov 22 07:49:14 crc kubenswrapper[4853]: I1122 07:49:14.085145 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vd5zq\" (UniqueName: \"kubernetes.io/projected/42ee627d-63e1-4a7f-9da3-aca02dcd4cec-kube-api-access-vd5zq\") pod \"nova-cell0-cell-mapping-cbsz9\" (UID: \"42ee627d-63e1-4a7f-9da3-aca02dcd4cec\") " pod="openstack/nova-cell0-cell-mapping-cbsz9" Nov 22 07:49:14 crc kubenswrapper[4853]: I1122 07:49:14.085181 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42ee627d-63e1-4a7f-9da3-aca02dcd4cec-config-data\") pod \"nova-cell0-cell-mapping-cbsz9\" (UID: \"42ee627d-63e1-4a7f-9da3-aca02dcd4cec\") " pod="openstack/nova-cell0-cell-mapping-cbsz9" Nov 22 07:49:14 crc kubenswrapper[4853]: I1122 07:49:14.085311 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/42ee627d-63e1-4a7f-9da3-aca02dcd4cec-scripts\") pod \"nova-cell0-cell-mapping-cbsz9\" (UID: \"42ee627d-63e1-4a7f-9da3-aca02dcd4cec\") " pod="openstack/nova-cell0-cell-mapping-cbsz9" Nov 22 07:49:14 crc kubenswrapper[4853]: I1122 07:49:14.085678 4853 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/863918c2-c760-4c96-888f-a778bcbb018b-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:14 crc kubenswrapper[4853]: I1122 07:49:14.118897 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 22 07:49:14 crc kubenswrapper[4853]: I1122 07:49:14.145279 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 22 07:49:14 crc kubenswrapper[4853]: I1122 07:49:14.188432 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Nov 22 07:49:14 crc kubenswrapper[4853]: I1122 07:49:14.197286 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42ee627d-63e1-4a7f-9da3-aca02dcd4cec-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-cbsz9\" (UID: \"42ee627d-63e1-4a7f-9da3-aca02dcd4cec\") " pod="openstack/nova-cell0-cell-mapping-cbsz9" Nov 22 07:49:14 crc kubenswrapper[4853]: I1122 07:49:14.207578 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1cc8da91-f334-4196-aa2f-191e55317490-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"1cc8da91-f334-4196-aa2f-191e55317490\") " pod="openstack/nova-cell1-novncproxy-0" Nov 22 07:49:14 crc kubenswrapper[4853]: I1122 07:49:14.207877 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vd5zq\" (UniqueName: \"kubernetes.io/projected/42ee627d-63e1-4a7f-9da3-aca02dcd4cec-kube-api-access-vd5zq\") pod \"nova-cell0-cell-mapping-cbsz9\" (UID: \"42ee627d-63e1-4a7f-9da3-aca02dcd4cec\") " pod="openstack/nova-cell0-cell-mapping-cbsz9" Nov 22 07:49:14 crc kubenswrapper[4853]: I1122 07:49:14.207919 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42ee627d-63e1-4a7f-9da3-aca02dcd4cec-config-data\") pod \"nova-cell0-cell-mapping-cbsz9\" (UID: \"42ee627d-63e1-4a7f-9da3-aca02dcd4cec\") " pod="openstack/nova-cell0-cell-mapping-cbsz9" Nov 22 07:49:14 crc kubenswrapper[4853]: I1122 07:49:14.208077 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/42ee627d-63e1-4a7f-9da3-aca02dcd4cec-scripts\") pod \"nova-cell0-cell-mapping-cbsz9\" (UID: \"42ee627d-63e1-4a7f-9da3-aca02dcd4cec\") " pod="openstack/nova-cell0-cell-mapping-cbsz9" Nov 22 07:49:14 crc kubenswrapper[4853]: I1122 07:49:14.208107 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1cc8da91-f334-4196-aa2f-191e55317490-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"1cc8da91-f334-4196-aa2f-191e55317490\") " pod="openstack/nova-cell1-novncproxy-0" Nov 22 07:49:14 crc kubenswrapper[4853]: I1122 07:49:14.208167 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kvp2j\" (UniqueName: \"kubernetes.io/projected/1cc8da91-f334-4196-aa2f-191e55317490-kube-api-access-kvp2j\") pod \"nova-cell1-novncproxy-0\" (UID: \"1cc8da91-f334-4196-aa2f-191e55317490\") " pod="openstack/nova-cell1-novncproxy-0" Nov 22 07:49:14 crc kubenswrapper[4853]: I1122 07:49:14.242317 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 22 07:49:14 crc kubenswrapper[4853]: I1122 07:49:14.264255 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42ee627d-63e1-4a7f-9da3-aca02dcd4cec-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-cbsz9\" (UID: \"42ee627d-63e1-4a7f-9da3-aca02dcd4cec\") " pod="openstack/nova-cell0-cell-mapping-cbsz9" Nov 
22 07:49:14 crc kubenswrapper[4853]: I1122 07:49:14.275788 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/42ee627d-63e1-4a7f-9da3-aca02dcd4cec-scripts\") pod \"nova-cell0-cell-mapping-cbsz9\" (UID: \"42ee627d-63e1-4a7f-9da3-aca02dcd4cec\") " pod="openstack/nova-cell0-cell-mapping-cbsz9" Nov 22 07:49:14 crc kubenswrapper[4853]: I1122 07:49:14.283769 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vd5zq\" (UniqueName: \"kubernetes.io/projected/42ee627d-63e1-4a7f-9da3-aca02dcd4cec-kube-api-access-vd5zq\") pod \"nova-cell0-cell-mapping-cbsz9\" (UID: \"42ee627d-63e1-4a7f-9da3-aca02dcd4cec\") " pod="openstack/nova-cell0-cell-mapping-cbsz9" Nov 22 07:49:14 crc kubenswrapper[4853]: I1122 07:49:14.292253 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42ee627d-63e1-4a7f-9da3-aca02dcd4cec-config-data\") pod \"nova-cell0-cell-mapping-cbsz9\" (UID: \"42ee627d-63e1-4a7f-9da3-aca02dcd4cec\") " pod="openstack/nova-cell0-cell-mapping-cbsz9" Nov 22 07:49:14 crc kubenswrapper[4853]: I1122 07:49:14.314462 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1cc8da91-f334-4196-aa2f-191e55317490-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"1cc8da91-f334-4196-aa2f-191e55317490\") " pod="openstack/nova-cell1-novncproxy-0" Nov 22 07:49:14 crc kubenswrapper[4853]: I1122 07:49:14.314693 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1cc8da91-f334-4196-aa2f-191e55317490-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"1cc8da91-f334-4196-aa2f-191e55317490\") " pod="openstack/nova-cell1-novncproxy-0" Nov 22 07:49:14 crc kubenswrapper[4853]: I1122 07:49:14.314783 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kvp2j\" (UniqueName: \"kubernetes.io/projected/1cc8da91-f334-4196-aa2f-191e55317490-kube-api-access-kvp2j\") pod \"nova-cell1-novncproxy-0\" (UID: \"1cc8da91-f334-4196-aa2f-191e55317490\") " pod="openstack/nova-cell1-novncproxy-0" Nov 22 07:49:14 crc kubenswrapper[4853]: I1122 07:49:14.319997 4853 util.go:30] "No sandbox for pod can be found. 
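The nova-cell0-cell-mapping-cbsz9 entries above show the kubelet volume manager's usual three-step flow per volume: VerifyControllerAttachedVolume, then MountVolume started, then MountVolume.SetUp succeeded. A minimal Go sketch of that desired-state/actual-state reconciliation pattern, with hypothetical simplified types rather than the real kubelet interfaces:

package main

import "fmt"

// volume is a stand-in for an entry in the desired state of the world.
type volume struct{ name, pod string }

// reconcile acts only on desired volumes missing from the actual state,
// mirroring the Verify -> Mount -> SetUp-succeeded log sequence above.
func reconcile(desired []volume, mounted map[string]bool) {
	for _, v := range desired {
		key := v.pod + "/" + v.name
		if mounted[key] {
			continue // already mounted; no further log lines for it
		}
		fmt.Printf("VerifyControllerAttachedVolume started for %q\n", key)
		fmt.Printf("MountVolume started for %q\n", key)
		mounted[key] = true // real code would mount via the volume plugin here
		fmt.Printf("MountVolume.SetUp succeeded for %q\n", key)
	}
}

func main() {
	desired := []volume{
		{name: "combined-ca-bundle", pod: "nova-cell0-cell-mapping-cbsz9"},
		{name: "config-data", pod: "nova-cell0-cell-mapping-cbsz9"},
	}
	reconcile(desired, map[string]bool{})
}

Each reconciler pass skips volumes already in the actual state, which is why a volume logs this trio once and then goes quiet.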
Nov 22 07:49:14 crc kubenswrapper[4853]: I1122 07:49:14.321528 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1cc8da91-f334-4196-aa2f-191e55317490-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"1cc8da91-f334-4196-aa2f-191e55317490\") " pod="openstack/nova-cell1-novncproxy-0"
Nov 22 07:49:14 crc kubenswrapper[4853]: I1122 07:49:14.363535 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1cc8da91-f334-4196-aa2f-191e55317490-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"1cc8da91-f334-4196-aa2f-191e55317490\") " pod="openstack/nova-cell1-novncproxy-0"
Nov 22 07:49:14 crc kubenswrapper[4853]: I1122 07:49:14.363713 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kvp2j\" (UniqueName: \"kubernetes.io/projected/1cc8da91-f334-4196-aa2f-191e55317490-kube-api-access-kvp2j\") pod \"nova-cell1-novncproxy-0\" (UID: \"1cc8da91-f334-4196-aa2f-191e55317490\") " pod="openstack/nova-cell1-novncproxy-0"
Nov 22 07:49:14 crc kubenswrapper[4853]: I1122 07:49:14.437817 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Nov 22 07:49:14 crc kubenswrapper[4853]: I1122 07:49:14.440504 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Nov 22 07:49:14 crc kubenswrapper[4853]: I1122 07:49:14.447328 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Nov 22 07:49:14 crc kubenswrapper[4853]: I1122 07:49:14.503247 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-b4ffw"
Nov 22 07:49:14 crc kubenswrapper[4853]: I1122 07:49:14.507587 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b4ffw" event={"ID":"863918c2-c760-4c96-888f-a778bcbb018b","Type":"ContainerDied","Data":"bde6935f3894d331461ef6321f0eb277fa58a3d4a54f531fbbb81ab1202a246f"}
Nov 22 07:49:14 crc kubenswrapper[4853]: I1122 07:49:14.507703 4853 scope.go:117] "RemoveContainer" containerID="a9a9263672f9a28bf870f11897f00cd833a8712fc79992462d3aad95065a783d"
Nov 22 07:49:14 crc kubenswrapper[4853]: I1122 07:49:14.510575 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Nov 22 07:49:14 crc kubenswrapper[4853]: I1122 07:49:14.517717 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Nov 22 07:49:14 crc kubenswrapper[4853]: I1122 07:49:14.605475 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Nov 22 07:49:14 crc kubenswrapper[4853]: I1122 07:49:14.607106 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2x72c\" (UniqueName: \"kubernetes.io/projected/96bf10be-9206-4b95-af69-8f41e5e530c6-kube-api-access-2x72c\") pod \"nova-api-0\" (UID: \"96bf10be-9206-4b95-af69-8f41e5e530c6\") " pod="openstack/nova-api-0"
Nov 22 07:49:14 crc kubenswrapper[4853]: I1122 07:49:14.607239 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/96bf10be-9206-4b95-af69-8f41e5e530c6-logs\") pod \"nova-api-0\" (UID: \"96bf10be-9206-4b95-af69-8f41e5e530c6\") " pod="openstack/nova-api-0"
Nov 22 07:49:14 crc kubenswrapper[4853]: I1122 07:49:14.607850 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96bf10be-9206-4b95-af69-8f41e5e530c6-config-data\") pod \"nova-api-0\" (UID: \"96bf10be-9206-4b95-af69-8f41e5e530c6\") " pod="openstack/nova-api-0"
Nov 22 07:49:14 crc kubenswrapper[4853]: I1122 07:49:14.641994 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96bf10be-9206-4b95-af69-8f41e5e530c6-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"96bf10be-9206-4b95-af69-8f41e5e530c6\") " pod="openstack/nova-api-0"
Nov 22 07:49:14 crc kubenswrapper[4853]: I1122 07:49:14.646359 4853 scope.go:117] "RemoveContainer" containerID="e4b929b6b2c6ec6f2cec73d9b84f4a13113d088987c887b0762e34106a97eb58"
Nov 22 07:49:14 crc kubenswrapper[4853]: I1122 07:49:14.649935 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"]
Nov 22 07:49:14 crc kubenswrapper[4853]: I1122 07:49:14.653443 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Nov 22 07:49:14 crc kubenswrapper[4853]: I1122 07:49:14.660514 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data"
Nov 22 07:49:14 crc kubenswrapper[4853]: I1122 07:49:14.667080 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Nov 22 07:49:14 crc kubenswrapper[4853]: I1122 07:49:14.747573 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9c1fb660-eb53-4b4c-8456-4c3288a27a7a-logs\") pod \"nova-metadata-0\" (UID: \"9c1fb660-eb53-4b4c-8456-4c3288a27a7a\") " pod="openstack/nova-metadata-0"
Nov 22 07:49:14 crc kubenswrapper[4853]: I1122 07:49:14.747679 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2x72c\" (UniqueName: \"kubernetes.io/projected/96bf10be-9206-4b95-af69-8f41e5e530c6-kube-api-access-2x72c\") pod \"nova-api-0\" (UID: \"96bf10be-9206-4b95-af69-8f41e5e530c6\") " pod="openstack/nova-api-0"
Nov 22 07:49:14 crc kubenswrapper[4853]: I1122 07:49:14.747715 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/96bf10be-9206-4b95-af69-8f41e5e530c6-logs\") pod \"nova-api-0\" (UID: \"96bf10be-9206-4b95-af69-8f41e5e530c6\") " pod="openstack/nova-api-0"
Nov 22 07:49:14 crc kubenswrapper[4853]: I1122 07:49:14.748927 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Nov 22 07:49:14 crc kubenswrapper[4853]: I1122 07:49:14.752159 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kdtt2\" (UniqueName: \"kubernetes.io/projected/9c1fb660-eb53-4b4c-8456-4c3288a27a7a-kube-api-access-kdtt2\") pod \"nova-metadata-0\" (UID: \"9c1fb660-eb53-4b4c-8456-4c3288a27a7a\") " pod="openstack/nova-metadata-0"
Nov 22 07:49:14 crc kubenswrapper[4853]: I1122 07:49:14.752239 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c1fb660-eb53-4b4c-8456-4c3288a27a7a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"9c1fb660-eb53-4b4c-8456-4c3288a27a7a\") " pod="openstack/nova-metadata-0"
Nov 22 07:49:14 crc kubenswrapper[4853]: I1122 07:49:14.752299 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96bf10be-9206-4b95-af69-8f41e5e530c6-config-data\") pod \"nova-api-0\" (UID: \"96bf10be-9206-4b95-af69-8f41e5e530c6\") " pod="openstack/nova-api-0"
Nov 22 07:49:14 crc kubenswrapper[4853]: I1122 07:49:14.752410 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c1fb660-eb53-4b4c-8456-4c3288a27a7a-config-data\") pod \"nova-metadata-0\" (UID: \"9c1fb660-eb53-4b4c-8456-4c3288a27a7a\") " pod="openstack/nova-metadata-0"
Nov 22 07:49:14 crc kubenswrapper[4853]: I1122 07:49:14.752478 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96bf10be-9206-4b95-af69-8f41e5e530c6-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"96bf10be-9206-4b95-af69-8f41e5e530c6\") " pod="openstack/nova-api-0"
Nov 22 07:49:14 crc kubenswrapper[4853]: I1122 07:49:14.753103 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/96bf10be-9206-4b95-af69-8f41e5e530c6-logs\") pod \"nova-api-0\" (UID: \"96bf10be-9206-4b95-af69-8f41e5e530c6\") " pod="openstack/nova-api-0"
Nov 22 07:49:14 crc kubenswrapper[4853]: I1122 07:49:14.780681 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96bf10be-9206-4b95-af69-8f41e5e530c6-config-data\") pod \"nova-api-0\" (UID: \"96bf10be-9206-4b95-af69-8f41e5e530c6\") " pod="openstack/nova-api-0"
Nov 22 07:49:14 crc kubenswrapper[4853]: I1122 07:49:14.781361 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96bf10be-9206-4b95-af69-8f41e5e530c6-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"96bf10be-9206-4b95-af69-8f41e5e530c6\") " pod="openstack/nova-api-0"
Nov 22 07:49:14 crc kubenswrapper[4853]: I1122 07:49:14.796517 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Nov 22 07:49:14 crc kubenswrapper[4853]: I1122 07:49:14.815707 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2x72c\" (UniqueName: \"kubernetes.io/projected/96bf10be-9206-4b95-af69-8f41e5e530c6-kube-api-access-2x72c\") pod \"nova-api-0\" (UID: \"96bf10be-9206-4b95-af69-8f41e5e530c6\") " pod="openstack/nova-api-0"
Nov 22 07:49:14 crc kubenswrapper[4853]: I1122 07:49:14.822811 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Nov 22 07:49:14 crc kubenswrapper[4853]: I1122 07:49:14.841474 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7d978555f9-h6mvt"]
Nov 22 07:49:14 crc kubenswrapper[4853]: I1122 07:49:14.867331 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7877d89589-qms8z"]
Nov 22 07:49:14 crc kubenswrapper[4853]: I1122 07:49:14.869099 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kdtt2\" (UniqueName: \"kubernetes.io/projected/9c1fb660-eb53-4b4c-8456-4c3288a27a7a-kube-api-access-kdtt2\") pod \"nova-metadata-0\" (UID: \"9c1fb660-eb53-4b4c-8456-4c3288a27a7a\") " pod="openstack/nova-metadata-0"
Nov 22 07:49:14 crc kubenswrapper[4853]: I1122 07:49:14.869214 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c1fb660-eb53-4b4c-8456-4c3288a27a7a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"9c1fb660-eb53-4b4c-8456-4c3288a27a7a\") " pod="openstack/nova-metadata-0"
Nov 22 07:49:14 crc kubenswrapper[4853]: I1122 07:49:14.869387 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c1fb660-eb53-4b4c-8456-4c3288a27a7a-config-data\") pod \"nova-metadata-0\" (UID: \"9c1fb660-eb53-4b4c-8456-4c3288a27a7a\") " pod="openstack/nova-metadata-0"
Nov 22 07:49:14 crc kubenswrapper[4853]: I1122 07:49:14.869451 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3bacd6f9-077c-4dee-aeef-3b546162391b-config-data\") pod \"nova-scheduler-0\" (UID: \"3bacd6f9-077c-4dee-aeef-3b546162391b\") " pod="openstack/nova-scheduler-0"
Nov 22 07:49:14 crc kubenswrapper[4853]: I1122 07:49:14.869592 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bacd6f9-077c-4dee-aeef-3b546162391b-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"3bacd6f9-077c-4dee-aeef-3b546162391b\") " pod="openstack/nova-scheduler-0"
Nov 22 07:49:14 crc kubenswrapper[4853]: I1122 07:49:14.869678 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9n2qm\" (UniqueName: \"kubernetes.io/projected/3bacd6f9-077c-4dee-aeef-3b546162391b-kube-api-access-9n2qm\") pod \"nova-scheduler-0\" (UID: \"3bacd6f9-077c-4dee-aeef-3b546162391b\") " pod="openstack/nova-scheduler-0"
Nov 22 07:49:14 crc kubenswrapper[4853]: I1122 07:49:14.869921 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9c1fb660-eb53-4b4c-8456-4c3288a27a7a-logs\") pod \"nova-metadata-0\" (UID: \"9c1fb660-eb53-4b4c-8456-4c3288a27a7a\") " pod="openstack/nova-metadata-0"
Nov 22 07:49:14 crc kubenswrapper[4853]: I1122 07:49:14.870957 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7877d89589-qms8z"
Nov 22 07:49:14 crc kubenswrapper[4853]: I1122 07:49:14.872402 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9c1fb660-eb53-4b4c-8456-4c3288a27a7a-logs\") pod \"nova-metadata-0\" (UID: \"9c1fb660-eb53-4b4c-8456-4c3288a27a7a\") " pod="openstack/nova-metadata-0"
Nov 22 07:49:14 crc kubenswrapper[4853]: I1122 07:49:14.906114 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7877d89589-qms8z"]
Nov 22 07:49:14 crc kubenswrapper[4853]: I1122 07:49:14.915337 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c1fb660-eb53-4b4c-8456-4c3288a27a7a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"9c1fb660-eb53-4b4c-8456-4c3288a27a7a\") " pod="openstack/nova-metadata-0"
Nov 22 07:49:14 crc kubenswrapper[4853]: I1122 07:49:14.941893 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c1fb660-eb53-4b4c-8456-4c3288a27a7a-config-data\") pod \"nova-metadata-0\" (UID: \"9c1fb660-eb53-4b4c-8456-4c3288a27a7a\") " pod="openstack/nova-metadata-0"
Nov 22 07:49:14 crc kubenswrapper[4853]: I1122 07:49:14.978848 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3bacd6f9-077c-4dee-aeef-3b546162391b-config-data\") pod \"nova-scheduler-0\" (UID: \"3bacd6f9-077c-4dee-aeef-3b546162391b\") " pod="openstack/nova-scheduler-0"
Nov 22 07:49:14 crc kubenswrapper[4853]: I1122 07:49:14.979381 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d7e1b24e-7343-4816-8c6e-86c7af484d6f-ovsdbserver-sb\") pod \"dnsmasq-dns-7877d89589-qms8z\" (UID: \"d7e1b24e-7343-4816-8c6e-86c7af484d6f\") " pod="openstack/dnsmasq-dns-7877d89589-qms8z"
Nov 22 07:49:14 crc kubenswrapper[4853]: I1122 07:49:14.979595 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bacd6f9-077c-4dee-aeef-3b546162391b-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"3bacd6f9-077c-4dee-aeef-3b546162391b\") " pod="openstack/nova-scheduler-0"
Nov 22 07:49:14 crc kubenswrapper[4853]: I1122 07:49:14.979789 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d7e1b24e-7343-4816-8c6e-86c7af484d6f-ovsdbserver-nb\") pod \"dnsmasq-dns-7877d89589-qms8z\" (UID: \"d7e1b24e-7343-4816-8c6e-86c7af484d6f\") " pod="openstack/dnsmasq-dns-7877d89589-qms8z"
Nov 22 07:49:14 crc kubenswrapper[4853]: I1122 07:49:14.979925 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6qjd\" (UniqueName: \"kubernetes.io/projected/d7e1b24e-7343-4816-8c6e-86c7af484d6f-kube-api-access-w6qjd\") pod \"dnsmasq-dns-7877d89589-qms8z\" (UID: \"d7e1b24e-7343-4816-8c6e-86c7af484d6f\") " pod="openstack/dnsmasq-dns-7877d89589-qms8z"
Nov 22 07:49:14 crc kubenswrapper[4853]: I1122 07:49:14.980098 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9n2qm\" (UniqueName: \"kubernetes.io/projected/3bacd6f9-077c-4dee-aeef-3b546162391b-kube-api-access-9n2qm\") pod \"nova-scheduler-0\" (UID: \"3bacd6f9-077c-4dee-aeef-3b546162391b\") " pod="openstack/nova-scheduler-0"
Nov 22 07:49:14 crc kubenswrapper[4853]: I1122 07:49:14.980296 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d7e1b24e-7343-4816-8c6e-86c7af484d6f-dns-svc\") pod \"dnsmasq-dns-7877d89589-qms8z\" (UID: \"d7e1b24e-7343-4816-8c6e-86c7af484d6f\") " pod="openstack/dnsmasq-dns-7877d89589-qms8z"
Nov 22 07:49:14 crc kubenswrapper[4853]: I1122 07:49:14.980679 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7e1b24e-7343-4816-8c6e-86c7af484d6f-config\") pod \"dnsmasq-dns-7877d89589-qms8z\" (UID: \"d7e1b24e-7343-4816-8c6e-86c7af484d6f\") " pod="openstack/dnsmasq-dns-7877d89589-qms8z"
Nov 22 07:49:14 crc kubenswrapper[4853]: I1122 07:49:14.981513 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d7e1b24e-7343-4816-8c6e-86c7af484d6f-dns-swift-storage-0\") pod \"dnsmasq-dns-7877d89589-qms8z\" (UID: \"d7e1b24e-7343-4816-8c6e-86c7af484d6f\") " pod="openstack/dnsmasq-dns-7877d89589-qms8z"
Nov 22 07:49:15 crc kubenswrapper[4853]: I1122 07:49:14.998373 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3bacd6f9-077c-4dee-aeef-3b546162391b-config-data\") pod \"nova-scheduler-0\" (UID: \"3bacd6f9-077c-4dee-aeef-3b546162391b\") " pod="openstack/nova-scheduler-0"
Nov 22 07:49:15 crc kubenswrapper[4853]: I1122 07:49:15.008665 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kdtt2\" (UniqueName: \"kubernetes.io/projected/9c1fb660-eb53-4b4c-8456-4c3288a27a7a-kube-api-access-kdtt2\") pod \"nova-metadata-0\" (UID: \"9c1fb660-eb53-4b4c-8456-4c3288a27a7a\") " pod="openstack/nova-metadata-0"
Nov 22 07:49:15 crc kubenswrapper[4853]: I1122 07:49:15.009515 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bacd6f9-077c-4dee-aeef-3b546162391b-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"3bacd6f9-077c-4dee-aeef-3b546162391b\") " pod="openstack/nova-scheduler-0"
Nov 22 07:49:15 crc kubenswrapper[4853]: I1122 07:49:15.026699 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9n2qm\" (UniqueName: \"kubernetes.io/projected/3bacd6f9-077c-4dee-aeef-3b546162391b-kube-api-access-9n2qm\") pod \"nova-scheduler-0\" (UID: \"3bacd6f9-077c-4dee-aeef-3b546162391b\") " pod="openstack/nova-scheduler-0"
Nov 22 07:49:15 crc kubenswrapper[4853]: I1122 07:49:15.085185 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d7e1b24e-7343-4816-8c6e-86c7af484d6f-ovsdbserver-sb\") pod \"dnsmasq-dns-7877d89589-qms8z\" (UID: \"d7e1b24e-7343-4816-8c6e-86c7af484d6f\") " pod="openstack/dnsmasq-dns-7877d89589-qms8z"
Nov 22 07:49:15 crc kubenswrapper[4853]: I1122 07:49:15.085284 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d7e1b24e-7343-4816-8c6e-86c7af484d6f-ovsdbserver-nb\") pod \"dnsmasq-dns-7877d89589-qms8z\" (UID: \"d7e1b24e-7343-4816-8c6e-86c7af484d6f\") " pod="openstack/dnsmasq-dns-7877d89589-qms8z"
Nov 22 07:49:15 crc kubenswrapper[4853]: I1122 07:49:15.085318 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w6qjd\" (UniqueName: \"kubernetes.io/projected/d7e1b24e-7343-4816-8c6e-86c7af484d6f-kube-api-access-w6qjd\") pod \"dnsmasq-dns-7877d89589-qms8z\" (UID: \"d7e1b24e-7343-4816-8c6e-86c7af484d6f\") " pod="openstack/dnsmasq-dns-7877d89589-qms8z"
Nov 22 07:49:15 crc kubenswrapper[4853]: I1122 07:49:15.085379 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d7e1b24e-7343-4816-8c6e-86c7af484d6f-dns-svc\") pod \"dnsmasq-dns-7877d89589-qms8z\" (UID: \"d7e1b24e-7343-4816-8c6e-86c7af484d6f\") " pod="openstack/dnsmasq-dns-7877d89589-qms8z"
Nov 22 07:49:15 crc kubenswrapper[4853]: I1122 07:49:15.085489 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7e1b24e-7343-4816-8c6e-86c7af484d6f-config\") pod \"dnsmasq-dns-7877d89589-qms8z\" (UID: \"d7e1b24e-7343-4816-8c6e-86c7af484d6f\") " pod="openstack/dnsmasq-dns-7877d89589-qms8z"
Nov 22 07:49:15 crc kubenswrapper[4853]: I1122 07:49:15.085520 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d7e1b24e-7343-4816-8c6e-86c7af484d6f-dns-swift-storage-0\") pod \"dnsmasq-dns-7877d89589-qms8z\" (UID: \"d7e1b24e-7343-4816-8c6e-86c7af484d6f\") " pod="openstack/dnsmasq-dns-7877d89589-qms8z"
Nov 22 07:49:15 crc kubenswrapper[4853]: I1122 07:49:15.098037 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d7e1b24e-7343-4816-8c6e-86c7af484d6f-ovsdbserver-nb\") pod \"dnsmasq-dns-7877d89589-qms8z\" (UID: \"d7e1b24e-7343-4816-8c6e-86c7af484d6f\") " pod="openstack/dnsmasq-dns-7877d89589-qms8z"
Nov 22 07:49:15 crc kubenswrapper[4853]: I1122 07:49:15.098834 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7e1b24e-7343-4816-8c6e-86c7af484d6f-config\") pod \"dnsmasq-dns-7877d89589-qms8z\" (UID: \"d7e1b24e-7343-4816-8c6e-86c7af484d6f\") " pod="openstack/dnsmasq-dns-7877d89589-qms8z"
Nov 22 07:49:15 crc kubenswrapper[4853]: I1122 07:49:15.099812 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Nov 22 07:49:15 crc kubenswrapper[4853]: I1122 07:49:15.099982 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d7e1b24e-7343-4816-8c6e-86c7af484d6f-dns-svc\") pod \"dnsmasq-dns-7877d89589-qms8z\" (UID: \"d7e1b24e-7343-4816-8c6e-86c7af484d6f\") " pod="openstack/dnsmasq-dns-7877d89589-qms8z"
Nov 22 07:49:15 crc kubenswrapper[4853]: I1122 07:49:15.100692 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d7e1b24e-7343-4816-8c6e-86c7af484d6f-ovsdbserver-sb\") pod \"dnsmasq-dns-7877d89589-qms8z\" (UID: \"d7e1b24e-7343-4816-8c6e-86c7af484d6f\") " pod="openstack/dnsmasq-dns-7877d89589-qms8z"
Nov 22 07:49:15 crc kubenswrapper[4853]: I1122 07:49:15.101094 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d7e1b24e-7343-4816-8c6e-86c7af484d6f-dns-swift-storage-0\") pod \"dnsmasq-dns-7877d89589-qms8z\" (UID: \"d7e1b24e-7343-4816-8c6e-86c7af484d6f\") " pod="openstack/dnsmasq-dns-7877d89589-qms8z"
Nov 22 07:49:15 crc kubenswrapper[4853]: I1122 07:49:15.131492 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w6qjd\" (UniqueName: \"kubernetes.io/projected/d7e1b24e-7343-4816-8c6e-86c7af484d6f-kube-api-access-w6qjd\") pod \"dnsmasq-dns-7877d89589-qms8z\" (UID: \"d7e1b24e-7343-4816-8c6e-86c7af484d6f\") " pod="openstack/dnsmasq-dns-7877d89589-qms8z"
Nov 22 07:49:15 crc kubenswrapper[4853]: I1122 07:49:15.253379 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-b4ffw"]
Nov 22 07:49:15 crc kubenswrapper[4853]: I1122 07:49:15.341612 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Nov 22 07:49:15 crc kubenswrapper[4853]: I1122 07:49:15.372483 4853 scope.go:117] "RemoveContainer" containerID="eb48bec721ed10b57f756d146b62cc3cca0429039b2b396723241552af1886a7"
Nov 22 07:49:15 crc kubenswrapper[4853]: I1122 07:49:15.374510 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Nov 22 07:49:15 crc kubenswrapper[4853]: I1122 07:49:15.384653 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-b4ffw"]
Nov 22 07:49:15 crc kubenswrapper[4853]: I1122 07:49:15.528564 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7877d89589-qms8z"
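Two variants of the sandbox message recur above: util.go:30 ("No sandbox for pod can be found") for pods that have never had a sandbox, and util.go:48 ("No ready sandbox for pod can be found") for pods whose existing sandbox is no longer ready. A hedged sketch of that decision, with a made-up sandbox type standing in for the runtime's pod sandbox status (not the actual kubelet helpers):

package main

import "fmt"

// sandbox is a hypothetical stand-in for a pod sandbox status record.
type sandbox struct{ ready bool }

// needsNewSandbox distinguishes the two logged cases: no sandbox at all,
// and a sandbox that exists but is not ready. Either way a new one starts.
func needsNewSandbox(s *sandbox) (bool, string) {
	switch {
	case s == nil:
		return true, "No sandbox for pod can be found. Need to start a new one"
	case !s.ready:
		return true, "No ready sandbox for pod can be found. Need to start a new one"
	default:
		return false, ""
	}
}

func main() {
	for _, s := range []*sandbox{nil, {ready: false}, {ready: true}} {
		if yes, msg := needsNewSandbox(s); yes {
			fmt.Println(msg)
		} else {
			fmt.Println("sandbox is ready; reuse it")
		}
	}
}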
Nov 22 07:49:15 crc kubenswrapper[4853]: I1122 07:49:15.627929 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7d978555f9-h6mvt" podUID="3c7aba99-05bf-4e98-824d-0a2b56ac555d" containerName="dnsmasq-dns" containerID="cri-o://90ba218be1e3018d518f38850c5f0e46d956b83ef8198535c2eb232ab15d6bbf" gracePeriod=10
Nov 22 07:49:15 crc kubenswrapper[4853]: I1122 07:49:15.878565 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="863918c2-c760-4c96-888f-a778bcbb018b" path="/var/lib/kubelet/pods/863918c2-c760-4c96-888f-a778bcbb018b/volumes"
Nov 22 07:49:15 crc kubenswrapper[4853]: I1122 07:49:15.968713 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-cbsz9"]
Nov 22 07:49:16 crc kubenswrapper[4853]: I1122 07:49:16.021038 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7c78d4ccd7-pvf4q"]
Nov 22 07:49:16 crc kubenswrapper[4853]: I1122 07:49:16.442659 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Nov 22 07:49:16 crc kubenswrapper[4853]: W1122 07:49:16.622374 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1cc8da91_f334_4196_aa2f_191e55317490.slice/crio-ecba6d7ab26a7cc1260709f72518be59f3dc013eef7d17573b897ab36d105f97 WatchSource:0}: Error finding container ecba6d7ab26a7cc1260709f72518be59f3dc013eef7d17573b897ab36d105f97: Status 404 returned error can't find the container with id ecba6d7ab26a7cc1260709f72518be59f3dc013eef7d17573b897ab36d105f97
Nov 22 07:49:16 crc kubenswrapper[4853]: I1122 07:49:16.709635 4853 generic.go:334] "Generic (PLEG): container finished" podID="3c7aba99-05bf-4e98-824d-0a2b56ac555d" containerID="90ba218be1e3018d518f38850c5f0e46d956b83ef8198535c2eb232ab15d6bbf" exitCode=0
Nov 22 07:49:16 crc kubenswrapper[4853]: I1122 07:49:16.710216 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d978555f9-h6mvt" event={"ID":"3c7aba99-05bf-4e98-824d-0a2b56ac555d","Type":"ContainerDied","Data":"90ba218be1e3018d518f38850c5f0e46d956b83ef8198535c2eb232ab15d6bbf"}
Nov 22 07:49:16 crc kubenswrapper[4853]: I1122 07:49:16.730040 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Nov 22 07:49:16 crc kubenswrapper[4853]: I1122 07:49:16.731885 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-cbsz9" event={"ID":"42ee627d-63e1-4a7f-9da3-aca02dcd4cec","Type":"ContainerStarted","Data":"730ecb62360f5da52d8586fd4c3f911e30e219ec6a3083c01055606d36302a04"}
Nov 22 07:49:16 crc kubenswrapper[4853]: I1122 07:49:16.756836 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"01047ee7-2bc8-487e-a7f2-8696bd86fd13","Type":"ContainerStarted","Data":"4a7c1bfa5901a81f06c4f2963121831d49fee56f624c1c1c363bfa002dcee2bb"}
Nov 22 07:49:16 crc kubenswrapper[4853]: I1122 07:49:16.757496 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Nov 22 07:49:16 crc kubenswrapper[4853]: I1122 07:49:16.766831 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7c78d4ccd7-pvf4q" event={"ID":"47723ce1-f48e-4d1d-a0a8-4f49dfce7070","Type":"ContainerStarted","Data":"f72ed0d1fbedbf280ee51b921dc5511098c2444fa8151b28933fa47917f27635"}
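The "Killing container with a grace period" entry above (gracePeriod=10) is followed about a second later by the dnsmasq-dns container finishing with exitCode=0: the runtime sends SIGTERM, waits up to the grace period, and escalates to SIGKILL only on timeout. An illustrative sketch of that pattern, with a channel standing in for the real runtime's wait-for-exit call (this is not CRI-O or kubelet code):

package main

import (
	"fmt"
	"time"
)

// killWithGrace models SIGTERM-then-SIGKILL: deliver the polite signal,
// then race the container's exit against the grace-period timer.
func killWithGrace(exited <-chan struct{}, grace time.Duration) {
	fmt.Println("sending SIGTERM")
	select {
	case <-exited:
		fmt.Println("container exited within grace period (exitCode=0 case above)")
	case <-time.After(grace):
		fmt.Println("grace period expired; sending SIGKILL")
	}
}

func main() {
	exited := make(chan struct{})
	go func() {
		time.Sleep(500 * time.Millisecond) // simulated dnsmasq shutdown time
		close(exited)
	}()
	killWithGrace(exited, 10*time.Second)
}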
Nov 22 07:49:16 crc kubenswrapper[4853]: I1122 07:49:16.778875 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"1cc8da91-f334-4196-aa2f-191e55317490","Type":"ContainerStarted","Data":"ecba6d7ab26a7cc1260709f72518be59f3dc013eef7d17573b897ab36d105f97"}
Nov 22 07:49:16 crc kubenswrapper[4853]: I1122 07:49:16.809325 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.32582845 podStartE2EDuration="8.809299655s" podCreationTimestamp="2025-11-22 07:49:08 +0000 UTC" firstStartedPulling="2025-11-22 07:49:09.352728953 +0000 UTC m=+2348.193351579" lastFinishedPulling="2025-11-22 07:49:14.836200168 +0000 UTC m=+2353.676822784" observedRunningTime="2025-11-22 07:49:16.808135433 +0000 UTC m=+2355.648758059" watchObservedRunningTime="2025-11-22 07:49:16.809299655 +0000 UTC m=+2355.649922281"
Nov 22 07:49:17 crc kubenswrapper[4853]: I1122 07:49:17.312297 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Nov 22 07:49:17 crc kubenswrapper[4853]: I1122 07:49:17.578906 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d978555f9-h6mvt"
Nov 22 07:49:17 crc kubenswrapper[4853]: I1122 07:49:17.684996 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3c7aba99-05bf-4e98-824d-0a2b56ac555d-ovsdbserver-sb\") pod \"3c7aba99-05bf-4e98-824d-0a2b56ac555d\" (UID: \"3c7aba99-05bf-4e98-824d-0a2b56ac555d\") "
Nov 22 07:49:17 crc kubenswrapper[4853]: I1122 07:49:17.685084 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3c7aba99-05bf-4e98-824d-0a2b56ac555d-dns-svc\") pod \"3c7aba99-05bf-4e98-824d-0a2b56ac555d\" (UID: \"3c7aba99-05bf-4e98-824d-0a2b56ac555d\") "
Nov 22 07:49:17 crc kubenswrapper[4853]: I1122 07:49:17.685194 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-glzzr\" (UniqueName: \"kubernetes.io/projected/3c7aba99-05bf-4e98-824d-0a2b56ac555d-kube-api-access-glzzr\") pod \"3c7aba99-05bf-4e98-824d-0a2b56ac555d\" (UID: \"3c7aba99-05bf-4e98-824d-0a2b56ac555d\") "
Nov 22 07:49:17 crc kubenswrapper[4853]: I1122 07:49:17.685386 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c7aba99-05bf-4e98-824d-0a2b56ac555d-config\") pod \"3c7aba99-05bf-4e98-824d-0a2b56ac555d\" (UID: \"3c7aba99-05bf-4e98-824d-0a2b56ac555d\") "
Nov 22 07:49:17 crc kubenswrapper[4853]: I1122 07:49:17.685510 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3c7aba99-05bf-4e98-824d-0a2b56ac555d-dns-swift-storage-0\") pod \"3c7aba99-05bf-4e98-824d-0a2b56ac555d\" (UID: \"3c7aba99-05bf-4e98-824d-0a2b56ac555d\") "
Nov 22 07:49:17 crc kubenswrapper[4853]: I1122 07:49:17.685644 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3c7aba99-05bf-4e98-824d-0a2b56ac555d-ovsdbserver-nb\") pod \"3c7aba99-05bf-4e98-824d-0a2b56ac555d\" (UID: \"3c7aba99-05bf-4e98-824d-0a2b56ac555d\") "
Nov 22 07:49:17 crc kubenswrapper[4853]: I1122 07:49:17.716014 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c7aba99-05bf-4e98-824d-0a2b56ac555d-kube-api-access-glzzr" (OuterVolumeSpecName: "kube-api-access-glzzr") pod "3c7aba99-05bf-4e98-824d-0a2b56ac555d" (UID: "3c7aba99-05bf-4e98-824d-0a2b56ac555d"). InnerVolumeSpecName "kube-api-access-glzzr". PluginName "kubernetes.io/projected", VolumeGidValue ""
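The ceilometer-0 startup line above is simple arithmetic: podStartE2EDuration is observedRunningTime minus podCreationTimestamp (07:49:16.809299655 - 07:49:08 = 8.809299655s), and podStartSLOduration additionally excludes the image-pull window (8.809299655s - (07:49:14.836200168 - 07:49:09.352728953) = 3.32582844s, matching the logged 3.32582845 up to float rounding). The same computation in Go, using only the timestamps taken from the log entry:

package main

import (
	"fmt"
	"time"
)

func main() {
	// parse reads timestamps in the exact format the log line uses.
	parse := func(s string) time.Time {
		t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
		if err != nil {
			panic(err)
		}
		return t
	}
	created := parse("2025-11-22 07:49:08 +0000 UTC")
	firstPull := parse("2025-11-22 07:49:09.352728953 +0000 UTC")
	lastPull := parse("2025-11-22 07:49:14.836200168 +0000 UTC")
	running := parse("2025-11-22 07:49:16.809299655 +0000 UTC")

	e2e := running.Sub(created)           // total wall time to running
	slo := e2e - lastPull.Sub(firstPull)  // minus the image-pull window
	fmt.Println("podStartE2EDuration:", e2e) // 8.809299655s
	fmt.Println("podStartSLOduration:", slo) // 3.32582844s
}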
"kubernetes.io/projected/3c7aba99-05bf-4e98-824d-0a2b56ac555d-kube-api-access-glzzr" (OuterVolumeSpecName: "kube-api-access-glzzr") pod "3c7aba99-05bf-4e98-824d-0a2b56ac555d" (UID: "3c7aba99-05bf-4e98-824d-0a2b56ac555d"). InnerVolumeSpecName "kube-api-access-glzzr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:49:17 crc kubenswrapper[4853]: I1122 07:49:17.823839 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-glzzr\" (UniqueName: \"kubernetes.io/projected/3c7aba99-05bf-4e98-824d-0a2b56ac555d-kube-api-access-glzzr\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:17 crc kubenswrapper[4853]: I1122 07:49:17.869599 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-cbsz9" podStartSLOduration=4.869548171 podStartE2EDuration="4.869548171s" podCreationTimestamp="2025-11-22 07:49:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:49:17.858265606 +0000 UTC m=+2356.698888232" watchObservedRunningTime="2025-11-22 07:49:17.869548171 +0000 UTC m=+2356.710170797" Nov 22 07:49:17 crc kubenswrapper[4853]: I1122 07:49:17.924242 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d978555f9-h6mvt" Nov 22 07:49:18 crc kubenswrapper[4853]: I1122 07:49:18.294450 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c7aba99-05bf-4e98-824d-0a2b56ac555d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "3c7aba99-05bf-4e98-824d-0a2b56ac555d" (UID: "3c7aba99-05bf-4e98-824d-0a2b56ac555d"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:49:18 crc kubenswrapper[4853]: I1122 07:49:18.330131 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c7aba99-05bf-4e98-824d-0a2b56ac555d-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "3c7aba99-05bf-4e98-824d-0a2b56ac555d" (UID: "3c7aba99-05bf-4e98-824d-0a2b56ac555d"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:49:18 crc kubenswrapper[4853]: I1122 07:49:18.343555 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c7aba99-05bf-4e98-824d-0a2b56ac555d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "3c7aba99-05bf-4e98-824d-0a2b56ac555d" (UID: "3c7aba99-05bf-4e98-824d-0a2b56ac555d"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:49:18 crc kubenswrapper[4853]: I1122 07:49:18.346909 4853 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3c7aba99-05bf-4e98-824d-0a2b56ac555d-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:18 crc kubenswrapper[4853]: I1122 07:49:18.347212 4853 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3c7aba99-05bf-4e98-824d-0a2b56ac555d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:18 crc kubenswrapper[4853]: I1122 07:49:18.347274 4853 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3c7aba99-05bf-4e98-824d-0a2b56ac555d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:18 crc kubenswrapper[4853]: I1122 07:49:18.379137 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c7aba99-05bf-4e98-824d-0a2b56ac555d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3c7aba99-05bf-4e98-824d-0a2b56ac555d" (UID: "3c7aba99-05bf-4e98-824d-0a2b56ac555d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:49:18 crc kubenswrapper[4853]: I1122 07:49:18.389492 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c7aba99-05bf-4e98-824d-0a2b56ac555d-config" (OuterVolumeSpecName: "config") pod "3c7aba99-05bf-4e98-824d-0a2b56ac555d" (UID: "3c7aba99-05bf-4e98-824d-0a2b56ac555d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:49:18 crc kubenswrapper[4853]: I1122 07:49:18.450466 4853 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c7aba99-05bf-4e98-824d-0a2b56ac555d-config\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:18 crc kubenswrapper[4853]: I1122 07:49:18.450517 4853 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3c7aba99-05bf-4e98-824d-0a2b56ac555d-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:18 crc kubenswrapper[4853]: I1122 07:49:18.506290 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 22 07:49:18 crc kubenswrapper[4853]: I1122 07:49:18.506666 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-cbsz9" event={"ID":"42ee627d-63e1-4a7f-9da3-aca02dcd4cec","Type":"ContainerStarted","Data":"dcf56e335ecbb41ee55bd67167913c1cce60d9282cd45960933a48273bac10c8"} Nov 22 07:49:18 crc kubenswrapper[4853]: I1122 07:49:18.506807 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7877d89589-qms8z"] Nov 22 07:49:18 crc kubenswrapper[4853]: I1122 07:49:18.506920 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9c1fb660-eb53-4b4c-8456-4c3288a27a7a","Type":"ContainerStarted","Data":"38875a176b12afecd7e5465de4e31f03e2d164bf298f0373b52403bd43b47ae9"} Nov 22 07:49:18 crc kubenswrapper[4853]: I1122 07:49:18.507024 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"3bacd6f9-077c-4dee-aeef-3b546162391b","Type":"ContainerStarted","Data":"5a2291e7fb13c2a1b7b68a9370c1118d084c4683fabacdec750083222e596a8b"} Nov 22 07:49:18 crc kubenswrapper[4853]: I1122 07:49:18.507115 4853 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/neutron-7c78d4ccd7-pvf4q" event={"ID":"47723ce1-f48e-4d1d-a0a8-4f49dfce7070","Type":"ContainerStarted","Data":"221cd133d858c212ed56cab102569e3598c7c3740aeb5a96361e231360177ebe"} Nov 22 07:49:18 crc kubenswrapper[4853]: I1122 07:49:18.507205 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d978555f9-h6mvt" event={"ID":"3c7aba99-05bf-4e98-824d-0a2b56ac555d","Type":"ContainerDied","Data":"8b5ec68587b93d5da8e8a8727d4171e3a5cea9df806f5029c37cf362f6bb499d"} Nov 22 07:49:18 crc kubenswrapper[4853]: I1122 07:49:18.507299 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"96bf10be-9206-4b95-af69-8f41e5e530c6","Type":"ContainerStarted","Data":"a90de83108b55a574d18846a553ce0499c2965268f31bc48acd0352e5f9537ef"} Nov 22 07:49:18 crc kubenswrapper[4853]: I1122 07:49:18.507350 4853 scope.go:117] "RemoveContainer" containerID="90ba218be1e3018d518f38850c5f0e46d956b83ef8198535c2eb232ab15d6bbf" Nov 22 07:49:18 crc kubenswrapper[4853]: I1122 07:49:18.590039 4853 scope.go:117] "RemoveContainer" containerID="85da24309510a612ecd052fb661006e8fefb4199197a1e08e2e69eb40d9d14f3" Nov 22 07:49:18 crc kubenswrapper[4853]: I1122 07:49:18.599437 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-blcfh"] Nov 22 07:49:18 crc kubenswrapper[4853]: E1122 07:49:18.601447 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c7aba99-05bf-4e98-824d-0a2b56ac555d" containerName="init" Nov 22 07:49:18 crc kubenswrapper[4853]: I1122 07:49:18.601503 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c7aba99-05bf-4e98-824d-0a2b56ac555d" containerName="init" Nov 22 07:49:18 crc kubenswrapper[4853]: E1122 07:49:18.601535 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c7aba99-05bf-4e98-824d-0a2b56ac555d" containerName="dnsmasq-dns" Nov 22 07:49:18 crc kubenswrapper[4853]: I1122 07:49:18.601545 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c7aba99-05bf-4e98-824d-0a2b56ac555d" containerName="dnsmasq-dns" Nov 22 07:49:18 crc kubenswrapper[4853]: I1122 07:49:18.602452 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c7aba99-05bf-4e98-824d-0a2b56ac555d" containerName="dnsmasq-dns" Nov 22 07:49:18 crc kubenswrapper[4853]: I1122 07:49:18.605450 4853 util.go:30] "No sandbox for pod can be found. 
Nov 22 07:49:18 crc kubenswrapper[4853]: I1122 07:49:18.615497 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7d978555f9-h6mvt"]
Nov 22 07:49:18 crc kubenswrapper[4853]: I1122 07:49:18.618644 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data"
Nov 22 07:49:18 crc kubenswrapper[4853]: I1122 07:49:18.622540 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts"
Nov 22 07:49:18 crc kubenswrapper[4853]: I1122 07:49:18.639887 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-blcfh"]
Nov 22 07:49:18 crc kubenswrapper[4853]: I1122 07:49:18.656953 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7d978555f9-h6mvt"]
Nov 22 07:49:18 crc kubenswrapper[4853]: I1122 07:49:18.778256 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/41c890fc-832a-4ab4-ad0f-5f41153efa12-scripts\") pod \"nova-cell1-conductor-db-sync-blcfh\" (UID: \"41c890fc-832a-4ab4-ad0f-5f41153efa12\") " pod="openstack/nova-cell1-conductor-db-sync-blcfh"
Nov 22 07:49:18 crc kubenswrapper[4853]: I1122 07:49:18.778494 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41c890fc-832a-4ab4-ad0f-5f41153efa12-config-data\") pod \"nova-cell1-conductor-db-sync-blcfh\" (UID: \"41c890fc-832a-4ab4-ad0f-5f41153efa12\") " pod="openstack/nova-cell1-conductor-db-sync-blcfh"
Nov 22 07:49:18 crc kubenswrapper[4853]: I1122 07:49:18.778911 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41c890fc-832a-4ab4-ad0f-5f41153efa12-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-blcfh\" (UID: \"41c890fc-832a-4ab4-ad0f-5f41153efa12\") " pod="openstack/nova-cell1-conductor-db-sync-blcfh"
Nov 22 07:49:18 crc kubenswrapper[4853]: I1122 07:49:18.778975 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9vc57\" (UniqueName: \"kubernetes.io/projected/41c890fc-832a-4ab4-ad0f-5f41153efa12-kube-api-access-9vc57\") pod \"nova-cell1-conductor-db-sync-blcfh\" (UID: \"41c890fc-832a-4ab4-ad0f-5f41153efa12\") " pod="openstack/nova-cell1-conductor-db-sync-blcfh"
Nov 22 07:49:18 crc kubenswrapper[4853]: I1122 07:49:18.881576 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41c890fc-832a-4ab4-ad0f-5f41153efa12-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-blcfh\" (UID: \"41c890fc-832a-4ab4-ad0f-5f41153efa12\") " pod="openstack/nova-cell1-conductor-db-sync-blcfh"
Nov 22 07:49:18 crc kubenswrapper[4853]: I1122 07:49:18.881980 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9vc57\" (UniqueName: \"kubernetes.io/projected/41c890fc-832a-4ab4-ad0f-5f41153efa12-kube-api-access-9vc57\") pod \"nova-cell1-conductor-db-sync-blcfh\" (UID: \"41c890fc-832a-4ab4-ad0f-5f41153efa12\") " pod="openstack/nova-cell1-conductor-db-sync-blcfh"
Nov 22 07:49:18 crc kubenswrapper[4853]: I1122 07:49:18.882112 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/41c890fc-832a-4ab4-ad0f-5f41153efa12-scripts\") pod \"nova-cell1-conductor-db-sync-blcfh\" (UID: \"41c890fc-832a-4ab4-ad0f-5f41153efa12\") " pod="openstack/nova-cell1-conductor-db-sync-blcfh"
Nov 22 07:49:18 crc kubenswrapper[4853]: I1122 07:49:18.882177 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41c890fc-832a-4ab4-ad0f-5f41153efa12-config-data\") pod \"nova-cell1-conductor-db-sync-blcfh\" (UID: \"41c890fc-832a-4ab4-ad0f-5f41153efa12\") " pod="openstack/nova-cell1-conductor-db-sync-blcfh"
Nov 22 07:49:18 crc kubenswrapper[4853]: I1122 07:49:18.900367 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41c890fc-832a-4ab4-ad0f-5f41153efa12-config-data\") pod \"nova-cell1-conductor-db-sync-blcfh\" (UID: \"41c890fc-832a-4ab4-ad0f-5f41153efa12\") " pod="openstack/nova-cell1-conductor-db-sync-blcfh"
Nov 22 07:49:18 crc kubenswrapper[4853]: I1122 07:49:18.901567 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41c890fc-832a-4ab4-ad0f-5f41153efa12-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-blcfh\" (UID: \"41c890fc-832a-4ab4-ad0f-5f41153efa12\") " pod="openstack/nova-cell1-conductor-db-sync-blcfh"
Nov 22 07:49:18 crc kubenswrapper[4853]: I1122 07:49:18.906502 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9vc57\" (UniqueName: \"kubernetes.io/projected/41c890fc-832a-4ab4-ad0f-5f41153efa12-kube-api-access-9vc57\") pod \"nova-cell1-conductor-db-sync-blcfh\" (UID: \"41c890fc-832a-4ab4-ad0f-5f41153efa12\") " pod="openstack/nova-cell1-conductor-db-sync-blcfh"
Nov 22 07:49:18 crc kubenswrapper[4853]: I1122 07:49:18.908331 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/41c890fc-832a-4ab4-ad0f-5f41153efa12-scripts\") pod \"nova-cell1-conductor-db-sync-blcfh\" (UID: \"41c890fc-832a-4ab4-ad0f-5f41153efa12\") " pod="openstack/nova-cell1-conductor-db-sync-blcfh"
Nov 22 07:49:19 crc kubenswrapper[4853]: I1122 07:49:19.016057 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-blcfh"
Nov 22 07:49:19 crc kubenswrapper[4853]: I1122 07:49:19.206726 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7c78d4ccd7-pvf4q" event={"ID":"47723ce1-f48e-4d1d-a0a8-4f49dfce7070","Type":"ContainerStarted","Data":"fb7efadb4529bbbf562cfc851bc775b5993616c408e9c22188d50e2ee77863a8"}
Nov 22 07:49:19 crc kubenswrapper[4853]: I1122 07:49:19.209996 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-7c78d4ccd7-pvf4q"
Nov 22 07:49:19 crc kubenswrapper[4853]: I1122 07:49:19.260493 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7877d89589-qms8z" event={"ID":"d7e1b24e-7343-4816-8c6e-86c7af484d6f","Type":"ContainerStarted","Data":"02a383a74e4b5e75caafb3d528ffc309d5867b0c4a516ec22bb92830506aa954"}
Nov 22 07:49:19 crc kubenswrapper[4853]: I1122 07:49:19.262019 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-7c78d4ccd7-pvf4q" podStartSLOduration=6.261996677 podStartE2EDuration="6.261996677s" podCreationTimestamp="2025-11-22 07:49:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:49:19.253296812 +0000 UTC m=+2358.093919448" watchObservedRunningTime="2025-11-22 07:49:19.261996677 +0000 UTC m=+2358.102619303"
Nov 22 07:49:19 crc kubenswrapper[4853]: I1122 07:49:19.780809 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3c7aba99-05bf-4e98-824d-0a2b56ac555d" path="/var/lib/kubelet/pods/3c7aba99-05bf-4e98-824d-0a2b56ac555d/volumes"
Nov 22 07:49:20 crc kubenswrapper[4853]: I1122 07:49:20.014685 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-blcfh"]
Nov 22 07:49:20 crc kubenswrapper[4853]: I1122 07:49:20.308162 4853 generic.go:334] "Generic (PLEG): container finished" podID="d7e1b24e-7343-4816-8c6e-86c7af484d6f" containerID="2f4e793383f2247cd9af43b859ee01347fae7620a53aa440c54b99aa68461752" exitCode=0
Nov 22 07:49:20 crc kubenswrapper[4853]: I1122 07:49:20.308775 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7877d89589-qms8z" event={"ID":"d7e1b24e-7343-4816-8c6e-86c7af484d6f","Type":"ContainerDied","Data":"2f4e793383f2247cd9af43b859ee01347fae7620a53aa440c54b99aa68461752"}
Nov 22 07:49:20 crc kubenswrapper[4853]: I1122 07:49:20.357176 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-blcfh" event={"ID":"41c890fc-832a-4ab4-ad0f-5f41153efa12","Type":"ContainerStarted","Data":"b0648f984bb08781c1b09d7c72a20ed5b089d39e0c8ac9a786c35d44edbb8e62"}
Nov 22 07:49:20 crc kubenswrapper[4853]: I1122 07:49:20.615931 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Nov 22 07:49:20 crc kubenswrapper[4853]: I1122 07:49:20.629398 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Nov 22 07:49:21 crc kubenswrapper[4853]: E1122 07:49:21.310527 4853 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9f019708_ddfa_465c_850a_7b13a20a87f2.slice/crio-2e26072ba72281cc34fd01c470e7381419acf03b04f60311406a45f61202e917.scope\": RecentStats: unable to find data in memory cache]"
Nov 22 07:49:21 crc kubenswrapper[4853]: I1122 07:49:21.389487 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7877d89589-qms8z" event={"ID":"d7e1b24e-7343-4816-8c6e-86c7af484d6f","Type":"ContainerStarted","Data":"8715cc97b61666797a2cda87fef90d1b23e6b737f9886f86ac62307a4f22f3f9"}
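The dnsmasq-dns-7877d89589-qms8z sequence above is the standard init-container contract: the init container (2f4e79…) must run to completion with exitCode=0 and be reported ContainerDied before the main dnsmasq-dns container (8715cc…) is started. A minimal sketch of that sequencing rule, with placeholder run functions rather than real runtime calls:

package main

import "fmt"

// startPod runs init containers sequentially; any nonzero exit aborts the
// pod start. Only after all init containers succeed do main containers run.
func startPod(initContainers, containers []string, run func(string) int) error {
	for _, c := range initContainers {
		if code := run(c); code != 0 {
			return fmt.Errorf("init container %s exited with code %d", c, code)
		}
		fmt.Printf("init container %s finished, exitCode=0\n", c)
	}
	for _, c := range containers {
		run(c)
		fmt.Printf("ContainerStarted %s\n", c)
	}
	return nil
}

func main() {
	run := func(name string) int { return 0 } // pretend every container succeeds
	if err := startPod([]string{"init"}, []string{"dnsmasq-dns"}, run); err != nil {
		fmt.Println(err)
	}
}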
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7877d89589-qms8z" event={"ID":"d7e1b24e-7343-4816-8c6e-86c7af484d6f","Type":"ContainerStarted","Data":"8715cc97b61666797a2cda87fef90d1b23e6b737f9886f86ac62307a4f22f3f9"} Nov 22 07:49:21 crc kubenswrapper[4853]: I1122 07:49:21.391431 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7877d89589-qms8z" Nov 22 07:49:21 crc kubenswrapper[4853]: I1122 07:49:21.396158 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-blcfh" event={"ID":"41c890fc-832a-4ab4-ad0f-5f41153efa12","Type":"ContainerStarted","Data":"0a3d507cb8a93880955404c4d57ab7a986df4e07de719fdbad427bf9d98346f6"} Nov 22 07:49:21 crc kubenswrapper[4853]: I1122 07:49:21.413054 4853 generic.go:334] "Generic (PLEG): container finished" podID="9f019708-ddfa-465c-850a-7b13a20a87f2" containerID="2e26072ba72281cc34fd01c470e7381419acf03b04f60311406a45f61202e917" exitCode=0 Nov 22 07:49:21 crc kubenswrapper[4853]: I1122 07:49:21.414370 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-5b96d96555-h7jqp" event={"ID":"9f019708-ddfa-465c-850a-7b13a20a87f2","Type":"ContainerDied","Data":"2e26072ba72281cc34fd01c470e7381419acf03b04f60311406a45f61202e917"} Nov 22 07:49:21 crc kubenswrapper[4853]: I1122 07:49:21.422028 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7877d89589-qms8z" podStartSLOduration=7.4219997939999995 podStartE2EDuration="7.421999794s" podCreationTimestamp="2025-11-22 07:49:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:49:21.415692774 +0000 UTC m=+2360.256315420" watchObservedRunningTime="2025-11-22 07:49:21.421999794 +0000 UTC m=+2360.262622410" Nov 22 07:49:21 crc kubenswrapper[4853]: I1122 07:49:21.465810 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-blcfh" podStartSLOduration=3.465780656 podStartE2EDuration="3.465780656s" podCreationTimestamp="2025-11-22 07:49:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:49:21.441378787 +0000 UTC m=+2360.282001413" watchObservedRunningTime="2025-11-22 07:49:21.465780656 +0000 UTC m=+2360.306403282" Nov 22 07:49:23 crc kubenswrapper[4853]: E1122 07:49:23.718980 4853 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 2e26072ba72281cc34fd01c470e7381419acf03b04f60311406a45f61202e917 is running failed: container process not found" containerID="2e26072ba72281cc34fd01c470e7381419acf03b04f60311406a45f61202e917" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Nov 22 07:49:23 crc kubenswrapper[4853]: E1122 07:49:23.722481 4853 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 2e26072ba72281cc34fd01c470e7381419acf03b04f60311406a45f61202e917 is running failed: container process not found" containerID="2e26072ba72281cc34fd01c470e7381419acf03b04f60311406a45f61202e917" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Nov 22 07:49:23 crc kubenswrapper[4853]: E1122 07:49:23.723006 4853 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container 
is not created or running: checking if PID of 2e26072ba72281cc34fd01c470e7381419acf03b04f60311406a45f61202e917 is running failed: container process not found" containerID="2e26072ba72281cc34fd01c470e7381419acf03b04f60311406a45f61202e917" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Nov 22 07:49:23 crc kubenswrapper[4853]: E1122 07:49:23.723106 4853 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 2e26072ba72281cc34fd01c470e7381419acf03b04f60311406a45f61202e917 is running failed: container process not found" probeType="Readiness" pod="openstack/heat-engine-5b96d96555-h7jqp" podUID="9f019708-ddfa-465c-850a-7b13a20a87f2" containerName="heat-engine" Nov 22 07:49:24 crc kubenswrapper[4853]: I1122 07:49:24.864310 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-5b96d96555-h7jqp" Nov 22 07:49:24 crc kubenswrapper[4853]: I1122 07:49:24.942312 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f019708-ddfa-465c-850a-7b13a20a87f2-config-data\") pod \"9f019708-ddfa-465c-850a-7b13a20a87f2\" (UID: \"9f019708-ddfa-465c-850a-7b13a20a87f2\") " Nov 22 07:49:24 crc kubenswrapper[4853]: I1122 07:49:24.942496 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfssp\" (UniqueName: \"kubernetes.io/projected/9f019708-ddfa-465c-850a-7b13a20a87f2-kube-api-access-kfssp\") pod \"9f019708-ddfa-465c-850a-7b13a20a87f2\" (UID: \"9f019708-ddfa-465c-850a-7b13a20a87f2\") " Nov 22 07:49:24 crc kubenswrapper[4853]: I1122 07:49:24.942683 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f019708-ddfa-465c-850a-7b13a20a87f2-combined-ca-bundle\") pod \"9f019708-ddfa-465c-850a-7b13a20a87f2\" (UID: \"9f019708-ddfa-465c-850a-7b13a20a87f2\") " Nov 22 07:49:24 crc kubenswrapper[4853]: I1122 07:49:24.949538 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9f019708-ddfa-465c-850a-7b13a20a87f2-config-data-custom\") pod \"9f019708-ddfa-465c-850a-7b13a20a87f2\" (UID: \"9f019708-ddfa-465c-850a-7b13a20a87f2\") " Nov 22 07:49:24 crc kubenswrapper[4853]: I1122 07:49:24.977931 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f019708-ddfa-465c-850a-7b13a20a87f2-kube-api-access-kfssp" (OuterVolumeSpecName: "kube-api-access-kfssp") pod "9f019708-ddfa-465c-850a-7b13a20a87f2" (UID: "9f019708-ddfa-465c-850a-7b13a20a87f2"). InnerVolumeSpecName "kube-api-access-kfssp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:49:25 crc kubenswrapper[4853]: I1122 07:49:25.039780 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f019708-ddfa-465c-850a-7b13a20a87f2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9f019708-ddfa-465c-850a-7b13a20a87f2" (UID: "9f019708-ddfa-465c-850a-7b13a20a87f2"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:49:25 crc kubenswrapper[4853]: I1122 07:49:25.047172 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f019708-ddfa-465c-850a-7b13a20a87f2-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "9f019708-ddfa-465c-850a-7b13a20a87f2" (UID: "9f019708-ddfa-465c-850a-7b13a20a87f2"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:49:25 crc kubenswrapper[4853]: I1122 07:49:25.066059 4853 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f019708-ddfa-465c-850a-7b13a20a87f2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:25 crc kubenswrapper[4853]: I1122 07:49:25.066105 4853 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9f019708-ddfa-465c-850a-7b13a20a87f2-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:25 crc kubenswrapper[4853]: I1122 07:49:25.066119 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfssp\" (UniqueName: \"kubernetes.io/projected/9f019708-ddfa-465c-850a-7b13a20a87f2-kube-api-access-kfssp\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:25 crc kubenswrapper[4853]: I1122 07:49:25.176012 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f019708-ddfa-465c-850a-7b13a20a87f2-config-data" (OuterVolumeSpecName: "config-data") pod "9f019708-ddfa-465c-850a-7b13a20a87f2" (UID: "9f019708-ddfa-465c-850a-7b13a20a87f2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:49:25 crc kubenswrapper[4853]: I1122 07:49:25.276536 4853 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f019708-ddfa-465c-850a-7b13a20a87f2-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:25 crc kubenswrapper[4853]: I1122 07:49:25.515762 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-5b96d96555-h7jqp" event={"ID":"9f019708-ddfa-465c-850a-7b13a20a87f2","Type":"ContainerDied","Data":"dc78592dc7ecea4bc7e1d74ac2f7ea045e0baf6eba3818bd1b43c51935f93b34"} Nov 22 07:49:25 crc kubenswrapper[4853]: I1122 07:49:25.515827 4853 scope.go:117] "RemoveContainer" containerID="2e26072ba72281cc34fd01c470e7381419acf03b04f60311406a45f61202e917" Nov 22 07:49:25 crc kubenswrapper[4853]: I1122 07:49:25.516011 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-5b96d96555-h7jqp" Nov 22 07:49:25 crc kubenswrapper[4853]: I1122 07:49:25.573590 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-5b96d96555-h7jqp"] Nov 22 07:49:25 crc kubenswrapper[4853]: I1122 07:49:25.585239 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-engine-5b96d96555-h7jqp"] Nov 22 07:49:25 crc kubenswrapper[4853]: I1122 07:49:25.770203 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f019708-ddfa-465c-850a-7b13a20a87f2" path="/var/lib/kubelet/pods/9f019708-ddfa-465c-850a-7b13a20a87f2/volumes" Nov 22 07:49:26 crc kubenswrapper[4853]: I1122 07:49:26.460178 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 22 07:49:26 crc kubenswrapper[4853]: I1122 07:49:26.480877 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 22 07:49:26 crc kubenswrapper[4853]: I1122 07:49:26.522809 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 22 07:49:26 crc kubenswrapper[4853]: I1122 07:49:26.523130 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell0-conductor-0" podUID="df14bfb5-652f-4e60-a709-e3ed7348d00a" containerName="nova-cell0-conductor-conductor" containerID="cri-o://dc710994837cfb60b29ae2d2d75f810962975614729cd3fc2ed54dd1067f34ef" gracePeriod=30 Nov 22 07:49:27 crc kubenswrapper[4853]: I1122 07:49:27.579008 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"96bf10be-9206-4b95-af69-8f41e5e530c6","Type":"ContainerStarted","Data":"c9f1c4bc31bec9841c2da3f868aa9bf524a9e4517077a71c16a01be4702a0792"} Nov 22 07:49:27 crc kubenswrapper[4853]: I1122 07:49:27.596471 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9c1fb660-eb53-4b4c-8456-4c3288a27a7a","Type":"ContainerStarted","Data":"7c7ef19ccfe77d17f8cd14bd9d6d5176f607d286a5715937e29d4512211c382f"} Nov 22 07:49:27 crc kubenswrapper[4853]: I1122 07:49:27.598287 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"3bacd6f9-077c-4dee-aeef-3b546162391b","Type":"ContainerStarted","Data":"1fca9e6b7a954fe50e4691944dff87a8fe48c7d5ac441dfd28dc0fec8f8c1571"} Nov 22 07:49:27 crc kubenswrapper[4853]: I1122 07:49:27.598447 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="3bacd6f9-077c-4dee-aeef-3b546162391b" containerName="nova-scheduler-scheduler" containerID="cri-o://1fca9e6b7a954fe50e4691944dff87a8fe48c7d5ac441dfd28dc0fec8f8c1571" gracePeriod=30 Nov 22 07:49:27 crc kubenswrapper[4853]: I1122 07:49:27.628400 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"1cc8da91-f334-4196-aa2f-191e55317490","Type":"ContainerStarted","Data":"8deaf242fe95930b41dd1a53aef0b8dd68204d09ede1a322ab27c05f44be1dac"} Nov 22 07:49:27 crc kubenswrapper[4853]: I1122 07:49:27.628950 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="1cc8da91-f334-4196-aa2f-191e55317490" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://8deaf242fe95930b41dd1a53aef0b8dd68204d09ede1a322ab27c05f44be1dac" gracePeriod=30 Nov 22 07:49:27 crc kubenswrapper[4853]: I1122 07:49:27.630612 4853 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openstack/nova-scheduler-0" podStartSLOduration=4.79974391 podStartE2EDuration="13.630579067s" podCreationTimestamp="2025-11-22 07:49:14 +0000 UTC" firstStartedPulling="2025-11-22 07:49:17.794023744 +0000 UTC m=+2356.634646370" lastFinishedPulling="2025-11-22 07:49:26.624858901 +0000 UTC m=+2365.465481527" observedRunningTime="2025-11-22 07:49:27.625190401 +0000 UTC m=+2366.465813027" watchObservedRunningTime="2025-11-22 07:49:27.630579067 +0000 UTC m=+2366.471201693" Nov 22 07:49:27 crc kubenswrapper[4853]: I1122 07:49:27.673451 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=4.72080508 podStartE2EDuration="14.673422902s" podCreationTimestamp="2025-11-22 07:49:13 +0000 UTC" firstStartedPulling="2025-11-22 07:49:16.6704314 +0000 UTC m=+2355.511054036" lastFinishedPulling="2025-11-22 07:49:26.623049242 +0000 UTC m=+2365.463671858" observedRunningTime="2025-11-22 07:49:27.652986231 +0000 UTC m=+2366.493608887" watchObservedRunningTime="2025-11-22 07:49:27.673422902 +0000 UTC m=+2366.514045528" Nov 22 07:49:28 crc kubenswrapper[4853]: I1122 07:49:28.645403 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"96bf10be-9206-4b95-af69-8f41e5e530c6","Type":"ContainerStarted","Data":"fea2d9e9df49c739fc4735c8f38b894b5a2d8a29149609c8b1c5c0fec5eee49b"} Nov 22 07:49:28 crc kubenswrapper[4853]: I1122 07:49:28.645569 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="96bf10be-9206-4b95-af69-8f41e5e530c6" containerName="nova-api-log" containerID="cri-o://c9f1c4bc31bec9841c2da3f868aa9bf524a9e4517077a71c16a01be4702a0792" gracePeriod=30 Nov 22 07:49:28 crc kubenswrapper[4853]: I1122 07:49:28.645641 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="96bf10be-9206-4b95-af69-8f41e5e530c6" containerName="nova-api-api" containerID="cri-o://fea2d9e9df49c739fc4735c8f38b894b5a2d8a29149609c8b1c5c0fec5eee49b" gracePeriod=30 Nov 22 07:49:28 crc kubenswrapper[4853]: I1122 07:49:28.650251 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9c1fb660-eb53-4b4c-8456-4c3288a27a7a","Type":"ContainerStarted","Data":"f06330da3a188696c34c64cd2abbddcb6cc2b5046d0d51d49506e105c36ac344"} Nov 22 07:49:28 crc kubenswrapper[4853]: I1122 07:49:28.650462 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="9c1fb660-eb53-4b4c-8456-4c3288a27a7a" containerName="nova-metadata-log" containerID="cri-o://7c7ef19ccfe77d17f8cd14bd9d6d5176f607d286a5715937e29d4512211c382f" gracePeriod=30 Nov 22 07:49:28 crc kubenswrapper[4853]: I1122 07:49:28.650568 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="9c1fb660-eb53-4b4c-8456-4c3288a27a7a" containerName="nova-metadata-metadata" containerID="cri-o://f06330da3a188696c34c64cd2abbddcb6cc2b5046d0d51d49506e105c36ac344" gracePeriod=30 Nov 22 07:49:28 crc kubenswrapper[4853]: I1122 07:49:28.706425 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=4.86242738 podStartE2EDuration="14.706402903s" podCreationTimestamp="2025-11-22 07:49:14 +0000 UTC" firstStartedPulling="2025-11-22 07:49:16.779071719 +0000 UTC m=+2355.619694345" lastFinishedPulling="2025-11-22 07:49:26.623047242 +0000 UTC m=+2365.463669868" 
observedRunningTime="2025-11-22 07:49:28.682152499 +0000 UTC m=+2367.522775125" watchObservedRunningTime="2025-11-22 07:49:28.706402903 +0000 UTC m=+2367.547025529" Nov 22 07:49:29 crc kubenswrapper[4853]: I1122 07:49:29.669684 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Nov 22 07:49:29 crc kubenswrapper[4853]: I1122 07:49:29.699187 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=6.480352246 podStartE2EDuration="15.699167678s" podCreationTimestamp="2025-11-22 07:49:14 +0000 UTC" firstStartedPulling="2025-11-22 07:49:17.415054762 +0000 UTC m=+2356.255677388" lastFinishedPulling="2025-11-22 07:49:26.633870194 +0000 UTC m=+2365.474492820" observedRunningTime="2025-11-22 07:49:28.732926128 +0000 UTC m=+2367.573548774" watchObservedRunningTime="2025-11-22 07:49:29.699167678 +0000 UTC m=+2368.539790304" Nov 22 07:49:29 crc kubenswrapper[4853]: I1122 07:49:29.718611 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 22 07:49:29 crc kubenswrapper[4853]: I1122 07:49:29.719021 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="01047ee7-2bc8-487e-a7f2-8696bd86fd13" containerName="ceilometer-central-agent" containerID="cri-o://85850fb25d0eca8bf7256b7f96e332e4972c97ff52be82de4a7d8e1f6e918c46" gracePeriod=30 Nov 22 07:49:29 crc kubenswrapper[4853]: I1122 07:49:29.719734 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="01047ee7-2bc8-487e-a7f2-8696bd86fd13" containerName="proxy-httpd" containerID="cri-o://4a7c1bfa5901a81f06c4f2963121831d49fee56f624c1c1c363bfa002dcee2bb" gracePeriod=30 Nov 22 07:49:29 crc kubenswrapper[4853]: I1122 07:49:29.720120 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="01047ee7-2bc8-487e-a7f2-8696bd86fd13" containerName="sg-core" containerID="cri-o://c460888c4d820ddce3ffa21d8afe1821af0c334d4de03a59ebe861143ba7fda5" gracePeriod=30 Nov 22 07:49:29 crc kubenswrapper[4853]: I1122 07:49:29.720173 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="01047ee7-2bc8-487e-a7f2-8696bd86fd13" containerName="ceilometer-notification-agent" containerID="cri-o://6bb6e672f873c915b40264e896e2d0777b8b5d9bce9f067caa9aed1b90fd8d84" gracePeriod=30 Nov 22 07:49:29 crc kubenswrapper[4853]: I1122 07:49:29.728453 4853 generic.go:334] "Generic (PLEG): container finished" podID="9c1fb660-eb53-4b4c-8456-4c3288a27a7a" containerID="f06330da3a188696c34c64cd2abbddcb6cc2b5046d0d51d49506e105c36ac344" exitCode=0 Nov 22 07:49:29 crc kubenswrapper[4853]: I1122 07:49:29.728490 4853 generic.go:334] "Generic (PLEG): container finished" podID="9c1fb660-eb53-4b4c-8456-4c3288a27a7a" containerID="7c7ef19ccfe77d17f8cd14bd9d6d5176f607d286a5715937e29d4512211c382f" exitCode=143 Nov 22 07:49:29 crc kubenswrapper[4853]: I1122 07:49:29.728570 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9c1fb660-eb53-4b4c-8456-4c3288a27a7a","Type":"ContainerDied","Data":"f06330da3a188696c34c64cd2abbddcb6cc2b5046d0d51d49506e105c36ac344"} Nov 22 07:49:29 crc kubenswrapper[4853]: I1122 07:49:29.728601 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"9c1fb660-eb53-4b4c-8456-4c3288a27a7a","Type":"ContainerDied","Data":"7c7ef19ccfe77d17f8cd14bd9d6d5176f607d286a5715937e29d4512211c382f"} Nov 22 07:49:29 crc kubenswrapper[4853]: I1122 07:49:29.758994 4853 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="01047ee7-2bc8-487e-a7f2-8696bd86fd13" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Nov 22 07:49:29 crc kubenswrapper[4853]: I1122 07:49:29.782799 4853 generic.go:334] "Generic (PLEG): container finished" podID="96bf10be-9206-4b95-af69-8f41e5e530c6" containerID="fea2d9e9df49c739fc4735c8f38b894b5a2d8a29149609c8b1c5c0fec5eee49b" exitCode=0 Nov 22 07:49:29 crc kubenswrapper[4853]: I1122 07:49:29.782864 4853 generic.go:334] "Generic (PLEG): container finished" podID="96bf10be-9206-4b95-af69-8f41e5e530c6" containerID="c9f1c4bc31bec9841c2da3f868aa9bf524a9e4517077a71c16a01be4702a0792" exitCode=143 Nov 22 07:49:29 crc kubenswrapper[4853]: I1122 07:49:29.785488 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"96bf10be-9206-4b95-af69-8f41e5e530c6","Type":"ContainerDied","Data":"fea2d9e9df49c739fc4735c8f38b894b5a2d8a29149609c8b1c5c0fec5eee49b"} Nov 22 07:49:29 crc kubenswrapper[4853]: I1122 07:49:29.785526 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"96bf10be-9206-4b95-af69-8f41e5e530c6","Type":"ContainerDied","Data":"c9f1c4bc31bec9841c2da3f868aa9bf524a9e4517077a71c16a01be4702a0792"} Nov 22 07:49:29 crc kubenswrapper[4853]: I1122 07:49:29.795037 4853 generic.go:334] "Generic (PLEG): container finished" podID="df14bfb5-652f-4e60-a709-e3ed7348d00a" containerID="dc710994837cfb60b29ae2d2d75f810962975614729cd3fc2ed54dd1067f34ef" exitCode=0 Nov 22 07:49:29 crc kubenswrapper[4853]: I1122 07:49:29.795081 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"df14bfb5-652f-4e60-a709-e3ed7348d00a","Type":"ContainerDied","Data":"dc710994837cfb60b29ae2d2d75f810962975614729cd3fc2ed54dd1067f34ef"} Nov 22 07:49:29 crc kubenswrapper[4853]: I1122 07:49:29.806156 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 22 07:49:29 crc kubenswrapper[4853]: I1122 07:49:29.847892 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/96bf10be-9206-4b95-af69-8f41e5e530c6-logs\") pod \"96bf10be-9206-4b95-af69-8f41e5e530c6\" (UID: \"96bf10be-9206-4b95-af69-8f41e5e530c6\") " Nov 22 07:49:29 crc kubenswrapper[4853]: I1122 07:49:29.848103 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2x72c\" (UniqueName: \"kubernetes.io/projected/96bf10be-9206-4b95-af69-8f41e5e530c6-kube-api-access-2x72c\") pod \"96bf10be-9206-4b95-af69-8f41e5e530c6\" (UID: \"96bf10be-9206-4b95-af69-8f41e5e530c6\") " Nov 22 07:49:29 crc kubenswrapper[4853]: I1122 07:49:29.848253 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96bf10be-9206-4b95-af69-8f41e5e530c6-config-data\") pod \"96bf10be-9206-4b95-af69-8f41e5e530c6\" (UID: \"96bf10be-9206-4b95-af69-8f41e5e530c6\") " Nov 22 07:49:29 crc kubenswrapper[4853]: I1122 07:49:29.848426 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96bf10be-9206-4b95-af69-8f41e5e530c6-combined-ca-bundle\") pod \"96bf10be-9206-4b95-af69-8f41e5e530c6\" (UID: \"96bf10be-9206-4b95-af69-8f41e5e530c6\") " Nov 22 07:49:29 crc kubenswrapper[4853]: I1122 07:49:29.855520 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/96bf10be-9206-4b95-af69-8f41e5e530c6-logs" (OuterVolumeSpecName: "logs") pod "96bf10be-9206-4b95-af69-8f41e5e530c6" (UID: "96bf10be-9206-4b95-af69-8f41e5e530c6"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:49:29 crc kubenswrapper[4853]: I1122 07:49:29.871543 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96bf10be-9206-4b95-af69-8f41e5e530c6-kube-api-access-2x72c" (OuterVolumeSpecName: "kube-api-access-2x72c") pod "96bf10be-9206-4b95-af69-8f41e5e530c6" (UID: "96bf10be-9206-4b95-af69-8f41e5e530c6"). InnerVolumeSpecName "kube-api-access-2x72c". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:49:29 crc kubenswrapper[4853]: I1122 07:49:29.953188 4853 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/96bf10be-9206-4b95-af69-8f41e5e530c6-logs\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:29 crc kubenswrapper[4853]: I1122 07:49:29.953267 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2x72c\" (UniqueName: \"kubernetes.io/projected/96bf10be-9206-4b95-af69-8f41e5e530c6-kube-api-access-2x72c\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:29 crc kubenswrapper[4853]: I1122 07:49:29.954927 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96bf10be-9206-4b95-af69-8f41e5e530c6-config-data" (OuterVolumeSpecName: "config-data") pod "96bf10be-9206-4b95-af69-8f41e5e530c6" (UID: "96bf10be-9206-4b95-af69-8f41e5e530c6"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:49:29 crc kubenswrapper[4853]: I1122 07:49:29.995789 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-58t9w"] Nov 22 07:49:29 crc kubenswrapper[4853]: E1122 07:49:29.996499 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96bf10be-9206-4b95-af69-8f41e5e530c6" containerName="nova-api-log" Nov 22 07:49:29 crc kubenswrapper[4853]: I1122 07:49:29.996516 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="96bf10be-9206-4b95-af69-8f41e5e530c6" containerName="nova-api-log" Nov 22 07:49:29 crc kubenswrapper[4853]: E1122 07:49:29.996589 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96bf10be-9206-4b95-af69-8f41e5e530c6" containerName="nova-api-api" Nov 22 07:49:29 crc kubenswrapper[4853]: I1122 07:49:29.996599 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="96bf10be-9206-4b95-af69-8f41e5e530c6" containerName="nova-api-api" Nov 22 07:49:29 crc kubenswrapper[4853]: E1122 07:49:29.996620 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f019708-ddfa-465c-850a-7b13a20a87f2" containerName="heat-engine" Nov 22 07:49:29 crc kubenswrapper[4853]: I1122 07:49:29.996627 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f019708-ddfa-465c-850a-7b13a20a87f2" containerName="heat-engine" Nov 22 07:49:29 crc kubenswrapper[4853]: I1122 07:49:29.996941 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="96bf10be-9206-4b95-af69-8f41e5e530c6" containerName="nova-api-api" Nov 22 07:49:29 crc kubenswrapper[4853]: I1122 07:49:29.996957 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="96bf10be-9206-4b95-af69-8f41e5e530c6" containerName="nova-api-log" Nov 22 07:49:29 crc kubenswrapper[4853]: I1122 07:49:29.997197 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f019708-ddfa-465c-850a-7b13a20a87f2" containerName="heat-engine" Nov 22 07:49:30 crc kubenswrapper[4853]: I1122 07:49:29.999700 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-58t9w" Nov 22 07:49:30 crc kubenswrapper[4853]: I1122 07:49:30.016578 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-58t9w"] Nov 22 07:49:30 crc kubenswrapper[4853]: I1122 07:49:30.029531 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96bf10be-9206-4b95-af69-8f41e5e530c6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "96bf10be-9206-4b95-af69-8f41e5e530c6" (UID: "96bf10be-9206-4b95-af69-8f41e5e530c6"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:49:30 crc kubenswrapper[4853]: I1122 07:49:30.055300 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/70c5e5cc-15fb-41a4-b40d-8f770bae2182-utilities\") pod \"certified-operators-58t9w\" (UID: \"70c5e5cc-15fb-41a4-b40d-8f770bae2182\") " pod="openshift-marketplace/certified-operators-58t9w" Nov 22 07:49:30 crc kubenswrapper[4853]: I1122 07:49:30.055494 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/70c5e5cc-15fb-41a4-b40d-8f770bae2182-catalog-content\") pod \"certified-operators-58t9w\" (UID: \"70c5e5cc-15fb-41a4-b40d-8f770bae2182\") " pod="openshift-marketplace/certified-operators-58t9w" Nov 22 07:49:30 crc kubenswrapper[4853]: I1122 07:49:30.055536 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mx7gz\" (UniqueName: \"kubernetes.io/projected/70c5e5cc-15fb-41a4-b40d-8f770bae2182-kube-api-access-mx7gz\") pod \"certified-operators-58t9w\" (UID: \"70c5e5cc-15fb-41a4-b40d-8f770bae2182\") " pod="openshift-marketplace/certified-operators-58t9w" Nov 22 07:49:30 crc kubenswrapper[4853]: I1122 07:49:30.055623 4853 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96bf10be-9206-4b95-af69-8f41e5e530c6-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:30 crc kubenswrapper[4853]: I1122 07:49:30.055635 4853 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96bf10be-9206-4b95-af69-8f41e5e530c6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:30 crc kubenswrapper[4853]: I1122 07:49:30.120297 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 22 07:49:30 crc kubenswrapper[4853]: I1122 07:49:30.160505 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kdtt2\" (UniqueName: \"kubernetes.io/projected/9c1fb660-eb53-4b4c-8456-4c3288a27a7a-kube-api-access-kdtt2\") pod \"9c1fb660-eb53-4b4c-8456-4c3288a27a7a\" (UID: \"9c1fb660-eb53-4b4c-8456-4c3288a27a7a\") " Nov 22 07:49:30 crc kubenswrapper[4853]: I1122 07:49:30.160638 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c1fb660-eb53-4b4c-8456-4c3288a27a7a-combined-ca-bundle\") pod \"9c1fb660-eb53-4b4c-8456-4c3288a27a7a\" (UID: \"9c1fb660-eb53-4b4c-8456-4c3288a27a7a\") " Nov 22 07:49:30 crc kubenswrapper[4853]: I1122 07:49:30.160902 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9c1fb660-eb53-4b4c-8456-4c3288a27a7a-logs\") pod \"9c1fb660-eb53-4b4c-8456-4c3288a27a7a\" (UID: \"9c1fb660-eb53-4b4c-8456-4c3288a27a7a\") " Nov 22 07:49:30 crc kubenswrapper[4853]: I1122 07:49:30.161130 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c1fb660-eb53-4b4c-8456-4c3288a27a7a-config-data\") pod \"9c1fb660-eb53-4b4c-8456-4c3288a27a7a\" (UID: \"9c1fb660-eb53-4b4c-8456-4c3288a27a7a\") " Nov 22 07:49:30 crc kubenswrapper[4853]: I1122 07:49:30.161585 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/70c5e5cc-15fb-41a4-b40d-8f770bae2182-catalog-content\") pod \"certified-operators-58t9w\" (UID: \"70c5e5cc-15fb-41a4-b40d-8f770bae2182\") " pod="openshift-marketplace/certified-operators-58t9w" Nov 22 07:49:30 crc kubenswrapper[4853]: I1122 07:49:30.161613 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mx7gz\" (UniqueName: \"kubernetes.io/projected/70c5e5cc-15fb-41a4-b40d-8f770bae2182-kube-api-access-mx7gz\") pod \"certified-operators-58t9w\" (UID: \"70c5e5cc-15fb-41a4-b40d-8f770bae2182\") " pod="openshift-marketplace/certified-operators-58t9w" Nov 22 07:49:30 crc kubenswrapper[4853]: I1122 07:49:30.164204 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9c1fb660-eb53-4b4c-8456-4c3288a27a7a-logs" (OuterVolumeSpecName: "logs") pod "9c1fb660-eb53-4b4c-8456-4c3288a27a7a" (UID: "9c1fb660-eb53-4b4c-8456-4c3288a27a7a"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:49:30 crc kubenswrapper[4853]: I1122 07:49:30.164463 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/70c5e5cc-15fb-41a4-b40d-8f770bae2182-catalog-content\") pod \"certified-operators-58t9w\" (UID: \"70c5e5cc-15fb-41a4-b40d-8f770bae2182\") " pod="openshift-marketplace/certified-operators-58t9w" Nov 22 07:49:30 crc kubenswrapper[4853]: I1122 07:49:30.164856 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/70c5e5cc-15fb-41a4-b40d-8f770bae2182-utilities\") pod \"certified-operators-58t9w\" (UID: \"70c5e5cc-15fb-41a4-b40d-8f770bae2182\") " pod="openshift-marketplace/certified-operators-58t9w" Nov 22 07:49:30 crc kubenswrapper[4853]: I1122 07:49:30.170797 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/70c5e5cc-15fb-41a4-b40d-8f770bae2182-utilities\") pod \"certified-operators-58t9w\" (UID: \"70c5e5cc-15fb-41a4-b40d-8f770bae2182\") " pod="openshift-marketplace/certified-operators-58t9w" Nov 22 07:49:30 crc kubenswrapper[4853]: I1122 07:49:30.171628 4853 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9c1fb660-eb53-4b4c-8456-4c3288a27a7a-logs\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:30 crc kubenswrapper[4853]: I1122 07:49:30.177900 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c1fb660-eb53-4b4c-8456-4c3288a27a7a-kube-api-access-kdtt2" (OuterVolumeSpecName: "kube-api-access-kdtt2") pod "9c1fb660-eb53-4b4c-8456-4c3288a27a7a" (UID: "9c1fb660-eb53-4b4c-8456-4c3288a27a7a"). InnerVolumeSpecName "kube-api-access-kdtt2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:49:30 crc kubenswrapper[4853]: I1122 07:49:30.194160 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mx7gz\" (UniqueName: \"kubernetes.io/projected/70c5e5cc-15fb-41a4-b40d-8f770bae2182-kube-api-access-mx7gz\") pod \"certified-operators-58t9w\" (UID: \"70c5e5cc-15fb-41a4-b40d-8f770bae2182\") " pod="openshift-marketplace/certified-operators-58t9w" Nov 22 07:49:30 crc kubenswrapper[4853]: I1122 07:49:30.278855 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kdtt2\" (UniqueName: \"kubernetes.io/projected/9c1fb660-eb53-4b4c-8456-4c3288a27a7a-kube-api-access-kdtt2\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:30 crc kubenswrapper[4853]: I1122 07:49:30.289484 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-58t9w" Nov 22 07:49:30 crc kubenswrapper[4853]: I1122 07:49:30.315045 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c1fb660-eb53-4b4c-8456-4c3288a27a7a-config-data" (OuterVolumeSpecName: "config-data") pod "9c1fb660-eb53-4b4c-8456-4c3288a27a7a" (UID: "9c1fb660-eb53-4b4c-8456-4c3288a27a7a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:49:30 crc kubenswrapper[4853]: I1122 07:49:30.328875 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Nov 22 07:49:30 crc kubenswrapper[4853]: I1122 07:49:30.330991 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c1fb660-eb53-4b4c-8456-4c3288a27a7a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9c1fb660-eb53-4b4c-8456-4c3288a27a7a" (UID: "9c1fb660-eb53-4b4c-8456-4c3288a27a7a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:49:30 crc kubenswrapper[4853]: I1122 07:49:30.375075 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Nov 22 07:49:30 crc kubenswrapper[4853]: I1122 07:49:30.380811 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df14bfb5-652f-4e60-a709-e3ed7348d00a-combined-ca-bundle\") pod \"df14bfb5-652f-4e60-a709-e3ed7348d00a\" (UID: \"df14bfb5-652f-4e60-a709-e3ed7348d00a\") " Nov 22 07:49:30 crc kubenswrapper[4853]: I1122 07:49:30.381153 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wrrnb\" (UniqueName: \"kubernetes.io/projected/df14bfb5-652f-4e60-a709-e3ed7348d00a-kube-api-access-wrrnb\") pod \"df14bfb5-652f-4e60-a709-e3ed7348d00a\" (UID: \"df14bfb5-652f-4e60-a709-e3ed7348d00a\") " Nov 22 07:49:30 crc kubenswrapper[4853]: I1122 07:49:30.381392 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df14bfb5-652f-4e60-a709-e3ed7348d00a-config-data\") pod \"df14bfb5-652f-4e60-a709-e3ed7348d00a\" (UID: \"df14bfb5-652f-4e60-a709-e3ed7348d00a\") " Nov 22 07:49:30 crc kubenswrapper[4853]: I1122 07:49:30.382091 4853 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c1fb660-eb53-4b4c-8456-4c3288a27a7a-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:30 crc kubenswrapper[4853]: I1122 07:49:30.382170 4853 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c1fb660-eb53-4b4c-8456-4c3288a27a7a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:30 crc kubenswrapper[4853]: I1122 07:49:30.395033 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df14bfb5-652f-4e60-a709-e3ed7348d00a-kube-api-access-wrrnb" (OuterVolumeSpecName: "kube-api-access-wrrnb") pod "df14bfb5-652f-4e60-a709-e3ed7348d00a" (UID: "df14bfb5-652f-4e60-a709-e3ed7348d00a"). InnerVolumeSpecName "kube-api-access-wrrnb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:49:30 crc kubenswrapper[4853]: I1122 07:49:30.473739 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df14bfb5-652f-4e60-a709-e3ed7348d00a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "df14bfb5-652f-4e60-a709-e3ed7348d00a" (UID: "df14bfb5-652f-4e60-a709-e3ed7348d00a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:49:30 crc kubenswrapper[4853]: I1122 07:49:30.475655 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df14bfb5-652f-4e60-a709-e3ed7348d00a-config-data" (OuterVolumeSpecName: "config-data") pod "df14bfb5-652f-4e60-a709-e3ed7348d00a" (UID: "df14bfb5-652f-4e60-a709-e3ed7348d00a"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:49:30 crc kubenswrapper[4853]: I1122 07:49:30.484874 4853 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df14bfb5-652f-4e60-a709-e3ed7348d00a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:30 crc kubenswrapper[4853]: I1122 07:49:30.484912 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wrrnb\" (UniqueName: \"kubernetes.io/projected/df14bfb5-652f-4e60-a709-e3ed7348d00a-kube-api-access-wrrnb\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:30 crc kubenswrapper[4853]: I1122 07:49:30.484926 4853 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df14bfb5-652f-4e60-a709-e3ed7348d00a-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:30 crc kubenswrapper[4853]: I1122 07:49:30.545005 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7877d89589-qms8z" Nov 22 07:49:30 crc kubenswrapper[4853]: I1122 07:49:30.656964 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b6484d7cc-qgkhv"] Nov 22 07:49:30 crc kubenswrapper[4853]: I1122 07:49:30.657530 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5b6484d7cc-qgkhv" podUID="3e4475d3-9059-4761-8a99-ad8e31d01947" containerName="dnsmasq-dns" containerID="cri-o://58560fa99378d76cae1cfb758a089593a15ad59a0b3f97c6f5e4bac473b2baae" gracePeriod=10 Nov 22 07:49:30 crc kubenswrapper[4853]: I1122 07:49:30.823786 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"96bf10be-9206-4b95-af69-8f41e5e530c6","Type":"ContainerDied","Data":"a90de83108b55a574d18846a553ce0499c2965268f31bc48acd0352e5f9537ef"} Nov 22 07:49:30 crc kubenswrapper[4853]: I1122 07:49:30.823848 4853 scope.go:117] "RemoveContainer" containerID="fea2d9e9df49c739fc4735c8f38b894b5a2d8a29149609c8b1c5c0fec5eee49b" Nov 22 07:49:30 crc kubenswrapper[4853]: I1122 07:49:30.824020 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 22 07:49:30 crc kubenswrapper[4853]: I1122 07:49:30.839860 4853 generic.go:334] "Generic (PLEG): container finished" podID="3e4475d3-9059-4761-8a99-ad8e31d01947" containerID="58560fa99378d76cae1cfb758a089593a15ad59a0b3f97c6f5e4bac473b2baae" exitCode=0 Nov 22 07:49:30 crc kubenswrapper[4853]: I1122 07:49:30.840314 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b6484d7cc-qgkhv" event={"ID":"3e4475d3-9059-4761-8a99-ad8e31d01947","Type":"ContainerDied","Data":"58560fa99378d76cae1cfb758a089593a15ad59a0b3f97c6f5e4bac473b2baae"} Nov 22 07:49:30 crc kubenswrapper[4853]: I1122 07:49:30.844843 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"df14bfb5-652f-4e60-a709-e3ed7348d00a","Type":"ContainerDied","Data":"9d4664a2cf9511eb48195a674b303a4560f45846df8891a5b73a804170cbb0af"} Nov 22 07:49:30 crc kubenswrapper[4853]: I1122 07:49:30.844946 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Nov 22 07:49:30 crc kubenswrapper[4853]: I1122 07:49:30.883071 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9c1fb660-eb53-4b4c-8456-4c3288a27a7a","Type":"ContainerDied","Data":"38875a176b12afecd7e5465de4e31f03e2d164bf298f0373b52403bd43b47ae9"} Nov 22 07:49:30 crc kubenswrapper[4853]: I1122 07:49:30.883196 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 22 07:49:30 crc kubenswrapper[4853]: I1122 07:49:30.896895 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 22 07:49:30 crc kubenswrapper[4853]: I1122 07:49:30.926068 4853 generic.go:334] "Generic (PLEG): container finished" podID="01047ee7-2bc8-487e-a7f2-8696bd86fd13" containerID="4a7c1bfa5901a81f06c4f2963121831d49fee56f624c1c1c363bfa002dcee2bb" exitCode=0 Nov 22 07:49:30 crc kubenswrapper[4853]: I1122 07:49:30.926121 4853 generic.go:334] "Generic (PLEG): container finished" podID="01047ee7-2bc8-487e-a7f2-8696bd86fd13" containerID="c460888c4d820ddce3ffa21d8afe1821af0c334d4de03a59ebe861143ba7fda5" exitCode=2 Nov 22 07:49:30 crc kubenswrapper[4853]: I1122 07:49:30.926133 4853 generic.go:334] "Generic (PLEG): container finished" podID="01047ee7-2bc8-487e-a7f2-8696bd86fd13" containerID="6bb6e672f873c915b40264e896e2d0777b8b5d9bce9f067caa9aed1b90fd8d84" exitCode=0 Nov 22 07:49:30 crc kubenswrapper[4853]: I1122 07:49:30.926166 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"01047ee7-2bc8-487e-a7f2-8696bd86fd13","Type":"ContainerDied","Data":"4a7c1bfa5901a81f06c4f2963121831d49fee56f624c1c1c363bfa002dcee2bb"} Nov 22 07:49:30 crc kubenswrapper[4853]: I1122 07:49:30.926231 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"01047ee7-2bc8-487e-a7f2-8696bd86fd13","Type":"ContainerDied","Data":"c460888c4d820ddce3ffa21d8afe1821af0c334d4de03a59ebe861143ba7fda5"} Nov 22 07:49:30 crc kubenswrapper[4853]: I1122 07:49:30.926249 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"01047ee7-2bc8-487e-a7f2-8696bd86fd13","Type":"ContainerDied","Data":"6bb6e672f873c915b40264e896e2d0777b8b5d9bce9f067caa9aed1b90fd8d84"} Nov 22 07:49:30 crc kubenswrapper[4853]: I1122 07:49:30.927185 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Nov 22 07:49:30 crc kubenswrapper[4853]: I1122 07:49:30.939912 4853 scope.go:117] "RemoveContainer" containerID="c9f1c4bc31bec9841c2da3f868aa9bf524a9e4517077a71c16a01be4702a0792" Nov 22 07:49:30 crc kubenswrapper[4853]: I1122 07:49:30.940074 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 22 07:49:30 crc kubenswrapper[4853]: E1122 07:49:30.941210 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df14bfb5-652f-4e60-a709-e3ed7348d00a" containerName="nova-cell0-conductor-conductor" Nov 22 07:49:30 crc kubenswrapper[4853]: I1122 07:49:30.941237 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="df14bfb5-652f-4e60-a709-e3ed7348d00a" containerName="nova-cell0-conductor-conductor" Nov 22 07:49:30 crc kubenswrapper[4853]: E1122 07:49:30.941259 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c1fb660-eb53-4b4c-8456-4c3288a27a7a" containerName="nova-metadata-metadata" Nov 22 07:49:30 crc kubenswrapper[4853]: I1122 07:49:30.941265 4853 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="9c1fb660-eb53-4b4c-8456-4c3288a27a7a" containerName="nova-metadata-metadata" Nov 22 07:49:30 crc kubenswrapper[4853]: E1122 07:49:30.941314 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c1fb660-eb53-4b4c-8456-4c3288a27a7a" containerName="nova-metadata-log" Nov 22 07:49:30 crc kubenswrapper[4853]: I1122 07:49:30.941321 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c1fb660-eb53-4b4c-8456-4c3288a27a7a" containerName="nova-metadata-log" Nov 22 07:49:30 crc kubenswrapper[4853]: I1122 07:49:30.941583 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="df14bfb5-652f-4e60-a709-e3ed7348d00a" containerName="nova-cell0-conductor-conductor" Nov 22 07:49:30 crc kubenswrapper[4853]: I1122 07:49:30.941609 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c1fb660-eb53-4b4c-8456-4c3288a27a7a" containerName="nova-metadata-log" Nov 22 07:49:30 crc kubenswrapper[4853]: I1122 07:49:30.941629 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c1fb660-eb53-4b4c-8456-4c3288a27a7a" containerName="nova-metadata-metadata" Nov 22 07:49:30 crc kubenswrapper[4853]: I1122 07:49:30.943550 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 22 07:49:30 crc kubenswrapper[4853]: I1122 07:49:30.954467 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 22 07:49:30 crc kubenswrapper[4853]: I1122 07:49:30.986061 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 22 07:49:31 crc kubenswrapper[4853]: I1122 07:49:31.008718 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a2cabe90-83e6-41ba-a457-c6a3ca299950-logs\") pod \"nova-api-0\" (UID: \"a2cabe90-83e6-41ba-a457-c6a3ca299950\") " pod="openstack/nova-api-0" Nov 22 07:49:31 crc kubenswrapper[4853]: I1122 07:49:31.008831 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2cabe90-83e6-41ba-a457-c6a3ca299950-config-data\") pod \"nova-api-0\" (UID: \"a2cabe90-83e6-41ba-a457-c6a3ca299950\") " pod="openstack/nova-api-0" Nov 22 07:49:31 crc kubenswrapper[4853]: I1122 07:49:31.009022 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qvcj\" (UniqueName: \"kubernetes.io/projected/a2cabe90-83e6-41ba-a457-c6a3ca299950-kube-api-access-9qvcj\") pod \"nova-api-0\" (UID: \"a2cabe90-83e6-41ba-a457-c6a3ca299950\") " pod="openstack/nova-api-0" Nov 22 07:49:31 crc kubenswrapper[4853]: I1122 07:49:31.009053 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2cabe90-83e6-41ba-a457-c6a3ca299950-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"a2cabe90-83e6-41ba-a457-c6a3ca299950\") " pod="openstack/nova-api-0" Nov 22 07:49:31 crc kubenswrapper[4853]: I1122 07:49:31.038890 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 22 07:49:31 crc kubenswrapper[4853]: I1122 07:49:31.057195 4853 scope.go:117] "RemoveContainer" containerID="dc710994837cfb60b29ae2d2d75f810962975614729cd3fc2ed54dd1067f34ef" Nov 22 07:49:31 crc kubenswrapper[4853]: I1122 07:49:31.090070 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/nova-cell0-conductor-0"] Nov 22 07:49:31 crc kubenswrapper[4853]: I1122 07:49:31.111853 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 22 07:49:31 crc kubenswrapper[4853]: I1122 07:49:31.112717 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9qvcj\" (UniqueName: \"kubernetes.io/projected/a2cabe90-83e6-41ba-a457-c6a3ca299950-kube-api-access-9qvcj\") pod \"nova-api-0\" (UID: \"a2cabe90-83e6-41ba-a457-c6a3ca299950\") " pod="openstack/nova-api-0" Nov 22 07:49:31 crc kubenswrapper[4853]: I1122 07:49:31.112810 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2cabe90-83e6-41ba-a457-c6a3ca299950-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"a2cabe90-83e6-41ba-a457-c6a3ca299950\") " pod="openstack/nova-api-0" Nov 22 07:49:31 crc kubenswrapper[4853]: I1122 07:49:31.113052 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a2cabe90-83e6-41ba-a457-c6a3ca299950-logs\") pod \"nova-api-0\" (UID: \"a2cabe90-83e6-41ba-a457-c6a3ca299950\") " pod="openstack/nova-api-0" Nov 22 07:49:31 crc kubenswrapper[4853]: I1122 07:49:31.113113 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2cabe90-83e6-41ba-a457-c6a3ca299950-config-data\") pod \"nova-api-0\" (UID: \"a2cabe90-83e6-41ba-a457-c6a3ca299950\") " pod="openstack/nova-api-0" Nov 22 07:49:31 crc kubenswrapper[4853]: I1122 07:49:31.114239 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Nov 22 07:49:31 crc kubenswrapper[4853]: I1122 07:49:31.115676 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a2cabe90-83e6-41ba-a457-c6a3ca299950-logs\") pod \"nova-api-0\" (UID: \"a2cabe90-83e6-41ba-a457-c6a3ca299950\") " pod="openstack/nova-api-0" Nov 22 07:49:31 crc kubenswrapper[4853]: I1122 07:49:31.120821 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2cabe90-83e6-41ba-a457-c6a3ca299950-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"a2cabe90-83e6-41ba-a457-c6a3ca299950\") " pod="openstack/nova-api-0" Nov 22 07:49:31 crc kubenswrapper[4853]: I1122 07:49:31.124420 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Nov 22 07:49:31 crc kubenswrapper[4853]: I1122 07:49:31.128106 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2cabe90-83e6-41ba-a457-c6a3ca299950-config-data\") pod \"nova-api-0\" (UID: \"a2cabe90-83e6-41ba-a457-c6a3ca299950\") " pod="openstack/nova-api-0" Nov 22 07:49:31 crc kubenswrapper[4853]: I1122 07:49:31.133564 4853 scope.go:117] "RemoveContainer" containerID="f06330da3a188696c34c64cd2abbddcb6cc2b5046d0d51d49506e105c36ac344" Nov 22 07:49:31 crc kubenswrapper[4853]: I1122 07:49:31.145800 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9qvcj\" (UniqueName: \"kubernetes.io/projected/a2cabe90-83e6-41ba-a457-c6a3ca299950-kube-api-access-9qvcj\") pod \"nova-api-0\" (UID: \"a2cabe90-83e6-41ba-a457-c6a3ca299950\") " pod="openstack/nova-api-0" Nov 22 07:49:31 crc kubenswrapper[4853]: I1122 
07:49:31.145879 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 22 07:49:31 crc kubenswrapper[4853]: I1122 07:49:31.170771 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 22 07:49:31 crc kubenswrapper[4853]: I1122 07:49:31.234219 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Nov 22 07:49:31 crc kubenswrapper[4853]: I1122 07:49:31.297734 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 22 07:49:31 crc kubenswrapper[4853]: I1122 07:49:31.329291 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ef6d468-e6fd-4064-8f59-6d63c5d45e1f-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"8ef6d468-e6fd-4064-8f59-6d63c5d45e1f\") " pod="openstack/nova-cell0-conductor-0" Nov 22 07:49:31 crc kubenswrapper[4853]: I1122 07:49:31.330244 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ef6d468-e6fd-4064-8f59-6d63c5d45e1f-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"8ef6d468-e6fd-4064-8f59-6d63c5d45e1f\") " pod="openstack/nova-cell0-conductor-0" Nov 22 07:49:31 crc kubenswrapper[4853]: I1122 07:49:31.330453 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8sc4n\" (UniqueName: \"kubernetes.io/projected/8ef6d468-e6fd-4064-8f59-6d63c5d45e1f-kube-api-access-8sc4n\") pod \"nova-cell0-conductor-0\" (UID: \"8ef6d468-e6fd-4064-8f59-6d63c5d45e1f\") " pod="openstack/nova-cell0-conductor-0" Nov 22 07:49:31 crc kubenswrapper[4853]: I1122 07:49:31.350499 4853 scope.go:117] "RemoveContainer" containerID="7c7ef19ccfe77d17f8cd14bd9d6d5176f607d286a5715937e29d4512211c382f" Nov 22 07:49:31 crc kubenswrapper[4853]: I1122 07:49:31.365913 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 22 07:49:31 crc kubenswrapper[4853]: I1122 07:49:31.412116 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 22 07:49:31 crc kubenswrapper[4853]: I1122 07:49:31.420809 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 22 07:49:31 crc kubenswrapper[4853]: I1122 07:49:31.424278 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Nov 22 07:49:31 crc kubenswrapper[4853]: I1122 07:49:31.454960 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 22 07:49:31 crc kubenswrapper[4853]: I1122 07:49:31.457322 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8sc4n\" (UniqueName: \"kubernetes.io/projected/8ef6d468-e6fd-4064-8f59-6d63c5d45e1f-kube-api-access-8sc4n\") pod \"nova-cell0-conductor-0\" (UID: \"8ef6d468-e6fd-4064-8f59-6d63c5d45e1f\") " pod="openstack/nova-cell0-conductor-0" Nov 22 07:49:31 crc kubenswrapper[4853]: I1122 07:49:31.458812 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ef6d468-e6fd-4064-8f59-6d63c5d45e1f-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"8ef6d468-e6fd-4064-8f59-6d63c5d45e1f\") " pod="openstack/nova-cell0-conductor-0" Nov 22 07:49:31 crc kubenswrapper[4853]: I1122 07:49:31.458895 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ef6d468-e6fd-4064-8f59-6d63c5d45e1f-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"8ef6d468-e6fd-4064-8f59-6d63c5d45e1f\") " pod="openstack/nova-cell0-conductor-0" Nov 22 07:49:31 crc kubenswrapper[4853]: I1122 07:49:31.466503 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ef6d468-e6fd-4064-8f59-6d63c5d45e1f-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"8ef6d468-e6fd-4064-8f59-6d63c5d45e1f\") " pod="openstack/nova-cell0-conductor-0" Nov 22 07:49:31 crc kubenswrapper[4853]: I1122 07:49:31.511054 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ef6d468-e6fd-4064-8f59-6d63c5d45e1f-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"8ef6d468-e6fd-4064-8f59-6d63c5d45e1f\") " pod="openstack/nova-cell0-conductor-0" Nov 22 07:49:31 crc kubenswrapper[4853]: I1122 07:49:31.523041 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8sc4n\" (UniqueName: \"kubernetes.io/projected/8ef6d468-e6fd-4064-8f59-6d63c5d45e1f-kube-api-access-8sc4n\") pod \"nova-cell0-conductor-0\" (UID: \"8ef6d468-e6fd-4064-8f59-6d63c5d45e1f\") " pod="openstack/nova-cell0-conductor-0" Nov 22 07:49:31 crc kubenswrapper[4853]: I1122 07:49:31.561899 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/664e44b4-df62-4cb1-a7c3-4b8d99c92fc8-config-data\") pod \"nova-metadata-0\" (UID: \"664e44b4-df62-4cb1-a7c3-4b8d99c92fc8\") " pod="openstack/nova-metadata-0" Nov 22 07:49:31 crc kubenswrapper[4853]: I1122 07:49:31.561970 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/664e44b4-df62-4cb1-a7c3-4b8d99c92fc8-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"664e44b4-df62-4cb1-a7c3-4b8d99c92fc8\") " 
pod="openstack/nova-metadata-0" Nov 22 07:49:31 crc kubenswrapper[4853]: I1122 07:49:31.561995 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/664e44b4-df62-4cb1-a7c3-4b8d99c92fc8-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"664e44b4-df62-4cb1-a7c3-4b8d99c92fc8\") " pod="openstack/nova-metadata-0" Nov 22 07:49:31 crc kubenswrapper[4853]: I1122 07:49:31.562042 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9t2vr\" (UniqueName: \"kubernetes.io/projected/664e44b4-df62-4cb1-a7c3-4b8d99c92fc8-kube-api-access-9t2vr\") pod \"nova-metadata-0\" (UID: \"664e44b4-df62-4cb1-a7c3-4b8d99c92fc8\") " pod="openstack/nova-metadata-0" Nov 22 07:49:31 crc kubenswrapper[4853]: I1122 07:49:31.562097 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/664e44b4-df62-4cb1-a7c3-4b8d99c92fc8-logs\") pod \"nova-metadata-0\" (UID: \"664e44b4-df62-4cb1-a7c3-4b8d99c92fc8\") " pod="openstack/nova-metadata-0" Nov 22 07:49:31 crc kubenswrapper[4853]: I1122 07:49:31.615214 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Nov 22 07:49:31 crc kubenswrapper[4853]: I1122 07:49:31.669568 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-58t9w"] Nov 22 07:49:31 crc kubenswrapper[4853]: I1122 07:49:31.677491 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/664e44b4-df62-4cb1-a7c3-4b8d99c92fc8-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"664e44b4-df62-4cb1-a7c3-4b8d99c92fc8\") " pod="openstack/nova-metadata-0" Nov 22 07:49:31 crc kubenswrapper[4853]: I1122 07:49:31.677567 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/664e44b4-df62-4cb1-a7c3-4b8d99c92fc8-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"664e44b4-df62-4cb1-a7c3-4b8d99c92fc8\") " pod="openstack/nova-metadata-0" Nov 22 07:49:31 crc kubenswrapper[4853]: I1122 07:49:31.677665 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9t2vr\" (UniqueName: \"kubernetes.io/projected/664e44b4-df62-4cb1-a7c3-4b8d99c92fc8-kube-api-access-9t2vr\") pod \"nova-metadata-0\" (UID: \"664e44b4-df62-4cb1-a7c3-4b8d99c92fc8\") " pod="openstack/nova-metadata-0" Nov 22 07:49:31 crc kubenswrapper[4853]: I1122 07:49:31.677802 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/664e44b4-df62-4cb1-a7c3-4b8d99c92fc8-logs\") pod \"nova-metadata-0\" (UID: \"664e44b4-df62-4cb1-a7c3-4b8d99c92fc8\") " pod="openstack/nova-metadata-0" Nov 22 07:49:31 crc kubenswrapper[4853]: I1122 07:49:31.683127 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/664e44b4-df62-4cb1-a7c3-4b8d99c92fc8-config-data\") pod \"nova-metadata-0\" (UID: \"664e44b4-df62-4cb1-a7c3-4b8d99c92fc8\") " pod="openstack/nova-metadata-0" Nov 22 07:49:31 crc kubenswrapper[4853]: I1122 07:49:31.684620 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/664e44b4-df62-4cb1-a7c3-4b8d99c92fc8-logs\") pod \"nova-metadata-0\" (UID: \"664e44b4-df62-4cb1-a7c3-4b8d99c92fc8\") " pod="openstack/nova-metadata-0" Nov 22 07:49:31 crc kubenswrapper[4853]: I1122 07:49:31.694645 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b6484d7cc-qgkhv" Nov 22 07:49:31 crc kubenswrapper[4853]: I1122 07:49:31.695220 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/664e44b4-df62-4cb1-a7c3-4b8d99c92fc8-config-data\") pod \"nova-metadata-0\" (UID: \"664e44b4-df62-4cb1-a7c3-4b8d99c92fc8\") " pod="openstack/nova-metadata-0" Nov 22 07:49:31 crc kubenswrapper[4853]: I1122 07:49:31.697666 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/664e44b4-df62-4cb1-a7c3-4b8d99c92fc8-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"664e44b4-df62-4cb1-a7c3-4b8d99c92fc8\") " pod="openstack/nova-metadata-0" Nov 22 07:49:31 crc kubenswrapper[4853]: I1122 07:49:31.708110 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/664e44b4-df62-4cb1-a7c3-4b8d99c92fc8-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"664e44b4-df62-4cb1-a7c3-4b8d99c92fc8\") " pod="openstack/nova-metadata-0" Nov 22 07:49:31 crc kubenswrapper[4853]: I1122 07:49:31.732617 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9t2vr\" (UniqueName: \"kubernetes.io/projected/664e44b4-df62-4cb1-a7c3-4b8d99c92fc8-kube-api-access-9t2vr\") pod \"nova-metadata-0\" (UID: \"664e44b4-df62-4cb1-a7c3-4b8d99c92fc8\") " pod="openstack/nova-metadata-0" Nov 22 07:49:31 crc kubenswrapper[4853]: I1122 07:49:31.797388 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3e4475d3-9059-4761-8a99-ad8e31d01947-ovsdbserver-sb\") pod \"3e4475d3-9059-4761-8a99-ad8e31d01947\" (UID: \"3e4475d3-9059-4761-8a99-ad8e31d01947\") " Nov 22 07:49:31 crc kubenswrapper[4853]: I1122 07:49:31.797447 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3e4475d3-9059-4761-8a99-ad8e31d01947-dns-swift-storage-0\") pod \"3e4475d3-9059-4761-8a99-ad8e31d01947\" (UID: \"3e4475d3-9059-4761-8a99-ad8e31d01947\") " Nov 22 07:49:31 crc kubenswrapper[4853]: I1122 07:49:31.797589 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3e4475d3-9059-4761-8a99-ad8e31d01947-dns-svc\") pod \"3e4475d3-9059-4761-8a99-ad8e31d01947\" (UID: \"3e4475d3-9059-4761-8a99-ad8e31d01947\") " Nov 22 07:49:31 crc kubenswrapper[4853]: I1122 07:49:31.797783 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3e4475d3-9059-4761-8a99-ad8e31d01947-ovsdbserver-nb\") pod \"3e4475d3-9059-4761-8a99-ad8e31d01947\" (UID: \"3e4475d3-9059-4761-8a99-ad8e31d01947\") " Nov 22 07:49:31 crc kubenswrapper[4853]: I1122 07:49:31.797867 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-chcs4\" (UniqueName: \"kubernetes.io/projected/3e4475d3-9059-4761-8a99-ad8e31d01947-kube-api-access-chcs4\") pod \"3e4475d3-9059-4761-8a99-ad8e31d01947\" (UID: 
\"3e4475d3-9059-4761-8a99-ad8e31d01947\") " Nov 22 07:49:31 crc kubenswrapper[4853]: I1122 07:49:31.797907 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3e4475d3-9059-4761-8a99-ad8e31d01947-config\") pod \"3e4475d3-9059-4761-8a99-ad8e31d01947\" (UID: \"3e4475d3-9059-4761-8a99-ad8e31d01947\") " Nov 22 07:49:31 crc kubenswrapper[4853]: I1122 07:49:31.847152 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96bf10be-9206-4b95-af69-8f41e5e530c6" path="/var/lib/kubelet/pods/96bf10be-9206-4b95-af69-8f41e5e530c6/volumes" Nov 22 07:49:31 crc kubenswrapper[4853]: I1122 07:49:31.863695 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9c1fb660-eb53-4b4c-8456-4c3288a27a7a" path="/var/lib/kubelet/pods/9c1fb660-eb53-4b4c-8456-4c3288a27a7a/volumes" Nov 22 07:49:31 crc kubenswrapper[4853]: I1122 07:49:31.864886 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df14bfb5-652f-4e60-a709-e3ed7348d00a" path="/var/lib/kubelet/pods/df14bfb5-652f-4e60-a709-e3ed7348d00a/volumes" Nov 22 07:49:31 crc kubenswrapper[4853]: I1122 07:49:31.940422 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e4475d3-9059-4761-8a99-ad8e31d01947-kube-api-access-chcs4" (OuterVolumeSpecName: "kube-api-access-chcs4") pod "3e4475d3-9059-4761-8a99-ad8e31d01947" (UID: "3e4475d3-9059-4761-8a99-ad8e31d01947"). InnerVolumeSpecName "kube-api-access-chcs4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:49:31 crc kubenswrapper[4853]: I1122 07:49:31.941378 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 22 07:49:32 crc kubenswrapper[4853]: I1122 07:49:32.010940 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e4475d3-9059-4761-8a99-ad8e31d01947-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3e4475d3-9059-4761-8a99-ad8e31d01947" (UID: "3e4475d3-9059-4761-8a99-ad8e31d01947"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:49:32 crc kubenswrapper[4853]: I1122 07:49:32.011354 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e4475d3-9059-4761-8a99-ad8e31d01947-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "3e4475d3-9059-4761-8a99-ad8e31d01947" (UID: "3e4475d3-9059-4761-8a99-ad8e31d01947"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:49:32 crc kubenswrapper[4853]: I1122 07:49:32.026058 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-chcs4\" (UniqueName: \"kubernetes.io/projected/3e4475d3-9059-4761-8a99-ad8e31d01947-kube-api-access-chcs4\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:32 crc kubenswrapper[4853]: I1122 07:49:32.026090 4853 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3e4475d3-9059-4761-8a99-ad8e31d01947-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:32 crc kubenswrapper[4853]: I1122 07:49:32.026103 4853 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3e4475d3-9059-4761-8a99-ad8e31d01947-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:32 crc kubenswrapper[4853]: I1122 07:49:32.105078 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b6484d7cc-qgkhv" Nov 22 07:49:32 crc kubenswrapper[4853]: I1122 07:49:32.116777 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e4475d3-9059-4761-8a99-ad8e31d01947-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "3e4475d3-9059-4761-8a99-ad8e31d01947" (UID: "3e4475d3-9059-4761-8a99-ad8e31d01947"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:49:32 crc kubenswrapper[4853]: I1122 07:49:32.140894 4853 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3e4475d3-9059-4761-8a99-ad8e31d01947-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:32 crc kubenswrapper[4853]: I1122 07:49:32.154153 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e4475d3-9059-4761-8a99-ad8e31d01947-config" (OuterVolumeSpecName: "config") pod "3e4475d3-9059-4761-8a99-ad8e31d01947" (UID: "3e4475d3-9059-4761-8a99-ad8e31d01947"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:49:32 crc kubenswrapper[4853]: I1122 07:49:32.242247 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e4475d3-9059-4761-8a99-ad8e31d01947-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "3e4475d3-9059-4761-8a99-ad8e31d01947" (UID: "3e4475d3-9059-4761-8a99-ad8e31d01947"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:49:32 crc kubenswrapper[4853]: I1122 07:49:32.242703 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3e4475d3-9059-4761-8a99-ad8e31d01947-ovsdbserver-sb\") pod \"3e4475d3-9059-4761-8a99-ad8e31d01947\" (UID: \"3e4475d3-9059-4761-8a99-ad8e31d01947\") " Nov 22 07:49:32 crc kubenswrapper[4853]: I1122 07:49:32.243641 4853 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3e4475d3-9059-4761-8a99-ad8e31d01947-config\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:32 crc kubenswrapper[4853]: W1122 07:49:32.247750 4853 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/3e4475d3-9059-4761-8a99-ad8e31d01947/volumes/kubernetes.io~configmap/ovsdbserver-sb Nov 22 07:49:32 crc kubenswrapper[4853]: I1122 07:49:32.247800 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e4475d3-9059-4761-8a99-ad8e31d01947-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "3e4475d3-9059-4761-8a99-ad8e31d01947" (UID: "3e4475d3-9059-4761-8a99-ad8e31d01947"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:49:32 crc kubenswrapper[4853]: I1122 07:49:32.259153 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-78e4-account-create-7mqjx"] Nov 22 07:49:32 crc kubenswrapper[4853]: I1122 07:49:32.259188 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-78e4-account-create-7mqjx"] Nov 22 07:49:32 crc kubenswrapper[4853]: I1122 07:49:32.259207 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-58t9w" event={"ID":"70c5e5cc-15fb-41a4-b40d-8f770bae2182","Type":"ContainerStarted","Data":"9484e7e5cc838a46180e6cc29cb5db16398888e23d51709a406c4fb08c64b834"} Nov 22 07:49:32 crc kubenswrapper[4853]: I1122 07:49:32.259240 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b6484d7cc-qgkhv" event={"ID":"3e4475d3-9059-4761-8a99-ad8e31d01947","Type":"ContainerDied","Data":"719506d983c33debf4630f5d37df276f5fbfd79fd69ee92fbca7ec6b802525ec"} Nov 22 07:49:32 crc kubenswrapper[4853]: I1122 07:49:32.259283 4853 scope.go:117] "RemoveContainer" containerID="58560fa99378d76cae1cfb758a089593a15ad59a0b3f97c6f5e4bac473b2baae" Nov 22 07:49:32 crc kubenswrapper[4853]: I1122 07:49:32.351961 4853 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3e4475d3-9059-4761-8a99-ad8e31d01947-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:32 crc kubenswrapper[4853]: I1122 07:49:32.396431 4853 scope.go:117] "RemoveContainer" containerID="69c1a1537094a06b0e1898039c33ab49ca3e2187370b00cda9e43195bdaa1cc0" Nov 22 07:49:32 crc kubenswrapper[4853]: I1122 07:49:32.494943 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b6484d7cc-qgkhv"] Nov 22 07:49:32 crc kubenswrapper[4853]: I1122 07:49:32.506140 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5b6484d7cc-qgkhv"] Nov 22 07:49:32 crc kubenswrapper[4853]: I1122 07:49:32.558156 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 22 07:49:32 crc kubenswrapper[4853]: I1122 07:49:32.870208 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/nova-cell0-conductor-0"] Nov 22 07:49:33 crc kubenswrapper[4853]: I1122 07:49:33.001656 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 22 07:49:33 crc kubenswrapper[4853]: I1122 07:49:33.169838 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a2cabe90-83e6-41ba-a457-c6a3ca299950","Type":"ContainerStarted","Data":"c8df245231d1c98a3fe308e20c0afb8409f6385e17667bfd30c49af17aa5623c"} Nov 22 07:49:33 crc kubenswrapper[4853]: I1122 07:49:33.170283 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a2cabe90-83e6-41ba-a457-c6a3ca299950","Type":"ContainerStarted","Data":"a2368c4aca7c384595f2d27bb329e47c735819e3037353f2af641f6ea493c69e"} Nov 22 07:49:33 crc kubenswrapper[4853]: I1122 07:49:33.191742 4853 generic.go:334] "Generic (PLEG): container finished" podID="70c5e5cc-15fb-41a4-b40d-8f770bae2182" containerID="8607a882d7f7cd4c997b2f97ad9bb3660172d7de785bd5a56e9d4544faae3736" exitCode=0 Nov 22 07:49:33 crc kubenswrapper[4853]: I1122 07:49:33.191866 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-58t9w" event={"ID":"70c5e5cc-15fb-41a4-b40d-8f770bae2182","Type":"ContainerDied","Data":"8607a882d7f7cd4c997b2f97ad9bb3660172d7de785bd5a56e9d4544faae3736"} Nov 22 07:49:33 crc kubenswrapper[4853]: I1122 07:49:33.203391 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"8ef6d468-e6fd-4064-8f59-6d63c5d45e1f","Type":"ContainerStarted","Data":"c56b87aec77c4883ae90ad40032905ca446a503b5b66aa3be442189147b93f01"} Nov 22 07:49:33 crc kubenswrapper[4853]: I1122 07:49:33.205812 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"664e44b4-df62-4cb1-a7c3-4b8d99c92fc8","Type":"ContainerStarted","Data":"ad13b7854de5a67033118efd33364b5adc9cebe15eabda74f003a8e7a8841630"} Nov 22 07:49:33 crc kubenswrapper[4853]: I1122 07:49:33.766305 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3e4475d3-9059-4761-8a99-ad8e31d01947" path="/var/lib/kubelet/pods/3e4475d3-9059-4761-8a99-ad8e31d01947/volumes" Nov 22 07:49:33 crc kubenswrapper[4853]: I1122 07:49:33.768552 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="98f11ce6-a3d1-43d6-b94b-36c0b37e1959" path="/var/lib/kubelet/pods/98f11ce6-a3d1-43d6-b94b-36c0b37e1959/volumes" Nov 22 07:49:34 crc kubenswrapper[4853]: I1122 07:49:34.218937 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a2cabe90-83e6-41ba-a457-c6a3ca299950","Type":"ContainerStarted","Data":"cde5d1c795a7935e17c16d5aa3febb4d3968badddf4f73695f0ab25691331d10"} Nov 22 07:49:34 crc kubenswrapper[4853]: I1122 07:49:34.221578 4853 generic.go:334] "Generic (PLEG): container finished" podID="42ee627d-63e1-4a7f-9da3-aca02dcd4cec" containerID="dcf56e335ecbb41ee55bd67167913c1cce60d9282cd45960933a48273bac10c8" exitCode=0 Nov 22 07:49:34 crc kubenswrapper[4853]: I1122 07:49:34.221685 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-cbsz9" event={"ID":"42ee627d-63e1-4a7f-9da3-aca02dcd4cec","Type":"ContainerDied","Data":"dcf56e335ecbb41ee55bd67167913c1cce60d9282cd45960933a48273bac10c8"} Nov 22 07:49:34 crc kubenswrapper[4853]: I1122 07:49:34.225353 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" 
event={"ID":"8ef6d468-e6fd-4064-8f59-6d63c5d45e1f","Type":"ContainerStarted","Data":"728d548bd77c4b94178de19b8f2870e466e03a51b63b1b26181a56dcc67766df"} Nov 22 07:49:34 crc kubenswrapper[4853]: I1122 07:49:34.226055 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Nov 22 07:49:34 crc kubenswrapper[4853]: I1122 07:49:34.227844 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"664e44b4-df62-4cb1-a7c3-4b8d99c92fc8","Type":"ContainerStarted","Data":"4c9de0535df26f302f586d1468edbf61c50ef4fe04c314449eba3b5611590359"} Nov 22 07:49:34 crc kubenswrapper[4853]: I1122 07:49:34.227892 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"664e44b4-df62-4cb1-a7c3-4b8d99c92fc8","Type":"ContainerStarted","Data":"e6cd2f6ec7217e64f71c7cc3af43f39ac952576644c4c5b9464de1e5b1424111"} Nov 22 07:49:34 crc kubenswrapper[4853]: I1122 07:49:34.257625 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=4.257594036 podStartE2EDuration="4.257594036s" podCreationTimestamp="2025-11-22 07:49:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:49:34.246120935 +0000 UTC m=+2373.086743571" watchObservedRunningTime="2025-11-22 07:49:34.257594036 +0000 UTC m=+2373.098216662" Nov 22 07:49:34 crc kubenswrapper[4853]: I1122 07:49:34.328564 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=3.3285407989999998 podStartE2EDuration="3.328540799s" podCreationTimestamp="2025-11-22 07:49:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:49:34.298705154 +0000 UTC m=+2373.139327780" watchObservedRunningTime="2025-11-22 07:49:34.328540799 +0000 UTC m=+2373.169163425" Nov 22 07:49:34 crc kubenswrapper[4853]: I1122 07:49:34.343040 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.34301679 podStartE2EDuration="3.34301679s" podCreationTimestamp="2025-11-22 07:49:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:49:34.315385204 +0000 UTC m=+2373.156007850" watchObservedRunningTime="2025-11-22 07:49:34.34301679 +0000 UTC m=+2373.183639416" Nov 22 07:49:35 crc kubenswrapper[4853]: I1122 07:49:35.161882 4853 scope.go:117] "RemoveContainer" containerID="2833634a0e6caed565042c3df0b12b4a476d0c0850b583d50fdc424f26c80a64" Nov 22 07:49:35 crc kubenswrapper[4853]: I1122 07:49:35.209142 4853 scope.go:117] "RemoveContainer" containerID="ba2ee58cccfe4bfff8eecc7c72c2a0c42def80d00bd4c58d443d1db0f2af54dd" Nov 22 07:49:35 crc kubenswrapper[4853]: I1122 07:49:35.254453 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-58t9w" event={"ID":"70c5e5cc-15fb-41a4-b40d-8f770bae2182","Type":"ContainerStarted","Data":"c328b7dd484e540d55183cc7d9b9bfc4621e11e58c7828a401f170b1658befee"} Nov 22 07:49:35 crc kubenswrapper[4853]: I1122 07:49:35.788455 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-cbsz9" Nov 22 07:49:35 crc kubenswrapper[4853]: I1122 07:49:35.909354 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42ee627d-63e1-4a7f-9da3-aca02dcd4cec-combined-ca-bundle\") pod \"42ee627d-63e1-4a7f-9da3-aca02dcd4cec\" (UID: \"42ee627d-63e1-4a7f-9da3-aca02dcd4cec\") " Nov 22 07:49:35 crc kubenswrapper[4853]: I1122 07:49:35.909493 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42ee627d-63e1-4a7f-9da3-aca02dcd4cec-config-data\") pod \"42ee627d-63e1-4a7f-9da3-aca02dcd4cec\" (UID: \"42ee627d-63e1-4a7f-9da3-aca02dcd4cec\") " Nov 22 07:49:35 crc kubenswrapper[4853]: I1122 07:49:35.909563 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/42ee627d-63e1-4a7f-9da3-aca02dcd4cec-scripts\") pod \"42ee627d-63e1-4a7f-9da3-aca02dcd4cec\" (UID: \"42ee627d-63e1-4a7f-9da3-aca02dcd4cec\") " Nov 22 07:49:35 crc kubenswrapper[4853]: I1122 07:49:35.909629 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vd5zq\" (UniqueName: \"kubernetes.io/projected/42ee627d-63e1-4a7f-9da3-aca02dcd4cec-kube-api-access-vd5zq\") pod \"42ee627d-63e1-4a7f-9da3-aca02dcd4cec\" (UID: \"42ee627d-63e1-4a7f-9da3-aca02dcd4cec\") " Nov 22 07:49:35 crc kubenswrapper[4853]: I1122 07:49:35.923745 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42ee627d-63e1-4a7f-9da3-aca02dcd4cec-kube-api-access-vd5zq" (OuterVolumeSpecName: "kube-api-access-vd5zq") pod "42ee627d-63e1-4a7f-9da3-aca02dcd4cec" (UID: "42ee627d-63e1-4a7f-9da3-aca02dcd4cec"). InnerVolumeSpecName "kube-api-access-vd5zq". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:49:35 crc kubenswrapper[4853]: I1122 07:49:35.930228 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42ee627d-63e1-4a7f-9da3-aca02dcd4cec-scripts" (OuterVolumeSpecName: "scripts") pod "42ee627d-63e1-4a7f-9da3-aca02dcd4cec" (UID: "42ee627d-63e1-4a7f-9da3-aca02dcd4cec"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:49:35 crc kubenswrapper[4853]: I1122 07:49:35.952383 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42ee627d-63e1-4a7f-9da3-aca02dcd4cec-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "42ee627d-63e1-4a7f-9da3-aca02dcd4cec" (UID: "42ee627d-63e1-4a7f-9da3-aca02dcd4cec"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:49:35 crc kubenswrapper[4853]: I1122 07:49:35.963768 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42ee627d-63e1-4a7f-9da3-aca02dcd4cec-config-data" (OuterVolumeSpecName: "config-data") pod "42ee627d-63e1-4a7f-9da3-aca02dcd4cec" (UID: "42ee627d-63e1-4a7f-9da3-aca02dcd4cec"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:49:36 crc kubenswrapper[4853]: I1122 07:49:36.020279 4853 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42ee627d-63e1-4a7f-9da3-aca02dcd4cec-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:36 crc kubenswrapper[4853]: I1122 07:49:36.020658 4853 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/42ee627d-63e1-4a7f-9da3-aca02dcd4cec-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:36 crc kubenswrapper[4853]: I1122 07:49:36.020679 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vd5zq\" (UniqueName: \"kubernetes.io/projected/42ee627d-63e1-4a7f-9da3-aca02dcd4cec-kube-api-access-vd5zq\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:36 crc kubenswrapper[4853]: I1122 07:49:36.020699 4853 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42ee627d-63e1-4a7f-9da3-aca02dcd4cec-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:36 crc kubenswrapper[4853]: I1122 07:49:36.035836 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-b744-account-create-frzkl"] Nov 22 07:49:36 crc kubenswrapper[4853]: I1122 07:49:36.047088 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-b744-account-create-frzkl"] Nov 22 07:49:36 crc kubenswrapper[4853]: I1122 07:49:36.267981 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-cbsz9" Nov 22 07:49:36 crc kubenswrapper[4853]: I1122 07:49:36.268517 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-cbsz9" event={"ID":"42ee627d-63e1-4a7f-9da3-aca02dcd4cec","Type":"ContainerDied","Data":"730ecb62360f5da52d8586fd4c3f911e30e219ec6a3083c01055606d36302a04"} Nov 22 07:49:36 crc kubenswrapper[4853]: I1122 07:49:36.268556 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="730ecb62360f5da52d8586fd4c3f911e30e219ec6a3083c01055606d36302a04" Nov 22 07:49:36 crc kubenswrapper[4853]: I1122 07:49:36.943817 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 22 07:49:36 crc kubenswrapper[4853]: I1122 07:49:36.944248 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 22 07:49:37 crc kubenswrapper[4853]: I1122 07:49:37.041863 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-r5xl6"] Nov 22 07:49:37 crc kubenswrapper[4853]: I1122 07:49:37.056910 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-2ac2-account-create-w5tng"] Nov 22 07:49:37 crc kubenswrapper[4853]: I1122 07:49:37.071498 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-r5xl6"] Nov 22 07:49:37 crc kubenswrapper[4853]: I1122 07:49:37.084956 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-jc942"] Nov 22 07:49:37 crc kubenswrapper[4853]: I1122 07:49:37.096085 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-2ac2-account-create-w5tng"] Nov 22 07:49:37 crc kubenswrapper[4853]: I1122 07:49:37.107345 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-jc942"] Nov 22 07:49:37 crc kubenswrapper[4853]: I1122 07:49:37.774130 
4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="26080cb8-1363-43b9-aec1-e84e5bd13de2" path="/var/lib/kubelet/pods/26080cb8-1363-43b9-aec1-e84e5bd13de2/volumes" Nov 22 07:49:37 crc kubenswrapper[4853]: I1122 07:49:37.833332 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5cdfe3e8-bc06-4691-86c0-4e409315cdf9" path="/var/lib/kubelet/pods/5cdfe3e8-bc06-4691-86c0-4e409315cdf9/volumes" Nov 22 07:49:37 crc kubenswrapper[4853]: I1122 07:49:37.834316 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c31a521c-9c4a-40fd-b320-4ebb0ff0fa23" path="/var/lib/kubelet/pods/c31a521c-9c4a-40fd-b320-4ebb0ff0fa23/volumes" Nov 22 07:49:37 crc kubenswrapper[4853]: I1122 07:49:37.835094 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e59d32a8-a318-40f1-9cfe-f10d7d2f31cb" path="/var/lib/kubelet/pods/e59d32a8-a318-40f1-9cfe-f10d7d2f31cb/volumes" Nov 22 07:49:38 crc kubenswrapper[4853]: I1122 07:49:38.035678 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-95nmd"] Nov 22 07:49:38 crc kubenswrapper[4853]: I1122 07:49:38.048411 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-95nmd"] Nov 22 07:49:38 crc kubenswrapper[4853]: I1122 07:49:38.473787 4853 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="01047ee7-2bc8-487e-a7f2-8696bd86fd13" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.234:3000/\": dial tcp 10.217.0.234:3000: connect: connection refused" Nov 22 07:49:39 crc kubenswrapper[4853]: I1122 07:49:39.040610 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-4d23-account-create-4kkcm"] Nov 22 07:49:39 crc kubenswrapper[4853]: I1122 07:49:39.056842 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-4d23-account-create-4kkcm"] Nov 22 07:49:39 crc kubenswrapper[4853]: I1122 07:49:39.313785 4853 generic.go:334] "Generic (PLEG): container finished" podID="01047ee7-2bc8-487e-a7f2-8696bd86fd13" containerID="85850fb25d0eca8bf7256b7f96e332e4972c97ff52be82de4a7d8e1f6e918c46" exitCode=0 Nov 22 07:49:39 crc kubenswrapper[4853]: I1122 07:49:39.314275 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"01047ee7-2bc8-487e-a7f2-8696bd86fd13","Type":"ContainerDied","Data":"85850fb25d0eca8bf7256b7f96e332e4972c97ff52be82de4a7d8e1f6e918c46"} Nov 22 07:49:39 crc kubenswrapper[4853]: I1122 07:49:39.333632 4853 generic.go:334] "Generic (PLEG): container finished" podID="70c5e5cc-15fb-41a4-b40d-8f770bae2182" containerID="c328b7dd484e540d55183cc7d9b9bfc4621e11e58c7828a401f170b1658befee" exitCode=0 Nov 22 07:49:39 crc kubenswrapper[4853]: I1122 07:49:39.333702 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-58t9w" event={"ID":"70c5e5cc-15fb-41a4-b40d-8f770bae2182","Type":"ContainerDied","Data":"c328b7dd484e540d55183cc7d9b9bfc4621e11e58c7828a401f170b1658befee"} Nov 22 07:49:39 crc kubenswrapper[4853]: I1122 07:49:39.704141 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 22 07:49:39 crc kubenswrapper[4853]: I1122 07:49:39.763735 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="99f8fbb2-9f4f-48a1-bbe6-ae11d68fc2cc" path="/var/lib/kubelet/pods/99f8fbb2-9f4f-48a1-bbe6-ae11d68fc2cc/volumes" Nov 22 07:49:39 crc kubenswrapper[4853]: I1122 07:49:39.765024 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4476ce0-7ffe-489f-a7b8-8375a7980bfb" path="/var/lib/kubelet/pods/f4476ce0-7ffe-489f-a7b8-8375a7980bfb/volumes" Nov 22 07:49:39 crc kubenswrapper[4853]: I1122 07:49:39.841026 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/01047ee7-2bc8-487e-a7f2-8696bd86fd13-ceilometer-tls-certs\") pod \"01047ee7-2bc8-487e-a7f2-8696bd86fd13\" (UID: \"01047ee7-2bc8-487e-a7f2-8696bd86fd13\") " Nov 22 07:49:39 crc kubenswrapper[4853]: I1122 07:49:39.841453 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/01047ee7-2bc8-487e-a7f2-8696bd86fd13-run-httpd\") pod \"01047ee7-2bc8-487e-a7f2-8696bd86fd13\" (UID: \"01047ee7-2bc8-487e-a7f2-8696bd86fd13\") " Nov 22 07:49:39 crc kubenswrapper[4853]: I1122 07:49:39.841526 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vv2cj\" (UniqueName: \"kubernetes.io/projected/01047ee7-2bc8-487e-a7f2-8696bd86fd13-kube-api-access-vv2cj\") pod \"01047ee7-2bc8-487e-a7f2-8696bd86fd13\" (UID: \"01047ee7-2bc8-487e-a7f2-8696bd86fd13\") " Nov 22 07:49:39 crc kubenswrapper[4853]: I1122 07:49:39.841677 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/01047ee7-2bc8-487e-a7f2-8696bd86fd13-scripts\") pod \"01047ee7-2bc8-487e-a7f2-8696bd86fd13\" (UID: \"01047ee7-2bc8-487e-a7f2-8696bd86fd13\") " Nov 22 07:49:39 crc kubenswrapper[4853]: I1122 07:49:39.841738 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/01047ee7-2bc8-487e-a7f2-8696bd86fd13-sg-core-conf-yaml\") pod \"01047ee7-2bc8-487e-a7f2-8696bd86fd13\" (UID: \"01047ee7-2bc8-487e-a7f2-8696bd86fd13\") " Nov 22 07:49:39 crc kubenswrapper[4853]: I1122 07:49:39.841857 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01047ee7-2bc8-487e-a7f2-8696bd86fd13-combined-ca-bundle\") pod \"01047ee7-2bc8-487e-a7f2-8696bd86fd13\" (UID: \"01047ee7-2bc8-487e-a7f2-8696bd86fd13\") " Nov 22 07:49:39 crc kubenswrapper[4853]: I1122 07:49:39.841906 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/01047ee7-2bc8-487e-a7f2-8696bd86fd13-log-httpd\") pod \"01047ee7-2bc8-487e-a7f2-8696bd86fd13\" (UID: \"01047ee7-2bc8-487e-a7f2-8696bd86fd13\") " Nov 22 07:49:39 crc kubenswrapper[4853]: I1122 07:49:39.841947 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01047ee7-2bc8-487e-a7f2-8696bd86fd13-config-data\") pod \"01047ee7-2bc8-487e-a7f2-8696bd86fd13\" (UID: \"01047ee7-2bc8-487e-a7f2-8696bd86fd13\") " Nov 22 07:49:39 crc kubenswrapper[4853]: I1122 07:49:39.843495 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/01047ee7-2bc8-487e-a7f2-8696bd86fd13-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "01047ee7-2bc8-487e-a7f2-8696bd86fd13" (UID: "01047ee7-2bc8-487e-a7f2-8696bd86fd13"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:49:39 crc kubenswrapper[4853]: I1122 07:49:39.844004 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/01047ee7-2bc8-487e-a7f2-8696bd86fd13-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "01047ee7-2bc8-487e-a7f2-8696bd86fd13" (UID: "01047ee7-2bc8-487e-a7f2-8696bd86fd13"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:49:39 crc kubenswrapper[4853]: I1122 07:49:39.871833 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01047ee7-2bc8-487e-a7f2-8696bd86fd13-kube-api-access-vv2cj" (OuterVolumeSpecName: "kube-api-access-vv2cj") pod "01047ee7-2bc8-487e-a7f2-8696bd86fd13" (UID: "01047ee7-2bc8-487e-a7f2-8696bd86fd13"). InnerVolumeSpecName "kube-api-access-vv2cj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:49:39 crc kubenswrapper[4853]: I1122 07:49:39.871223 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01047ee7-2bc8-487e-a7f2-8696bd86fd13-scripts" (OuterVolumeSpecName: "scripts") pod "01047ee7-2bc8-487e-a7f2-8696bd86fd13" (UID: "01047ee7-2bc8-487e-a7f2-8696bd86fd13"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:49:39 crc kubenswrapper[4853]: I1122 07:49:39.943101 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01047ee7-2bc8-487e-a7f2-8696bd86fd13-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "01047ee7-2bc8-487e-a7f2-8696bd86fd13" (UID: "01047ee7-2bc8-487e-a7f2-8696bd86fd13"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:49:39 crc kubenswrapper[4853]: I1122 07:49:39.949032 4853 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/01047ee7-2bc8-487e-a7f2-8696bd86fd13-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:39 crc kubenswrapper[4853]: I1122 07:49:39.949078 4853 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/01047ee7-2bc8-487e-a7f2-8696bd86fd13-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:39 crc kubenswrapper[4853]: I1122 07:49:39.949092 4853 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/01047ee7-2bc8-487e-a7f2-8696bd86fd13-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:39 crc kubenswrapper[4853]: I1122 07:49:39.949106 4853 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/01047ee7-2bc8-487e-a7f2-8696bd86fd13-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:39 crc kubenswrapper[4853]: I1122 07:49:39.949118 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vv2cj\" (UniqueName: \"kubernetes.io/projected/01047ee7-2bc8-487e-a7f2-8696bd86fd13-kube-api-access-vv2cj\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:39 crc kubenswrapper[4853]: I1122 07:49:39.958466 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01047ee7-2bc8-487e-a7f2-8696bd86fd13-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "01047ee7-2bc8-487e-a7f2-8696bd86fd13" (UID: "01047ee7-2bc8-487e-a7f2-8696bd86fd13"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:49:40 crc kubenswrapper[4853]: I1122 07:49:40.029617 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-788f5f9d9b-xptsh" Nov 22 07:49:40 crc kubenswrapper[4853]: I1122 07:49:40.034983 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01047ee7-2bc8-487e-a7f2-8696bd86fd13-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "01047ee7-2bc8-487e-a7f2-8696bd86fd13" (UID: "01047ee7-2bc8-487e-a7f2-8696bd86fd13"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:49:40 crc kubenswrapper[4853]: I1122 07:49:40.063483 4853 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/01047ee7-2bc8-487e-a7f2-8696bd86fd13-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:40 crc kubenswrapper[4853]: I1122 07:49:40.063524 4853 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01047ee7-2bc8-487e-a7f2-8696bd86fd13-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:40 crc kubenswrapper[4853]: I1122 07:49:40.072070 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01047ee7-2bc8-487e-a7f2-8696bd86fd13-config-data" (OuterVolumeSpecName: "config-data") pod "01047ee7-2bc8-487e-a7f2-8696bd86fd13" (UID: "01047ee7-2bc8-487e-a7f2-8696bd86fd13"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:49:40 crc kubenswrapper[4853]: I1122 07:49:40.166731 4853 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01047ee7-2bc8-487e-a7f2-8696bd86fd13-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:40 crc kubenswrapper[4853]: I1122 07:49:40.373050 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-58t9w" event={"ID":"70c5e5cc-15fb-41a4-b40d-8f770bae2182","Type":"ContainerStarted","Data":"9fbf42bd3a81db203f713656e777b72cee7e8c96581e2d874cdc0787325afb9f"} Nov 22 07:49:40 crc kubenswrapper[4853]: I1122 07:49:40.376336 4853 generic.go:334] "Generic (PLEG): container finished" podID="41c890fc-832a-4ab4-ad0f-5f41153efa12" containerID="0a3d507cb8a93880955404c4d57ab7a986df4e07de719fdbad427bf9d98346f6" exitCode=0 Nov 22 07:49:40 crc kubenswrapper[4853]: I1122 07:49:40.376422 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-blcfh" event={"ID":"41c890fc-832a-4ab4-ad0f-5f41153efa12","Type":"ContainerDied","Data":"0a3d507cb8a93880955404c4d57ab7a986df4e07de719fdbad427bf9d98346f6"} Nov 22 07:49:40 crc kubenswrapper[4853]: I1122 07:49:40.380809 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"01047ee7-2bc8-487e-a7f2-8696bd86fd13","Type":"ContainerDied","Data":"b57e6f590b9b761006158a9cd9ea2f5719e9cdf2e156a0e3fcec171bb63e3cfc"} Nov 22 07:49:40 crc kubenswrapper[4853]: I1122 07:49:40.380870 4853 scope.go:117] "RemoveContainer" containerID="4a7c1bfa5901a81f06c4f2963121831d49fee56f624c1c1c363bfa002dcee2bb" Nov 22 07:49:40 crc kubenswrapper[4853]: I1122 07:49:40.381133 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 22 07:49:40 crc kubenswrapper[4853]: I1122 07:49:40.418798 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-58t9w" podStartSLOduration=4.712943186 podStartE2EDuration="11.418743088s" podCreationTimestamp="2025-11-22 07:49:29 +0000 UTC" firstStartedPulling="2025-11-22 07:49:33.195581822 +0000 UTC m=+2372.036204448" lastFinishedPulling="2025-11-22 07:49:39.901381724 +0000 UTC m=+2378.742004350" observedRunningTime="2025-11-22 07:49:40.393188959 +0000 UTC m=+2379.233811595" watchObservedRunningTime="2025-11-22 07:49:40.418743088 +0000 UTC m=+2379.259365714" Nov 22 07:49:40 crc kubenswrapper[4853]: I1122 07:49:40.484434 4853 scope.go:117] "RemoveContainer" containerID="c460888c4d820ddce3ffa21d8afe1821af0c334d4de03a59ebe861143ba7fda5" Nov 22 07:49:40 crc kubenswrapper[4853]: I1122 07:49:40.516247 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 22 07:49:40 crc kubenswrapper[4853]: I1122 07:49:40.569512 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 22 07:49:40 crc kubenswrapper[4853]: I1122 07:49:40.574984 4853 scope.go:117] "RemoveContainer" containerID="6bb6e672f873c915b40264e896e2d0777b8b5d9bce9f067caa9aed1b90fd8d84" Nov 22 07:49:40 crc kubenswrapper[4853]: I1122 07:49:40.598380 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 22 07:49:40 crc kubenswrapper[4853]: E1122 07:49:40.599486 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e4475d3-9059-4761-8a99-ad8e31d01947" containerName="init" Nov 22 07:49:40 crc kubenswrapper[4853]: I1122 07:49:40.599510 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e4475d3-9059-4761-8a99-ad8e31d01947" containerName="init" Nov 22 07:49:40 crc kubenswrapper[4853]: E1122 07:49:40.599534 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01047ee7-2bc8-487e-a7f2-8696bd86fd13" containerName="sg-core" Nov 22 07:49:40 crc kubenswrapper[4853]: I1122 07:49:40.599571 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="01047ee7-2bc8-487e-a7f2-8696bd86fd13" containerName="sg-core" Nov 22 07:49:40 crc kubenswrapper[4853]: E1122 07:49:40.599600 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42ee627d-63e1-4a7f-9da3-aca02dcd4cec" containerName="nova-manage" Nov 22 07:49:40 crc kubenswrapper[4853]: I1122 07:49:40.599609 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="42ee627d-63e1-4a7f-9da3-aca02dcd4cec" containerName="nova-manage" Nov 22 07:49:40 crc kubenswrapper[4853]: E1122 07:49:40.599636 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01047ee7-2bc8-487e-a7f2-8696bd86fd13" containerName="ceilometer-central-agent" Nov 22 07:49:40 crc kubenswrapper[4853]: I1122 07:49:40.599642 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="01047ee7-2bc8-487e-a7f2-8696bd86fd13" containerName="ceilometer-central-agent" Nov 22 07:49:40 crc kubenswrapper[4853]: E1122 07:49:40.599649 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01047ee7-2bc8-487e-a7f2-8696bd86fd13" containerName="ceilometer-notification-agent" Nov 22 07:49:40 crc kubenswrapper[4853]: I1122 07:49:40.599658 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="01047ee7-2bc8-487e-a7f2-8696bd86fd13" containerName="ceilometer-notification-agent" Nov 22 07:49:40 crc kubenswrapper[4853]: E1122 07:49:40.599725 4853 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01047ee7-2bc8-487e-a7f2-8696bd86fd13" containerName="proxy-httpd" Nov 22 07:49:40 crc kubenswrapper[4853]: I1122 07:49:40.599737 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="01047ee7-2bc8-487e-a7f2-8696bd86fd13" containerName="proxy-httpd" Nov 22 07:49:40 crc kubenswrapper[4853]: E1122 07:49:40.599773 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e4475d3-9059-4761-8a99-ad8e31d01947" containerName="dnsmasq-dns" Nov 22 07:49:40 crc kubenswrapper[4853]: I1122 07:49:40.599781 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e4475d3-9059-4761-8a99-ad8e31d01947" containerName="dnsmasq-dns" Nov 22 07:49:40 crc kubenswrapper[4853]: I1122 07:49:40.600083 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="42ee627d-63e1-4a7f-9da3-aca02dcd4cec" containerName="nova-manage" Nov 22 07:49:40 crc kubenswrapper[4853]: I1122 07:49:40.600107 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="01047ee7-2bc8-487e-a7f2-8696bd86fd13" containerName="ceilometer-notification-agent" Nov 22 07:49:40 crc kubenswrapper[4853]: I1122 07:49:40.600118 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e4475d3-9059-4761-8a99-ad8e31d01947" containerName="dnsmasq-dns" Nov 22 07:49:40 crc kubenswrapper[4853]: I1122 07:49:40.600136 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="01047ee7-2bc8-487e-a7f2-8696bd86fd13" containerName="proxy-httpd" Nov 22 07:49:40 crc kubenswrapper[4853]: I1122 07:49:40.600148 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="01047ee7-2bc8-487e-a7f2-8696bd86fd13" containerName="ceilometer-central-agent" Nov 22 07:49:40 crc kubenswrapper[4853]: I1122 07:49:40.600155 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="01047ee7-2bc8-487e-a7f2-8696bd86fd13" containerName="sg-core" Nov 22 07:49:40 crc kubenswrapper[4853]: I1122 07:49:40.604741 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 22 07:49:40 crc kubenswrapper[4853]: I1122 07:49:40.607789 4853 scope.go:117] "RemoveContainer" containerID="85850fb25d0eca8bf7256b7f96e332e4972c97ff52be82de4a7d8e1f6e918c46" Nov 22 07:49:40 crc kubenswrapper[4853]: I1122 07:49:40.610105 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 22 07:49:40 crc kubenswrapper[4853]: I1122 07:49:40.610352 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Nov 22 07:49:40 crc kubenswrapper[4853]: I1122 07:49:40.610352 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 22 07:49:40 crc kubenswrapper[4853]: I1122 07:49:40.617689 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 22 07:49:40 crc kubenswrapper[4853]: I1122 07:49:40.682504 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bfebae54-7a3b-42db-9375-d885e95c124b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"bfebae54-7a3b-42db-9375-d885e95c124b\") " pod="openstack/ceilometer-0" Nov 22 07:49:40 crc kubenswrapper[4853]: I1122 07:49:40.682745 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bfebae54-7a3b-42db-9375-d885e95c124b-log-httpd\") pod \"ceilometer-0\" (UID: \"bfebae54-7a3b-42db-9375-d885e95c124b\") " pod="openstack/ceilometer-0" Nov 22 07:49:40 crc kubenswrapper[4853]: I1122 07:49:40.683085 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bfebae54-7a3b-42db-9375-d885e95c124b-config-data\") pod \"ceilometer-0\" (UID: \"bfebae54-7a3b-42db-9375-d885e95c124b\") " pod="openstack/ceilometer-0" Nov 22 07:49:40 crc kubenswrapper[4853]: I1122 07:49:40.683396 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bfebae54-7a3b-42db-9375-d885e95c124b-scripts\") pod \"ceilometer-0\" (UID: \"bfebae54-7a3b-42db-9375-d885e95c124b\") " pod="openstack/ceilometer-0" Nov 22 07:49:40 crc kubenswrapper[4853]: I1122 07:49:40.683436 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5pgl8\" (UniqueName: \"kubernetes.io/projected/bfebae54-7a3b-42db-9375-d885e95c124b-kube-api-access-5pgl8\") pod \"ceilometer-0\" (UID: \"bfebae54-7a3b-42db-9375-d885e95c124b\") " pod="openstack/ceilometer-0" Nov 22 07:49:40 crc kubenswrapper[4853]: I1122 07:49:40.683723 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bfebae54-7a3b-42db-9375-d885e95c124b-run-httpd\") pod \"ceilometer-0\" (UID: \"bfebae54-7a3b-42db-9375-d885e95c124b\") " pod="openstack/ceilometer-0" Nov 22 07:49:40 crc kubenswrapper[4853]: I1122 07:49:40.683856 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/bfebae54-7a3b-42db-9375-d885e95c124b-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"bfebae54-7a3b-42db-9375-d885e95c124b\") " pod="openstack/ceilometer-0" Nov 22 07:49:40 crc kubenswrapper[4853]: I1122 
07:49:40.683891 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfebae54-7a3b-42db-9375-d885e95c124b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bfebae54-7a3b-42db-9375-d885e95c124b\") " pod="openstack/ceilometer-0" Nov 22 07:49:40 crc kubenswrapper[4853]: I1122 07:49:40.786478 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bfebae54-7a3b-42db-9375-d885e95c124b-scripts\") pod \"ceilometer-0\" (UID: \"bfebae54-7a3b-42db-9375-d885e95c124b\") " pod="openstack/ceilometer-0" Nov 22 07:49:40 crc kubenswrapper[4853]: I1122 07:49:40.786532 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5pgl8\" (UniqueName: \"kubernetes.io/projected/bfebae54-7a3b-42db-9375-d885e95c124b-kube-api-access-5pgl8\") pod \"ceilometer-0\" (UID: \"bfebae54-7a3b-42db-9375-d885e95c124b\") " pod="openstack/ceilometer-0" Nov 22 07:49:40 crc kubenswrapper[4853]: I1122 07:49:40.786618 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bfebae54-7a3b-42db-9375-d885e95c124b-run-httpd\") pod \"ceilometer-0\" (UID: \"bfebae54-7a3b-42db-9375-d885e95c124b\") " pod="openstack/ceilometer-0" Nov 22 07:49:40 crc kubenswrapper[4853]: I1122 07:49:40.786651 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/bfebae54-7a3b-42db-9375-d885e95c124b-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"bfebae54-7a3b-42db-9375-d885e95c124b\") " pod="openstack/ceilometer-0" Nov 22 07:49:40 crc kubenswrapper[4853]: I1122 07:49:40.786673 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfebae54-7a3b-42db-9375-d885e95c124b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bfebae54-7a3b-42db-9375-d885e95c124b\") " pod="openstack/ceilometer-0" Nov 22 07:49:40 crc kubenswrapper[4853]: I1122 07:49:40.786707 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bfebae54-7a3b-42db-9375-d885e95c124b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"bfebae54-7a3b-42db-9375-d885e95c124b\") " pod="openstack/ceilometer-0" Nov 22 07:49:40 crc kubenswrapper[4853]: I1122 07:49:40.787375 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bfebae54-7a3b-42db-9375-d885e95c124b-run-httpd\") pod \"ceilometer-0\" (UID: \"bfebae54-7a3b-42db-9375-d885e95c124b\") " pod="openstack/ceilometer-0" Nov 22 07:49:40 crc kubenswrapper[4853]: I1122 07:49:40.787792 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bfebae54-7a3b-42db-9375-d885e95c124b-log-httpd\") pod \"ceilometer-0\" (UID: \"bfebae54-7a3b-42db-9375-d885e95c124b\") " pod="openstack/ceilometer-0" Nov 22 07:49:40 crc kubenswrapper[4853]: I1122 07:49:40.787844 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bfebae54-7a3b-42db-9375-d885e95c124b-config-data\") pod \"ceilometer-0\" (UID: \"bfebae54-7a3b-42db-9375-d885e95c124b\") " pod="openstack/ceilometer-0" Nov 22 07:49:40 crc kubenswrapper[4853]: 
I1122 07:49:40.788509 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bfebae54-7a3b-42db-9375-d885e95c124b-log-httpd\") pod \"ceilometer-0\" (UID: \"bfebae54-7a3b-42db-9375-d885e95c124b\") " pod="openstack/ceilometer-0" Nov 22 07:49:40 crc kubenswrapper[4853]: I1122 07:49:40.793129 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfebae54-7a3b-42db-9375-d885e95c124b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bfebae54-7a3b-42db-9375-d885e95c124b\") " pod="openstack/ceilometer-0" Nov 22 07:49:40 crc kubenswrapper[4853]: I1122 07:49:40.793893 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bfebae54-7a3b-42db-9375-d885e95c124b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"bfebae54-7a3b-42db-9375-d885e95c124b\") " pod="openstack/ceilometer-0" Nov 22 07:49:40 crc kubenswrapper[4853]: I1122 07:49:40.793978 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bfebae54-7a3b-42db-9375-d885e95c124b-config-data\") pod \"ceilometer-0\" (UID: \"bfebae54-7a3b-42db-9375-d885e95c124b\") " pod="openstack/ceilometer-0" Nov 22 07:49:40 crc kubenswrapper[4853]: I1122 07:49:40.794494 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bfebae54-7a3b-42db-9375-d885e95c124b-scripts\") pod \"ceilometer-0\" (UID: \"bfebae54-7a3b-42db-9375-d885e95c124b\") " pod="openstack/ceilometer-0" Nov 22 07:49:40 crc kubenswrapper[4853]: I1122 07:49:40.794551 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/bfebae54-7a3b-42db-9375-d885e95c124b-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"bfebae54-7a3b-42db-9375-d885e95c124b\") " pod="openstack/ceilometer-0" Nov 22 07:49:40 crc kubenswrapper[4853]: I1122 07:49:40.805806 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5pgl8\" (UniqueName: \"kubernetes.io/projected/bfebae54-7a3b-42db-9375-d885e95c124b-kube-api-access-5pgl8\") pod \"ceilometer-0\" (UID: \"bfebae54-7a3b-42db-9375-d885e95c124b\") " pod="openstack/ceilometer-0" Nov 22 07:49:40 crc kubenswrapper[4853]: I1122 07:49:40.934929 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 22 07:49:41 crc kubenswrapper[4853]: I1122 07:49:41.298872 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 22 07:49:41 crc kubenswrapper[4853]: I1122 07:49:41.299213 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 22 07:49:41 crc kubenswrapper[4853]: I1122 07:49:41.470809 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 22 07:49:41 crc kubenswrapper[4853]: I1122 07:49:41.673456 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Nov 22 07:49:41 crc kubenswrapper[4853]: I1122 07:49:41.764890 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01047ee7-2bc8-487e-a7f2-8696bd86fd13" path="/var/lib/kubelet/pods/01047ee7-2bc8-487e-a7f2-8696bd86fd13/volumes" Nov 22 07:49:41 crc kubenswrapper[4853]: I1122 07:49:41.944644 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 22 07:49:41 crc kubenswrapper[4853]: I1122 07:49:41.944729 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 22 07:49:42 crc kubenswrapper[4853]: I1122 07:49:42.042746 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-blcfh" Nov 22 07:49:42 crc kubenswrapper[4853]: I1122 07:49:42.126018 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9vc57\" (UniqueName: \"kubernetes.io/projected/41c890fc-832a-4ab4-ad0f-5f41153efa12-kube-api-access-9vc57\") pod \"41c890fc-832a-4ab4-ad0f-5f41153efa12\" (UID: \"41c890fc-832a-4ab4-ad0f-5f41153efa12\") " Nov 22 07:49:42 crc kubenswrapper[4853]: I1122 07:49:42.126106 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41c890fc-832a-4ab4-ad0f-5f41153efa12-config-data\") pod \"41c890fc-832a-4ab4-ad0f-5f41153efa12\" (UID: \"41c890fc-832a-4ab4-ad0f-5f41153efa12\") " Nov 22 07:49:42 crc kubenswrapper[4853]: I1122 07:49:42.126416 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/41c890fc-832a-4ab4-ad0f-5f41153efa12-scripts\") pod \"41c890fc-832a-4ab4-ad0f-5f41153efa12\" (UID: \"41c890fc-832a-4ab4-ad0f-5f41153efa12\") " Nov 22 07:49:42 crc kubenswrapper[4853]: I1122 07:49:42.126464 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41c890fc-832a-4ab4-ad0f-5f41153efa12-combined-ca-bundle\") pod \"41c890fc-832a-4ab4-ad0f-5f41153efa12\" (UID: \"41c890fc-832a-4ab4-ad0f-5f41153efa12\") " Nov 22 07:49:42 crc kubenswrapper[4853]: I1122 07:49:42.137368 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41c890fc-832a-4ab4-ad0f-5f41153efa12-kube-api-access-9vc57" (OuterVolumeSpecName: "kube-api-access-9vc57") pod "41c890fc-832a-4ab4-ad0f-5f41153efa12" (UID: "41c890fc-832a-4ab4-ad0f-5f41153efa12"). InnerVolumeSpecName "kube-api-access-9vc57". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:49:42 crc kubenswrapper[4853]: I1122 07:49:42.157450 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41c890fc-832a-4ab4-ad0f-5f41153efa12-scripts" (OuterVolumeSpecName: "scripts") pod "41c890fc-832a-4ab4-ad0f-5f41153efa12" (UID: "41c890fc-832a-4ab4-ad0f-5f41153efa12"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:49:42 crc kubenswrapper[4853]: I1122 07:49:42.177980 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41c890fc-832a-4ab4-ad0f-5f41153efa12-config-data" (OuterVolumeSpecName: "config-data") pod "41c890fc-832a-4ab4-ad0f-5f41153efa12" (UID: "41c890fc-832a-4ab4-ad0f-5f41153efa12"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:49:42 crc kubenswrapper[4853]: I1122 07:49:42.207996 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41c890fc-832a-4ab4-ad0f-5f41153efa12-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "41c890fc-832a-4ab4-ad0f-5f41153efa12" (UID: "41c890fc-832a-4ab4-ad0f-5f41153efa12"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:49:42 crc kubenswrapper[4853]: I1122 07:49:42.231020 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9vc57\" (UniqueName: \"kubernetes.io/projected/41c890fc-832a-4ab4-ad0f-5f41153efa12-kube-api-access-9vc57\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:42 crc kubenswrapper[4853]: I1122 07:49:42.231053 4853 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41c890fc-832a-4ab4-ad0f-5f41153efa12-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:42 crc kubenswrapper[4853]: I1122 07:49:42.231067 4853 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/41c890fc-832a-4ab4-ad0f-5f41153efa12-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:42 crc kubenswrapper[4853]: I1122 07:49:42.231078 4853 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41c890fc-832a-4ab4-ad0f-5f41153efa12-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:42 crc kubenswrapper[4853]: I1122 07:49:42.383067 4853 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="a2cabe90-83e6-41ba-a457-c6a3ca299950" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.246:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 22 07:49:42 crc kubenswrapper[4853]: I1122 07:49:42.383090 4853 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="a2cabe90-83e6-41ba-a457-c6a3ca299950" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.246:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 22 07:49:42 crc kubenswrapper[4853]: I1122 07:49:42.412951 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bfebae54-7a3b-42db-9375-d885e95c124b","Type":"ContainerStarted","Data":"f96dc948cc7abf5766b30c671b2db3a39b1a8f7e0ac95fa25aec1a2dd147b7f2"} Nov 22 07:49:42 crc kubenswrapper[4853]: I1122 07:49:42.415766 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-cell1-conductor-db-sync-blcfh" event={"ID":"41c890fc-832a-4ab4-ad0f-5f41153efa12","Type":"ContainerDied","Data":"b0648f984bb08781c1b09d7c72a20ed5b089d39e0c8ac9a786c35d44edbb8e62"} Nov 22 07:49:42 crc kubenswrapper[4853]: I1122 07:49:42.415801 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b0648f984bb08781c1b09d7c72a20ed5b089d39e0c8ac9a786c35d44edbb8e62" Nov 22 07:49:42 crc kubenswrapper[4853]: I1122 07:49:42.415867 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-blcfh" Nov 22 07:49:42 crc kubenswrapper[4853]: I1122 07:49:42.605607 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 22 07:49:42 crc kubenswrapper[4853]: I1122 07:49:42.605948 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="a2cabe90-83e6-41ba-a457-c6a3ca299950" containerName="nova-api-log" containerID="cri-o://c8df245231d1c98a3fe308e20c0afb8409f6385e17667bfd30c49af17aa5623c" gracePeriod=30 Nov 22 07:49:42 crc kubenswrapper[4853]: I1122 07:49:42.606632 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="a2cabe90-83e6-41ba-a457-c6a3ca299950" containerName="nova-api-api" containerID="cri-o://cde5d1c795a7935e17c16d5aa3febb4d3968badddf4f73695f0ab25691331d10" gracePeriod=30 Nov 22 07:49:42 crc kubenswrapper[4853]: I1122 07:49:42.708700 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 22 07:49:42 crc kubenswrapper[4853]: I1122 07:49:42.709220 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="664e44b4-df62-4cb1-a7c3-4b8d99c92fc8" containerName="nova-metadata-log" containerID="cri-o://e6cd2f6ec7217e64f71c7cc3af43f39ac952576644c4c5b9464de1e5b1424111" gracePeriod=30 Nov 22 07:49:42 crc kubenswrapper[4853]: I1122 07:49:42.709348 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="664e44b4-df62-4cb1-a7c3-4b8d99c92fc8" containerName="nova-metadata-metadata" containerID="cri-o://4c9de0535df26f302f586d1468edbf61c50ef4fe04c314449eba3b5611590359" gracePeriod=30 Nov 22 07:49:42 crc kubenswrapper[4853]: I1122 07:49:42.741028 4853 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="664e44b4-df62-4cb1-a7c3-4b8d99c92fc8" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.248:8775/\": EOF" Nov 22 07:49:42 crc kubenswrapper[4853]: I1122 07:49:42.741082 4853 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="664e44b4-df62-4cb1-a7c3-4b8d99c92fc8" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.248:8775/\": EOF" Nov 22 07:49:43 crc kubenswrapper[4853]: I1122 07:49:43.226519 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 22 07:49:43 crc kubenswrapper[4853]: E1122 07:49:43.227265 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41c890fc-832a-4ab4-ad0f-5f41153efa12" containerName="nova-cell1-conductor-db-sync" Nov 22 07:49:43 crc kubenswrapper[4853]: I1122 07:49:43.227290 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="41c890fc-832a-4ab4-ad0f-5f41153efa12" containerName="nova-cell1-conductor-db-sync" Nov 22 07:49:43 crc kubenswrapper[4853]: I1122 07:49:43.227667 
4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="41c890fc-832a-4ab4-ad0f-5f41153efa12" containerName="nova-cell1-conductor-db-sync" Nov 22 07:49:43 crc kubenswrapper[4853]: I1122 07:49:43.229074 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Nov 22 07:49:43 crc kubenswrapper[4853]: I1122 07:49:43.237288 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Nov 22 07:49:43 crc kubenswrapper[4853]: I1122 07:49:43.262807 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 22 07:49:43 crc kubenswrapper[4853]: I1122 07:49:43.385402 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zhn4s\" (UniqueName: \"kubernetes.io/projected/d1ae71ec-04ce-4d8b-9504-c8d122fce19b-kube-api-access-zhn4s\") pod \"nova-cell1-conductor-0\" (UID: \"d1ae71ec-04ce-4d8b-9504-c8d122fce19b\") " pod="openstack/nova-cell1-conductor-0" Nov 22 07:49:43 crc kubenswrapper[4853]: I1122 07:49:43.385916 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1ae71ec-04ce-4d8b-9504-c8d122fce19b-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"d1ae71ec-04ce-4d8b-9504-c8d122fce19b\") " pod="openstack/nova-cell1-conductor-0" Nov 22 07:49:43 crc kubenswrapper[4853]: I1122 07:49:43.386137 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1ae71ec-04ce-4d8b-9504-c8d122fce19b-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"d1ae71ec-04ce-4d8b-9504-c8d122fce19b\") " pod="openstack/nova-cell1-conductor-0" Nov 22 07:49:43 crc kubenswrapper[4853]: I1122 07:49:43.434850 4853 generic.go:334] "Generic (PLEG): container finished" podID="a2cabe90-83e6-41ba-a457-c6a3ca299950" containerID="c8df245231d1c98a3fe308e20c0afb8409f6385e17667bfd30c49af17aa5623c" exitCode=143 Nov 22 07:49:43 crc kubenswrapper[4853]: I1122 07:49:43.434949 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a2cabe90-83e6-41ba-a457-c6a3ca299950","Type":"ContainerDied","Data":"c8df245231d1c98a3fe308e20c0afb8409f6385e17667bfd30c49af17aa5623c"} Nov 22 07:49:43 crc kubenswrapper[4853]: I1122 07:49:43.437525 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bfebae54-7a3b-42db-9375-d885e95c124b","Type":"ContainerStarted","Data":"bf24d65d111dafc6898de8fcdb1b0927cda1b5536869c8972c8d94476df9f19d"} Nov 22 07:49:43 crc kubenswrapper[4853]: I1122 07:49:43.442815 4853 generic.go:334] "Generic (PLEG): container finished" podID="664e44b4-df62-4cb1-a7c3-4b8d99c92fc8" containerID="e6cd2f6ec7217e64f71c7cc3af43f39ac952576644c4c5b9464de1e5b1424111" exitCode=143 Nov 22 07:49:43 crc kubenswrapper[4853]: I1122 07:49:43.442877 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"664e44b4-df62-4cb1-a7c3-4b8d99c92fc8","Type":"ContainerDied","Data":"e6cd2f6ec7217e64f71c7cc3af43f39ac952576644c4c5b9464de1e5b1424111"} Nov 22 07:49:43 crc kubenswrapper[4853]: I1122 07:49:43.488333 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zhn4s\" (UniqueName: \"kubernetes.io/projected/d1ae71ec-04ce-4d8b-9504-c8d122fce19b-kube-api-access-zhn4s\") pod 
\"nova-cell1-conductor-0\" (UID: \"d1ae71ec-04ce-4d8b-9504-c8d122fce19b\") " pod="openstack/nova-cell1-conductor-0" Nov 22 07:49:43 crc kubenswrapper[4853]: I1122 07:49:43.488712 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1ae71ec-04ce-4d8b-9504-c8d122fce19b-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"d1ae71ec-04ce-4d8b-9504-c8d122fce19b\") " pod="openstack/nova-cell1-conductor-0" Nov 22 07:49:43 crc kubenswrapper[4853]: I1122 07:49:43.488789 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1ae71ec-04ce-4d8b-9504-c8d122fce19b-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"d1ae71ec-04ce-4d8b-9504-c8d122fce19b\") " pod="openstack/nova-cell1-conductor-0" Nov 22 07:49:43 crc kubenswrapper[4853]: I1122 07:49:43.495392 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1ae71ec-04ce-4d8b-9504-c8d122fce19b-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"d1ae71ec-04ce-4d8b-9504-c8d122fce19b\") " pod="openstack/nova-cell1-conductor-0" Nov 22 07:49:43 crc kubenswrapper[4853]: I1122 07:49:43.498006 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1ae71ec-04ce-4d8b-9504-c8d122fce19b-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"d1ae71ec-04ce-4d8b-9504-c8d122fce19b\") " pod="openstack/nova-cell1-conductor-0" Nov 22 07:49:43 crc kubenswrapper[4853]: I1122 07:49:43.511872 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zhn4s\" (UniqueName: \"kubernetes.io/projected/d1ae71ec-04ce-4d8b-9504-c8d122fce19b-kube-api-access-zhn4s\") pod \"nova-cell1-conductor-0\" (UID: \"d1ae71ec-04ce-4d8b-9504-c8d122fce19b\") " pod="openstack/nova-cell1-conductor-0" Nov 22 07:49:43 crc kubenswrapper[4853]: I1122 07:49:43.673315 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Nov 22 07:49:44 crc kubenswrapper[4853]: I1122 07:49:44.063838 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-l7f52"] Nov 22 07:49:44 crc kubenswrapper[4853]: I1122 07:49:44.082194 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-7c78d4ccd7-pvf4q" Nov 22 07:49:44 crc kubenswrapper[4853]: I1122 07:49:44.090087 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-ce61-account-create-qm8lf"] Nov 22 07:49:44 crc kubenswrapper[4853]: I1122 07:49:44.126952 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-q4jsn"] Nov 22 07:49:44 crc kubenswrapper[4853]: I1122 07:49:44.171962 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-l7f52"] Nov 22 07:49:44 crc kubenswrapper[4853]: I1122 07:49:44.207134 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-ce61-account-create-qm8lf"] Nov 22 07:49:44 crc kubenswrapper[4853]: I1122 07:49:44.227994 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-q4jsn"] Nov 22 07:49:44 crc kubenswrapper[4853]: I1122 07:49:44.266293 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-788f5f9d9b-xptsh"] Nov 22 07:49:44 crc kubenswrapper[4853]: I1122 07:49:44.269359 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-788f5f9d9b-xptsh" podUID="7115134e-ff99-44c2-b331-325661bf93a5" containerName="neutron-api" containerID="cri-o://9802cd6c826da0ea11aa1ae79ac99b721e6b1b46faba4d37eab52a33a3957907" gracePeriod=30 Nov 22 07:49:44 crc kubenswrapper[4853]: I1122 07:49:44.269801 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-788f5f9d9b-xptsh" podUID="7115134e-ff99-44c2-b331-325661bf93a5" containerName="neutron-httpd" containerID="cri-o://8e402bba5063452336a420c74fd7026f9c23745dbe2a7f14a8f11f4f18d9b651" gracePeriod=30 Nov 22 07:49:44 crc kubenswrapper[4853]: I1122 07:49:44.471198 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bfebae54-7a3b-42db-9375-d885e95c124b","Type":"ContainerStarted","Data":"c926df9b437e4dbae424c1c7235a7e09d2c81b10315e108a9072d6ae44e863b1"} Nov 22 07:49:44 crc kubenswrapper[4853]: I1122 07:49:44.523742 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-create-c7fbj"] Nov 22 07:49:44 crc kubenswrapper[4853]: I1122 07:49:44.538298 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-create-c7fbj" Nov 22 07:49:44 crc kubenswrapper[4853]: I1122 07:49:44.572108 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-create-c7fbj"] Nov 22 07:49:44 crc kubenswrapper[4853]: I1122 07:49:44.639657 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrn49\" (UniqueName: \"kubernetes.io/projected/d23544dc-a7ee-4c28-8c1f-8d2faeaed66d-kube-api-access-wrn49\") pod \"aodh-db-create-c7fbj\" (UID: \"d23544dc-a7ee-4c28-8c1f-8d2faeaed66d\") " pod="openstack/aodh-db-create-c7fbj" Nov 22 07:49:44 crc kubenswrapper[4853]: I1122 07:49:44.639962 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d23544dc-a7ee-4c28-8c1f-8d2faeaed66d-operator-scripts\") pod \"aodh-db-create-c7fbj\" (UID: \"d23544dc-a7ee-4c28-8c1f-8d2faeaed66d\") " pod="openstack/aodh-db-create-c7fbj" Nov 22 07:49:44 crc kubenswrapper[4853]: I1122 07:49:44.711289 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 22 07:49:44 crc kubenswrapper[4853]: I1122 07:49:44.744206 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wrn49\" (UniqueName: \"kubernetes.io/projected/d23544dc-a7ee-4c28-8c1f-8d2faeaed66d-kube-api-access-wrn49\") pod \"aodh-db-create-c7fbj\" (UID: \"d23544dc-a7ee-4c28-8c1f-8d2faeaed66d\") " pod="openstack/aodh-db-create-c7fbj" Nov 22 07:49:44 crc kubenswrapper[4853]: I1122 07:49:44.744558 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d23544dc-a7ee-4c28-8c1f-8d2faeaed66d-operator-scripts\") pod \"aodh-db-create-c7fbj\" (UID: \"d23544dc-a7ee-4c28-8c1f-8d2faeaed66d\") " pod="openstack/aodh-db-create-c7fbj" Nov 22 07:49:44 crc kubenswrapper[4853]: I1122 07:49:44.745833 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d23544dc-a7ee-4c28-8c1f-8d2faeaed66d-operator-scripts\") pod \"aodh-db-create-c7fbj\" (UID: \"d23544dc-a7ee-4c28-8c1f-8d2faeaed66d\") " pod="openstack/aodh-db-create-c7fbj" Nov 22 07:49:44 crc kubenswrapper[4853]: I1122 07:49:44.801880 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wrn49\" (UniqueName: \"kubernetes.io/projected/d23544dc-a7ee-4c28-8c1f-8d2faeaed66d-kube-api-access-wrn49\") pod \"aodh-db-create-c7fbj\" (UID: \"d23544dc-a7ee-4c28-8c1f-8d2faeaed66d\") " pod="openstack/aodh-db-create-c7fbj" Nov 22 07:49:44 crc kubenswrapper[4853]: I1122 07:49:44.812838 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-4e8a-account-create-jjgdk"] Nov 22 07:49:44 crc kubenswrapper[4853]: I1122 07:49:44.815309 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-4e8a-account-create-jjgdk" Nov 22 07:49:44 crc kubenswrapper[4853]: I1122 07:49:44.821248 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-db-secret" Nov 22 07:49:44 crc kubenswrapper[4853]: I1122 07:49:44.823191 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-4e8a-account-create-jjgdk"] Nov 22 07:49:44 crc kubenswrapper[4853]: I1122 07:49:44.885474 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-create-c7fbj" Nov 22 07:49:44 crc kubenswrapper[4853]: I1122 07:49:44.954778 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c6f5b166-148f-4c68-b444-40babca8ba03-operator-scripts\") pod \"aodh-4e8a-account-create-jjgdk\" (UID: \"c6f5b166-148f-4c68-b444-40babca8ba03\") " pod="openstack/aodh-4e8a-account-create-jjgdk" Nov 22 07:49:44 crc kubenswrapper[4853]: I1122 07:49:44.955002 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bcmvm\" (UniqueName: \"kubernetes.io/projected/c6f5b166-148f-4c68-b444-40babca8ba03-kube-api-access-bcmvm\") pod \"aodh-4e8a-account-create-jjgdk\" (UID: \"c6f5b166-148f-4c68-b444-40babca8ba03\") " pod="openstack/aodh-4e8a-account-create-jjgdk" Nov 22 07:49:45 crc kubenswrapper[4853]: I1122 07:49:45.057659 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c6f5b166-148f-4c68-b444-40babca8ba03-operator-scripts\") pod \"aodh-4e8a-account-create-jjgdk\" (UID: \"c6f5b166-148f-4c68-b444-40babca8ba03\") " pod="openstack/aodh-4e8a-account-create-jjgdk" Nov 22 07:49:45 crc kubenswrapper[4853]: I1122 07:49:45.058929 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bcmvm\" (UniqueName: \"kubernetes.io/projected/c6f5b166-148f-4c68-b444-40babca8ba03-kube-api-access-bcmvm\") pod \"aodh-4e8a-account-create-jjgdk\" (UID: \"c6f5b166-148f-4c68-b444-40babca8ba03\") " pod="openstack/aodh-4e8a-account-create-jjgdk" Nov 22 07:49:45 crc kubenswrapper[4853]: I1122 07:49:45.060504 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c6f5b166-148f-4c68-b444-40babca8ba03-operator-scripts\") pod \"aodh-4e8a-account-create-jjgdk\" (UID: \"c6f5b166-148f-4c68-b444-40babca8ba03\") " pod="openstack/aodh-4e8a-account-create-jjgdk" Nov 22 07:49:45 crc kubenswrapper[4853]: I1122 07:49:45.088833 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bcmvm\" (UniqueName: \"kubernetes.io/projected/c6f5b166-148f-4c68-b444-40babca8ba03-kube-api-access-bcmvm\") pod \"aodh-4e8a-account-create-jjgdk\" (UID: \"c6f5b166-148f-4c68-b444-40babca8ba03\") " pod="openstack/aodh-4e8a-account-create-jjgdk" Nov 22 07:49:45 crc kubenswrapper[4853]: I1122 07:49:45.175263 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-4e8a-account-create-jjgdk" Nov 22 07:49:45 crc kubenswrapper[4853]: I1122 07:49:45.591862 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"d1ae71ec-04ce-4d8b-9504-c8d122fce19b","Type":"ContainerStarted","Data":"c0be6df4655fe16ce45e44b1c8cfd538aaddc58da568a1ec713abc3f7f12049b"} Nov 22 07:49:45 crc kubenswrapper[4853]: I1122 07:49:45.592238 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"d1ae71ec-04ce-4d8b-9504-c8d122fce19b","Type":"ContainerStarted","Data":"6903e1ca81185fbeab641556212c68f3b69854088df5ba1c46927cc183d6a464"} Nov 22 07:49:45 crc kubenswrapper[4853]: I1122 07:49:45.593743 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Nov 22 07:49:45 crc kubenswrapper[4853]: I1122 07:49:45.604395 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bfebae54-7a3b-42db-9375-d885e95c124b","Type":"ContainerStarted","Data":"0ca5bf51a3dde6d13175f040fb48852294dce160cad938dc801d6e702be765f4"} Nov 22 07:49:45 crc kubenswrapper[4853]: I1122 07:49:45.613354 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.6133290909999998 podStartE2EDuration="2.613329091s" podCreationTimestamp="2025-11-22 07:49:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:49:45.612661844 +0000 UTC m=+2384.453284470" watchObservedRunningTime="2025-11-22 07:49:45.613329091 +0000 UTC m=+2384.453951717" Nov 22 07:49:45 crc kubenswrapper[4853]: I1122 07:49:45.613929 4853 generic.go:334] "Generic (PLEG): container finished" podID="7115134e-ff99-44c2-b331-325661bf93a5" containerID="8e402bba5063452336a420c74fd7026f9c23745dbe2a7f14a8f11f4f18d9b651" exitCode=0 Nov 22 07:49:45 crc kubenswrapper[4853]: I1122 07:49:45.613999 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-788f5f9d9b-xptsh" event={"ID":"7115134e-ff99-44c2-b331-325661bf93a5","Type":"ContainerDied","Data":"8e402bba5063452336a420c74fd7026f9c23745dbe2a7f14a8f11f4f18d9b651"} Nov 22 07:49:45 crc kubenswrapper[4853]: I1122 07:49:45.710942 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-create-c7fbj"] Nov 22 07:49:45 crc kubenswrapper[4853]: I1122 07:49:45.816169 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="015c5e49-7907-4c6c-a3b3-7416c2bdefad" path="/var/lib/kubelet/pods/015c5e49-7907-4c6c-a3b3-7416c2bdefad/volumes" Nov 22 07:49:45 crc kubenswrapper[4853]: I1122 07:49:45.817025 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3c8b1ce5-5a35-4791-bc46-6c347d8bd3bd" path="/var/lib/kubelet/pods/3c8b1ce5-5a35-4791-bc46-6c347d8bd3bd/volumes" Nov 22 07:49:45 crc kubenswrapper[4853]: I1122 07:49:45.818017 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="67c6486e-03d6-4215-831c-c87eac890517" path="/var/lib/kubelet/pods/67c6486e-03d6-4215-831c-c87eac890517/volumes" Nov 22 07:49:46 crc kubenswrapper[4853]: I1122 07:49:46.019897 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-4e8a-account-create-jjgdk"] Nov 22 07:49:46 crc kubenswrapper[4853]: E1122 07:49:46.615659 4853 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd23544dc_a7ee_4c28_8c1f_8d2faeaed66d.slice/crio-968c3a1f5258b8690034ea040c86d74876232989b4ae9da48f84786963210af2.scope\": RecentStats: unable to find data in memory cache]" Nov 22 07:49:46 crc kubenswrapper[4853]: I1122 07:49:46.651249 4853 generic.go:334] "Generic (PLEG): container finished" podID="d23544dc-a7ee-4c28-8c1f-8d2faeaed66d" containerID="968c3a1f5258b8690034ea040c86d74876232989b4ae9da48f84786963210af2" exitCode=0 Nov 22 07:49:46 crc kubenswrapper[4853]: I1122 07:49:46.651405 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-c7fbj" event={"ID":"d23544dc-a7ee-4c28-8c1f-8d2faeaed66d","Type":"ContainerDied","Data":"968c3a1f5258b8690034ea040c86d74876232989b4ae9da48f84786963210af2"} Nov 22 07:49:46 crc kubenswrapper[4853]: I1122 07:49:46.651440 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-c7fbj" event={"ID":"d23544dc-a7ee-4c28-8c1f-8d2faeaed66d","Type":"ContainerStarted","Data":"ccfaa8eaa19de6542503a6ec7dabbd83e776b3ad420f68a4225b6c96d1f8f533"} Nov 22 07:49:46 crc kubenswrapper[4853]: I1122 07:49:46.663705 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-4e8a-account-create-jjgdk" event={"ID":"c6f5b166-148f-4c68-b444-40babca8ba03","Type":"ContainerStarted","Data":"1c79bd2ec6606952eab447e586e1ed425169fd030fa8dd3ae56c467638c0a2d0"} Nov 22 07:49:46 crc kubenswrapper[4853]: I1122 07:49:46.663771 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-4e8a-account-create-jjgdk" event={"ID":"c6f5b166-148f-4c68-b444-40babca8ba03","Type":"ContainerStarted","Data":"424305b40f672a800e4aa53b3288899255472da4f88198323579ac491937000a"} Nov 22 07:49:46 crc kubenswrapper[4853]: I1122 07:49:46.707081 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-4e8a-account-create-jjgdk" podStartSLOduration=2.70704628 podStartE2EDuration="2.70704628s" podCreationTimestamp="2025-11-22 07:49:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:49:46.687841302 +0000 UTC m=+2385.528463938" watchObservedRunningTime="2025-11-22 07:49:46.70704628 +0000 UTC m=+2385.547668906" Nov 22 07:49:47 crc kubenswrapper[4853]: I1122 07:49:47.675925 4853 generic.go:334] "Generic (PLEG): container finished" podID="c6f5b166-148f-4c68-b444-40babca8ba03" containerID="1c79bd2ec6606952eab447e586e1ed425169fd030fa8dd3ae56c467638c0a2d0" exitCode=0 Nov 22 07:49:47 crc kubenswrapper[4853]: I1122 07:49:47.676043 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-4e8a-account-create-jjgdk" event={"ID":"c6f5b166-148f-4c68-b444-40babca8ba03","Type":"ContainerDied","Data":"1c79bd2ec6606952eab447e586e1ed425169fd030fa8dd3ae56c467638c0a2d0"} Nov 22 07:49:47 crc kubenswrapper[4853]: I1122 07:49:47.681153 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bfebae54-7a3b-42db-9375-d885e95c124b","Type":"ContainerStarted","Data":"6fcf09fed7d170f91f448a5569c2217398d8426fb7673de287fbc1c865bbc0c6"} Nov 22 07:49:47 crc kubenswrapper[4853]: I1122 07:49:47.725297 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.927745937 podStartE2EDuration="7.725259072s" podCreationTimestamp="2025-11-22 07:49:40 +0000 UTC" firstStartedPulling="2025-11-22 07:49:41.492873088 +0000 UTC m=+2380.333495714" 
lastFinishedPulling="2025-11-22 07:49:46.290386223 +0000 UTC m=+2385.131008849" observedRunningTime="2025-11-22 07:49:47.717729829 +0000 UTC m=+2386.558352485" watchObservedRunningTime="2025-11-22 07:49:47.725259072 +0000 UTC m=+2386.565881748" Nov 22 07:49:48 crc kubenswrapper[4853]: I1122 07:49:48.259558 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-c7fbj" Nov 22 07:49:48 crc kubenswrapper[4853]: I1122 07:49:48.409465 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d23544dc-a7ee-4c28-8c1f-8d2faeaed66d-operator-scripts\") pod \"d23544dc-a7ee-4c28-8c1f-8d2faeaed66d\" (UID: \"d23544dc-a7ee-4c28-8c1f-8d2faeaed66d\") " Nov 22 07:49:48 crc kubenswrapper[4853]: I1122 07:49:48.410093 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wrn49\" (UniqueName: \"kubernetes.io/projected/d23544dc-a7ee-4c28-8c1f-8d2faeaed66d-kube-api-access-wrn49\") pod \"d23544dc-a7ee-4c28-8c1f-8d2faeaed66d\" (UID: \"d23544dc-a7ee-4c28-8c1f-8d2faeaed66d\") " Nov 22 07:49:48 crc kubenswrapper[4853]: I1122 07:49:48.410274 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d23544dc-a7ee-4c28-8c1f-8d2faeaed66d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d23544dc-a7ee-4c28-8c1f-8d2faeaed66d" (UID: "d23544dc-a7ee-4c28-8c1f-8d2faeaed66d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:49:48 crc kubenswrapper[4853]: I1122 07:49:48.411085 4853 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d23544dc-a7ee-4c28-8c1f-8d2faeaed66d-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:48 crc kubenswrapper[4853]: I1122 07:49:48.421689 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d23544dc-a7ee-4c28-8c1f-8d2faeaed66d-kube-api-access-wrn49" (OuterVolumeSpecName: "kube-api-access-wrn49") pod "d23544dc-a7ee-4c28-8c1f-8d2faeaed66d" (UID: "d23544dc-a7ee-4c28-8c1f-8d2faeaed66d"). InnerVolumeSpecName "kube-api-access-wrn49". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:49:48 crc kubenswrapper[4853]: I1122 07:49:48.513852 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wrn49\" (UniqueName: \"kubernetes.io/projected/d23544dc-a7ee-4c28-8c1f-8d2faeaed66d-kube-api-access-wrn49\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:48 crc kubenswrapper[4853]: I1122 07:49:48.697384 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-c7fbj" event={"ID":"d23544dc-a7ee-4c28-8c1f-8d2faeaed66d","Type":"ContainerDied","Data":"ccfaa8eaa19de6542503a6ec7dabbd83e776b3ad420f68a4225b6c96d1f8f533"} Nov 22 07:49:48 crc kubenswrapper[4853]: I1122 07:49:48.697440 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ccfaa8eaa19de6542503a6ec7dabbd83e776b3ad420f68a4225b6c96d1f8f533" Nov 22 07:49:48 crc kubenswrapper[4853]: I1122 07:49:48.697438 4853 util.go:48] "No ready sandbox for pod can be found. 
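[Annotation] The pod_startup_latency_tracker records above report podStartSLOduration as the gap between podCreationTimestamp and the observed running time, and use the Go zero time ("0001-01-01 00:00:00 +0000 UTC") for firstStartedPulling/lastFinishedPulling when no image pull contributed (images already cached on the node). Checking the nova-cell1-conductor-0 numbers from the log:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Timestamps taken from the log records above.
	created, _ := time.Parse(time.RFC3339, "2025-11-22T07:49:43Z")
	observed, _ := time.Parse(time.RFC3339Nano, "2025-11-22T07:49:45.613329091Z")
	fmt.Println(observed.Sub(created)) // 2.613329091s, matching podStartE2EDuration

	var firstStartedPulling time.Time // zero value, as logged
	fmt.Println("image pull contributed:", !firstStartedPulling.IsZero())
}
```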
Need to start a new one" pod="openstack/aodh-db-create-c7fbj" Nov 22 07:49:48 crc kubenswrapper[4853]: I1122 07:49:48.701231 4853 generic.go:334] "Generic (PLEG): container finished" podID="7115134e-ff99-44c2-b331-325661bf93a5" containerID="9802cd6c826da0ea11aa1ae79ac99b721e6b1b46faba4d37eab52a33a3957907" exitCode=0 Nov 22 07:49:48 crc kubenswrapper[4853]: I1122 07:49:48.701586 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-788f5f9d9b-xptsh" event={"ID":"7115134e-ff99-44c2-b331-325661bf93a5","Type":"ContainerDied","Data":"9802cd6c826da0ea11aa1ae79ac99b721e6b1b46faba4d37eab52a33a3957907"} Nov 22 07:49:48 crc kubenswrapper[4853]: I1122 07:49:48.701916 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 22 07:49:49 crc kubenswrapper[4853]: I1122 07:49:49.205451 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-4e8a-account-create-jjgdk" Nov 22 07:49:49 crc kubenswrapper[4853]: I1122 07:49:49.347581 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bcmvm\" (UniqueName: \"kubernetes.io/projected/c6f5b166-148f-4c68-b444-40babca8ba03-kube-api-access-bcmvm\") pod \"c6f5b166-148f-4c68-b444-40babca8ba03\" (UID: \"c6f5b166-148f-4c68-b444-40babca8ba03\") " Nov 22 07:49:49 crc kubenswrapper[4853]: I1122 07:49:49.348236 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c6f5b166-148f-4c68-b444-40babca8ba03-operator-scripts\") pod \"c6f5b166-148f-4c68-b444-40babca8ba03\" (UID: \"c6f5b166-148f-4c68-b444-40babca8ba03\") " Nov 22 07:49:49 crc kubenswrapper[4853]: I1122 07:49:49.349972 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c6f5b166-148f-4c68-b444-40babca8ba03-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c6f5b166-148f-4c68-b444-40babca8ba03" (UID: "c6f5b166-148f-4c68-b444-40babca8ba03"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:49:49 crc kubenswrapper[4853]: I1122 07:49:49.359021 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6f5b166-148f-4c68-b444-40babca8ba03-kube-api-access-bcmvm" (OuterVolumeSpecName: "kube-api-access-bcmvm") pod "c6f5b166-148f-4c68-b444-40babca8ba03" (UID: "c6f5b166-148f-4c68-b444-40babca8ba03"). InnerVolumeSpecName "kube-api-access-bcmvm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:49:49 crc kubenswrapper[4853]: I1122 07:49:49.452455 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bcmvm\" (UniqueName: \"kubernetes.io/projected/c6f5b166-148f-4c68-b444-40babca8ba03-kube-api-access-bcmvm\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:49 crc kubenswrapper[4853]: I1122 07:49:49.452501 4853 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c6f5b166-148f-4c68-b444-40babca8ba03-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:49 crc kubenswrapper[4853]: I1122 07:49:49.718519 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-788f5f9d9b-xptsh" event={"ID":"7115134e-ff99-44c2-b331-325661bf93a5","Type":"ContainerDied","Data":"ea0c29c81af604c2806aa540e24fd88cc29515ac93f686dfc82ad0a3d28e9772"} Nov 22 07:49:49 crc kubenswrapper[4853]: I1122 07:49:49.718889 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ea0c29c81af604c2806aa540e24fd88cc29515ac93f686dfc82ad0a3d28e9772" Nov 22 07:49:49 crc kubenswrapper[4853]: I1122 07:49:49.718562 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-788f5f9d9b-xptsh" Nov 22 07:49:49 crc kubenswrapper[4853]: I1122 07:49:49.720915 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-4e8a-account-create-jjgdk" event={"ID":"c6f5b166-148f-4c68-b444-40babca8ba03","Type":"ContainerDied","Data":"424305b40f672a800e4aa53b3288899255472da4f88198323579ac491937000a"} Nov 22 07:49:49 crc kubenswrapper[4853]: I1122 07:49:49.720958 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="424305b40f672a800e4aa53b3288899255472da4f88198323579ac491937000a" Nov 22 07:49:49 crc kubenswrapper[4853]: I1122 07:49:49.720990 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-4e8a-account-create-jjgdk" Nov 22 07:49:49 crc kubenswrapper[4853]: I1122 07:49:49.862336 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7115134e-ff99-44c2-b331-325661bf93a5-ovndb-tls-certs\") pod \"7115134e-ff99-44c2-b331-325661bf93a5\" (UID: \"7115134e-ff99-44c2-b331-325661bf93a5\") " Nov 22 07:49:49 crc kubenswrapper[4853]: I1122 07:49:49.862515 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/7115134e-ff99-44c2-b331-325661bf93a5-httpd-config\") pod \"7115134e-ff99-44c2-b331-325661bf93a5\" (UID: \"7115134e-ff99-44c2-b331-325661bf93a5\") " Nov 22 07:49:49 crc kubenswrapper[4853]: I1122 07:49:49.862599 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9snwq\" (UniqueName: \"kubernetes.io/projected/7115134e-ff99-44c2-b331-325661bf93a5-kube-api-access-9snwq\") pod \"7115134e-ff99-44c2-b331-325661bf93a5\" (UID: \"7115134e-ff99-44c2-b331-325661bf93a5\") " Nov 22 07:49:49 crc kubenswrapper[4853]: I1122 07:49:49.862701 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/7115134e-ff99-44c2-b331-325661bf93a5-config\") pod \"7115134e-ff99-44c2-b331-325661bf93a5\" (UID: \"7115134e-ff99-44c2-b331-325661bf93a5\") " Nov 22 07:49:49 crc kubenswrapper[4853]: I1122 07:49:49.862924 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7115134e-ff99-44c2-b331-325661bf93a5-combined-ca-bundle\") pod \"7115134e-ff99-44c2-b331-325661bf93a5\" (UID: \"7115134e-ff99-44c2-b331-325661bf93a5\") " Nov 22 07:49:49 crc kubenswrapper[4853]: I1122 07:49:49.872184 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7115134e-ff99-44c2-b331-325661bf93a5-kube-api-access-9snwq" (OuterVolumeSpecName: "kube-api-access-9snwq") pod "7115134e-ff99-44c2-b331-325661bf93a5" (UID: "7115134e-ff99-44c2-b331-325661bf93a5"). InnerVolumeSpecName "kube-api-access-9snwq". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:49:49 crc kubenswrapper[4853]: I1122 07:49:49.889990 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7115134e-ff99-44c2-b331-325661bf93a5-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "7115134e-ff99-44c2-b331-325661bf93a5" (UID: "7115134e-ff99-44c2-b331-325661bf93a5"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:49:49 crc kubenswrapper[4853]: I1122 07:49:49.932522 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7115134e-ff99-44c2-b331-325661bf93a5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7115134e-ff99-44c2-b331-325661bf93a5" (UID: "7115134e-ff99-44c2-b331-325661bf93a5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:49:49 crc kubenswrapper[4853]: I1122 07:49:49.946307 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7115134e-ff99-44c2-b331-325661bf93a5-config" (OuterVolumeSpecName: "config") pod "7115134e-ff99-44c2-b331-325661bf93a5" (UID: "7115134e-ff99-44c2-b331-325661bf93a5"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:49:49 crc kubenswrapper[4853]: I1122 07:49:49.969446 4853 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/7115134e-ff99-44c2-b331-325661bf93a5-httpd-config\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:49 crc kubenswrapper[4853]: I1122 07:49:49.969494 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9snwq\" (UniqueName: \"kubernetes.io/projected/7115134e-ff99-44c2-b331-325661bf93a5-kube-api-access-9snwq\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:49 crc kubenswrapper[4853]: I1122 07:49:49.969508 4853 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/7115134e-ff99-44c2-b331-325661bf93a5-config\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:49 crc kubenswrapper[4853]: I1122 07:49:49.969516 4853 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7115134e-ff99-44c2-b331-325661bf93a5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:49 crc kubenswrapper[4853]: I1122 07:49:49.984489 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7115134e-ff99-44c2-b331-325661bf93a5-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "7115134e-ff99-44c2-b331-325661bf93a5" (UID: "7115134e-ff99-44c2-b331-325661bf93a5"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:49:50 crc kubenswrapper[4853]: I1122 07:49:50.071970 4853 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7115134e-ff99-44c2-b331-325661bf93a5-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:50 crc kubenswrapper[4853]: I1122 07:49:50.293200 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-58t9w" Nov 22 07:49:50 crc kubenswrapper[4853]: I1122 07:49:50.293276 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-58t9w" Nov 22 07:49:50 crc kubenswrapper[4853]: I1122 07:49:50.383994 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-58t9w" Nov 22 07:49:50 crc kubenswrapper[4853]: I1122 07:49:50.681170 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 22 07:49:50 crc kubenswrapper[4853]: I1122 07:49:50.733829 4853 generic.go:334] "Generic (PLEG): container finished" podID="664e44b4-df62-4cb1-a7c3-4b8d99c92fc8" containerID="4c9de0535df26f302f586d1468edbf61c50ef4fe04c314449eba3b5611590359" exitCode=0 Nov 22 07:49:50 crc kubenswrapper[4853]: I1122 07:49:50.735892 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 22 07:49:50 crc kubenswrapper[4853]: I1122 07:49:50.735911 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"664e44b4-df62-4cb1-a7c3-4b8d99c92fc8","Type":"ContainerDied","Data":"4c9de0535df26f302f586d1468edbf61c50ef4fe04c314449eba3b5611590359"} Nov 22 07:49:50 crc kubenswrapper[4853]: I1122 07:49:50.736082 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"664e44b4-df62-4cb1-a7c3-4b8d99c92fc8","Type":"ContainerDied","Data":"ad13b7854de5a67033118efd33364b5adc9cebe15eabda74f003a8e7a8841630"} Nov 22 07:49:50 crc kubenswrapper[4853]: I1122 07:49:50.736117 4853 scope.go:117] "RemoveContainer" containerID="4c9de0535df26f302f586d1468edbf61c50ef4fe04c314449eba3b5611590359" Nov 22 07:49:50 crc kubenswrapper[4853]: I1122 07:49:50.736273 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-788f5f9d9b-xptsh" Nov 22 07:49:50 crc kubenswrapper[4853]: I1122 07:49:50.786648 4853 scope.go:117] "RemoveContainer" containerID="e6cd2f6ec7217e64f71c7cc3af43f39ac952576644c4c5b9464de1e5b1424111" Nov 22 07:49:50 crc kubenswrapper[4853]: I1122 07:49:50.791483 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/664e44b4-df62-4cb1-a7c3-4b8d99c92fc8-combined-ca-bundle\") pod \"664e44b4-df62-4cb1-a7c3-4b8d99c92fc8\" (UID: \"664e44b4-df62-4cb1-a7c3-4b8d99c92fc8\") " Nov 22 07:49:50 crc kubenswrapper[4853]: I1122 07:49:50.791642 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/664e44b4-df62-4cb1-a7c3-4b8d99c92fc8-nova-metadata-tls-certs\") pod \"664e44b4-df62-4cb1-a7c3-4b8d99c92fc8\" (UID: \"664e44b4-df62-4cb1-a7c3-4b8d99c92fc8\") " Nov 22 07:49:50 crc kubenswrapper[4853]: I1122 07:49:50.791672 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/664e44b4-df62-4cb1-a7c3-4b8d99c92fc8-config-data\") pod \"664e44b4-df62-4cb1-a7c3-4b8d99c92fc8\" (UID: \"664e44b4-df62-4cb1-a7c3-4b8d99c92fc8\") " Nov 22 07:49:50 crc kubenswrapper[4853]: I1122 07:49:50.791731 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/664e44b4-df62-4cb1-a7c3-4b8d99c92fc8-logs\") pod \"664e44b4-df62-4cb1-a7c3-4b8d99c92fc8\" (UID: \"664e44b4-df62-4cb1-a7c3-4b8d99c92fc8\") " Nov 22 07:49:50 crc kubenswrapper[4853]: I1122 07:49:50.791974 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9t2vr\" (UniqueName: \"kubernetes.io/projected/664e44b4-df62-4cb1-a7c3-4b8d99c92fc8-kube-api-access-9t2vr\") pod \"664e44b4-df62-4cb1-a7c3-4b8d99c92fc8\" (UID: \"664e44b4-df62-4cb1-a7c3-4b8d99c92fc8\") " Nov 22 07:49:50 crc kubenswrapper[4853]: I1122 07:49:50.794165 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/664e44b4-df62-4cb1-a7c3-4b8d99c92fc8-logs" (OuterVolumeSpecName: "logs") pod "664e44b4-df62-4cb1-a7c3-4b8d99c92fc8" (UID: "664e44b4-df62-4cb1-a7c3-4b8d99c92fc8"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:49:50 crc kubenswrapper[4853]: I1122 07:49:50.811665 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/664e44b4-df62-4cb1-a7c3-4b8d99c92fc8-kube-api-access-9t2vr" (OuterVolumeSpecName: "kube-api-access-9t2vr") pod "664e44b4-df62-4cb1-a7c3-4b8d99c92fc8" (UID: "664e44b4-df62-4cb1-a7c3-4b8d99c92fc8"). InnerVolumeSpecName "kube-api-access-9t2vr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:49:50 crc kubenswrapper[4853]: I1122 07:49:50.828093 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-788f5f9d9b-xptsh"] Nov 22 07:49:50 crc kubenswrapper[4853]: I1122 07:49:50.840419 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-58t9w" Nov 22 07:49:50 crc kubenswrapper[4853]: I1122 07:49:50.846359 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-788f5f9d9b-xptsh"] Nov 22 07:49:50 crc kubenswrapper[4853]: I1122 07:49:50.898790 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/664e44b4-df62-4cb1-a7c3-4b8d99c92fc8-config-data" (OuterVolumeSpecName: "config-data") pod "664e44b4-df62-4cb1-a7c3-4b8d99c92fc8" (UID: "664e44b4-df62-4cb1-a7c3-4b8d99c92fc8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:49:50 crc kubenswrapper[4853]: I1122 07:49:50.900521 4853 scope.go:117] "RemoveContainer" containerID="4c9de0535df26f302f586d1468edbf61c50ef4fe04c314449eba3b5611590359" Nov 22 07:49:50 crc kubenswrapper[4853]: E1122 07:49:50.901374 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4c9de0535df26f302f586d1468edbf61c50ef4fe04c314449eba3b5611590359\": container with ID starting with 4c9de0535df26f302f586d1468edbf61c50ef4fe04c314449eba3b5611590359 not found: ID does not exist" containerID="4c9de0535df26f302f586d1468edbf61c50ef4fe04c314449eba3b5611590359" Nov 22 07:49:50 crc kubenswrapper[4853]: I1122 07:49:50.901443 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c9de0535df26f302f586d1468edbf61c50ef4fe04c314449eba3b5611590359"} err="failed to get container status \"4c9de0535df26f302f586d1468edbf61c50ef4fe04c314449eba3b5611590359\": rpc error: code = NotFound desc = could not find container \"4c9de0535df26f302f586d1468edbf61c50ef4fe04c314449eba3b5611590359\": container with ID starting with 4c9de0535df26f302f586d1468edbf61c50ef4fe04c314449eba3b5611590359 not found: ID does not exist" Nov 22 07:49:50 crc kubenswrapper[4853]: I1122 07:49:50.901482 4853 scope.go:117] "RemoveContainer" containerID="e6cd2f6ec7217e64f71c7cc3af43f39ac952576644c4c5b9464de1e5b1424111" Nov 22 07:49:50 crc kubenswrapper[4853]: E1122 07:49:50.902036 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e6cd2f6ec7217e64f71c7cc3af43f39ac952576644c4c5b9464de1e5b1424111\": container with ID starting with e6cd2f6ec7217e64f71c7cc3af43f39ac952576644c4c5b9464de1e5b1424111 not found: ID does not exist" containerID="e6cd2f6ec7217e64f71c7cc3af43f39ac952576644c4c5b9464de1e5b1424111" Nov 22 07:49:50 crc kubenswrapper[4853]: I1122 07:49:50.902097 4853 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"e6cd2f6ec7217e64f71c7cc3af43f39ac952576644c4c5b9464de1e5b1424111"} err="failed to get container status \"e6cd2f6ec7217e64f71c7cc3af43f39ac952576644c4c5b9464de1e5b1424111\": rpc error: code = NotFound desc = could not find container \"e6cd2f6ec7217e64f71c7cc3af43f39ac952576644c4c5b9464de1e5b1424111\": container with ID starting with e6cd2f6ec7217e64f71c7cc3af43f39ac952576644c4c5b9464de1e5b1424111 not found: ID does not exist" Nov 22 07:49:50 crc kubenswrapper[4853]: I1122 07:49:50.914380 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/664e44b4-df62-4cb1-a7c3-4b8d99c92fc8-config-data\") pod \"664e44b4-df62-4cb1-a7c3-4b8d99c92fc8\" (UID: \"664e44b4-df62-4cb1-a7c3-4b8d99c92fc8\") " Nov 22 07:49:50 crc kubenswrapper[4853]: I1122 07:49:50.920223 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/664e44b4-df62-4cb1-a7c3-4b8d99c92fc8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "664e44b4-df62-4cb1-a7c3-4b8d99c92fc8" (UID: "664e44b4-df62-4cb1-a7c3-4b8d99c92fc8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:49:50 crc kubenswrapper[4853]: W1122 07:49:50.925479 4853 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/664e44b4-df62-4cb1-a7c3-4b8d99c92fc8/volumes/kubernetes.io~secret/config-data Nov 22 07:49:50 crc kubenswrapper[4853]: I1122 07:49:50.925532 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/664e44b4-df62-4cb1-a7c3-4b8d99c92fc8-config-data" (OuterVolumeSpecName: "config-data") pod "664e44b4-df62-4cb1-a7c3-4b8d99c92fc8" (UID: "664e44b4-df62-4cb1-a7c3-4b8d99c92fc8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:49:50 crc kubenswrapper[4853]: I1122 07:49:50.926694 4853 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/664e44b4-df62-4cb1-a7c3-4b8d99c92fc8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:50 crc kubenswrapper[4853]: I1122 07:49:50.926918 4853 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/664e44b4-df62-4cb1-a7c3-4b8d99c92fc8-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:50 crc kubenswrapper[4853]: I1122 07:49:50.926949 4853 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/664e44b4-df62-4cb1-a7c3-4b8d99c92fc8-logs\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:50 crc kubenswrapper[4853]: I1122 07:49:50.926972 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9t2vr\" (UniqueName: \"kubernetes.io/projected/664e44b4-df62-4cb1-a7c3-4b8d99c92fc8-kube-api-access-9t2vr\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:50 crc kubenswrapper[4853]: I1122 07:49:50.957117 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-58t9w"] Nov 22 07:49:50 crc kubenswrapper[4853]: I1122 07:49:50.972532 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/664e44b4-df62-4cb1-a7c3-4b8d99c92fc8-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "664e44b4-df62-4cb1-a7c3-4b8d99c92fc8" (UID: "664e44b4-df62-4cb1-a7c3-4b8d99c92fc8"). 
InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:49:51 crc kubenswrapper[4853]: I1122 07:49:51.029540 4853 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/664e44b4-df62-4cb1-a7c3-4b8d99c92fc8-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:51 crc kubenswrapper[4853]: I1122 07:49:51.090149 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 22 07:49:51 crc kubenswrapper[4853]: I1122 07:49:51.103023 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Nov 22 07:49:51 crc kubenswrapper[4853]: I1122 07:49:51.124790 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 22 07:49:51 crc kubenswrapper[4853]: E1122 07:49:51.125558 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6f5b166-148f-4c68-b444-40babca8ba03" containerName="mariadb-account-create" Nov 22 07:49:51 crc kubenswrapper[4853]: I1122 07:49:51.125589 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6f5b166-148f-4c68-b444-40babca8ba03" containerName="mariadb-account-create" Nov 22 07:49:51 crc kubenswrapper[4853]: E1122 07:49:51.125614 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7115134e-ff99-44c2-b331-325661bf93a5" containerName="neutron-api" Nov 22 07:49:51 crc kubenswrapper[4853]: I1122 07:49:51.125626 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="7115134e-ff99-44c2-b331-325661bf93a5" containerName="neutron-api" Nov 22 07:49:51 crc kubenswrapper[4853]: E1122 07:49:51.125651 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d23544dc-a7ee-4c28-8c1f-8d2faeaed66d" containerName="mariadb-database-create" Nov 22 07:49:51 crc kubenswrapper[4853]: I1122 07:49:51.125664 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="d23544dc-a7ee-4c28-8c1f-8d2faeaed66d" containerName="mariadb-database-create" Nov 22 07:49:51 crc kubenswrapper[4853]: E1122 07:49:51.125690 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="664e44b4-df62-4cb1-a7c3-4b8d99c92fc8" containerName="nova-metadata-log" Nov 22 07:49:51 crc kubenswrapper[4853]: I1122 07:49:51.125700 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="664e44b4-df62-4cb1-a7c3-4b8d99c92fc8" containerName="nova-metadata-log" Nov 22 07:49:51 crc kubenswrapper[4853]: E1122 07:49:51.125720 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="664e44b4-df62-4cb1-a7c3-4b8d99c92fc8" containerName="nova-metadata-metadata" Nov 22 07:49:51 crc kubenswrapper[4853]: I1122 07:49:51.125729 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="664e44b4-df62-4cb1-a7c3-4b8d99c92fc8" containerName="nova-metadata-metadata" Nov 22 07:49:51 crc kubenswrapper[4853]: E1122 07:49:51.125789 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7115134e-ff99-44c2-b331-325661bf93a5" containerName="neutron-httpd" Nov 22 07:49:51 crc kubenswrapper[4853]: I1122 07:49:51.125798 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="7115134e-ff99-44c2-b331-325661bf93a5" containerName="neutron-httpd" Nov 22 07:49:51 crc kubenswrapper[4853]: I1122 07:49:51.126088 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6f5b166-148f-4c68-b444-40babca8ba03" containerName="mariadb-account-create" Nov 22 07:49:51 crc kubenswrapper[4853]: I1122 07:49:51.126118 4853 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="664e44b4-df62-4cb1-a7c3-4b8d99c92fc8" containerName="nova-metadata-metadata" Nov 22 07:49:51 crc kubenswrapper[4853]: I1122 07:49:51.126138 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="d23544dc-a7ee-4c28-8c1f-8d2faeaed66d" containerName="mariadb-database-create" Nov 22 07:49:51 crc kubenswrapper[4853]: I1122 07:49:51.126151 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="664e44b4-df62-4cb1-a7c3-4b8d99c92fc8" containerName="nova-metadata-log" Nov 22 07:49:51 crc kubenswrapper[4853]: I1122 07:49:51.126163 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="7115134e-ff99-44c2-b331-325661bf93a5" containerName="neutron-api" Nov 22 07:49:51 crc kubenswrapper[4853]: I1122 07:49:51.126177 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="7115134e-ff99-44c2-b331-325661bf93a5" containerName="neutron-httpd" Nov 22 07:49:51 crc kubenswrapper[4853]: I1122 07:49:51.128171 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 22 07:49:51 crc kubenswrapper[4853]: I1122 07:49:51.133780 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Nov 22 07:49:51 crc kubenswrapper[4853]: I1122 07:49:51.134314 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 22 07:49:51 crc kubenswrapper[4853]: I1122 07:49:51.150568 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 22 07:49:51 crc kubenswrapper[4853]: I1122 07:49:51.234745 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nrmhl\" (UniqueName: \"kubernetes.io/projected/6ba80875-27e8-4986-97b0-83d81ae92204-kube-api-access-nrmhl\") pod \"nova-metadata-0\" (UID: \"6ba80875-27e8-4986-97b0-83d81ae92204\") " pod="openstack/nova-metadata-0" Nov 22 07:49:51 crc kubenswrapper[4853]: I1122 07:49:51.235287 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ba80875-27e8-4986-97b0-83d81ae92204-config-data\") pod \"nova-metadata-0\" (UID: \"6ba80875-27e8-4986-97b0-83d81ae92204\") " pod="openstack/nova-metadata-0" Nov 22 07:49:51 crc kubenswrapper[4853]: I1122 07:49:51.235426 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6ba80875-27e8-4986-97b0-83d81ae92204-logs\") pod \"nova-metadata-0\" (UID: \"6ba80875-27e8-4986-97b0-83d81ae92204\") " pod="openstack/nova-metadata-0" Nov 22 07:49:51 crc kubenswrapper[4853]: I1122 07:49:51.235517 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ba80875-27e8-4986-97b0-83d81ae92204-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"6ba80875-27e8-4986-97b0-83d81ae92204\") " pod="openstack/nova-metadata-0" Nov 22 07:49:51 crc kubenswrapper[4853]: I1122 07:49:51.235629 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/6ba80875-27e8-4986-97b0-83d81ae92204-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"6ba80875-27e8-4986-97b0-83d81ae92204\") " pod="openstack/nova-metadata-0" Nov 22 07:49:51 crc 
kubenswrapper[4853]: I1122 07:49:51.343788 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6ba80875-27e8-4986-97b0-83d81ae92204-logs\") pod \"nova-metadata-0\" (UID: \"6ba80875-27e8-4986-97b0-83d81ae92204\") " pod="openstack/nova-metadata-0" Nov 22 07:49:51 crc kubenswrapper[4853]: I1122 07:49:51.343854 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ba80875-27e8-4986-97b0-83d81ae92204-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"6ba80875-27e8-4986-97b0-83d81ae92204\") " pod="openstack/nova-metadata-0" Nov 22 07:49:51 crc kubenswrapper[4853]: I1122 07:49:51.343953 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/6ba80875-27e8-4986-97b0-83d81ae92204-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"6ba80875-27e8-4986-97b0-83d81ae92204\") " pod="openstack/nova-metadata-0" Nov 22 07:49:51 crc kubenswrapper[4853]: I1122 07:49:51.344148 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nrmhl\" (UniqueName: \"kubernetes.io/projected/6ba80875-27e8-4986-97b0-83d81ae92204-kube-api-access-nrmhl\") pod \"nova-metadata-0\" (UID: \"6ba80875-27e8-4986-97b0-83d81ae92204\") " pod="openstack/nova-metadata-0" Nov 22 07:49:51 crc kubenswrapper[4853]: I1122 07:49:51.344258 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ba80875-27e8-4986-97b0-83d81ae92204-config-data\") pod \"nova-metadata-0\" (UID: \"6ba80875-27e8-4986-97b0-83d81ae92204\") " pod="openstack/nova-metadata-0" Nov 22 07:49:51 crc kubenswrapper[4853]: I1122 07:49:51.345582 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6ba80875-27e8-4986-97b0-83d81ae92204-logs\") pod \"nova-metadata-0\" (UID: \"6ba80875-27e8-4986-97b0-83d81ae92204\") " pod="openstack/nova-metadata-0" Nov 22 07:49:51 crc kubenswrapper[4853]: I1122 07:49:51.359829 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ba80875-27e8-4986-97b0-83d81ae92204-config-data\") pod \"nova-metadata-0\" (UID: \"6ba80875-27e8-4986-97b0-83d81ae92204\") " pod="openstack/nova-metadata-0" Nov 22 07:49:51 crc kubenswrapper[4853]: I1122 07:49:51.360511 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/6ba80875-27e8-4986-97b0-83d81ae92204-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"6ba80875-27e8-4986-97b0-83d81ae92204\") " pod="openstack/nova-metadata-0" Nov 22 07:49:51 crc kubenswrapper[4853]: I1122 07:49:51.369357 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ba80875-27e8-4986-97b0-83d81ae92204-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"6ba80875-27e8-4986-97b0-83d81ae92204\") " pod="openstack/nova-metadata-0" Nov 22 07:49:51 crc kubenswrapper[4853]: I1122 07:49:51.372630 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nrmhl\" (UniqueName: \"kubernetes.io/projected/6ba80875-27e8-4986-97b0-83d81ae92204-kube-api-access-nrmhl\") pod \"nova-metadata-0\" (UID: \"6ba80875-27e8-4986-97b0-83d81ae92204\") " 
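
Each volume of the new nova-metadata-0 pod then walks the same three phases: VerifyControllerAttachedVolume (reconciler_common.go:245), MountVolume started (reconciler_common.go:218), and MountVolume.SetUp succeeded (operation_generator.go:637). A hedged sketch of the desired-state versus actual-state loop behind those lines; the real volume manager is far more involved, this only shows the shape:

```go
package main

import "fmt"

// volume is a simplified stand-in for the kubelet's desired-state entry.
type volume struct{ uniqueName, pod string }

// reconcile mounts anything desired but not yet mounted, emitting the same
// three phases that appear in the log above.
func reconcile(desired []volume, mounted map[string]bool) {
	for _, v := range desired {
		if mounted[v.uniqueName] {
			continue // already mounted, nothing to reconcile
		}
		fmt.Printf("VerifyControllerAttachedVolume started for volume %q pod %q\n", v.uniqueName, v.pod)
		fmt.Printf("MountVolume started for volume %q pod %q\n", v.uniqueName, v.pod)
		mounted[v.uniqueName] = true // the real SetUp would bind-mount here
		fmt.Printf("MountVolume.SetUp succeeded for volume %q pod %q\n", v.uniqueName, v.pod)
	}
}

func main() {
	reconcile([]volume{
		{"kubernetes.io/secret/6ba80875-27e8-4986-97b0-83d81ae92204-config-data", "openstack/nova-metadata-0"},
		{"kubernetes.io/empty-dir/6ba80875-27e8-4986-97b0-83d81ae92204-logs", "openstack/nova-metadata-0"},
	}, map[string]bool{})
}
```
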
pod="openstack/nova-metadata-0" Nov 22 07:49:51 crc kubenswrapper[4853]: I1122 07:49:51.579127 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 22 07:49:51 crc kubenswrapper[4853]: I1122 07:49:51.733301 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 22 07:49:51 crc kubenswrapper[4853]: I1122 07:49:51.773190 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="664e44b4-df62-4cb1-a7c3-4b8d99c92fc8" path="/var/lib/kubelet/pods/664e44b4-df62-4cb1-a7c3-4b8d99c92fc8/volumes" Nov 22 07:49:51 crc kubenswrapper[4853]: I1122 07:49:51.774016 4853 generic.go:334] "Generic (PLEG): container finished" podID="a2cabe90-83e6-41ba-a457-c6a3ca299950" containerID="cde5d1c795a7935e17c16d5aa3febb4d3968badddf4f73695f0ab25691331d10" exitCode=0 Nov 22 07:49:51 crc kubenswrapper[4853]: I1122 07:49:51.774277 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7115134e-ff99-44c2-b331-325661bf93a5" path="/var/lib/kubelet/pods/7115134e-ff99-44c2-b331-325661bf93a5/volumes" Nov 22 07:49:51 crc kubenswrapper[4853]: I1122 07:49:51.774858 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 22 07:49:51 crc kubenswrapper[4853]: I1122 07:49:51.784589 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a2cabe90-83e6-41ba-a457-c6a3ca299950","Type":"ContainerDied","Data":"cde5d1c795a7935e17c16d5aa3febb4d3968badddf4f73695f0ab25691331d10"} Nov 22 07:49:51 crc kubenswrapper[4853]: I1122 07:49:51.784657 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a2cabe90-83e6-41ba-a457-c6a3ca299950","Type":"ContainerDied","Data":"a2368c4aca7c384595f2d27bb329e47c735819e3037353f2af641f6ea493c69e"} Nov 22 07:49:51 crc kubenswrapper[4853]: I1122 07:49:51.784688 4853 scope.go:117] "RemoveContainer" containerID="cde5d1c795a7935e17c16d5aa3febb4d3968badddf4f73695f0ab25691331d10" Nov 22 07:49:51 crc kubenswrapper[4853]: I1122 07:49:51.838935 4853 scope.go:117] "RemoveContainer" containerID="c8df245231d1c98a3fe308e20c0afb8409f6385e17667bfd30c49af17aa5623c" Nov 22 07:49:51 crc kubenswrapper[4853]: I1122 07:49:51.869018 4853 scope.go:117] "RemoveContainer" containerID="cde5d1c795a7935e17c16d5aa3febb4d3968badddf4f73695f0ab25691331d10" Nov 22 07:49:51 crc kubenswrapper[4853]: E1122 07:49:51.869773 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cde5d1c795a7935e17c16d5aa3febb4d3968badddf4f73695f0ab25691331d10\": container with ID starting with cde5d1c795a7935e17c16d5aa3febb4d3968badddf4f73695f0ab25691331d10 not found: ID does not exist" containerID="cde5d1c795a7935e17c16d5aa3febb4d3968badddf4f73695f0ab25691331d10" Nov 22 07:49:51 crc kubenswrapper[4853]: I1122 07:49:51.869863 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cde5d1c795a7935e17c16d5aa3febb4d3968badddf4f73695f0ab25691331d10"} err="failed to get container status \"cde5d1c795a7935e17c16d5aa3febb4d3968badddf4f73695f0ab25691331d10\": rpc error: code = NotFound desc = could not find container \"cde5d1c795a7935e17c16d5aa3febb4d3968badddf4f73695f0ab25691331d10\": container with ID starting with cde5d1c795a7935e17c16d5aa3febb4d3968badddf4f73695f0ab25691331d10 not found: ID does not exist" Nov 22 07:49:51 crc kubenswrapper[4853]: I1122 07:49:51.869909 4853 
scope.go:117] "RemoveContainer" containerID="c8df245231d1c98a3fe308e20c0afb8409f6385e17667bfd30c49af17aa5623c" Nov 22 07:49:51 crc kubenswrapper[4853]: I1122 07:49:51.870493 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2cabe90-83e6-41ba-a457-c6a3ca299950-combined-ca-bundle\") pod \"a2cabe90-83e6-41ba-a457-c6a3ca299950\" (UID: \"a2cabe90-83e6-41ba-a457-c6a3ca299950\") " Nov 22 07:49:51 crc kubenswrapper[4853]: I1122 07:49:51.870906 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a2cabe90-83e6-41ba-a457-c6a3ca299950-logs\") pod \"a2cabe90-83e6-41ba-a457-c6a3ca299950\" (UID: \"a2cabe90-83e6-41ba-a457-c6a3ca299950\") " Nov 22 07:49:51 crc kubenswrapper[4853]: E1122 07:49:51.871563 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c8df245231d1c98a3fe308e20c0afb8409f6385e17667bfd30c49af17aa5623c\": container with ID starting with c8df245231d1c98a3fe308e20c0afb8409f6385e17667bfd30c49af17aa5623c not found: ID does not exist" containerID="c8df245231d1c98a3fe308e20c0afb8409f6385e17667bfd30c49af17aa5623c" Nov 22 07:49:51 crc kubenswrapper[4853]: I1122 07:49:51.871647 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c8df245231d1c98a3fe308e20c0afb8409f6385e17667bfd30c49af17aa5623c"} err="failed to get container status \"c8df245231d1c98a3fe308e20c0afb8409f6385e17667bfd30c49af17aa5623c\": rpc error: code = NotFound desc = could not find container \"c8df245231d1c98a3fe308e20c0afb8409f6385e17667bfd30c49af17aa5623c\": container with ID starting with c8df245231d1c98a3fe308e20c0afb8409f6385e17667bfd30c49af17aa5623c not found: ID does not exist" Nov 22 07:49:51 crc kubenswrapper[4853]: I1122 07:49:51.872332 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a2cabe90-83e6-41ba-a457-c6a3ca299950-logs" (OuterVolumeSpecName: "logs") pod "a2cabe90-83e6-41ba-a457-c6a3ca299950" (UID: "a2cabe90-83e6-41ba-a457-c6a3ca299950"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:49:51 crc kubenswrapper[4853]: I1122 07:49:51.872438 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2cabe90-83e6-41ba-a457-c6a3ca299950-config-data\") pod \"a2cabe90-83e6-41ba-a457-c6a3ca299950\" (UID: \"a2cabe90-83e6-41ba-a457-c6a3ca299950\") " Nov 22 07:49:51 crc kubenswrapper[4853]: I1122 07:49:51.872989 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9qvcj\" (UniqueName: \"kubernetes.io/projected/a2cabe90-83e6-41ba-a457-c6a3ca299950-kube-api-access-9qvcj\") pod \"a2cabe90-83e6-41ba-a457-c6a3ca299950\" (UID: \"a2cabe90-83e6-41ba-a457-c6a3ca299950\") " Nov 22 07:49:51 crc kubenswrapper[4853]: I1122 07:49:51.876263 4853 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a2cabe90-83e6-41ba-a457-c6a3ca299950-logs\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:51 crc kubenswrapper[4853]: I1122 07:49:51.880515 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2cabe90-83e6-41ba-a457-c6a3ca299950-kube-api-access-9qvcj" (OuterVolumeSpecName: "kube-api-access-9qvcj") pod "a2cabe90-83e6-41ba-a457-c6a3ca299950" (UID: "a2cabe90-83e6-41ba-a457-c6a3ca299950"). InnerVolumeSpecName "kube-api-access-9qvcj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:49:51 crc kubenswrapper[4853]: I1122 07:49:51.921717 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2cabe90-83e6-41ba-a457-c6a3ca299950-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a2cabe90-83e6-41ba-a457-c6a3ca299950" (UID: "a2cabe90-83e6-41ba-a457-c6a3ca299950"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:49:51 crc kubenswrapper[4853]: I1122 07:49:51.928035 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2cabe90-83e6-41ba-a457-c6a3ca299950-config-data" (OuterVolumeSpecName: "config-data") pod "a2cabe90-83e6-41ba-a457-c6a3ca299950" (UID: "a2cabe90-83e6-41ba-a457-c6a3ca299950"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:49:51 crc kubenswrapper[4853]: I1122 07:49:51.980270 4853 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2cabe90-83e6-41ba-a457-c6a3ca299950-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:51 crc kubenswrapper[4853]: I1122 07:49:51.980322 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9qvcj\" (UniqueName: \"kubernetes.io/projected/a2cabe90-83e6-41ba-a457-c6a3ca299950-kube-api-access-9qvcj\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:51 crc kubenswrapper[4853]: I1122 07:49:51.980349 4853 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2cabe90-83e6-41ba-a457-c6a3ca299950-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:52 crc kubenswrapper[4853]: W1122 07:49:52.093285 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6ba80875_27e8_4986_97b0_83d81ae92204.slice/crio-9672d6c52d86250ffefa352947d98ea0e0822dbd364e0b6cd2f431263b730ccf WatchSource:0}: Error finding container 9672d6c52d86250ffefa352947d98ea0e0822dbd364e0b6cd2f431263b730ccf: Status 404 returned error can't find the container with id 9672d6c52d86250ffefa352947d98ea0e0822dbd364e0b6cd2f431263b730ccf Nov 22 07:49:52 crc kubenswrapper[4853]: I1122 07:49:52.097047 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 22 07:49:52 crc kubenswrapper[4853]: I1122 07:49:52.129604 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 22 07:49:52 crc kubenswrapper[4853]: I1122 07:49:52.147235 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Nov 22 07:49:52 crc kubenswrapper[4853]: I1122 07:49:52.162305 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 22 07:49:52 crc kubenswrapper[4853]: E1122 07:49:52.163070 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2cabe90-83e6-41ba-a457-c6a3ca299950" containerName="nova-api-api" Nov 22 07:49:52 crc kubenswrapper[4853]: I1122 07:49:52.163098 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2cabe90-83e6-41ba-a457-c6a3ca299950" containerName="nova-api-api" Nov 22 07:49:52 crc kubenswrapper[4853]: E1122 07:49:52.163115 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2cabe90-83e6-41ba-a457-c6a3ca299950" containerName="nova-api-log" Nov 22 07:49:52 crc kubenswrapper[4853]: I1122 07:49:52.163122 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2cabe90-83e6-41ba-a457-c6a3ca299950" containerName="nova-api-log" Nov 22 07:49:52 crc kubenswrapper[4853]: I1122 07:49:52.163455 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2cabe90-83e6-41ba-a457-c6a3ca299950" containerName="nova-api-log" Nov 22 07:49:52 crc kubenswrapper[4853]: I1122 07:49:52.163488 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2cabe90-83e6-41ba-a457-c6a3ca299950" containerName="nova-api-api" Nov 22 07:49:52 crc kubenswrapper[4853]: I1122 07:49:52.165369 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 22 07:49:52 crc kubenswrapper[4853]: I1122 07:49:52.175308 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 22 07:49:52 crc kubenswrapper[4853]: I1122 07:49:52.220458 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 22 07:49:52 crc kubenswrapper[4853]: I1122 07:49:52.288312 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6s4w\" (UniqueName: \"kubernetes.io/projected/1414bd70-62c5-4ef7-a0c1-59652e6381a5-kube-api-access-g6s4w\") pod \"nova-api-0\" (UID: \"1414bd70-62c5-4ef7-a0c1-59652e6381a5\") " pod="openstack/nova-api-0" Nov 22 07:49:52 crc kubenswrapper[4853]: I1122 07:49:52.288409 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1414bd70-62c5-4ef7-a0c1-59652e6381a5-logs\") pod \"nova-api-0\" (UID: \"1414bd70-62c5-4ef7-a0c1-59652e6381a5\") " pod="openstack/nova-api-0" Nov 22 07:49:52 crc kubenswrapper[4853]: I1122 07:49:52.289103 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1414bd70-62c5-4ef7-a0c1-59652e6381a5-config-data\") pod \"nova-api-0\" (UID: \"1414bd70-62c5-4ef7-a0c1-59652e6381a5\") " pod="openstack/nova-api-0" Nov 22 07:49:52 crc kubenswrapper[4853]: I1122 07:49:52.289571 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1414bd70-62c5-4ef7-a0c1-59652e6381a5-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"1414bd70-62c5-4ef7-a0c1-59652e6381a5\") " pod="openstack/nova-api-0" Nov 22 07:49:52 crc kubenswrapper[4853]: I1122 07:49:52.392441 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g6s4w\" (UniqueName: \"kubernetes.io/projected/1414bd70-62c5-4ef7-a0c1-59652e6381a5-kube-api-access-g6s4w\") pod \"nova-api-0\" (UID: \"1414bd70-62c5-4ef7-a0c1-59652e6381a5\") " pod="openstack/nova-api-0" Nov 22 07:49:52 crc kubenswrapper[4853]: I1122 07:49:52.392560 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1414bd70-62c5-4ef7-a0c1-59652e6381a5-logs\") pod \"nova-api-0\" (UID: \"1414bd70-62c5-4ef7-a0c1-59652e6381a5\") " pod="openstack/nova-api-0" Nov 22 07:49:52 crc kubenswrapper[4853]: I1122 07:49:52.392690 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1414bd70-62c5-4ef7-a0c1-59652e6381a5-config-data\") pod \"nova-api-0\" (UID: \"1414bd70-62c5-4ef7-a0c1-59652e6381a5\") " pod="openstack/nova-api-0" Nov 22 07:49:52 crc kubenswrapper[4853]: I1122 07:49:52.392890 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1414bd70-62c5-4ef7-a0c1-59652e6381a5-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"1414bd70-62c5-4ef7-a0c1-59652e6381a5\") " pod="openstack/nova-api-0" Nov 22 07:49:52 crc kubenswrapper[4853]: I1122 07:49:52.393548 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1414bd70-62c5-4ef7-a0c1-59652e6381a5-logs\") pod \"nova-api-0\" (UID: \"1414bd70-62c5-4ef7-a0c1-59652e6381a5\") " 
pod="openstack/nova-api-0" Nov 22 07:49:52 crc kubenswrapper[4853]: I1122 07:49:52.400047 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1414bd70-62c5-4ef7-a0c1-59652e6381a5-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"1414bd70-62c5-4ef7-a0c1-59652e6381a5\") " pod="openstack/nova-api-0" Nov 22 07:49:52 crc kubenswrapper[4853]: I1122 07:49:52.402025 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1414bd70-62c5-4ef7-a0c1-59652e6381a5-config-data\") pod \"nova-api-0\" (UID: \"1414bd70-62c5-4ef7-a0c1-59652e6381a5\") " pod="openstack/nova-api-0" Nov 22 07:49:52 crc kubenswrapper[4853]: I1122 07:49:52.412631 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g6s4w\" (UniqueName: \"kubernetes.io/projected/1414bd70-62c5-4ef7-a0c1-59652e6381a5-kube-api-access-g6s4w\") pod \"nova-api-0\" (UID: \"1414bd70-62c5-4ef7-a0c1-59652e6381a5\") " pod="openstack/nova-api-0" Nov 22 07:49:52 crc kubenswrapper[4853]: I1122 07:49:52.620034 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 22 07:49:52 crc kubenswrapper[4853]: I1122 07:49:52.814176 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6ba80875-27e8-4986-97b0-83d81ae92204","Type":"ContainerStarted","Data":"ef40a486cb8f0621f5ba609aa154cca5ca66093ac492593ca63b07b66b9cb326"} Nov 22 07:49:52 crc kubenswrapper[4853]: I1122 07:49:52.814621 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6ba80875-27e8-4986-97b0-83d81ae92204","Type":"ContainerStarted","Data":"2e35c5c14c2ad2d88d8c86ebc10bec998bd8075651a93f4b1ae9073c53998a2b"} Nov 22 07:49:52 crc kubenswrapper[4853]: I1122 07:49:52.814638 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6ba80875-27e8-4986-97b0-83d81ae92204","Type":"ContainerStarted","Data":"9672d6c52d86250ffefa352947d98ea0e0822dbd364e0b6cd2f431263b730ccf"} Nov 22 07:49:52 crc kubenswrapper[4853]: I1122 07:49:52.819173 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-58t9w" podUID="70c5e5cc-15fb-41a4-b40d-8f770bae2182" containerName="registry-server" containerID="cri-o://9fbf42bd3a81db203f713656e777b72cee7e8c96581e2d874cdc0787325afb9f" gracePeriod=2 Nov 22 07:49:52 crc kubenswrapper[4853]: I1122 07:49:52.867344 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=1.867310309 podStartE2EDuration="1.867310309s" podCreationTimestamp="2025-11-22 07:49:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:49:52.84235465 +0000 UTC m=+2391.682977276" watchObservedRunningTime="2025-11-22 07:49:52.867310309 +0000 UTC m=+2391.707932935" Nov 22 07:49:53 crc kubenswrapper[4853]: I1122 07:49:53.157595 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 22 07:49:53 crc kubenswrapper[4853]: I1122 07:49:53.512260 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-58t9w" Nov 22 07:49:53 crc kubenswrapper[4853]: I1122 07:49:53.636010 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/70c5e5cc-15fb-41a4-b40d-8f770bae2182-catalog-content\") pod \"70c5e5cc-15fb-41a4-b40d-8f770bae2182\" (UID: \"70c5e5cc-15fb-41a4-b40d-8f770bae2182\") " Nov 22 07:49:53 crc kubenswrapper[4853]: I1122 07:49:53.636324 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mx7gz\" (UniqueName: \"kubernetes.io/projected/70c5e5cc-15fb-41a4-b40d-8f770bae2182-kube-api-access-mx7gz\") pod \"70c5e5cc-15fb-41a4-b40d-8f770bae2182\" (UID: \"70c5e5cc-15fb-41a4-b40d-8f770bae2182\") " Nov 22 07:49:53 crc kubenswrapper[4853]: I1122 07:49:53.636476 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/70c5e5cc-15fb-41a4-b40d-8f770bae2182-utilities\") pod \"70c5e5cc-15fb-41a4-b40d-8f770bae2182\" (UID: \"70c5e5cc-15fb-41a4-b40d-8f770bae2182\") " Nov 22 07:49:53 crc kubenswrapper[4853]: I1122 07:49:53.637258 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/70c5e5cc-15fb-41a4-b40d-8f770bae2182-utilities" (OuterVolumeSpecName: "utilities") pod "70c5e5cc-15fb-41a4-b40d-8f770bae2182" (UID: "70c5e5cc-15fb-41a4-b40d-8f770bae2182"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:49:53 crc kubenswrapper[4853]: I1122 07:49:53.638455 4853 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/70c5e5cc-15fb-41a4-b40d-8f770bae2182-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:53 crc kubenswrapper[4853]: I1122 07:49:53.642368 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70c5e5cc-15fb-41a4-b40d-8f770bae2182-kube-api-access-mx7gz" (OuterVolumeSpecName: "kube-api-access-mx7gz") pod "70c5e5cc-15fb-41a4-b40d-8f770bae2182" (UID: "70c5e5cc-15fb-41a4-b40d-8f770bae2182"). InnerVolumeSpecName "kube-api-access-mx7gz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:49:53 crc kubenswrapper[4853]: I1122 07:49:53.706249 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/70c5e5cc-15fb-41a4-b40d-8f770bae2182-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "70c5e5cc-15fb-41a4-b40d-8f770bae2182" (UID: "70c5e5cc-15fb-41a4-b40d-8f770bae2182"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:49:53 crc kubenswrapper[4853]: I1122 07:49:53.713528 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Nov 22 07:49:53 crc kubenswrapper[4853]: I1122 07:49:53.743700 4853 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/70c5e5cc-15fb-41a4-b40d-8f770bae2182-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:53 crc kubenswrapper[4853]: I1122 07:49:53.743791 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mx7gz\" (UniqueName: \"kubernetes.io/projected/70c5e5cc-15fb-41a4-b40d-8f770bae2182-kube-api-access-mx7gz\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:53 crc kubenswrapper[4853]: I1122 07:49:53.816887 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a2cabe90-83e6-41ba-a457-c6a3ca299950" path="/var/lib/kubelet/pods/a2cabe90-83e6-41ba-a457-c6a3ca299950/volumes" Nov 22 07:49:53 crc kubenswrapper[4853]: I1122 07:49:53.840046 4853 generic.go:334] "Generic (PLEG): container finished" podID="70c5e5cc-15fb-41a4-b40d-8f770bae2182" containerID="9fbf42bd3a81db203f713656e777b72cee7e8c96581e2d874cdc0787325afb9f" exitCode=0 Nov 22 07:49:53 crc kubenswrapper[4853]: I1122 07:49:53.840138 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-58t9w" event={"ID":"70c5e5cc-15fb-41a4-b40d-8f770bae2182","Type":"ContainerDied","Data":"9fbf42bd3a81db203f713656e777b72cee7e8c96581e2d874cdc0787325afb9f"} Nov 22 07:49:53 crc kubenswrapper[4853]: I1122 07:49:53.840184 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-58t9w" event={"ID":"70c5e5cc-15fb-41a4-b40d-8f770bae2182","Type":"ContainerDied","Data":"9484e7e5cc838a46180e6cc29cb5db16398888e23d51709a406c4fb08c64b834"} Nov 22 07:49:53 crc kubenswrapper[4853]: I1122 07:49:53.840209 4853 scope.go:117] "RemoveContainer" containerID="9fbf42bd3a81db203f713656e777b72cee7e8c96581e2d874cdc0787325afb9f" Nov 22 07:49:53 crc kubenswrapper[4853]: I1122 07:49:53.840452 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-58t9w" Nov 22 07:49:53 crc kubenswrapper[4853]: I1122 07:49:53.849370 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1414bd70-62c5-4ef7-a0c1-59652e6381a5","Type":"ContainerStarted","Data":"47772b6fd6c554e3a44bf5efb40eec964ac736b8c5b137b49744e646a2b1fa3b"} Nov 22 07:49:53 crc kubenswrapper[4853]: I1122 07:49:53.849439 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1414bd70-62c5-4ef7-a0c1-59652e6381a5","Type":"ContainerStarted","Data":"19c71bd9e8288278c0a888dbf5673b17db6a5998f39b25241a0bad267f4bf160"} Nov 22 07:49:53 crc kubenswrapper[4853]: I1122 07:49:53.898032 4853 scope.go:117] "RemoveContainer" containerID="c328b7dd484e540d55183cc7d9b9bfc4621e11e58c7828a401f170b1658befee" Nov 22 07:49:53 crc kubenswrapper[4853]: I1122 07:49:53.903399 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-58t9w"] Nov 22 07:49:53 crc kubenswrapper[4853]: I1122 07:49:53.917003 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-58t9w"] Nov 22 07:49:53 crc kubenswrapper[4853]: I1122 07:49:53.932175 4853 scope.go:117] "RemoveContainer" containerID="8607a882d7f7cd4c997b2f97ad9bb3660172d7de785bd5a56e9d4544faae3736" Nov 22 07:49:53 crc kubenswrapper[4853]: I1122 07:49:53.973359 4853 scope.go:117] "RemoveContainer" containerID="9fbf42bd3a81db203f713656e777b72cee7e8c96581e2d874cdc0787325afb9f" Nov 22 07:49:53 crc kubenswrapper[4853]: E1122 07:49:53.976766 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9fbf42bd3a81db203f713656e777b72cee7e8c96581e2d874cdc0787325afb9f\": container with ID starting with 9fbf42bd3a81db203f713656e777b72cee7e8c96581e2d874cdc0787325afb9f not found: ID does not exist" containerID="9fbf42bd3a81db203f713656e777b72cee7e8c96581e2d874cdc0787325afb9f" Nov 22 07:49:53 crc kubenswrapper[4853]: I1122 07:49:53.976822 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9fbf42bd3a81db203f713656e777b72cee7e8c96581e2d874cdc0787325afb9f"} err="failed to get container status \"9fbf42bd3a81db203f713656e777b72cee7e8c96581e2d874cdc0787325afb9f\": rpc error: code = NotFound desc = could not find container \"9fbf42bd3a81db203f713656e777b72cee7e8c96581e2d874cdc0787325afb9f\": container with ID starting with 9fbf42bd3a81db203f713656e777b72cee7e8c96581e2d874cdc0787325afb9f not found: ID does not exist" Nov 22 07:49:53 crc kubenswrapper[4853]: I1122 07:49:53.976859 4853 scope.go:117] "RemoveContainer" containerID="c328b7dd484e540d55183cc7d9b9bfc4621e11e58c7828a401f170b1658befee" Nov 22 07:49:53 crc kubenswrapper[4853]: E1122 07:49:53.977536 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c328b7dd484e540d55183cc7d9b9bfc4621e11e58c7828a401f170b1658befee\": container with ID starting with c328b7dd484e540d55183cc7d9b9bfc4621e11e58c7828a401f170b1658befee not found: ID does not exist" containerID="c328b7dd484e540d55183cc7d9b9bfc4621e11e58c7828a401f170b1658befee" Nov 22 07:49:53 crc kubenswrapper[4853]: I1122 07:49:53.977581 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c328b7dd484e540d55183cc7d9b9bfc4621e11e58c7828a401f170b1658befee"} err="failed to get container status 
\"c328b7dd484e540d55183cc7d9b9bfc4621e11e58c7828a401f170b1658befee\": rpc error: code = NotFound desc = could not find container \"c328b7dd484e540d55183cc7d9b9bfc4621e11e58c7828a401f170b1658befee\": container with ID starting with c328b7dd484e540d55183cc7d9b9bfc4621e11e58c7828a401f170b1658befee not found: ID does not exist" Nov 22 07:49:53 crc kubenswrapper[4853]: I1122 07:49:53.977612 4853 scope.go:117] "RemoveContainer" containerID="8607a882d7f7cd4c997b2f97ad9bb3660172d7de785bd5a56e9d4544faae3736" Nov 22 07:49:53 crc kubenswrapper[4853]: E1122 07:49:53.978723 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8607a882d7f7cd4c997b2f97ad9bb3660172d7de785bd5a56e9d4544faae3736\": container with ID starting with 8607a882d7f7cd4c997b2f97ad9bb3660172d7de785bd5a56e9d4544faae3736 not found: ID does not exist" containerID="8607a882d7f7cd4c997b2f97ad9bb3660172d7de785bd5a56e9d4544faae3736" Nov 22 07:49:53 crc kubenswrapper[4853]: I1122 07:49:53.978767 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8607a882d7f7cd4c997b2f97ad9bb3660172d7de785bd5a56e9d4544faae3736"} err="failed to get container status \"8607a882d7f7cd4c997b2f97ad9bb3660172d7de785bd5a56e9d4544faae3736\": rpc error: code = NotFound desc = could not find container \"8607a882d7f7cd4c997b2f97ad9bb3660172d7de785bd5a56e9d4544faae3736\": container with ID starting with 8607a882d7f7cd4c997b2f97ad9bb3660172d7de785bd5a56e9d4544faae3736 not found: ID does not exist" Nov 22 07:49:54 crc kubenswrapper[4853]: I1122 07:49:54.867437 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1414bd70-62c5-4ef7-a0c1-59652e6381a5","Type":"ContainerStarted","Data":"9e978767c72f56bf3aa7a7803ccb7425125a268b156e696547a26b29444acd26"} Nov 22 07:49:54 crc kubenswrapper[4853]: I1122 07:49:54.913598 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.9135692669999997 podStartE2EDuration="2.913569267s" podCreationTimestamp="2025-11-22 07:49:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:49:54.909631792 +0000 UTC m=+2393.750254428" watchObservedRunningTime="2025-11-22 07:49:54.913569267 +0000 UTC m=+2393.754191893" Nov 22 07:49:55 crc kubenswrapper[4853]: I1122 07:49:55.070810 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-sync-bk8mb"] Nov 22 07:49:55 crc kubenswrapper[4853]: E1122 07:49:55.071476 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70c5e5cc-15fb-41a4-b40d-8f770bae2182" containerName="extract-utilities" Nov 22 07:49:55 crc kubenswrapper[4853]: I1122 07:49:55.071497 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="70c5e5cc-15fb-41a4-b40d-8f770bae2182" containerName="extract-utilities" Nov 22 07:49:55 crc kubenswrapper[4853]: E1122 07:49:55.071516 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70c5e5cc-15fb-41a4-b40d-8f770bae2182" containerName="registry-server" Nov 22 07:49:55 crc kubenswrapper[4853]: I1122 07:49:55.071523 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="70c5e5cc-15fb-41a4-b40d-8f770bae2182" containerName="registry-server" Nov 22 07:49:55 crc kubenswrapper[4853]: E1122 07:49:55.071541 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70c5e5cc-15fb-41a4-b40d-8f770bae2182" 
containerName="extract-content" Nov 22 07:49:55 crc kubenswrapper[4853]: I1122 07:49:55.071547 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="70c5e5cc-15fb-41a4-b40d-8f770bae2182" containerName="extract-content" Nov 22 07:49:55 crc kubenswrapper[4853]: I1122 07:49:55.071848 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="70c5e5cc-15fb-41a4-b40d-8f770bae2182" containerName="registry-server" Nov 22 07:49:55 crc kubenswrapper[4853]: I1122 07:49:55.072862 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-bk8mb" Nov 22 07:49:55 crc kubenswrapper[4853]: I1122 07:49:55.075669 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Nov 22 07:49:55 crc kubenswrapper[4853]: I1122 07:49:55.077541 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-jm7rg" Nov 22 07:49:55 crc kubenswrapper[4853]: I1122 07:49:55.077834 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Nov 22 07:49:55 crc kubenswrapper[4853]: I1122 07:49:55.077844 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Nov 22 07:49:55 crc kubenswrapper[4853]: I1122 07:49:55.091419 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-bk8mb"] Nov 22 07:49:55 crc kubenswrapper[4853]: I1122 07:49:55.189045 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2538cfd0-3cda-47f6-83ef-c0fab178a95c-config-data\") pod \"aodh-db-sync-bk8mb\" (UID: \"2538cfd0-3cda-47f6-83ef-c0fab178a95c\") " pod="openstack/aodh-db-sync-bk8mb" Nov 22 07:49:55 crc kubenswrapper[4853]: I1122 07:49:55.189108 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2538cfd0-3cda-47f6-83ef-c0fab178a95c-combined-ca-bundle\") pod \"aodh-db-sync-bk8mb\" (UID: \"2538cfd0-3cda-47f6-83ef-c0fab178a95c\") " pod="openstack/aodh-db-sync-bk8mb" Nov 22 07:49:55 crc kubenswrapper[4853]: I1122 07:49:55.189472 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2538cfd0-3cda-47f6-83ef-c0fab178a95c-scripts\") pod \"aodh-db-sync-bk8mb\" (UID: \"2538cfd0-3cda-47f6-83ef-c0fab178a95c\") " pod="openstack/aodh-db-sync-bk8mb" Nov 22 07:49:55 crc kubenswrapper[4853]: I1122 07:49:55.189780 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8x5v9\" (UniqueName: \"kubernetes.io/projected/2538cfd0-3cda-47f6-83ef-c0fab178a95c-kube-api-access-8x5v9\") pod \"aodh-db-sync-bk8mb\" (UID: \"2538cfd0-3cda-47f6-83ef-c0fab178a95c\") " pod="openstack/aodh-db-sync-bk8mb" Nov 22 07:49:55 crc kubenswrapper[4853]: I1122 07:49:55.292852 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8x5v9\" (UniqueName: \"kubernetes.io/projected/2538cfd0-3cda-47f6-83ef-c0fab178a95c-kube-api-access-8x5v9\") pod \"aodh-db-sync-bk8mb\" (UID: \"2538cfd0-3cda-47f6-83ef-c0fab178a95c\") " pod="openstack/aodh-db-sync-bk8mb" Nov 22 07:49:55 crc kubenswrapper[4853]: I1122 07:49:55.293109 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/2538cfd0-3cda-47f6-83ef-c0fab178a95c-config-data\") pod \"aodh-db-sync-bk8mb\" (UID: \"2538cfd0-3cda-47f6-83ef-c0fab178a95c\") " pod="openstack/aodh-db-sync-bk8mb" Nov 22 07:49:55 crc kubenswrapper[4853]: I1122 07:49:55.293145 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2538cfd0-3cda-47f6-83ef-c0fab178a95c-combined-ca-bundle\") pod \"aodh-db-sync-bk8mb\" (UID: \"2538cfd0-3cda-47f6-83ef-c0fab178a95c\") " pod="openstack/aodh-db-sync-bk8mb" Nov 22 07:49:55 crc kubenswrapper[4853]: I1122 07:49:55.293184 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2538cfd0-3cda-47f6-83ef-c0fab178a95c-scripts\") pod \"aodh-db-sync-bk8mb\" (UID: \"2538cfd0-3cda-47f6-83ef-c0fab178a95c\") " pod="openstack/aodh-db-sync-bk8mb" Nov 22 07:49:55 crc kubenswrapper[4853]: I1122 07:49:55.304880 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2538cfd0-3cda-47f6-83ef-c0fab178a95c-scripts\") pod \"aodh-db-sync-bk8mb\" (UID: \"2538cfd0-3cda-47f6-83ef-c0fab178a95c\") " pod="openstack/aodh-db-sync-bk8mb" Nov 22 07:49:55 crc kubenswrapper[4853]: I1122 07:49:55.305804 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2538cfd0-3cda-47f6-83ef-c0fab178a95c-combined-ca-bundle\") pod \"aodh-db-sync-bk8mb\" (UID: \"2538cfd0-3cda-47f6-83ef-c0fab178a95c\") " pod="openstack/aodh-db-sync-bk8mb" Nov 22 07:49:55 crc kubenswrapper[4853]: I1122 07:49:55.310026 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2538cfd0-3cda-47f6-83ef-c0fab178a95c-config-data\") pod \"aodh-db-sync-bk8mb\" (UID: \"2538cfd0-3cda-47f6-83ef-c0fab178a95c\") " pod="openstack/aodh-db-sync-bk8mb" Nov 22 07:49:55 crc kubenswrapper[4853]: I1122 07:49:55.322615 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8x5v9\" (UniqueName: \"kubernetes.io/projected/2538cfd0-3cda-47f6-83ef-c0fab178a95c-kube-api-access-8x5v9\") pod \"aodh-db-sync-bk8mb\" (UID: \"2538cfd0-3cda-47f6-83ef-c0fab178a95c\") " pod="openstack/aodh-db-sync-bk8mb" Nov 22 07:49:55 crc kubenswrapper[4853]: I1122 07:49:55.411552 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-bk8mb" Nov 22 07:49:55 crc kubenswrapper[4853]: I1122 07:49:55.778973 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="70c5e5cc-15fb-41a4-b40d-8f770bae2182" path="/var/lib/kubelet/pods/70c5e5cc-15fb-41a4-b40d-8f770bae2182/volumes" Nov 22 07:49:56 crc kubenswrapper[4853]: I1122 07:49:56.045358 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-bk8mb"] Nov 22 07:49:56 crc kubenswrapper[4853]: I1122 07:49:56.579495 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 22 07:49:56 crc kubenswrapper[4853]: I1122 07:49:56.579951 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 22 07:49:56 crc kubenswrapper[4853]: I1122 07:49:56.909533 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-bk8mb" event={"ID":"2538cfd0-3cda-47f6-83ef-c0fab178a95c","Type":"ContainerStarted","Data":"be143895c37f774114af6c11b30cb2d4388e288bbbfe98a7b59faa64170dc6b9"} Nov 22 07:49:57 crc kubenswrapper[4853]: I1122 07:49:57.927451 4853 generic.go:334] "Generic (PLEG): container finished" podID="1cc8da91-f334-4196-aa2f-191e55317490" containerID="8deaf242fe95930b41dd1a53aef0b8dd68204d09ede1a322ab27c05f44be1dac" exitCode=137 Nov 22 07:49:57 crc kubenswrapper[4853]: I1122 07:49:57.927553 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"1cc8da91-f334-4196-aa2f-191e55317490","Type":"ContainerDied","Data":"8deaf242fe95930b41dd1a53aef0b8dd68204d09ede1a322ab27c05f44be1dac"} Nov 22 07:49:57 crc kubenswrapper[4853]: I1122 07:49:57.930870 4853 generic.go:334] "Generic (PLEG): container finished" podID="3bacd6f9-077c-4dee-aeef-3b546162391b" containerID="1fca9e6b7a954fe50e4691944dff87a8fe48c7d5ac441dfd28dc0fec8f8c1571" exitCode=137 Nov 22 07:49:57 crc kubenswrapper[4853]: I1122 07:49:57.930916 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"3bacd6f9-077c-4dee-aeef-3b546162391b","Type":"ContainerDied","Data":"1fca9e6b7a954fe50e4691944dff87a8fe48c7d5ac441dfd28dc0fec8f8c1571"} Nov 22 07:49:58 crc kubenswrapper[4853]: I1122 07:49:58.378659 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 22 07:49:58 crc kubenswrapper[4853]: I1122 07:49:58.389552 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 22 07:49:58 crc kubenswrapper[4853]: I1122 07:49:58.498889 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1cc8da91-f334-4196-aa2f-191e55317490-combined-ca-bundle\") pod \"1cc8da91-f334-4196-aa2f-191e55317490\" (UID: \"1cc8da91-f334-4196-aa2f-191e55317490\") " Nov 22 07:49:58 crc kubenswrapper[4853]: I1122 07:49:58.499075 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kvp2j\" (UniqueName: \"kubernetes.io/projected/1cc8da91-f334-4196-aa2f-191e55317490-kube-api-access-kvp2j\") pod \"1cc8da91-f334-4196-aa2f-191e55317490\" (UID: \"1cc8da91-f334-4196-aa2f-191e55317490\") " Nov 22 07:49:58 crc kubenswrapper[4853]: I1122 07:49:58.499293 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1cc8da91-f334-4196-aa2f-191e55317490-config-data\") pod \"1cc8da91-f334-4196-aa2f-191e55317490\" (UID: \"1cc8da91-f334-4196-aa2f-191e55317490\") " Nov 22 07:49:58 crc kubenswrapper[4853]: I1122 07:49:58.499364 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3bacd6f9-077c-4dee-aeef-3b546162391b-config-data\") pod \"3bacd6f9-077c-4dee-aeef-3b546162391b\" (UID: \"3bacd6f9-077c-4dee-aeef-3b546162391b\") " Nov 22 07:49:58 crc kubenswrapper[4853]: I1122 07:49:58.499475 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9n2qm\" (UniqueName: \"kubernetes.io/projected/3bacd6f9-077c-4dee-aeef-3b546162391b-kube-api-access-9n2qm\") pod \"3bacd6f9-077c-4dee-aeef-3b546162391b\" (UID: \"3bacd6f9-077c-4dee-aeef-3b546162391b\") " Nov 22 07:49:58 crc kubenswrapper[4853]: I1122 07:49:58.499582 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bacd6f9-077c-4dee-aeef-3b546162391b-combined-ca-bundle\") pod \"3bacd6f9-077c-4dee-aeef-3b546162391b\" (UID: \"3bacd6f9-077c-4dee-aeef-3b546162391b\") " Nov 22 07:49:58 crc kubenswrapper[4853]: I1122 07:49:58.509589 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1cc8da91-f334-4196-aa2f-191e55317490-kube-api-access-kvp2j" (OuterVolumeSpecName: "kube-api-access-kvp2j") pod "1cc8da91-f334-4196-aa2f-191e55317490" (UID: "1cc8da91-f334-4196-aa2f-191e55317490"). InnerVolumeSpecName "kube-api-access-kvp2j". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:49:58 crc kubenswrapper[4853]: I1122 07:49:58.510582 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3bacd6f9-077c-4dee-aeef-3b546162391b-kube-api-access-9n2qm" (OuterVolumeSpecName: "kube-api-access-9n2qm") pod "3bacd6f9-077c-4dee-aeef-3b546162391b" (UID: "3bacd6f9-077c-4dee-aeef-3b546162391b"). InnerVolumeSpecName "kube-api-access-9n2qm". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:49:58 crc kubenswrapper[4853]: I1122 07:49:58.541346 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3bacd6f9-077c-4dee-aeef-3b546162391b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3bacd6f9-077c-4dee-aeef-3b546162391b" (UID: "3bacd6f9-077c-4dee-aeef-3b546162391b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:49:58 crc kubenswrapper[4853]: I1122 07:49:58.543655 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1cc8da91-f334-4196-aa2f-191e55317490-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1cc8da91-f334-4196-aa2f-191e55317490" (UID: "1cc8da91-f334-4196-aa2f-191e55317490"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:49:58 crc kubenswrapper[4853]: I1122 07:49:58.546482 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3bacd6f9-077c-4dee-aeef-3b546162391b-config-data" (OuterVolumeSpecName: "config-data") pod "3bacd6f9-077c-4dee-aeef-3b546162391b" (UID: "3bacd6f9-077c-4dee-aeef-3b546162391b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:49:58 crc kubenswrapper[4853]: I1122 07:49:58.547895 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1cc8da91-f334-4196-aa2f-191e55317490-config-data" (OuterVolumeSpecName: "config-data") pod "1cc8da91-f334-4196-aa2f-191e55317490" (UID: "1cc8da91-f334-4196-aa2f-191e55317490"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:49:58 crc kubenswrapper[4853]: I1122 07:49:58.605921 4853 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1cc8da91-f334-4196-aa2f-191e55317490-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:58 crc kubenswrapper[4853]: I1122 07:49:58.605997 4853 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3bacd6f9-077c-4dee-aeef-3b546162391b-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:58 crc kubenswrapper[4853]: I1122 07:49:58.606010 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9n2qm\" (UniqueName: \"kubernetes.io/projected/3bacd6f9-077c-4dee-aeef-3b546162391b-kube-api-access-9n2qm\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:58 crc kubenswrapper[4853]: I1122 07:49:58.606022 4853 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bacd6f9-077c-4dee-aeef-3b546162391b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:58 crc kubenswrapper[4853]: I1122 07:49:58.606032 4853 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1cc8da91-f334-4196-aa2f-191e55317490-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:58 crc kubenswrapper[4853]: I1122 07:49:58.606041 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kvp2j\" (UniqueName: \"kubernetes.io/projected/1cc8da91-f334-4196-aa2f-191e55317490-kube-api-access-kvp2j\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:58 crc kubenswrapper[4853]: I1122 07:49:58.958339 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 22 07:49:58 crc kubenswrapper[4853]: I1122 07:49:58.958358 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"3bacd6f9-077c-4dee-aeef-3b546162391b","Type":"ContainerDied","Data":"5a2291e7fb13c2a1b7b68a9370c1118d084c4683fabacdec750083222e596a8b"} Nov 22 07:49:58 crc kubenswrapper[4853]: I1122 07:49:58.958444 4853 scope.go:117] "RemoveContainer" containerID="1fca9e6b7a954fe50e4691944dff87a8fe48c7d5ac441dfd28dc0fec8f8c1571" Nov 22 07:49:58 crc kubenswrapper[4853]: I1122 07:49:58.966290 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"1cc8da91-f334-4196-aa2f-191e55317490","Type":"ContainerDied","Data":"ecba6d7ab26a7cc1260709f72518be59f3dc013eef7d17573b897ab36d105f97"} Nov 22 07:49:58 crc kubenswrapper[4853]: I1122 07:49:58.966423 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 22 07:49:59 crc kubenswrapper[4853]: I1122 07:49:59.006937 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 22 07:49:59 crc kubenswrapper[4853]: I1122 07:49:59.027560 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Nov 22 07:49:59 crc kubenswrapper[4853]: I1122 07:49:59.042942 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 22 07:49:59 crc kubenswrapper[4853]: I1122 07:49:59.067153 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 22 07:49:59 crc kubenswrapper[4853]: I1122 07:49:59.083649 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Nov 22 07:49:59 crc kubenswrapper[4853]: E1122 07:49:59.084489 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1cc8da91-f334-4196-aa2f-191e55317490" containerName="nova-cell1-novncproxy-novncproxy" Nov 22 07:49:59 crc kubenswrapper[4853]: I1122 07:49:59.084522 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="1cc8da91-f334-4196-aa2f-191e55317490" containerName="nova-cell1-novncproxy-novncproxy" Nov 22 07:49:59 crc kubenswrapper[4853]: E1122 07:49:59.084543 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3bacd6f9-077c-4dee-aeef-3b546162391b" containerName="nova-scheduler-scheduler" Nov 22 07:49:59 crc kubenswrapper[4853]: I1122 07:49:59.084552 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="3bacd6f9-077c-4dee-aeef-3b546162391b" containerName="nova-scheduler-scheduler" Nov 22 07:49:59 crc kubenswrapper[4853]: I1122 07:49:59.085017 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="3bacd6f9-077c-4dee-aeef-3b546162391b" containerName="nova-scheduler-scheduler" Nov 22 07:49:59 crc kubenswrapper[4853]: I1122 07:49:59.085042 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="1cc8da91-f334-4196-aa2f-191e55317490" containerName="nova-cell1-novncproxy-novncproxy" Nov 22 07:49:59 crc kubenswrapper[4853]: I1122 07:49:59.086260 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 22 07:49:59 crc kubenswrapper[4853]: I1122 07:49:59.089741 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Nov 22 07:49:59 crc kubenswrapper[4853]: I1122 07:49:59.104374 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 22 07:49:59 crc kubenswrapper[4853]: I1122 07:49:59.118170 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 22 07:49:59 crc kubenswrapper[4853]: I1122 07:49:59.121419 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 22 07:49:59 crc kubenswrapper[4853]: I1122 07:49:59.125559 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Nov 22 07:49:59 crc kubenswrapper[4853]: I1122 07:49:59.125674 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Nov 22 07:49:59 crc kubenswrapper[4853]: I1122 07:49:59.125910 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Nov 22 07:49:59 crc kubenswrapper[4853]: I1122 07:49:59.133318 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 22 07:49:59 crc kubenswrapper[4853]: I1122 07:49:59.223003 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjpdd\" (UniqueName: \"kubernetes.io/projected/db93d7eb-5143-45e2-afd6-061392f78392-kube-api-access-sjpdd\") pod \"nova-scheduler-0\" (UID: \"db93d7eb-5143-45e2-afd6-061392f78392\") " pod="openstack/nova-scheduler-0" Nov 22 07:49:59 crc kubenswrapper[4853]: I1122 07:49:59.223123 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/6e86b7d7-9438-4d4e-bd8e-163c25cf527a-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"6e86b7d7-9438-4d4e-bd8e-163c25cf527a\") " pod="openstack/nova-cell1-novncproxy-0" Nov 22 07:49:59 crc kubenswrapper[4853]: I1122 07:49:59.223782 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e86b7d7-9438-4d4e-bd8e-163c25cf527a-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"6e86b7d7-9438-4d4e-bd8e-163c25cf527a\") " pod="openstack/nova-cell1-novncproxy-0" Nov 22 07:49:59 crc kubenswrapper[4853]: I1122 07:49:59.223885 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/6e86b7d7-9438-4d4e-bd8e-163c25cf527a-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"6e86b7d7-9438-4d4e-bd8e-163c25cf527a\") " pod="openstack/nova-cell1-novncproxy-0" Nov 22 07:49:59 crc kubenswrapper[4853]: I1122 07:49:59.224002 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xf74b\" (UniqueName: \"kubernetes.io/projected/6e86b7d7-9438-4d4e-bd8e-163c25cf527a-kube-api-access-xf74b\") pod \"nova-cell1-novncproxy-0\" (UID: \"6e86b7d7-9438-4d4e-bd8e-163c25cf527a\") " pod="openstack/nova-cell1-novncproxy-0" Nov 22 07:49:59 crc kubenswrapper[4853]: I1122 07:49:59.224323 4853 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e86b7d7-9438-4d4e-bd8e-163c25cf527a-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"6e86b7d7-9438-4d4e-bd8e-163c25cf527a\") " pod="openstack/nova-cell1-novncproxy-0" Nov 22 07:49:59 crc kubenswrapper[4853]: I1122 07:49:59.224779 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db93d7eb-5143-45e2-afd6-061392f78392-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"db93d7eb-5143-45e2-afd6-061392f78392\") " pod="openstack/nova-scheduler-0" Nov 22 07:49:59 crc kubenswrapper[4853]: I1122 07:49:59.224809 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db93d7eb-5143-45e2-afd6-061392f78392-config-data\") pod \"nova-scheduler-0\" (UID: \"db93d7eb-5143-45e2-afd6-061392f78392\") " pod="openstack/nova-scheduler-0" Nov 22 07:49:59 crc kubenswrapper[4853]: I1122 07:49:59.328233 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db93d7eb-5143-45e2-afd6-061392f78392-config-data\") pod \"nova-scheduler-0\" (UID: \"db93d7eb-5143-45e2-afd6-061392f78392\") " pod="openstack/nova-scheduler-0" Nov 22 07:49:59 crc kubenswrapper[4853]: I1122 07:49:59.328325 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db93d7eb-5143-45e2-afd6-061392f78392-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"db93d7eb-5143-45e2-afd6-061392f78392\") " pod="openstack/nova-scheduler-0" Nov 22 07:49:59 crc kubenswrapper[4853]: I1122 07:49:59.328424 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sjpdd\" (UniqueName: \"kubernetes.io/projected/db93d7eb-5143-45e2-afd6-061392f78392-kube-api-access-sjpdd\") pod \"nova-scheduler-0\" (UID: \"db93d7eb-5143-45e2-afd6-061392f78392\") " pod="openstack/nova-scheduler-0" Nov 22 07:49:59 crc kubenswrapper[4853]: I1122 07:49:59.328462 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/6e86b7d7-9438-4d4e-bd8e-163c25cf527a-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"6e86b7d7-9438-4d4e-bd8e-163c25cf527a\") " pod="openstack/nova-cell1-novncproxy-0" Nov 22 07:49:59 crc kubenswrapper[4853]: I1122 07:49:59.328502 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e86b7d7-9438-4d4e-bd8e-163c25cf527a-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"6e86b7d7-9438-4d4e-bd8e-163c25cf527a\") " pod="openstack/nova-cell1-novncproxy-0" Nov 22 07:49:59 crc kubenswrapper[4853]: I1122 07:49:59.328527 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/6e86b7d7-9438-4d4e-bd8e-163c25cf527a-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"6e86b7d7-9438-4d4e-bd8e-163c25cf527a\") " pod="openstack/nova-cell1-novncproxy-0" Nov 22 07:49:59 crc kubenswrapper[4853]: I1122 07:49:59.328568 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xf74b\" (UniqueName: 
\"kubernetes.io/projected/6e86b7d7-9438-4d4e-bd8e-163c25cf527a-kube-api-access-xf74b\") pod \"nova-cell1-novncproxy-0\" (UID: \"6e86b7d7-9438-4d4e-bd8e-163c25cf527a\") " pod="openstack/nova-cell1-novncproxy-0" Nov 22 07:49:59 crc kubenswrapper[4853]: I1122 07:49:59.328624 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e86b7d7-9438-4d4e-bd8e-163c25cf527a-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"6e86b7d7-9438-4d4e-bd8e-163c25cf527a\") " pod="openstack/nova-cell1-novncproxy-0" Nov 22 07:49:59 crc kubenswrapper[4853]: I1122 07:49:59.337414 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/6e86b7d7-9438-4d4e-bd8e-163c25cf527a-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"6e86b7d7-9438-4d4e-bd8e-163c25cf527a\") " pod="openstack/nova-cell1-novncproxy-0" Nov 22 07:49:59 crc kubenswrapper[4853]: I1122 07:49:59.338350 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/6e86b7d7-9438-4d4e-bd8e-163c25cf527a-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"6e86b7d7-9438-4d4e-bd8e-163c25cf527a\") " pod="openstack/nova-cell1-novncproxy-0" Nov 22 07:49:59 crc kubenswrapper[4853]: I1122 07:49:59.338518 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e86b7d7-9438-4d4e-bd8e-163c25cf527a-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"6e86b7d7-9438-4d4e-bd8e-163c25cf527a\") " pod="openstack/nova-cell1-novncproxy-0" Nov 22 07:49:59 crc kubenswrapper[4853]: I1122 07:49:59.343247 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e86b7d7-9438-4d4e-bd8e-163c25cf527a-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"6e86b7d7-9438-4d4e-bd8e-163c25cf527a\") " pod="openstack/nova-cell1-novncproxy-0" Nov 22 07:49:59 crc kubenswrapper[4853]: I1122 07:49:59.344336 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db93d7eb-5143-45e2-afd6-061392f78392-config-data\") pod \"nova-scheduler-0\" (UID: \"db93d7eb-5143-45e2-afd6-061392f78392\") " pod="openstack/nova-scheduler-0" Nov 22 07:49:59 crc kubenswrapper[4853]: I1122 07:49:59.349190 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db93d7eb-5143-45e2-afd6-061392f78392-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"db93d7eb-5143-45e2-afd6-061392f78392\") " pod="openstack/nova-scheduler-0" Nov 22 07:49:59 crc kubenswrapper[4853]: I1122 07:49:59.354639 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xf74b\" (UniqueName: \"kubernetes.io/projected/6e86b7d7-9438-4d4e-bd8e-163c25cf527a-kube-api-access-xf74b\") pod \"nova-cell1-novncproxy-0\" (UID: \"6e86b7d7-9438-4d4e-bd8e-163c25cf527a\") " pod="openstack/nova-cell1-novncproxy-0" Nov 22 07:49:59 crc kubenswrapper[4853]: I1122 07:49:59.355528 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sjpdd\" (UniqueName: \"kubernetes.io/projected/db93d7eb-5143-45e2-afd6-061392f78392-kube-api-access-sjpdd\") pod \"nova-scheduler-0\" (UID: \"db93d7eb-5143-45e2-afd6-061392f78392\") " 
pod="openstack/nova-scheduler-0" Nov 22 07:49:59 crc kubenswrapper[4853]: I1122 07:49:59.415522 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 22 07:49:59 crc kubenswrapper[4853]: I1122 07:49:59.448602 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 22 07:49:59 crc kubenswrapper[4853]: I1122 07:49:59.781069 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1cc8da91-f334-4196-aa2f-191e55317490" path="/var/lib/kubelet/pods/1cc8da91-f334-4196-aa2f-191e55317490/volumes" Nov 22 07:49:59 crc kubenswrapper[4853]: I1122 07:49:59.782523 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3bacd6f9-077c-4dee-aeef-3b546162391b" path="/var/lib/kubelet/pods/3bacd6f9-077c-4dee-aeef-3b546162391b/volumes" Nov 22 07:50:01 crc kubenswrapper[4853]: I1122 07:50:01.580180 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 22 07:50:01 crc kubenswrapper[4853]: I1122 07:50:01.580516 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 22 07:50:02 crc kubenswrapper[4853]: I1122 07:50:02.598061 4853 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="6ba80875-27e8-4986-97b0-83d81ae92204" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.253:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 22 07:50:02 crc kubenswrapper[4853]: I1122 07:50:02.598088 4853 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="6ba80875-27e8-4986-97b0-83d81ae92204" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.253:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 22 07:50:02 crc kubenswrapper[4853]: I1122 07:50:02.621284 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 22 07:50:02 crc kubenswrapper[4853]: I1122 07:50:02.621329 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 22 07:50:03 crc kubenswrapper[4853]: I1122 07:50:03.705105 4853 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="1414bd70-62c5-4ef7-a0c1-59652e6381a5" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.254:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 22 07:50:03 crc kubenswrapper[4853]: I1122 07:50:03.705840 4853 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="1414bd70-62c5-4ef7-a0c1-59652e6381a5" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.254:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 22 07:50:03 crc kubenswrapper[4853]: I1122 07:50:03.716980 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 22 07:50:03 crc kubenswrapper[4853]: I1122 07:50:03.719453 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell0-conductor-0" podUID="8ef6d468-e6fd-4064-8f59-6d63c5d45e1f" containerName="nova-cell0-conductor-conductor" containerID="cri-o://728d548bd77c4b94178de19b8f2870e466e03a51b63b1b26181a56dcc67766df" 
gracePeriod=30 Nov 22 07:50:03 crc kubenswrapper[4853]: I1122 07:50:03.773336 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 22 07:50:03 crc kubenswrapper[4853]: I1122 07:50:03.790929 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 22 07:50:03 crc kubenswrapper[4853]: I1122 07:50:03.791445 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="1414bd70-62c5-4ef7-a0c1-59652e6381a5" containerName="nova-api-log" containerID="cri-o://47772b6fd6c554e3a44bf5efb40eec964ac736b8c5b137b49744e646a2b1fa3b" gracePeriod=30 Nov 22 07:50:03 crc kubenswrapper[4853]: I1122 07:50:03.792942 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="1414bd70-62c5-4ef7-a0c1-59652e6381a5" containerName="nova-api-api" containerID="cri-o://9e978767c72f56bf3aa7a7803ccb7425125a268b156e696547a26b29444acd26" gracePeriod=30 Nov 22 07:50:03 crc kubenswrapper[4853]: I1122 07:50:03.815882 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 22 07:50:03 crc kubenswrapper[4853]: I1122 07:50:03.816186 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="6ba80875-27e8-4986-97b0-83d81ae92204" containerName="nova-metadata-log" containerID="cri-o://2e35c5c14c2ad2d88d8c86ebc10bec998bd8075651a93f4b1ae9073c53998a2b" gracePeriod=30 Nov 22 07:50:03 crc kubenswrapper[4853]: I1122 07:50:03.816344 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="6ba80875-27e8-4986-97b0-83d81ae92204" containerName="nova-metadata-metadata" containerID="cri-o://ef40a486cb8f0621f5ba609aa154cca5ca66093ac492593ca63b07b66b9cb326" gracePeriod=30 Nov 22 07:50:03 crc kubenswrapper[4853]: I1122 07:50:03.846320 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 22 07:50:03 crc kubenswrapper[4853]: I1122 07:50:03.868675 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 22 07:50:03 crc kubenswrapper[4853]: I1122 07:50:03.881381 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-conductor-0" podUID="d1ae71ec-04ce-4d8b-9504-c8d122fce19b" containerName="nova-cell1-conductor-conductor" containerID="cri-o://c0be6df4655fe16ce45e44b1c8cfd538aaddc58da568a1ec713abc3f7f12049b" gracePeriod=30 Nov 22 07:50:05 crc kubenswrapper[4853]: I1122 07:50:05.082747 4853 generic.go:334] "Generic (PLEG): container finished" podID="1414bd70-62c5-4ef7-a0c1-59652e6381a5" containerID="47772b6fd6c554e3a44bf5efb40eec964ac736b8c5b137b49744e646a2b1fa3b" exitCode=143 Nov 22 07:50:05 crc kubenswrapper[4853]: I1122 07:50:05.082832 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1414bd70-62c5-4ef7-a0c1-59652e6381a5","Type":"ContainerDied","Data":"47772b6fd6c554e3a44bf5efb40eec964ac736b8c5b137b49744e646a2b1fa3b"} Nov 22 07:50:05 crc kubenswrapper[4853]: I1122 07:50:05.086217 4853 generic.go:334] "Generic (PLEG): container finished" podID="6ba80875-27e8-4986-97b0-83d81ae92204" containerID="2e35c5c14c2ad2d88d8c86ebc10bec998bd8075651a93f4b1ae9073c53998a2b" exitCode=143 Nov 22 07:50:05 crc kubenswrapper[4853]: I1122 07:50:05.086273 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"6ba80875-27e8-4986-97b0-83d81ae92204","Type":"ContainerDied","Data":"2e35c5c14c2ad2d88d8c86ebc10bec998bd8075651a93f4b1ae9073c53998a2b"} Nov 22 07:50:06 crc kubenswrapper[4853]: I1122 07:50:06.330123 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 22 07:50:06 crc kubenswrapper[4853]: I1122 07:50:06.331639 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bfebae54-7a3b-42db-9375-d885e95c124b" containerName="proxy-httpd" containerID="cri-o://6fcf09fed7d170f91f448a5569c2217398d8426fb7673de287fbc1c865bbc0c6" gracePeriod=30 Nov 22 07:50:06 crc kubenswrapper[4853]: I1122 07:50:06.331646 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bfebae54-7a3b-42db-9375-d885e95c124b" containerName="sg-core" containerID="cri-o://0ca5bf51a3dde6d13175f040fb48852294dce160cad938dc801d6e702be765f4" gracePeriod=30 Nov 22 07:50:06 crc kubenswrapper[4853]: I1122 07:50:06.331720 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bfebae54-7a3b-42db-9375-d885e95c124b" containerName="ceilometer-notification-agent" containerID="cri-o://c926df9b437e4dbae424c1c7235a7e09d2c81b10315e108a9072d6ae44e863b1" gracePeriod=30 Nov 22 07:50:06 crc kubenswrapper[4853]: I1122 07:50:06.333934 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bfebae54-7a3b-42db-9375-d885e95c124b" containerName="ceilometer-central-agent" containerID="cri-o://bf24d65d111dafc6898de8fcdb1b0927cda1b5536869c8972c8d94476df9f19d" gracePeriod=30 Nov 22 07:50:06 crc kubenswrapper[4853]: I1122 07:50:06.346796 4853 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="bfebae54-7a3b-42db-9375-d885e95c124b" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.249:3000/\": read tcp 10.217.0.2:45642->10.217.0.249:3000: read: connection reset by peer" Nov 22 07:50:06 crc kubenswrapper[4853]: I1122 07:50:06.581849 4853 scope.go:117] "RemoveContainer" containerID="8deaf242fe95930b41dd1a53aef0b8dd68204d09ede1a322ab27c05f44be1dac" Nov 22 07:50:06 crc kubenswrapper[4853]: E1122 07:50:06.618917 4853 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="728d548bd77c4b94178de19b8f2870e466e03a51b63b1b26181a56dcc67766df" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Nov 22 07:50:06 crc kubenswrapper[4853]: E1122 07:50:06.642367 4853 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="728d548bd77c4b94178de19b8f2870e466e03a51b63b1b26181a56dcc67766df" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Nov 22 07:50:06 crc kubenswrapper[4853]: E1122 07:50:06.645281 4853 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="728d548bd77c4b94178de19b8f2870e466e03a51b63b1b26181a56dcc67766df" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Nov 22 07:50:06 crc kubenswrapper[4853]: E1122 07:50:06.645346 4853 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = 
command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell0-conductor-0" podUID="8ef6d468-e6fd-4064-8f59-6d63c5d45e1f" containerName="nova-cell0-conductor-conductor" Nov 22 07:50:07 crc kubenswrapper[4853]: I1122 07:50:07.130891 4853 generic.go:334] "Generic (PLEG): container finished" podID="bfebae54-7a3b-42db-9375-d885e95c124b" containerID="6fcf09fed7d170f91f448a5569c2217398d8426fb7673de287fbc1c865bbc0c6" exitCode=0 Nov 22 07:50:07 crc kubenswrapper[4853]: I1122 07:50:07.131274 4853 generic.go:334] "Generic (PLEG): container finished" podID="bfebae54-7a3b-42db-9375-d885e95c124b" containerID="0ca5bf51a3dde6d13175f040fb48852294dce160cad938dc801d6e702be765f4" exitCode=2 Nov 22 07:50:07 crc kubenswrapper[4853]: I1122 07:50:07.130981 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bfebae54-7a3b-42db-9375-d885e95c124b","Type":"ContainerDied","Data":"6fcf09fed7d170f91f448a5569c2217398d8426fb7673de287fbc1c865bbc0c6"} Nov 22 07:50:07 crc kubenswrapper[4853]: I1122 07:50:07.131390 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bfebae54-7a3b-42db-9375-d885e95c124b","Type":"ContainerDied","Data":"0ca5bf51a3dde6d13175f040fb48852294dce160cad938dc801d6e702be765f4"} Nov 22 07:50:07 crc kubenswrapper[4853]: W1122 07:50:07.134001 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6e86b7d7_9438_4d4e_bd8e_163c25cf527a.slice/crio-f638b24fea304c59501a1db40fb71c42d84149fc7c96658631755a832a150820 WatchSource:0}: Error finding container f638b24fea304c59501a1db40fb71c42d84149fc7c96658631755a832a150820: Status 404 returned error can't find the container with id f638b24fea304c59501a1db40fb71c42d84149fc7c96658631755a832a150820 Nov 22 07:50:07 crc kubenswrapper[4853]: I1122 07:50:07.143380 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 22 07:50:07 crc kubenswrapper[4853]: W1122 07:50:07.267927 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddb93d7eb_5143_45e2_afd6_061392f78392.slice/crio-1c2fd2d877e0aadb47232b2a5da4e4b41b1679abbf3af7b544c49233747b1220 WatchSource:0}: Error finding container 1c2fd2d877e0aadb47232b2a5da4e4b41b1679abbf3af7b544c49233747b1220: Status 404 returned error can't find the container with id 1c2fd2d877e0aadb47232b2a5da4e4b41b1679abbf3af7b544c49233747b1220 Nov 22 07:50:07 crc kubenswrapper[4853]: I1122 07:50:07.296051 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 22 07:50:08 crc kubenswrapper[4853]: I1122 07:50:08.080691 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Nov 22 07:50:08 crc kubenswrapper[4853]: I1122 07:50:08.152134 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"db93d7eb-5143-45e2-afd6-061392f78392","Type":"ContainerStarted","Data":"e7cee258aad30cdb380dd19d0321fa25817561e84117229b59c5aaa479ff2ee1"} Nov 22 07:50:08 crc kubenswrapper[4853]: I1122 07:50:08.152189 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"db93d7eb-5143-45e2-afd6-061392f78392","Type":"ContainerStarted","Data":"1c2fd2d877e0aadb47232b2a5da4e4b41b1679abbf3af7b544c49233747b1220"} Nov 22 07:50:08 crc kubenswrapper[4853]: I1122 07:50:08.152941 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="db93d7eb-5143-45e2-afd6-061392f78392" containerName="nova-scheduler-scheduler" containerID="cri-o://e7cee258aad30cdb380dd19d0321fa25817561e84117229b59c5aaa479ff2ee1" gracePeriod=30 Nov 22 07:50:08 crc kubenswrapper[4853]: I1122 07:50:08.163661 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ef6d468-e6fd-4064-8f59-6d63c5d45e1f-combined-ca-bundle\") pod \"8ef6d468-e6fd-4064-8f59-6d63c5d45e1f\" (UID: \"8ef6d468-e6fd-4064-8f59-6d63c5d45e1f\") " Nov 22 07:50:08 crc kubenswrapper[4853]: I1122 07:50:08.163735 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8sc4n\" (UniqueName: \"kubernetes.io/projected/8ef6d468-e6fd-4064-8f59-6d63c5d45e1f-kube-api-access-8sc4n\") pod \"8ef6d468-e6fd-4064-8f59-6d63c5d45e1f\" (UID: \"8ef6d468-e6fd-4064-8f59-6d63c5d45e1f\") " Nov 22 07:50:08 crc kubenswrapper[4853]: I1122 07:50:08.165565 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ef6d468-e6fd-4064-8f59-6d63c5d45e1f-config-data\") pod \"8ef6d468-e6fd-4064-8f59-6d63c5d45e1f\" (UID: \"8ef6d468-e6fd-4064-8f59-6d63c5d45e1f\") " Nov 22 07:50:08 crc kubenswrapper[4853]: I1122 07:50:08.168168 4853 generic.go:334] "Generic (PLEG): container finished" podID="8ef6d468-e6fd-4064-8f59-6d63c5d45e1f" containerID="728d548bd77c4b94178de19b8f2870e466e03a51b63b1b26181a56dcc67766df" exitCode=0 Nov 22 07:50:08 crc kubenswrapper[4853]: I1122 07:50:08.168304 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"8ef6d468-e6fd-4064-8f59-6d63c5d45e1f","Type":"ContainerDied","Data":"728d548bd77c4b94178de19b8f2870e466e03a51b63b1b26181a56dcc67766df"} Nov 22 07:50:08 crc kubenswrapper[4853]: I1122 07:50:08.168351 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"8ef6d468-e6fd-4064-8f59-6d63c5d45e1f","Type":"ContainerDied","Data":"c56b87aec77c4883ae90ad40032905ca446a503b5b66aa3be442189147b93f01"} Nov 22 07:50:08 crc kubenswrapper[4853]: I1122 07:50:08.168381 4853 scope.go:117] "RemoveContainer" containerID="728d548bd77c4b94178de19b8f2870e466e03a51b63b1b26181a56dcc67766df" Nov 22 07:50:08 crc kubenswrapper[4853]: I1122 07:50:08.168712 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Nov 22 07:50:08 crc kubenswrapper[4853]: I1122 07:50:08.178409 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=9.178370577 podStartE2EDuration="9.178370577s" podCreationTimestamp="2025-11-22 07:49:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:50:08.174936556 +0000 UTC m=+2407.015559182" watchObservedRunningTime="2025-11-22 07:50:08.178370577 +0000 UTC m=+2407.018993203" Nov 22 07:50:08 crc kubenswrapper[4853]: I1122 07:50:08.182585 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"6e86b7d7-9438-4d4e-bd8e-163c25cf527a","Type":"ContainerStarted","Data":"5b583cf1f52031dc4aa552e545eaf3ad12419332ce07ce842b18dc8401271888"} Nov 22 07:50:08 crc kubenswrapper[4853]: I1122 07:50:08.182655 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"6e86b7d7-9438-4d4e-bd8e-163c25cf527a","Type":"ContainerStarted","Data":"f638b24fea304c59501a1db40fb71c42d84149fc7c96658631755a832a150820"} Nov 22 07:50:08 crc kubenswrapper[4853]: I1122 07:50:08.182739 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="6e86b7d7-9438-4d4e-bd8e-163c25cf527a" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://5b583cf1f52031dc4aa552e545eaf3ad12419332ce07ce842b18dc8401271888" gracePeriod=30 Nov 22 07:50:08 crc kubenswrapper[4853]: I1122 07:50:08.194177 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ef6d468-e6fd-4064-8f59-6d63c5d45e1f-kube-api-access-8sc4n" (OuterVolumeSpecName: "kube-api-access-8sc4n") pod "8ef6d468-e6fd-4064-8f59-6d63c5d45e1f" (UID: "8ef6d468-e6fd-4064-8f59-6d63c5d45e1f"). InnerVolumeSpecName "kube-api-access-8sc4n". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:50:08 crc kubenswrapper[4853]: I1122 07:50:08.206856 4853 generic.go:334] "Generic (PLEG): container finished" podID="bfebae54-7a3b-42db-9375-d885e95c124b" containerID="bf24d65d111dafc6898de8fcdb1b0927cda1b5536869c8972c8d94476df9f19d" exitCode=0 Nov 22 07:50:08 crc kubenswrapper[4853]: I1122 07:50:08.206938 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bfebae54-7a3b-42db-9375-d885e95c124b","Type":"ContainerDied","Data":"bf24d65d111dafc6898de8fcdb1b0927cda1b5536869c8972c8d94476df9f19d"} Nov 22 07:50:08 crc kubenswrapper[4853]: I1122 07:50:08.243607 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ef6d468-e6fd-4064-8f59-6d63c5d45e1f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8ef6d468-e6fd-4064-8f59-6d63c5d45e1f" (UID: "8ef6d468-e6fd-4064-8f59-6d63c5d45e1f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:50:08 crc kubenswrapper[4853]: I1122 07:50:08.246096 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ef6d468-e6fd-4064-8f59-6d63c5d45e1f-config-data" (OuterVolumeSpecName: "config-data") pod "8ef6d468-e6fd-4064-8f59-6d63c5d45e1f" (UID: "8ef6d468-e6fd-4064-8f59-6d63c5d45e1f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:50:08 crc kubenswrapper[4853]: I1122 07:50:08.266047 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=9.266010166 podStartE2EDuration="9.266010166s" podCreationTimestamp="2025-11-22 07:49:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:50:08.254472667 +0000 UTC m=+2407.095095303" watchObservedRunningTime="2025-11-22 07:50:08.266010166 +0000 UTC m=+2407.106632792" Nov 22 07:50:08 crc kubenswrapper[4853]: I1122 07:50:08.270726 4853 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ef6d468-e6fd-4064-8f59-6d63c5d45e1f-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 07:50:08 crc kubenswrapper[4853]: I1122 07:50:08.270804 4853 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ef6d468-e6fd-4064-8f59-6d63c5d45e1f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:50:08 crc kubenswrapper[4853]: I1122 07:50:08.270823 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8sc4n\" (UniqueName: \"kubernetes.io/projected/8ef6d468-e6fd-4064-8f59-6d63c5d45e1f-kube-api-access-8sc4n\") on node \"crc\" DevicePath \"\"" Nov 22 07:50:08 crc kubenswrapper[4853]: I1122 07:50:08.345203 4853 scope.go:117] "RemoveContainer" containerID="728d548bd77c4b94178de19b8f2870e466e03a51b63b1b26181a56dcc67766df" Nov 22 07:50:08 crc kubenswrapper[4853]: E1122 07:50:08.346521 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"728d548bd77c4b94178de19b8f2870e466e03a51b63b1b26181a56dcc67766df\": container with ID starting with 728d548bd77c4b94178de19b8f2870e466e03a51b63b1b26181a56dcc67766df not found: ID does not exist" containerID="728d548bd77c4b94178de19b8f2870e466e03a51b63b1b26181a56dcc67766df" Nov 22 07:50:08 crc kubenswrapper[4853]: I1122 07:50:08.346583 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"728d548bd77c4b94178de19b8f2870e466e03a51b63b1b26181a56dcc67766df"} err="failed to get container status \"728d548bd77c4b94178de19b8f2870e466e03a51b63b1b26181a56dcc67766df\": rpc error: code = NotFound desc = could not find container \"728d548bd77c4b94178de19b8f2870e466e03a51b63b1b26181a56dcc67766df\": container with ID starting with 728d548bd77c4b94178de19b8f2870e466e03a51b63b1b26181a56dcc67766df not found: ID does not exist" Nov 22 07:50:08 crc kubenswrapper[4853]: I1122 07:50:08.529298 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 22 07:50:08 crc kubenswrapper[4853]: I1122 07:50:08.557971 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 22 07:50:08 crc kubenswrapper[4853]: I1122 07:50:08.577436 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 22 07:50:08 crc kubenswrapper[4853]: E1122 07:50:08.578294 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ef6d468-e6fd-4064-8f59-6d63c5d45e1f" containerName="nova-cell0-conductor-conductor" Nov 22 07:50:08 crc kubenswrapper[4853]: I1122 07:50:08.578324 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ef6d468-e6fd-4064-8f59-6d63c5d45e1f" 
containerName="nova-cell0-conductor-conductor" Nov 22 07:50:08 crc kubenswrapper[4853]: I1122 07:50:08.578687 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ef6d468-e6fd-4064-8f59-6d63c5d45e1f" containerName="nova-cell0-conductor-conductor" Nov 22 07:50:08 crc kubenswrapper[4853]: I1122 07:50:08.580028 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Nov 22 07:50:08 crc kubenswrapper[4853]: I1122 07:50:08.588033 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Nov 22 07:50:08 crc kubenswrapper[4853]: I1122 07:50:08.598375 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 22 07:50:08 crc kubenswrapper[4853]: E1122 07:50:08.681145 4853 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c0be6df4655fe16ce45e44b1c8cfd538aaddc58da568a1ec713abc3f7f12049b" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Nov 22 07:50:08 crc kubenswrapper[4853]: I1122 07:50:08.685736 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a136d57d-e1f7-46e6-a75e-67bdc93f93ee-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"a136d57d-e1f7-46e6-a75e-67bdc93f93ee\") " pod="openstack/nova-cell0-conductor-0" Nov 22 07:50:08 crc kubenswrapper[4853]: I1122 07:50:08.685817 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bc9f4\" (UniqueName: \"kubernetes.io/projected/a136d57d-e1f7-46e6-a75e-67bdc93f93ee-kube-api-access-bc9f4\") pod \"nova-cell0-conductor-0\" (UID: \"a136d57d-e1f7-46e6-a75e-67bdc93f93ee\") " pod="openstack/nova-cell0-conductor-0" Nov 22 07:50:08 crc kubenswrapper[4853]: I1122 07:50:08.685906 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a136d57d-e1f7-46e6-a75e-67bdc93f93ee-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"a136d57d-e1f7-46e6-a75e-67bdc93f93ee\") " pod="openstack/nova-cell0-conductor-0" Nov 22 07:50:08 crc kubenswrapper[4853]: E1122 07:50:08.686169 4853 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c0be6df4655fe16ce45e44b1c8cfd538aaddc58da568a1ec713abc3f7f12049b" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Nov 22 07:50:08 crc kubenswrapper[4853]: E1122 07:50:08.689932 4853 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c0be6df4655fe16ce45e44b1c8cfd538aaddc58da568a1ec713abc3f7f12049b" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Nov 22 07:50:08 crc kubenswrapper[4853]: E1122 07:50:08.690019 4853 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell1-conductor-0" podUID="d1ae71ec-04ce-4d8b-9504-c8d122fce19b" containerName="nova-cell1-conductor-conductor" Nov 22 
07:50:08 crc kubenswrapper[4853]: I1122 07:50:08.788399 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a136d57d-e1f7-46e6-a75e-67bdc93f93ee-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"a136d57d-e1f7-46e6-a75e-67bdc93f93ee\") " pod="openstack/nova-cell0-conductor-0" Nov 22 07:50:08 crc kubenswrapper[4853]: I1122 07:50:08.788680 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a136d57d-e1f7-46e6-a75e-67bdc93f93ee-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"a136d57d-e1f7-46e6-a75e-67bdc93f93ee\") " pod="openstack/nova-cell0-conductor-0" Nov 22 07:50:08 crc kubenswrapper[4853]: I1122 07:50:08.788719 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bc9f4\" (UniqueName: \"kubernetes.io/projected/a136d57d-e1f7-46e6-a75e-67bdc93f93ee-kube-api-access-bc9f4\") pod \"nova-cell0-conductor-0\" (UID: \"a136d57d-e1f7-46e6-a75e-67bdc93f93ee\") " pod="openstack/nova-cell0-conductor-0" Nov 22 07:50:08 crc kubenswrapper[4853]: I1122 07:50:08.799690 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a136d57d-e1f7-46e6-a75e-67bdc93f93ee-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"a136d57d-e1f7-46e6-a75e-67bdc93f93ee\") " pod="openstack/nova-cell0-conductor-0" Nov 22 07:50:08 crc kubenswrapper[4853]: I1122 07:50:08.802440 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a136d57d-e1f7-46e6-a75e-67bdc93f93ee-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"a136d57d-e1f7-46e6-a75e-67bdc93f93ee\") " pod="openstack/nova-cell0-conductor-0" Nov 22 07:50:08 crc kubenswrapper[4853]: I1122 07:50:08.829548 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bc9f4\" (UniqueName: \"kubernetes.io/projected/a136d57d-e1f7-46e6-a75e-67bdc93f93ee-kube-api-access-bc9f4\") pod \"nova-cell0-conductor-0\" (UID: \"a136d57d-e1f7-46e6-a75e-67bdc93f93ee\") " pod="openstack/nova-cell0-conductor-0" Nov 22 07:50:08 crc kubenswrapper[4853]: I1122 07:50:08.951148 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Nov 22 07:50:09 crc kubenswrapper[4853]: I1122 07:50:09.229684 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 22 07:50:09 crc kubenswrapper[4853]: I1122 07:50:09.286789 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bfebae54-7a3b-42db-9375-d885e95c124b","Type":"ContainerDied","Data":"c926df9b437e4dbae424c1c7235a7e09d2c81b10315e108a9072d6ae44e863b1"} Nov 22 07:50:09 crc kubenswrapper[4853]: I1122 07:50:09.286739 4853 generic.go:334] "Generic (PLEG): container finished" podID="bfebae54-7a3b-42db-9375-d885e95c124b" containerID="c926df9b437e4dbae424c1c7235a7e09d2c81b10315e108a9072d6ae44e863b1" exitCode=0 Nov 22 07:50:09 crc kubenswrapper[4853]: I1122 07:50:09.292033 4853 generic.go:334] "Generic (PLEG): container finished" podID="6ba80875-27e8-4986-97b0-83d81ae92204" containerID="ef40a486cb8f0621f5ba609aa154cca5ca66093ac492593ca63b07b66b9cb326" exitCode=0 Nov 22 07:50:09 crc kubenswrapper[4853]: I1122 07:50:09.292193 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 22 07:50:09 crc kubenswrapper[4853]: I1122 07:50:09.293254 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6ba80875-27e8-4986-97b0-83d81ae92204","Type":"ContainerDied","Data":"ef40a486cb8f0621f5ba609aa154cca5ca66093ac492593ca63b07b66b9cb326"} Nov 22 07:50:09 crc kubenswrapper[4853]: I1122 07:50:09.293326 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6ba80875-27e8-4986-97b0-83d81ae92204","Type":"ContainerDied","Data":"9672d6c52d86250ffefa352947d98ea0e0822dbd364e0b6cd2f431263b730ccf"} Nov 22 07:50:09 crc kubenswrapper[4853]: I1122 07:50:09.293357 4853 scope.go:117] "RemoveContainer" containerID="ef40a486cb8f0621f5ba609aa154cca5ca66093ac492593ca63b07b66b9cb326" Nov 22 07:50:09 crc kubenswrapper[4853]: I1122 07:50:09.302617 4853 generic.go:334] "Generic (PLEG): container finished" podID="d1ae71ec-04ce-4d8b-9504-c8d122fce19b" containerID="c0be6df4655fe16ce45e44b1c8cfd538aaddc58da568a1ec713abc3f7f12049b" exitCode=0 Nov 22 07:50:09 crc kubenswrapper[4853]: I1122 07:50:09.302679 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"d1ae71ec-04ce-4d8b-9504-c8d122fce19b","Type":"ContainerDied","Data":"c0be6df4655fe16ce45e44b1c8cfd538aaddc58da568a1ec713abc3f7f12049b"} Nov 22 07:50:09 crc kubenswrapper[4853]: I1122 07:50:09.303458 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nrmhl\" (UniqueName: \"kubernetes.io/projected/6ba80875-27e8-4986-97b0-83d81ae92204-kube-api-access-nrmhl\") pod \"6ba80875-27e8-4986-97b0-83d81ae92204\" (UID: \"6ba80875-27e8-4986-97b0-83d81ae92204\") " Nov 22 07:50:09 crc kubenswrapper[4853]: I1122 07:50:09.304862 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6ba80875-27e8-4986-97b0-83d81ae92204-logs\") pod \"6ba80875-27e8-4986-97b0-83d81ae92204\" (UID: \"6ba80875-27e8-4986-97b0-83d81ae92204\") " Nov 22 07:50:09 crc kubenswrapper[4853]: I1122 07:50:09.305034 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/6ba80875-27e8-4986-97b0-83d81ae92204-nova-metadata-tls-certs\") pod \"6ba80875-27e8-4986-97b0-83d81ae92204\" (UID: \"6ba80875-27e8-4986-97b0-83d81ae92204\") " Nov 22 07:50:09 crc kubenswrapper[4853]: I1122 07:50:09.305115 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ba80875-27e8-4986-97b0-83d81ae92204-combined-ca-bundle\") pod \"6ba80875-27e8-4986-97b0-83d81ae92204\" (UID: \"6ba80875-27e8-4986-97b0-83d81ae92204\") " Nov 22 07:50:09 crc kubenswrapper[4853]: I1122 07:50:09.305163 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ba80875-27e8-4986-97b0-83d81ae92204-config-data\") pod \"6ba80875-27e8-4986-97b0-83d81ae92204\" (UID: \"6ba80875-27e8-4986-97b0-83d81ae92204\") " Nov 22 07:50:09 crc kubenswrapper[4853]: I1122 07:50:09.305640 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6ba80875-27e8-4986-97b0-83d81ae92204-logs" (OuterVolumeSpecName: "logs") pod "6ba80875-27e8-4986-97b0-83d81ae92204" (UID: "6ba80875-27e8-4986-97b0-83d81ae92204"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:50:09 crc kubenswrapper[4853]: I1122 07:50:09.306447 4853 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6ba80875-27e8-4986-97b0-83d81ae92204-logs\") on node \"crc\" DevicePath \"\"" Nov 22 07:50:09 crc kubenswrapper[4853]: I1122 07:50:09.327537 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ba80875-27e8-4986-97b0-83d81ae92204-kube-api-access-nrmhl" (OuterVolumeSpecName: "kube-api-access-nrmhl") pod "6ba80875-27e8-4986-97b0-83d81ae92204" (UID: "6ba80875-27e8-4986-97b0-83d81ae92204"). InnerVolumeSpecName "kube-api-access-nrmhl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:50:09 crc kubenswrapper[4853]: I1122 07:50:09.352566 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ba80875-27e8-4986-97b0-83d81ae92204-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6ba80875-27e8-4986-97b0-83d81ae92204" (UID: "6ba80875-27e8-4986-97b0-83d81ae92204"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:50:09 crc kubenswrapper[4853]: I1122 07:50:09.373467 4853 scope.go:117] "RemoveContainer" containerID="2e35c5c14c2ad2d88d8c86ebc10bec998bd8075651a93f4b1ae9073c53998a2b" Nov 22 07:50:09 crc kubenswrapper[4853]: I1122 07:50:09.391371 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ba80875-27e8-4986-97b0-83d81ae92204-config-data" (OuterVolumeSpecName: "config-data") pod "6ba80875-27e8-4986-97b0-83d81ae92204" (UID: "6ba80875-27e8-4986-97b0-83d81ae92204"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:50:09 crc kubenswrapper[4853]: I1122 07:50:09.413363 4853 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ba80875-27e8-4986-97b0-83d81ae92204-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:50:09 crc kubenswrapper[4853]: I1122 07:50:09.413413 4853 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ba80875-27e8-4986-97b0-83d81ae92204-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 07:50:09 crc kubenswrapper[4853]: I1122 07:50:09.413422 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nrmhl\" (UniqueName: \"kubernetes.io/projected/6ba80875-27e8-4986-97b0-83d81ae92204-kube-api-access-nrmhl\") on node \"crc\" DevicePath \"\"" Nov 22 07:50:09 crc kubenswrapper[4853]: I1122 07:50:09.414100 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ba80875-27e8-4986-97b0-83d81ae92204-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "6ba80875-27e8-4986-97b0-83d81ae92204" (UID: "6ba80875-27e8-4986-97b0-83d81ae92204"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:50:09 crc kubenswrapper[4853]: I1122 07:50:09.414619 4853 scope.go:117] "RemoveContainer" containerID="ef40a486cb8f0621f5ba609aa154cca5ca66093ac492593ca63b07b66b9cb326" Nov 22 07:50:09 crc kubenswrapper[4853]: E1122 07:50:09.415155 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ef40a486cb8f0621f5ba609aa154cca5ca66093ac492593ca63b07b66b9cb326\": container with ID starting with ef40a486cb8f0621f5ba609aa154cca5ca66093ac492593ca63b07b66b9cb326 not found: ID does not exist" containerID="ef40a486cb8f0621f5ba609aa154cca5ca66093ac492593ca63b07b66b9cb326" Nov 22 07:50:09 crc kubenswrapper[4853]: I1122 07:50:09.415207 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef40a486cb8f0621f5ba609aa154cca5ca66093ac492593ca63b07b66b9cb326"} err="failed to get container status \"ef40a486cb8f0621f5ba609aa154cca5ca66093ac492593ca63b07b66b9cb326\": rpc error: code = NotFound desc = could not find container \"ef40a486cb8f0621f5ba609aa154cca5ca66093ac492593ca63b07b66b9cb326\": container with ID starting with ef40a486cb8f0621f5ba609aa154cca5ca66093ac492593ca63b07b66b9cb326 not found: ID does not exist" Nov 22 07:50:09 crc kubenswrapper[4853]: I1122 07:50:09.415237 4853 scope.go:117] "RemoveContainer" containerID="2e35c5c14c2ad2d88d8c86ebc10bec998bd8075651a93f4b1ae9073c53998a2b" Nov 22 07:50:09 crc kubenswrapper[4853]: E1122 07:50:09.415473 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2e35c5c14c2ad2d88d8c86ebc10bec998bd8075651a93f4b1ae9073c53998a2b\": container with ID starting with 2e35c5c14c2ad2d88d8c86ebc10bec998bd8075651a93f4b1ae9073c53998a2b not found: ID does not exist" containerID="2e35c5c14c2ad2d88d8c86ebc10bec998bd8075651a93f4b1ae9073c53998a2b" Nov 22 07:50:09 crc kubenswrapper[4853]: I1122 07:50:09.415507 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e35c5c14c2ad2d88d8c86ebc10bec998bd8075651a93f4b1ae9073c53998a2b"} err="failed to get container status \"2e35c5c14c2ad2d88d8c86ebc10bec998bd8075651a93f4b1ae9073c53998a2b\": rpc error: code = NotFound desc = could not find container \"2e35c5c14c2ad2d88d8c86ebc10bec998bd8075651a93f4b1ae9073c53998a2b\": container with ID starting with 2e35c5c14c2ad2d88d8c86ebc10bec998bd8075651a93f4b1ae9073c53998a2b not found: ID does not exist" Nov 22 07:50:09 crc kubenswrapper[4853]: I1122 07:50:09.415630 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Nov 22 07:50:09 crc kubenswrapper[4853]: I1122 07:50:09.452191 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Nov 22 07:50:09 crc kubenswrapper[4853]: I1122 07:50:09.515959 4853 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/6ba80875-27e8-4986-97b0-83d81ae92204-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 22 07:50:09 crc kubenswrapper[4853]: I1122 07:50:09.602046 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 22 07:50:09 crc kubenswrapper[4853]: I1122 07:50:09.829646 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8ef6d468-e6fd-4064-8f59-6d63c5d45e1f" path="/var/lib/kubelet/pods/8ef6d468-e6fd-4064-8f59-6d63c5d45e1f/volumes" 
Nov 22 07:50:09 crc kubenswrapper[4853]: I1122 07:50:09.870337 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Nov 22 07:50:09 crc kubenswrapper[4853]: I1122 07:50:09.937908 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zhn4s\" (UniqueName: \"kubernetes.io/projected/d1ae71ec-04ce-4d8b-9504-c8d122fce19b-kube-api-access-zhn4s\") pod \"d1ae71ec-04ce-4d8b-9504-c8d122fce19b\" (UID: \"d1ae71ec-04ce-4d8b-9504-c8d122fce19b\") " Nov 22 07:50:09 crc kubenswrapper[4853]: I1122 07:50:09.938071 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1ae71ec-04ce-4d8b-9504-c8d122fce19b-config-data\") pod \"d1ae71ec-04ce-4d8b-9504-c8d122fce19b\" (UID: \"d1ae71ec-04ce-4d8b-9504-c8d122fce19b\") " Nov 22 07:50:09 crc kubenswrapper[4853]: I1122 07:50:09.949658 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1ae71ec-04ce-4d8b-9504-c8d122fce19b-combined-ca-bundle\") pod \"d1ae71ec-04ce-4d8b-9504-c8d122fce19b\" (UID: \"d1ae71ec-04ce-4d8b-9504-c8d122fce19b\") " Nov 22 07:50:09 crc kubenswrapper[4853]: I1122 07:50:09.984853 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1ae71ec-04ce-4d8b-9504-c8d122fce19b-kube-api-access-zhn4s" (OuterVolumeSpecName: "kube-api-access-zhn4s") pod "d1ae71ec-04ce-4d8b-9504-c8d122fce19b" (UID: "d1ae71ec-04ce-4d8b-9504-c8d122fce19b"). InnerVolumeSpecName "kube-api-access-zhn4s". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:50:10 crc kubenswrapper[4853]: I1122 07:50:10.004514 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 22 07:50:10 crc kubenswrapper[4853]: I1122 07:50:10.021343 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Nov 22 07:50:10 crc kubenswrapper[4853]: I1122 07:50:10.044675 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 22 07:50:10 crc kubenswrapper[4853]: E1122 07:50:10.045639 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ba80875-27e8-4986-97b0-83d81ae92204" containerName="nova-metadata-log" Nov 22 07:50:10 crc kubenswrapper[4853]: I1122 07:50:10.045674 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ba80875-27e8-4986-97b0-83d81ae92204" containerName="nova-metadata-log" Nov 22 07:50:10 crc kubenswrapper[4853]: E1122 07:50:10.045732 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1ae71ec-04ce-4d8b-9504-c8d122fce19b" containerName="nova-cell1-conductor-conductor" Nov 22 07:50:10 crc kubenswrapper[4853]: I1122 07:50:10.045744 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1ae71ec-04ce-4d8b-9504-c8d122fce19b" containerName="nova-cell1-conductor-conductor" Nov 22 07:50:10 crc kubenswrapper[4853]: E1122 07:50:10.045933 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ba80875-27e8-4986-97b0-83d81ae92204" containerName="nova-metadata-metadata" Nov 22 07:50:10 crc kubenswrapper[4853]: I1122 07:50:10.045958 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ba80875-27e8-4986-97b0-83d81ae92204" containerName="nova-metadata-metadata" Nov 22 07:50:10 crc kubenswrapper[4853]: I1122 07:50:10.046300 4853 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="6ba80875-27e8-4986-97b0-83d81ae92204" containerName="nova-metadata-log" Nov 22 07:50:10 crc kubenswrapper[4853]: I1122 07:50:10.046334 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1ae71ec-04ce-4d8b-9504-c8d122fce19b" containerName="nova-cell1-conductor-conductor" Nov 22 07:50:10 crc kubenswrapper[4853]: I1122 07:50:10.046363 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ba80875-27e8-4986-97b0-83d81ae92204" containerName="nova-metadata-metadata" Nov 22 07:50:10 crc kubenswrapper[4853]: I1122 07:50:10.048358 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 22 07:50:10 crc kubenswrapper[4853]: I1122 07:50:10.054284 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zhn4s\" (UniqueName: \"kubernetes.io/projected/d1ae71ec-04ce-4d8b-9504-c8d122fce19b-kube-api-access-zhn4s\") on node \"crc\" DevicePath \"\"" Nov 22 07:50:10 crc kubenswrapper[4853]: I1122 07:50:10.054326 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 22 07:50:10 crc kubenswrapper[4853]: I1122 07:50:10.058106 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Nov 22 07:50:10 crc kubenswrapper[4853]: I1122 07:50:10.073080 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 22 07:50:10 crc kubenswrapper[4853]: I1122 07:50:10.091404 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1ae71ec-04ce-4d8b-9504-c8d122fce19b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d1ae71ec-04ce-4d8b-9504-c8d122fce19b" (UID: "d1ae71ec-04ce-4d8b-9504-c8d122fce19b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:50:10 crc kubenswrapper[4853]: I1122 07:50:10.156580 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/773ad68b-b0f8-4afc-91bd-008f86442be6-config-data\") pod \"nova-metadata-0\" (UID: \"773ad68b-b0f8-4afc-91bd-008f86442be6\") " pod="openstack/nova-metadata-0" Nov 22 07:50:10 crc kubenswrapper[4853]: I1122 07:50:10.156678 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/773ad68b-b0f8-4afc-91bd-008f86442be6-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"773ad68b-b0f8-4afc-91bd-008f86442be6\") " pod="openstack/nova-metadata-0" Nov 22 07:50:10 crc kubenswrapper[4853]: I1122 07:50:10.156740 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/773ad68b-b0f8-4afc-91bd-008f86442be6-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"773ad68b-b0f8-4afc-91bd-008f86442be6\") " pod="openstack/nova-metadata-0" Nov 22 07:50:10 crc kubenswrapper[4853]: I1122 07:50:10.156993 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/773ad68b-b0f8-4afc-91bd-008f86442be6-logs\") pod \"nova-metadata-0\" (UID: \"773ad68b-b0f8-4afc-91bd-008f86442be6\") " pod="openstack/nova-metadata-0" Nov 22 07:50:10 crc kubenswrapper[4853]: I1122 07:50:10.157107 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wm4gk\" (UniqueName: \"kubernetes.io/projected/773ad68b-b0f8-4afc-91bd-008f86442be6-kube-api-access-wm4gk\") pod \"nova-metadata-0\" (UID: \"773ad68b-b0f8-4afc-91bd-008f86442be6\") " pod="openstack/nova-metadata-0" Nov 22 07:50:10 crc kubenswrapper[4853]: I1122 07:50:10.157233 4853 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1ae71ec-04ce-4d8b-9504-c8d122fce19b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:50:10 crc kubenswrapper[4853]: I1122 07:50:10.197972 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1ae71ec-04ce-4d8b-9504-c8d122fce19b-config-data" (OuterVolumeSpecName: "config-data") pod "d1ae71ec-04ce-4d8b-9504-c8d122fce19b" (UID: "d1ae71ec-04ce-4d8b-9504-c8d122fce19b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:50:10 crc kubenswrapper[4853]: I1122 07:50:10.243886 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 22 07:50:10 crc kubenswrapper[4853]: I1122 07:50:10.259113 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5pgl8\" (UniqueName: \"kubernetes.io/projected/bfebae54-7a3b-42db-9375-d885e95c124b-kube-api-access-5pgl8\") pod \"bfebae54-7a3b-42db-9375-d885e95c124b\" (UID: \"bfebae54-7a3b-42db-9375-d885e95c124b\") " Nov 22 07:50:10 crc kubenswrapper[4853]: I1122 07:50:10.259202 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bfebae54-7a3b-42db-9375-d885e95c124b-log-httpd\") pod \"bfebae54-7a3b-42db-9375-d885e95c124b\" (UID: \"bfebae54-7a3b-42db-9375-d885e95c124b\") " Nov 22 07:50:10 crc kubenswrapper[4853]: I1122 07:50:10.259282 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfebae54-7a3b-42db-9375-d885e95c124b-combined-ca-bundle\") pod \"bfebae54-7a3b-42db-9375-d885e95c124b\" (UID: \"bfebae54-7a3b-42db-9375-d885e95c124b\") " Nov 22 07:50:10 crc kubenswrapper[4853]: I1122 07:50:10.259359 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/bfebae54-7a3b-42db-9375-d885e95c124b-ceilometer-tls-certs\") pod \"bfebae54-7a3b-42db-9375-d885e95c124b\" (UID: \"bfebae54-7a3b-42db-9375-d885e95c124b\") " Nov 22 07:50:10 crc kubenswrapper[4853]: I1122 07:50:10.259428 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bfebae54-7a3b-42db-9375-d885e95c124b-scripts\") pod \"bfebae54-7a3b-42db-9375-d885e95c124b\" (UID: \"bfebae54-7a3b-42db-9375-d885e95c124b\") " Nov 22 07:50:10 crc kubenswrapper[4853]: I1122 07:50:10.259481 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bfebae54-7a3b-42db-9375-d885e95c124b-sg-core-conf-yaml\") pod \"bfebae54-7a3b-42db-9375-d885e95c124b\" (UID: \"bfebae54-7a3b-42db-9375-d885e95c124b\") " Nov 22 07:50:10 crc kubenswrapper[4853]: I1122 07:50:10.259503 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bfebae54-7a3b-42db-9375-d885e95c124b-config-data\") pod \"bfebae54-7a3b-42db-9375-d885e95c124b\" (UID: \"bfebae54-7a3b-42db-9375-d885e95c124b\") " Nov 22 07:50:10 crc kubenswrapper[4853]: I1122 07:50:10.259530 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bfebae54-7a3b-42db-9375-d885e95c124b-run-httpd\") pod \"bfebae54-7a3b-42db-9375-d885e95c124b\" (UID: \"bfebae54-7a3b-42db-9375-d885e95c124b\") " Nov 22 07:50:10 crc kubenswrapper[4853]: I1122 07:50:10.259994 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/773ad68b-b0f8-4afc-91bd-008f86442be6-logs\") pod \"nova-metadata-0\" (UID: \"773ad68b-b0f8-4afc-91bd-008f86442be6\") " pod="openstack/nova-metadata-0" Nov 22 07:50:10 crc kubenswrapper[4853]: I1122 07:50:10.260062 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wm4gk\" (UniqueName: \"kubernetes.io/projected/773ad68b-b0f8-4afc-91bd-008f86442be6-kube-api-access-wm4gk\") pod \"nova-metadata-0\" (UID: 
\"773ad68b-b0f8-4afc-91bd-008f86442be6\") " pod="openstack/nova-metadata-0" Nov 22 07:50:10 crc kubenswrapper[4853]: I1122 07:50:10.260126 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/773ad68b-b0f8-4afc-91bd-008f86442be6-config-data\") pod \"nova-metadata-0\" (UID: \"773ad68b-b0f8-4afc-91bd-008f86442be6\") " pod="openstack/nova-metadata-0" Nov 22 07:50:10 crc kubenswrapper[4853]: I1122 07:50:10.260160 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/773ad68b-b0f8-4afc-91bd-008f86442be6-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"773ad68b-b0f8-4afc-91bd-008f86442be6\") " pod="openstack/nova-metadata-0" Nov 22 07:50:10 crc kubenswrapper[4853]: I1122 07:50:10.260186 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/773ad68b-b0f8-4afc-91bd-008f86442be6-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"773ad68b-b0f8-4afc-91bd-008f86442be6\") " pod="openstack/nova-metadata-0" Nov 22 07:50:10 crc kubenswrapper[4853]: I1122 07:50:10.260316 4853 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1ae71ec-04ce-4d8b-9504-c8d122fce19b-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 07:50:10 crc kubenswrapper[4853]: I1122 07:50:10.261249 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/773ad68b-b0f8-4afc-91bd-008f86442be6-logs\") pod \"nova-metadata-0\" (UID: \"773ad68b-b0f8-4afc-91bd-008f86442be6\") " pod="openstack/nova-metadata-0" Nov 22 07:50:10 crc kubenswrapper[4853]: I1122 07:50:10.263396 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bfebae54-7a3b-42db-9375-d885e95c124b-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "bfebae54-7a3b-42db-9375-d885e95c124b" (UID: "bfebae54-7a3b-42db-9375-d885e95c124b"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:50:10 crc kubenswrapper[4853]: I1122 07:50:10.266321 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bfebae54-7a3b-42db-9375-d885e95c124b-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "bfebae54-7a3b-42db-9375-d885e95c124b" (UID: "bfebae54-7a3b-42db-9375-d885e95c124b"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:50:10 crc kubenswrapper[4853]: I1122 07:50:10.269117 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bfebae54-7a3b-42db-9375-d885e95c124b-kube-api-access-5pgl8" (OuterVolumeSpecName: "kube-api-access-5pgl8") pod "bfebae54-7a3b-42db-9375-d885e95c124b" (UID: "bfebae54-7a3b-42db-9375-d885e95c124b"). InnerVolumeSpecName "kube-api-access-5pgl8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:50:10 crc kubenswrapper[4853]: I1122 07:50:10.279886 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/773ad68b-b0f8-4afc-91bd-008f86442be6-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"773ad68b-b0f8-4afc-91bd-008f86442be6\") " pod="openstack/nova-metadata-0" Nov 22 07:50:10 crc kubenswrapper[4853]: I1122 07:50:10.282121 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bfebae54-7a3b-42db-9375-d885e95c124b-scripts" (OuterVolumeSpecName: "scripts") pod "bfebae54-7a3b-42db-9375-d885e95c124b" (UID: "bfebae54-7a3b-42db-9375-d885e95c124b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:50:10 crc kubenswrapper[4853]: I1122 07:50:10.290981 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/773ad68b-b0f8-4afc-91bd-008f86442be6-config-data\") pod \"nova-metadata-0\" (UID: \"773ad68b-b0f8-4afc-91bd-008f86442be6\") " pod="openstack/nova-metadata-0" Nov 22 07:50:10 crc kubenswrapper[4853]: I1122 07:50:10.295198 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/773ad68b-b0f8-4afc-91bd-008f86442be6-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"773ad68b-b0f8-4afc-91bd-008f86442be6\") " pod="openstack/nova-metadata-0" Nov 22 07:50:10 crc kubenswrapper[4853]: I1122 07:50:10.311524 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wm4gk\" (UniqueName: \"kubernetes.io/projected/773ad68b-b0f8-4afc-91bd-008f86442be6-kube-api-access-wm4gk\") pod \"nova-metadata-0\" (UID: \"773ad68b-b0f8-4afc-91bd-008f86442be6\") " pod="openstack/nova-metadata-0" Nov 22 07:50:10 crc kubenswrapper[4853]: I1122 07:50:10.334295 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bfebae54-7a3b-42db-9375-d885e95c124b-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "bfebae54-7a3b-42db-9375-d885e95c124b" (UID: "bfebae54-7a3b-42db-9375-d885e95c124b"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:50:10 crc kubenswrapper[4853]: I1122 07:50:10.336287 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"a136d57d-e1f7-46e6-a75e-67bdc93f93ee","Type":"ContainerStarted","Data":"fc645e4aedd0cc9b8e31aabbf883e09e807987b2d4c35b55203da1a45597fa81"} Nov 22 07:50:10 crc kubenswrapper[4853]: I1122 07:50:10.336353 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"a136d57d-e1f7-46e6-a75e-67bdc93f93ee","Type":"ContainerStarted","Data":"fde9b48e866057ab38531b21b0b867262d920667c3744a04f66ef55c2c70bb3f"} Nov 22 07:50:10 crc kubenswrapper[4853]: I1122 07:50:10.344532 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-bk8mb" event={"ID":"2538cfd0-3cda-47f6-83ef-c0fab178a95c","Type":"ContainerStarted","Data":"f7d5cd861b3b67a74a8876e94a1aecaeaf9677879dfc1bfa17d9f86ded1f579a"} Nov 22 07:50:10 crc kubenswrapper[4853]: I1122 07:50:10.347294 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"d1ae71ec-04ce-4d8b-9504-c8d122fce19b","Type":"ContainerDied","Data":"6903e1ca81185fbeab641556212c68f3b69854088df5ba1c46927cc183d6a464"} Nov 22 07:50:10 crc kubenswrapper[4853]: I1122 07:50:10.347352 4853 scope.go:117] "RemoveContainer" containerID="c0be6df4655fe16ce45e44b1c8cfd538aaddc58da568a1ec713abc3f7f12049b" Nov 22 07:50:10 crc kubenswrapper[4853]: I1122 07:50:10.347498 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Nov 22 07:50:10 crc kubenswrapper[4853]: I1122 07:50:10.364583 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5pgl8\" (UniqueName: \"kubernetes.io/projected/bfebae54-7a3b-42db-9375-d885e95c124b-kube-api-access-5pgl8\") on node \"crc\" DevicePath \"\"" Nov 22 07:50:10 crc kubenswrapper[4853]: I1122 07:50:10.375242 4853 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bfebae54-7a3b-42db-9375-d885e95c124b-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 22 07:50:10 crc kubenswrapper[4853]: I1122 07:50:10.375277 4853 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bfebae54-7a3b-42db-9375-d885e95c124b-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 07:50:10 crc kubenswrapper[4853]: I1122 07:50:10.375316 4853 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bfebae54-7a3b-42db-9375-d885e95c124b-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 22 07:50:10 crc kubenswrapper[4853]: I1122 07:50:10.375332 4853 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bfebae54-7a3b-42db-9375-d885e95c124b-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 22 07:50:10 crc kubenswrapper[4853]: I1122 07:50:10.384832 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bfebae54-7a3b-42db-9375-d885e95c124b","Type":"ContainerDied","Data":"f96dc948cc7abf5766b30c671b2db3a39b1a8f7e0ac95fa25aec1a2dd147b7f2"} Nov 22 07:50:10 crc kubenswrapper[4853]: I1122 07:50:10.384922 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 22 07:50:10 crc kubenswrapper[4853]: I1122 07:50:10.398346 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-db-sync-bk8mb" podStartSLOduration=2.719842294 podStartE2EDuration="15.39831388s" podCreationTimestamp="2025-11-22 07:49:55 +0000 UTC" firstStartedPulling="2025-11-22 07:49:56.055508171 +0000 UTC m=+2394.896130797" lastFinishedPulling="2025-11-22 07:50:08.733979767 +0000 UTC m=+2407.574602383" observedRunningTime="2025-11-22 07:50:10.375893409 +0000 UTC m=+2409.216516045" watchObservedRunningTime="2025-11-22 07:50:10.39831388 +0000 UTC m=+2409.238936506" Nov 22 07:50:10 crc kubenswrapper[4853]: I1122 07:50:10.437341 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 22 07:50:10 crc kubenswrapper[4853]: I1122 07:50:10.450999 4853 scope.go:117] "RemoveContainer" containerID="6fcf09fed7d170f91f448a5569c2217398d8426fb7673de287fbc1c865bbc0c6" Nov 22 07:50:10 crc kubenswrapper[4853]: I1122 07:50:10.465938 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 22 07:50:10 crc kubenswrapper[4853]: I1122 07:50:10.490945 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bfebae54-7a3b-42db-9375-d885e95c124b-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "bfebae54-7a3b-42db-9375-d885e95c124b" (UID: "bfebae54-7a3b-42db-9375-d885e95c124b"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:50:10 crc kubenswrapper[4853]: I1122 07:50:10.491414 4853 scope.go:117] "RemoveContainer" containerID="0ca5bf51a3dde6d13175f040fb48852294dce160cad938dc801d6e702be765f4" Nov 22 07:50:10 crc kubenswrapper[4853]: I1122 07:50:10.500252 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 22 07:50:10 crc kubenswrapper[4853]: E1122 07:50:10.501382 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bfebae54-7a3b-42db-9375-d885e95c124b" containerName="ceilometer-notification-agent" Nov 22 07:50:10 crc kubenswrapper[4853]: I1122 07:50:10.501413 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="bfebae54-7a3b-42db-9375-d885e95c124b" containerName="ceilometer-notification-agent" Nov 22 07:50:10 crc kubenswrapper[4853]: E1122 07:50:10.501444 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bfebae54-7a3b-42db-9375-d885e95c124b" containerName="sg-core" Nov 22 07:50:10 crc kubenswrapper[4853]: I1122 07:50:10.501454 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="bfebae54-7a3b-42db-9375-d885e95c124b" containerName="sg-core" Nov 22 07:50:10 crc kubenswrapper[4853]: E1122 07:50:10.501476 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bfebae54-7a3b-42db-9375-d885e95c124b" containerName="ceilometer-central-agent" Nov 22 07:50:10 crc kubenswrapper[4853]: I1122 07:50:10.501486 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="bfebae54-7a3b-42db-9375-d885e95c124b" containerName="ceilometer-central-agent" Nov 22 07:50:10 crc kubenswrapper[4853]: E1122 07:50:10.501499 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bfebae54-7a3b-42db-9375-d885e95c124b" containerName="proxy-httpd" Nov 22 07:50:10 crc kubenswrapper[4853]: I1122 07:50:10.501509 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="bfebae54-7a3b-42db-9375-d885e95c124b" 
containerName="proxy-httpd" Nov 22 07:50:10 crc kubenswrapper[4853]: I1122 07:50:10.501881 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="bfebae54-7a3b-42db-9375-d885e95c124b" containerName="ceilometer-central-agent" Nov 22 07:50:10 crc kubenswrapper[4853]: I1122 07:50:10.501908 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="bfebae54-7a3b-42db-9375-d885e95c124b" containerName="sg-core" Nov 22 07:50:10 crc kubenswrapper[4853]: I1122 07:50:10.501954 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="bfebae54-7a3b-42db-9375-d885e95c124b" containerName="ceilometer-notification-agent" Nov 22 07:50:10 crc kubenswrapper[4853]: I1122 07:50:10.501974 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="bfebae54-7a3b-42db-9375-d885e95c124b" containerName="proxy-httpd" Nov 22 07:50:10 crc kubenswrapper[4853]: I1122 07:50:10.503344 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Nov 22 07:50:10 crc kubenswrapper[4853]: I1122 07:50:10.508095 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Nov 22 07:50:10 crc kubenswrapper[4853]: I1122 07:50:10.514725 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 22 07:50:10 crc kubenswrapper[4853]: I1122 07:50:10.541715 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 22 07:50:10 crc kubenswrapper[4853]: I1122 07:50:10.551167 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bfebae54-7a3b-42db-9375-d885e95c124b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bfebae54-7a3b-42db-9375-d885e95c124b" (UID: "bfebae54-7a3b-42db-9375-d885e95c124b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:50:10 crc kubenswrapper[4853]: I1122 07:50:10.551336 4853 scope.go:117] "RemoveContainer" containerID="c926df9b437e4dbae424c1c7235a7e09d2c81b10315e108a9072d6ae44e863b1" Nov 22 07:50:10 crc kubenswrapper[4853]: I1122 07:50:10.566069 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bfebae54-7a3b-42db-9375-d885e95c124b-config-data" (OuterVolumeSpecName: "config-data") pod "bfebae54-7a3b-42db-9375-d885e95c124b" (UID: "bfebae54-7a3b-42db-9375-d885e95c124b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:50:10 crc kubenswrapper[4853]: I1122 07:50:10.583633 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db91ab74-937e-4283-816a-1e31d662bc52-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"db91ab74-937e-4283-816a-1e31d662bc52\") " pod="openstack/nova-cell1-conductor-0" Nov 22 07:50:10 crc kubenswrapper[4853]: I1122 07:50:10.584508 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7n84q\" (UniqueName: \"kubernetes.io/projected/db91ab74-937e-4283-816a-1e31d662bc52-kube-api-access-7n84q\") pod \"nova-cell1-conductor-0\" (UID: \"db91ab74-937e-4283-816a-1e31d662bc52\") " pod="openstack/nova-cell1-conductor-0" Nov 22 07:50:10 crc kubenswrapper[4853]: I1122 07:50:10.584688 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db91ab74-937e-4283-816a-1e31d662bc52-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"db91ab74-937e-4283-816a-1e31d662bc52\") " pod="openstack/nova-cell1-conductor-0" Nov 22 07:50:10 crc kubenswrapper[4853]: I1122 07:50:10.587138 4853 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfebae54-7a3b-42db-9375-d885e95c124b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:50:10 crc kubenswrapper[4853]: I1122 07:50:10.587170 4853 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/bfebae54-7a3b-42db-9375-d885e95c124b-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 22 07:50:10 crc kubenswrapper[4853]: I1122 07:50:10.587181 4853 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bfebae54-7a3b-42db-9375-d885e95c124b-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 07:50:10 crc kubenswrapper[4853]: I1122 07:50:10.598507 4853 scope.go:117] "RemoveContainer" containerID="bf24d65d111dafc6898de8fcdb1b0927cda1b5536869c8972c8d94476df9f19d" Nov 22 07:50:10 crc kubenswrapper[4853]: I1122 07:50:10.689479 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db91ab74-937e-4283-816a-1e31d662bc52-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"db91ab74-937e-4283-816a-1e31d662bc52\") " pod="openstack/nova-cell1-conductor-0" Nov 22 07:50:10 crc kubenswrapper[4853]: I1122 07:50:10.689544 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7n84q\" (UniqueName: \"kubernetes.io/projected/db91ab74-937e-4283-816a-1e31d662bc52-kube-api-access-7n84q\") pod \"nova-cell1-conductor-0\" (UID: \"db91ab74-937e-4283-816a-1e31d662bc52\") " pod="openstack/nova-cell1-conductor-0" Nov 22 07:50:10 crc kubenswrapper[4853]: I1122 07:50:10.689572 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db91ab74-937e-4283-816a-1e31d662bc52-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"db91ab74-937e-4283-816a-1e31d662bc52\") " pod="openstack/nova-cell1-conductor-0" Nov 22 07:50:10 crc kubenswrapper[4853]: I1122 07:50:10.695004 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/db91ab74-937e-4283-816a-1e31d662bc52-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"db91ab74-937e-4283-816a-1e31d662bc52\") " pod="openstack/nova-cell1-conductor-0" Nov 22 07:50:10 crc kubenswrapper[4853]: I1122 07:50:10.715578 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7n84q\" (UniqueName: \"kubernetes.io/projected/db91ab74-937e-4283-816a-1e31d662bc52-kube-api-access-7n84q\") pod \"nova-cell1-conductor-0\" (UID: \"db91ab74-937e-4283-816a-1e31d662bc52\") " pod="openstack/nova-cell1-conductor-0" Nov 22 07:50:10 crc kubenswrapper[4853]: I1122 07:50:10.717159 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db91ab74-937e-4283-816a-1e31d662bc52-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"db91ab74-937e-4283-816a-1e31d662bc52\") " pod="openstack/nova-cell1-conductor-0" Nov 22 07:50:10 crc kubenswrapper[4853]: I1122 07:50:10.848956 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Nov 22 07:50:10 crc kubenswrapper[4853]: I1122 07:50:10.885199 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 22 07:50:10 crc kubenswrapper[4853]: I1122 07:50:10.909613 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 22 07:50:10 crc kubenswrapper[4853]: I1122 07:50:10.924851 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 22 07:50:10 crc kubenswrapper[4853]: I1122 07:50:10.931831 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 22 07:50:10 crc kubenswrapper[4853]: I1122 07:50:10.934304 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 22 07:50:10 crc kubenswrapper[4853]: I1122 07:50:10.934961 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Nov 22 07:50:10 crc kubenswrapper[4853]: I1122 07:50:10.935218 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 22 07:50:10 crc kubenswrapper[4853]: I1122 07:50:10.972439 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 22 07:50:11 crc kubenswrapper[4853]: I1122 07:50:11.005835 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ec41884a-acf4-414c-8150-c6ec04f8c6f2-log-httpd\") pod \"ceilometer-0\" (UID: \"ec41884a-acf4-414c-8150-c6ec04f8c6f2\") " pod="openstack/ceilometer-0" Nov 22 07:50:11 crc kubenswrapper[4853]: I1122 07:50:11.006243 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9phdm\" (UniqueName: \"kubernetes.io/projected/ec41884a-acf4-414c-8150-c6ec04f8c6f2-kube-api-access-9phdm\") pod \"ceilometer-0\" (UID: \"ec41884a-acf4-414c-8150-c6ec04f8c6f2\") " pod="openstack/ceilometer-0" Nov 22 07:50:11 crc kubenswrapper[4853]: I1122 07:50:11.006425 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec41884a-acf4-414c-8150-c6ec04f8c6f2-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"ec41884a-acf4-414c-8150-c6ec04f8c6f2\") " pod="openstack/ceilometer-0" Nov 22 07:50:11 crc 
kubenswrapper[4853]: I1122 07:50:11.006888 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ec41884a-acf4-414c-8150-c6ec04f8c6f2-run-httpd\") pod \"ceilometer-0\" (UID: \"ec41884a-acf4-414c-8150-c6ec04f8c6f2\") " pod="openstack/ceilometer-0" Nov 22 07:50:11 crc kubenswrapper[4853]: I1122 07:50:11.006990 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec41884a-acf4-414c-8150-c6ec04f8c6f2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ec41884a-acf4-414c-8150-c6ec04f8c6f2\") " pod="openstack/ceilometer-0" Nov 22 07:50:11 crc kubenswrapper[4853]: I1122 07:50:11.007156 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec41884a-acf4-414c-8150-c6ec04f8c6f2-config-data\") pod \"ceilometer-0\" (UID: \"ec41884a-acf4-414c-8150-c6ec04f8c6f2\") " pod="openstack/ceilometer-0" Nov 22 07:50:11 crc kubenswrapper[4853]: I1122 07:50:11.007229 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ec41884a-acf4-414c-8150-c6ec04f8c6f2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ec41884a-acf4-414c-8150-c6ec04f8c6f2\") " pod="openstack/ceilometer-0" Nov 22 07:50:11 crc kubenswrapper[4853]: I1122 07:50:11.007262 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ec41884a-acf4-414c-8150-c6ec04f8c6f2-scripts\") pod \"ceilometer-0\" (UID: \"ec41884a-acf4-414c-8150-c6ec04f8c6f2\") " pod="openstack/ceilometer-0" Nov 22 07:50:11 crc kubenswrapper[4853]: I1122 07:50:11.111435 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ec41884a-acf4-414c-8150-c6ec04f8c6f2-log-httpd\") pod \"ceilometer-0\" (UID: \"ec41884a-acf4-414c-8150-c6ec04f8c6f2\") " pod="openstack/ceilometer-0" Nov 22 07:50:11 crc kubenswrapper[4853]: I1122 07:50:11.111566 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9phdm\" (UniqueName: \"kubernetes.io/projected/ec41884a-acf4-414c-8150-c6ec04f8c6f2-kube-api-access-9phdm\") pod \"ceilometer-0\" (UID: \"ec41884a-acf4-414c-8150-c6ec04f8c6f2\") " pod="openstack/ceilometer-0" Nov 22 07:50:11 crc kubenswrapper[4853]: I1122 07:50:11.111631 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec41884a-acf4-414c-8150-c6ec04f8c6f2-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"ec41884a-acf4-414c-8150-c6ec04f8c6f2\") " pod="openstack/ceilometer-0" Nov 22 07:50:11 crc kubenswrapper[4853]: I1122 07:50:11.111750 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ec41884a-acf4-414c-8150-c6ec04f8c6f2-run-httpd\") pod \"ceilometer-0\" (UID: \"ec41884a-acf4-414c-8150-c6ec04f8c6f2\") " pod="openstack/ceilometer-0" Nov 22 07:50:11 crc kubenswrapper[4853]: I1122 07:50:11.111817 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec41884a-acf4-414c-8150-c6ec04f8c6f2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: 
\"ec41884a-acf4-414c-8150-c6ec04f8c6f2\") " pod="openstack/ceilometer-0" Nov 22 07:50:11 crc kubenswrapper[4853]: I1122 07:50:11.111878 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec41884a-acf4-414c-8150-c6ec04f8c6f2-config-data\") pod \"ceilometer-0\" (UID: \"ec41884a-acf4-414c-8150-c6ec04f8c6f2\") " pod="openstack/ceilometer-0" Nov 22 07:50:11 crc kubenswrapper[4853]: I1122 07:50:11.111910 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ec41884a-acf4-414c-8150-c6ec04f8c6f2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ec41884a-acf4-414c-8150-c6ec04f8c6f2\") " pod="openstack/ceilometer-0" Nov 22 07:50:11 crc kubenswrapper[4853]: I1122 07:50:11.111933 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ec41884a-acf4-414c-8150-c6ec04f8c6f2-scripts\") pod \"ceilometer-0\" (UID: \"ec41884a-acf4-414c-8150-c6ec04f8c6f2\") " pod="openstack/ceilometer-0" Nov 22 07:50:11 crc kubenswrapper[4853]: I1122 07:50:11.113408 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ec41884a-acf4-414c-8150-c6ec04f8c6f2-run-httpd\") pod \"ceilometer-0\" (UID: \"ec41884a-acf4-414c-8150-c6ec04f8c6f2\") " pod="openstack/ceilometer-0" Nov 22 07:50:11 crc kubenswrapper[4853]: I1122 07:50:11.113715 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ec41884a-acf4-414c-8150-c6ec04f8c6f2-log-httpd\") pod \"ceilometer-0\" (UID: \"ec41884a-acf4-414c-8150-c6ec04f8c6f2\") " pod="openstack/ceilometer-0" Nov 22 07:50:11 crc kubenswrapper[4853]: I1122 07:50:11.121623 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec41884a-acf4-414c-8150-c6ec04f8c6f2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ec41884a-acf4-414c-8150-c6ec04f8c6f2\") " pod="openstack/ceilometer-0" Nov 22 07:50:11 crc kubenswrapper[4853]: I1122 07:50:11.141169 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ec41884a-acf4-414c-8150-c6ec04f8c6f2-scripts\") pod \"ceilometer-0\" (UID: \"ec41884a-acf4-414c-8150-c6ec04f8c6f2\") " pod="openstack/ceilometer-0" Nov 22 07:50:11 crc kubenswrapper[4853]: I1122 07:50:11.141847 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec41884a-acf4-414c-8150-c6ec04f8c6f2-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"ec41884a-acf4-414c-8150-c6ec04f8c6f2\") " pod="openstack/ceilometer-0" Nov 22 07:50:11 crc kubenswrapper[4853]: I1122 07:50:11.142050 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec41884a-acf4-414c-8150-c6ec04f8c6f2-config-data\") pod \"ceilometer-0\" (UID: \"ec41884a-acf4-414c-8150-c6ec04f8c6f2\") " pod="openstack/ceilometer-0" Nov 22 07:50:11 crc kubenswrapper[4853]: I1122 07:50:11.143239 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ec41884a-acf4-414c-8150-c6ec04f8c6f2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ec41884a-acf4-414c-8150-c6ec04f8c6f2\") " pod="openstack/ceilometer-0" Nov 22 07:50:11 crc 
kubenswrapper[4853]: I1122 07:50:11.144320 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9phdm\" (UniqueName: \"kubernetes.io/projected/ec41884a-acf4-414c-8150-c6ec04f8c6f2-kube-api-access-9phdm\") pod \"ceilometer-0\" (UID: \"ec41884a-acf4-414c-8150-c6ec04f8c6f2\") " pod="openstack/ceilometer-0" Nov 22 07:50:11 crc kubenswrapper[4853]: I1122 07:50:11.262233 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 22 07:50:11 crc kubenswrapper[4853]: I1122 07:50:11.393156 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 22 07:50:11 crc kubenswrapper[4853]: I1122 07:50:11.415321 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"773ad68b-b0f8-4afc-91bd-008f86442be6","Type":"ContainerStarted","Data":"dd4d06c8f04bccd9a2fba804004b37451d7d185a0c7f91df65c111bf82bc43d3"} Nov 22 07:50:11 crc kubenswrapper[4853]: I1122 07:50:11.423373 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Nov 22 07:50:11 crc kubenswrapper[4853]: I1122 07:50:11.457694 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=3.457665889 podStartE2EDuration="3.457665889s" podCreationTimestamp="2025-11-22 07:50:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:50:11.438008412 +0000 UTC m=+2410.278631048" watchObservedRunningTime="2025-11-22 07:50:11.457665889 +0000 UTC m=+2410.298288515" Nov 22 07:50:11 crc kubenswrapper[4853]: I1122 07:50:11.492504 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 22 07:50:11 crc kubenswrapper[4853]: I1122 07:50:11.771143 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ba80875-27e8-4986-97b0-83d81ae92204" path="/var/lib/kubelet/pods/6ba80875-27e8-4986-97b0-83d81ae92204/volumes" Nov 22 07:50:11 crc kubenswrapper[4853]: I1122 07:50:11.772483 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bfebae54-7a3b-42db-9375-d885e95c124b" path="/var/lib/kubelet/pods/bfebae54-7a3b-42db-9375-d885e95c124b/volumes" Nov 22 07:50:11 crc kubenswrapper[4853]: I1122 07:50:11.773464 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d1ae71ec-04ce-4d8b-9504-c8d122fce19b" path="/var/lib/kubelet/pods/d1ae71ec-04ce-4d8b-9504-c8d122fce19b/volumes" Nov 22 07:50:12 crc kubenswrapper[4853]: I1122 07:50:12.053230 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 22 07:50:12 crc kubenswrapper[4853]: I1122 07:50:12.419623 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 22 07:50:12 crc kubenswrapper[4853]: I1122 07:50:12.484111 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g6s4w\" (UniqueName: \"kubernetes.io/projected/1414bd70-62c5-4ef7-a0c1-59652e6381a5-kube-api-access-g6s4w\") pod \"1414bd70-62c5-4ef7-a0c1-59652e6381a5\" (UID: \"1414bd70-62c5-4ef7-a0c1-59652e6381a5\") " Nov 22 07:50:12 crc kubenswrapper[4853]: I1122 07:50:12.484264 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1414bd70-62c5-4ef7-a0c1-59652e6381a5-combined-ca-bundle\") pod \"1414bd70-62c5-4ef7-a0c1-59652e6381a5\" (UID: \"1414bd70-62c5-4ef7-a0c1-59652e6381a5\") " Nov 22 07:50:12 crc kubenswrapper[4853]: I1122 07:50:12.484289 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1414bd70-62c5-4ef7-a0c1-59652e6381a5-config-data\") pod \"1414bd70-62c5-4ef7-a0c1-59652e6381a5\" (UID: \"1414bd70-62c5-4ef7-a0c1-59652e6381a5\") " Nov 22 07:50:12 crc kubenswrapper[4853]: I1122 07:50:12.484390 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1414bd70-62c5-4ef7-a0c1-59652e6381a5-logs\") pod \"1414bd70-62c5-4ef7-a0c1-59652e6381a5\" (UID: \"1414bd70-62c5-4ef7-a0c1-59652e6381a5\") " Nov 22 07:50:12 crc kubenswrapper[4853]: I1122 07:50:12.485959 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"773ad68b-b0f8-4afc-91bd-008f86442be6","Type":"ContainerStarted","Data":"0a5f98bb66c683c6e5fd3f8cf44a76de9c76f2de61f2384ef83f3bdc2c1ebfef"} Nov 22 07:50:12 crc kubenswrapper[4853]: I1122 07:50:12.486004 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"773ad68b-b0f8-4afc-91bd-008f86442be6","Type":"ContainerStarted","Data":"4328a3695251afa7578238f22388565a6c26e4029e4791cea4cd9181c0f60790"} Nov 22 07:50:12 crc kubenswrapper[4853]: I1122 07:50:12.486030 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1414bd70-62c5-4ef7-a0c1-59652e6381a5-logs" (OuterVolumeSpecName: "logs") pod "1414bd70-62c5-4ef7-a0c1-59652e6381a5" (UID: "1414bd70-62c5-4ef7-a0c1-59652e6381a5"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:50:12 crc kubenswrapper[4853]: I1122 07:50:12.491804 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1414bd70-62c5-4ef7-a0c1-59652e6381a5-kube-api-access-g6s4w" (OuterVolumeSpecName: "kube-api-access-g6s4w") pod "1414bd70-62c5-4ef7-a0c1-59652e6381a5" (UID: "1414bd70-62c5-4ef7-a0c1-59652e6381a5"). InnerVolumeSpecName "kube-api-access-g6s4w". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:50:12 crc kubenswrapper[4853]: I1122 07:50:12.491863 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ec41884a-acf4-414c-8150-c6ec04f8c6f2","Type":"ContainerStarted","Data":"12c6914ec3f88f375de0819d8021f32ced4f56d582287365955a5e713beeab98"} Nov 22 07:50:12 crc kubenswrapper[4853]: I1122 07:50:12.499204 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"db91ab74-937e-4283-816a-1e31d662bc52","Type":"ContainerStarted","Data":"12dde8fe478b15881f896abb9ae459c2146f8db0865cc33c4a00cd5acfa0a19b"} Nov 22 07:50:12 crc kubenswrapper[4853]: I1122 07:50:12.499267 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"db91ab74-937e-4283-816a-1e31d662bc52","Type":"ContainerStarted","Data":"44e9d7d9707bc16d1e06bb7be0f56712cf77f23272e59fefd283e3b5991c2a1f"} Nov 22 07:50:12 crc kubenswrapper[4853]: I1122 07:50:12.500880 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Nov 22 07:50:12 crc kubenswrapper[4853]: I1122 07:50:12.532887 4853 generic.go:334] "Generic (PLEG): container finished" podID="1414bd70-62c5-4ef7-a0c1-59652e6381a5" containerID="9e978767c72f56bf3aa7a7803ccb7425125a268b156e696547a26b29444acd26" exitCode=0 Nov 22 07:50:12 crc kubenswrapper[4853]: I1122 07:50:12.533016 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 22 07:50:12 crc kubenswrapper[4853]: I1122 07:50:12.536613 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1414bd70-62c5-4ef7-a0c1-59652e6381a5","Type":"ContainerDied","Data":"9e978767c72f56bf3aa7a7803ccb7425125a268b156e696547a26b29444acd26"} Nov 22 07:50:12 crc kubenswrapper[4853]: I1122 07:50:12.536711 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1414bd70-62c5-4ef7-a0c1-59652e6381a5","Type":"ContainerDied","Data":"19c71bd9e8288278c0a888dbf5673b17db6a5998f39b25241a0bad267f4bf160"} Nov 22 07:50:12 crc kubenswrapper[4853]: I1122 07:50:12.536732 4853 scope.go:117] "RemoveContainer" containerID="9e978767c72f56bf3aa7a7803ccb7425125a268b156e696547a26b29444acd26" Nov 22 07:50:12 crc kubenswrapper[4853]: I1122 07:50:12.547409 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1414bd70-62c5-4ef7-a0c1-59652e6381a5-config-data" (OuterVolumeSpecName: "config-data") pod "1414bd70-62c5-4ef7-a0c1-59652e6381a5" (UID: "1414bd70-62c5-4ef7-a0c1-59652e6381a5"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:50:12 crc kubenswrapper[4853]: I1122 07:50:12.568103 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.568068847 podStartE2EDuration="3.568068847s" podCreationTimestamp="2025-11-22 07:50:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:50:12.510505355 +0000 UTC m=+2411.351127981" watchObservedRunningTime="2025-11-22 07:50:12.568068847 +0000 UTC m=+2411.408691473" Nov 22 07:50:12 crc kubenswrapper[4853]: I1122 07:50:12.570203 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.570192505 podStartE2EDuration="2.570192505s" podCreationTimestamp="2025-11-22 07:50:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:50:12.539220035 +0000 UTC m=+2411.379842681" watchObservedRunningTime="2025-11-22 07:50:12.570192505 +0000 UTC m=+2411.410815131" Nov 22 07:50:12 crc kubenswrapper[4853]: I1122 07:50:12.572577 4853 scope.go:117] "RemoveContainer" containerID="47772b6fd6c554e3a44bf5efb40eec964ac736b8c5b137b49744e646a2b1fa3b" Nov 22 07:50:12 crc kubenswrapper[4853]: I1122 07:50:12.575383 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1414bd70-62c5-4ef7-a0c1-59652e6381a5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1414bd70-62c5-4ef7-a0c1-59652e6381a5" (UID: "1414bd70-62c5-4ef7-a0c1-59652e6381a5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:50:12 crc kubenswrapper[4853]: I1122 07:50:12.593333 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g6s4w\" (UniqueName: \"kubernetes.io/projected/1414bd70-62c5-4ef7-a0c1-59652e6381a5-kube-api-access-g6s4w\") on node \"crc\" DevicePath \"\"" Nov 22 07:50:12 crc kubenswrapper[4853]: I1122 07:50:12.593371 4853 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1414bd70-62c5-4ef7-a0c1-59652e6381a5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:50:12 crc kubenswrapper[4853]: I1122 07:50:12.593382 4853 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1414bd70-62c5-4ef7-a0c1-59652e6381a5-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 07:50:12 crc kubenswrapper[4853]: I1122 07:50:12.593391 4853 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1414bd70-62c5-4ef7-a0c1-59652e6381a5-logs\") on node \"crc\" DevicePath \"\"" Nov 22 07:50:12 crc kubenswrapper[4853]: I1122 07:50:12.603417 4853 scope.go:117] "RemoveContainer" containerID="9e978767c72f56bf3aa7a7803ccb7425125a268b156e696547a26b29444acd26" Nov 22 07:50:12 crc kubenswrapper[4853]: E1122 07:50:12.604637 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9e978767c72f56bf3aa7a7803ccb7425125a268b156e696547a26b29444acd26\": container with ID starting with 9e978767c72f56bf3aa7a7803ccb7425125a268b156e696547a26b29444acd26 not found: ID does not exist" containerID="9e978767c72f56bf3aa7a7803ccb7425125a268b156e696547a26b29444acd26" Nov 22 07:50:12 crc kubenswrapper[4853]: I1122 
07:50:12.604781 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9e978767c72f56bf3aa7a7803ccb7425125a268b156e696547a26b29444acd26"} err="failed to get container status \"9e978767c72f56bf3aa7a7803ccb7425125a268b156e696547a26b29444acd26\": rpc error: code = NotFound desc = could not find container \"9e978767c72f56bf3aa7a7803ccb7425125a268b156e696547a26b29444acd26\": container with ID starting with 9e978767c72f56bf3aa7a7803ccb7425125a268b156e696547a26b29444acd26 not found: ID does not exist" Nov 22 07:50:12 crc kubenswrapper[4853]: I1122 07:50:12.604877 4853 scope.go:117] "RemoveContainer" containerID="47772b6fd6c554e3a44bf5efb40eec964ac736b8c5b137b49744e646a2b1fa3b" Nov 22 07:50:12 crc kubenswrapper[4853]: E1122 07:50:12.605542 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"47772b6fd6c554e3a44bf5efb40eec964ac736b8c5b137b49744e646a2b1fa3b\": container with ID starting with 47772b6fd6c554e3a44bf5efb40eec964ac736b8c5b137b49744e646a2b1fa3b not found: ID does not exist" containerID="47772b6fd6c554e3a44bf5efb40eec964ac736b8c5b137b49744e646a2b1fa3b" Nov 22 07:50:12 crc kubenswrapper[4853]: I1122 07:50:12.605583 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"47772b6fd6c554e3a44bf5efb40eec964ac736b8c5b137b49744e646a2b1fa3b"} err="failed to get container status \"47772b6fd6c554e3a44bf5efb40eec964ac736b8c5b137b49744e646a2b1fa3b\": rpc error: code = NotFound desc = could not find container \"47772b6fd6c554e3a44bf5efb40eec964ac736b8c5b137b49744e646a2b1fa3b\": container with ID starting with 47772b6fd6c554e3a44bf5efb40eec964ac736b8c5b137b49744e646a2b1fa3b not found: ID does not exist" Nov 22 07:50:13 crc kubenswrapper[4853]: I1122 07:50:13.050327 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 22 07:50:13 crc kubenswrapper[4853]: I1122 07:50:13.080401 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Nov 22 07:50:13 crc kubenswrapper[4853]: I1122 07:50:13.100522 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 22 07:50:13 crc kubenswrapper[4853]: E1122 07:50:13.101518 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1414bd70-62c5-4ef7-a0c1-59652e6381a5" containerName="nova-api-api" Nov 22 07:50:13 crc kubenswrapper[4853]: I1122 07:50:13.101539 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="1414bd70-62c5-4ef7-a0c1-59652e6381a5" containerName="nova-api-api" Nov 22 07:50:13 crc kubenswrapper[4853]: E1122 07:50:13.101558 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1414bd70-62c5-4ef7-a0c1-59652e6381a5" containerName="nova-api-log" Nov 22 07:50:13 crc kubenswrapper[4853]: I1122 07:50:13.101565 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="1414bd70-62c5-4ef7-a0c1-59652e6381a5" containerName="nova-api-log" Nov 22 07:50:13 crc kubenswrapper[4853]: I1122 07:50:13.101841 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="1414bd70-62c5-4ef7-a0c1-59652e6381a5" containerName="nova-api-log" Nov 22 07:50:13 crc kubenswrapper[4853]: I1122 07:50:13.101877 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="1414bd70-62c5-4ef7-a0c1-59652e6381a5" containerName="nova-api-api" Nov 22 07:50:13 crc kubenswrapper[4853]: I1122 07:50:13.103501 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 22 07:50:13 crc kubenswrapper[4853]: I1122 07:50:13.106641 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 22 07:50:13 crc kubenswrapper[4853]: I1122 07:50:13.113551 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 22 07:50:13 crc kubenswrapper[4853]: I1122 07:50:13.213511 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6kn9t\" (UniqueName: \"kubernetes.io/projected/71315011-5305-45ee-9dc6-91bdbc93560f-kube-api-access-6kn9t\") pod \"nova-api-0\" (UID: \"71315011-5305-45ee-9dc6-91bdbc93560f\") " pod="openstack/nova-api-0" Nov 22 07:50:13 crc kubenswrapper[4853]: I1122 07:50:13.213636 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71315011-5305-45ee-9dc6-91bdbc93560f-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"71315011-5305-45ee-9dc6-91bdbc93560f\") " pod="openstack/nova-api-0" Nov 22 07:50:13 crc kubenswrapper[4853]: I1122 07:50:13.213983 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71315011-5305-45ee-9dc6-91bdbc93560f-config-data\") pod \"nova-api-0\" (UID: \"71315011-5305-45ee-9dc6-91bdbc93560f\") " pod="openstack/nova-api-0" Nov 22 07:50:13 crc kubenswrapper[4853]: I1122 07:50:13.214082 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/71315011-5305-45ee-9dc6-91bdbc93560f-logs\") pod \"nova-api-0\" (UID: \"71315011-5305-45ee-9dc6-91bdbc93560f\") " pod="openstack/nova-api-0" Nov 22 07:50:13 crc kubenswrapper[4853]: I1122 07:50:13.317588 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71315011-5305-45ee-9dc6-91bdbc93560f-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"71315011-5305-45ee-9dc6-91bdbc93560f\") " pod="openstack/nova-api-0" Nov 22 07:50:13 crc kubenswrapper[4853]: I1122 07:50:13.317735 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71315011-5305-45ee-9dc6-91bdbc93560f-config-data\") pod \"nova-api-0\" (UID: \"71315011-5305-45ee-9dc6-91bdbc93560f\") " pod="openstack/nova-api-0" Nov 22 07:50:13 crc kubenswrapper[4853]: I1122 07:50:13.317842 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/71315011-5305-45ee-9dc6-91bdbc93560f-logs\") pod \"nova-api-0\" (UID: \"71315011-5305-45ee-9dc6-91bdbc93560f\") " pod="openstack/nova-api-0" Nov 22 07:50:13 crc kubenswrapper[4853]: I1122 07:50:13.317993 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6kn9t\" (UniqueName: \"kubernetes.io/projected/71315011-5305-45ee-9dc6-91bdbc93560f-kube-api-access-6kn9t\") pod \"nova-api-0\" (UID: \"71315011-5305-45ee-9dc6-91bdbc93560f\") " pod="openstack/nova-api-0" Nov 22 07:50:13 crc kubenswrapper[4853]: I1122 07:50:13.319407 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/71315011-5305-45ee-9dc6-91bdbc93560f-logs\") pod \"nova-api-0\" (UID: \"71315011-5305-45ee-9dc6-91bdbc93560f\") " 
pod="openstack/nova-api-0" Nov 22 07:50:13 crc kubenswrapper[4853]: I1122 07:50:13.324769 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71315011-5305-45ee-9dc6-91bdbc93560f-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"71315011-5305-45ee-9dc6-91bdbc93560f\") " pod="openstack/nova-api-0" Nov 22 07:50:13 crc kubenswrapper[4853]: I1122 07:50:13.337437 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71315011-5305-45ee-9dc6-91bdbc93560f-config-data\") pod \"nova-api-0\" (UID: \"71315011-5305-45ee-9dc6-91bdbc93560f\") " pod="openstack/nova-api-0" Nov 22 07:50:13 crc kubenswrapper[4853]: I1122 07:50:13.341261 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6kn9t\" (UniqueName: \"kubernetes.io/projected/71315011-5305-45ee-9dc6-91bdbc93560f-kube-api-access-6kn9t\") pod \"nova-api-0\" (UID: \"71315011-5305-45ee-9dc6-91bdbc93560f\") " pod="openstack/nova-api-0" Nov 22 07:50:13 crc kubenswrapper[4853]: I1122 07:50:13.460929 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 22 07:50:13 crc kubenswrapper[4853]: I1122 07:50:13.568664 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ec41884a-acf4-414c-8150-c6ec04f8c6f2","Type":"ContainerStarted","Data":"7a1adf972b661fce1b2a3d13ec7bd9d732459547292ce63f3622b5910014b9aa"} Nov 22 07:50:13 crc kubenswrapper[4853]: I1122 07:50:13.765501 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1414bd70-62c5-4ef7-a0c1-59652e6381a5" path="/var/lib/kubelet/pods/1414bd70-62c5-4ef7-a0c1-59652e6381a5/volumes" Nov 22 07:50:14 crc kubenswrapper[4853]: I1122 07:50:14.019422 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 22 07:50:14 crc kubenswrapper[4853]: W1122 07:50:14.049614 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod71315011_5305_45ee_9dc6_91bdbc93560f.slice/crio-4aaec5fc9f846365e7333931da9d77a14141f7cd6dd7f84f6fb594c73c3133e0 WatchSource:0}: Error finding container 4aaec5fc9f846365e7333931da9d77a14141f7cd6dd7f84f6fb594c73c3133e0: Status 404 returned error can't find the container with id 4aaec5fc9f846365e7333931da9d77a14141f7cd6dd7f84f6fb594c73c3133e0 Nov 22 07:50:14 crc kubenswrapper[4853]: I1122 07:50:14.588042 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"71315011-5305-45ee-9dc6-91bdbc93560f","Type":"ContainerStarted","Data":"4aaec5fc9f846365e7333931da9d77a14141f7cd6dd7f84f6fb594c73c3133e0"} Nov 22 07:50:15 crc kubenswrapper[4853]: I1122 07:50:15.542640 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 22 07:50:15 crc kubenswrapper[4853]: I1122 07:50:15.542999 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 22 07:50:16 crc kubenswrapper[4853]: I1122 07:50:16.613992 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"71315011-5305-45ee-9dc6-91bdbc93560f","Type":"ContainerStarted","Data":"1e525293ec4c3b8e0d09679112c1e91ec67deeb716e0f04a210b7dc39c5ca8c2"} Nov 22 07:50:18 crc kubenswrapper[4853]: I1122 07:50:18.638218 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"71315011-5305-45ee-9dc6-91bdbc93560f","Type":"ContainerStarted","Data":"c137700cdd13fb531ad9af7adc75f764e1edf3c8e47847347dc717cf7e8ba16e"} Nov 22 07:50:18 crc kubenswrapper[4853]: I1122 07:50:18.666310 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=5.666285947 podStartE2EDuration="5.666285947s" podCreationTimestamp="2025-11-22 07:50:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:50:18.66343613 +0000 UTC m=+2417.504058786" watchObservedRunningTime="2025-11-22 07:50:18.666285947 +0000 UTC m=+2417.506908573" Nov 22 07:50:18 crc kubenswrapper[4853]: I1122 07:50:18.989048 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Nov 22 07:50:20 crc kubenswrapper[4853]: I1122 07:50:20.543661 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 22 07:50:20 crc kubenswrapper[4853]: I1122 07:50:20.544261 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 22 07:50:20 crc kubenswrapper[4853]: I1122 07:50:20.674257 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ec41884a-acf4-414c-8150-c6ec04f8c6f2","Type":"ContainerStarted","Data":"3fdc1a5bed598e8976ce18d856c226daedb84bce63889b0d8fab5f05d8d66b47"} Nov 22 07:50:20 crc kubenswrapper[4853]: I1122 07:50:20.894178 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Nov 22 07:50:21 crc kubenswrapper[4853]: I1122 07:50:21.561189 4853 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="773ad68b-b0f8-4afc-91bd-008f86442be6" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.1.3:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 22 07:50:21 crc kubenswrapper[4853]: I1122 07:50:21.561247 4853 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="773ad68b-b0f8-4afc-91bd-008f86442be6" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.1.3:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 22 07:50:23 crc kubenswrapper[4853]: I1122 07:50:23.461957 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 22 07:50:23 crc kubenswrapper[4853]: I1122 07:50:23.464556 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 22 07:50:24 crc kubenswrapper[4853]: I1122 07:50:24.545996 4853 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="71315011-5305-45ee-9dc6-91bdbc93560f" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.1.6:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 22 07:50:24 crc kubenswrapper[4853]: I1122 07:50:24.546018 4853 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="71315011-5305-45ee-9dc6-91bdbc93560f" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.1.6:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 22 07:50:25 crc kubenswrapper[4853]: I1122 
07:50:25.781650 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ec41884a-acf4-414c-8150-c6ec04f8c6f2","Type":"ContainerStarted","Data":"dd43241a0d8871801b6e9248211553a522c06b91f35e2ef4dbef358c0e2531f5"} Nov 22 07:50:30 crc kubenswrapper[4853]: I1122 07:50:30.553546 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 22 07:50:30 crc kubenswrapper[4853]: I1122 07:50:30.554328 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 22 07:50:30 crc kubenswrapper[4853]: I1122 07:50:30.563889 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Nov 22 07:50:30 crc kubenswrapper[4853]: I1122 07:50:30.564243 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Nov 22 07:50:31 crc kubenswrapper[4853]: I1122 07:50:31.297043 4853 patch_prober.go:28] interesting pod/machine-config-daemon-fflvd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 07:50:31 crc kubenswrapper[4853]: I1122 07:50:31.297128 4853 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 07:50:33 crc kubenswrapper[4853]: I1122 07:50:33.465690 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 22 07:50:33 crc kubenswrapper[4853]: I1122 07:50:33.466406 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 22 07:50:33 crc kubenswrapper[4853]: I1122 07:50:33.466863 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 22 07:50:33 crc kubenswrapper[4853]: I1122 07:50:33.466917 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 22 07:50:33 crc kubenswrapper[4853]: I1122 07:50:33.469283 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 22 07:50:33 crc kubenswrapper[4853]: I1122 07:50:33.470777 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 22 07:50:33 crc kubenswrapper[4853]: I1122 07:50:33.775404 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6d99f6bc7f-97bjc"] Nov 22 07:50:33 crc kubenswrapper[4853]: I1122 07:50:33.777831 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6d99f6bc7f-97bjc" Nov 22 07:50:33 crc kubenswrapper[4853]: I1122 07:50:33.812905 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6d99f6bc7f-97bjc"] Nov 22 07:50:33 crc kubenswrapper[4853]: I1122 07:50:33.890778 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bz6sn\" (UniqueName: \"kubernetes.io/projected/d8982e8e-d6aa-4588-873e-a1853d2b1ff4-kube-api-access-bz6sn\") pod \"dnsmasq-dns-6d99f6bc7f-97bjc\" (UID: \"d8982e8e-d6aa-4588-873e-a1853d2b1ff4\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-97bjc" Nov 22 07:50:33 crc kubenswrapper[4853]: I1122 07:50:33.890965 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8982e8e-d6aa-4588-873e-a1853d2b1ff4-config\") pod \"dnsmasq-dns-6d99f6bc7f-97bjc\" (UID: \"d8982e8e-d6aa-4588-873e-a1853d2b1ff4\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-97bjc" Nov 22 07:50:33 crc kubenswrapper[4853]: I1122 07:50:33.891045 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d8982e8e-d6aa-4588-873e-a1853d2b1ff4-ovsdbserver-nb\") pod \"dnsmasq-dns-6d99f6bc7f-97bjc\" (UID: \"d8982e8e-d6aa-4588-873e-a1853d2b1ff4\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-97bjc" Nov 22 07:50:33 crc kubenswrapper[4853]: I1122 07:50:33.891118 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d8982e8e-d6aa-4588-873e-a1853d2b1ff4-dns-svc\") pod \"dnsmasq-dns-6d99f6bc7f-97bjc\" (UID: \"d8982e8e-d6aa-4588-873e-a1853d2b1ff4\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-97bjc" Nov 22 07:50:33 crc kubenswrapper[4853]: I1122 07:50:33.891251 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d8982e8e-d6aa-4588-873e-a1853d2b1ff4-dns-swift-storage-0\") pod \"dnsmasq-dns-6d99f6bc7f-97bjc\" (UID: \"d8982e8e-d6aa-4588-873e-a1853d2b1ff4\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-97bjc" Nov 22 07:50:33 crc kubenswrapper[4853]: I1122 07:50:33.891496 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d8982e8e-d6aa-4588-873e-a1853d2b1ff4-ovsdbserver-sb\") pod \"dnsmasq-dns-6d99f6bc7f-97bjc\" (UID: \"d8982e8e-d6aa-4588-873e-a1853d2b1ff4\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-97bjc" Nov 22 07:50:33 crc kubenswrapper[4853]: I1122 07:50:33.994685 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8982e8e-d6aa-4588-873e-a1853d2b1ff4-config\") pod \"dnsmasq-dns-6d99f6bc7f-97bjc\" (UID: \"d8982e8e-d6aa-4588-873e-a1853d2b1ff4\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-97bjc" Nov 22 07:50:33 crc kubenswrapper[4853]: I1122 07:50:33.994836 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d8982e8e-d6aa-4588-873e-a1853d2b1ff4-ovsdbserver-nb\") pod \"dnsmasq-dns-6d99f6bc7f-97bjc\" (UID: \"d8982e8e-d6aa-4588-873e-a1853d2b1ff4\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-97bjc" Nov 22 07:50:33 crc kubenswrapper[4853]: I1122 07:50:33.994943 4853 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d8982e8e-d6aa-4588-873e-a1853d2b1ff4-dns-svc\") pod \"dnsmasq-dns-6d99f6bc7f-97bjc\" (UID: \"d8982e8e-d6aa-4588-873e-a1853d2b1ff4\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-97bjc" Nov 22 07:50:33 crc kubenswrapper[4853]: I1122 07:50:33.995002 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d8982e8e-d6aa-4588-873e-a1853d2b1ff4-dns-swift-storage-0\") pod \"dnsmasq-dns-6d99f6bc7f-97bjc\" (UID: \"d8982e8e-d6aa-4588-873e-a1853d2b1ff4\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-97bjc" Nov 22 07:50:33 crc kubenswrapper[4853]: I1122 07:50:33.995099 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d8982e8e-d6aa-4588-873e-a1853d2b1ff4-ovsdbserver-sb\") pod \"dnsmasq-dns-6d99f6bc7f-97bjc\" (UID: \"d8982e8e-d6aa-4588-873e-a1853d2b1ff4\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-97bjc" Nov 22 07:50:33 crc kubenswrapper[4853]: I1122 07:50:33.995202 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bz6sn\" (UniqueName: \"kubernetes.io/projected/d8982e8e-d6aa-4588-873e-a1853d2b1ff4-kube-api-access-bz6sn\") pod \"dnsmasq-dns-6d99f6bc7f-97bjc\" (UID: \"d8982e8e-d6aa-4588-873e-a1853d2b1ff4\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-97bjc" Nov 22 07:50:33 crc kubenswrapper[4853]: I1122 07:50:33.996943 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8982e8e-d6aa-4588-873e-a1853d2b1ff4-config\") pod \"dnsmasq-dns-6d99f6bc7f-97bjc\" (UID: \"d8982e8e-d6aa-4588-873e-a1853d2b1ff4\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-97bjc" Nov 22 07:50:33 crc kubenswrapper[4853]: I1122 07:50:33.997063 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d8982e8e-d6aa-4588-873e-a1853d2b1ff4-dns-svc\") pod \"dnsmasq-dns-6d99f6bc7f-97bjc\" (UID: \"d8982e8e-d6aa-4588-873e-a1853d2b1ff4\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-97bjc" Nov 22 07:50:33 crc kubenswrapper[4853]: I1122 07:50:33.997346 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d8982e8e-d6aa-4588-873e-a1853d2b1ff4-dns-swift-storage-0\") pod \"dnsmasq-dns-6d99f6bc7f-97bjc\" (UID: \"d8982e8e-d6aa-4588-873e-a1853d2b1ff4\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-97bjc" Nov 22 07:50:33 crc kubenswrapper[4853]: I1122 07:50:33.997571 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d8982e8e-d6aa-4588-873e-a1853d2b1ff4-ovsdbserver-nb\") pod \"dnsmasq-dns-6d99f6bc7f-97bjc\" (UID: \"d8982e8e-d6aa-4588-873e-a1853d2b1ff4\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-97bjc" Nov 22 07:50:34 crc kubenswrapper[4853]: I1122 07:50:34.001047 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d8982e8e-d6aa-4588-873e-a1853d2b1ff4-ovsdbserver-sb\") pod \"dnsmasq-dns-6d99f6bc7f-97bjc\" (UID: \"d8982e8e-d6aa-4588-873e-a1853d2b1ff4\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-97bjc" Nov 22 07:50:34 crc kubenswrapper[4853]: I1122 07:50:34.034415 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bz6sn\" (UniqueName: 
\"kubernetes.io/projected/d8982e8e-d6aa-4588-873e-a1853d2b1ff4-kube-api-access-bz6sn\") pod \"dnsmasq-dns-6d99f6bc7f-97bjc\" (UID: \"d8982e8e-d6aa-4588-873e-a1853d2b1ff4\") " pod="openstack/dnsmasq-dns-6d99f6bc7f-97bjc" Nov 22 07:50:34 crc kubenswrapper[4853]: I1122 07:50:34.114671 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6d99f6bc7f-97bjc" Nov 22 07:50:36 crc kubenswrapper[4853]: I1122 07:50:36.002316 4853 scope.go:117] "RemoveContainer" containerID="0c5985e0d9cacb66de02c82c7902d79f05025df1c397660c39e6d493599897e8" Nov 22 07:50:36 crc kubenswrapper[4853]: I1122 07:50:36.281726 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 22 07:50:36 crc kubenswrapper[4853]: I1122 07:50:36.282329 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="71315011-5305-45ee-9dc6-91bdbc93560f" containerName="nova-api-log" containerID="cri-o://1e525293ec4c3b8e0d09679112c1e91ec67deeb716e0f04a210b7dc39c5ca8c2" gracePeriod=30 Nov 22 07:50:36 crc kubenswrapper[4853]: I1122 07:50:36.282424 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="71315011-5305-45ee-9dc6-91bdbc93560f" containerName="nova-api-api" containerID="cri-o://c137700cdd13fb531ad9af7adc75f764e1edf3c8e47847347dc717cf7e8ba16e" gracePeriod=30 Nov 22 07:50:37 crc kubenswrapper[4853]: I1122 07:50:37.121635 4853 scope.go:117] "RemoveContainer" containerID="a8e46def419b0227cc97075609db3892a84c5f816682899f0cbfdcb44cc92483" Nov 22 07:50:37 crc kubenswrapper[4853]: I1122 07:50:37.458286 4853 scope.go:117] "RemoveContainer" containerID="30ab0495e2d9f69f354426b7dc389f20d3977878da8e9472df6281ab2eff70b7" Nov 22 07:50:37 crc kubenswrapper[4853]: I1122 07:50:37.504832 4853 scope.go:117] "RemoveContainer" containerID="c45a1959c0e414dafd26f5e73e4e601d528c37af4e01192bfb61c2212b349250" Nov 22 07:50:37 crc kubenswrapper[4853]: I1122 07:50:37.693498 4853 scope.go:117] "RemoveContainer" containerID="68afc55e3d57a420dba7215df55ae9c9bfd73b9c495a024107265c83a9e48dbd" Nov 22 07:50:37 crc kubenswrapper[4853]: I1122 07:50:37.717219 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6d99f6bc7f-97bjc"] Nov 22 07:50:37 crc kubenswrapper[4853]: I1122 07:50:37.757237 4853 scope.go:117] "RemoveContainer" containerID="21963dd73268b70a39b031054ea8b79de2b654cb57d978be93e84267f53e0e9b" Nov 22 07:50:37 crc kubenswrapper[4853]: I1122 07:50:37.807697 4853 scope.go:117] "RemoveContainer" containerID="74b9c9ca7d062b54b108f2a57237fc92f12fedc9ec490728918a7f4e44519fdc" Nov 22 07:50:37 crc kubenswrapper[4853]: I1122 07:50:37.879989 4853 scope.go:117] "RemoveContainer" containerID="858d25750ba3ff9ba2a3753104d46f1cc3dc01dec156fc976aa58abfa2866e57" Nov 22 07:50:37 crc kubenswrapper[4853]: I1122 07:50:37.924143 4853 generic.go:334] "Generic (PLEG): container finished" podID="71315011-5305-45ee-9dc6-91bdbc93560f" containerID="1e525293ec4c3b8e0d09679112c1e91ec67deeb716e0f04a210b7dc39c5ca8c2" exitCode=143 Nov 22 07:50:37 crc kubenswrapper[4853]: I1122 07:50:37.924224 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"71315011-5305-45ee-9dc6-91bdbc93560f","Type":"ContainerDied","Data":"1e525293ec4c3b8e0d09679112c1e91ec67deeb716e0f04a210b7dc39c5ca8c2"} Nov 22 07:50:37 crc kubenswrapper[4853]: I1122 07:50:37.926260 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d99f6bc7f-97bjc" 
event={"ID":"d8982e8e-d6aa-4588-873e-a1853d2b1ff4","Type":"ContainerStarted","Data":"9299d064e9a29309a1f502edd581f553ac2be79e583d99c8cfa9f30877d096c7"} Nov 22 07:50:38 crc kubenswrapper[4853]: I1122 07:50:38.082892 4853 scope.go:117] "RemoveContainer" containerID="6b38466e08bc82ab301d3bcd89270010578572c8ce2a09a55a55fe14de1dfcd6" Nov 22 07:50:38 crc kubenswrapper[4853]: I1122 07:50:38.956520 4853 generic.go:334] "Generic (PLEG): container finished" podID="6e86b7d7-9438-4d4e-bd8e-163c25cf527a" containerID="5b583cf1f52031dc4aa552e545eaf3ad12419332ce07ce842b18dc8401271888" exitCode=137 Nov 22 07:50:38 crc kubenswrapper[4853]: I1122 07:50:38.957999 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"6e86b7d7-9438-4d4e-bd8e-163c25cf527a","Type":"ContainerDied","Data":"5b583cf1f52031dc4aa552e545eaf3ad12419332ce07ce842b18dc8401271888"} Nov 22 07:50:38 crc kubenswrapper[4853]: I1122 07:50:38.967338 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d99f6bc7f-97bjc" event={"ID":"d8982e8e-d6aa-4588-873e-a1853d2b1ff4","Type":"ContainerStarted","Data":"58684fe89d5563d5a54db3adaeff006980bae1933d4f59db5002019d9431936f"} Nov 22 07:50:38 crc kubenswrapper[4853]: I1122 07:50:38.971133 4853 generic.go:334] "Generic (PLEG): container finished" podID="2538cfd0-3cda-47f6-83ef-c0fab178a95c" containerID="f7d5cd861b3b67a74a8876e94a1aecaeaf9677879dfc1bfa17d9f86ded1f579a" exitCode=0 Nov 22 07:50:38 crc kubenswrapper[4853]: I1122 07:50:38.971259 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-bk8mb" event={"ID":"2538cfd0-3cda-47f6-83ef-c0fab178a95c","Type":"ContainerDied","Data":"f7d5cd861b3b67a74a8876e94a1aecaeaf9677879dfc1bfa17d9f86ded1f579a"} Nov 22 07:50:38 crc kubenswrapper[4853]: I1122 07:50:38.978868 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ec41884a-acf4-414c-8150-c6ec04f8c6f2","Type":"ContainerStarted","Data":"e36d6909cda42d08d24fe24c2163a7f29b7930ab91c8310122953e671d417adf"} Nov 22 07:50:38 crc kubenswrapper[4853]: I1122 07:50:38.979225 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 22 07:50:38 crc kubenswrapper[4853]: I1122 07:50:38.981374 4853 generic.go:334] "Generic (PLEG): container finished" podID="db93d7eb-5143-45e2-afd6-061392f78392" containerID="e7cee258aad30cdb380dd19d0321fa25817561e84117229b59c5aaa479ff2ee1" exitCode=137 Nov 22 07:50:38 crc kubenswrapper[4853]: I1122 07:50:38.981428 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"db93d7eb-5143-45e2-afd6-061392f78392","Type":"ContainerDied","Data":"e7cee258aad30cdb380dd19d0321fa25817561e84117229b59c5aaa479ff2ee1"} Nov 22 07:50:39 crc kubenswrapper[4853]: E1122 07:50:39.518257 4853 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podec41884a_acf4_414c_8150_c6ec04f8c6f2.slice/crio-e36d6909cda42d08d24fe24c2163a7f29b7930ab91c8310122953e671d417adf.scope\": RecentStats: unable to find data in memory cache]" Nov 22 07:50:39 crc kubenswrapper[4853]: I1122 07:50:39.853361 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=4.78499828 podStartE2EDuration="29.853332759s" podCreationTimestamp="2025-11-22 07:50:10 +0000 UTC" firstStartedPulling="2025-11-22 07:50:12.05319827 +0000 UTC 
m=+2410.893820906" lastFinishedPulling="2025-11-22 07:50:37.121532759 +0000 UTC m=+2435.962155385" observedRunningTime="2025-11-22 07:50:39.022535315 +0000 UTC m=+2437.863157941" watchObservedRunningTime="2025-11-22 07:50:39.853332759 +0000 UTC m=+2438.693955385" Nov 22 07:50:39 crc kubenswrapper[4853]: I1122 07:50:39.863041 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 22 07:50:40 crc kubenswrapper[4853]: I1122 07:50:40.521589 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-bk8mb" Nov 22 07:50:40 crc kubenswrapper[4853]: I1122 07:50:40.603645 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2538cfd0-3cda-47f6-83ef-c0fab178a95c-combined-ca-bundle\") pod \"2538cfd0-3cda-47f6-83ef-c0fab178a95c\" (UID: \"2538cfd0-3cda-47f6-83ef-c0fab178a95c\") " Nov 22 07:50:40 crc kubenswrapper[4853]: I1122 07:50:40.603825 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2538cfd0-3cda-47f6-83ef-c0fab178a95c-config-data\") pod \"2538cfd0-3cda-47f6-83ef-c0fab178a95c\" (UID: \"2538cfd0-3cda-47f6-83ef-c0fab178a95c\") " Nov 22 07:50:40 crc kubenswrapper[4853]: I1122 07:50:40.603997 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2538cfd0-3cda-47f6-83ef-c0fab178a95c-scripts\") pod \"2538cfd0-3cda-47f6-83ef-c0fab178a95c\" (UID: \"2538cfd0-3cda-47f6-83ef-c0fab178a95c\") " Nov 22 07:50:40 crc kubenswrapper[4853]: I1122 07:50:40.604172 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8x5v9\" (UniqueName: \"kubernetes.io/projected/2538cfd0-3cda-47f6-83ef-c0fab178a95c-kube-api-access-8x5v9\") pod \"2538cfd0-3cda-47f6-83ef-c0fab178a95c\" (UID: \"2538cfd0-3cda-47f6-83ef-c0fab178a95c\") " Nov 22 07:50:40 crc kubenswrapper[4853]: I1122 07:50:40.610608 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2538cfd0-3cda-47f6-83ef-c0fab178a95c-scripts" (OuterVolumeSpecName: "scripts") pod "2538cfd0-3cda-47f6-83ef-c0fab178a95c" (UID: "2538cfd0-3cda-47f6-83ef-c0fab178a95c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:50:40 crc kubenswrapper[4853]: I1122 07:50:40.610943 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2538cfd0-3cda-47f6-83ef-c0fab178a95c-kube-api-access-8x5v9" (OuterVolumeSpecName: "kube-api-access-8x5v9") pod "2538cfd0-3cda-47f6-83ef-c0fab178a95c" (UID: "2538cfd0-3cda-47f6-83ef-c0fab178a95c"). InnerVolumeSpecName "kube-api-access-8x5v9". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:50:40 crc kubenswrapper[4853]: I1122 07:50:40.640701 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2538cfd0-3cda-47f6-83ef-c0fab178a95c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2538cfd0-3cda-47f6-83ef-c0fab178a95c" (UID: "2538cfd0-3cda-47f6-83ef-c0fab178a95c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:50:40 crc kubenswrapper[4853]: I1122 07:50:40.652768 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2538cfd0-3cda-47f6-83ef-c0fab178a95c-config-data" (OuterVolumeSpecName: "config-data") pod "2538cfd0-3cda-47f6-83ef-c0fab178a95c" (UID: "2538cfd0-3cda-47f6-83ef-c0fab178a95c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:50:40 crc kubenswrapper[4853]: I1122 07:50:40.707293 4853 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2538cfd0-3cda-47f6-83ef-c0fab178a95c-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 07:50:40 crc kubenswrapper[4853]: I1122 07:50:40.707356 4853 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2538cfd0-3cda-47f6-83ef-c0fab178a95c-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 07:50:40 crc kubenswrapper[4853]: I1122 07:50:40.707376 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8x5v9\" (UniqueName: \"kubernetes.io/projected/2538cfd0-3cda-47f6-83ef-c0fab178a95c-kube-api-access-8x5v9\") on node \"crc\" DevicePath \"\"" Nov 22 07:50:40 crc kubenswrapper[4853]: I1122 07:50:40.707392 4853 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2538cfd0-3cda-47f6-83ef-c0fab178a95c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:50:41 crc kubenswrapper[4853]: I1122 07:50:41.007377 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-bk8mb" event={"ID":"2538cfd0-3cda-47f6-83ef-c0fab178a95c","Type":"ContainerDied","Data":"be143895c37f774114af6c11b30cb2d4388e288bbbfe98a7b59faa64170dc6b9"} Nov 22 07:50:41 crc kubenswrapper[4853]: I1122 07:50:41.007828 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="be143895c37f774114af6c11b30cb2d4388e288bbbfe98a7b59faa64170dc6b9" Nov 22 07:50:41 crc kubenswrapper[4853]: I1122 07:50:41.007501 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-bk8mb" Nov 22 07:50:41 crc kubenswrapper[4853]: I1122 07:50:41.008434 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ec41884a-acf4-414c-8150-c6ec04f8c6f2" containerName="proxy-httpd" containerID="cri-o://e36d6909cda42d08d24fe24c2163a7f29b7930ab91c8310122953e671d417adf" gracePeriod=30 Nov 22 07:50:41 crc kubenswrapper[4853]: I1122 07:50:41.008594 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ec41884a-acf4-414c-8150-c6ec04f8c6f2" containerName="sg-core" containerID="cri-o://dd43241a0d8871801b6e9248211553a522c06b91f35e2ef4dbef358c0e2531f5" gracePeriod=30 Nov 22 07:50:41 crc kubenswrapper[4853]: I1122 07:50:41.008673 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ec41884a-acf4-414c-8150-c6ec04f8c6f2" containerName="ceilometer-notification-agent" containerID="cri-o://3fdc1a5bed598e8976ce18d856c226daedb84bce63889b0d8fab5f05d8d66b47" gracePeriod=30 Nov 22 07:50:41 crc kubenswrapper[4853]: I1122 07:50:41.008905 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ec41884a-acf4-414c-8150-c6ec04f8c6f2" containerName="ceilometer-central-agent" containerID="cri-o://7a1adf972b661fce1b2a3d13ec7bd9d732459547292ce63f3622b5910014b9aa" gracePeriod=30 Nov 22 07:50:41 crc kubenswrapper[4853]: I1122 07:50:41.596629 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 22 07:50:41 crc kubenswrapper[4853]: I1122 07:50:41.637523 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db93d7eb-5143-45e2-afd6-061392f78392-config-data\") pod \"db93d7eb-5143-45e2-afd6-061392f78392\" (UID: \"db93d7eb-5143-45e2-afd6-061392f78392\") " Nov 22 07:50:41 crc kubenswrapper[4853]: I1122 07:50:41.637696 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sjpdd\" (UniqueName: \"kubernetes.io/projected/db93d7eb-5143-45e2-afd6-061392f78392-kube-api-access-sjpdd\") pod \"db93d7eb-5143-45e2-afd6-061392f78392\" (UID: \"db93d7eb-5143-45e2-afd6-061392f78392\") " Nov 22 07:50:41 crc kubenswrapper[4853]: I1122 07:50:41.637864 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db93d7eb-5143-45e2-afd6-061392f78392-combined-ca-bundle\") pod \"db93d7eb-5143-45e2-afd6-061392f78392\" (UID: \"db93d7eb-5143-45e2-afd6-061392f78392\") " Nov 22 07:50:41 crc kubenswrapper[4853]: I1122 07:50:41.643964 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db93d7eb-5143-45e2-afd6-061392f78392-kube-api-access-sjpdd" (OuterVolumeSpecName: "kube-api-access-sjpdd") pod "db93d7eb-5143-45e2-afd6-061392f78392" (UID: "db93d7eb-5143-45e2-afd6-061392f78392"). InnerVolumeSpecName "kube-api-access-sjpdd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:50:41 crc kubenswrapper[4853]: I1122 07:50:41.677136 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db93d7eb-5143-45e2-afd6-061392f78392-config-data" (OuterVolumeSpecName: "config-data") pod "db93d7eb-5143-45e2-afd6-061392f78392" (UID: "db93d7eb-5143-45e2-afd6-061392f78392"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:50:41 crc kubenswrapper[4853]: I1122 07:50:41.714219 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db93d7eb-5143-45e2-afd6-061392f78392-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "db93d7eb-5143-45e2-afd6-061392f78392" (UID: "db93d7eb-5143-45e2-afd6-061392f78392"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:50:41 crc kubenswrapper[4853]: I1122 07:50:41.742556 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sjpdd\" (UniqueName: \"kubernetes.io/projected/db93d7eb-5143-45e2-afd6-061392f78392-kube-api-access-sjpdd\") on node \"crc\" DevicePath \"\"" Nov 22 07:50:41 crc kubenswrapper[4853]: I1122 07:50:41.742618 4853 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db93d7eb-5143-45e2-afd6-061392f78392-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:50:41 crc kubenswrapper[4853]: I1122 07:50:41.742633 4853 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db93d7eb-5143-45e2-afd6-061392f78392-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 07:50:42 crc kubenswrapper[4853]: I1122 07:50:42.060821 4853 generic.go:334] "Generic (PLEG): container finished" podID="d8982e8e-d6aa-4588-873e-a1853d2b1ff4" containerID="58684fe89d5563d5a54db3adaeff006980bae1933d4f59db5002019d9431936f" exitCode=0 Nov 22 07:50:42 crc kubenswrapper[4853]: I1122 07:50:42.060915 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d99f6bc7f-97bjc" event={"ID":"d8982e8e-d6aa-4588-873e-a1853d2b1ff4","Type":"ContainerDied","Data":"58684fe89d5563d5a54db3adaeff006980bae1933d4f59db5002019d9431936f"} Nov 22 07:50:42 crc kubenswrapper[4853]: I1122 07:50:42.081716 4853 generic.go:334] "Generic (PLEG): container finished" podID="ec41884a-acf4-414c-8150-c6ec04f8c6f2" containerID="e36d6909cda42d08d24fe24c2163a7f29b7930ab91c8310122953e671d417adf" exitCode=0 Nov 22 07:50:42 crc kubenswrapper[4853]: I1122 07:50:42.082114 4853 generic.go:334] "Generic (PLEG): container finished" podID="ec41884a-acf4-414c-8150-c6ec04f8c6f2" containerID="dd43241a0d8871801b6e9248211553a522c06b91f35e2ef4dbef358c0e2531f5" exitCode=2 Nov 22 07:50:42 crc kubenswrapper[4853]: I1122 07:50:42.082132 4853 generic.go:334] "Generic (PLEG): container finished" podID="ec41884a-acf4-414c-8150-c6ec04f8c6f2" containerID="7a1adf972b661fce1b2a3d13ec7bd9d732459547292ce63f3622b5910014b9aa" exitCode=0 Nov 22 07:50:42 crc kubenswrapper[4853]: I1122 07:50:42.081829 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ec41884a-acf4-414c-8150-c6ec04f8c6f2","Type":"ContainerDied","Data":"e36d6909cda42d08d24fe24c2163a7f29b7930ab91c8310122953e671d417adf"} Nov 22 07:50:42 crc kubenswrapper[4853]: I1122 07:50:42.082255 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ec41884a-acf4-414c-8150-c6ec04f8c6f2","Type":"ContainerDied","Data":"dd43241a0d8871801b6e9248211553a522c06b91f35e2ef4dbef358c0e2531f5"} Nov 22 07:50:42 crc kubenswrapper[4853]: I1122 07:50:42.082269 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ec41884a-acf4-414c-8150-c6ec04f8c6f2","Type":"ContainerDied","Data":"7a1adf972b661fce1b2a3d13ec7bd9d732459547292ce63f3622b5910014b9aa"} Nov 22 07:50:42 
Nov 22 07:50:42 crc kubenswrapper[4853]: I1122 07:50:42.094328 4853 scope.go:117] "RemoveContainer" containerID="e7cee258aad30cdb380dd19d0321fa25817561e84117229b59c5aaa479ff2ee1"
Nov 22 07:50:42 crc kubenswrapper[4853]: I1122 07:50:42.094533 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Nov 22 07:50:42 crc kubenswrapper[4853]: I1122 07:50:42.100382 4853 generic.go:334] "Generic (PLEG): container finished" podID="71315011-5305-45ee-9dc6-91bdbc93560f" containerID="c137700cdd13fb531ad9af7adc75f764e1edf3c8e47847347dc717cf7e8ba16e" exitCode=0
Nov 22 07:50:42 crc kubenswrapper[4853]: I1122 07:50:42.100428 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"71315011-5305-45ee-9dc6-91bdbc93560f","Type":"ContainerDied","Data":"c137700cdd13fb531ad9af7adc75f764e1edf3c8e47847347dc717cf7e8ba16e"}
Nov 22 07:50:42 crc kubenswrapper[4853]: I1122 07:50:42.245390 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Nov 22 07:50:42 crc kubenswrapper[4853]: I1122 07:50:42.261326 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Nov 22 07:50:42 crc kubenswrapper[4853]: I1122 07:50:42.285796 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"]
Nov 22 07:50:42 crc kubenswrapper[4853]: I1122 07:50:42.310115 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"]
Nov 22 07:50:42 crc kubenswrapper[4853]: E1122 07:50:42.310841 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e86b7d7-9438-4d4e-bd8e-163c25cf527a" containerName="nova-cell1-novncproxy-novncproxy"
Nov 22 07:50:42 crc kubenswrapper[4853]: I1122 07:50:42.310871 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e86b7d7-9438-4d4e-bd8e-163c25cf527a" containerName="nova-cell1-novncproxy-novncproxy"
Nov 22 07:50:42 crc kubenswrapper[4853]: E1122 07:50:42.310895 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db93d7eb-5143-45e2-afd6-061392f78392" containerName="nova-scheduler-scheduler"
Nov 22 07:50:42 crc kubenswrapper[4853]: I1122 07:50:42.310903 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="db93d7eb-5143-45e2-afd6-061392f78392" containerName="nova-scheduler-scheduler"
Nov 22 07:50:42 crc kubenswrapper[4853]: E1122 07:50:42.310938 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2538cfd0-3cda-47f6-83ef-c0fab178a95c" containerName="aodh-db-sync"
Nov 22 07:50:42 crc kubenswrapper[4853]: I1122 07:50:42.310945 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="2538cfd0-3cda-47f6-83ef-c0fab178a95c" containerName="aodh-db-sync"
Nov 22 07:50:42 crc kubenswrapper[4853]: I1122 07:50:42.311164 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e86b7d7-9438-4d4e-bd8e-163c25cf527a" containerName="nova-cell1-novncproxy-novncproxy"
Nov 22 07:50:42 crc kubenswrapper[4853]: I1122 07:50:42.311180 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="db93d7eb-5143-45e2-afd6-061392f78392" containerName="nova-scheduler-scheduler"
Nov 22 07:50:42 crc kubenswrapper[4853]: I1122
07:50:42.311203 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="2538cfd0-3cda-47f6-83ef-c0fab178a95c" containerName="aodh-db-sync" Nov 22 07:50:42 crc kubenswrapper[4853]: I1122 07:50:42.312230 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 22 07:50:42 crc kubenswrapper[4853]: I1122 07:50:42.326230 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Nov 22 07:50:42 crc kubenswrapper[4853]: I1122 07:50:42.361357 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e86b7d7-9438-4d4e-bd8e-163c25cf527a-config-data\") pod \"6e86b7d7-9438-4d4e-bd8e-163c25cf527a\" (UID: \"6e86b7d7-9438-4d4e-bd8e-163c25cf527a\") " Nov 22 07:50:42 crc kubenswrapper[4853]: I1122 07:50:42.361629 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/6e86b7d7-9438-4d4e-bd8e-163c25cf527a-nova-novncproxy-tls-certs\") pod \"6e86b7d7-9438-4d4e-bd8e-163c25cf527a\" (UID: \"6e86b7d7-9438-4d4e-bd8e-163c25cf527a\") " Nov 22 07:50:42 crc kubenswrapper[4853]: I1122 07:50:42.361709 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/6e86b7d7-9438-4d4e-bd8e-163c25cf527a-vencrypt-tls-certs\") pod \"6e86b7d7-9438-4d4e-bd8e-163c25cf527a\" (UID: \"6e86b7d7-9438-4d4e-bd8e-163c25cf527a\") " Nov 22 07:50:42 crc kubenswrapper[4853]: I1122 07:50:42.361770 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e86b7d7-9438-4d4e-bd8e-163c25cf527a-combined-ca-bundle\") pod \"6e86b7d7-9438-4d4e-bd8e-163c25cf527a\" (UID: \"6e86b7d7-9438-4d4e-bd8e-163c25cf527a\") " Nov 22 07:50:42 crc kubenswrapper[4853]: I1122 07:50:42.361858 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xf74b\" (UniqueName: \"kubernetes.io/projected/6e86b7d7-9438-4d4e-bd8e-163c25cf527a-kube-api-access-xf74b\") pod \"6e86b7d7-9438-4d4e-bd8e-163c25cf527a\" (UID: \"6e86b7d7-9438-4d4e-bd8e-163c25cf527a\") " Nov 22 07:50:42 crc kubenswrapper[4853]: I1122 07:50:42.362394 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c79984de-ac53-48e9-b443-a5b7128315ef-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"c79984de-ac53-48e9-b443-a5b7128315ef\") " pod="openstack/nova-scheduler-0" Nov 22 07:50:42 crc kubenswrapper[4853]: I1122 07:50:42.362500 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8h4mw\" (UniqueName: \"kubernetes.io/projected/c79984de-ac53-48e9-b443-a5b7128315ef-kube-api-access-8h4mw\") pod \"nova-scheduler-0\" (UID: \"c79984de-ac53-48e9-b443-a5b7128315ef\") " pod="openstack/nova-scheduler-0" Nov 22 07:50:42 crc kubenswrapper[4853]: I1122 07:50:42.362722 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c79984de-ac53-48e9-b443-a5b7128315ef-config-data\") pod \"nova-scheduler-0\" (UID: \"c79984de-ac53-48e9-b443-a5b7128315ef\") " pod="openstack/nova-scheduler-0" Nov 22 07:50:42 crc kubenswrapper[4853]: I1122 07:50:42.368938 4853 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 22 07:50:42 crc kubenswrapper[4853]: I1122 07:50:42.374331 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e86b7d7-9438-4d4e-bd8e-163c25cf527a-kube-api-access-xf74b" (OuterVolumeSpecName: "kube-api-access-xf74b") pod "6e86b7d7-9438-4d4e-bd8e-163c25cf527a" (UID: "6e86b7d7-9438-4d4e-bd8e-163c25cf527a"). InnerVolumeSpecName "kube-api-access-xf74b". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:50:42 crc kubenswrapper[4853]: I1122 07:50:42.405684 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e86b7d7-9438-4d4e-bd8e-163c25cf527a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6e86b7d7-9438-4d4e-bd8e-163c25cf527a" (UID: "6e86b7d7-9438-4d4e-bd8e-163c25cf527a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:50:42 crc kubenswrapper[4853]: I1122 07:50:42.421720 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e86b7d7-9438-4d4e-bd8e-163c25cf527a-config-data" (OuterVolumeSpecName: "config-data") pod "6e86b7d7-9438-4d4e-bd8e-163c25cf527a" (UID: "6e86b7d7-9438-4d4e-bd8e-163c25cf527a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:50:42 crc kubenswrapper[4853]: I1122 07:50:42.444396 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e86b7d7-9438-4d4e-bd8e-163c25cf527a-vencrypt-tls-certs" (OuterVolumeSpecName: "vencrypt-tls-certs") pod "6e86b7d7-9438-4d4e-bd8e-163c25cf527a" (UID: "6e86b7d7-9438-4d4e-bd8e-163c25cf527a"). InnerVolumeSpecName "vencrypt-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:50:42 crc kubenswrapper[4853]: I1122 07:50:42.451172 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e86b7d7-9438-4d4e-bd8e-163c25cf527a-nova-novncproxy-tls-certs" (OuterVolumeSpecName: "nova-novncproxy-tls-certs") pod "6e86b7d7-9438-4d4e-bd8e-163c25cf527a" (UID: "6e86b7d7-9438-4d4e-bd8e-163c25cf527a"). InnerVolumeSpecName "nova-novncproxy-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:50:42 crc kubenswrapper[4853]: I1122 07:50:42.464723 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c79984de-ac53-48e9-b443-a5b7128315ef-config-data\") pod \"nova-scheduler-0\" (UID: \"c79984de-ac53-48e9-b443-a5b7128315ef\") " pod="openstack/nova-scheduler-0" Nov 22 07:50:42 crc kubenswrapper[4853]: I1122 07:50:42.464884 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c79984de-ac53-48e9-b443-a5b7128315ef-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"c79984de-ac53-48e9-b443-a5b7128315ef\") " pod="openstack/nova-scheduler-0" Nov 22 07:50:42 crc kubenswrapper[4853]: I1122 07:50:42.464974 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8h4mw\" (UniqueName: \"kubernetes.io/projected/c79984de-ac53-48e9-b443-a5b7128315ef-kube-api-access-8h4mw\") pod \"nova-scheduler-0\" (UID: \"c79984de-ac53-48e9-b443-a5b7128315ef\") " pod="openstack/nova-scheduler-0" Nov 22 07:50:42 crc kubenswrapper[4853]: I1122 07:50:42.465188 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xf74b\" (UniqueName: \"kubernetes.io/projected/6e86b7d7-9438-4d4e-bd8e-163c25cf527a-kube-api-access-xf74b\") on node \"crc\" DevicePath \"\"" Nov 22 07:50:42 crc kubenswrapper[4853]: I1122 07:50:42.465201 4853 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e86b7d7-9438-4d4e-bd8e-163c25cf527a-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 07:50:42 crc kubenswrapper[4853]: I1122 07:50:42.465210 4853 reconciler_common.go:293] "Volume detached for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/6e86b7d7-9438-4d4e-bd8e-163c25cf527a-nova-novncproxy-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 22 07:50:42 crc kubenswrapper[4853]: I1122 07:50:42.465218 4853 reconciler_common.go:293] "Volume detached for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/6e86b7d7-9438-4d4e-bd8e-163c25cf527a-vencrypt-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 22 07:50:42 crc kubenswrapper[4853]: I1122 07:50:42.465227 4853 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e86b7d7-9438-4d4e-bd8e-163c25cf527a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:50:42 crc kubenswrapper[4853]: I1122 07:50:42.470320 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c79984de-ac53-48e9-b443-a5b7128315ef-config-data\") pod \"nova-scheduler-0\" (UID: \"c79984de-ac53-48e9-b443-a5b7128315ef\") " pod="openstack/nova-scheduler-0" Nov 22 07:50:42 crc kubenswrapper[4853]: I1122 07:50:42.471230 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c79984de-ac53-48e9-b443-a5b7128315ef-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"c79984de-ac53-48e9-b443-a5b7128315ef\") " pod="openstack/nova-scheduler-0" Nov 22 07:50:42 crc kubenswrapper[4853]: I1122 07:50:42.486301 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8h4mw\" (UniqueName: \"kubernetes.io/projected/c79984de-ac53-48e9-b443-a5b7128315ef-kube-api-access-8h4mw\") pod \"nova-scheduler-0\" (UID: 
\"c79984de-ac53-48e9-b443-a5b7128315ef\") " pod="openstack/nova-scheduler-0" Nov 22 07:50:42 crc kubenswrapper[4853]: I1122 07:50:42.637132 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 22 07:50:43 crc kubenswrapper[4853]: I1122 07:50:43.117651 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"6e86b7d7-9438-4d4e-bd8e-163c25cf527a","Type":"ContainerDied","Data":"f638b24fea304c59501a1db40fb71c42d84149fc7c96658631755a832a150820"} Nov 22 07:50:43 crc kubenswrapper[4853]: I1122 07:50:43.117729 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 22 07:50:43 crc kubenswrapper[4853]: I1122 07:50:43.118271 4853 scope.go:117] "RemoveContainer" containerID="5b583cf1f52031dc4aa552e545eaf3ad12419332ce07ce842b18dc8401271888" Nov 22 07:50:43 crc kubenswrapper[4853]: I1122 07:50:43.174928 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 22 07:50:43 crc kubenswrapper[4853]: I1122 07:50:43.187506 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 22 07:50:43 crc kubenswrapper[4853]: I1122 07:50:43.206615 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 22 07:50:43 crc kubenswrapper[4853]: I1122 07:50:43.234228 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 22 07:50:43 crc kubenswrapper[4853]: I1122 07:50:43.236401 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 22 07:50:43 crc kubenswrapper[4853]: I1122 07:50:43.241475 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Nov 22 07:50:43 crc kubenswrapper[4853]: I1122 07:50:43.241976 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Nov 22 07:50:43 crc kubenswrapper[4853]: I1122 07:50:43.242779 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Nov 22 07:50:43 crc kubenswrapper[4853]: I1122 07:50:43.259971 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 22 07:50:43 crc kubenswrapper[4853]: I1122 07:50:43.297319 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/85f3f206-5a15-4b98-8af2-4ef0a1ca123a-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"85f3f206-5a15-4b98-8af2-4ef0a1ca123a\") " pod="openstack/nova-cell1-novncproxy-0" Nov 22 07:50:43 crc kubenswrapper[4853]: I1122 07:50:43.297516 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85f3f206-5a15-4b98-8af2-4ef0a1ca123a-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"85f3f206-5a15-4b98-8af2-4ef0a1ca123a\") " pod="openstack/nova-cell1-novncproxy-0" Nov 22 07:50:43 crc kubenswrapper[4853]: I1122 07:50:43.297804 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/85f3f206-5a15-4b98-8af2-4ef0a1ca123a-nova-novncproxy-tls-certs\") pod 
\"nova-cell1-novncproxy-0\" (UID: \"85f3f206-5a15-4b98-8af2-4ef0a1ca123a\") " pod="openstack/nova-cell1-novncproxy-0" Nov 22 07:50:43 crc kubenswrapper[4853]: I1122 07:50:43.297926 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/85f3f206-5a15-4b98-8af2-4ef0a1ca123a-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"85f3f206-5a15-4b98-8af2-4ef0a1ca123a\") " pod="openstack/nova-cell1-novncproxy-0" Nov 22 07:50:43 crc kubenswrapper[4853]: I1122 07:50:43.297979 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l2ql4\" (UniqueName: \"kubernetes.io/projected/85f3f206-5a15-4b98-8af2-4ef0a1ca123a-kube-api-access-l2ql4\") pod \"nova-cell1-novncproxy-0\" (UID: \"85f3f206-5a15-4b98-8af2-4ef0a1ca123a\") " pod="openstack/nova-cell1-novncproxy-0" Nov 22 07:50:43 crc kubenswrapper[4853]: I1122 07:50:43.400056 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/85f3f206-5a15-4b98-8af2-4ef0a1ca123a-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"85f3f206-5a15-4b98-8af2-4ef0a1ca123a\") " pod="openstack/nova-cell1-novncproxy-0" Nov 22 07:50:43 crc kubenswrapper[4853]: I1122 07:50:43.400149 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85f3f206-5a15-4b98-8af2-4ef0a1ca123a-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"85f3f206-5a15-4b98-8af2-4ef0a1ca123a\") " pod="openstack/nova-cell1-novncproxy-0" Nov 22 07:50:43 crc kubenswrapper[4853]: I1122 07:50:43.400234 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/85f3f206-5a15-4b98-8af2-4ef0a1ca123a-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"85f3f206-5a15-4b98-8af2-4ef0a1ca123a\") " pod="openstack/nova-cell1-novncproxy-0" Nov 22 07:50:43 crc kubenswrapper[4853]: I1122 07:50:43.400325 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/85f3f206-5a15-4b98-8af2-4ef0a1ca123a-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"85f3f206-5a15-4b98-8af2-4ef0a1ca123a\") " pod="openstack/nova-cell1-novncproxy-0" Nov 22 07:50:43 crc kubenswrapper[4853]: I1122 07:50:43.400361 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l2ql4\" (UniqueName: \"kubernetes.io/projected/85f3f206-5a15-4b98-8af2-4ef0a1ca123a-kube-api-access-l2ql4\") pod \"nova-cell1-novncproxy-0\" (UID: \"85f3f206-5a15-4b98-8af2-4ef0a1ca123a\") " pod="openstack/nova-cell1-novncproxy-0" Nov 22 07:50:43 crc kubenswrapper[4853]: I1122 07:50:43.408452 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/85f3f206-5a15-4b98-8af2-4ef0a1ca123a-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"85f3f206-5a15-4b98-8af2-4ef0a1ca123a\") " pod="openstack/nova-cell1-novncproxy-0" Nov 22 07:50:43 crc kubenswrapper[4853]: I1122 07:50:43.408481 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/85f3f206-5a15-4b98-8af2-4ef0a1ca123a-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: 
\"85f3f206-5a15-4b98-8af2-4ef0a1ca123a\") " pod="openstack/nova-cell1-novncproxy-0" Nov 22 07:50:43 crc kubenswrapper[4853]: I1122 07:50:43.408600 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85f3f206-5a15-4b98-8af2-4ef0a1ca123a-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"85f3f206-5a15-4b98-8af2-4ef0a1ca123a\") " pod="openstack/nova-cell1-novncproxy-0" Nov 22 07:50:43 crc kubenswrapper[4853]: I1122 07:50:43.411282 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/85f3f206-5a15-4b98-8af2-4ef0a1ca123a-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"85f3f206-5a15-4b98-8af2-4ef0a1ca123a\") " pod="openstack/nova-cell1-novncproxy-0" Nov 22 07:50:43 crc kubenswrapper[4853]: I1122 07:50:43.419716 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l2ql4\" (UniqueName: \"kubernetes.io/projected/85f3f206-5a15-4b98-8af2-4ef0a1ca123a-kube-api-access-l2ql4\") pod \"nova-cell1-novncproxy-0\" (UID: \"85f3f206-5a15-4b98-8af2-4ef0a1ca123a\") " pod="openstack/nova-cell1-novncproxy-0" Nov 22 07:50:43 crc kubenswrapper[4853]: I1122 07:50:43.461837 4853 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-api-0" podUID="71315011-5305-45ee-9dc6-91bdbc93560f" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.1.6:8774/\": dial tcp 10.217.1.6:8774: connect: connection refused" Nov 22 07:50:43 crc kubenswrapper[4853]: I1122 07:50:43.461859 4853 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-api-0" podUID="71315011-5305-45ee-9dc6-91bdbc93560f" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.1.6:8774/\": dial tcp 10.217.1.6:8774: connect: connection refused" Nov 22 07:50:43 crc kubenswrapper[4853]: I1122 07:50:43.655783 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 22 07:50:43 crc kubenswrapper[4853]: I1122 07:50:43.763208 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6e86b7d7-9438-4d4e-bd8e-163c25cf527a" path="/var/lib/kubelet/pods/6e86b7d7-9438-4d4e-bd8e-163c25cf527a/volumes" Nov 22 07:50:43 crc kubenswrapper[4853]: I1122 07:50:43.764188 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db93d7eb-5143-45e2-afd6-061392f78392" path="/var/lib/kubelet/pods/db93d7eb-5143-45e2-afd6-061392f78392/volumes" Nov 22 07:50:44 crc kubenswrapper[4853]: I1122 07:50:44.131313 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"c79984de-ac53-48e9-b443-a5b7128315ef","Type":"ContainerStarted","Data":"4629a74ff08aa06bef759e0a1d107087d8982f47a589adb4bca716d4fb96258d"} Nov 22 07:50:44 crc kubenswrapper[4853]: I1122 07:50:44.227896 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 22 07:50:44 crc kubenswrapper[4853]: I1122 07:50:44.762425 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Nov 22 07:50:44 crc kubenswrapper[4853]: I1122 07:50:44.832998 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Nov 22 07:50:44 crc kubenswrapper[4853]: I1122 07:50:44.846201 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Nov 22 07:50:44 crc kubenswrapper[4853]: I1122 07:50:44.846366 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-jm7rg" Nov 22 07:50:44 crc kubenswrapper[4853]: I1122 07:50:44.846555 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Nov 22 07:50:44 crc kubenswrapper[4853]: I1122 07:50:44.852606 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xb4n9\" (UniqueName: \"kubernetes.io/projected/75849edb-9f0f-49d2-97b5-ca5070f3116f-kube-api-access-xb4n9\") pod \"aodh-0\" (UID: \"75849edb-9f0f-49d2-97b5-ca5070f3116f\") " pod="openstack/aodh-0" Nov 22 07:50:44 crc kubenswrapper[4853]: I1122 07:50:44.852731 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75849edb-9f0f-49d2-97b5-ca5070f3116f-combined-ca-bundle\") pod \"aodh-0\" (UID: \"75849edb-9f0f-49d2-97b5-ca5070f3116f\") " pod="openstack/aodh-0" Nov 22 07:50:44 crc kubenswrapper[4853]: I1122 07:50:44.852924 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75849edb-9f0f-49d2-97b5-ca5070f3116f-config-data\") pod \"aodh-0\" (UID: \"75849edb-9f0f-49d2-97b5-ca5070f3116f\") " pod="openstack/aodh-0" Nov 22 07:50:44 crc kubenswrapper[4853]: I1122 07:50:44.853029 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/75849edb-9f0f-49d2-97b5-ca5070f3116f-scripts\") pod \"aodh-0\" (UID: \"75849edb-9f0f-49d2-97b5-ca5070f3116f\") " pod="openstack/aodh-0" Nov 22 07:50:44 crc kubenswrapper[4853]: I1122 07:50:44.858786 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Nov 22 07:50:44 crc kubenswrapper[4853]: I1122 07:50:44.958806 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xb4n9\" (UniqueName: \"kubernetes.io/projected/75849edb-9f0f-49d2-97b5-ca5070f3116f-kube-api-access-xb4n9\") pod \"aodh-0\" (UID: \"75849edb-9f0f-49d2-97b5-ca5070f3116f\") " pod="openstack/aodh-0" Nov 22 07:50:44 crc kubenswrapper[4853]: I1122 07:50:44.958911 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75849edb-9f0f-49d2-97b5-ca5070f3116f-combined-ca-bundle\") pod \"aodh-0\" (UID: \"75849edb-9f0f-49d2-97b5-ca5070f3116f\") " pod="openstack/aodh-0" Nov 22 07:50:44 crc kubenswrapper[4853]: I1122 07:50:44.960140 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75849edb-9f0f-49d2-97b5-ca5070f3116f-config-data\") pod \"aodh-0\" (UID: \"75849edb-9f0f-49d2-97b5-ca5070f3116f\") " pod="openstack/aodh-0" Nov 22 07:50:44 crc kubenswrapper[4853]: I1122 07:50:44.960240 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/75849edb-9f0f-49d2-97b5-ca5070f3116f-scripts\") pod \"aodh-0\" (UID: \"75849edb-9f0f-49d2-97b5-ca5070f3116f\") " pod="openstack/aodh-0" Nov 22 07:50:44 crc kubenswrapper[4853]: 
I1122 07:50:44.988154 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75849edb-9f0f-49d2-97b5-ca5070f3116f-combined-ca-bundle\") pod \"aodh-0\" (UID: \"75849edb-9f0f-49d2-97b5-ca5070f3116f\") " pod="openstack/aodh-0"
Nov 22 07:50:44 crc kubenswrapper[4853]: I1122 07:50:44.988402 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75849edb-9f0f-49d2-97b5-ca5070f3116f-config-data\") pod \"aodh-0\" (UID: \"75849edb-9f0f-49d2-97b5-ca5070f3116f\") " pod="openstack/aodh-0"
Nov 22 07:50:44 crc kubenswrapper[4853]: I1122 07:50:44.992806 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/75849edb-9f0f-49d2-97b5-ca5070f3116f-scripts\") pod \"aodh-0\" (UID: \"75849edb-9f0f-49d2-97b5-ca5070f3116f\") " pod="openstack/aodh-0"
Nov 22 07:50:44 crc kubenswrapper[4853]: I1122 07:50:44.997606 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xb4n9\" (UniqueName: \"kubernetes.io/projected/75849edb-9f0f-49d2-97b5-ca5070f3116f-kube-api-access-xb4n9\") pod \"aodh-0\" (UID: \"75849edb-9f0f-49d2-97b5-ca5070f3116f\") " pod="openstack/aodh-0"
Nov 22 07:50:45 crc kubenswrapper[4853]: I1122 07:50:45.151505 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"c79984de-ac53-48e9-b443-a5b7128315ef","Type":"ContainerStarted","Data":"ed9bebd0b041d4b3ee13d5824ecd046887dc7d5377cbad11717b37c9a3b7c931"}
Nov 22 07:50:45 crc kubenswrapper[4853]: I1122 07:50:45.159071 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"85f3f206-5a15-4b98-8af2-4ef0a1ca123a","Type":"ContainerStarted","Data":"16496265b1aa307dc16e5f2caaf6201229eae7e27b3744eb94e95b379f14b6ba"}
Nov 22 07:50:45 crc kubenswrapper[4853]: I1122 07:50:45.159127 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"85f3f206-5a15-4b98-8af2-4ef0a1ca123a","Type":"ContainerStarted","Data":"036e8829f9a0f5c4ecf895b7f2580ec58092ea9ca46da82cd2d02f57815a8b0c"}
Nov 22 07:50:45 crc kubenswrapper[4853]: I1122 07:50:45.161841 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d99f6bc7f-97bjc" event={"ID":"d8982e8e-d6aa-4588-873e-a1853d2b1ff4","Type":"ContainerStarted","Data":"0e14c3c05834d6313835abc55ebde8795a38b5a094dfb8b1553c60fdb5555ad0"}
Nov 22 07:50:45 crc kubenswrapper[4853]: I1122 07:50:45.162183 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6d99f6bc7f-97bjc"
Nov 22 07:50:45 crc kubenswrapper[4853]: I1122 07:50:45.185166 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.1851135680000002 podStartE2EDuration="3.185113568s" podCreationTimestamp="2025-11-22 07:50:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:50:45.175718156 +0000 UTC m=+2444.016340782" watchObservedRunningTime="2025-11-22 07:50:45.185113568 +0000 UTC m=+2444.025736204"
Nov 22 07:50:45 crc kubenswrapper[4853]: I1122 07:50:45.187902 4853 generic.go:334] "Generic (PLEG): container finished" podID="ec41884a-acf4-414c-8150-c6ec04f8c6f2" containerID="3fdc1a5bed598e8976ce18d856c226daedb84bce63889b0d8fab5f05d8d66b47" exitCode=0
Nov 22 07:50:45 crc kubenswrapper[4853]: I1122 07:50:45.187963 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ec41884a-acf4-414c-8150-c6ec04f8c6f2","Type":"ContainerDied","Data":"3fdc1a5bed598e8976ce18d856c226daedb84bce63889b0d8fab5f05d8d66b47"}
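The pod_startup_latency_tracker entries quote wall-clock timestamps plus a monotonic "m=+..." offset, and use the Go zero time (0001-01-01) for the pull fields when no image pull happened (as for nova-scheduler-0 above). A sketch of parsing those quoted values, using the ceilometer-0 pull timestamps logged earlier in this section; the helpers, including trimming the over-long fractional seconds to the six digits strptime accepts, are illustrative assumptions rather than a documented format contract:

```python
import re
from datetime import datetime

ZERO = "0001-01-01 00:00:00 +0000 UTC"

def parse_ts(value):
    # Drop the monotonic-clock suffix klog appends, if present.
    value = value.split(" m=+")[0]
    if value == ZERO:
        return None  # zero time: the image was already present, no pull
    # %f accepts at most 6 fractional digits; the log carries up to 9.
    value = re.sub(r"\.(\d{6})\d*", r".\1", value)
    return datetime.strptime(value, "%Y-%m-%d %H:%M:%S.%f %z %Z")

# Values copied from the ceilometer-0 startup-duration entry above.
first = parse_ts("2025-11-22 07:50:12.05319827 +0000 UTC m=+2410.893820906")
last = parse_ts("2025-11-22 07:50:37.121532759 +0000 UTC m=+2435.962155385")
if first and last:
    # ~25.07 s; the m=+ offsets (2435.962 - 2410.894) agree.
    print("image pull took", (last - first).total_seconds(), "s")
```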
Nov 22 07:50:45 crc kubenswrapper[4853]: I1122 07:50:45.195201 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0"
Nov 22 07:50:45 crc kubenswrapper[4853]: I1122 07:50:45.216725 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6d99f6bc7f-97bjc" podStartSLOduration=12.216697064 podStartE2EDuration="12.216697064s" podCreationTimestamp="2025-11-22 07:50:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:50:45.208010912 +0000 UTC m=+2444.048633558" watchObservedRunningTime="2025-11-22 07:50:45.216697064 +0000 UTC m=+2444.057319700"
Nov 22 07:50:45 crc kubenswrapper[4853]: I1122 07:50:45.514174 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Nov 22 07:50:45 crc kubenswrapper[4853]: I1122 07:50:45.692273 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71315011-5305-45ee-9dc6-91bdbc93560f-combined-ca-bundle\") pod \"71315011-5305-45ee-9dc6-91bdbc93560f\" (UID: \"71315011-5305-45ee-9dc6-91bdbc93560f\") "
Nov 22 07:50:45 crc kubenswrapper[4853]: I1122 07:50:45.693001 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/71315011-5305-45ee-9dc6-91bdbc93560f-logs\") pod \"71315011-5305-45ee-9dc6-91bdbc93560f\" (UID: \"71315011-5305-45ee-9dc6-91bdbc93560f\") "
Nov 22 07:50:45 crc kubenswrapper[4853]: I1122 07:50:45.693192 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6kn9t\" (UniqueName: \"kubernetes.io/projected/71315011-5305-45ee-9dc6-91bdbc93560f-kube-api-access-6kn9t\") pod \"71315011-5305-45ee-9dc6-91bdbc93560f\" (UID: \"71315011-5305-45ee-9dc6-91bdbc93560f\") "
Nov 22 07:50:45 crc kubenswrapper[4853]: I1122 07:50:45.693330 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71315011-5305-45ee-9dc6-91bdbc93560f-config-data\") pod \"71315011-5305-45ee-9dc6-91bdbc93560f\" (UID: \"71315011-5305-45ee-9dc6-91bdbc93560f\") "
Nov 22 07:50:45 crc kubenswrapper[4853]: I1122 07:50:45.694325 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/71315011-5305-45ee-9dc6-91bdbc93560f-logs" (OuterVolumeSpecName: "logs") pod "71315011-5305-45ee-9dc6-91bdbc93560f" (UID: "71315011-5305-45ee-9dc6-91bdbc93560f"). InnerVolumeSpecName "logs".
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:50:45 crc kubenswrapper[4853]: I1122 07:50:45.694553 4853 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/71315011-5305-45ee-9dc6-91bdbc93560f-logs\") on node \"crc\" DevicePath \"\"" Nov 22 07:50:45 crc kubenswrapper[4853]: I1122 07:50:45.705774 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71315011-5305-45ee-9dc6-91bdbc93560f-kube-api-access-6kn9t" (OuterVolumeSpecName: "kube-api-access-6kn9t") pod "71315011-5305-45ee-9dc6-91bdbc93560f" (UID: "71315011-5305-45ee-9dc6-91bdbc93560f"). InnerVolumeSpecName "kube-api-access-6kn9t". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:50:45 crc kubenswrapper[4853]: I1122 07:50:45.745889 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71315011-5305-45ee-9dc6-91bdbc93560f-config-data" (OuterVolumeSpecName: "config-data") pod "71315011-5305-45ee-9dc6-91bdbc93560f" (UID: "71315011-5305-45ee-9dc6-91bdbc93560f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:50:45 crc kubenswrapper[4853]: I1122 07:50:45.762400 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71315011-5305-45ee-9dc6-91bdbc93560f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "71315011-5305-45ee-9dc6-91bdbc93560f" (UID: "71315011-5305-45ee-9dc6-91bdbc93560f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:50:45 crc kubenswrapper[4853]: I1122 07:50:45.798095 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6kn9t\" (UniqueName: \"kubernetes.io/projected/71315011-5305-45ee-9dc6-91bdbc93560f-kube-api-access-6kn9t\") on node \"crc\" DevicePath \"\"" Nov 22 07:50:45 crc kubenswrapper[4853]: I1122 07:50:45.798136 4853 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71315011-5305-45ee-9dc6-91bdbc93560f-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 07:50:45 crc kubenswrapper[4853]: I1122 07:50:45.798167 4853 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71315011-5305-45ee-9dc6-91bdbc93560f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:50:45 crc kubenswrapper[4853]: I1122 07:50:45.974785 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Nov 22 07:50:46 crc kubenswrapper[4853]: I1122 07:50:46.245395 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"71315011-5305-45ee-9dc6-91bdbc93560f","Type":"ContainerDied","Data":"4aaec5fc9f846365e7333931da9d77a14141f7cd6dd7f84f6fb594c73c3133e0"} Nov 22 07:50:46 crc kubenswrapper[4853]: I1122 07:50:46.245996 4853 scope.go:117] "RemoveContainer" containerID="c137700cdd13fb531ad9af7adc75f764e1edf3c8e47847347dc717cf7e8ba16e" Nov 22 07:50:46 crc kubenswrapper[4853]: I1122 07:50:46.245646 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 22 07:50:46 crc kubenswrapper[4853]: I1122 07:50:46.255583 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"75849edb-9f0f-49d2-97b5-ca5070f3116f","Type":"ContainerStarted","Data":"e6edc282efe45b2ed946954f20de03f92f37609c59095880f39932eabfafe48e"} Nov 22 07:50:46 crc kubenswrapper[4853]: I1122 07:50:46.288998 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=3.288971441 podStartE2EDuration="3.288971441s" podCreationTimestamp="2025-11-22 07:50:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:50:46.276128507 +0000 UTC m=+2445.116751143" watchObservedRunningTime="2025-11-22 07:50:46.288971441 +0000 UTC m=+2445.129594067" Nov 22 07:50:46 crc kubenswrapper[4853]: I1122 07:50:46.353032 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 22 07:50:46 crc kubenswrapper[4853]: I1122 07:50:46.368807 4853 scope.go:117] "RemoveContainer" containerID="1e525293ec4c3b8e0d09679112c1e91ec67deeb716e0f04a210b7dc39c5ca8c2" Nov 22 07:50:46 crc kubenswrapper[4853]: I1122 07:50:46.370851 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Nov 22 07:50:46 crc kubenswrapper[4853]: I1122 07:50:46.372163 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 22 07:50:46 crc kubenswrapper[4853]: I1122 07:50:46.396321 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 22 07:50:46 crc kubenswrapper[4853]: E1122 07:50:46.396987 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71315011-5305-45ee-9dc6-91bdbc93560f" containerName="nova-api-api" Nov 22 07:50:46 crc kubenswrapper[4853]: I1122 07:50:46.397008 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="71315011-5305-45ee-9dc6-91bdbc93560f" containerName="nova-api-api" Nov 22 07:50:46 crc kubenswrapper[4853]: E1122 07:50:46.397021 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71315011-5305-45ee-9dc6-91bdbc93560f" containerName="nova-api-log" Nov 22 07:50:46 crc kubenswrapper[4853]: I1122 07:50:46.397027 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="71315011-5305-45ee-9dc6-91bdbc93560f" containerName="nova-api-log" Nov 22 07:50:46 crc kubenswrapper[4853]: E1122 07:50:46.397042 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec41884a-acf4-414c-8150-c6ec04f8c6f2" containerName="ceilometer-central-agent" Nov 22 07:50:46 crc kubenswrapper[4853]: I1122 07:50:46.397050 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec41884a-acf4-414c-8150-c6ec04f8c6f2" containerName="ceilometer-central-agent" Nov 22 07:50:46 crc kubenswrapper[4853]: E1122 07:50:46.397061 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec41884a-acf4-414c-8150-c6ec04f8c6f2" containerName="proxy-httpd" Nov 22 07:50:46 crc kubenswrapper[4853]: I1122 07:50:46.397067 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec41884a-acf4-414c-8150-c6ec04f8c6f2" containerName="proxy-httpd" Nov 22 07:50:46 crc kubenswrapper[4853]: E1122 07:50:46.397095 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec41884a-acf4-414c-8150-c6ec04f8c6f2" containerName="sg-core" Nov 22 07:50:46 crc kubenswrapper[4853]: I1122 07:50:46.397102 4853 
state_mem.go:107] "Deleted CPUSet assignment" podUID="ec41884a-acf4-414c-8150-c6ec04f8c6f2" containerName="sg-core" Nov 22 07:50:46 crc kubenswrapper[4853]: E1122 07:50:46.397117 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec41884a-acf4-414c-8150-c6ec04f8c6f2" containerName="ceilometer-notification-agent" Nov 22 07:50:46 crc kubenswrapper[4853]: I1122 07:50:46.397123 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec41884a-acf4-414c-8150-c6ec04f8c6f2" containerName="ceilometer-notification-agent" Nov 22 07:50:46 crc kubenswrapper[4853]: I1122 07:50:46.397383 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec41884a-acf4-414c-8150-c6ec04f8c6f2" containerName="proxy-httpd" Nov 22 07:50:46 crc kubenswrapper[4853]: I1122 07:50:46.397394 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="71315011-5305-45ee-9dc6-91bdbc93560f" containerName="nova-api-log" Nov 22 07:50:46 crc kubenswrapper[4853]: I1122 07:50:46.397411 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec41884a-acf4-414c-8150-c6ec04f8c6f2" containerName="sg-core" Nov 22 07:50:46 crc kubenswrapper[4853]: I1122 07:50:46.397421 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec41884a-acf4-414c-8150-c6ec04f8c6f2" containerName="ceilometer-central-agent" Nov 22 07:50:46 crc kubenswrapper[4853]: I1122 07:50:46.397431 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec41884a-acf4-414c-8150-c6ec04f8c6f2" containerName="ceilometer-notification-agent" Nov 22 07:50:46 crc kubenswrapper[4853]: I1122 07:50:46.397447 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="71315011-5305-45ee-9dc6-91bdbc93560f" containerName="nova-api-api" Nov 22 07:50:46 crc kubenswrapper[4853]: I1122 07:50:46.399278 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 22 07:50:46 crc kubenswrapper[4853]: I1122 07:50:46.403692 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 22 07:50:46 crc kubenswrapper[4853]: I1122 07:50:46.403913 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Nov 22 07:50:46 crc kubenswrapper[4853]: I1122 07:50:46.407623 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Nov 22 07:50:46 crc kubenswrapper[4853]: I1122 07:50:46.417908 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 22 07:50:46 crc kubenswrapper[4853]: I1122 07:50:46.542716 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ec41884a-acf4-414c-8150-c6ec04f8c6f2-log-httpd\") pod \"ec41884a-acf4-414c-8150-c6ec04f8c6f2\" (UID: \"ec41884a-acf4-414c-8150-c6ec04f8c6f2\") " Nov 22 07:50:46 crc kubenswrapper[4853]: I1122 07:50:46.543411 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ec41884a-acf4-414c-8150-c6ec04f8c6f2-scripts\") pod \"ec41884a-acf4-414c-8150-c6ec04f8c6f2\" (UID: \"ec41884a-acf4-414c-8150-c6ec04f8c6f2\") " Nov 22 07:50:46 crc kubenswrapper[4853]: I1122 07:50:46.543480 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ec41884a-acf4-414c-8150-c6ec04f8c6f2-run-httpd\") pod \"ec41884a-acf4-414c-8150-c6ec04f8c6f2\" (UID: \"ec41884a-acf4-414c-8150-c6ec04f8c6f2\") " Nov 22 07:50:46 crc kubenswrapper[4853]: I1122 07:50:46.543529 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9phdm\" (UniqueName: \"kubernetes.io/projected/ec41884a-acf4-414c-8150-c6ec04f8c6f2-kube-api-access-9phdm\") pod \"ec41884a-acf4-414c-8150-c6ec04f8c6f2\" (UID: \"ec41884a-acf4-414c-8150-c6ec04f8c6f2\") " Nov 22 07:50:46 crc kubenswrapper[4853]: I1122 07:50:46.543562 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec41884a-acf4-414c-8150-c6ec04f8c6f2-combined-ca-bundle\") pod \"ec41884a-acf4-414c-8150-c6ec04f8c6f2\" (UID: \"ec41884a-acf4-414c-8150-c6ec04f8c6f2\") " Nov 22 07:50:46 crc kubenswrapper[4853]: I1122 07:50:46.543732 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ec41884a-acf4-414c-8150-c6ec04f8c6f2-sg-core-conf-yaml\") pod \"ec41884a-acf4-414c-8150-c6ec04f8c6f2\" (UID: \"ec41884a-acf4-414c-8150-c6ec04f8c6f2\") " Nov 22 07:50:46 crc kubenswrapper[4853]: I1122 07:50:46.543782 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec41884a-acf4-414c-8150-c6ec04f8c6f2-config-data\") pod \"ec41884a-acf4-414c-8150-c6ec04f8c6f2\" (UID: \"ec41884a-acf4-414c-8150-c6ec04f8c6f2\") " Nov 22 07:50:46 crc kubenswrapper[4853]: I1122 07:50:46.543880 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec41884a-acf4-414c-8150-c6ec04f8c6f2-ceilometer-tls-certs\") pod \"ec41884a-acf4-414c-8150-c6ec04f8c6f2\" (UID: \"ec41884a-acf4-414c-8150-c6ec04f8c6f2\") " Nov 22 07:50:46 crc kubenswrapper[4853]: 
I1122 07:50:46.544615 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3cce67c7-3f8d-4931-986a-5ff6db89e8c6-config-data\") pod \"nova-api-0\" (UID: \"3cce67c7-3f8d-4931-986a-5ff6db89e8c6\") " pod="openstack/nova-api-0" Nov 22 07:50:46 crc kubenswrapper[4853]: I1122 07:50:46.544723 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-llmpv\" (UniqueName: \"kubernetes.io/projected/3cce67c7-3f8d-4931-986a-5ff6db89e8c6-kube-api-access-llmpv\") pod \"nova-api-0\" (UID: \"3cce67c7-3f8d-4931-986a-5ff6db89e8c6\") " pod="openstack/nova-api-0" Nov 22 07:50:46 crc kubenswrapper[4853]: I1122 07:50:46.545066 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3cce67c7-3f8d-4931-986a-5ff6db89e8c6-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"3cce67c7-3f8d-4931-986a-5ff6db89e8c6\") " pod="openstack/nova-api-0" Nov 22 07:50:46 crc kubenswrapper[4853]: I1122 07:50:46.545105 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3cce67c7-3f8d-4931-986a-5ff6db89e8c6-logs\") pod \"nova-api-0\" (UID: \"3cce67c7-3f8d-4931-986a-5ff6db89e8c6\") " pod="openstack/nova-api-0" Nov 22 07:50:46 crc kubenswrapper[4853]: I1122 07:50:46.545458 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3cce67c7-3f8d-4931-986a-5ff6db89e8c6-public-tls-certs\") pod \"nova-api-0\" (UID: \"3cce67c7-3f8d-4931-986a-5ff6db89e8c6\") " pod="openstack/nova-api-0" Nov 22 07:50:46 crc kubenswrapper[4853]: I1122 07:50:46.545489 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3cce67c7-3f8d-4931-986a-5ff6db89e8c6-internal-tls-certs\") pod \"nova-api-0\" (UID: \"3cce67c7-3f8d-4931-986a-5ff6db89e8c6\") " pod="openstack/nova-api-0" Nov 22 07:50:46 crc kubenswrapper[4853]: I1122 07:50:46.546870 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ec41884a-acf4-414c-8150-c6ec04f8c6f2-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "ec41884a-acf4-414c-8150-c6ec04f8c6f2" (UID: "ec41884a-acf4-414c-8150-c6ec04f8c6f2"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:50:46 crc kubenswrapper[4853]: I1122 07:50:46.547276 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ec41884a-acf4-414c-8150-c6ec04f8c6f2-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "ec41884a-acf4-414c-8150-c6ec04f8c6f2" (UID: "ec41884a-acf4-414c-8150-c6ec04f8c6f2"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:50:46 crc kubenswrapper[4853]: I1122 07:50:46.552641 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec41884a-acf4-414c-8150-c6ec04f8c6f2-kube-api-access-9phdm" (OuterVolumeSpecName: "kube-api-access-9phdm") pod "ec41884a-acf4-414c-8150-c6ec04f8c6f2" (UID: "ec41884a-acf4-414c-8150-c6ec04f8c6f2"). InnerVolumeSpecName "kube-api-access-9phdm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:50:46 crc kubenswrapper[4853]: I1122 07:50:46.553291 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec41884a-acf4-414c-8150-c6ec04f8c6f2-scripts" (OuterVolumeSpecName: "scripts") pod "ec41884a-acf4-414c-8150-c6ec04f8c6f2" (UID: "ec41884a-acf4-414c-8150-c6ec04f8c6f2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:50:46 crc kubenswrapper[4853]: I1122 07:50:46.589734 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec41884a-acf4-414c-8150-c6ec04f8c6f2-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "ec41884a-acf4-414c-8150-c6ec04f8c6f2" (UID: "ec41884a-acf4-414c-8150-c6ec04f8c6f2"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:50:46 crc kubenswrapper[4853]: I1122 07:50:46.618494 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec41884a-acf4-414c-8150-c6ec04f8c6f2-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "ec41884a-acf4-414c-8150-c6ec04f8c6f2" (UID: "ec41884a-acf4-414c-8150-c6ec04f8c6f2"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:50:46 crc kubenswrapper[4853]: I1122 07:50:46.647927 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3cce67c7-3f8d-4931-986a-5ff6db89e8c6-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"3cce67c7-3f8d-4931-986a-5ff6db89e8c6\") " pod="openstack/nova-api-0" Nov 22 07:50:46 crc kubenswrapper[4853]: I1122 07:50:46.647983 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3cce67c7-3f8d-4931-986a-5ff6db89e8c6-logs\") pod \"nova-api-0\" (UID: \"3cce67c7-3f8d-4931-986a-5ff6db89e8c6\") " pod="openstack/nova-api-0" Nov 22 07:50:46 crc kubenswrapper[4853]: I1122 07:50:46.648185 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3cce67c7-3f8d-4931-986a-5ff6db89e8c6-public-tls-certs\") pod \"nova-api-0\" (UID: \"3cce67c7-3f8d-4931-986a-5ff6db89e8c6\") " pod="openstack/nova-api-0" Nov 22 07:50:46 crc kubenswrapper[4853]: I1122 07:50:46.648219 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3cce67c7-3f8d-4931-986a-5ff6db89e8c6-internal-tls-certs\") pod \"nova-api-0\" (UID: \"3cce67c7-3f8d-4931-986a-5ff6db89e8c6\") " pod="openstack/nova-api-0" Nov 22 07:50:46 crc kubenswrapper[4853]: I1122 07:50:46.648262 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3cce67c7-3f8d-4931-986a-5ff6db89e8c6-config-data\") pod \"nova-api-0\" (UID: \"3cce67c7-3f8d-4931-986a-5ff6db89e8c6\") " pod="openstack/nova-api-0" Nov 22 07:50:46 crc kubenswrapper[4853]: I1122 07:50:46.648308 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-llmpv\" (UniqueName: \"kubernetes.io/projected/3cce67c7-3f8d-4931-986a-5ff6db89e8c6-kube-api-access-llmpv\") pod \"nova-api-0\" (UID: \"3cce67c7-3f8d-4931-986a-5ff6db89e8c6\") " pod="openstack/nova-api-0" Nov 22 07:50:46 crc kubenswrapper[4853]: I1122 07:50:46.648410 4853 
reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ec41884a-acf4-414c-8150-c6ec04f8c6f2-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 07:50:46 crc kubenswrapper[4853]: I1122 07:50:46.648425 4853 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ec41884a-acf4-414c-8150-c6ec04f8c6f2-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 22 07:50:46 crc kubenswrapper[4853]: I1122 07:50:46.648435 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9phdm\" (UniqueName: \"kubernetes.io/projected/ec41884a-acf4-414c-8150-c6ec04f8c6f2-kube-api-access-9phdm\") on node \"crc\" DevicePath \"\"" Nov 22 07:50:46 crc kubenswrapper[4853]: I1122 07:50:46.648446 4853 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ec41884a-acf4-414c-8150-c6ec04f8c6f2-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 22 07:50:46 crc kubenswrapper[4853]: I1122 07:50:46.648457 4853 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec41884a-acf4-414c-8150-c6ec04f8c6f2-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 22 07:50:46 crc kubenswrapper[4853]: I1122 07:50:46.648465 4853 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ec41884a-acf4-414c-8150-c6ec04f8c6f2-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 22 07:50:46 crc kubenswrapper[4853]: I1122 07:50:46.653169 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3cce67c7-3f8d-4931-986a-5ff6db89e8c6-logs\") pod \"nova-api-0\" (UID: \"3cce67c7-3f8d-4931-986a-5ff6db89e8c6\") " pod="openstack/nova-api-0" Nov 22 07:50:46 crc kubenswrapper[4853]: I1122 07:50:46.654552 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3cce67c7-3f8d-4931-986a-5ff6db89e8c6-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"3cce67c7-3f8d-4931-986a-5ff6db89e8c6\") " pod="openstack/nova-api-0" Nov 22 07:50:46 crc kubenswrapper[4853]: I1122 07:50:46.654671 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec41884a-acf4-414c-8150-c6ec04f8c6f2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ec41884a-acf4-414c-8150-c6ec04f8c6f2" (UID: "ec41884a-acf4-414c-8150-c6ec04f8c6f2"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:50:46 crc kubenswrapper[4853]: I1122 07:50:46.654843 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3cce67c7-3f8d-4931-986a-5ff6db89e8c6-public-tls-certs\") pod \"nova-api-0\" (UID: \"3cce67c7-3f8d-4931-986a-5ff6db89e8c6\") " pod="openstack/nova-api-0" Nov 22 07:50:46 crc kubenswrapper[4853]: I1122 07:50:46.667690 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3cce67c7-3f8d-4931-986a-5ff6db89e8c6-internal-tls-certs\") pod \"nova-api-0\" (UID: \"3cce67c7-3f8d-4931-986a-5ff6db89e8c6\") " pod="openstack/nova-api-0" Nov 22 07:50:46 crc kubenswrapper[4853]: I1122 07:50:46.668418 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3cce67c7-3f8d-4931-986a-5ff6db89e8c6-config-data\") pod \"nova-api-0\" (UID: \"3cce67c7-3f8d-4931-986a-5ff6db89e8c6\") " pod="openstack/nova-api-0" Nov 22 07:50:46 crc kubenswrapper[4853]: I1122 07:50:46.679367 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-llmpv\" (UniqueName: \"kubernetes.io/projected/3cce67c7-3f8d-4931-986a-5ff6db89e8c6-kube-api-access-llmpv\") pod \"nova-api-0\" (UID: \"3cce67c7-3f8d-4931-986a-5ff6db89e8c6\") " pod="openstack/nova-api-0" Nov 22 07:50:46 crc kubenswrapper[4853]: I1122 07:50:46.728925 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec41884a-acf4-414c-8150-c6ec04f8c6f2-config-data" (OuterVolumeSpecName: "config-data") pod "ec41884a-acf4-414c-8150-c6ec04f8c6f2" (UID: "ec41884a-acf4-414c-8150-c6ec04f8c6f2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:50:46 crc kubenswrapper[4853]: I1122 07:50:46.748213 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 22 07:50:46 crc kubenswrapper[4853]: I1122 07:50:46.750542 4853 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec41884a-acf4-414c-8150-c6ec04f8c6f2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:50:46 crc kubenswrapper[4853]: I1122 07:50:46.750578 4853 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec41884a-acf4-414c-8150-c6ec04f8c6f2-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 07:50:47 crc kubenswrapper[4853]: I1122 07:50:47.275040 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ec41884a-acf4-414c-8150-c6ec04f8c6f2","Type":"ContainerDied","Data":"12c6914ec3f88f375de0819d8021f32ced4f56d582287365955a5e713beeab98"} Nov 22 07:50:47 crc kubenswrapper[4853]: I1122 07:50:47.275123 4853 scope.go:117] "RemoveContainer" containerID="e36d6909cda42d08d24fe24c2163a7f29b7930ab91c8310122953e671d417adf" Nov 22 07:50:47 crc kubenswrapper[4853]: I1122 07:50:47.275133 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 22 07:50:47 crc kubenswrapper[4853]: I1122 07:50:47.321473 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 22 07:50:47 crc kubenswrapper[4853]: I1122 07:50:47.369140 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 22 07:50:47 crc kubenswrapper[4853]: I1122 07:50:47.385822 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 22 07:50:47 crc kubenswrapper[4853]: I1122 07:50:47.391360 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 22 07:50:47 crc kubenswrapper[4853]: I1122 07:50:47.396718 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 22 07:50:47 crc kubenswrapper[4853]: I1122 07:50:47.397026 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 22 07:50:47 crc kubenswrapper[4853]: I1122 07:50:47.397157 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Nov 22 07:50:47 crc kubenswrapper[4853]: I1122 07:50:47.443052 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 22 07:50:47 crc kubenswrapper[4853]: I1122 07:50:47.508818 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 22 07:50:47 crc kubenswrapper[4853]: I1122 07:50:47.596904 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/afcc3ae8-8b9d-4c00-b2c2-c601accbe056-scripts\") pod \"ceilometer-0\" (UID: \"afcc3ae8-8b9d-4c00-b2c2-c601accbe056\") " pod="openstack/ceilometer-0" Nov 22 07:50:47 crc kubenswrapper[4853]: I1122 07:50:47.597381 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/afcc3ae8-8b9d-4c00-b2c2-c601accbe056-config-data\") pod \"ceilometer-0\" (UID: \"afcc3ae8-8b9d-4c00-b2c2-c601accbe056\") " pod="openstack/ceilometer-0" Nov 22 07:50:47 crc kubenswrapper[4853]: I1122 07:50:47.597571 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/afcc3ae8-8b9d-4c00-b2c2-c601accbe056-log-httpd\") pod \"ceilometer-0\" (UID: \"afcc3ae8-8b9d-4c00-b2c2-c601accbe056\") " pod="openstack/ceilometer-0" Nov 22 07:50:47 crc kubenswrapper[4853]: I1122 07:50:47.597698 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/afcc3ae8-8b9d-4c00-b2c2-c601accbe056-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"afcc3ae8-8b9d-4c00-b2c2-c601accbe056\") " pod="openstack/ceilometer-0" Nov 22 07:50:47 crc kubenswrapper[4853]: I1122 07:50:47.598072 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nm8hf\" (UniqueName: \"kubernetes.io/projected/afcc3ae8-8b9d-4c00-b2c2-c601accbe056-kube-api-access-nm8hf\") pod \"ceilometer-0\" (UID: \"afcc3ae8-8b9d-4c00-b2c2-c601accbe056\") " pod="openstack/ceilometer-0" Nov 22 07:50:47 crc kubenswrapper[4853]: I1122 07:50:47.598408 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/afcc3ae8-8b9d-4c00-b2c2-c601accbe056-run-httpd\") pod \"ceilometer-0\" (UID: \"afcc3ae8-8b9d-4c00-b2c2-c601accbe056\") " pod="openstack/ceilometer-0" Nov 22 07:50:47 crc kubenswrapper[4853]: I1122 07:50:47.598725 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/afcc3ae8-8b9d-4c00-b2c2-c601accbe056-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"afcc3ae8-8b9d-4c00-b2c2-c601accbe056\") " pod="openstack/ceilometer-0" Nov 22 07:50:47 crc kubenswrapper[4853]: I1122 07:50:47.599847 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/afcc3ae8-8b9d-4c00-b2c2-c601accbe056-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"afcc3ae8-8b9d-4c00-b2c2-c601accbe056\") " pod="openstack/ceilometer-0" Nov 22 07:50:47 crc kubenswrapper[4853]: I1122 07:50:47.637702 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Nov 22 07:50:47 crc kubenswrapper[4853]: I1122 07:50:47.703683 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/afcc3ae8-8b9d-4c00-b2c2-c601accbe056-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"afcc3ae8-8b9d-4c00-b2c2-c601accbe056\") " pod="openstack/ceilometer-0" Nov 22 07:50:47 crc kubenswrapper[4853]: I1122 07:50:47.703835 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nm8hf\" (UniqueName: \"kubernetes.io/projected/afcc3ae8-8b9d-4c00-b2c2-c601accbe056-kube-api-access-nm8hf\") pod \"ceilometer-0\" (UID: \"afcc3ae8-8b9d-4c00-b2c2-c601accbe056\") " pod="openstack/ceilometer-0" Nov 22 07:50:47 crc kubenswrapper[4853]: I1122 07:50:47.703883 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/afcc3ae8-8b9d-4c00-b2c2-c601accbe056-run-httpd\") pod \"ceilometer-0\" (UID: \"afcc3ae8-8b9d-4c00-b2c2-c601accbe056\") " pod="openstack/ceilometer-0" Nov 22 07:50:47 crc kubenswrapper[4853]: I1122 07:50:47.703921 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/afcc3ae8-8b9d-4c00-b2c2-c601accbe056-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"afcc3ae8-8b9d-4c00-b2c2-c601accbe056\") " pod="openstack/ceilometer-0" Nov 22 07:50:47 crc kubenswrapper[4853]: I1122 07:50:47.704105 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/afcc3ae8-8b9d-4c00-b2c2-c601accbe056-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"afcc3ae8-8b9d-4c00-b2c2-c601accbe056\") " pod="openstack/ceilometer-0" Nov 22 07:50:47 crc kubenswrapper[4853]: I1122 07:50:47.704223 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/afcc3ae8-8b9d-4c00-b2c2-c601accbe056-scripts\") pod \"ceilometer-0\" (UID: \"afcc3ae8-8b9d-4c00-b2c2-c601accbe056\") " pod="openstack/ceilometer-0" Nov 22 07:50:47 crc kubenswrapper[4853]: I1122 07:50:47.704318 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/afcc3ae8-8b9d-4c00-b2c2-c601accbe056-config-data\") pod \"ceilometer-0\" (UID: 
\"afcc3ae8-8b9d-4c00-b2c2-c601accbe056\") " pod="openstack/ceilometer-0" Nov 22 07:50:47 crc kubenswrapper[4853]: I1122 07:50:47.704414 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/afcc3ae8-8b9d-4c00-b2c2-c601accbe056-log-httpd\") pod \"ceilometer-0\" (UID: \"afcc3ae8-8b9d-4c00-b2c2-c601accbe056\") " pod="openstack/ceilometer-0" Nov 22 07:50:47 crc kubenswrapper[4853]: I1122 07:50:47.705626 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/afcc3ae8-8b9d-4c00-b2c2-c601accbe056-run-httpd\") pod \"ceilometer-0\" (UID: \"afcc3ae8-8b9d-4c00-b2c2-c601accbe056\") " pod="openstack/ceilometer-0" Nov 22 07:50:47 crc kubenswrapper[4853]: I1122 07:50:47.706176 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/afcc3ae8-8b9d-4c00-b2c2-c601accbe056-log-httpd\") pod \"ceilometer-0\" (UID: \"afcc3ae8-8b9d-4c00-b2c2-c601accbe056\") " pod="openstack/ceilometer-0" Nov 22 07:50:47 crc kubenswrapper[4853]: I1122 07:50:47.714710 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/afcc3ae8-8b9d-4c00-b2c2-c601accbe056-scripts\") pod \"ceilometer-0\" (UID: \"afcc3ae8-8b9d-4c00-b2c2-c601accbe056\") " pod="openstack/ceilometer-0" Nov 22 07:50:47 crc kubenswrapper[4853]: I1122 07:50:47.715477 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/afcc3ae8-8b9d-4c00-b2c2-c601accbe056-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"afcc3ae8-8b9d-4c00-b2c2-c601accbe056\") " pod="openstack/ceilometer-0" Nov 22 07:50:47 crc kubenswrapper[4853]: I1122 07:50:47.716494 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/afcc3ae8-8b9d-4c00-b2c2-c601accbe056-config-data\") pod \"ceilometer-0\" (UID: \"afcc3ae8-8b9d-4c00-b2c2-c601accbe056\") " pod="openstack/ceilometer-0" Nov 22 07:50:47 crc kubenswrapper[4853]: I1122 07:50:47.719181 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/afcc3ae8-8b9d-4c00-b2c2-c601accbe056-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"afcc3ae8-8b9d-4c00-b2c2-c601accbe056\") " pod="openstack/ceilometer-0" Nov 22 07:50:47 crc kubenswrapper[4853]: I1122 07:50:47.728587 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/afcc3ae8-8b9d-4c00-b2c2-c601accbe056-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"afcc3ae8-8b9d-4c00-b2c2-c601accbe056\") " pod="openstack/ceilometer-0" Nov 22 07:50:47 crc kubenswrapper[4853]: I1122 07:50:47.741522 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nm8hf\" (UniqueName: \"kubernetes.io/projected/afcc3ae8-8b9d-4c00-b2c2-c601accbe056-kube-api-access-nm8hf\") pod \"ceilometer-0\" (UID: \"afcc3ae8-8b9d-4c00-b2c2-c601accbe056\") " pod="openstack/ceilometer-0" Nov 22 07:50:47 crc kubenswrapper[4853]: I1122 07:50:47.746394 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 22 07:50:47 crc kubenswrapper[4853]: I1122 07:50:47.764019 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="71315011-5305-45ee-9dc6-91bdbc93560f" path="/var/lib/kubelet/pods/71315011-5305-45ee-9dc6-91bdbc93560f/volumes" Nov 22 07:50:47 crc kubenswrapper[4853]: I1122 07:50:47.764790 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ec41884a-acf4-414c-8150-c6ec04f8c6f2" path="/var/lib/kubelet/pods/ec41884a-acf4-414c-8150-c6ec04f8c6f2/volumes" Nov 22 07:50:47 crc kubenswrapper[4853]: I1122 07:50:47.843392 4853 scope.go:117] "RemoveContainer" containerID="dd43241a0d8871801b6e9248211553a522c06b91f35e2ef4dbef358c0e2531f5" Nov 22 07:50:47 crc kubenswrapper[4853]: W1122 07:50:47.878782 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3cce67c7_3f8d_4931_986a_5ff6db89e8c6.slice/crio-0878d1eec4f70d7d3113c4a32b157cd68d837e64096c1236c54f09d4ce16cc59 WatchSource:0}: Error finding container 0878d1eec4f70d7d3113c4a32b157cd68d837e64096c1236c54f09d4ce16cc59: Status 404 returned error can't find the container with id 0878d1eec4f70d7d3113c4a32b157cd68d837e64096c1236c54f09d4ce16cc59 Nov 22 07:50:48 crc kubenswrapper[4853]: I1122 07:50:48.088385 4853 scope.go:117] "RemoveContainer" containerID="3fdc1a5bed598e8976ce18d856c226daedb84bce63889b0d8fab5f05d8d66b47" Nov 22 07:50:48 crc kubenswrapper[4853]: I1122 07:50:48.148077 4853 scope.go:117] "RemoveContainer" containerID="7a1adf972b661fce1b2a3d13ec7bd9d732459547292ce63f3622b5910014b9aa" Nov 22 07:50:48 crc kubenswrapper[4853]: I1122 07:50:48.348739 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3cce67c7-3f8d-4931-986a-5ff6db89e8c6","Type":"ContainerStarted","Data":"0878d1eec4f70d7d3113c4a32b157cd68d837e64096c1236c54f09d4ce16cc59"} Nov 22 07:50:48 crc kubenswrapper[4853]: W1122 07:50:48.453231 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podafcc3ae8_8b9d_4c00_b2c2_c601accbe056.slice/crio-7064545bf7693ffa43a063e3ddf1e5efecb912fefbaeb97721e47dc37209d4ab WatchSource:0}: Error finding container 7064545bf7693ffa43a063e3ddf1e5efecb912fefbaeb97721e47dc37209d4ab: Status 404 returned error can't find the container with id 7064545bf7693ffa43a063e3ddf1e5efecb912fefbaeb97721e47dc37209d4ab Nov 22 07:50:48 crc kubenswrapper[4853]: I1122 07:50:48.468554 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 22 07:50:48 crc kubenswrapper[4853]: I1122 07:50:48.631276 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Nov 22 07:50:48 crc kubenswrapper[4853]: I1122 07:50:48.656250 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Nov 22 07:50:49 crc kubenswrapper[4853]: I1122 07:50:49.118972 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6d99f6bc7f-97bjc" Nov 22 07:50:49 crc kubenswrapper[4853]: I1122 07:50:49.408290 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"75849edb-9f0f-49d2-97b5-ca5070f3116f","Type":"ContainerStarted","Data":"19817544de55f39096555895f24dbd6c2507c39adcdeef2f57827e5f888eeacd"} Nov 22 07:50:49 crc kubenswrapper[4853]: I1122 07:50:49.408660 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/dnsmasq-dns-7877d89589-qms8z"] Nov 22 07:50:49 crc kubenswrapper[4853]: I1122 07:50:49.408939 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7877d89589-qms8z" podUID="d7e1b24e-7343-4816-8c6e-86c7af484d6f" containerName="dnsmasq-dns" containerID="cri-o://8715cc97b61666797a2cda87fef90d1b23e6b737f9886f86ac62307a4f22f3f9" gracePeriod=10 Nov 22 07:50:49 crc kubenswrapper[4853]: I1122 07:50:49.418723 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3cce67c7-3f8d-4931-986a-5ff6db89e8c6","Type":"ContainerStarted","Data":"a3c582deba6728dcdf27393506ccb8cad8c7f1dbbb1106bb9b8c986b8763aeaa"} Nov 22 07:50:49 crc kubenswrapper[4853]: I1122 07:50:49.434321 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"afcc3ae8-8b9d-4c00-b2c2-c601accbe056","Type":"ContainerStarted","Data":"7064545bf7693ffa43a063e3ddf1e5efecb912fefbaeb97721e47dc37209d4ab"} Nov 22 07:50:49 crc kubenswrapper[4853]: I1122 07:50:49.885375 4853 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 22 07:50:50 crc kubenswrapper[4853]: I1122 07:50:50.450005 4853 generic.go:334] "Generic (PLEG): container finished" podID="d7e1b24e-7343-4816-8c6e-86c7af484d6f" containerID="8715cc97b61666797a2cda87fef90d1b23e6b737f9886f86ac62307a4f22f3f9" exitCode=0 Nov 22 07:50:50 crc kubenswrapper[4853]: I1122 07:50:50.450459 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7877d89589-qms8z" event={"ID":"d7e1b24e-7343-4816-8c6e-86c7af484d6f","Type":"ContainerDied","Data":"8715cc97b61666797a2cda87fef90d1b23e6b737f9886f86ac62307a4f22f3f9"} Nov 22 07:50:50 crc kubenswrapper[4853]: I1122 07:50:50.462919 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3cce67c7-3f8d-4931-986a-5ff6db89e8c6","Type":"ContainerStarted","Data":"e25ecc34a2913fffaf0090f0eb190190dfe3748cfdb9fea1e3f28303301ef9b5"} Nov 22 07:50:50 crc kubenswrapper[4853]: I1122 07:50:50.530471 4853 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-7877d89589-qms8z" podUID="d7e1b24e-7343-4816-8c6e-86c7af484d6f" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.243:5353: connect: connection refused" Nov 22 07:50:51 crc kubenswrapper[4853]: I1122 07:50:51.083472 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7877d89589-qms8z" Nov 22 07:50:51 crc kubenswrapper[4853]: I1122 07:50:51.235820 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7e1b24e-7343-4816-8c6e-86c7af484d6f-config\") pod \"d7e1b24e-7343-4816-8c6e-86c7af484d6f\" (UID: \"d7e1b24e-7343-4816-8c6e-86c7af484d6f\") " Nov 22 07:50:51 crc kubenswrapper[4853]: I1122 07:50:51.235908 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d7e1b24e-7343-4816-8c6e-86c7af484d6f-ovsdbserver-sb\") pod \"d7e1b24e-7343-4816-8c6e-86c7af484d6f\" (UID: \"d7e1b24e-7343-4816-8c6e-86c7af484d6f\") " Nov 22 07:50:51 crc kubenswrapper[4853]: I1122 07:50:51.236966 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d7e1b24e-7343-4816-8c6e-86c7af484d6f-dns-swift-storage-0\") pod \"d7e1b24e-7343-4816-8c6e-86c7af484d6f\" (UID: \"d7e1b24e-7343-4816-8c6e-86c7af484d6f\") " Nov 22 07:50:51 crc kubenswrapper[4853]: I1122 07:50:51.237023 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d7e1b24e-7343-4816-8c6e-86c7af484d6f-dns-svc\") pod \"d7e1b24e-7343-4816-8c6e-86c7af484d6f\" (UID: \"d7e1b24e-7343-4816-8c6e-86c7af484d6f\") " Nov 22 07:50:51 crc kubenswrapper[4853]: I1122 07:50:51.237090 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w6qjd\" (UniqueName: \"kubernetes.io/projected/d7e1b24e-7343-4816-8c6e-86c7af484d6f-kube-api-access-w6qjd\") pod \"d7e1b24e-7343-4816-8c6e-86c7af484d6f\" (UID: \"d7e1b24e-7343-4816-8c6e-86c7af484d6f\") " Nov 22 07:50:51 crc kubenswrapper[4853]: I1122 07:50:51.237115 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d7e1b24e-7343-4816-8c6e-86c7af484d6f-ovsdbserver-nb\") pod \"d7e1b24e-7343-4816-8c6e-86c7af484d6f\" (UID: \"d7e1b24e-7343-4816-8c6e-86c7af484d6f\") " Nov 22 07:50:51 crc kubenswrapper[4853]: I1122 07:50:51.264126 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7e1b24e-7343-4816-8c6e-86c7af484d6f-kube-api-access-w6qjd" (OuterVolumeSpecName: "kube-api-access-w6qjd") pod "d7e1b24e-7343-4816-8c6e-86c7af484d6f" (UID: "d7e1b24e-7343-4816-8c6e-86c7af484d6f"). InnerVolumeSpecName "kube-api-access-w6qjd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:50:51 crc kubenswrapper[4853]: I1122 07:50:51.315219 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7e1b24e-7343-4816-8c6e-86c7af484d6f-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d7e1b24e-7343-4816-8c6e-86c7af484d6f" (UID: "d7e1b24e-7343-4816-8c6e-86c7af484d6f"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:50:51 crc kubenswrapper[4853]: I1122 07:50:51.315289 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7e1b24e-7343-4816-8c6e-86c7af484d6f-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "d7e1b24e-7343-4816-8c6e-86c7af484d6f" (UID: "d7e1b24e-7343-4816-8c6e-86c7af484d6f"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:50:51 crc kubenswrapper[4853]: I1122 07:50:51.328515 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7e1b24e-7343-4816-8c6e-86c7af484d6f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d7e1b24e-7343-4816-8c6e-86c7af484d6f" (UID: "d7e1b24e-7343-4816-8c6e-86c7af484d6f"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:50:51 crc kubenswrapper[4853]: I1122 07:50:51.338451 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7e1b24e-7343-4816-8c6e-86c7af484d6f-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d7e1b24e-7343-4816-8c6e-86c7af484d6f" (UID: "d7e1b24e-7343-4816-8c6e-86c7af484d6f"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:50:51 crc kubenswrapper[4853]: I1122 07:50:51.339690 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7e1b24e-7343-4816-8c6e-86c7af484d6f-config" (OuterVolumeSpecName: "config") pod "d7e1b24e-7343-4816-8c6e-86c7af484d6f" (UID: "d7e1b24e-7343-4816-8c6e-86c7af484d6f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:50:51 crc kubenswrapper[4853]: I1122 07:50:51.340194 4853 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7e1b24e-7343-4816-8c6e-86c7af484d6f-config\") on node \"crc\" DevicePath \"\"" Nov 22 07:50:51 crc kubenswrapper[4853]: I1122 07:50:51.340231 4853 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d7e1b24e-7343-4816-8c6e-86c7af484d6f-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 22 07:50:51 crc kubenswrapper[4853]: I1122 07:50:51.340245 4853 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d7e1b24e-7343-4816-8c6e-86c7af484d6f-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 22 07:50:51 crc kubenswrapper[4853]: I1122 07:50:51.340257 4853 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d7e1b24e-7343-4816-8c6e-86c7af484d6f-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 22 07:50:51 crc kubenswrapper[4853]: I1122 07:50:51.340265 4853 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d7e1b24e-7343-4816-8c6e-86c7af484d6f-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 22 07:50:51 crc kubenswrapper[4853]: I1122 07:50:51.340275 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w6qjd\" (UniqueName: \"kubernetes.io/projected/d7e1b24e-7343-4816-8c6e-86c7af484d6f-kube-api-access-w6qjd\") on node \"crc\" DevicePath \"\"" Nov 22 07:50:51 crc kubenswrapper[4853]: I1122 07:50:51.478509 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7877d89589-qms8z" event={"ID":"d7e1b24e-7343-4816-8c6e-86c7af484d6f","Type":"ContainerDied","Data":"02a383a74e4b5e75caafb3d528ffc309d5867b0c4a516ec22bb92830506aa954"} Nov 22 07:50:51 crc kubenswrapper[4853]: I1122 07:50:51.478627 4853 scope.go:117] "RemoveContainer" containerID="8715cc97b61666797a2cda87fef90d1b23e6b737f9886f86ac62307a4f22f3f9" Nov 22 07:50:51 crc kubenswrapper[4853]: I1122 07:50:51.478538 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7877d89589-qms8z" Nov 22 07:50:51 crc kubenswrapper[4853]: I1122 07:50:51.528136 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=5.528106257 podStartE2EDuration="5.528106257s" podCreationTimestamp="2025-11-22 07:50:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:50:51.51144056 +0000 UTC m=+2450.352063206" watchObservedRunningTime="2025-11-22 07:50:51.528106257 +0000 UTC m=+2450.368728883" Nov 22 07:50:51 crc kubenswrapper[4853]: I1122 07:50:51.549251 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7877d89589-qms8z"] Nov 22 07:50:51 crc kubenswrapper[4853]: I1122 07:50:51.561203 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7877d89589-qms8z"] Nov 22 07:50:51 crc kubenswrapper[4853]: I1122 07:50:51.594537 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 22 07:50:51 crc kubenswrapper[4853]: I1122 07:50:51.747388 4853 scope.go:117] "RemoveContainer" containerID="2f4e793383f2247cd9af43b859ee01347fae7620a53aa440c54b99aa68461752" Nov 22 07:50:51 crc kubenswrapper[4853]: I1122 07:50:51.763310 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7e1b24e-7343-4816-8c6e-86c7af484d6f" path="/var/lib/kubelet/pods/d7e1b24e-7343-4816-8c6e-86c7af484d6f/volumes" Nov 22 07:50:52 crc kubenswrapper[4853]: I1122 07:50:52.638344 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Nov 22 07:50:52 crc kubenswrapper[4853]: I1122 07:50:52.685933 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Nov 22 07:50:53 crc kubenswrapper[4853]: I1122 07:50:53.514045 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"afcc3ae8-8b9d-4c00-b2c2-c601accbe056","Type":"ContainerStarted","Data":"e63fdee1c185f6a6bad44a59e593d4e78cca5d9a5a37b8e309d865b6fa87ff11"} Nov 22 07:50:53 crc kubenswrapper[4853]: I1122 07:50:53.558003 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Nov 22 07:50:53 crc kubenswrapper[4853]: I1122 07:50:53.656780 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Nov 22 07:50:53 crc kubenswrapper[4853]: I1122 07:50:53.679566 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Nov 22 07:50:54 crc kubenswrapper[4853]: I1122 07:50:54.548421 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Nov 22 07:50:54 crc kubenswrapper[4853]: I1122 07:50:54.792528 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-2vkjv"] Nov 22 07:50:54 crc kubenswrapper[4853]: E1122 07:50:54.794958 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7e1b24e-7343-4816-8c6e-86c7af484d6f" containerName="dnsmasq-dns" Nov 22 07:50:54 crc kubenswrapper[4853]: I1122 07:50:54.794982 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7e1b24e-7343-4816-8c6e-86c7af484d6f" containerName="dnsmasq-dns" Nov 22 07:50:54 crc kubenswrapper[4853]: E1122 07:50:54.795020 4853 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="d7e1b24e-7343-4816-8c6e-86c7af484d6f" containerName="init" Nov 22 07:50:54 crc kubenswrapper[4853]: I1122 07:50:54.795032 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7e1b24e-7343-4816-8c6e-86c7af484d6f" containerName="init" Nov 22 07:50:54 crc kubenswrapper[4853]: I1122 07:50:54.795348 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7e1b24e-7343-4816-8c6e-86c7af484d6f" containerName="dnsmasq-dns" Nov 22 07:50:54 crc kubenswrapper[4853]: I1122 07:50:54.796355 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-2vkjv" Nov 22 07:50:54 crc kubenswrapper[4853]: I1122 07:50:54.798423 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Nov 22 07:50:54 crc kubenswrapper[4853]: I1122 07:50:54.801510 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Nov 22 07:50:54 crc kubenswrapper[4853]: I1122 07:50:54.817586 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-2vkjv"] Nov 22 07:50:54 crc kubenswrapper[4853]: I1122 07:50:54.942720 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66ddea4d-3125-44f0-8855-75935dc4b640-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-2vkjv\" (UID: \"66ddea4d-3125-44f0-8855-75935dc4b640\") " pod="openstack/nova-cell1-cell-mapping-2vkjv" Nov 22 07:50:54 crc kubenswrapper[4853]: I1122 07:50:54.943006 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/66ddea4d-3125-44f0-8855-75935dc4b640-scripts\") pod \"nova-cell1-cell-mapping-2vkjv\" (UID: \"66ddea4d-3125-44f0-8855-75935dc4b640\") " pod="openstack/nova-cell1-cell-mapping-2vkjv" Nov 22 07:50:54 crc kubenswrapper[4853]: I1122 07:50:54.943207 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ms4c7\" (UniqueName: \"kubernetes.io/projected/66ddea4d-3125-44f0-8855-75935dc4b640-kube-api-access-ms4c7\") pod \"nova-cell1-cell-mapping-2vkjv\" (UID: \"66ddea4d-3125-44f0-8855-75935dc4b640\") " pod="openstack/nova-cell1-cell-mapping-2vkjv" Nov 22 07:50:54 crc kubenswrapper[4853]: I1122 07:50:54.943381 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/66ddea4d-3125-44f0-8855-75935dc4b640-config-data\") pod \"nova-cell1-cell-mapping-2vkjv\" (UID: \"66ddea4d-3125-44f0-8855-75935dc4b640\") " pod="openstack/nova-cell1-cell-mapping-2vkjv" Nov 22 07:50:55 crc kubenswrapper[4853]: I1122 07:50:55.045685 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/66ddea4d-3125-44f0-8855-75935dc4b640-config-data\") pod \"nova-cell1-cell-mapping-2vkjv\" (UID: \"66ddea4d-3125-44f0-8855-75935dc4b640\") " pod="openstack/nova-cell1-cell-mapping-2vkjv" Nov 22 07:50:55 crc kubenswrapper[4853]: I1122 07:50:55.046074 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66ddea4d-3125-44f0-8855-75935dc4b640-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-2vkjv\" (UID: \"66ddea4d-3125-44f0-8855-75935dc4b640\") " pod="openstack/nova-cell1-cell-mapping-2vkjv" 
Nov 22 07:50:55 crc kubenswrapper[4853]: I1122 07:50:55.046268 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/66ddea4d-3125-44f0-8855-75935dc4b640-scripts\") pod \"nova-cell1-cell-mapping-2vkjv\" (UID: \"66ddea4d-3125-44f0-8855-75935dc4b640\") " pod="openstack/nova-cell1-cell-mapping-2vkjv"
Nov 22 07:50:55 crc kubenswrapper[4853]: I1122 07:50:55.046441 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ms4c7\" (UniqueName: \"kubernetes.io/projected/66ddea4d-3125-44f0-8855-75935dc4b640-kube-api-access-ms4c7\") pod \"nova-cell1-cell-mapping-2vkjv\" (UID: \"66ddea4d-3125-44f0-8855-75935dc4b640\") " pod="openstack/nova-cell1-cell-mapping-2vkjv"
Nov 22 07:50:55 crc kubenswrapper[4853]: I1122 07:50:55.053247 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66ddea4d-3125-44f0-8855-75935dc4b640-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-2vkjv\" (UID: \"66ddea4d-3125-44f0-8855-75935dc4b640\") " pod="openstack/nova-cell1-cell-mapping-2vkjv"
Nov 22 07:50:55 crc kubenswrapper[4853]: I1122 07:50:55.053377 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/66ddea4d-3125-44f0-8855-75935dc4b640-config-data\") pod \"nova-cell1-cell-mapping-2vkjv\" (UID: \"66ddea4d-3125-44f0-8855-75935dc4b640\") " pod="openstack/nova-cell1-cell-mapping-2vkjv"
Nov 22 07:50:55 crc kubenswrapper[4853]: I1122 07:50:55.054831 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/66ddea4d-3125-44f0-8855-75935dc4b640-scripts\") pod \"nova-cell1-cell-mapping-2vkjv\" (UID: \"66ddea4d-3125-44f0-8855-75935dc4b640\") " pod="openstack/nova-cell1-cell-mapping-2vkjv"
Nov 22 07:50:55 crc kubenswrapper[4853]: I1122 07:50:55.068675 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ms4c7\" (UniqueName: \"kubernetes.io/projected/66ddea4d-3125-44f0-8855-75935dc4b640-kube-api-access-ms4c7\") pod \"nova-cell1-cell-mapping-2vkjv\" (UID: \"66ddea4d-3125-44f0-8855-75935dc4b640\") " pod="openstack/nova-cell1-cell-mapping-2vkjv"
Nov 22 07:50:55 crc kubenswrapper[4853]: I1122 07:50:55.137960 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-2vkjv"
Nov 22 07:50:56 crc kubenswrapper[4853]: I1122 07:50:56.561654 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"afcc3ae8-8b9d-4c00-b2c2-c601accbe056","Type":"ContainerStarted","Data":"a43cf04b95f9d219e2a4060e7caf7e5bf8f09db38cc645cb618c449ddf30aa74"}
Nov 22 07:50:56 crc kubenswrapper[4853]: I1122 07:50:56.564814 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"75849edb-9f0f-49d2-97b5-ca5070f3116f","Type":"ContainerStarted","Data":"0b3c3ed175688b10da46a60441edc272d82d13538ec20d2af50756c305f74227"}
Nov 22 07:50:56 crc kubenswrapper[4853]: I1122 07:50:56.700140 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-2vkjv"]
Nov 22 07:50:56 crc kubenswrapper[4853]: I1122 07:50:56.769484 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Nov 22 07:50:56 crc kubenswrapper[4853]: I1122 07:50:56.770045 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Nov 22 07:50:57 crc kubenswrapper[4853]: I1122 07:50:57.579940 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-2vkjv" event={"ID":"66ddea4d-3125-44f0-8855-75935dc4b640","Type":"ContainerStarted","Data":"7c1a164720513825e952fb82423aca4793f6511c28309701ca502012b05992cb"}
Nov 22 07:50:57 crc kubenswrapper[4853]: I1122 07:50:57.580139 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-2vkjv" event={"ID":"66ddea4d-3125-44f0-8855-75935dc4b640","Type":"ContainerStarted","Data":"1f321e66ed2279b068897d0df57c2d202a30a4e3627abdcd28e70efd5a3c92c2"}
Nov 22 07:50:57 crc kubenswrapper[4853]: I1122 07:50:57.600940 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-2vkjv" podStartSLOduration=3.600921355 podStartE2EDuration="3.600921355s" podCreationTimestamp="2025-11-22 07:50:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:50:57.597642457 +0000 UTC m=+2456.438265093" watchObservedRunningTime="2025-11-22 07:50:57.600921355 +0000 UTC m=+2456.441543981"
Nov 22 07:50:57 crc kubenswrapper[4853]: I1122 07:50:57.785020 4853 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="3cce67c7-3f8d-4931-986a-5ff6db89e8c6" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.1.11:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Nov 22 07:50:57 crc kubenswrapper[4853]: I1122 07:50:57.785023 4853 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="3cce67c7-3f8d-4931-986a-5ff6db89e8c6" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.1.11:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Nov 22 07:51:01 crc kubenswrapper[4853]: I1122 07:51:01.296955 4853 patch_prober.go:28] interesting pod/machine-config-daemon-fflvd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 22 07:51:01 crc kubenswrapper[4853]: I1122 07:51:01.297407 4853 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 22 07:51:01 crc kubenswrapper[4853]: I1122 07:51:01.632634 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"afcc3ae8-8b9d-4c00-b2c2-c601accbe056","Type":"ContainerStarted","Data":"e92ed36a7e69dbfcbbac24fb88c0ba6a12cbb846fc80e738ded3a748f0d85c96"}
Nov 22 07:51:04 crc kubenswrapper[4853]: I1122 07:51:04.688668 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"75849edb-9f0f-49d2-97b5-ca5070f3116f","Type":"ContainerStarted","Data":"71ae5750a9bbcf0314c93aa2a6aeeac589c7931877e6a375aa06e511f19c6ec5"}
Nov 22 07:51:06 crc kubenswrapper[4853]: I1122 07:51:06.714486 4853 generic.go:334] "Generic (PLEG): container finished" podID="66ddea4d-3125-44f0-8855-75935dc4b640" containerID="7c1a164720513825e952fb82423aca4793f6511c28309701ca502012b05992cb" exitCode=0
Nov 22 07:51:06 crc kubenswrapper[4853]: I1122 07:51:06.714576 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-2vkjv" event={"ID":"66ddea4d-3125-44f0-8855-75935dc4b640","Type":"ContainerDied","Data":"7c1a164720513825e952fb82423aca4793f6511c28309701ca502012b05992cb"}
Nov 22 07:51:06 crc kubenswrapper[4853]: I1122 07:51:06.755690 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Nov 22 07:51:06 crc kubenswrapper[4853]: I1122 07:51:06.755873 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Nov 22 07:51:06 crc kubenswrapper[4853]: I1122 07:51:06.757346 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Nov 22 07:51:06 crc kubenswrapper[4853]: I1122 07:51:06.757383 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Nov 22 07:51:06 crc kubenswrapper[4853]: I1122 07:51:06.765024 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Nov 22 07:51:06 crc kubenswrapper[4853]: I1122 07:51:06.778774 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Nov 22 07:51:08 crc kubenswrapper[4853]: I1122 07:51:08.744544 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-2vkjv" event={"ID":"66ddea4d-3125-44f0-8855-75935dc4b640","Type":"ContainerDied","Data":"1f321e66ed2279b068897d0df57c2d202a30a4e3627abdcd28e70efd5a3c92c2"}
Nov 22 07:51:08 crc kubenswrapper[4853]: I1122 07:51:08.745103 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1f321e66ed2279b068897d0df57c2d202a30a4e3627abdcd28e70efd5a3c92c2"
Nov 22 07:51:08 crc kubenswrapper[4853]: I1122 07:51:08.744819 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-2vkjv"
Nov 22 07:51:08 crc kubenswrapper[4853]: I1122 07:51:08.813599 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/66ddea4d-3125-44f0-8855-75935dc4b640-scripts\") pod \"66ddea4d-3125-44f0-8855-75935dc4b640\" (UID: \"66ddea4d-3125-44f0-8855-75935dc4b640\") "
Nov 22 07:51:08 crc kubenswrapper[4853]: I1122 07:51:08.813845 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/66ddea4d-3125-44f0-8855-75935dc4b640-config-data\") pod \"66ddea4d-3125-44f0-8855-75935dc4b640\" (UID: \"66ddea4d-3125-44f0-8855-75935dc4b640\") "
Nov 22 07:51:08 crc kubenswrapper[4853]: I1122 07:51:08.814057 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66ddea4d-3125-44f0-8855-75935dc4b640-combined-ca-bundle\") pod \"66ddea4d-3125-44f0-8855-75935dc4b640\" (UID: \"66ddea4d-3125-44f0-8855-75935dc4b640\") "
Nov 22 07:51:08 crc kubenswrapper[4853]: I1122 07:51:08.814096 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ms4c7\" (UniqueName: \"kubernetes.io/projected/66ddea4d-3125-44f0-8855-75935dc4b640-kube-api-access-ms4c7\") pod \"66ddea4d-3125-44f0-8855-75935dc4b640\" (UID: \"66ddea4d-3125-44f0-8855-75935dc4b640\") "
Nov 22 07:51:08 crc kubenswrapper[4853]: I1122 07:51:08.824998 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/66ddea4d-3125-44f0-8855-75935dc4b640-kube-api-access-ms4c7" (OuterVolumeSpecName: "kube-api-access-ms4c7") pod "66ddea4d-3125-44f0-8855-75935dc4b640" (UID: "66ddea4d-3125-44f0-8855-75935dc4b640"). InnerVolumeSpecName "kube-api-access-ms4c7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 22 07:51:08 crc kubenswrapper[4853]: I1122 07:51:08.826641 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/66ddea4d-3125-44f0-8855-75935dc4b640-scripts" (OuterVolumeSpecName: "scripts") pod "66ddea4d-3125-44f0-8855-75935dc4b640" (UID: "66ddea4d-3125-44f0-8855-75935dc4b640"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 22 07:51:08 crc kubenswrapper[4853]: I1122 07:51:08.908512 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/66ddea4d-3125-44f0-8855-75935dc4b640-config-data" (OuterVolumeSpecName: "config-data") pod "66ddea4d-3125-44f0-8855-75935dc4b640" (UID: "66ddea4d-3125-44f0-8855-75935dc4b640"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 22 07:51:08 crc kubenswrapper[4853]: I1122 07:51:08.910309 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/66ddea4d-3125-44f0-8855-75935dc4b640-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "66ddea4d-3125-44f0-8855-75935dc4b640" (UID: "66ddea4d-3125-44f0-8855-75935dc4b640"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 22 07:51:08 crc kubenswrapper[4853]: I1122 07:51:08.922942 4853 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66ddea4d-3125-44f0-8855-75935dc4b640-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 22 07:51:08 crc kubenswrapper[4853]: I1122 07:51:08.923028 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ms4c7\" (UniqueName: \"kubernetes.io/projected/66ddea4d-3125-44f0-8855-75935dc4b640-kube-api-access-ms4c7\") on node \"crc\" DevicePath \"\""
Nov 22 07:51:08 crc kubenswrapper[4853]: I1122 07:51:08.923048 4853 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/66ddea4d-3125-44f0-8855-75935dc4b640-scripts\") on node \"crc\" DevicePath \"\""
Nov 22 07:51:08 crc kubenswrapper[4853]: I1122 07:51:08.923059 4853 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/66ddea4d-3125-44f0-8855-75935dc4b640-config-data\") on node \"crc\" DevicePath \"\""
Nov 22 07:51:09 crc kubenswrapper[4853]: I1122 07:51:09.792740 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-2vkjv"
Nov 22 07:51:09 crc kubenswrapper[4853]: I1122 07:51:09.794179 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="afcc3ae8-8b9d-4c00-b2c2-c601accbe056" containerName="ceilometer-central-agent" containerID="cri-o://e63fdee1c185f6a6bad44a59e593d4e78cca5d9a5a37b8e309d865b6fa87ff11" gracePeriod=30
Nov 22 07:51:09 crc kubenswrapper[4853]: I1122 07:51:09.794250 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="afcc3ae8-8b9d-4c00-b2c2-c601accbe056" containerName="sg-core" containerID="cri-o://e92ed36a7e69dbfcbbac24fb88c0ba6a12cbb846fc80e738ded3a748f0d85c96" gracePeriod=30
Nov 22 07:51:09 crc kubenswrapper[4853]: I1122 07:51:09.794260 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="afcc3ae8-8b9d-4c00-b2c2-c601accbe056" containerName="proxy-httpd" containerID="cri-o://6a08469ba81d67b3f61b2e0948b91aa0ea75514fefc3ee41b0508854295be729" gracePeriod=30
Nov 22 07:51:09 crc kubenswrapper[4853]: I1122 07:51:09.794381 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="afcc3ae8-8b9d-4c00-b2c2-c601accbe056" containerName="ceilometer-notification-agent" containerID="cri-o://a43cf04b95f9d219e2a4060e7caf7e5bf8f09db38cc645cb618c449ddf30aa74" gracePeriod=30
Nov 22 07:51:09 crc kubenswrapper[4853]: I1122 07:51:09.794664 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"afcc3ae8-8b9d-4c00-b2c2-c601accbe056","Type":"ContainerStarted","Data":"6a08469ba81d67b3f61b2e0948b91aa0ea75514fefc3ee41b0508854295be729"}
Nov 22 07:51:09 crc kubenswrapper[4853]: I1122 07:51:09.794711 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Nov 22 07:51:09 crc kubenswrapper[4853]: I1122 07:51:09.837011 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.596343186 podStartE2EDuration="22.836985206s" podCreationTimestamp="2025-11-22 07:50:47 +0000 UTC" firstStartedPulling="2025-11-22 07:50:48.456369095 +0000 UTC m=+2447.296991711" lastFinishedPulling="2025-11-22 07:51:07.697011115 +0000 UTC m=+2466.537633731" observedRunningTime="2025-11-22 07:51:09.825128968 +0000 UTC m=+2468.665751614" watchObservedRunningTime="2025-11-22 07:51:09.836985206 +0000 UTC m=+2468.677607832"
Nov 22 07:51:09 crc kubenswrapper[4853]: I1122 07:51:09.953887 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Nov 22 07:51:09 crc kubenswrapper[4853]: I1122 07:51:09.954144 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="3cce67c7-3f8d-4931-986a-5ff6db89e8c6" containerName="nova-api-log" containerID="cri-o://a3c582deba6728dcdf27393506ccb8cad8c7f1dbbb1106bb9b8c986b8763aeaa" gracePeriod=30
Nov 22 07:51:09 crc kubenswrapper[4853]: I1122 07:51:09.957718 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="3cce67c7-3f8d-4931-986a-5ff6db89e8c6" containerName="nova-api-api" containerID="cri-o://e25ecc34a2913fffaf0090f0eb190190dfe3748cfdb9fea1e3f28303301ef9b5" gracePeriod=30
Nov 22 07:51:10 crc kubenswrapper[4853]: I1122 07:51:10.020023 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Nov 22 07:51:10 crc kubenswrapper[4853]: I1122 07:51:10.020312 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="c79984de-ac53-48e9-b443-a5b7128315ef" containerName="nova-scheduler-scheduler" containerID="cri-o://ed9bebd0b041d4b3ee13d5824ecd046887dc7d5377cbad11717b37c9a3b7c931" gracePeriod=30
Nov 22 07:51:10 crc kubenswrapper[4853]: I1122 07:51:10.104720 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Nov 22 07:51:10 crc kubenswrapper[4853]: I1122 07:51:10.105423 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="773ad68b-b0f8-4afc-91bd-008f86442be6" containerName="nova-metadata-log" containerID="cri-o://4328a3695251afa7578238f22388565a6c26e4029e4791cea4cd9181c0f60790" gracePeriod=30
Nov 22 07:51:10 crc kubenswrapper[4853]: I1122 07:51:10.105915 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="773ad68b-b0f8-4afc-91bd-008f86442be6" containerName="nova-metadata-metadata" containerID="cri-o://0a5f98bb66c683c6e5fd3f8cf44a76de9c76f2de61f2384ef83f3bdc2c1ebfef" gracePeriod=30
Nov 22 07:51:10 crc kubenswrapper[4853]: E1122 07:51:10.386255 4853 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3cce67c7_3f8d_4931_986a_5ff6db89e8c6.slice/crio-conmon-a3c582deba6728dcdf27393506ccb8cad8c7f1dbbb1106bb9b8c986b8763aeaa.scope\": RecentStats: unable to find data in memory cache]"
Nov 22 07:51:10 crc kubenswrapper[4853]: I1122 07:51:10.809224 4853 generic.go:334] "Generic (PLEG): container finished" podID="3cce67c7-3f8d-4931-986a-5ff6db89e8c6" containerID="a3c582deba6728dcdf27393506ccb8cad8c7f1dbbb1106bb9b8c986b8763aeaa" exitCode=143
Nov 22 07:51:10 crc kubenswrapper[4853]: I1122 07:51:10.809307 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3cce67c7-3f8d-4931-986a-5ff6db89e8c6","Type":"ContainerDied","Data":"a3c582deba6728dcdf27393506ccb8cad8c7f1dbbb1106bb9b8c986b8763aeaa"}
Nov 22 07:51:10 crc kubenswrapper[4853]: I1122 07:51:10.812214 4853 generic.go:334] "Generic (PLEG): container finished" podID="773ad68b-b0f8-4afc-91bd-008f86442be6" containerID="4328a3695251afa7578238f22388565a6c26e4029e4791cea4cd9181c0f60790" exitCode=143
Nov 22 07:51:10 crc kubenswrapper[4853]: I1122 07:51:10.812301 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"773ad68b-b0f8-4afc-91bd-008f86442be6","Type":"ContainerDied","Data":"4328a3695251afa7578238f22388565a6c26e4029e4791cea4cd9181c0f60790"}
Nov 22 07:51:10 crc kubenswrapper[4853]: I1122 07:51:10.814940 4853 generic.go:334] "Generic (PLEG): container finished" podID="afcc3ae8-8b9d-4c00-b2c2-c601accbe056" containerID="6a08469ba81d67b3f61b2e0948b91aa0ea75514fefc3ee41b0508854295be729" exitCode=0
Nov 22 07:51:10 crc kubenswrapper[4853]: I1122 07:51:10.814966 4853 generic.go:334] "Generic (PLEG): container finished" podID="afcc3ae8-8b9d-4c00-b2c2-c601accbe056" containerID="e92ed36a7e69dbfcbbac24fb88c0ba6a12cbb846fc80e738ded3a748f0d85c96" exitCode=2
Nov 22 07:51:10 crc kubenswrapper[4853]: I1122 07:51:10.814973 4853 generic.go:334] "Generic (PLEG): container finished" podID="afcc3ae8-8b9d-4c00-b2c2-c601accbe056" containerID="e63fdee1c185f6a6bad44a59e593d4e78cca5d9a5a37b8e309d865b6fa87ff11" exitCode=0
Nov 22 07:51:10 crc kubenswrapper[4853]: I1122 07:51:10.814998 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"afcc3ae8-8b9d-4c00-b2c2-c601accbe056","Type":"ContainerDied","Data":"6a08469ba81d67b3f61b2e0948b91aa0ea75514fefc3ee41b0508854295be729"}
Nov 22 07:51:10 crc kubenswrapper[4853]: I1122 07:51:10.815048 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"afcc3ae8-8b9d-4c00-b2c2-c601accbe056","Type":"ContainerDied","Data":"e92ed36a7e69dbfcbbac24fb88c0ba6a12cbb846fc80e738ded3a748f0d85c96"}
Nov 22 07:51:10 crc kubenswrapper[4853]: I1122 07:51:10.815059 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"afcc3ae8-8b9d-4c00-b2c2-c601accbe056","Type":"ContainerDied","Data":"e63fdee1c185f6a6bad44a59e593d4e78cca5d9a5a37b8e309d865b6fa87ff11"}
Nov 22 07:51:10 crc kubenswrapper[4853]: I1122 07:51:10.817061 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"75849edb-9f0f-49d2-97b5-ca5070f3116f","Type":"ContainerStarted","Data":"3a2d15629ae8583586b2aa002e0ac69cf6fb75389e39085740b0f4e577b2f16e"}
Nov 22 07:51:10 crc kubenswrapper[4853]: I1122 07:51:10.817238 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="75849edb-9f0f-49d2-97b5-ca5070f3116f" containerName="aodh-api" containerID="cri-o://19817544de55f39096555895f24dbd6c2507c39adcdeef2f57827e5f888eeacd" gracePeriod=30
Nov 22 07:51:10 crc kubenswrapper[4853]: I1122 07:51:10.817267 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="75849edb-9f0f-49d2-97b5-ca5070f3116f" containerName="aodh-notifier" containerID="cri-o://71ae5750a9bbcf0314c93aa2a6aeeac589c7931877e6a375aa06e511f19c6ec5" gracePeriod=30
Nov 22 07:51:10 crc kubenswrapper[4853]: I1122 07:51:10.817317 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="75849edb-9f0f-49d2-97b5-ca5070f3116f" containerName="aodh-listener" containerID="cri-o://3a2d15629ae8583586b2aa002e0ac69cf6fb75389e39085740b0f4e577b2f16e" gracePeriod=30
Nov 22 07:51:10 crc kubenswrapper[4853]: I1122 07:51:10.817310 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="75849edb-9f0f-49d2-97b5-ca5070f3116f" containerName="aodh-evaluator" containerID="cri-o://0b3c3ed175688b10da46a60441edc272d82d13538ec20d2af50756c305f74227" gracePeriod=30
Nov 22 07:51:10 crc kubenswrapper[4853]: I1122 07:51:10.846282 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=3.278673204 podStartE2EDuration="26.846260553s" podCreationTimestamp="2025-11-22 07:50:44 +0000 UTC" firstStartedPulling="2025-11-22 07:50:46.009279146 +0000 UTC m=+2444.849901772" lastFinishedPulling="2025-11-22 07:51:09.576866465 +0000 UTC m=+2468.417489121" observedRunningTime="2025-11-22 07:51:10.838904687 +0000 UTC m=+2469.679527323" watchObservedRunningTime="2025-11-22 07:51:10.846260553 +0000 UTC m=+2469.686883179"
Nov 22 07:51:11 crc kubenswrapper[4853]: I1122 07:51:11.858532 4853 generic.go:334] "Generic (PLEG): container finished" podID="75849edb-9f0f-49d2-97b5-ca5070f3116f" containerID="0b3c3ed175688b10da46a60441edc272d82d13538ec20d2af50756c305f74227" exitCode=0
Nov 22 07:51:11 crc kubenswrapper[4853]: I1122 07:51:11.859243 4853 generic.go:334] "Generic (PLEG): container finished" podID="75849edb-9f0f-49d2-97b5-ca5070f3116f" containerID="19817544de55f39096555895f24dbd6c2507c39adcdeef2f57827e5f888eeacd" exitCode=0
Nov 22 07:51:11 crc kubenswrapper[4853]: I1122 07:51:11.858676 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"75849edb-9f0f-49d2-97b5-ca5070f3116f","Type":"ContainerDied","Data":"0b3c3ed175688b10da46a60441edc272d82d13538ec20d2af50756c305f74227"}
Nov 22 07:51:11 crc kubenswrapper[4853]: I1122 07:51:11.859298 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"75849edb-9f0f-49d2-97b5-ca5070f3116f","Type":"ContainerDied","Data":"19817544de55f39096555895f24dbd6c2507c39adcdeef2f57827e5f888eeacd"}
Nov 22 07:51:12 crc kubenswrapper[4853]: E1122 07:51:12.640610 4853 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ed9bebd0b041d4b3ee13d5824ecd046887dc7d5377cbad11717b37c9a3b7c931" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Nov 22 07:51:12 crc kubenswrapper[4853]: E1122 07:51:12.643070 4853 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ed9bebd0b041d4b3ee13d5824ecd046887dc7d5377cbad11717b37c9a3b7c931" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Nov 22 07:51:12 crc kubenswrapper[4853]: E1122 07:51:12.645353 4853 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ed9bebd0b041d4b3ee13d5824ecd046887dc7d5377cbad11717b37c9a3b7c931" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Nov 22 07:51:12 crc kubenswrapper[4853]: E1122 07:51:12.645449 4853 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="c79984de-ac53-48e9-b443-a5b7128315ef" containerName="nova-scheduler-scheduler"
Nov 22 07:51:12 crc kubenswrapper[4853]: I1122 07:51:12.875093 4853 generic.go:334] "Generic (PLEG): container finished" podID="75849edb-9f0f-49d2-97b5-ca5070f3116f" containerID="71ae5750a9bbcf0314c93aa2a6aeeac589c7931877e6a375aa06e511f19c6ec5" exitCode=0
Nov 22 07:51:12 crc kubenswrapper[4853]: I1122 07:51:12.875147 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"75849edb-9f0f-49d2-97b5-ca5070f3116f","Type":"ContainerDied","Data":"71ae5750a9bbcf0314c93aa2a6aeeac589c7931877e6a375aa06e511f19c6ec5"}
Nov 22 07:51:13 crc kubenswrapper[4853]: I1122 07:51:13.542958 4853 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="773ad68b-b0f8-4afc-91bd-008f86442be6" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.1.3:8775/\": read tcp 10.217.0.2:43490->10.217.1.3:8775: read: connection reset by peer"
Nov 22 07:51:13 crc kubenswrapper[4853]: I1122 07:51:13.543001 4853 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="773ad68b-b0f8-4afc-91bd-008f86442be6" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.1.3:8775/\": read tcp 10.217.0.2:43492->10.217.1.3:8775: read: connection reset by peer"
Nov 22 07:51:13 crc kubenswrapper[4853]: I1122 07:51:13.902932 4853 generic.go:334] "Generic (PLEG): container finished" podID="773ad68b-b0f8-4afc-91bd-008f86442be6" containerID="0a5f98bb66c683c6e5fd3f8cf44a76de9c76f2de61f2384ef83f3bdc2c1ebfef" exitCode=0
Nov 22 07:51:13 crc kubenswrapper[4853]: I1122 07:51:13.903001 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"773ad68b-b0f8-4afc-91bd-008f86442be6","Type":"ContainerDied","Data":"0a5f98bb66c683c6e5fd3f8cf44a76de9c76f2de61f2384ef83f3bdc2c1ebfef"}
Nov 22 07:51:13 crc kubenswrapper[4853]: I1122 07:51:13.907380 4853 generic.go:334] "Generic (PLEG): container finished" podID="afcc3ae8-8b9d-4c00-b2c2-c601accbe056" containerID="a43cf04b95f9d219e2a4060e7caf7e5bf8f09db38cc645cb618c449ddf30aa74" exitCode=0
Nov 22 07:51:13 crc kubenswrapper[4853]: I1122 07:51:13.907424 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"afcc3ae8-8b9d-4c00-b2c2-c601accbe056","Type":"ContainerDied","Data":"a43cf04b95f9d219e2a4060e7caf7e5bf8f09db38cc645cb618c449ddf30aa74"}
Nov 22 07:51:14 crc kubenswrapper[4853]: I1122 07:51:14.200318 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Nov 22 07:51:14 crc kubenswrapper[4853]: I1122 07:51:14.295396 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/773ad68b-b0f8-4afc-91bd-008f86442be6-logs\") pod \"773ad68b-b0f8-4afc-91bd-008f86442be6\" (UID: \"773ad68b-b0f8-4afc-91bd-008f86442be6\") "
Nov 22 07:51:14 crc kubenswrapper[4853]: I1122 07:51:14.295682 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wm4gk\" (UniqueName: \"kubernetes.io/projected/773ad68b-b0f8-4afc-91bd-008f86442be6-kube-api-access-wm4gk\") pod \"773ad68b-b0f8-4afc-91bd-008f86442be6\" (UID: \"773ad68b-b0f8-4afc-91bd-008f86442be6\") "
Nov 22 07:51:14 crc kubenswrapper[4853]: I1122 07:51:14.295829 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/773ad68b-b0f8-4afc-91bd-008f86442be6-nova-metadata-tls-certs\") pod \"773ad68b-b0f8-4afc-91bd-008f86442be6\" (UID: \"773ad68b-b0f8-4afc-91bd-008f86442be6\") "
Nov 22 07:51:14 crc kubenswrapper[4853]: I1122 07:51:14.295872 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/773ad68b-b0f8-4afc-91bd-008f86442be6-combined-ca-bundle\") pod \"773ad68b-b0f8-4afc-91bd-008f86442be6\" (UID: \"773ad68b-b0f8-4afc-91bd-008f86442be6\") "
Nov 22 07:51:14 crc kubenswrapper[4853]: I1122 07:51:14.295906 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/773ad68b-b0f8-4afc-91bd-008f86442be6-config-data\") pod \"773ad68b-b0f8-4afc-91bd-008f86442be6\" (UID: \"773ad68b-b0f8-4afc-91bd-008f86442be6\") "
Nov 22 07:51:14 crc kubenswrapper[4853]: I1122 07:51:14.296388 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/773ad68b-b0f8-4afc-91bd-008f86442be6-logs" (OuterVolumeSpecName: "logs") pod "773ad68b-b0f8-4afc-91bd-008f86442be6" (UID: "773ad68b-b0f8-4afc-91bd-008f86442be6"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 22 07:51:14 crc kubenswrapper[4853]: I1122 07:51:14.297508 4853 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/773ad68b-b0f8-4afc-91bd-008f86442be6-logs\") on node \"crc\" DevicePath \"\""
Nov 22 07:51:14 crc kubenswrapper[4853]: I1122 07:51:14.305999 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/773ad68b-b0f8-4afc-91bd-008f86442be6-kube-api-access-wm4gk" (OuterVolumeSpecName: "kube-api-access-wm4gk") pod "773ad68b-b0f8-4afc-91bd-008f86442be6" (UID: "773ad68b-b0f8-4afc-91bd-008f86442be6"). InnerVolumeSpecName "kube-api-access-wm4gk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 22 07:51:14 crc kubenswrapper[4853]: I1122 07:51:14.351150 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/773ad68b-b0f8-4afc-91bd-008f86442be6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "773ad68b-b0f8-4afc-91bd-008f86442be6" (UID: "773ad68b-b0f8-4afc-91bd-008f86442be6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 22 07:51:14 crc kubenswrapper[4853]: I1122 07:51:14.362590 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/773ad68b-b0f8-4afc-91bd-008f86442be6-config-data" (OuterVolumeSpecName: "config-data") pod "773ad68b-b0f8-4afc-91bd-008f86442be6" (UID: "773ad68b-b0f8-4afc-91bd-008f86442be6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 22 07:51:14 crc kubenswrapper[4853]: I1122 07:51:14.400023 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wm4gk\" (UniqueName: \"kubernetes.io/projected/773ad68b-b0f8-4afc-91bd-008f86442be6-kube-api-access-wm4gk\") on node \"crc\" DevicePath \"\""
Nov 22 07:51:14 crc kubenswrapper[4853]: I1122 07:51:14.400055 4853 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/773ad68b-b0f8-4afc-91bd-008f86442be6-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 22 07:51:14 crc kubenswrapper[4853]: I1122 07:51:14.400065 4853 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/773ad68b-b0f8-4afc-91bd-008f86442be6-config-data\") on node \"crc\" DevicePath \"\""
Nov 22 07:51:14 crc kubenswrapper[4853]: I1122 07:51:14.413155 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/773ad68b-b0f8-4afc-91bd-008f86442be6-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "773ad68b-b0f8-4afc-91bd-008f86442be6" (UID: "773ad68b-b0f8-4afc-91bd-008f86442be6"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 22 07:51:14 crc kubenswrapper[4853]: I1122 07:51:14.508119 4853 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/773ad68b-b0f8-4afc-91bd-008f86442be6-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\""
Nov 22 07:51:14 crc kubenswrapper[4853]: I1122 07:51:14.633225 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Nov 22 07:51:14 crc kubenswrapper[4853]: I1122 07:51:14.716082 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/afcc3ae8-8b9d-4c00-b2c2-c601accbe056-config-data\") pod \"afcc3ae8-8b9d-4c00-b2c2-c601accbe056\" (UID: \"afcc3ae8-8b9d-4c00-b2c2-c601accbe056\") "
Nov 22 07:51:14 crc kubenswrapper[4853]: I1122 07:51:14.716310 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nm8hf\" (UniqueName: \"kubernetes.io/projected/afcc3ae8-8b9d-4c00-b2c2-c601accbe056-kube-api-access-nm8hf\") pod \"afcc3ae8-8b9d-4c00-b2c2-c601accbe056\" (UID: \"afcc3ae8-8b9d-4c00-b2c2-c601accbe056\") "
Nov 22 07:51:14 crc kubenswrapper[4853]: I1122 07:51:14.716353 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/afcc3ae8-8b9d-4c00-b2c2-c601accbe056-scripts\") pod \"afcc3ae8-8b9d-4c00-b2c2-c601accbe056\" (UID: \"afcc3ae8-8b9d-4c00-b2c2-c601accbe056\") "
Nov 22 07:51:14 crc kubenswrapper[4853]: I1122 07:51:14.716468 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/afcc3ae8-8b9d-4c00-b2c2-c601accbe056-ceilometer-tls-certs\") pod \"afcc3ae8-8b9d-4c00-b2c2-c601accbe056\" (UID: \"afcc3ae8-8b9d-4c00-b2c2-c601accbe056\") "
Nov 22 07:51:14 crc kubenswrapper[4853]: I1122 07:51:14.716506 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/afcc3ae8-8b9d-4c00-b2c2-c601accbe056-combined-ca-bundle\") pod \"afcc3ae8-8b9d-4c00-b2c2-c601accbe056\" (UID: \"afcc3ae8-8b9d-4c00-b2c2-c601accbe056\") "
Nov 22 07:51:14 crc kubenswrapper[4853]: I1122 07:51:14.716550 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/afcc3ae8-8b9d-4c00-b2c2-c601accbe056-sg-core-conf-yaml\") pod \"afcc3ae8-8b9d-4c00-b2c2-c601accbe056\" (UID: \"afcc3ae8-8b9d-4c00-b2c2-c601accbe056\") "
Nov 22 07:51:14 crc kubenswrapper[4853]: I1122 07:51:14.716585 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/afcc3ae8-8b9d-4c00-b2c2-c601accbe056-run-httpd\") pod \"afcc3ae8-8b9d-4c00-b2c2-c601accbe056\" (UID: \"afcc3ae8-8b9d-4c00-b2c2-c601accbe056\") "
Nov 22 07:51:14 crc kubenswrapper[4853]: I1122 07:51:14.716699 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/afcc3ae8-8b9d-4c00-b2c2-c601accbe056-log-httpd\") pod \"afcc3ae8-8b9d-4c00-b2c2-c601accbe056\" (UID: \"afcc3ae8-8b9d-4c00-b2c2-c601accbe056\") "
Nov 22 07:51:14 crc kubenswrapper[4853]: I1122 07:51:14.717803 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/afcc3ae8-8b9d-4c00-b2c2-c601accbe056-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "afcc3ae8-8b9d-4c00-b2c2-c601accbe056" (UID: "afcc3ae8-8b9d-4c00-b2c2-c601accbe056"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 22 07:51:14 crc kubenswrapper[4853]: I1122 07:51:14.719938 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/afcc3ae8-8b9d-4c00-b2c2-c601accbe056-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "afcc3ae8-8b9d-4c00-b2c2-c601accbe056" (UID: "afcc3ae8-8b9d-4c00-b2c2-c601accbe056"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 22 07:51:14 crc kubenswrapper[4853]: I1122 07:51:14.722172 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/afcc3ae8-8b9d-4c00-b2c2-c601accbe056-scripts" (OuterVolumeSpecName: "scripts") pod "afcc3ae8-8b9d-4c00-b2c2-c601accbe056" (UID: "afcc3ae8-8b9d-4c00-b2c2-c601accbe056"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 22 07:51:14 crc kubenswrapper[4853]: I1122 07:51:14.730231 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/afcc3ae8-8b9d-4c00-b2c2-c601accbe056-kube-api-access-nm8hf" (OuterVolumeSpecName: "kube-api-access-nm8hf") pod "afcc3ae8-8b9d-4c00-b2c2-c601accbe056" (UID: "afcc3ae8-8b9d-4c00-b2c2-c601accbe056"). InnerVolumeSpecName "kube-api-access-nm8hf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 22 07:51:14 crc kubenswrapper[4853]: I1122 07:51:14.783360 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/afcc3ae8-8b9d-4c00-b2c2-c601accbe056-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "afcc3ae8-8b9d-4c00-b2c2-c601accbe056" (UID: "afcc3ae8-8b9d-4c00-b2c2-c601accbe056"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 22 07:51:14 crc kubenswrapper[4853]: I1122 07:51:14.841682 4853 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/afcc3ae8-8b9d-4c00-b2c2-c601accbe056-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Nov 22 07:51:14 crc kubenswrapper[4853]: I1122 07:51:14.841714 4853 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/afcc3ae8-8b9d-4c00-b2c2-c601accbe056-run-httpd\") on node \"crc\" DevicePath \"\""
Nov 22 07:51:14 crc kubenswrapper[4853]: I1122 07:51:14.841726 4853 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/afcc3ae8-8b9d-4c00-b2c2-c601accbe056-log-httpd\") on node \"crc\" DevicePath \"\""
Nov 22 07:51:14 crc kubenswrapper[4853]: I1122 07:51:14.841740 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nm8hf\" (UniqueName: \"kubernetes.io/projected/afcc3ae8-8b9d-4c00-b2c2-c601accbe056-kube-api-access-nm8hf\") on node \"crc\" DevicePath \"\""
Nov 22 07:51:14 crc kubenswrapper[4853]: I1122 07:51:14.841771 4853 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/afcc3ae8-8b9d-4c00-b2c2-c601accbe056-scripts\") on node \"crc\" DevicePath \"\""
Nov 22 07:51:14 crc kubenswrapper[4853]: I1122 07:51:14.897625 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/afcc3ae8-8b9d-4c00-b2c2-c601accbe056-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "afcc3ae8-8b9d-4c00-b2c2-c601accbe056" (UID: "afcc3ae8-8b9d-4c00-b2c2-c601accbe056"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 22 07:51:14 crc kubenswrapper[4853]: I1122 07:51:14.920817 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/afcc3ae8-8b9d-4c00-b2c2-c601accbe056-config-data" (OuterVolumeSpecName: "config-data") pod "afcc3ae8-8b9d-4c00-b2c2-c601accbe056" (UID: "afcc3ae8-8b9d-4c00-b2c2-c601accbe056"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 22 07:51:14 crc kubenswrapper[4853]: I1122 07:51:14.936296 4853 generic.go:334] "Generic (PLEG): container finished" podID="c79984de-ac53-48e9-b443-a5b7128315ef" containerID="ed9bebd0b041d4b3ee13d5824ecd046887dc7d5377cbad11717b37c9a3b7c931" exitCode=0
Nov 22 07:51:14 crc kubenswrapper[4853]: I1122 07:51:14.936394 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"c79984de-ac53-48e9-b443-a5b7128315ef","Type":"ContainerDied","Data":"ed9bebd0b041d4b3ee13d5824ecd046887dc7d5377cbad11717b37c9a3b7c931"}
Nov 22 07:51:14 crc kubenswrapper[4853]: I1122 07:51:14.941937 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Nov 22 07:51:14 crc kubenswrapper[4853]: I1122 07:51:14.941936 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"afcc3ae8-8b9d-4c00-b2c2-c601accbe056","Type":"ContainerDied","Data":"7064545bf7693ffa43a063e3ddf1e5efecb912fefbaeb97721e47dc37209d4ab"}
Nov 22 07:51:14 crc kubenswrapper[4853]: I1122 07:51:14.942288 4853 scope.go:117] "RemoveContainer" containerID="6a08469ba81d67b3f61b2e0948b91aa0ea75514fefc3ee41b0508854295be729"
Nov 22 07:51:14 crc kubenswrapper[4853]: I1122 07:51:14.944421 4853 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/afcc3ae8-8b9d-4c00-b2c2-c601accbe056-config-data\") on node \"crc\" DevicePath \"\""
Nov 22 07:51:14 crc kubenswrapper[4853]: I1122 07:51:14.944454 4853 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/afcc3ae8-8b9d-4c00-b2c2-c601accbe056-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\""
Nov 22 07:51:14 crc kubenswrapper[4853]: I1122 07:51:14.950439 4853 generic.go:334] "Generic (PLEG): container finished" podID="3cce67c7-3f8d-4931-986a-5ff6db89e8c6" containerID="e25ecc34a2913fffaf0090f0eb190190dfe3748cfdb9fea1e3f28303301ef9b5" exitCode=0
Nov 22 07:51:14 crc kubenswrapper[4853]: I1122 07:51:14.950582 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3cce67c7-3f8d-4931-986a-5ff6db89e8c6","Type":"ContainerDied","Data":"e25ecc34a2913fffaf0090f0eb190190dfe3748cfdb9fea1e3f28303301ef9b5"}
Nov 22 07:51:14 crc kubenswrapper[4853]: I1122 07:51:14.954539 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/afcc3ae8-8b9d-4c00-b2c2-c601accbe056-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "afcc3ae8-8b9d-4c00-b2c2-c601accbe056" (UID: "afcc3ae8-8b9d-4c00-b2c2-c601accbe056"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 22 07:51:14 crc kubenswrapper[4853]: I1122 07:51:14.963169 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"773ad68b-b0f8-4afc-91bd-008f86442be6","Type":"ContainerDied","Data":"dd4d06c8f04bccd9a2fba804004b37451d7d185a0c7f91df65c111bf82bc43d3"}
Nov 22 07:51:14 crc kubenswrapper[4853]: I1122 07:51:14.963384 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Nov 22 07:51:15 crc kubenswrapper[4853]: I1122 07:51:15.039565 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Nov 22 07:51:15 crc kubenswrapper[4853]: I1122 07:51:15.051772 4853 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/afcc3ae8-8b9d-4c00-b2c2-c601accbe056-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 22 07:51:15 crc kubenswrapper[4853]: I1122 07:51:15.060284 4853 scope.go:117] "RemoveContainer" containerID="e92ed36a7e69dbfcbbac24fb88c0ba6a12cbb846fc80e738ded3a748f0d85c96"
Nov 22 07:51:15 crc kubenswrapper[4853]: I1122 07:51:15.066554 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Nov 22 07:51:15 crc kubenswrapper[4853]: I1122 07:51:15.093799 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"]
Nov 22 07:51:15 crc kubenswrapper[4853]: I1122 07:51:15.102939 4853 scope.go:117] "RemoveContainer" containerID="a43cf04b95f9d219e2a4060e7caf7e5bf8f09db38cc645cb618c449ddf30aa74"
Nov 22 07:51:15 crc kubenswrapper[4853]: I1122 07:51:15.138220 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Nov 22 07:51:15 crc kubenswrapper[4853]: E1122 07:51:15.139264 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c79984de-ac53-48e9-b443-a5b7128315ef" containerName="nova-scheduler-scheduler"
Nov 22 07:51:15 crc kubenswrapper[4853]: I1122 07:51:15.139280 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="c79984de-ac53-48e9-b443-a5b7128315ef" containerName="nova-scheduler-scheduler"
Nov 22 07:51:15 crc kubenswrapper[4853]: E1122 07:51:15.139299 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afcc3ae8-8b9d-4c00-b2c2-c601accbe056" containerName="sg-core"
Nov 22 07:51:15 crc kubenswrapper[4853]: I1122 07:51:15.139306 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="afcc3ae8-8b9d-4c00-b2c2-c601accbe056" containerName="sg-core"
Nov 22 07:51:15 crc kubenswrapper[4853]: E1122 07:51:15.139327 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afcc3ae8-8b9d-4c00-b2c2-c601accbe056" containerName="ceilometer-central-agent"
Nov 22 07:51:15 crc kubenswrapper[4853]: I1122 07:51:15.139334 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="afcc3ae8-8b9d-4c00-b2c2-c601accbe056" containerName="ceilometer-central-agent"
Nov 22 07:51:15 crc kubenswrapper[4853]: E1122 07:51:15.139353 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afcc3ae8-8b9d-4c00-b2c2-c601accbe056" containerName="ceilometer-notification-agent"
Nov 22 07:51:15 crc kubenswrapper[4853]: I1122 07:51:15.139360 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="afcc3ae8-8b9d-4c00-b2c2-c601accbe056" containerName="ceilometer-notification-agent"
Nov 22 07:51:15 crc kubenswrapper[4853]: E1122 07:51:15.139383 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afcc3ae8-8b9d-4c00-b2c2-c601accbe056" containerName="proxy-httpd"
Nov 22 07:51:15 crc kubenswrapper[4853]: I1122 07:51:15.139390 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="afcc3ae8-8b9d-4c00-b2c2-c601accbe056" containerName="proxy-httpd"
Nov 22 07:51:15 crc kubenswrapper[4853]: E1122 07:51:15.139397 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66ddea4d-3125-44f0-8855-75935dc4b640" containerName="nova-manage"
Nov 22 07:51:15 crc kubenswrapper[4853]: I1122 07:51:15.139403 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="66ddea4d-3125-44f0-8855-75935dc4b640" containerName="nova-manage"
Nov 22 07:51:15 crc kubenswrapper[4853]: E1122 07:51:15.139419 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="773ad68b-b0f8-4afc-91bd-008f86442be6" containerName="nova-metadata-metadata"
Nov 22 07:51:15 crc kubenswrapper[4853]: I1122 07:51:15.139425 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="773ad68b-b0f8-4afc-91bd-008f86442be6" containerName="nova-metadata-metadata"
Nov 22 07:51:15 crc kubenswrapper[4853]: E1122 07:51:15.139450 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="773ad68b-b0f8-4afc-91bd-008f86442be6" containerName="nova-metadata-log"
Nov 22 07:51:15 crc kubenswrapper[4853]: I1122 07:51:15.139456 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="773ad68b-b0f8-4afc-91bd-008f86442be6" containerName="nova-metadata-log"
Nov 22 07:51:15 crc kubenswrapper[4853]: I1122 07:51:15.139697 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="afcc3ae8-8b9d-4c00-b2c2-c601accbe056" containerName="ceilometer-central-agent"
Nov 22 07:51:15 crc kubenswrapper[4853]: I1122 07:51:15.139721 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="afcc3ae8-8b9d-4c00-b2c2-c601accbe056" containerName="ceilometer-notification-agent"
Nov 22 07:51:15 crc kubenswrapper[4853]: I1122 07:51:15.139738 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="66ddea4d-3125-44f0-8855-75935dc4b640" containerName="nova-manage"
Nov 22 07:51:15 crc kubenswrapper[4853]: I1122 07:51:15.139767 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="afcc3ae8-8b9d-4c00-b2c2-c601accbe056" containerName="proxy-httpd"
Nov 22 07:51:15 crc kubenswrapper[4853]: I1122 07:51:15.139788 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="c79984de-ac53-48e9-b443-a5b7128315ef" containerName="nova-scheduler-scheduler"
Nov 22 07:51:15 crc kubenswrapper[4853]: I1122 07:51:15.139799 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="afcc3ae8-8b9d-4c00-b2c2-c601accbe056" containerName="sg-core"
Nov 22 07:51:15 crc kubenswrapper[4853]: I1122 07:51:15.139810 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="773ad68b-b0f8-4afc-91bd-008f86442be6" containerName="nova-metadata-metadata"
Nov 22 07:51:15 crc kubenswrapper[4853]: I1122 07:51:15.139819 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="773ad68b-b0f8-4afc-91bd-008f86442be6" containerName="nova-metadata-log"
Nov 22 07:51:15 crc kubenswrapper[4853]: I1122 07:51:15.141315 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Nov 22 07:51:15 crc kubenswrapper[4853]: I1122 07:51:15.144856 4853 scope.go:117] "RemoveContainer" containerID="e63fdee1c185f6a6bad44a59e593d4e78cca5d9a5a37b8e309d865b6fa87ff11"
Nov 22 07:51:15 crc kubenswrapper[4853]: I1122 07:51:15.145393 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Nov 22 07:51:15 crc kubenswrapper[4853]: I1122 07:51:15.146375 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc"
Nov 22 07:51:15 crc kubenswrapper[4853]: I1122 07:51:15.155007 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Nov 22 07:51:15 crc kubenswrapper[4853]: I1122 07:51:15.155032 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c79984de-ac53-48e9-b443-a5b7128315ef-combined-ca-bundle\") pod \"c79984de-ac53-48e9-b443-a5b7128315ef\" (UID: \"c79984de-ac53-48e9-b443-a5b7128315ef\") "
Nov 22 07:51:15 crc kubenswrapper[4853]: I1122 07:51:15.155312 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8h4mw\" (UniqueName: \"kubernetes.io/projected/c79984de-ac53-48e9-b443-a5b7128315ef-kube-api-access-8h4mw\") pod \"c79984de-ac53-48e9-b443-a5b7128315ef\" (UID: \"c79984de-ac53-48e9-b443-a5b7128315ef\") "
Nov 22 07:51:15 crc kubenswrapper[4853]: I1122 07:51:15.155429 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c79984de-ac53-48e9-b443-a5b7128315ef-config-data\") pod \"c79984de-ac53-48e9-b443-a5b7128315ef\" (UID: \"c79984de-ac53-48e9-b443-a5b7128315ef\") "
Nov 22 07:51:15 crc kubenswrapper[4853]: I1122 07:51:15.160454 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c79984de-ac53-48e9-b443-a5b7128315ef-kube-api-access-8h4mw" (OuterVolumeSpecName: "kube-api-access-8h4mw") pod "c79984de-ac53-48e9-b443-a5b7128315ef" (UID: "c79984de-ac53-48e9-b443-a5b7128315ef"). InnerVolumeSpecName "kube-api-access-8h4mw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 22 07:51:15 crc kubenswrapper[4853]: I1122 07:51:15.206168 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c79984de-ac53-48e9-b443-a5b7128315ef-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c79984de-ac53-48e9-b443-a5b7128315ef" (UID: "c79984de-ac53-48e9-b443-a5b7128315ef"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 22 07:51:15 crc kubenswrapper[4853]: I1122 07:51:15.219073 4853 scope.go:117] "RemoveContainer" containerID="0a5f98bb66c683c6e5fd3f8cf44a76de9c76f2de61f2384ef83f3bdc2c1ebfef"
Nov 22 07:51:15 crc kubenswrapper[4853]: I1122 07:51:15.235493 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c79984de-ac53-48e9-b443-a5b7128315ef-config-data" (OuterVolumeSpecName: "config-data") pod "c79984de-ac53-48e9-b443-a5b7128315ef" (UID: "c79984de-ac53-48e9-b443-a5b7128315ef"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 22 07:51:15 crc kubenswrapper[4853]: I1122 07:51:15.259096 4853 scope.go:117] "RemoveContainer" containerID="4328a3695251afa7578238f22388565a6c26e4029e4791cea4cd9181c0f60790"
Nov 22 07:51:15 crc kubenswrapper[4853]: I1122 07:51:15.261280 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9292105c-7a7d-42cf-a8a1-6074ebebc6f4-logs\") pod \"nova-metadata-0\" (UID: \"9292105c-7a7d-42cf-a8a1-6074ebebc6f4\") " pod="openstack/nova-metadata-0"
Nov 22 07:51:15 crc kubenswrapper[4853]: I1122 07:51:15.261642 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2cfkd\" (UniqueName: \"kubernetes.io/projected/9292105c-7a7d-42cf-a8a1-6074ebebc6f4-kube-api-access-2cfkd\") pod \"nova-metadata-0\" (UID: \"9292105c-7a7d-42cf-a8a1-6074ebebc6f4\") " pod="openstack/nova-metadata-0"
Nov 22 07:51:15 crc kubenswrapper[4853]: I1122 07:51:15.261869 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9292105c-7a7d-42cf-a8a1-6074ebebc6f4-config-data\") pod \"nova-metadata-0\" (UID: \"9292105c-7a7d-42cf-a8a1-6074ebebc6f4\") " pod="openstack/nova-metadata-0"
Nov 22 07:51:15 crc kubenswrapper[4853]: I1122 07:51:15.262039 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9292105c-7a7d-42cf-a8a1-6074ebebc6f4-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"9292105c-7a7d-42cf-a8a1-6074ebebc6f4\") " pod="openstack/nova-metadata-0"
Nov 22 07:51:15 crc kubenswrapper[4853]: I1122 07:51:15.262192 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/9292105c-7a7d-42cf-a8a1-6074ebebc6f4-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"9292105c-7a7d-42cf-a8a1-6074ebebc6f4\") " pod="openstack/nova-metadata-0"
Nov 22 07:51:15 crc kubenswrapper[4853]: I1122 07:51:15.262457 4853 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c79984de-ac53-48e9-b443-a5b7128315ef-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 22 07:51:15 crc kubenswrapper[4853]: I1122 07:51:15.262478 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8h4mw\" (UniqueName: \"kubernetes.io/projected/c79984de-ac53-48e9-b443-a5b7128315ef-kube-api-access-8h4mw\") on node \"crc\" DevicePath \"\""
Nov 22 07:51:15 crc kubenswrapper[4853]: I1122 07:51:15.262499 4853 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c79984de-ac53-48e9-b443-a5b7128315ef-config-data\") on node \"crc\" DevicePath \"\""
Nov 22 07:51:15 crc kubenswrapper[4853]: I1122 07:51:15.288613 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Nov 22 07:51:15 crc kubenswrapper[4853]: I1122 07:51:15.301926 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Nov 22 07:51:15 crc kubenswrapper[4853]: I1122 07:51:15.327795 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Nov 22 07:51:15 crc kubenswrapper[4853]: I1122 07:51:15.365226 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3cce67c7-3f8d-4931-986a-5ff6db89e8c6-combined-ca-bundle\") pod \"3cce67c7-3f8d-4931-986a-5ff6db89e8c6\" (UID: \"3cce67c7-3f8d-4931-986a-5ff6db89e8c6\") "
Nov 22 07:51:15 crc kubenswrapper[4853]: I1122 07:51:15.365328 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-llmpv\" (UniqueName: \"kubernetes.io/projected/3cce67c7-3f8d-4931-986a-5ff6db89e8c6-kube-api-access-llmpv\") pod \"3cce67c7-3f8d-4931-986a-5ff6db89e8c6\" (UID: \"3cce67c7-3f8d-4931-986a-5ff6db89e8c6\") "
Nov 22 07:51:15 crc kubenswrapper[4853]: I1122 07:51:15.365725 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3cce67c7-3f8d-4931-986a-5ff6db89e8c6-config-data\") pod \"3cce67c7-3f8d-4931-986a-5ff6db89e8c6\" (UID: \"3cce67c7-3f8d-4931-986a-5ff6db89e8c6\") "
Nov 22 07:51:15 crc kubenswrapper[4853]: I1122 07:51:15.365821 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3cce67c7-3f8d-4931-986a-5ff6db89e8c6-logs\") pod \"3cce67c7-3f8d-4931-986a-5ff6db89e8c6\" (UID: \"3cce67c7-3f8d-4931-986a-5ff6db89e8c6\") "
Nov 22 07:51:15 crc kubenswrapper[4853]: I1122 07:51:15.365919 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3cce67c7-3f8d-4931-986a-5ff6db89e8c6-public-tls-certs\") pod \"3cce67c7-3f8d-4931-986a-5ff6db89e8c6\" (UID: \"3cce67c7-3f8d-4931-986a-5ff6db89e8c6\") "
Nov 22 07:51:15 crc kubenswrapper[4853]: I1122 07:51:15.365962 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3cce67c7-3f8d-4931-986a-5ff6db89e8c6-internal-tls-certs\") pod \"3cce67c7-3f8d-4931-986a-5ff6db89e8c6\" (UID: \"3cce67c7-3f8d-4931-986a-5ff6db89e8c6\") "
Nov 22 07:51:15 crc kubenswrapper[4853]: I1122 07:51:15.366926 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9292105c-7a7d-42cf-a8a1-6074ebebc6f4-logs\") pod \"nova-metadata-0\" (UID: \"9292105c-7a7d-42cf-a8a1-6074ebebc6f4\") " pod="openstack/nova-metadata-0"
Nov 22 07:51:15 crc kubenswrapper[4853]: I1122 07:51:15.367338 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2cfkd\" (UniqueName: \"kubernetes.io/projected/9292105c-7a7d-42cf-a8a1-6074ebebc6f4-kube-api-access-2cfkd\") pod \"nova-metadata-0\" (UID: \"9292105c-7a7d-42cf-a8a1-6074ebebc6f4\") " pod="openstack/nova-metadata-0"
Nov 22 07:51:15 crc kubenswrapper[4853]: I1122 07:51:15.367544 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9292105c-7a7d-42cf-a8a1-6074ebebc6f4-config-data\") pod \"nova-metadata-0\" (UID: \"9292105c-7a7d-42cf-a8a1-6074ebebc6f4\") " pod="openstack/nova-metadata-0"
Nov 22 07:51:15 crc kubenswrapper[4853]: I1122 07:51:15.367578 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9292105c-7a7d-42cf-a8a1-6074ebebc6f4-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"9292105c-7a7d-42cf-a8a1-6074ebebc6f4\") " pod="openstack/nova-metadata-0"
Nov 22 07:51:15 crc kubenswrapper[4853]: I1122 07:51:15.367647 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/9292105c-7a7d-42cf-a8a1-6074ebebc6f4-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"9292105c-7a7d-42cf-a8a1-6074ebebc6f4\") " pod="openstack/nova-metadata-0"
Nov 22 07:51:15 crc kubenswrapper[4853]: I1122 07:51:15.369081 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3cce67c7-3f8d-4931-986a-5ff6db89e8c6-logs" (OuterVolumeSpecName: "logs") pod "3cce67c7-3f8d-4931-986a-5ff6db89e8c6" (UID: "3cce67c7-3f8d-4931-986a-5ff6db89e8c6"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 22 07:51:15 crc kubenswrapper[4853]: I1122 07:51:15.379080 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9292105c-7a7d-42cf-a8a1-6074ebebc6f4-logs\") pod \"nova-metadata-0\" (UID: \"9292105c-7a7d-42cf-a8a1-6074ebebc6f4\") " pod="openstack/nova-metadata-0"
Nov 22 07:51:15 crc kubenswrapper[4853]: I1122 07:51:15.380942 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/9292105c-7a7d-42cf-a8a1-6074ebebc6f4-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"9292105c-7a7d-42cf-a8a1-6074ebebc6f4\") " pod="openstack/nova-metadata-0"
Nov 22 07:51:15 crc kubenswrapper[4853]: I1122 07:51:15.384550 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9292105c-7a7d-42cf-a8a1-6074ebebc6f4-config-data\") pod \"nova-metadata-0\" (UID: \"9292105c-7a7d-42cf-a8a1-6074ebebc6f4\") " pod="openstack/nova-metadata-0"
Nov 22 07:51:15 crc kubenswrapper[4853]: I1122 07:51:15.384885 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cce67c7-3f8d-4931-986a-5ff6db89e8c6-kube-api-access-llmpv" (OuterVolumeSpecName: "kube-api-access-llmpv") pod "3cce67c7-3f8d-4931-986a-5ff6db89e8c6" (UID: "3cce67c7-3f8d-4931-986a-5ff6db89e8c6"). InnerVolumeSpecName "kube-api-access-llmpv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 22 07:51:15 crc kubenswrapper[4853]: I1122 07:51:15.398998 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Nov 22 07:51:15 crc kubenswrapper[4853]: E1122 07:51:15.400253 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3cce67c7-3f8d-4931-986a-5ff6db89e8c6" containerName="nova-api-log"
Nov 22 07:51:15 crc kubenswrapper[4853]: I1122 07:51:15.400281 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="3cce67c7-3f8d-4931-986a-5ff6db89e8c6" containerName="nova-api-log"
Nov 22 07:51:15 crc kubenswrapper[4853]: E1122 07:51:15.400328 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3cce67c7-3f8d-4931-986a-5ff6db89e8c6" containerName="nova-api-api"
Nov 22 07:51:15 crc kubenswrapper[4853]: I1122 07:51:15.400335 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="3cce67c7-3f8d-4931-986a-5ff6db89e8c6" containerName="nova-api-api"
Nov 22 07:51:15 crc kubenswrapper[4853]: I1122 07:51:15.401019 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="3cce67c7-3f8d-4931-986a-5ff6db89e8c6" containerName="nova-api-api"
Nov 22 07:51:15 crc kubenswrapper[4853]: I1122 07:51:15.401060 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="3cce67c7-3f8d-4931-986a-5ff6db89e8c6" containerName="nova-api-log"
Nov 22 07:51:15 crc kubenswrapper[4853]: I1122 07:51:15.403301 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9292105c-7a7d-42cf-a8a1-6074ebebc6f4-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"9292105c-7a7d-42cf-a8a1-6074ebebc6f4\") " pod="openstack/nova-metadata-0"
Nov 22 07:51:15 crc kubenswrapper[4853]: I1122 07:51:15.407686 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Nov 22 07:51:15 crc kubenswrapper[4853]: I1122 07:51:15.410695 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc"
Nov 22 07:51:15 crc kubenswrapper[4853]: I1122 07:51:15.416662 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Nov 22 07:51:15 crc kubenswrapper[4853]: I1122 07:51:15.416659 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Nov 22 07:51:15 crc kubenswrapper[4853]: I1122 07:51:15.417144 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2cfkd\" (UniqueName: \"kubernetes.io/projected/9292105c-7a7d-42cf-a8a1-6074ebebc6f4-kube-api-access-2cfkd\") pod \"nova-metadata-0\" (UID: \"9292105c-7a7d-42cf-a8a1-6074ebebc6f4\") " pod="openstack/nova-metadata-0"
Nov 22 07:51:15 crc kubenswrapper[4853]: I1122 07:51:15.432171 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Nov 22 07:51:15 crc kubenswrapper[4853]: I1122 07:51:15.449641 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3cce67c7-3f8d-4931-986a-5ff6db89e8c6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3cce67c7-3f8d-4931-986a-5ff6db89e8c6" (UID: "3cce67c7-3f8d-4931-986a-5ff6db89e8c6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 22 07:51:15 crc kubenswrapper[4853]: I1122 07:51:15.472272 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3cce67c7-3f8d-4931-986a-5ff6db89e8c6-config-data" (OuterVolumeSpecName: "config-data") pod "3cce67c7-3f8d-4931-986a-5ff6db89e8c6" (UID: "3cce67c7-3f8d-4931-986a-5ff6db89e8c6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 22 07:51:15 crc kubenswrapper[4853]: I1122 07:51:15.473287 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/6d624d32-29f3-4311-a64d-add96283eec4-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"6d624d32-29f3-4311-a64d-add96283eec4\") " pod="openstack/ceilometer-0"
Nov 22 07:51:15 crc kubenswrapper[4853]: I1122 07:51:15.473874 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6d624d32-29f3-4311-a64d-add96283eec4-run-httpd\") pod \"ceilometer-0\" (UID: \"6d624d32-29f3-4311-a64d-add96283eec4\") " pod="openstack/ceilometer-0"
Nov 22 07:51:15 crc kubenswrapper[4853]: I1122 07:51:15.473961 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6d624d32-29f3-4311-a64d-add96283eec4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6d624d32-29f3-4311-a64d-add96283eec4\") " pod="openstack/ceilometer-0"
Nov 22 07:51:15 crc kubenswrapper[4853]: I1122 07:51:15.474092 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6d624d32-29f3-4311-a64d-add96283eec4-scripts\") pod \"ceilometer-0\" (UID: \"6d624d32-29f3-4311-a64d-add96283eec4\") " pod="openstack/ceilometer-0"
Nov 22 07:51:15 crc kubenswrapper[4853]: I1122 07:51:15.474204 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d624d32-29f3-4311-a64d-add96283eec4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6d624d32-29f3-4311-a64d-add96283eec4\") " pod="openstack/ceilometer-0"
Nov 22 07:51:15 crc kubenswrapper[4853]: I1122 07:51:15.474273 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6d624d32-29f3-4311-a64d-add96283eec4-log-httpd\") pod \"ceilometer-0\" (UID: \"6d624d32-29f3-4311-a64d-add96283eec4\") " pod="openstack/ceilometer-0"
Nov 22 07:51:15 crc kubenswrapper[4853]: I1122 07:51:15.474369 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5lbfx\" (UniqueName: \"kubernetes.io/projected/6d624d32-29f3-4311-a64d-add96283eec4-kube-api-access-5lbfx\") pod \"ceilometer-0\" (UID: \"6d624d32-29f3-4311-a64d-add96283eec4\") " pod="openstack/ceilometer-0"
Nov 22 07:51:15 crc kubenswrapper[4853]: I1122 07:51:15.474418 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d624d32-29f3-4311-a64d-add96283eec4-config-data\") pod \"ceilometer-0\" (UID: \"6d624d32-29f3-4311-a64d-add96283eec4\") " pod="openstack/ceilometer-0"
Nov 22 07:51:15 crc kubenswrapper[4853]: I1122 07:51:15.474651 4853 
reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3cce67c7-3f8d-4931-986a-5ff6db89e8c6-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 07:51:15 crc kubenswrapper[4853]: I1122 07:51:15.474666 4853 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3cce67c7-3f8d-4931-986a-5ff6db89e8c6-logs\") on node \"crc\" DevicePath \"\"" Nov 22 07:51:15 crc kubenswrapper[4853]: I1122 07:51:15.474677 4853 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3cce67c7-3f8d-4931-986a-5ff6db89e8c6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:51:15 crc kubenswrapper[4853]: I1122 07:51:15.474691 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-llmpv\" (UniqueName: \"kubernetes.io/projected/3cce67c7-3f8d-4931-986a-5ff6db89e8c6-kube-api-access-llmpv\") on node \"crc\" DevicePath \"\"" Nov 22 07:51:15 crc kubenswrapper[4853]: I1122 07:51:15.491692 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 22 07:51:15 crc kubenswrapper[4853]: I1122 07:51:15.531003 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3cce67c7-3f8d-4931-986a-5ff6db89e8c6-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "3cce67c7-3f8d-4931-986a-5ff6db89e8c6" (UID: "3cce67c7-3f8d-4931-986a-5ff6db89e8c6"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:51:15 crc kubenswrapper[4853]: I1122 07:51:15.535858 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3cce67c7-3f8d-4931-986a-5ff6db89e8c6-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "3cce67c7-3f8d-4931-986a-5ff6db89e8c6" (UID: "3cce67c7-3f8d-4931-986a-5ff6db89e8c6"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:51:15 crc kubenswrapper[4853]: I1122 07:51:15.580356 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6d624d32-29f3-4311-a64d-add96283eec4-run-httpd\") pod \"ceilometer-0\" (UID: \"6d624d32-29f3-4311-a64d-add96283eec4\") " pod="openstack/ceilometer-0" Nov 22 07:51:15 crc kubenswrapper[4853]: I1122 07:51:15.580428 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6d624d32-29f3-4311-a64d-add96283eec4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6d624d32-29f3-4311-a64d-add96283eec4\") " pod="openstack/ceilometer-0" Nov 22 07:51:15 crc kubenswrapper[4853]: I1122 07:51:15.580528 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6d624d32-29f3-4311-a64d-add96283eec4-scripts\") pod \"ceilometer-0\" (UID: \"6d624d32-29f3-4311-a64d-add96283eec4\") " pod="openstack/ceilometer-0" Nov 22 07:51:15 crc kubenswrapper[4853]: I1122 07:51:15.580616 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d624d32-29f3-4311-a64d-add96283eec4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6d624d32-29f3-4311-a64d-add96283eec4\") " pod="openstack/ceilometer-0" Nov 22 07:51:15 crc kubenswrapper[4853]: I1122 07:51:15.580668 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6d624d32-29f3-4311-a64d-add96283eec4-log-httpd\") pod \"ceilometer-0\" (UID: \"6d624d32-29f3-4311-a64d-add96283eec4\") " pod="openstack/ceilometer-0" Nov 22 07:51:15 crc kubenswrapper[4853]: I1122 07:51:15.580741 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5lbfx\" (UniqueName: \"kubernetes.io/projected/6d624d32-29f3-4311-a64d-add96283eec4-kube-api-access-5lbfx\") pod \"ceilometer-0\" (UID: \"6d624d32-29f3-4311-a64d-add96283eec4\") " pod="openstack/ceilometer-0" Nov 22 07:51:15 crc kubenswrapper[4853]: I1122 07:51:15.580787 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d624d32-29f3-4311-a64d-add96283eec4-config-data\") pod \"ceilometer-0\" (UID: \"6d624d32-29f3-4311-a64d-add96283eec4\") " pod="openstack/ceilometer-0" Nov 22 07:51:15 crc kubenswrapper[4853]: I1122 07:51:15.580954 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6d624d32-29f3-4311-a64d-add96283eec4-run-httpd\") pod \"ceilometer-0\" (UID: \"6d624d32-29f3-4311-a64d-add96283eec4\") " pod="openstack/ceilometer-0" Nov 22 07:51:15 crc kubenswrapper[4853]: I1122 07:51:15.583057 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/6d624d32-29f3-4311-a64d-add96283eec4-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"6d624d32-29f3-4311-a64d-add96283eec4\") " pod="openstack/ceilometer-0" Nov 22 07:51:15 crc kubenswrapper[4853]: I1122 07:51:15.583375 4853 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3cce67c7-3f8d-4931-986a-5ff6db89e8c6-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 22 07:51:15 crc kubenswrapper[4853]: I1122 
07:51:15.583396 4853 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3cce67c7-3f8d-4931-986a-5ff6db89e8c6-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 22 07:51:15 crc kubenswrapper[4853]: I1122 07:51:15.584049 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6d624d32-29f3-4311-a64d-add96283eec4-log-httpd\") pod \"ceilometer-0\" (UID: \"6d624d32-29f3-4311-a64d-add96283eec4\") " pod="openstack/ceilometer-0" Nov 22 07:51:15 crc kubenswrapper[4853]: I1122 07:51:15.594092 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d624d32-29f3-4311-a64d-add96283eec4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6d624d32-29f3-4311-a64d-add96283eec4\") " pod="openstack/ceilometer-0" Nov 22 07:51:15 crc kubenswrapper[4853]: I1122 07:51:15.594528 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6d624d32-29f3-4311-a64d-add96283eec4-scripts\") pod \"ceilometer-0\" (UID: \"6d624d32-29f3-4311-a64d-add96283eec4\") " pod="openstack/ceilometer-0" Nov 22 07:51:15 crc kubenswrapper[4853]: I1122 07:51:15.594996 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6d624d32-29f3-4311-a64d-add96283eec4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6d624d32-29f3-4311-a64d-add96283eec4\") " pod="openstack/ceilometer-0" Nov 22 07:51:15 crc kubenswrapper[4853]: I1122 07:51:15.597001 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d624d32-29f3-4311-a64d-add96283eec4-config-data\") pod \"ceilometer-0\" (UID: \"6d624d32-29f3-4311-a64d-add96283eec4\") " pod="openstack/ceilometer-0" Nov 22 07:51:15 crc kubenswrapper[4853]: I1122 07:51:15.598936 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/6d624d32-29f3-4311-a64d-add96283eec4-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"6d624d32-29f3-4311-a64d-add96283eec4\") " pod="openstack/ceilometer-0" Nov 22 07:51:15 crc kubenswrapper[4853]: I1122 07:51:15.599708 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5lbfx\" (UniqueName: \"kubernetes.io/projected/6d624d32-29f3-4311-a64d-add96283eec4-kube-api-access-5lbfx\") pod \"ceilometer-0\" (UID: \"6d624d32-29f3-4311-a64d-add96283eec4\") " pod="openstack/ceilometer-0" Nov 22 07:51:15 crc kubenswrapper[4853]: I1122 07:51:15.739458 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 22 07:51:15 crc kubenswrapper[4853]: I1122 07:51:15.792528 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="773ad68b-b0f8-4afc-91bd-008f86442be6" path="/var/lib/kubelet/pods/773ad68b-b0f8-4afc-91bd-008f86442be6/volumes" Nov 22 07:51:15 crc kubenswrapper[4853]: I1122 07:51:15.793267 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="afcc3ae8-8b9d-4c00-b2c2-c601accbe056" path="/var/lib/kubelet/pods/afcc3ae8-8b9d-4c00-b2c2-c601accbe056/volumes" Nov 22 07:51:16 crc kubenswrapper[4853]: I1122 07:51:16.016928 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"c79984de-ac53-48e9-b443-a5b7128315ef","Type":"ContainerDied","Data":"4629a74ff08aa06bef759e0a1d107087d8982f47a589adb4bca716d4fb96258d"} Nov 22 07:51:16 crc kubenswrapper[4853]: I1122 07:51:16.017305 4853 scope.go:117] "RemoveContainer" containerID="ed9bebd0b041d4b3ee13d5824ecd046887dc7d5377cbad11717b37c9a3b7c931" Nov 22 07:51:16 crc kubenswrapper[4853]: I1122 07:51:16.017582 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 22 07:51:16 crc kubenswrapper[4853]: I1122 07:51:16.025242 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3cce67c7-3f8d-4931-986a-5ff6db89e8c6","Type":"ContainerDied","Data":"0878d1eec4f70d7d3113c4a32b157cd68d837e64096c1236c54f09d4ce16cc59"} Nov 22 07:51:16 crc kubenswrapper[4853]: I1122 07:51:16.025345 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 22 07:51:16 crc kubenswrapper[4853]: I1122 07:51:16.053029 4853 scope.go:117] "RemoveContainer" containerID="e25ecc34a2913fffaf0090f0eb190190dfe3748cfdb9fea1e3f28303301ef9b5" Nov 22 07:51:16 crc kubenswrapper[4853]: I1122 07:51:16.079891 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 22 07:51:16 crc kubenswrapper[4853]: I1122 07:51:16.119022 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Nov 22 07:51:16 crc kubenswrapper[4853]: I1122 07:51:16.149065 4853 scope.go:117] "RemoveContainer" containerID="a3c582deba6728dcdf27393506ccb8cad8c7f1dbbb1106bb9b8c986b8763aeaa" Nov 22 07:51:16 crc kubenswrapper[4853]: I1122 07:51:16.155480 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 22 07:51:16 crc kubenswrapper[4853]: I1122 07:51:16.177339 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Nov 22 07:51:16 crc kubenswrapper[4853]: I1122 07:51:16.179517 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 22 07:51:16 crc kubenswrapper[4853]: I1122 07:51:16.182423 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Nov 22 07:51:16 crc kubenswrapper[4853]: I1122 07:51:16.204788 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Nov 22 07:51:16 crc kubenswrapper[4853]: I1122 07:51:16.238843 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 22 07:51:16 crc kubenswrapper[4853]: I1122 07:51:16.248932 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 22 07:51:16 crc kubenswrapper[4853]: I1122 07:51:16.251464 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 22 07:51:16 crc kubenswrapper[4853]: I1122 07:51:16.257407 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Nov 22 07:51:16 crc kubenswrapper[4853]: I1122 07:51:16.257548 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 22 07:51:16 crc kubenswrapper[4853]: I1122 07:51:16.258823 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Nov 22 07:51:16 crc kubenswrapper[4853]: I1122 07:51:16.261862 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 22 07:51:16 crc kubenswrapper[4853]: I1122 07:51:16.325174 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g98f5\" (UniqueName: \"kubernetes.io/projected/bfa0c19f-e6cf-4db2-a88c-76388997551c-kube-api-access-g98f5\") pod \"nova-api-0\" (UID: \"bfa0c19f-e6cf-4db2-a88c-76388997551c\") " pod="openstack/nova-api-0" Nov 22 07:51:16 crc kubenswrapper[4853]: I1122 07:51:16.325522 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bfa0c19f-e6cf-4db2-a88c-76388997551c-logs\") pod \"nova-api-0\" (UID: \"bfa0c19f-e6cf-4db2-a88c-76388997551c\") " pod="openstack/nova-api-0" Nov 22 07:51:16 crc kubenswrapper[4853]: I1122 07:51:16.325588 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bfa0c19f-e6cf-4db2-a88c-76388997551c-config-data\") pod \"nova-api-0\" (UID: \"bfa0c19f-e6cf-4db2-a88c-76388997551c\") " pod="openstack/nova-api-0" Nov 22 07:51:16 crc kubenswrapper[4853]: I1122 07:51:16.325662 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bfa0c19f-e6cf-4db2-a88c-76388997551c-public-tls-certs\") pod \"nova-api-0\" (UID: \"bfa0c19f-e6cf-4db2-a88c-76388997551c\") " pod="openstack/nova-api-0" Nov 22 07:51:16 crc kubenswrapper[4853]: I1122 07:51:16.325742 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bfa0c19f-e6cf-4db2-a88c-76388997551c-internal-tls-certs\") pod \"nova-api-0\" (UID: \"bfa0c19f-e6cf-4db2-a88c-76388997551c\") " pod="openstack/nova-api-0" Nov 22 07:51:16 crc kubenswrapper[4853]: I1122 07:51:16.325861 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91458107-9648-4958-ae6c-54457f8744f6-config-data\") pod \"nova-scheduler-0\" (UID: \"91458107-9648-4958-ae6c-54457f8744f6\") " pod="openstack/nova-scheduler-0" Nov 22 07:51:16 crc kubenswrapper[4853]: I1122 07:51:16.326053 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfa0c19f-e6cf-4db2-a88c-76388997551c-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"bfa0c19f-e6cf-4db2-a88c-76388997551c\") " pod="openstack/nova-api-0" Nov 22 07:51:16 crc kubenswrapper[4853]: I1122 07:51:16.326215 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/91458107-9648-4958-ae6c-54457f8744f6-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"91458107-9648-4958-ae6c-54457f8744f6\") " pod="openstack/nova-scheduler-0" Nov 22 07:51:16 crc kubenswrapper[4853]: I1122 07:51:16.326284 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bdx2f\" (UniqueName: \"kubernetes.io/projected/91458107-9648-4958-ae6c-54457f8744f6-kube-api-access-bdx2f\") pod \"nova-scheduler-0\" (UID: \"91458107-9648-4958-ae6c-54457f8744f6\") " pod="openstack/nova-scheduler-0" Nov 22 07:51:16 crc kubenswrapper[4853]: I1122 07:51:16.428864 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bfa0c19f-e6cf-4db2-a88c-76388997551c-public-tls-certs\") pod \"nova-api-0\" (UID: \"bfa0c19f-e6cf-4db2-a88c-76388997551c\") " pod="openstack/nova-api-0" Nov 22 07:51:16 crc kubenswrapper[4853]: I1122 07:51:16.428926 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bfa0c19f-e6cf-4db2-a88c-76388997551c-internal-tls-certs\") pod \"nova-api-0\" (UID: \"bfa0c19f-e6cf-4db2-a88c-76388997551c\") " pod="openstack/nova-api-0" Nov 22 07:51:16 crc kubenswrapper[4853]: I1122 07:51:16.428983 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91458107-9648-4958-ae6c-54457f8744f6-config-data\") pod \"nova-scheduler-0\" (UID: \"91458107-9648-4958-ae6c-54457f8744f6\") " pod="openstack/nova-scheduler-0" Nov 22 07:51:16 crc kubenswrapper[4853]: I1122 07:51:16.429072 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfa0c19f-e6cf-4db2-a88c-76388997551c-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"bfa0c19f-e6cf-4db2-a88c-76388997551c\") " pod="openstack/nova-api-0" Nov 22 07:51:16 crc kubenswrapper[4853]: I1122 07:51:16.429117 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91458107-9648-4958-ae6c-54457f8744f6-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"91458107-9648-4958-ae6c-54457f8744f6\") " pod="openstack/nova-scheduler-0" Nov 22 07:51:16 crc kubenswrapper[4853]: I1122 07:51:16.429150 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bdx2f\" (UniqueName: \"kubernetes.io/projected/91458107-9648-4958-ae6c-54457f8744f6-kube-api-access-bdx2f\") pod \"nova-scheduler-0\" (UID: \"91458107-9648-4958-ae6c-54457f8744f6\") " pod="openstack/nova-scheduler-0" Nov 22 07:51:16 crc kubenswrapper[4853]: I1122 07:51:16.429189 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g98f5\" (UniqueName: \"kubernetes.io/projected/bfa0c19f-e6cf-4db2-a88c-76388997551c-kube-api-access-g98f5\") pod \"nova-api-0\" (UID: \"bfa0c19f-e6cf-4db2-a88c-76388997551c\") " pod="openstack/nova-api-0" Nov 22 07:51:16 crc kubenswrapper[4853]: I1122 07:51:16.429220 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bfa0c19f-e6cf-4db2-a88c-76388997551c-logs\") pod \"nova-api-0\" (UID: \"bfa0c19f-e6cf-4db2-a88c-76388997551c\") " pod="openstack/nova-api-0" Nov 22 07:51:16 crc kubenswrapper[4853]: I1122 07:51:16.429246 4853 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bfa0c19f-e6cf-4db2-a88c-76388997551c-config-data\") pod \"nova-api-0\" (UID: \"bfa0c19f-e6cf-4db2-a88c-76388997551c\") " pod="openstack/nova-api-0" Nov 22 07:51:16 crc kubenswrapper[4853]: I1122 07:51:16.431304 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bfa0c19f-e6cf-4db2-a88c-76388997551c-logs\") pod \"nova-api-0\" (UID: \"bfa0c19f-e6cf-4db2-a88c-76388997551c\") " pod="openstack/nova-api-0" Nov 22 07:51:16 crc kubenswrapper[4853]: I1122 07:51:16.439847 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bfa0c19f-e6cf-4db2-a88c-76388997551c-public-tls-certs\") pod \"nova-api-0\" (UID: \"bfa0c19f-e6cf-4db2-a88c-76388997551c\") " pod="openstack/nova-api-0" Nov 22 07:51:16 crc kubenswrapper[4853]: I1122 07:51:16.442981 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91458107-9648-4958-ae6c-54457f8744f6-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"91458107-9648-4958-ae6c-54457f8744f6\") " pod="openstack/nova-scheduler-0" Nov 22 07:51:16 crc kubenswrapper[4853]: I1122 07:51:16.443146 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91458107-9648-4958-ae6c-54457f8744f6-config-data\") pod \"nova-scheduler-0\" (UID: \"91458107-9648-4958-ae6c-54457f8744f6\") " pod="openstack/nova-scheduler-0" Nov 22 07:51:16 crc kubenswrapper[4853]: I1122 07:51:16.443314 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bfa0c19f-e6cf-4db2-a88c-76388997551c-config-data\") pod \"nova-api-0\" (UID: \"bfa0c19f-e6cf-4db2-a88c-76388997551c\") " pod="openstack/nova-api-0" Nov 22 07:51:16 crc kubenswrapper[4853]: I1122 07:51:16.443916 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfa0c19f-e6cf-4db2-a88c-76388997551c-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"bfa0c19f-e6cf-4db2-a88c-76388997551c\") " pod="openstack/nova-api-0" Nov 22 07:51:16 crc kubenswrapper[4853]: I1122 07:51:16.452413 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bfa0c19f-e6cf-4db2-a88c-76388997551c-internal-tls-certs\") pod \"nova-api-0\" (UID: \"bfa0c19f-e6cf-4db2-a88c-76388997551c\") " pod="openstack/nova-api-0" Nov 22 07:51:16 crc kubenswrapper[4853]: I1122 07:51:16.452851 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g98f5\" (UniqueName: \"kubernetes.io/projected/bfa0c19f-e6cf-4db2-a88c-76388997551c-kube-api-access-g98f5\") pod \"nova-api-0\" (UID: \"bfa0c19f-e6cf-4db2-a88c-76388997551c\") " pod="openstack/nova-api-0" Nov 22 07:51:16 crc kubenswrapper[4853]: I1122 07:51:16.453283 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bdx2f\" (UniqueName: \"kubernetes.io/projected/91458107-9648-4958-ae6c-54457f8744f6-kube-api-access-bdx2f\") pod \"nova-scheduler-0\" (UID: \"91458107-9648-4958-ae6c-54457f8744f6\") " pod="openstack/nova-scheduler-0" Nov 22 07:51:16 crc kubenswrapper[4853]: I1122 07:51:16.516327 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 22 07:51:16 crc kubenswrapper[4853]: I1122 07:51:16.528639 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 22 07:51:16 crc kubenswrapper[4853]: W1122 07:51:16.529187 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9292105c_7a7d_42cf_a8a1_6074ebebc6f4.slice/crio-d6ec15208e827a7cabcee45cb18a721a2331c430d168cd2ade0c42ec5febcfe0 WatchSource:0}: Error finding container d6ec15208e827a7cabcee45cb18a721a2331c430d168cd2ade0c42ec5febcfe0: Status 404 returned error can't find the container with id d6ec15208e827a7cabcee45cb18a721a2331c430d168cd2ade0c42ec5febcfe0 Nov 22 07:51:16 crc kubenswrapper[4853]: I1122 07:51:16.537741 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 22 07:51:16 crc kubenswrapper[4853]: W1122 07:51:16.549297 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6d624d32_29f3_4311_a64d_add96283eec4.slice/crio-c4201417f548b60906a416199a6647bc1b55ff042856b7a4a698d7492a104ef3 WatchSource:0}: Error finding container c4201417f548b60906a416199a6647bc1b55ff042856b7a4a698d7492a104ef3: Status 404 returned error can't find the container with id c4201417f548b60906a416199a6647bc1b55ff042856b7a4a698d7492a104ef3 Nov 22 07:51:16 crc kubenswrapper[4853]: I1122 07:51:16.576251 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 22 07:51:17 crc kubenswrapper[4853]: I1122 07:51:17.027669 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 22 07:51:17 crc kubenswrapper[4853]: W1122 07:51:17.028992 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod91458107_9648_4958_ae6c_54457f8744f6.slice/crio-4661ce7d6b338c1e8515cf3397b5e0c3d83a2593ca2af994a0aa64dbbd4519be WatchSource:0}: Error finding container 4661ce7d6b338c1e8515cf3397b5e0c3d83a2593ca2af994a0aa64dbbd4519be: Status 404 returned error can't find the container with id 4661ce7d6b338c1e8515cf3397b5e0c3d83a2593ca2af994a0aa64dbbd4519be Nov 22 07:51:17 crc kubenswrapper[4853]: I1122 07:51:17.045238 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9292105c-7a7d-42cf-a8a1-6074ebebc6f4","Type":"ContainerStarted","Data":"d6ec15208e827a7cabcee45cb18a721a2331c430d168cd2ade0c42ec5febcfe0"} Nov 22 07:51:17 crc kubenswrapper[4853]: I1122 07:51:17.049403 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"91458107-9648-4958-ae6c-54457f8744f6","Type":"ContainerStarted","Data":"4661ce7d6b338c1e8515cf3397b5e0c3d83a2593ca2af994a0aa64dbbd4519be"} Nov 22 07:51:17 crc kubenswrapper[4853]: I1122 07:51:17.051333 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6d624d32-29f3-4311-a64d-add96283eec4","Type":"ContainerStarted","Data":"c4201417f548b60906a416199a6647bc1b55ff042856b7a4a698d7492a104ef3"} Nov 22 07:51:17 crc kubenswrapper[4853]: I1122 07:51:17.191849 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 22 07:51:17 crc kubenswrapper[4853]: W1122 07:51:17.203829 4853 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbfa0c19f_e6cf_4db2_a88c_76388997551c.slice/crio-b6ad255a533542762282f2e840cdb10dbbd1d4de8b147dd28dd8794fdc99c9fa WatchSource:0}: Error finding container b6ad255a533542762282f2e840cdb10dbbd1d4de8b147dd28dd8794fdc99c9fa: Status 404 returned error can't find the container with id b6ad255a533542762282f2e840cdb10dbbd1d4de8b147dd28dd8794fdc99c9fa Nov 22 07:51:17 crc kubenswrapper[4853]: I1122 07:51:17.761510 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cce67c7-3f8d-4931-986a-5ff6db89e8c6" path="/var/lib/kubelet/pods/3cce67c7-3f8d-4931-986a-5ff6db89e8c6/volumes" Nov 22 07:51:17 crc kubenswrapper[4853]: I1122 07:51:17.762557 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c79984de-ac53-48e9-b443-a5b7128315ef" path="/var/lib/kubelet/pods/c79984de-ac53-48e9-b443-a5b7128315ef/volumes" Nov 22 07:51:18 crc kubenswrapper[4853]: I1122 07:51:18.079595 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"bfa0c19f-e6cf-4db2-a88c-76388997551c","Type":"ContainerStarted","Data":"622693cb9cf47af4bbc0f13e5c726c013dd88a05bdb153789fa149f4da0915b5"} Nov 22 07:51:18 crc kubenswrapper[4853]: I1122 07:51:18.080019 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"bfa0c19f-e6cf-4db2-a88c-76388997551c","Type":"ContainerStarted","Data":"b6ad255a533542762282f2e840cdb10dbbd1d4de8b147dd28dd8794fdc99c9fa"} Nov 22 07:51:18 crc kubenswrapper[4853]: I1122 07:51:18.083230 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"91458107-9648-4958-ae6c-54457f8744f6","Type":"ContainerStarted","Data":"b2fe228a1e4764129491ca5b038ccbdc85bbf9d96c651122dab7d14184da40d1"} Nov 22 07:51:18 crc kubenswrapper[4853]: I1122 07:51:18.086514 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9292105c-7a7d-42cf-a8a1-6074ebebc6f4","Type":"ContainerStarted","Data":"144228b9af8c3eebebf943c5b239820dc630e6a6232950515bd779774932debf"} Nov 22 07:51:18 crc kubenswrapper[4853]: I1122 07:51:18.086557 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9292105c-7a7d-42cf-a8a1-6074ebebc6f4","Type":"ContainerStarted","Data":"f600b4b2d4a640f162f808afe3b2666cce456b8c2943c76b30715ce35f3b7d43"} Nov 22 07:51:18 crc kubenswrapper[4853]: I1122 07:51:18.106312 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.10628974 podStartE2EDuration="2.10628974s" podCreationTimestamp="2025-11-22 07:51:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:51:18.099874607 +0000 UTC m=+2476.940497263" watchObservedRunningTime="2025-11-22 07:51:18.10628974 +0000 UTC m=+2476.946912366" Nov 22 07:51:18 crc kubenswrapper[4853]: I1122 07:51:18.141980 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.141960415 podStartE2EDuration="3.141960415s" podCreationTimestamp="2025-11-22 07:51:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:51:18.125701229 +0000 UTC m=+2476.966323865" watchObservedRunningTime="2025-11-22 07:51:18.141960415 +0000 UTC m=+2476.982583041" Nov 22 07:51:19 crc 
kubenswrapper[4853]: I1122 07:51:19.102444 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"bfa0c19f-e6cf-4db2-a88c-76388997551c","Type":"ContainerStarted","Data":"c05856b8cf33d895e9463b01104479fc9293c09715de7d4a2da9c608cb259d36"} Nov 22 07:51:19 crc kubenswrapper[4853]: I1122 07:51:19.129573 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.129553212 podStartE2EDuration="3.129553212s" podCreationTimestamp="2025-11-22 07:51:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:51:19.127282981 +0000 UTC m=+2477.967905617" watchObservedRunningTime="2025-11-22 07:51:19.129553212 +0000 UTC m=+2477.970175838" Nov 22 07:51:20 crc kubenswrapper[4853]: I1122 07:51:20.492711 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 22 07:51:20 crc kubenswrapper[4853]: I1122 07:51:20.493302 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 22 07:51:21 crc kubenswrapper[4853]: I1122 07:51:21.126352 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6d624d32-29f3-4311-a64d-add96283eec4","Type":"ContainerStarted","Data":"51b07feb1912c00397d0b0b933bea13c59d7df4f982750c009afd5d87dd48efd"} Nov 22 07:51:21 crc kubenswrapper[4853]: I1122 07:51:21.517233 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Nov 22 07:51:22 crc kubenswrapper[4853]: I1122 07:51:22.141938 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6d624d32-29f3-4311-a64d-add96283eec4","Type":"ContainerStarted","Data":"52f61c84b58c81fdc6b48371a52010220abe8fb8eaa26877e8f3c3319c50a635"} Nov 22 07:51:23 crc kubenswrapper[4853]: I1122 07:51:23.159866 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6d624d32-29f3-4311-a64d-add96283eec4","Type":"ContainerStarted","Data":"1e0c7e86949222aee00d2317241917ea59ee264e69098bb0e71061cb922c80cd"} Nov 22 07:51:24 crc kubenswrapper[4853]: I1122 07:51:24.184407 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6d624d32-29f3-4311-a64d-add96283eec4","Type":"ContainerStarted","Data":"f2926127534cf5ee7ad404d7a4c87ed6f698da9cd96dd324c23c27e43c07f981"} Nov 22 07:51:24 crc kubenswrapper[4853]: I1122 07:51:24.185245 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 22 07:51:24 crc kubenswrapper[4853]: I1122 07:51:24.244508 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.998593934 podStartE2EDuration="9.2444801s" podCreationTimestamp="2025-11-22 07:51:15 +0000 UTC" firstStartedPulling="2025-11-22 07:51:16.557984566 +0000 UTC m=+2475.398607182" lastFinishedPulling="2025-11-22 07:51:23.803870702 +0000 UTC m=+2482.644493348" observedRunningTime="2025-11-22 07:51:24.22246763 +0000 UTC m=+2483.063090266" watchObservedRunningTime="2025-11-22 07:51:24.2444801 +0000 UTC m=+2483.085102736" Nov 22 07:51:25 crc kubenswrapper[4853]: I1122 07:51:25.493694 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 22 07:51:25 crc kubenswrapper[4853]: I1122 07:51:25.494128 4853 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 22 07:51:26 crc kubenswrapper[4853]: I1122 07:51:26.515003 4853 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="9292105c-7a7d-42cf-a8a1-6074ebebc6f4" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.1.14:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 22 07:51:26 crc kubenswrapper[4853]: I1122 07:51:26.515171 4853 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="9292105c-7a7d-42cf-a8a1-6074ebebc6f4" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.1.14:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 22 07:51:26 crc kubenswrapper[4853]: I1122 07:51:26.518050 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Nov 22 07:51:26 crc kubenswrapper[4853]: I1122 07:51:26.550136 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Nov 22 07:51:26 crc kubenswrapper[4853]: I1122 07:51:26.577549 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 22 07:51:26 crc kubenswrapper[4853]: I1122 07:51:26.577601 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 22 07:51:27 crc kubenswrapper[4853]: I1122 07:51:27.273366 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Nov 22 07:51:27 crc kubenswrapper[4853]: I1122 07:51:27.606678 4853 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="bfa0c19f-e6cf-4db2-a88c-76388997551c" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.1.17:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 22 07:51:27 crc kubenswrapper[4853]: I1122 07:51:27.606722 4853 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="bfa0c19f-e6cf-4db2-a88c-76388997551c" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.1.17:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 22 07:51:31 crc kubenswrapper[4853]: I1122 07:51:31.297367 4853 patch_prober.go:28] interesting pod/machine-config-daemon-fflvd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 07:51:31 crc kubenswrapper[4853]: I1122 07:51:31.298129 4853 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 07:51:31 crc kubenswrapper[4853]: I1122 07:51:31.298197 4853 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" Nov 22 07:51:31 crc kubenswrapper[4853]: I1122 07:51:31.299306 4853 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"f1c0ac83a6a857bb9ac64bb048075d436003939992592a9a9b2d45101ef2a2de"} pod="openshift-machine-config-operator/machine-config-daemon-fflvd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 22 07:51:31 crc kubenswrapper[4853]: I1122 07:51:31.299382 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" containerName="machine-config-daemon" containerID="cri-o://f1c0ac83a6a857bb9ac64bb048075d436003939992592a9a9b2d45101ef2a2de" gracePeriod=600 Nov 22 07:51:31 crc kubenswrapper[4853]: E1122 07:51:31.447029 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" Nov 22 07:51:32 crc kubenswrapper[4853]: I1122 07:51:32.284606 4853 generic.go:334] "Generic (PLEG): container finished" podID="476c875a-2b87-419a-8042-0ba059620fd8" containerID="f1c0ac83a6a857bb9ac64bb048075d436003939992592a9a9b2d45101ef2a2de" exitCode=0 Nov 22 07:51:32 crc kubenswrapper[4853]: I1122 07:51:32.284692 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" event={"ID":"476c875a-2b87-419a-8042-0ba059620fd8","Type":"ContainerDied","Data":"f1c0ac83a6a857bb9ac64bb048075d436003939992592a9a9b2d45101ef2a2de"} Nov 22 07:51:32 crc kubenswrapper[4853]: I1122 07:51:32.284987 4853 scope.go:117] "RemoveContainer" containerID="93242edc98369aed066eebfb95cc23d28e71df7ebef2302dd5a716d3fb81aedd" Nov 22 07:51:32 crc kubenswrapper[4853]: I1122 07:51:32.286018 4853 scope.go:117] "RemoveContainer" containerID="f1c0ac83a6a857bb9ac64bb048075d436003939992592a9a9b2d45101ef2a2de" Nov 22 07:51:32 crc kubenswrapper[4853]: E1122 07:51:32.286381 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" Nov 22 07:51:35 crc kubenswrapper[4853]: I1122 07:51:35.504133 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 22 07:51:35 crc kubenswrapper[4853]: I1122 07:51:35.504697 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 22 07:51:35 crc kubenswrapper[4853]: I1122 07:51:35.510844 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Nov 22 07:51:35 crc kubenswrapper[4853]: I1122 07:51:35.511695 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Nov 22 07:51:36 crc kubenswrapper[4853]: I1122 07:51:36.623332 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 22 07:51:36 crc kubenswrapper[4853]: I1122 07:51:36.624142 4853 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack/nova-api-0" Nov 22 07:51:36 crc kubenswrapper[4853]: I1122 07:51:36.660550 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 22 07:51:36 crc kubenswrapper[4853]: I1122 07:51:36.688991 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 22 07:51:37 crc kubenswrapper[4853]: I1122 07:51:37.344709 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 22 07:51:37 crc kubenswrapper[4853]: I1122 07:51:37.353291 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 22 07:51:39 crc kubenswrapper[4853]: I1122 07:51:39.121167 4853 scope.go:117] "RemoveContainer" containerID="2fef05ea5e3d441fe9fb192e15b5ac4bfacf586bae220c5364d42b62e3be6f8f" Nov 22 07:51:39 crc kubenswrapper[4853]: I1122 07:51:39.145817 4853 scope.go:117] "RemoveContainer" containerID="739b31c91720f2ec0951dab78f0a956c3fd5e6b021ba0ebea0f5224904573651" Nov 22 07:51:41 crc kubenswrapper[4853]: I1122 07:51:41.416517 4853 generic.go:334] "Generic (PLEG): container finished" podID="75849edb-9f0f-49d2-97b5-ca5070f3116f" containerID="3a2d15629ae8583586b2aa002e0ac69cf6fb75389e39085740b0f4e577b2f16e" exitCode=137 Nov 22 07:51:41 crc kubenswrapper[4853]: I1122 07:51:41.416623 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"75849edb-9f0f-49d2-97b5-ca5070f3116f","Type":"ContainerDied","Data":"3a2d15629ae8583586b2aa002e0ac69cf6fb75389e39085740b0f4e577b2f16e"} Nov 22 07:51:41 crc kubenswrapper[4853]: I1122 07:51:41.417160 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"75849edb-9f0f-49d2-97b5-ca5070f3116f","Type":"ContainerDied","Data":"e6edc282efe45b2ed946954f20de03f92f37609c59095880f39932eabfafe48e"} Nov 22 07:51:41 crc kubenswrapper[4853]: I1122 07:51:41.417176 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e6edc282efe45b2ed946954f20de03f92f37609c59095880f39932eabfafe48e" Nov 22 07:51:41 crc kubenswrapper[4853]: I1122 07:51:41.473502 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Nov 22 07:51:41 crc kubenswrapper[4853]: I1122 07:51:41.540725 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75849edb-9f0f-49d2-97b5-ca5070f3116f-config-data\") pod \"75849edb-9f0f-49d2-97b5-ca5070f3116f\" (UID: \"75849edb-9f0f-49d2-97b5-ca5070f3116f\") " Nov 22 07:51:41 crc kubenswrapper[4853]: I1122 07:51:41.541390 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/75849edb-9f0f-49d2-97b5-ca5070f3116f-scripts\") pod \"75849edb-9f0f-49d2-97b5-ca5070f3116f\" (UID: \"75849edb-9f0f-49d2-97b5-ca5070f3116f\") " Nov 22 07:51:41 crc kubenswrapper[4853]: I1122 07:51:41.541427 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75849edb-9f0f-49d2-97b5-ca5070f3116f-combined-ca-bundle\") pod \"75849edb-9f0f-49d2-97b5-ca5070f3116f\" (UID: \"75849edb-9f0f-49d2-97b5-ca5070f3116f\") " Nov 22 07:51:41 crc kubenswrapper[4853]: I1122 07:51:41.541566 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xb4n9\" (UniqueName: \"kubernetes.io/projected/75849edb-9f0f-49d2-97b5-ca5070f3116f-kube-api-access-xb4n9\") pod \"75849edb-9f0f-49d2-97b5-ca5070f3116f\" (UID: \"75849edb-9f0f-49d2-97b5-ca5070f3116f\") " Nov 22 07:51:41 crc kubenswrapper[4853]: I1122 07:51:41.559737 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75849edb-9f0f-49d2-97b5-ca5070f3116f-scripts" (OuterVolumeSpecName: "scripts") pod "75849edb-9f0f-49d2-97b5-ca5070f3116f" (UID: "75849edb-9f0f-49d2-97b5-ca5070f3116f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:51:41 crc kubenswrapper[4853]: I1122 07:51:41.560287 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75849edb-9f0f-49d2-97b5-ca5070f3116f-kube-api-access-xb4n9" (OuterVolumeSpecName: "kube-api-access-xb4n9") pod "75849edb-9f0f-49d2-97b5-ca5070f3116f" (UID: "75849edb-9f0f-49d2-97b5-ca5070f3116f"). InnerVolumeSpecName "kube-api-access-xb4n9". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:51:41 crc kubenswrapper[4853]: I1122 07:51:41.647364 4853 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/75849edb-9f0f-49d2-97b5-ca5070f3116f-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 07:51:41 crc kubenswrapper[4853]: I1122 07:51:41.647397 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xb4n9\" (UniqueName: \"kubernetes.io/projected/75849edb-9f0f-49d2-97b5-ca5070f3116f-kube-api-access-xb4n9\") on node \"crc\" DevicePath \"\"" Nov 22 07:51:41 crc kubenswrapper[4853]: I1122 07:51:41.714723 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75849edb-9f0f-49d2-97b5-ca5070f3116f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "75849edb-9f0f-49d2-97b5-ca5070f3116f" (UID: "75849edb-9f0f-49d2-97b5-ca5070f3116f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:51:41 crc kubenswrapper[4853]: I1122 07:51:41.719896 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75849edb-9f0f-49d2-97b5-ca5070f3116f-config-data" (OuterVolumeSpecName: "config-data") pod "75849edb-9f0f-49d2-97b5-ca5070f3116f" (UID: "75849edb-9f0f-49d2-97b5-ca5070f3116f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:51:41 crc kubenswrapper[4853]: I1122 07:51:41.751704 4853 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75849edb-9f0f-49d2-97b5-ca5070f3116f-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 07:51:41 crc kubenswrapper[4853]: I1122 07:51:41.751769 4853 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75849edb-9f0f-49d2-97b5-ca5070f3116f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:51:42 crc kubenswrapper[4853]: I1122 07:51:42.428013 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Nov 22 07:51:42 crc kubenswrapper[4853]: I1122 07:51:42.461356 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Nov 22 07:51:42 crc kubenswrapper[4853]: I1122 07:51:42.474639 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-0"] Nov 22 07:51:42 crc kubenswrapper[4853]: I1122 07:51:42.493709 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Nov 22 07:51:42 crc kubenswrapper[4853]: E1122 07:51:42.494372 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75849edb-9f0f-49d2-97b5-ca5070f3116f" containerName="aodh-api" Nov 22 07:51:42 crc kubenswrapper[4853]: I1122 07:51:42.494398 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="75849edb-9f0f-49d2-97b5-ca5070f3116f" containerName="aodh-api" Nov 22 07:51:42 crc kubenswrapper[4853]: E1122 07:51:42.494423 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75849edb-9f0f-49d2-97b5-ca5070f3116f" containerName="aodh-evaluator" Nov 22 07:51:42 crc kubenswrapper[4853]: I1122 07:51:42.494432 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="75849edb-9f0f-49d2-97b5-ca5070f3116f" containerName="aodh-evaluator" Nov 22 07:51:42 crc kubenswrapper[4853]: E1122 07:51:42.494466 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75849edb-9f0f-49d2-97b5-ca5070f3116f" containerName="aodh-listener" Nov 22 07:51:42 crc kubenswrapper[4853]: I1122 07:51:42.494476 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="75849edb-9f0f-49d2-97b5-ca5070f3116f" containerName="aodh-listener" Nov 22 07:51:42 crc kubenswrapper[4853]: E1122 07:51:42.494517 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75849edb-9f0f-49d2-97b5-ca5070f3116f" containerName="aodh-notifier" Nov 22 07:51:42 crc kubenswrapper[4853]: I1122 07:51:42.494525 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="75849edb-9f0f-49d2-97b5-ca5070f3116f" containerName="aodh-notifier" Nov 22 07:51:42 crc kubenswrapper[4853]: I1122 07:51:42.494838 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="75849edb-9f0f-49d2-97b5-ca5070f3116f" containerName="aodh-notifier" Nov 22 07:51:42 crc kubenswrapper[4853]: I1122 07:51:42.494883 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="75849edb-9f0f-49d2-97b5-ca5070f3116f" 
containerName="aodh-listener" Nov 22 07:51:42 crc kubenswrapper[4853]: I1122 07:51:42.494895 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="75849edb-9f0f-49d2-97b5-ca5070f3116f" containerName="aodh-api" Nov 22 07:51:42 crc kubenswrapper[4853]: I1122 07:51:42.494919 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="75849edb-9f0f-49d2-97b5-ca5070f3116f" containerName="aodh-evaluator" Nov 22 07:51:42 crc kubenswrapper[4853]: I1122 07:51:42.497934 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Nov 22 07:51:42 crc kubenswrapper[4853]: I1122 07:51:42.500379 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Nov 22 07:51:42 crc kubenswrapper[4853]: I1122 07:51:42.502930 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-internal-svc" Nov 22 07:51:42 crc kubenswrapper[4853]: I1122 07:51:42.502970 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-public-svc" Nov 22 07:51:42 crc kubenswrapper[4853]: I1122 07:51:42.502942 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Nov 22 07:51:42 crc kubenswrapper[4853]: I1122 07:51:42.503165 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-jm7rg" Nov 22 07:51:42 crc kubenswrapper[4853]: I1122 07:51:42.522055 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Nov 22 07:51:42 crc kubenswrapper[4853]: I1122 07:51:42.572257 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7cd6aaf-637c-4a17-b1ff-fe51acdd2641-public-tls-certs\") pod \"aodh-0\" (UID: \"c7cd6aaf-637c-4a17-b1ff-fe51acdd2641\") " pod="openstack/aodh-0" Nov 22 07:51:42 crc kubenswrapper[4853]: I1122 07:51:42.572428 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7cd6aaf-637c-4a17-b1ff-fe51acdd2641-internal-tls-certs\") pod \"aodh-0\" (UID: \"c7cd6aaf-637c-4a17-b1ff-fe51acdd2641\") " pod="openstack/aodh-0" Nov 22 07:51:42 crc kubenswrapper[4853]: I1122 07:51:42.572541 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c7cd6aaf-637c-4a17-b1ff-fe51acdd2641-scripts\") pod \"aodh-0\" (UID: \"c7cd6aaf-637c-4a17-b1ff-fe51acdd2641\") " pod="openstack/aodh-0" Nov 22 07:51:42 crc kubenswrapper[4853]: I1122 07:51:42.572562 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hwd8b\" (UniqueName: \"kubernetes.io/projected/c7cd6aaf-637c-4a17-b1ff-fe51acdd2641-kube-api-access-hwd8b\") pod \"aodh-0\" (UID: \"c7cd6aaf-637c-4a17-b1ff-fe51acdd2641\") " pod="openstack/aodh-0" Nov 22 07:51:42 crc kubenswrapper[4853]: I1122 07:51:42.572608 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7cd6aaf-637c-4a17-b1ff-fe51acdd2641-combined-ca-bundle\") pod \"aodh-0\" (UID: \"c7cd6aaf-637c-4a17-b1ff-fe51acdd2641\") " pod="openstack/aodh-0" Nov 22 07:51:42 crc kubenswrapper[4853]: I1122 07:51:42.572872 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/c7cd6aaf-637c-4a17-b1ff-fe51acdd2641-config-data\") pod \"aodh-0\" (UID: \"c7cd6aaf-637c-4a17-b1ff-fe51acdd2641\") " pod="openstack/aodh-0" Nov 22 07:51:42 crc kubenswrapper[4853]: I1122 07:51:42.674786 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c7cd6aaf-637c-4a17-b1ff-fe51acdd2641-scripts\") pod \"aodh-0\" (UID: \"c7cd6aaf-637c-4a17-b1ff-fe51acdd2641\") " pod="openstack/aodh-0" Nov 22 07:51:42 crc kubenswrapper[4853]: I1122 07:51:42.674846 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hwd8b\" (UniqueName: \"kubernetes.io/projected/c7cd6aaf-637c-4a17-b1ff-fe51acdd2641-kube-api-access-hwd8b\") pod \"aodh-0\" (UID: \"c7cd6aaf-637c-4a17-b1ff-fe51acdd2641\") " pod="openstack/aodh-0" Nov 22 07:51:42 crc kubenswrapper[4853]: I1122 07:51:42.674893 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7cd6aaf-637c-4a17-b1ff-fe51acdd2641-combined-ca-bundle\") pod \"aodh-0\" (UID: \"c7cd6aaf-637c-4a17-b1ff-fe51acdd2641\") " pod="openstack/aodh-0" Nov 22 07:51:42 crc kubenswrapper[4853]: I1122 07:51:42.674974 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7cd6aaf-637c-4a17-b1ff-fe51acdd2641-config-data\") pod \"aodh-0\" (UID: \"c7cd6aaf-637c-4a17-b1ff-fe51acdd2641\") " pod="openstack/aodh-0" Nov 22 07:51:42 crc kubenswrapper[4853]: I1122 07:51:42.674995 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7cd6aaf-637c-4a17-b1ff-fe51acdd2641-public-tls-certs\") pod \"aodh-0\" (UID: \"c7cd6aaf-637c-4a17-b1ff-fe51acdd2641\") " pod="openstack/aodh-0" Nov 22 07:51:42 crc kubenswrapper[4853]: I1122 07:51:42.675076 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7cd6aaf-637c-4a17-b1ff-fe51acdd2641-internal-tls-certs\") pod \"aodh-0\" (UID: \"c7cd6aaf-637c-4a17-b1ff-fe51acdd2641\") " pod="openstack/aodh-0" Nov 22 07:51:42 crc kubenswrapper[4853]: I1122 07:51:42.680869 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7cd6aaf-637c-4a17-b1ff-fe51acdd2641-combined-ca-bundle\") pod \"aodh-0\" (UID: \"c7cd6aaf-637c-4a17-b1ff-fe51acdd2641\") " pod="openstack/aodh-0" Nov 22 07:51:42 crc kubenswrapper[4853]: I1122 07:51:42.681103 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c7cd6aaf-637c-4a17-b1ff-fe51acdd2641-scripts\") pod \"aodh-0\" (UID: \"c7cd6aaf-637c-4a17-b1ff-fe51acdd2641\") " pod="openstack/aodh-0" Nov 22 07:51:42 crc kubenswrapper[4853]: I1122 07:51:42.682043 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7cd6aaf-637c-4a17-b1ff-fe51acdd2641-internal-tls-certs\") pod \"aodh-0\" (UID: \"c7cd6aaf-637c-4a17-b1ff-fe51acdd2641\") " pod="openstack/aodh-0" Nov 22 07:51:42 crc kubenswrapper[4853]: I1122 07:51:42.682846 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7cd6aaf-637c-4a17-b1ff-fe51acdd2641-config-data\") pod \"aodh-0\" (UID: 
\"c7cd6aaf-637c-4a17-b1ff-fe51acdd2641\") " pod="openstack/aodh-0" Nov 22 07:51:42 crc kubenswrapper[4853]: I1122 07:51:42.683582 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7cd6aaf-637c-4a17-b1ff-fe51acdd2641-public-tls-certs\") pod \"aodh-0\" (UID: \"c7cd6aaf-637c-4a17-b1ff-fe51acdd2641\") " pod="openstack/aodh-0" Nov 22 07:51:42 crc kubenswrapper[4853]: I1122 07:51:42.695415 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hwd8b\" (UniqueName: \"kubernetes.io/projected/c7cd6aaf-637c-4a17-b1ff-fe51acdd2641-kube-api-access-hwd8b\") pod \"aodh-0\" (UID: \"c7cd6aaf-637c-4a17-b1ff-fe51acdd2641\") " pod="openstack/aodh-0" Nov 22 07:51:42 crc kubenswrapper[4853]: I1122 07:51:42.827302 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Nov 22 07:51:43 crc kubenswrapper[4853]: I1122 07:51:43.427006 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Nov 22 07:51:43 crc kubenswrapper[4853]: I1122 07:51:43.447460 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"c7cd6aaf-637c-4a17-b1ff-fe51acdd2641","Type":"ContainerStarted","Data":"e5afdd11d36a1121a4de580b9e9dc191d1f0e290957fb13646fd8be338abba19"} Nov 22 07:51:43 crc kubenswrapper[4853]: I1122 07:51:43.776708 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="75849edb-9f0f-49d2-97b5-ca5070f3116f" path="/var/lib/kubelet/pods/75849edb-9f0f-49d2-97b5-ca5070f3116f/volumes" Nov 22 07:51:44 crc kubenswrapper[4853]: I1122 07:51:44.461336 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"c7cd6aaf-637c-4a17-b1ff-fe51acdd2641","Type":"ContainerStarted","Data":"73e660bb306fc1c79ba24154aac45e062fa34e65e8ff6fe2c6d7d8f494f7ecaf"} Nov 22 07:51:45 crc kubenswrapper[4853]: I1122 07:51:45.478785 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"c7cd6aaf-637c-4a17-b1ff-fe51acdd2641","Type":"ContainerStarted","Data":"119076ac656be12ef9cbd92c02245e92909b2fd32c67e9bb318a59e7449657e3"} Nov 22 07:51:45 crc kubenswrapper[4853]: I1122 07:51:45.766445 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Nov 22 07:51:46 crc kubenswrapper[4853]: I1122 07:51:46.494413 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"c7cd6aaf-637c-4a17-b1ff-fe51acdd2641","Type":"ContainerStarted","Data":"1c25e8cabcdd6fd992c03e3b1b652252a0ce167545fa9b8be201b4fc99f726dd"} Nov 22 07:51:46 crc kubenswrapper[4853]: I1122 07:51:46.494832 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"c7cd6aaf-637c-4a17-b1ff-fe51acdd2641","Type":"ContainerStarted","Data":"00a8ee6e0ebba9439c54d670b6ee4fdd78d818cd734291e2bc74b1d5f9e1919a"} Nov 22 07:51:46 crc kubenswrapper[4853]: I1122 07:51:46.533894 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=1.810320376 podStartE2EDuration="4.533863715s" podCreationTimestamp="2025-11-22 07:51:42 +0000 UTC" firstStartedPulling="2025-11-22 07:51:43.428245816 +0000 UTC m=+2502.268868442" lastFinishedPulling="2025-11-22 07:51:46.151789155 +0000 UTC m=+2504.992411781" observedRunningTime="2025-11-22 07:51:46.516925901 +0000 UTC m=+2505.357548537" watchObservedRunningTime="2025-11-22 07:51:46.533863715 +0000 UTC m=+2505.374486341" Nov 22 
Nov 22 07:51:46 crc kubenswrapper[4853]: I1122 07:51:46.749005 4853 scope.go:117] "RemoveContainer" containerID="f1c0ac83a6a857bb9ac64bb048075d436003939992592a9a9b2d45101ef2a2de" Nov 22 07:51:46 crc kubenswrapper[4853]: E1122 07:51:46.749608 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" Nov 22 07:51:57 crc kubenswrapper[4853]: I1122 07:51:57.747501 4853 scope.go:117] "RemoveContainer" containerID="f1c0ac83a6a857bb9ac64bb048075d436003939992592a9a9b2d45101ef2a2de" Nov 22 07:51:57 crc kubenswrapper[4853]: E1122 07:51:57.748402 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" Nov 22 07:51:59 crc kubenswrapper[4853]: I1122 07:51:59.998949 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-sync-7xksh"] Nov 22 07:52:00 crc kubenswrapper[4853]: I1122 07:52:00.010460 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-sync-7xksh"] Nov 22 07:52:00 crc kubenswrapper[4853]: I1122 07:52:00.106931 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-sync-sbcxc"]
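The pair of "RemoveContainer" / "Error syncing pod, skipping" entries for machine-config-daemon repeats throughout the rest of this capture: the pod is in CrashLoopBackOff and the kubelet refuses to restart it until the back-off window expires. The wait roughly doubles per failed restart and is capped, which is where the fixed "back-off 5m0s" text comes from. A toy model of that schedule, not kubelet source; the 10s seed is the kubelet's usual default:

    // Toy model of the restart back-off behind "back-off 5m0s restarting failed container".
    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        const max = 5 * time.Minute
        delay := 10 * time.Second // assumed initial back-off
        for crash := 1; crash <= 8; crash++ {
            fmt.Printf("crash %d: next restart attempt in %v\n", crash, delay)
            delay *= 2
            if delay > max {
                delay = max // saturates at 5m0s, as in the log
            }
        }
    }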
Nov 22 07:52:00 crc kubenswrapper[4853]: I1122 07:52:00.108908 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-sbcxc" Nov 22 07:52:00 crc kubenswrapper[4853]: I1122 07:52:00.118714 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-sbcxc"] Nov 22 07:52:00 crc kubenswrapper[4853]: I1122 07:52:00.160101 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mb9kr\" (UniqueName: \"kubernetes.io/projected/713a48af-8f99-42ce-ba64-25dd0645ef66-kube-api-access-mb9kr\") pod \"heat-db-sync-sbcxc\" (UID: \"713a48af-8f99-42ce-ba64-25dd0645ef66\") " pod="openstack/heat-db-sync-sbcxc" Nov 22 07:52:00 crc kubenswrapper[4853]: I1122 07:52:00.160251 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/713a48af-8f99-42ce-ba64-25dd0645ef66-config-data\") pod \"heat-db-sync-sbcxc\" (UID: \"713a48af-8f99-42ce-ba64-25dd0645ef66\") " pod="openstack/heat-db-sync-sbcxc" Nov 22 07:52:00 crc kubenswrapper[4853]: I1122 07:52:00.160316 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/713a48af-8f99-42ce-ba64-25dd0645ef66-combined-ca-bundle\") pod \"heat-db-sync-sbcxc\" (UID: \"713a48af-8f99-42ce-ba64-25dd0645ef66\") " pod="openstack/heat-db-sync-sbcxc" Nov 22 07:52:00 crc kubenswrapper[4853]: I1122 07:52:00.263464 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mb9kr\" (UniqueName: \"kubernetes.io/projected/713a48af-8f99-42ce-ba64-25dd0645ef66-kube-api-access-mb9kr\") pod \"heat-db-sync-sbcxc\" (UID: \"713a48af-8f99-42ce-ba64-25dd0645ef66\") " pod="openstack/heat-db-sync-sbcxc" Nov 22 07:52:00 crc kubenswrapper[4853]: I1122 07:52:00.263594 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/713a48af-8f99-42ce-ba64-25dd0645ef66-config-data\") pod \"heat-db-sync-sbcxc\" (UID: \"713a48af-8f99-42ce-ba64-25dd0645ef66\") " pod="openstack/heat-db-sync-sbcxc" Nov 22 07:52:00 crc kubenswrapper[4853]: I1122 07:52:00.263652 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/713a48af-8f99-42ce-ba64-25dd0645ef66-combined-ca-bundle\") pod \"heat-db-sync-sbcxc\" (UID: \"713a48af-8f99-42ce-ba64-25dd0645ef66\") " pod="openstack/heat-db-sync-sbcxc" Nov 22 07:52:00 crc kubenswrapper[4853]: I1122 07:52:00.278170 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/713a48af-8f99-42ce-ba64-25dd0645ef66-combined-ca-bundle\") pod \"heat-db-sync-sbcxc\" (UID: \"713a48af-8f99-42ce-ba64-25dd0645ef66\") " pod="openstack/heat-db-sync-sbcxc" Nov 22 07:52:00 crc kubenswrapper[4853]: I1122 07:52:00.278506 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/713a48af-8f99-42ce-ba64-25dd0645ef66-config-data\") pod \"heat-db-sync-sbcxc\" (UID: \"713a48af-8f99-42ce-ba64-25dd0645ef66\") " pod="openstack/heat-db-sync-sbcxc" Nov 22 07:52:00 crc kubenswrapper[4853]: I1122 07:52:00.284487 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mb9kr\" (UniqueName: \"kubernetes.io/projected/713a48af-8f99-42ce-ba64-25dd0645ef66-kube-api-access-mb9kr\") pod \"heat-db-sync-sbcxc\" (UID: 
\"713a48af-8f99-42ce-ba64-25dd0645ef66\") " pod="openstack/heat-db-sync-sbcxc" Nov 22 07:52:00 crc kubenswrapper[4853]: I1122 07:52:00.474523 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-sbcxc" Nov 22 07:52:01 crc kubenswrapper[4853]: W1122 07:52:01.010094 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod713a48af_8f99_42ce_ba64_25dd0645ef66.slice/crio-3c2a6ab62ddc2dd17a2d0feda0f02a91b8ebaf232719c2c86cc17b9960a98ea2 WatchSource:0}: Error finding container 3c2a6ab62ddc2dd17a2d0feda0f02a91b8ebaf232719c2c86cc17b9960a98ea2: Status 404 returned error can't find the container with id 3c2a6ab62ddc2dd17a2d0feda0f02a91b8ebaf232719c2c86cc17b9960a98ea2 Nov 22 07:52:01 crc kubenswrapper[4853]: I1122 07:52:01.011730 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-sbcxc"] Nov 22 07:52:01 crc kubenswrapper[4853]: I1122 07:52:01.683248 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-sbcxc" event={"ID":"713a48af-8f99-42ce-ba64-25dd0645ef66","Type":"ContainerStarted","Data":"3c2a6ab62ddc2dd17a2d0feda0f02a91b8ebaf232719c2c86cc17b9960a98ea2"} Nov 22 07:52:01 crc kubenswrapper[4853]: I1122 07:52:01.766867 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5a08a523-61a0-4155-b389-0491bcd97e84" path="/var/lib/kubelet/pods/5a08a523-61a0-4155-b389-0491bcd97e84/volumes" Nov 22 07:52:02 crc kubenswrapper[4853]: I1122 07:52:02.393377 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 22 07:52:02 crc kubenswrapper[4853]: I1122 07:52:02.395024 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6d624d32-29f3-4311-a64d-add96283eec4" containerName="ceilometer-central-agent" containerID="cri-o://51b07feb1912c00397d0b0b933bea13c59d7df4f982750c009afd5d87dd48efd" gracePeriod=30 Nov 22 07:52:02 crc kubenswrapper[4853]: I1122 07:52:02.395191 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6d624d32-29f3-4311-a64d-add96283eec4" containerName="sg-core" containerID="cri-o://1e0c7e86949222aee00d2317241917ea59ee264e69098bb0e71061cb922c80cd" gracePeriod=30 Nov 22 07:52:02 crc kubenswrapper[4853]: I1122 07:52:02.395212 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6d624d32-29f3-4311-a64d-add96283eec4" containerName="proxy-httpd" containerID="cri-o://f2926127534cf5ee7ad404d7a4c87ed6f698da9cd96dd324c23c27e43c07f981" gracePeriod=30 Nov 22 07:52:02 crc kubenswrapper[4853]: I1122 07:52:02.395249 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6d624d32-29f3-4311-a64d-add96283eec4" containerName="ceilometer-notification-agent" containerID="cri-o://52f61c84b58c81fdc6b48371a52010220abe8fb8eaa26877e8f3c3319c50a635" gracePeriod=30 Nov 22 07:52:03 crc kubenswrapper[4853]: I1122 07:52:03.738591 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 22 07:52:03 crc kubenswrapper[4853]: I1122 07:52:03.789417 4853 generic.go:334] "Generic (PLEG): container finished" podID="6d624d32-29f3-4311-a64d-add96283eec4" containerID="f2926127534cf5ee7ad404d7a4c87ed6f698da9cd96dd324c23c27e43c07f981" exitCode=0 Nov 22 07:52:03 crc kubenswrapper[4853]: I1122 07:52:03.789465 4853 generic.go:334] "Generic 
Nov 22 07:52:03 crc kubenswrapper[4853]: I1122 07:52:03.789465 4853 generic.go:334] "Generic (PLEG): container finished" podID="6d624d32-29f3-4311-a64d-add96283eec4" containerID="1e0c7e86949222aee00d2317241917ea59ee264e69098bb0e71061cb922c80cd" exitCode=2 Nov 22 07:52:03 crc kubenswrapper[4853]: I1122 07:52:03.789478 4853 generic.go:334] "Generic (PLEG): container finished" podID="6d624d32-29f3-4311-a64d-add96283eec4" containerID="51b07feb1912c00397d0b0b933bea13c59d7df4f982750c009afd5d87dd48efd" exitCode=0 Nov 22 07:52:03 crc kubenswrapper[4853]: I1122 07:52:03.789505 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6d624d32-29f3-4311-a64d-add96283eec4","Type":"ContainerDied","Data":"f2926127534cf5ee7ad404d7a4c87ed6f698da9cd96dd324c23c27e43c07f981"} Nov 22 07:52:03 crc kubenswrapper[4853]: I1122 07:52:03.789536 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6d624d32-29f3-4311-a64d-add96283eec4","Type":"ContainerDied","Data":"1e0c7e86949222aee00d2317241917ea59ee264e69098bb0e71061cb922c80cd"} Nov 22 07:52:03 crc kubenswrapper[4853]: I1122 07:52:03.789546 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6d624d32-29f3-4311-a64d-add96283eec4","Type":"ContainerDied","Data":"51b07feb1912c00397d0b0b933bea13c59d7df4f982750c009afd5d87dd48efd"} Nov 22 07:52:04 crc kubenswrapper[4853]: I1122 07:52:04.729284 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 22 07:52:07 crc kubenswrapper[4853]: I1122 07:52:07.056282 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 22 07:52:07 crc kubenswrapper[4853]: I1122 07:52:07.191784 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/6d624d32-29f3-4311-a64d-add96283eec4-ceilometer-tls-certs\") pod \"6d624d32-29f3-4311-a64d-add96283eec4\" (UID: \"6d624d32-29f3-4311-a64d-add96283eec4\") " Nov 22 07:52:07 crc kubenswrapper[4853]: I1122 07:52:07.192333 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6d624d32-29f3-4311-a64d-add96283eec4-run-httpd\") pod \"6d624d32-29f3-4311-a64d-add96283eec4\" (UID: \"6d624d32-29f3-4311-a64d-add96283eec4\") " Nov 22 07:52:07 crc kubenswrapper[4853]: I1122 07:52:07.192488 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6d624d32-29f3-4311-a64d-add96283eec4-scripts\") pod \"6d624d32-29f3-4311-a64d-add96283eec4\" (UID: \"6d624d32-29f3-4311-a64d-add96283eec4\") " Nov 22 07:52:07 crc kubenswrapper[4853]: I1122 07:52:07.192535 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d624d32-29f3-4311-a64d-add96283eec4-config-data\") pod \"6d624d32-29f3-4311-a64d-add96283eec4\" (UID: \"6d624d32-29f3-4311-a64d-add96283eec4\") " Nov 22 07:52:07 crc kubenswrapper[4853]: I1122 07:52:07.192605 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d624d32-29f3-4311-a64d-add96283eec4-combined-ca-bundle\") pod \"6d624d32-29f3-4311-a64d-add96283eec4\" (UID: \"6d624d32-29f3-4311-a64d-add96283eec4\") " Nov 22 07:52:07 crc kubenswrapper[4853]: I1122 07:52:07.192643 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-5lbfx\" (UniqueName: \"kubernetes.io/projected/6d624d32-29f3-4311-a64d-add96283eec4-kube-api-access-5lbfx\") pod \"6d624d32-29f3-4311-a64d-add96283eec4\" (UID: \"6d624d32-29f3-4311-a64d-add96283eec4\") " Nov 22 07:52:07 crc kubenswrapper[4853]: I1122 07:52:07.192722 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6d624d32-29f3-4311-a64d-add96283eec4-log-httpd\") pod \"6d624d32-29f3-4311-a64d-add96283eec4\" (UID: \"6d624d32-29f3-4311-a64d-add96283eec4\") " Nov 22 07:52:07 crc kubenswrapper[4853]: I1122 07:52:07.192892 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6d624d32-29f3-4311-a64d-add96283eec4-sg-core-conf-yaml\") pod \"6d624d32-29f3-4311-a64d-add96283eec4\" (UID: \"6d624d32-29f3-4311-a64d-add96283eec4\") " Nov 22 07:52:07 crc kubenswrapper[4853]: I1122 07:52:07.192925 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6d624d32-29f3-4311-a64d-add96283eec4-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "6d624d32-29f3-4311-a64d-add96283eec4" (UID: "6d624d32-29f3-4311-a64d-add96283eec4"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:52:07 crc kubenswrapper[4853]: I1122 07:52:07.194353 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6d624d32-29f3-4311-a64d-add96283eec4-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "6d624d32-29f3-4311-a64d-add96283eec4" (UID: "6d624d32-29f3-4311-a64d-add96283eec4"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:52:07 crc kubenswrapper[4853]: I1122 07:52:07.195219 4853 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6d624d32-29f3-4311-a64d-add96283eec4-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 22 07:52:07 crc kubenswrapper[4853]: I1122 07:52:07.195356 4853 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6d624d32-29f3-4311-a64d-add96283eec4-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 22 07:52:07 crc kubenswrapper[4853]: I1122 07:52:07.199655 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d624d32-29f3-4311-a64d-add96283eec4-kube-api-access-5lbfx" (OuterVolumeSpecName: "kube-api-access-5lbfx") pod "6d624d32-29f3-4311-a64d-add96283eec4" (UID: "6d624d32-29f3-4311-a64d-add96283eec4"). InnerVolumeSpecName "kube-api-access-5lbfx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:52:07 crc kubenswrapper[4853]: I1122 07:52:07.199959 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d624d32-29f3-4311-a64d-add96283eec4-scripts" (OuterVolumeSpecName: "scripts") pod "6d624d32-29f3-4311-a64d-add96283eec4" (UID: "6d624d32-29f3-4311-a64d-add96283eec4"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:52:07 crc kubenswrapper[4853]: I1122 07:52:07.203317 4853 generic.go:334] "Generic (PLEG): container finished" podID="6d624d32-29f3-4311-a64d-add96283eec4" containerID="52f61c84b58c81fdc6b48371a52010220abe8fb8eaa26877e8f3c3319c50a635" exitCode=0 Nov 22 07:52:07 crc kubenswrapper[4853]: I1122 07:52:07.203433 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6d624d32-29f3-4311-a64d-add96283eec4","Type":"ContainerDied","Data":"52f61c84b58c81fdc6b48371a52010220abe8fb8eaa26877e8f3c3319c50a635"} Nov 22 07:52:07 crc kubenswrapper[4853]: I1122 07:52:07.203551 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6d624d32-29f3-4311-a64d-add96283eec4","Type":"ContainerDied","Data":"c4201417f548b60906a416199a6647bc1b55ff042856b7a4a698d7492a104ef3"} Nov 22 07:52:07 crc kubenswrapper[4853]: I1122 07:52:07.203655 4853 scope.go:117] "RemoveContainer" containerID="f2926127534cf5ee7ad404d7a4c87ed6f698da9cd96dd324c23c27e43c07f981" Nov 22 07:52:07 crc kubenswrapper[4853]: I1122 07:52:07.203940 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 22 07:52:07 crc kubenswrapper[4853]: I1122 07:52:07.258857 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d624d32-29f3-4311-a64d-add96283eec4-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "6d624d32-29f3-4311-a64d-add96283eec4" (UID: "6d624d32-29f3-4311-a64d-add96283eec4"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:52:07 crc kubenswrapper[4853]: I1122 07:52:07.302129 4853 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6d624d32-29f3-4311-a64d-add96283eec4-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 07:52:07 crc kubenswrapper[4853]: I1122 07:52:07.302173 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5lbfx\" (UniqueName: \"kubernetes.io/projected/6d624d32-29f3-4311-a64d-add96283eec4-kube-api-access-5lbfx\") on node \"crc\" DevicePath \"\"" Nov 22 07:52:07 crc kubenswrapper[4853]: I1122 07:52:07.302186 4853 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6d624d32-29f3-4311-a64d-add96283eec4-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 22 07:52:07 crc kubenswrapper[4853]: I1122 07:52:07.333573 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d624d32-29f3-4311-a64d-add96283eec4-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "6d624d32-29f3-4311-a64d-add96283eec4" (UID: "6d624d32-29f3-4311-a64d-add96283eec4"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:52:07 crc kubenswrapper[4853]: I1122 07:52:07.377025 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d624d32-29f3-4311-a64d-add96283eec4-config-data" (OuterVolumeSpecName: "config-data") pod "6d624d32-29f3-4311-a64d-add96283eec4" (UID: "6d624d32-29f3-4311-a64d-add96283eec4"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:52:07 crc kubenswrapper[4853]: I1122 07:52:07.386594 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d624d32-29f3-4311-a64d-add96283eec4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6d624d32-29f3-4311-a64d-add96283eec4" (UID: "6d624d32-29f3-4311-a64d-add96283eec4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:52:07 crc kubenswrapper[4853]: I1122 07:52:07.405303 4853 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d624d32-29f3-4311-a64d-add96283eec4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:52:07 crc kubenswrapper[4853]: I1122 07:52:07.405344 4853 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/6d624d32-29f3-4311-a64d-add96283eec4-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 22 07:52:07 crc kubenswrapper[4853]: I1122 07:52:07.405357 4853 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d624d32-29f3-4311-a64d-add96283eec4-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 07:52:07 crc kubenswrapper[4853]: I1122 07:52:07.416389 4853 scope.go:117] "RemoveContainer" containerID="1e0c7e86949222aee00d2317241917ea59ee264e69098bb0e71061cb922c80cd" Nov 22 07:52:07 crc kubenswrapper[4853]: I1122 07:52:07.471070 4853 scope.go:117] "RemoveContainer" containerID="52f61c84b58c81fdc6b48371a52010220abe8fb8eaa26877e8f3c3319c50a635" Nov 22 07:52:07 crc kubenswrapper[4853]: I1122 07:52:07.514780 4853 scope.go:117] "RemoveContainer" containerID="51b07feb1912c00397d0b0b933bea13c59d7df4f982750c009afd5d87dd48efd" Nov 22 07:52:07 crc kubenswrapper[4853]: I1122 07:52:07.552898 4853 scope.go:117] "RemoveContainer" containerID="f2926127534cf5ee7ad404d7a4c87ed6f698da9cd96dd324c23c27e43c07f981" Nov 22 07:52:07 crc kubenswrapper[4853]: E1122 07:52:07.553712 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f2926127534cf5ee7ad404d7a4c87ed6f698da9cd96dd324c23c27e43c07f981\": container with ID starting with f2926127534cf5ee7ad404d7a4c87ed6f698da9cd96dd324c23c27e43c07f981 not found: ID does not exist" containerID="f2926127534cf5ee7ad404d7a4c87ed6f698da9cd96dd324c23c27e43c07f981" Nov 22 07:52:07 crc kubenswrapper[4853]: I1122 07:52:07.553789 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f2926127534cf5ee7ad404d7a4c87ed6f698da9cd96dd324c23c27e43c07f981"} err="failed to get container status \"f2926127534cf5ee7ad404d7a4c87ed6f698da9cd96dd324c23c27e43c07f981\": rpc error: code = NotFound desc = could not find container \"f2926127534cf5ee7ad404d7a4c87ed6f698da9cd96dd324c23c27e43c07f981\": container with ID starting with f2926127534cf5ee7ad404d7a4c87ed6f698da9cd96dd324c23c27e43c07f981 not found: ID does not exist" Nov 22 07:52:07 crc kubenswrapper[4853]: I1122 07:52:07.553815 4853 scope.go:117] "RemoveContainer" containerID="1e0c7e86949222aee00d2317241917ea59ee264e69098bb0e71061cb922c80cd" Nov 22 07:52:07 crc kubenswrapper[4853]: E1122 07:52:07.554199 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1e0c7e86949222aee00d2317241917ea59ee264e69098bb0e71061cb922c80cd\": container with ID starting with 
Nov 22 07:52:07 crc kubenswrapper[4853]: E1122 07:52:07.554199 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1e0c7e86949222aee00d2317241917ea59ee264e69098bb0e71061cb922c80cd\": container with ID starting with 1e0c7e86949222aee00d2317241917ea59ee264e69098bb0e71061cb922c80cd not found: ID does not exist" containerID="1e0c7e86949222aee00d2317241917ea59ee264e69098bb0e71061cb922c80cd" Nov 22 07:52:07 crc kubenswrapper[4853]: I1122 07:52:07.554234 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e0c7e86949222aee00d2317241917ea59ee264e69098bb0e71061cb922c80cd"} err="failed to get container status \"1e0c7e86949222aee00d2317241917ea59ee264e69098bb0e71061cb922c80cd\": rpc error: code = NotFound desc = could not find container \"1e0c7e86949222aee00d2317241917ea59ee264e69098bb0e71061cb922c80cd\": container with ID starting with 1e0c7e86949222aee00d2317241917ea59ee264e69098bb0e71061cb922c80cd not found: ID does not exist" Nov 22 07:52:07 crc kubenswrapper[4853]: I1122 07:52:07.554252 4853 scope.go:117] "RemoveContainer" containerID="52f61c84b58c81fdc6b48371a52010220abe8fb8eaa26877e8f3c3319c50a635" Nov 22 07:52:07 crc kubenswrapper[4853]: E1122 07:52:07.554533 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"52f61c84b58c81fdc6b48371a52010220abe8fb8eaa26877e8f3c3319c50a635\": container with ID starting with 52f61c84b58c81fdc6b48371a52010220abe8fb8eaa26877e8f3c3319c50a635 not found: ID does not exist" containerID="52f61c84b58c81fdc6b48371a52010220abe8fb8eaa26877e8f3c3319c50a635" Nov 22 07:52:07 crc kubenswrapper[4853]: I1122 07:52:07.554568 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"52f61c84b58c81fdc6b48371a52010220abe8fb8eaa26877e8f3c3319c50a635"} err="failed to get container status \"52f61c84b58c81fdc6b48371a52010220abe8fb8eaa26877e8f3c3319c50a635\": rpc error: code = NotFound desc = could not find container \"52f61c84b58c81fdc6b48371a52010220abe8fb8eaa26877e8f3c3319c50a635\": container with ID starting with 52f61c84b58c81fdc6b48371a52010220abe8fb8eaa26877e8f3c3319c50a635 not found: ID does not exist" Nov 22 07:52:07 crc kubenswrapper[4853]: I1122 07:52:07.554586 4853 scope.go:117] "RemoveContainer" containerID="51b07feb1912c00397d0b0b933bea13c59d7df4f982750c009afd5d87dd48efd" Nov 22 07:52:07 crc kubenswrapper[4853]: E1122 07:52:07.554806 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"51b07feb1912c00397d0b0b933bea13c59d7df4f982750c009afd5d87dd48efd\": container with ID starting with 51b07feb1912c00397d0b0b933bea13c59d7df4f982750c009afd5d87dd48efd not found: ID does not exist" containerID="51b07feb1912c00397d0b0b933bea13c59d7df4f982750c009afd5d87dd48efd" Nov 22 07:52:07 crc kubenswrapper[4853]: I1122 07:52:07.554828 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"51b07feb1912c00397d0b0b933bea13c59d7df4f982750c009afd5d87dd48efd"} err="failed to get container status \"51b07feb1912c00397d0b0b933bea13c59d7df4f982750c009afd5d87dd48efd\": rpc error: code = NotFound desc = could not find container \"51b07feb1912c00397d0b0b933bea13c59d7df4f982750c009afd5d87dd48efd\": container with ID starting with 51b07feb1912c00397d0b0b933bea13c59d7df4f982750c009afd5d87dd48efd not found: ID does not exist" Nov 22 07:52:07 crc kubenswrapper[4853]: I1122 07:52:07.565823 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 22 07:52:07 crc kubenswrapper[4853]: I1122 07:52:07.578931 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 22 07:52:07 
crc kubenswrapper[4853]: I1122 07:52:07.602416 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 22 07:52:07 crc kubenswrapper[4853]: E1122 07:52:07.603083 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d624d32-29f3-4311-a64d-add96283eec4" containerName="ceilometer-central-agent" Nov 22 07:52:07 crc kubenswrapper[4853]: I1122 07:52:07.603103 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d624d32-29f3-4311-a64d-add96283eec4" containerName="ceilometer-central-agent" Nov 22 07:52:07 crc kubenswrapper[4853]: E1122 07:52:07.603121 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d624d32-29f3-4311-a64d-add96283eec4" containerName="proxy-httpd" Nov 22 07:52:07 crc kubenswrapper[4853]: I1122 07:52:07.603127 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d624d32-29f3-4311-a64d-add96283eec4" containerName="proxy-httpd" Nov 22 07:52:07 crc kubenswrapper[4853]: E1122 07:52:07.603144 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d624d32-29f3-4311-a64d-add96283eec4" containerName="ceilometer-notification-agent" Nov 22 07:52:07 crc kubenswrapper[4853]: I1122 07:52:07.603150 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d624d32-29f3-4311-a64d-add96283eec4" containerName="ceilometer-notification-agent" Nov 22 07:52:07 crc kubenswrapper[4853]: E1122 07:52:07.603336 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d624d32-29f3-4311-a64d-add96283eec4" containerName="sg-core" Nov 22 07:52:07 crc kubenswrapper[4853]: I1122 07:52:07.603345 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d624d32-29f3-4311-a64d-add96283eec4" containerName="sg-core" Nov 22 07:52:07 crc kubenswrapper[4853]: I1122 07:52:07.603575 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d624d32-29f3-4311-a64d-add96283eec4" containerName="ceilometer-notification-agent" Nov 22 07:52:07 crc kubenswrapper[4853]: I1122 07:52:07.603606 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d624d32-29f3-4311-a64d-add96283eec4" containerName="sg-core" Nov 22 07:52:07 crc kubenswrapper[4853]: I1122 07:52:07.603616 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d624d32-29f3-4311-a64d-add96283eec4" containerName="proxy-httpd" Nov 22 07:52:07 crc kubenswrapper[4853]: I1122 07:52:07.603631 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d624d32-29f3-4311-a64d-add96283eec4" containerName="ceilometer-central-agent" Nov 22 07:52:07 crc kubenswrapper[4853]: I1122 07:52:07.606514 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 22 07:52:07 crc kubenswrapper[4853]: I1122 07:52:07.609659 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 22 07:52:07 crc kubenswrapper[4853]: I1122 07:52:07.609849 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 22 07:52:07 crc kubenswrapper[4853]: I1122 07:52:07.612569 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Nov 22 07:52:07 crc kubenswrapper[4853]: I1122 07:52:07.621524 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 22 07:52:07 crc kubenswrapper[4853]: I1122 07:52:07.719227 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58a7dcf9-4712-4ffe-90d1-ea827dc02982-config-data\") pod \"ceilometer-0\" (UID: \"58a7dcf9-4712-4ffe-90d1-ea827dc02982\") " pod="openstack/ceilometer-0" Nov 22 07:52:07 crc kubenswrapper[4853]: I1122 07:52:07.719521 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/58a7dcf9-4712-4ffe-90d1-ea827dc02982-run-httpd\") pod \"ceilometer-0\" (UID: \"58a7dcf9-4712-4ffe-90d1-ea827dc02982\") " pod="openstack/ceilometer-0" Nov 22 07:52:07 crc kubenswrapper[4853]: I1122 07:52:07.719663 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/58a7dcf9-4712-4ffe-90d1-ea827dc02982-scripts\") pod \"ceilometer-0\" (UID: \"58a7dcf9-4712-4ffe-90d1-ea827dc02982\") " pod="openstack/ceilometer-0" Nov 22 07:52:07 crc kubenswrapper[4853]: I1122 07:52:07.719722 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/58a7dcf9-4712-4ffe-90d1-ea827dc02982-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"58a7dcf9-4712-4ffe-90d1-ea827dc02982\") " pod="openstack/ceilometer-0" Nov 22 07:52:07 crc kubenswrapper[4853]: I1122 07:52:07.719810 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bncrh\" (UniqueName: \"kubernetes.io/projected/58a7dcf9-4712-4ffe-90d1-ea827dc02982-kube-api-access-bncrh\") pod \"ceilometer-0\" (UID: \"58a7dcf9-4712-4ffe-90d1-ea827dc02982\") " pod="openstack/ceilometer-0" Nov 22 07:52:07 crc kubenswrapper[4853]: I1122 07:52:07.719914 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58a7dcf9-4712-4ffe-90d1-ea827dc02982-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"58a7dcf9-4712-4ffe-90d1-ea827dc02982\") " pod="openstack/ceilometer-0" Nov 22 07:52:07 crc kubenswrapper[4853]: I1122 07:52:07.720047 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/58a7dcf9-4712-4ffe-90d1-ea827dc02982-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"58a7dcf9-4712-4ffe-90d1-ea827dc02982\") " pod="openstack/ceilometer-0" Nov 22 07:52:07 crc kubenswrapper[4853]: I1122 07:52:07.720258 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/58a7dcf9-4712-4ffe-90d1-ea827dc02982-log-httpd\") pod \"ceilometer-0\" (UID: \"58a7dcf9-4712-4ffe-90d1-ea827dc02982\") " pod="openstack/ceilometer-0" Nov 22 07:52:07 crc kubenswrapper[4853]: I1122 07:52:07.766608 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6d624d32-29f3-4311-a64d-add96283eec4" path="/var/lib/kubelet/pods/6d624d32-29f3-4311-a64d-add96283eec4/volumes" Nov 22 07:52:07 crc kubenswrapper[4853]: I1122 07:52:07.823109 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bncrh\" (UniqueName: \"kubernetes.io/projected/58a7dcf9-4712-4ffe-90d1-ea827dc02982-kube-api-access-bncrh\") pod \"ceilometer-0\" (UID: \"58a7dcf9-4712-4ffe-90d1-ea827dc02982\") " pod="openstack/ceilometer-0" Nov 22 07:52:07 crc kubenswrapper[4853]: I1122 07:52:07.823488 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58a7dcf9-4712-4ffe-90d1-ea827dc02982-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"58a7dcf9-4712-4ffe-90d1-ea827dc02982\") " pod="openstack/ceilometer-0" Nov 22 07:52:07 crc kubenswrapper[4853]: I1122 07:52:07.823559 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/58a7dcf9-4712-4ffe-90d1-ea827dc02982-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"58a7dcf9-4712-4ffe-90d1-ea827dc02982\") " pod="openstack/ceilometer-0" Nov 22 07:52:07 crc kubenswrapper[4853]: I1122 07:52:07.823590 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/58a7dcf9-4712-4ffe-90d1-ea827dc02982-log-httpd\") pod \"ceilometer-0\" (UID: \"58a7dcf9-4712-4ffe-90d1-ea827dc02982\") " pod="openstack/ceilometer-0" Nov 22 07:52:07 crc kubenswrapper[4853]: I1122 07:52:07.823660 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58a7dcf9-4712-4ffe-90d1-ea827dc02982-config-data\") pod \"ceilometer-0\" (UID: \"58a7dcf9-4712-4ffe-90d1-ea827dc02982\") " pod="openstack/ceilometer-0" Nov 22 07:52:07 crc kubenswrapper[4853]: I1122 07:52:07.823824 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/58a7dcf9-4712-4ffe-90d1-ea827dc02982-run-httpd\") pod \"ceilometer-0\" (UID: \"58a7dcf9-4712-4ffe-90d1-ea827dc02982\") " pod="openstack/ceilometer-0" Nov 22 07:52:07 crc kubenswrapper[4853]: I1122 07:52:07.823916 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/58a7dcf9-4712-4ffe-90d1-ea827dc02982-scripts\") pod \"ceilometer-0\" (UID: \"58a7dcf9-4712-4ffe-90d1-ea827dc02982\") " pod="openstack/ceilometer-0" Nov 22 07:52:07 crc kubenswrapper[4853]: I1122 07:52:07.823961 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/58a7dcf9-4712-4ffe-90d1-ea827dc02982-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"58a7dcf9-4712-4ffe-90d1-ea827dc02982\") " pod="openstack/ceilometer-0" Nov 22 07:52:07 crc kubenswrapper[4853]: I1122 07:52:07.825252 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/58a7dcf9-4712-4ffe-90d1-ea827dc02982-log-httpd\") pod \"ceilometer-0\" (UID: 
\"58a7dcf9-4712-4ffe-90d1-ea827dc02982\") " pod="openstack/ceilometer-0" Nov 22 07:52:07 crc kubenswrapper[4853]: I1122 07:52:07.828466 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/58a7dcf9-4712-4ffe-90d1-ea827dc02982-run-httpd\") pod \"ceilometer-0\" (UID: \"58a7dcf9-4712-4ffe-90d1-ea827dc02982\") " pod="openstack/ceilometer-0" Nov 22 07:52:07 crc kubenswrapper[4853]: I1122 07:52:07.830686 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/58a7dcf9-4712-4ffe-90d1-ea827dc02982-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"58a7dcf9-4712-4ffe-90d1-ea827dc02982\") " pod="openstack/ceilometer-0" Nov 22 07:52:07 crc kubenswrapper[4853]: I1122 07:52:07.831803 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58a7dcf9-4712-4ffe-90d1-ea827dc02982-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"58a7dcf9-4712-4ffe-90d1-ea827dc02982\") " pod="openstack/ceilometer-0" Nov 22 07:52:07 crc kubenswrapper[4853]: I1122 07:52:07.832970 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/58a7dcf9-4712-4ffe-90d1-ea827dc02982-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"58a7dcf9-4712-4ffe-90d1-ea827dc02982\") " pod="openstack/ceilometer-0" Nov 22 07:52:07 crc kubenswrapper[4853]: I1122 07:52:07.833382 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58a7dcf9-4712-4ffe-90d1-ea827dc02982-config-data\") pod \"ceilometer-0\" (UID: \"58a7dcf9-4712-4ffe-90d1-ea827dc02982\") " pod="openstack/ceilometer-0" Nov 22 07:52:07 crc kubenswrapper[4853]: I1122 07:52:07.834987 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/58a7dcf9-4712-4ffe-90d1-ea827dc02982-scripts\") pod \"ceilometer-0\" (UID: \"58a7dcf9-4712-4ffe-90d1-ea827dc02982\") " pod="openstack/ceilometer-0" Nov 22 07:52:07 crc kubenswrapper[4853]: I1122 07:52:07.886734 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bncrh\" (UniqueName: \"kubernetes.io/projected/58a7dcf9-4712-4ffe-90d1-ea827dc02982-kube-api-access-bncrh\") pod \"ceilometer-0\" (UID: \"58a7dcf9-4712-4ffe-90d1-ea827dc02982\") " pod="openstack/ceilometer-0" Nov 22 07:52:07 crc kubenswrapper[4853]: I1122 07:52:07.964565 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 22 07:52:08 crc kubenswrapper[4853]: I1122 07:52:08.708579 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 22 07:52:09 crc kubenswrapper[4853]: I1122 07:52:09.237388 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"58a7dcf9-4712-4ffe-90d1-ea827dc02982","Type":"ContainerStarted","Data":"b79183225d1926b0fbe8874cbb825a79a2c1d1af2506af35a402f77be5863e95"} Nov 22 07:52:09 crc kubenswrapper[4853]: I1122 07:52:09.687319 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="2eadd806-7143-46ba-9e49-f19ac0bd52bd" containerName="rabbitmq" containerID="cri-o://9c5bd95c35228d58bf34b8225ce8dbcb5740ed4739cc97152d70ae88d49e62d7" gracePeriod=604795 Nov 22 07:52:09 crc kubenswrapper[4853]: I1122 07:52:09.751909 4853 scope.go:117] "RemoveContainer" containerID="f1c0ac83a6a857bb9ac64bb048075d436003939992592a9a9b2d45101ef2a2de" Nov 22 07:52:09 crc kubenswrapper[4853]: E1122 07:52:09.757143 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" Nov 22 07:52:10 crc kubenswrapper[4853]: I1122 07:52:10.219203 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="d0e9072b-3e2a-4283-a697-8411049c5161" containerName="rabbitmq" containerID="cri-o://0e003a69a0e991d51e41353ef249892b756dc703253c447166b6a6ebeafb41ba" gracePeriod=604795 Nov 22 07:52:17 crc kubenswrapper[4853]: I1122 07:52:17.047424 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-3943-account-create-4vld8"] Nov 22 07:52:17 crc kubenswrapper[4853]: I1122 07:52:17.057870 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-4p4mm"] Nov 22 07:52:17 crc kubenswrapper[4853]: I1122 07:52:17.067924 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-jt4hx"] Nov 22 07:52:17 crc kubenswrapper[4853]: I1122 07:52:17.077935 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-3943-account-create-4vld8"] Nov 22 07:52:17 crc kubenswrapper[4853]: I1122 07:52:17.088357 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-jt4hx"] Nov 22 07:52:17 crc kubenswrapper[4853]: I1122 07:52:17.098795 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-4p4mm"] Nov 22 07:52:17 crc kubenswrapper[4853]: I1122 07:52:17.404440 4853 generic.go:334] "Generic (PLEG): container finished" podID="2eadd806-7143-46ba-9e49-f19ac0bd52bd" containerID="9c5bd95c35228d58bf34b8225ce8dbcb5740ed4739cc97152d70ae88d49e62d7" exitCode=0 Nov 22 07:52:17 crc kubenswrapper[4853]: I1122 07:52:17.404525 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"2eadd806-7143-46ba-9e49-f19ac0bd52bd","Type":"ContainerDied","Data":"9c5bd95c35228d58bf34b8225ce8dbcb5740ed4739cc97152d70ae88d49e62d7"} Nov 22 07:52:17 crc kubenswrapper[4853]: I1122 07:52:17.410248 4853 generic.go:334] "Generic (PLEG): container finished" 
podID="d0e9072b-3e2a-4283-a697-8411049c5161" containerID="0e003a69a0e991d51e41353ef249892b756dc703253c447166b6a6ebeafb41ba" exitCode=0 Nov 22 07:52:17 crc kubenswrapper[4853]: I1122 07:52:17.410306 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"d0e9072b-3e2a-4283-a697-8411049c5161","Type":"ContainerDied","Data":"0e003a69a0e991d51e41353ef249892b756dc703253c447166b6a6ebeafb41ba"} Nov 22 07:52:17 crc kubenswrapper[4853]: I1122 07:52:17.763714 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1b40a89d-79b8-4428-99b7-a0d79520e8b8" path="/var/lib/kubelet/pods/1b40a89d-79b8-4428-99b7-a0d79520e8b8/volumes" Nov 22 07:52:17 crc kubenswrapper[4853]: I1122 07:52:17.764932 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8b2e23ab-b228-4a69-866d-f16a8d51966a" path="/var/lib/kubelet/pods/8b2e23ab-b228-4a69-866d-f16a8d51966a/volumes" Nov 22 07:52:17 crc kubenswrapper[4853]: I1122 07:52:17.765684 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="edece509-f388-43e4-b8e8-c6bce0659954" path="/var/lib/kubelet/pods/edece509-f388-43e4-b8e8-c6bce0659954/volumes" Nov 22 07:52:18 crc kubenswrapper[4853]: I1122 07:52:18.031471 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-caba-account-create-ltq46"] Nov 22 07:52:18 crc kubenswrapper[4853]: I1122 07:52:18.048194 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-caba-account-create-ltq46"] Nov 22 07:52:18 crc kubenswrapper[4853]: I1122 07:52:18.060022 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-7dn2k"] Nov 22 07:52:18 crc kubenswrapper[4853]: I1122 07:52:18.071918 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-create-lgnfr"] Nov 22 07:52:18 crc kubenswrapper[4853]: I1122 07:52:18.082074 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-create-lgnfr"] Nov 22 07:52:18 crc kubenswrapper[4853]: I1122 07:52:18.091503 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-7dn2k"] Nov 22 07:52:19 crc kubenswrapper[4853]: I1122 07:52:19.039653 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-48f1-account-create-vlh5d"] Nov 22 07:52:19 crc kubenswrapper[4853]: I1122 07:52:19.059213 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-80dd-account-create-4fgwn"] Nov 22 07:52:19 crc kubenswrapper[4853]: I1122 07:52:19.076060 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-48f1-account-create-vlh5d"] Nov 22 07:52:19 crc kubenswrapper[4853]: I1122 07:52:19.088885 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-80dd-account-create-4fgwn"] Nov 22 07:52:19 crc kubenswrapper[4853]: I1122 07:52:19.223072 4853 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="2eadd806-7143-46ba-9e49-f19ac0bd52bd" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.129:5671: connect: connection refused" Nov 22 07:52:19 crc kubenswrapper[4853]: I1122 07:52:19.584206 4853 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="d0e9072b-3e2a-4283-a697-8411049c5161" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.130:5671: connect: connection refused" Nov 22 07:52:19 crc kubenswrapper[4853]: I1122 07:52:19.769517 4853 
Nov 22 07:52:19 crc kubenswrapper[4853]: I1122 07:52:19.769517 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0cb597cd-e80d-468d-8d85-ab34391e70c6" path="/var/lib/kubelet/pods/0cb597cd-e80d-468d-8d85-ab34391e70c6/volumes" Nov 22 07:52:19 crc kubenswrapper[4853]: I1122 07:52:19.770656 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="10f50975-476a-4fd0-b6dd-5195dfad3931" path="/var/lib/kubelet/pods/10f50975-476a-4fd0-b6dd-5195dfad3931/volumes" Nov 22 07:52:19 crc kubenswrapper[4853]: I1122 07:52:19.797527 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="976dce54-751c-4418-9fc8-5ae4340d347f" path="/var/lib/kubelet/pods/976dce54-751c-4418-9fc8-5ae4340d347f/volumes" Nov 22 07:52:19 crc kubenswrapper[4853]: I1122 07:52:19.799189 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c662e6b6-1204-4d05-9b6d-b1d0c9afc613" path="/var/lib/kubelet/pods/c662e6b6-1204-4d05-9b6d-b1d0c9afc613/volumes" Nov 22 07:52:19 crc kubenswrapper[4853]: I1122 07:52:19.800007 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cb6698be-a947-4b69-9312-cd3382abefe9" path="/var/lib/kubelet/pods/cb6698be-a947-4b69-9312-cd3382abefe9/volumes" Nov 22 07:52:20 crc kubenswrapper[4853]: I1122 07:52:20.748637 4853 scope.go:117] "RemoveContainer" containerID="f1c0ac83a6a857bb9ac64bb048075d436003939992592a9a9b2d45101ef2a2de" Nov 22 07:52:20 crc kubenswrapper[4853]: E1122 07:52:20.749526 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" Nov 22 07:52:25 crc kubenswrapper[4853]: I1122 07:52:25.442826 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-594cb89c79-tmrtc"] Nov 22 07:52:25 crc kubenswrapper[4853]: I1122 07:52:25.447241 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-594cb89c79-tmrtc" Nov 22 07:52:25 crc kubenswrapper[4853]: I1122 07:52:25.449661 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Nov 22 07:52:25 crc kubenswrapper[4853]: I1122 07:52:25.472648 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-594cb89c79-tmrtc"] Nov 22 07:52:25 crc kubenswrapper[4853]: I1122 07:52:25.639858 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8bns6\" (UniqueName: \"kubernetes.io/projected/726a4e37-efbe-463e-b9a6-5fd93a1f0dc2-kube-api-access-8bns6\") pod \"dnsmasq-dns-594cb89c79-tmrtc\" (UID: \"726a4e37-efbe-463e-b9a6-5fd93a1f0dc2\") " pod="openstack/dnsmasq-dns-594cb89c79-tmrtc" Nov 22 07:52:25 crc kubenswrapper[4853]: I1122 07:52:25.639999 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/726a4e37-efbe-463e-b9a6-5fd93a1f0dc2-openstack-edpm-ipam\") pod \"dnsmasq-dns-594cb89c79-tmrtc\" (UID: \"726a4e37-efbe-463e-b9a6-5fd93a1f0dc2\") " pod="openstack/dnsmasq-dns-594cb89c79-tmrtc" Nov 22 07:52:25 crc kubenswrapper[4853]: I1122 07:52:25.640047 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/726a4e37-efbe-463e-b9a6-5fd93a1f0dc2-ovsdbserver-nb\") pod \"dnsmasq-dns-594cb89c79-tmrtc\" (UID: \"726a4e37-efbe-463e-b9a6-5fd93a1f0dc2\") " pod="openstack/dnsmasq-dns-594cb89c79-tmrtc" Nov 22 07:52:25 crc kubenswrapper[4853]: I1122 07:52:25.640113 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/726a4e37-efbe-463e-b9a6-5fd93a1f0dc2-dns-svc\") pod \"dnsmasq-dns-594cb89c79-tmrtc\" (UID: \"726a4e37-efbe-463e-b9a6-5fd93a1f0dc2\") " pod="openstack/dnsmasq-dns-594cb89c79-tmrtc" Nov 22 07:52:25 crc kubenswrapper[4853]: I1122 07:52:25.640187 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/726a4e37-efbe-463e-b9a6-5fd93a1f0dc2-ovsdbserver-sb\") pod \"dnsmasq-dns-594cb89c79-tmrtc\" (UID: \"726a4e37-efbe-463e-b9a6-5fd93a1f0dc2\") " pod="openstack/dnsmasq-dns-594cb89c79-tmrtc" Nov 22 07:52:25 crc kubenswrapper[4853]: I1122 07:52:25.640253 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/726a4e37-efbe-463e-b9a6-5fd93a1f0dc2-dns-swift-storage-0\") pod \"dnsmasq-dns-594cb89c79-tmrtc\" (UID: \"726a4e37-efbe-463e-b9a6-5fd93a1f0dc2\") " pod="openstack/dnsmasq-dns-594cb89c79-tmrtc" Nov 22 07:52:25 crc kubenswrapper[4853]: I1122 07:52:25.640297 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/726a4e37-efbe-463e-b9a6-5fd93a1f0dc2-config\") pod \"dnsmasq-dns-594cb89c79-tmrtc\" (UID: \"726a4e37-efbe-463e-b9a6-5fd93a1f0dc2\") " pod="openstack/dnsmasq-dns-594cb89c79-tmrtc" Nov 22 07:52:25 crc kubenswrapper[4853]: I1122 07:52:25.743227 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8bns6\" (UniqueName: \"kubernetes.io/projected/726a4e37-efbe-463e-b9a6-5fd93a1f0dc2-kube-api-access-8bns6\") pod 
\"dnsmasq-dns-594cb89c79-tmrtc\" (UID: \"726a4e37-efbe-463e-b9a6-5fd93a1f0dc2\") " pod="openstack/dnsmasq-dns-594cb89c79-tmrtc" Nov 22 07:52:25 crc kubenswrapper[4853]: I1122 07:52:25.743592 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/726a4e37-efbe-463e-b9a6-5fd93a1f0dc2-openstack-edpm-ipam\") pod \"dnsmasq-dns-594cb89c79-tmrtc\" (UID: \"726a4e37-efbe-463e-b9a6-5fd93a1f0dc2\") " pod="openstack/dnsmasq-dns-594cb89c79-tmrtc" Nov 22 07:52:25 crc kubenswrapper[4853]: I1122 07:52:25.743630 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/726a4e37-efbe-463e-b9a6-5fd93a1f0dc2-ovsdbserver-nb\") pod \"dnsmasq-dns-594cb89c79-tmrtc\" (UID: \"726a4e37-efbe-463e-b9a6-5fd93a1f0dc2\") " pod="openstack/dnsmasq-dns-594cb89c79-tmrtc" Nov 22 07:52:25 crc kubenswrapper[4853]: I1122 07:52:25.743670 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/726a4e37-efbe-463e-b9a6-5fd93a1f0dc2-dns-svc\") pod \"dnsmasq-dns-594cb89c79-tmrtc\" (UID: \"726a4e37-efbe-463e-b9a6-5fd93a1f0dc2\") " pod="openstack/dnsmasq-dns-594cb89c79-tmrtc" Nov 22 07:52:25 crc kubenswrapper[4853]: I1122 07:52:25.743733 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/726a4e37-efbe-463e-b9a6-5fd93a1f0dc2-ovsdbserver-sb\") pod \"dnsmasq-dns-594cb89c79-tmrtc\" (UID: \"726a4e37-efbe-463e-b9a6-5fd93a1f0dc2\") " pod="openstack/dnsmasq-dns-594cb89c79-tmrtc" Nov 22 07:52:25 crc kubenswrapper[4853]: I1122 07:52:25.743798 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/726a4e37-efbe-463e-b9a6-5fd93a1f0dc2-dns-swift-storage-0\") pod \"dnsmasq-dns-594cb89c79-tmrtc\" (UID: \"726a4e37-efbe-463e-b9a6-5fd93a1f0dc2\") " pod="openstack/dnsmasq-dns-594cb89c79-tmrtc" Nov 22 07:52:25 crc kubenswrapper[4853]: I1122 07:52:25.743831 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/726a4e37-efbe-463e-b9a6-5fd93a1f0dc2-config\") pod \"dnsmasq-dns-594cb89c79-tmrtc\" (UID: \"726a4e37-efbe-463e-b9a6-5fd93a1f0dc2\") " pod="openstack/dnsmasq-dns-594cb89c79-tmrtc" Nov 22 07:52:25 crc kubenswrapper[4853]: I1122 07:52:25.744828 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/726a4e37-efbe-463e-b9a6-5fd93a1f0dc2-openstack-edpm-ipam\") pod \"dnsmasq-dns-594cb89c79-tmrtc\" (UID: \"726a4e37-efbe-463e-b9a6-5fd93a1f0dc2\") " pod="openstack/dnsmasq-dns-594cb89c79-tmrtc" Nov 22 07:52:25 crc kubenswrapper[4853]: I1122 07:52:25.745059 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/726a4e37-efbe-463e-b9a6-5fd93a1f0dc2-dns-svc\") pod \"dnsmasq-dns-594cb89c79-tmrtc\" (UID: \"726a4e37-efbe-463e-b9a6-5fd93a1f0dc2\") " pod="openstack/dnsmasq-dns-594cb89c79-tmrtc" Nov 22 07:52:25 crc kubenswrapper[4853]: I1122 07:52:25.745071 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/726a4e37-efbe-463e-b9a6-5fd93a1f0dc2-ovsdbserver-nb\") pod \"dnsmasq-dns-594cb89c79-tmrtc\" (UID: \"726a4e37-efbe-463e-b9a6-5fd93a1f0dc2\") " 
pod="openstack/dnsmasq-dns-594cb89c79-tmrtc" Nov 22 07:52:25 crc kubenswrapper[4853]: I1122 07:52:25.745125 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/726a4e37-efbe-463e-b9a6-5fd93a1f0dc2-config\") pod \"dnsmasq-dns-594cb89c79-tmrtc\" (UID: \"726a4e37-efbe-463e-b9a6-5fd93a1f0dc2\") " pod="openstack/dnsmasq-dns-594cb89c79-tmrtc" Nov 22 07:52:25 crc kubenswrapper[4853]: I1122 07:52:25.745208 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/726a4e37-efbe-463e-b9a6-5fd93a1f0dc2-dns-swift-storage-0\") pod \"dnsmasq-dns-594cb89c79-tmrtc\" (UID: \"726a4e37-efbe-463e-b9a6-5fd93a1f0dc2\") " pod="openstack/dnsmasq-dns-594cb89c79-tmrtc" Nov 22 07:52:25 crc kubenswrapper[4853]: I1122 07:52:25.745721 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/726a4e37-efbe-463e-b9a6-5fd93a1f0dc2-ovsdbserver-sb\") pod \"dnsmasq-dns-594cb89c79-tmrtc\" (UID: \"726a4e37-efbe-463e-b9a6-5fd93a1f0dc2\") " pod="openstack/dnsmasq-dns-594cb89c79-tmrtc" Nov 22 07:52:25 crc kubenswrapper[4853]: I1122 07:52:25.766809 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8bns6\" (UniqueName: \"kubernetes.io/projected/726a4e37-efbe-463e-b9a6-5fd93a1f0dc2-kube-api-access-8bns6\") pod \"dnsmasq-dns-594cb89c79-tmrtc\" (UID: \"726a4e37-efbe-463e-b9a6-5fd93a1f0dc2\") " pod="openstack/dnsmasq-dns-594cb89c79-tmrtc" Nov 22 07:52:25 crc kubenswrapper[4853]: I1122 07:52:25.770971 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-594cb89c79-tmrtc" Nov 22 07:52:31 crc kubenswrapper[4853]: I1122 07:52:31.748679 4853 scope.go:117] "RemoveContainer" containerID="f1c0ac83a6a857bb9ac64bb048075d436003939992592a9a9b2d45101ef2a2de" Nov 22 07:52:31 crc kubenswrapper[4853]: E1122 07:52:31.749514 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" Nov 22 07:52:34 crc kubenswrapper[4853]: I1122 07:52:34.223455 4853 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="2eadd806-7143-46ba-9e49-f19ac0bd52bd" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.129:5671: i/o timeout" Nov 22 07:52:34 crc kubenswrapper[4853]: E1122 07:52:34.389811 4853 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Nov 22 07:52:34 crc kubenswrapper[4853]: E1122 07:52:34.390304 4853 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Nov 22 07:52:34 crc kubenswrapper[4853]: E1122 07:52:34.390515 4853 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n66h88h57ch5c7h686hbch89hffh64dh65h5cdh5cdhc8h65bh64h644h667h685h5c7h659h5ddh695h645h5ch67bh5ffh5d5h64h666h596hfch664q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bncrh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(58a7dcf9-4712-4ffe-90d1-ea827dc02982): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 22 07:52:34 crc kubenswrapper[4853]: I1122 07:52:34.583196 4853 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="d0e9072b-3e2a-4283-a697-8411049c5161" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.130:5671: i/o timeout" Nov 22 07:52:34 crc kubenswrapper[4853]: E1122 07:52:34.772233 4853 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Nov 22 07:52:34 crc kubenswrapper[4853]: E1122 07:52:34.772296 4853 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Nov 22 07:52:34 crc kubenswrapper[4853]: E1122 07:52:34.772449 4853 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mb9kr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-sbcxc_openstack(713a48af-8f99-42ce-ba64-25dd0645ef66): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 22 07:52:34 crc kubenswrapper[4853]: E1122 07:52:34.785669 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/heat-db-sync-sbcxc" podUID="713a48af-8f99-42ce-ba64-25dd0645ef66" Nov 22 07:52:34 crc kubenswrapper[4853]: I1122 07:52:34.873773 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 22 07:52:34 crc kubenswrapper[4853]: I1122 07:52:34.902709 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 22 07:52:34 crc kubenswrapper[4853]: I1122 07:52:34.977218 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d0e9072b-3e2a-4283-a697-8411049c5161-rabbitmq-confd\") pod \"d0e9072b-3e2a-4283-a697-8411049c5161\" (UID: \"d0e9072b-3e2a-4283-a697-8411049c5161\") " Nov 22 07:52:34 crc kubenswrapper[4853]: I1122 07:52:34.977316 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/d0e9072b-3e2a-4283-a697-8411049c5161-rabbitmq-plugins\") pod \"d0e9072b-3e2a-4283-a697-8411049c5161\" (UID: \"d0e9072b-3e2a-4283-a697-8411049c5161\") " Nov 22 07:52:34 crc kubenswrapper[4853]: I1122 07:52:34.977394 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d0e9072b-3e2a-4283-a697-8411049c5161-config-data\") pod \"d0e9072b-3e2a-4283-a697-8411049c5161\" (UID: \"d0e9072b-3e2a-4283-a697-8411049c5161\") " Nov 22 07:52:34 crc kubenswrapper[4853]: I1122 07:52:34.977445 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d0e9072b-3e2a-4283-a697-8411049c5161-plugins-conf\") pod \"d0e9072b-3e2a-4283-a697-8411049c5161\" (UID: \"d0e9072b-3e2a-4283-a697-8411049c5161\") " Nov 22 07:52:34 crc kubenswrapper[4853]: I1122 07:52:34.977536 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d0e9072b-3e2a-4283-a697-8411049c5161-rabbitmq-erlang-cookie\") pod \"d0e9072b-3e2a-4283-a697-8411049c5161\" (UID: \"d0e9072b-3e2a-4283-a697-8411049c5161\") " Nov 22 07:52:34 crc kubenswrapper[4853]: I1122 07:52:34.977594 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"d0e9072b-3e2a-4283-a697-8411049c5161\" (UID: \"d0e9072b-3e2a-4283-a697-8411049c5161\") " Nov 22 07:52:34 crc kubenswrapper[4853]: I1122 07:52:34.977652 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rrrz9\" (UniqueName: \"kubernetes.io/projected/d0e9072b-3e2a-4283-a697-8411049c5161-kube-api-access-rrrz9\") pod \"d0e9072b-3e2a-4283-a697-8411049c5161\" (UID: \"d0e9072b-3e2a-4283-a697-8411049c5161\") " Nov 22 07:52:34 crc kubenswrapper[4853]: I1122 07:52:34.977703 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/d0e9072b-3e2a-4283-a697-8411049c5161-server-conf\") pod \"d0e9072b-3e2a-4283-a697-8411049c5161\" (UID: \"d0e9072b-3e2a-4283-a697-8411049c5161\") " Nov 22 07:52:34 crc kubenswrapper[4853]: I1122 07:52:34.984089 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d0e9072b-3e2a-4283-a697-8411049c5161-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "d0e9072b-3e2a-4283-a697-8411049c5161" (UID: "d0e9072b-3e2a-4283-a697-8411049c5161"). InnerVolumeSpecName "rabbitmq-plugins". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:52:34 crc kubenswrapper[4853]: I1122 07:52:34.985915 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d0e9072b-3e2a-4283-a697-8411049c5161-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "d0e9072b-3e2a-4283-a697-8411049c5161" (UID: "d0e9072b-3e2a-4283-a697-8411049c5161"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:52:34 crc kubenswrapper[4853]: I1122 07:52:34.986384 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d0e9072b-3e2a-4283-a697-8411049c5161-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "d0e9072b-3e2a-4283-a697-8411049c5161" (UID: "d0e9072b-3e2a-4283-a697-8411049c5161"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:52:34 crc kubenswrapper[4853]: I1122 07:52:34.991008 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/d0e9072b-3e2a-4283-a697-8411049c5161-pod-info\") pod \"d0e9072b-3e2a-4283-a697-8411049c5161\" (UID: \"d0e9072b-3e2a-4283-a697-8411049c5161\") " Nov 22 07:52:34 crc kubenswrapper[4853]: I1122 07:52:34.991079 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d0e9072b-3e2a-4283-a697-8411049c5161-erlang-cookie-secret\") pod \"d0e9072b-3e2a-4283-a697-8411049c5161\" (UID: \"d0e9072b-3e2a-4283-a697-8411049c5161\") " Nov 22 07:52:34 crc kubenswrapper[4853]: I1122 07:52:34.991138 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/d0e9072b-3e2a-4283-a697-8411049c5161-rabbitmq-tls\") pod \"d0e9072b-3e2a-4283-a697-8411049c5161\" (UID: \"d0e9072b-3e2a-4283-a697-8411049c5161\") " Nov 22 07:52:34 crc kubenswrapper[4853]: I1122 07:52:34.993235 4853 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d0e9072b-3e2a-4283-a697-8411049c5161-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Nov 22 07:52:34 crc kubenswrapper[4853]: I1122 07:52:34.993283 4853 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/d0e9072b-3e2a-4283-a697-8411049c5161-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Nov 22 07:52:34 crc kubenswrapper[4853]: I1122 07:52:34.993298 4853 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d0e9072b-3e2a-4283-a697-8411049c5161-plugins-conf\") on node \"crc\" DevicePath \"\"" Nov 22 07:52:34 crc kubenswrapper[4853]: I1122 07:52:34.993944 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d0e9072b-3e2a-4283-a697-8411049c5161-kube-api-access-rrrz9" (OuterVolumeSpecName: "kube-api-access-rrrz9") pod "d0e9072b-3e2a-4283-a697-8411049c5161" (UID: "d0e9072b-3e2a-4283-a697-8411049c5161"). InnerVolumeSpecName "kube-api-access-rrrz9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:52:34 crc kubenswrapper[4853]: I1122 07:52:34.999252 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d0e9072b-3e2a-4283-a697-8411049c5161-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "d0e9072b-3e2a-4283-a697-8411049c5161" (UID: "d0e9072b-3e2a-4283-a697-8411049c5161"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:52:35 crc kubenswrapper[4853]: I1122 07:52:35.017861 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "persistence") pod "d0e9072b-3e2a-4283-a697-8411049c5161" (UID: "d0e9072b-3e2a-4283-a697-8411049c5161"). InnerVolumeSpecName "local-storage01-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 22 07:52:35 crc kubenswrapper[4853]: I1122 07:52:35.023857 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0e9072b-3e2a-4283-a697-8411049c5161-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "d0e9072b-3e2a-4283-a697-8411049c5161" (UID: "d0e9072b-3e2a-4283-a697-8411049c5161"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:52:35 crc kubenswrapper[4853]: I1122 07:52:35.024645 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/d0e9072b-3e2a-4283-a697-8411049c5161-pod-info" (OuterVolumeSpecName: "pod-info") pod "d0e9072b-3e2a-4283-a697-8411049c5161" (UID: "d0e9072b-3e2a-4283-a697-8411049c5161"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Nov 22 07:52:35 crc kubenswrapper[4853]: I1122 07:52:35.095955 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/2eadd806-7143-46ba-9e49-f19ac0bd52bd-rabbitmq-tls\") pod \"2eadd806-7143-46ba-9e49-f19ac0bd52bd\" (UID: \"2eadd806-7143-46ba-9e49-f19ac0bd52bd\") " Nov 22 07:52:35 crc kubenswrapper[4853]: I1122 07:52:35.096178 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2eadd806-7143-46ba-9e49-f19ac0bd52bd-config-data\") pod \"2eadd806-7143-46ba-9e49-f19ac0bd52bd\" (UID: \"2eadd806-7143-46ba-9e49-f19ac0bd52bd\") " Nov 22 07:52:35 crc kubenswrapper[4853]: I1122 07:52:35.096244 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2eadd806-7143-46ba-9e49-f19ac0bd52bd-erlang-cookie-secret\") pod \"2eadd806-7143-46ba-9e49-f19ac0bd52bd\" (UID: \"2eadd806-7143-46ba-9e49-f19ac0bd52bd\") " Nov 22 07:52:35 crc kubenswrapper[4853]: I1122 07:52:35.096273 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2eadd806-7143-46ba-9e49-f19ac0bd52bd-pod-info\") pod \"2eadd806-7143-46ba-9e49-f19ac0bd52bd\" (UID: \"2eadd806-7143-46ba-9e49-f19ac0bd52bd\") " Nov 22 07:52:35 crc kubenswrapper[4853]: I1122 07:52:35.096326 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2eadd806-7143-46ba-9e49-f19ac0bd52bd-server-conf\") pod \"2eadd806-7143-46ba-9e49-f19ac0bd52bd\" (UID: \"2eadd806-7143-46ba-9e49-f19ac0bd52bd\") " Nov 22 
07:52:35 crc kubenswrapper[4853]: I1122 07:52:35.096370 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"2eadd806-7143-46ba-9e49-f19ac0bd52bd\" (UID: \"2eadd806-7143-46ba-9e49-f19ac0bd52bd\") " Nov 22 07:52:35 crc kubenswrapper[4853]: I1122 07:52:35.096397 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2eadd806-7143-46ba-9e49-f19ac0bd52bd-rabbitmq-confd\") pod \"2eadd806-7143-46ba-9e49-f19ac0bd52bd\" (UID: \"2eadd806-7143-46ba-9e49-f19ac0bd52bd\") " Nov 22 07:52:35 crc kubenswrapper[4853]: I1122 07:52:35.096434 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2eadd806-7143-46ba-9e49-f19ac0bd52bd-plugins-conf\") pod \"2eadd806-7143-46ba-9e49-f19ac0bd52bd\" (UID: \"2eadd806-7143-46ba-9e49-f19ac0bd52bd\") " Nov 22 07:52:35 crc kubenswrapper[4853]: I1122 07:52:35.096537 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2eadd806-7143-46ba-9e49-f19ac0bd52bd-rabbitmq-plugins\") pod \"2eadd806-7143-46ba-9e49-f19ac0bd52bd\" (UID: \"2eadd806-7143-46ba-9e49-f19ac0bd52bd\") " Nov 22 07:52:35 crc kubenswrapper[4853]: I1122 07:52:35.096647 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2eadd806-7143-46ba-9e49-f19ac0bd52bd-rabbitmq-erlang-cookie\") pod \"2eadd806-7143-46ba-9e49-f19ac0bd52bd\" (UID: \"2eadd806-7143-46ba-9e49-f19ac0bd52bd\") " Nov 22 07:52:35 crc kubenswrapper[4853]: I1122 07:52:35.096721 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6qmpc\" (UniqueName: \"kubernetes.io/projected/2eadd806-7143-46ba-9e49-f19ac0bd52bd-kube-api-access-6qmpc\") pod \"2eadd806-7143-46ba-9e49-f19ac0bd52bd\" (UID: \"2eadd806-7143-46ba-9e49-f19ac0bd52bd\") " Nov 22 07:52:35 crc kubenswrapper[4853]: I1122 07:52:35.097233 4853 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Nov 22 07:52:35 crc kubenswrapper[4853]: I1122 07:52:35.097252 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rrrz9\" (UniqueName: \"kubernetes.io/projected/d0e9072b-3e2a-4283-a697-8411049c5161-kube-api-access-rrrz9\") on node \"crc\" DevicePath \"\"" Nov 22 07:52:35 crc kubenswrapper[4853]: I1122 07:52:35.097264 4853 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/d0e9072b-3e2a-4283-a697-8411049c5161-pod-info\") on node \"crc\" DevicePath \"\"" Nov 22 07:52:35 crc kubenswrapper[4853]: I1122 07:52:35.097276 4853 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d0e9072b-3e2a-4283-a697-8411049c5161-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Nov 22 07:52:35 crc kubenswrapper[4853]: I1122 07:52:35.097286 4853 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/d0e9072b-3e2a-4283-a697-8411049c5161-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Nov 22 07:52:35 crc kubenswrapper[4853]: I1122 07:52:35.098678 4853 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d0e9072b-3e2a-4283-a697-8411049c5161-server-conf" (OuterVolumeSpecName: "server-conf") pod "d0e9072b-3e2a-4283-a697-8411049c5161" (UID: "d0e9072b-3e2a-4283-a697-8411049c5161"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:52:35 crc kubenswrapper[4853]: I1122 07:52:35.100363 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2eadd806-7143-46ba-9e49-f19ac0bd52bd-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "2eadd806-7143-46ba-9e49-f19ac0bd52bd" (UID: "2eadd806-7143-46ba-9e49-f19ac0bd52bd"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:52:35 crc kubenswrapper[4853]: I1122 07:52:35.100524 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2eadd806-7143-46ba-9e49-f19ac0bd52bd-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "2eadd806-7143-46ba-9e49-f19ac0bd52bd" (UID: "2eadd806-7143-46ba-9e49-f19ac0bd52bd"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:52:35 crc kubenswrapper[4853]: I1122 07:52:35.101271 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2eadd806-7143-46ba-9e49-f19ac0bd52bd-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "2eadd806-7143-46ba-9e49-f19ac0bd52bd" (UID: "2eadd806-7143-46ba-9e49-f19ac0bd52bd"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:52:35 crc kubenswrapper[4853]: I1122 07:52:35.112685 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2eadd806-7143-46ba-9e49-f19ac0bd52bd-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "2eadd806-7143-46ba-9e49-f19ac0bd52bd" (UID: "2eadd806-7143-46ba-9e49-f19ac0bd52bd"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:52:35 crc kubenswrapper[4853]: I1122 07:52:35.113112 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/2eadd806-7143-46ba-9e49-f19ac0bd52bd-pod-info" (OuterVolumeSpecName: "pod-info") pod "2eadd806-7143-46ba-9e49-f19ac0bd52bd" (UID: "2eadd806-7143-46ba-9e49-f19ac0bd52bd"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Nov 22 07:52:35 crc kubenswrapper[4853]: I1122 07:52:35.121309 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "persistence") pod "2eadd806-7143-46ba-9e49-f19ac0bd52bd" (UID: "2eadd806-7143-46ba-9e49-f19ac0bd52bd"). InnerVolumeSpecName "local-storage09-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 22 07:52:35 crc kubenswrapper[4853]: I1122 07:52:35.122999 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2eadd806-7143-46ba-9e49-f19ac0bd52bd-kube-api-access-6qmpc" (OuterVolumeSpecName: "kube-api-access-6qmpc") pod "2eadd806-7143-46ba-9e49-f19ac0bd52bd" (UID: "2eadd806-7143-46ba-9e49-f19ac0bd52bd"). InnerVolumeSpecName "kube-api-access-6qmpc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:52:35 crc kubenswrapper[4853]: I1122 07:52:35.127075 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2eadd806-7143-46ba-9e49-f19ac0bd52bd-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "2eadd806-7143-46ba-9e49-f19ac0bd52bd" (UID: "2eadd806-7143-46ba-9e49-f19ac0bd52bd"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:52:35 crc kubenswrapper[4853]: I1122 07:52:35.164451 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d0e9072b-3e2a-4283-a697-8411049c5161-config-data" (OuterVolumeSpecName: "config-data") pod "d0e9072b-3e2a-4283-a697-8411049c5161" (UID: "d0e9072b-3e2a-4283-a697-8411049c5161"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:52:35 crc kubenswrapper[4853]: I1122 07:52:35.180137 4853 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Nov 22 07:52:35 crc kubenswrapper[4853]: I1122 07:52:35.201118 4853 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2eadd806-7143-46ba-9e49-f19ac0bd52bd-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Nov 22 07:52:35 crc kubenswrapper[4853]: I1122 07:52:35.201165 4853 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2eadd806-7143-46ba-9e49-f19ac0bd52bd-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Nov 22 07:52:35 crc kubenswrapper[4853]: I1122 07:52:35.201178 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6qmpc\" (UniqueName: \"kubernetes.io/projected/2eadd806-7143-46ba-9e49-f19ac0bd52bd-kube-api-access-6qmpc\") on node \"crc\" DevicePath \"\"" Nov 22 07:52:35 crc kubenswrapper[4853]: I1122 07:52:35.201187 4853 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d0e9072b-3e2a-4283-a697-8411049c5161-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 07:52:35 crc kubenswrapper[4853]: I1122 07:52:35.201197 4853 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/2eadd806-7143-46ba-9e49-f19ac0bd52bd-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Nov 22 07:52:35 crc kubenswrapper[4853]: I1122 07:52:35.201205 4853 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2eadd806-7143-46ba-9e49-f19ac0bd52bd-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Nov 22 07:52:35 crc kubenswrapper[4853]: I1122 07:52:35.201212 4853 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2eadd806-7143-46ba-9e49-f19ac0bd52bd-pod-info\") on node \"crc\" DevicePath \"\"" Nov 22 07:52:35 crc kubenswrapper[4853]: I1122 07:52:35.201221 4853 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Nov 22 07:52:35 crc kubenswrapper[4853]: I1122 07:52:35.201251 4853 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " 
Nov 22 07:52:35 crc kubenswrapper[4853]: I1122 07:52:35.201259 4853 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2eadd806-7143-46ba-9e49-f19ac0bd52bd-plugins-conf\") on node \"crc\" DevicePath \"\""
Nov 22 07:52:35 crc kubenswrapper[4853]: I1122 07:52:35.201268 4853 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/d0e9072b-3e2a-4283-a697-8411049c5161-server-conf\") on node \"crc\" DevicePath \"\""
Nov 22 07:52:35 crc kubenswrapper[4853]: I1122 07:52:35.215827 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2eadd806-7143-46ba-9e49-f19ac0bd52bd-config-data" (OuterVolumeSpecName: "config-data") pod "2eadd806-7143-46ba-9e49-f19ac0bd52bd" (UID: "2eadd806-7143-46ba-9e49-f19ac0bd52bd"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 22 07:52:35 crc kubenswrapper[4853]: I1122 07:52:35.244606 4853 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc"
Nov 22 07:52:35 crc kubenswrapper[4853]: I1122 07:52:35.299883 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2eadd806-7143-46ba-9e49-f19ac0bd52bd-server-conf" (OuterVolumeSpecName: "server-conf") pod "2eadd806-7143-46ba-9e49-f19ac0bd52bd" (UID: "2eadd806-7143-46ba-9e49-f19ac0bd52bd"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 22 07:52:35 crc kubenswrapper[4853]: I1122 07:52:35.304376 4853 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2eadd806-7143-46ba-9e49-f19ac0bd52bd-config-data\") on node \"crc\" DevicePath \"\""
Nov 22 07:52:35 crc kubenswrapper[4853]: I1122 07:52:35.304436 4853 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2eadd806-7143-46ba-9e49-f19ac0bd52bd-server-conf\") on node \"crc\" DevicePath \"\""
Nov 22 07:52:35 crc kubenswrapper[4853]: I1122 07:52:35.304448 4853 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\""
Nov 22 07:52:35 crc kubenswrapper[4853]: I1122 07:52:35.310344 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-594cb89c79-tmrtc"]
Nov 22 07:52:35 crc kubenswrapper[4853]: I1122 07:52:35.311084 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d0e9072b-3e2a-4283-a697-8411049c5161-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "d0e9072b-3e2a-4283-a697-8411049c5161" (UID: "d0e9072b-3e2a-4283-a697-8411049c5161"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 22 07:52:35 crc kubenswrapper[4853]: I1122 07:52:35.391959 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2eadd806-7143-46ba-9e49-f19ac0bd52bd-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "2eadd806-7143-46ba-9e49-f19ac0bd52bd" (UID: "2eadd806-7143-46ba-9e49-f19ac0bd52bd"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 22 07:52:35 crc kubenswrapper[4853]: I1122 07:52:35.409077 4853 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d0e9072b-3e2a-4283-a697-8411049c5161-rabbitmq-confd\") on node \"crc\" DevicePath \"\""
Nov 22 07:52:35 crc kubenswrapper[4853]: I1122 07:52:35.409344 4853 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2eadd806-7143-46ba-9e49-f19ac0bd52bd-rabbitmq-confd\") on node \"crc\" DevicePath \"\""
Nov 22 07:52:35 crc kubenswrapper[4853]: I1122 07:52:35.658878 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"d0e9072b-3e2a-4283-a697-8411049c5161","Type":"ContainerDied","Data":"c893e7dea54c22ee2e3e927dddb2d5c817aa9a13cfb1fe06046ef8056969f7c3"}
Nov 22 07:52:35 crc kubenswrapper[4853]: I1122 07:52:35.658906 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Nov 22 07:52:35 crc kubenswrapper[4853]: I1122 07:52:35.658944 4853 scope.go:117] "RemoveContainer" containerID="0e003a69a0e991d51e41353ef249892b756dc703253c447166b6a6ebeafb41ba"
Nov 22 07:52:35 crc kubenswrapper[4853]: I1122 07:52:35.662332 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"58a7dcf9-4712-4ffe-90d1-ea827dc02982","Type":"ContainerStarted","Data":"e70c75fb9c519f30db8e77ab96277d4b76e1ee3e9f801786929a56ae5fc5c76a"}
Nov 22 07:52:35 crc kubenswrapper[4853]: I1122 07:52:35.664831 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"2eadd806-7143-46ba-9e49-f19ac0bd52bd","Type":"ContainerDied","Data":"42d1f780e47b048f344df7fb59498fd07ad6fa0b397ff050b0167cc142292cd1"}
Nov 22 07:52:35 crc kubenswrapper[4853]: I1122 07:52:35.664883 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Nov 22 07:52:35 crc kubenswrapper[4853]: I1122 07:52:35.668462 4853 generic.go:334] "Generic (PLEG): container finished" podID="726a4e37-efbe-463e-b9a6-5fd93a1f0dc2" containerID="7a1f1a797a5efd47210c483d8f094077c609a992b407f25bf369b9a6df003e87" exitCode=0
Nov 22 07:52:35 crc kubenswrapper[4853]: I1122 07:52:35.669421 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-594cb89c79-tmrtc" event={"ID":"726a4e37-efbe-463e-b9a6-5fd93a1f0dc2","Type":"ContainerDied","Data":"7a1f1a797a5efd47210c483d8f094077c609a992b407f25bf369b9a6df003e87"}
Nov 22 07:52:35 crc kubenswrapper[4853]: I1122 07:52:35.669488 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-594cb89c79-tmrtc" event={"ID":"726a4e37-efbe-463e-b9a6-5fd93a1f0dc2","Type":"ContainerStarted","Data":"cf2ebf004dcacc3a79f1038c004e82404d365cdb7bf3d0953a24983dd64633af"}
Nov 22 07:52:35 crc kubenswrapper[4853]: E1122 07:52:35.694120 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-sbcxc" podUID="713a48af-8f99-42ce-ba64-25dd0645ef66"
Nov 22 07:52:35 crc kubenswrapper[4853]: I1122 07:52:35.701360 4853 scope.go:117] "RemoveContainer" containerID="191995656bf4f31e2276dad55fca2b424abcadafb5511c17ace128a41f95ec41"
Nov 22 07:52:35 crc kubenswrapper[4853]: I1122 07:52:35.889374 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Nov 22 07:52:35 crc kubenswrapper[4853]: I1122 07:52:35.901934 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Nov 22 07:52:35 crc kubenswrapper[4853]: I1122 07:52:35.919458 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"]
Nov 22 07:52:35 crc kubenswrapper[4853]: I1122 07:52:35.930774 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"]
Nov 22 07:52:35 crc kubenswrapper[4853]: I1122 07:52:35.941483 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Nov 22 07:52:35 crc kubenswrapper[4853]: E1122 07:52:35.942864 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2eadd806-7143-46ba-9e49-f19ac0bd52bd" containerName="rabbitmq"
Nov 22 07:52:35 crc kubenswrapper[4853]: I1122 07:52:35.942941 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="2eadd806-7143-46ba-9e49-f19ac0bd52bd" containerName="rabbitmq"
Nov 22 07:52:35 crc kubenswrapper[4853]: E1122 07:52:35.943032 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0e9072b-3e2a-4283-a697-8411049c5161" containerName="setup-container"
Nov 22 07:52:35 crc kubenswrapper[4853]: I1122 07:52:35.943096 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0e9072b-3e2a-4283-a697-8411049c5161" containerName="setup-container"
Nov 22 07:52:35 crc kubenswrapper[4853]: E1122 07:52:35.943159 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0e9072b-3e2a-4283-a697-8411049c5161" containerName="rabbitmq"
Nov 22 07:52:35 crc kubenswrapper[4853]: I1122 07:52:35.943214 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0e9072b-3e2a-4283-a697-8411049c5161" containerName="rabbitmq"
Nov 22 07:52:35 crc kubenswrapper[4853]: E1122 07:52:35.943288 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2eadd806-7143-46ba-9e49-f19ac0bd52bd" containerName="setup-container"
Nov 22 07:52:35 crc kubenswrapper[4853]: I1122 07:52:35.943349 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="2eadd806-7143-46ba-9e49-f19ac0bd52bd" containerName="setup-container"
Nov 22 07:52:35 crc kubenswrapper[4853]: I1122 07:52:35.943651 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="2eadd806-7143-46ba-9e49-f19ac0bd52bd" containerName="rabbitmq"
Nov 22 07:52:35 crc kubenswrapper[4853]: I1122 07:52:35.943741 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0e9072b-3e2a-4283-a697-8411049c5161" containerName="rabbitmq"
Nov 22 07:52:35 crc kubenswrapper[4853]: I1122 07:52:35.945363 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Nov 22 07:52:35 crc kubenswrapper[4853]: I1122 07:52:35.961529 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data"
Nov 22 07:52:35 crc kubenswrapper[4853]: I1122 07:52:35.961730 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc"
Nov 22 07:52:35 crc kubenswrapper[4853]: I1122 07:52:35.961765 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf"
Nov 22 07:52:35 crc kubenswrapper[4853]: I1122 07:52:35.961897 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-gz4cf"
Nov 22 07:52:35 crc kubenswrapper[4853]: I1122 07:52:35.962086 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie"
Nov 22 07:52:35 crc kubenswrapper[4853]: I1122 07:52:35.962235 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user"
Nov 22 07:52:35 crc kubenswrapper[4853]: I1122 07:52:35.963973 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf"
Nov 22 07:52:36 crc kubenswrapper[4853]: I1122 07:52:36.047972 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Nov 22 07:52:36 crc kubenswrapper[4853]: I1122 07:52:36.050487 4853 scope.go:117] "RemoveContainer" containerID="9c5bd95c35228d58bf34b8225ce8dbcb5740ed4739cc97152d70ae88d49e62d7"
Nov 22 07:52:36 crc kubenswrapper[4853]: I1122 07:52:36.106793 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"]
Nov 22 07:52:36 crc kubenswrapper[4853]: I1122 07:52:36.116761 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Nov 22 07:52:36 crc kubenswrapper[4853]: I1122 07:52:36.128874 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf"
Nov 22 07:52:36 crc kubenswrapper[4853]: I1122 07:52:36.138867 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data"
Nov 22 07:52:36 crc kubenswrapper[4853]: I1122 07:52:36.139327 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user"
Nov 22 07:52:36 crc kubenswrapper[4853]: I1122 07:52:36.141476 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf"
Nov 22 07:52:36 crc kubenswrapper[4853]: I1122 07:52:36.142908 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-tjmbv"
Nov 22 07:52:36 crc kubenswrapper[4853]: I1122 07:52:36.144620 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie"
Nov 22 07:52:36 crc kubenswrapper[4853]: I1122 07:52:36.145541 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc"
Nov 22 07:52:36 crc kubenswrapper[4853]: I1122 07:52:36.149916 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2db00bbf-b98a-40ab-b648-5acdcc430bad-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"2db00bbf-b98a-40ab-b648-5acdcc430bad\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 22 07:52:36 crc kubenswrapper[4853]: I1122 07:52:36.150124 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/2db00bbf-b98a-40ab-b648-5acdcc430bad-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"2db00bbf-b98a-40ab-b648-5acdcc430bad\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 22 07:52:36 crc kubenswrapper[4853]: I1122 07:52:36.150346 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2db00bbf-b98a-40ab-b648-5acdcc430bad-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"2db00bbf-b98a-40ab-b648-5acdcc430bad\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 22 07:52:36 crc kubenswrapper[4853]: I1122 07:52:36.150448 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2db00bbf-b98a-40ab-b648-5acdcc430bad-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"2db00bbf-b98a-40ab-b648-5acdcc430bad\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 22 07:52:36 crc kubenswrapper[4853]: I1122 07:52:36.150728 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2db00bbf-b98a-40ab-b648-5acdcc430bad-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"2db00bbf-b98a-40ab-b648-5acdcc430bad\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 22 07:52:36 crc kubenswrapper[4853]: I1122 07:52:36.151946 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"2db00bbf-b98a-40ab-b648-5acdcc430bad\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 22 07:52:36 crc kubenswrapper[4853]: I1122 07:52:36.152242 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2db00bbf-b98a-40ab-b648-5acdcc430bad-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"2db00bbf-b98a-40ab-b648-5acdcc430bad\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 22 07:52:36 crc kubenswrapper[4853]: I1122 07:52:36.152298 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2db00bbf-b98a-40ab-b648-5acdcc430bad-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"2db00bbf-b98a-40ab-b648-5acdcc430bad\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 22 07:52:36 crc kubenswrapper[4853]: I1122 07:52:36.152437 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2db00bbf-b98a-40ab-b648-5acdcc430bad-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"2db00bbf-b98a-40ab-b648-5acdcc430bad\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 22 07:52:36 crc kubenswrapper[4853]: I1122 07:52:36.152488 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jx4pb\" (UniqueName: \"kubernetes.io/projected/2db00bbf-b98a-40ab-b648-5acdcc430bad-kube-api-access-jx4pb\") pod \"rabbitmq-cell1-server-0\" (UID: \"2db00bbf-b98a-40ab-b648-5acdcc430bad\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 22 07:52:36 crc kubenswrapper[4853]: I1122 07:52:36.152556 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2db00bbf-b98a-40ab-b648-5acdcc430bad-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"2db00bbf-b98a-40ab-b648-5acdcc430bad\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 22 07:52:36 crc kubenswrapper[4853]: I1122 07:52:36.208454 4853 scope.go:117] "RemoveContainer" containerID="8e8749dd25d8b57e51e1b4ef9317ecadcde4606ab344737ff6cd9ad213c23386"
Nov 22 07:52:36 crc kubenswrapper[4853]: I1122 07:52:36.235434 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Nov 22 07:52:36 crc kubenswrapper[4853]: I1122 07:52:36.256013 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2db00bbf-b98a-40ab-b648-5acdcc430bad-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"2db00bbf-b98a-40ab-b648-5acdcc430bad\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 22 07:52:36 crc kubenswrapper[4853]: I1122 07:52:36.256065 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2db00bbf-b98a-40ab-b648-5acdcc430bad-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"2db00bbf-b98a-40ab-b648-5acdcc430bad\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 22 07:52:36 crc kubenswrapper[4853]: I1122 07:52:36.256084 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2db00bbf-b98a-40ab-b648-5acdcc430bad-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"2db00bbf-b98a-40ab-b648-5acdcc430bad\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 22 07:52:36 crc kubenswrapper[4853]: I1122 07:52:36.256130 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8897740c-fa9f-4ecb-83ae-4dc74489745d-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"8897740c-fa9f-4ecb-83ae-4dc74489745d\") " pod="openstack/rabbitmq-server-0"
Nov 22 07:52:36 crc kubenswrapper[4853]: I1122 07:52:36.256152 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jx4pb\" (UniqueName: \"kubernetes.io/projected/2db00bbf-b98a-40ab-b648-5acdcc430bad-kube-api-access-jx4pb\") pod \"rabbitmq-cell1-server-0\" (UID: \"2db00bbf-b98a-40ab-b648-5acdcc430bad\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 22 07:52:36 crc kubenswrapper[4853]: I1122 07:52:36.256183 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2db00bbf-b98a-40ab-b648-5acdcc430bad-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"2db00bbf-b98a-40ab-b648-5acdcc430bad\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 22 07:52:36 crc kubenswrapper[4853]: I1122 07:52:36.257098 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2db00bbf-b98a-40ab-b648-5acdcc430bad-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"2db00bbf-b98a-40ab-b648-5acdcc430bad\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 22 07:52:36 crc kubenswrapper[4853]: I1122 07:52:36.257602 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2db00bbf-b98a-40ab-b648-5acdcc430bad-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"2db00bbf-b98a-40ab-b648-5acdcc430bad\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 22 07:52:36 crc kubenswrapper[4853]: I1122 07:52:36.257660 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/2db00bbf-b98a-40ab-b648-5acdcc430bad-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"2db00bbf-b98a-40ab-b648-5acdcc430bad\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 22 07:52:36 crc kubenswrapper[4853]: I1122 07:52:36.257731 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8897740c-fa9f-4ecb-83ae-4dc74489745d-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"8897740c-fa9f-4ecb-83ae-4dc74489745d\") " pod="openstack/rabbitmq-server-0"
Nov 22 07:52:36 crc kubenswrapper[4853]: I1122 07:52:36.257822 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2db00bbf-b98a-40ab-b648-5acdcc430bad-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"2db00bbf-b98a-40ab-b648-5acdcc430bad\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 22 07:52:36 crc kubenswrapper[4853]: I1122 07:52:36.257857 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8897740c-fa9f-4ecb-83ae-4dc74489745d-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"8897740c-fa9f-4ecb-83ae-4dc74489745d\") " pod="openstack/rabbitmq-server-0"
Nov 22 07:52:36 crc kubenswrapper[4853]: I1122 07:52:36.257910 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8897740c-fa9f-4ecb-83ae-4dc74489745d-config-data\") pod \"rabbitmq-server-0\" (UID: \"8897740c-fa9f-4ecb-83ae-4dc74489745d\") " pod="openstack/rabbitmq-server-0"
Nov 22 07:52:36 crc kubenswrapper[4853]: I1122 07:52:36.257936 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2db00bbf-b98a-40ab-b648-5acdcc430bad-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"2db00bbf-b98a-40ab-b648-5acdcc430bad\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 22 07:52:36 crc kubenswrapper[4853]: I1122 07:52:36.257977 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2db00bbf-b98a-40ab-b648-5acdcc430bad-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"2db00bbf-b98a-40ab-b648-5acdcc430bad\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 22 07:52:36 crc kubenswrapper[4853]: I1122 07:52:36.257987 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2db00bbf-b98a-40ab-b648-5acdcc430bad-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"2db00bbf-b98a-40ab-b648-5acdcc430bad\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 22 07:52:36 crc kubenswrapper[4853]: I1122 07:52:36.258011 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rnb4b\" (UniqueName: \"kubernetes.io/projected/8897740c-fa9f-4ecb-83ae-4dc74489745d-kube-api-access-rnb4b\") pod \"rabbitmq-server-0\" (UID: \"8897740c-fa9f-4ecb-83ae-4dc74489745d\") " pod="openstack/rabbitmq-server-0"
Nov 22 07:52:36 crc kubenswrapper[4853]: I1122 07:52:36.258078 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8897740c-fa9f-4ecb-83ae-4dc74489745d-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"8897740c-fa9f-4ecb-83ae-4dc74489745d\") " pod="openstack/rabbitmq-server-0"
Nov 22 07:52:36 crc kubenswrapper[4853]: I1122 07:52:36.258113 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/8897740c-fa9f-4ecb-83ae-4dc74489745d-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"8897740c-fa9f-4ecb-83ae-4dc74489745d\") " pod="openstack/rabbitmq-server-0"
Nov 22 07:52:36 crc kubenswrapper[4853]: I1122 07:52:36.258143 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8897740c-fa9f-4ecb-83ae-4dc74489745d-server-conf\") pod \"rabbitmq-server-0\" (UID: \"8897740c-fa9f-4ecb-83ae-4dc74489745d\") " pod="openstack/rabbitmq-server-0"
Nov 22 07:52:36 crc kubenswrapper[4853]: I1122 07:52:36.258175 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2db00bbf-b98a-40ab-b648-5acdcc430bad-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"2db00bbf-b98a-40ab-b648-5acdcc430bad\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 22 07:52:36 crc kubenswrapper[4853]: I1122 07:52:36.260369 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/8897740c-fa9f-4ecb-83ae-4dc74489745d-pod-info\") pod \"rabbitmq-server-0\" (UID: \"8897740c-fa9f-4ecb-83ae-4dc74489745d\") " pod="openstack/rabbitmq-server-0"
Nov 22 07:52:36 crc kubenswrapper[4853]: I1122 07:52:36.260439 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8897740c-fa9f-4ecb-83ae-4dc74489745d-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"8897740c-fa9f-4ecb-83ae-4dc74489745d\") " pod="openstack/rabbitmq-server-0"
Nov 22 07:52:36 crc kubenswrapper[4853]: I1122 07:52:36.260485 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"2db00bbf-b98a-40ab-b648-5acdcc430bad\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 22 07:52:36 crc kubenswrapper[4853]: I1122 07:52:36.260586 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-server-0\" (UID: \"8897740c-fa9f-4ecb-83ae-4dc74489745d\") " pod="openstack/rabbitmq-server-0"
Nov 22 07:52:36 crc kubenswrapper[4853]: I1122 07:52:36.261381 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2db00bbf-b98a-40ab-b648-5acdcc430bad-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"2db00bbf-b98a-40ab-b648-5acdcc430bad\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 22 07:52:36 crc kubenswrapper[4853]: I1122 07:52:36.261814 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2db00bbf-b98a-40ab-b648-5acdcc430bad-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"2db00bbf-b98a-40ab-b648-5acdcc430bad\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 22 07:52:36 crc kubenswrapper[4853]: I1122 07:52:36.262204 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/2db00bbf-b98a-40ab-b648-5acdcc430bad-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"2db00bbf-b98a-40ab-b648-5acdcc430bad\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 22 07:52:36 crc kubenswrapper[4853]: I1122 07:52:36.262916 4853 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"2db00bbf-b98a-40ab-b648-5acdcc430bad\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/rabbitmq-cell1-server-0"
Nov 22 07:52:36 crc kubenswrapper[4853]: I1122 07:52:36.265366 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2db00bbf-b98a-40ab-b648-5acdcc430bad-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"2db00bbf-b98a-40ab-b648-5acdcc430bad\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 22 07:52:36 crc kubenswrapper[4853]: I1122 07:52:36.275271 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2db00bbf-b98a-40ab-b648-5acdcc430bad-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"2db00bbf-b98a-40ab-b648-5acdcc430bad\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 22 07:52:36 crc kubenswrapper[4853]: I1122 07:52:36.277446 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2db00bbf-b98a-40ab-b648-5acdcc430bad-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"2db00bbf-b98a-40ab-b648-5acdcc430bad\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 22 07:52:36 crc kubenswrapper[4853]: I1122 07:52:36.290952 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jx4pb\" (UniqueName: \"kubernetes.io/projected/2db00bbf-b98a-40ab-b648-5acdcc430bad-kube-api-access-jx4pb\") pod \"rabbitmq-cell1-server-0\" (UID: \"2db00bbf-b98a-40ab-b648-5acdcc430bad\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 22 07:52:36 crc kubenswrapper[4853]: I1122 07:52:36.298980 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"2db00bbf-b98a-40ab-b648-5acdcc430bad\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 22 07:52:36 crc kubenswrapper[4853]: I1122 07:52:36.352513 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Nov 22 07:52:36 crc kubenswrapper[4853]: I1122 07:52:36.363488 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8897740c-fa9f-4ecb-83ae-4dc74489745d-config-data\") pod \"rabbitmq-server-0\" (UID: \"8897740c-fa9f-4ecb-83ae-4dc74489745d\") " pod="openstack/rabbitmq-server-0"
Nov 22 07:52:36 crc kubenswrapper[4853]: I1122 07:52:36.363546 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rnb4b\" (UniqueName: \"kubernetes.io/projected/8897740c-fa9f-4ecb-83ae-4dc74489745d-kube-api-access-rnb4b\") pod \"rabbitmq-server-0\" (UID: \"8897740c-fa9f-4ecb-83ae-4dc74489745d\") " pod="openstack/rabbitmq-server-0"
Nov 22 07:52:36 crc kubenswrapper[4853]: I1122 07:52:36.363594 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8897740c-fa9f-4ecb-83ae-4dc74489745d-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"8897740c-fa9f-4ecb-83ae-4dc74489745d\") " pod="openstack/rabbitmq-server-0"
Nov 22 07:52:36 crc kubenswrapper[4853]: I1122 07:52:36.363612 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/8897740c-fa9f-4ecb-83ae-4dc74489745d-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"8897740c-fa9f-4ecb-83ae-4dc74489745d\") " pod="openstack/rabbitmq-server-0"
Nov 22 07:52:36 crc kubenswrapper[4853]: I1122 07:52:36.363628 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8897740c-fa9f-4ecb-83ae-4dc74489745d-server-conf\") pod \"rabbitmq-server-0\" (UID: \"8897740c-fa9f-4ecb-83ae-4dc74489745d\") " pod="openstack/rabbitmq-server-0"
Nov 22 07:52:36 crc kubenswrapper[4853]: I1122 07:52:36.363649 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/8897740c-fa9f-4ecb-83ae-4dc74489745d-pod-info\") pod \"rabbitmq-server-0\" (UID: \"8897740c-fa9f-4ecb-83ae-4dc74489745d\") " pod="openstack/rabbitmq-server-0"
Nov 22 07:52:36 crc kubenswrapper[4853]: I1122 07:52:36.363663 4853
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8897740c-fa9f-4ecb-83ae-4dc74489745d-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"8897740c-fa9f-4ecb-83ae-4dc74489745d\") " pod="openstack/rabbitmq-server-0" Nov 22 07:52:36 crc kubenswrapper[4853]: I1122 07:52:36.363708 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-server-0\" (UID: \"8897740c-fa9f-4ecb-83ae-4dc74489745d\") " pod="openstack/rabbitmq-server-0" Nov 22 07:52:36 crc kubenswrapper[4853]: I1122 07:52:36.363871 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8897740c-fa9f-4ecb-83ae-4dc74489745d-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"8897740c-fa9f-4ecb-83ae-4dc74489745d\") " pod="openstack/rabbitmq-server-0" Nov 22 07:52:36 crc kubenswrapper[4853]: I1122 07:52:36.363965 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8897740c-fa9f-4ecb-83ae-4dc74489745d-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"8897740c-fa9f-4ecb-83ae-4dc74489745d\") " pod="openstack/rabbitmq-server-0" Nov 22 07:52:36 crc kubenswrapper[4853]: I1122 07:52:36.364010 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8897740c-fa9f-4ecb-83ae-4dc74489745d-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"8897740c-fa9f-4ecb-83ae-4dc74489745d\") " pod="openstack/rabbitmq-server-0" Nov 22 07:52:36 crc kubenswrapper[4853]: I1122 07:52:36.366202 4853 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-server-0\" (UID: \"8897740c-fa9f-4ecb-83ae-4dc74489745d\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/rabbitmq-server-0" Nov 22 07:52:36 crc kubenswrapper[4853]: I1122 07:52:36.366436 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8897740c-fa9f-4ecb-83ae-4dc74489745d-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"8897740c-fa9f-4ecb-83ae-4dc74489745d\") " pod="openstack/rabbitmq-server-0" Nov 22 07:52:36 crc kubenswrapper[4853]: I1122 07:52:36.367160 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8897740c-fa9f-4ecb-83ae-4dc74489745d-config-data\") pod \"rabbitmq-server-0\" (UID: \"8897740c-fa9f-4ecb-83ae-4dc74489745d\") " pod="openstack/rabbitmq-server-0" Nov 22 07:52:36 crc kubenswrapper[4853]: I1122 07:52:36.368213 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8897740c-fa9f-4ecb-83ae-4dc74489745d-server-conf\") pod \"rabbitmq-server-0\" (UID: \"8897740c-fa9f-4ecb-83ae-4dc74489745d\") " pod="openstack/rabbitmq-server-0" Nov 22 07:52:36 crc kubenswrapper[4853]: I1122 07:52:36.369976 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8897740c-fa9f-4ecb-83ae-4dc74489745d-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"8897740c-fa9f-4ecb-83ae-4dc74489745d\") " 
pod="openstack/rabbitmq-server-0" Nov 22 07:52:36 crc kubenswrapper[4853]: I1122 07:52:36.370068 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8897740c-fa9f-4ecb-83ae-4dc74489745d-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"8897740c-fa9f-4ecb-83ae-4dc74489745d\") " pod="openstack/rabbitmq-server-0" Nov 22 07:52:36 crc kubenswrapper[4853]: I1122 07:52:36.370963 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8897740c-fa9f-4ecb-83ae-4dc74489745d-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"8897740c-fa9f-4ecb-83ae-4dc74489745d\") " pod="openstack/rabbitmq-server-0" Nov 22 07:52:36 crc kubenswrapper[4853]: I1122 07:52:36.371447 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/8897740c-fa9f-4ecb-83ae-4dc74489745d-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"8897740c-fa9f-4ecb-83ae-4dc74489745d\") " pod="openstack/rabbitmq-server-0" Nov 22 07:52:36 crc kubenswrapper[4853]: I1122 07:52:36.376402 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8897740c-fa9f-4ecb-83ae-4dc74489745d-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"8897740c-fa9f-4ecb-83ae-4dc74489745d\") " pod="openstack/rabbitmq-server-0" Nov 22 07:52:36 crc kubenswrapper[4853]: I1122 07:52:36.377432 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/8897740c-fa9f-4ecb-83ae-4dc74489745d-pod-info\") pod \"rabbitmq-server-0\" (UID: \"8897740c-fa9f-4ecb-83ae-4dc74489745d\") " pod="openstack/rabbitmq-server-0" Nov 22 07:52:36 crc kubenswrapper[4853]: I1122 07:52:36.392281 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rnb4b\" (UniqueName: \"kubernetes.io/projected/8897740c-fa9f-4ecb-83ae-4dc74489745d-kube-api-access-rnb4b\") pod \"rabbitmq-server-0\" (UID: \"8897740c-fa9f-4ecb-83ae-4dc74489745d\") " pod="openstack/rabbitmq-server-0" Nov 22 07:52:36 crc kubenswrapper[4853]: I1122 07:52:36.422192 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-server-0\" (UID: \"8897740c-fa9f-4ecb-83ae-4dc74489745d\") " pod="openstack/rabbitmq-server-0" Nov 22 07:52:36 crc kubenswrapper[4853]: I1122 07:52:36.500241 4853 util.go:30] "No sandbox for pod can be found. 
Nov 22 07:52:36 crc kubenswrapper[4853]: I1122 07:52:36.703600 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-594cb89c79-tmrtc" event={"ID":"726a4e37-efbe-463e-b9a6-5fd93a1f0dc2","Type":"ContainerStarted","Data":"13155b2038e6ccf6690a3e9dbecef5d0a44f01103e135d001a89d36b10a89d21"}
Nov 22 07:52:36 crc kubenswrapper[4853]: I1122 07:52:36.704099 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-594cb89c79-tmrtc"
Nov 22 07:52:36 crc kubenswrapper[4853]: I1122 07:52:36.709544 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"58a7dcf9-4712-4ffe-90d1-ea827dc02982","Type":"ContainerStarted","Data":"35f192b2f7f4f123222ff4589dde453f9267432c2d0a6c6ed9cb668ff232bd23"}
Nov 22 07:52:36 crc kubenswrapper[4853]: I1122 07:52:36.727953 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-594cb89c79-tmrtc" podStartSLOduration=11.727931526 podStartE2EDuration="11.727931526s" podCreationTimestamp="2025-11-22 07:52:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:52:36.723067986 +0000 UTC m=+2555.563690632" watchObservedRunningTime="2025-11-22 07:52:36.727931526 +0000 UTC m=+2555.568554152"
Nov 22 07:52:36 crc kubenswrapper[4853]: I1122 07:52:36.878889 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Nov 22 07:52:37 crc kubenswrapper[4853]: I1122 07:52:37.058285 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Nov 22 07:52:37 crc kubenswrapper[4853]: I1122 07:52:37.724711 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"2db00bbf-b98a-40ab-b648-5acdcc430bad","Type":"ContainerStarted","Data":"65ab7bc3bc9ed6d130d89b0fd6fe4ca939e33ada43b48b7e5897b32065da8375"}
Nov 22 07:52:37 crc kubenswrapper[4853]: I1122 07:52:37.727867 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"8897740c-fa9f-4ecb-83ae-4dc74489745d","Type":"ContainerStarted","Data":"b56a28b24a1e41c86b7d6ba6a6b5b1ef859ccdbad6d3dff3af57454019a3b5c2"}
Nov 22 07:52:37 crc kubenswrapper[4853]: I1122 07:52:37.772319 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2eadd806-7143-46ba-9e49-f19ac0bd52bd" path="/var/lib/kubelet/pods/2eadd806-7143-46ba-9e49-f19ac0bd52bd/volumes"
Nov 22 07:52:37 crc kubenswrapper[4853]: I1122 07:52:37.773403 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d0e9072b-3e2a-4283-a697-8411049c5161" path="/var/lib/kubelet/pods/d0e9072b-3e2a-4283-a697-8411049c5161/volumes"
Nov 22 07:52:38 crc kubenswrapper[4853]: E1122 07:52:38.839594 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ceilometer-0" podUID="58a7dcf9-4712-4ffe-90d1-ea827dc02982"
Nov 22 07:52:39 crc kubenswrapper[4853]: I1122 07:52:39.410264 4853 scope.go:117] "RemoveContainer" containerID="915d440089db73eb2d99883ce7e639d4b34362febadb1b2dcadd7f233f724afc"
Nov 22 07:52:39 crc kubenswrapper[4853]: I1122 07:52:39.457029 4853 scope.go:117] "RemoveContainer" containerID="915327cc8865341dc97386fd7f4ebeb4cea536bf7051d5ea872199c547bc5844"
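The podStartSLOduration record above can be reproduced by hand: for dnsmasq-dns-594cb89c79-tmrtc both pull timestamps are the Go zero time (0001-01-01, i.e. no image pull happened), so the logged SLO duration is simply the watch-observed running time minus podCreationTimestamp. A quick check, assuming RFC 3339 renderings of the logged timestamps:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Logged values for dnsmasq-dns-594cb89c79-tmrtc, re-rendered as RFC 3339.
	created, _ := time.Parse(time.RFC3339, "2025-11-22T07:52:25Z")
	running, _ := time.Parse(time.RFC3339Nano, "2025-11-22T07:52:36.727931526Z")

	// No pull happened (zero-valued pull timestamps), so SLO == E2E.
	fmt.Printf("%.9f\n", running.Sub(created).Seconds()) // 11.727931526
}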
containerID="915327cc8865341dc97386fd7f4ebeb4cea536bf7051d5ea872199c547bc5844" Nov 22 07:52:39 crc kubenswrapper[4853]: I1122 07:52:39.525636 4853 scope.go:117] "RemoveContainer" containerID="dffc576b2a9f3dea8dda6a5e835de5cfc9795ae112e7807c9965766116b99569" Nov 22 07:52:39 crc kubenswrapper[4853]: I1122 07:52:39.568700 4853 scope.go:117] "RemoveContainer" containerID="848335d0ad529a5c668173cc96d09080dfc7c9290a39d88ea7ef87c0c00c6817" Nov 22 07:52:39 crc kubenswrapper[4853]: I1122 07:52:39.626920 4853 scope.go:117] "RemoveContainer" containerID="c1adc43161a657395b67cc559c53c829491e0cc513cd2949727c834c39766390" Nov 22 07:52:39 crc kubenswrapper[4853]: I1122 07:52:39.712286 4853 scope.go:117] "RemoveContainer" containerID="cc8617f03d625b5c1b6962819712d24a356c6e2363c4b3bbf38041fe6dbac4cf" Nov 22 07:52:39 crc kubenswrapper[4853]: I1122 07:52:39.774296 4853 scope.go:117] "RemoveContainer" containerID="677ddc2c25334218fd8b8016ea3bc764045d12837a29f5c16ed48b53c2a39fcf" Nov 22 07:52:39 crc kubenswrapper[4853]: I1122 07:52:39.781462 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"58a7dcf9-4712-4ffe-90d1-ea827dc02982","Type":"ContainerStarted","Data":"18db04a32f5985c90756354832bf97af6c7631b8bbc58d64ac27d03859f8f909"} Nov 22 07:52:39 crc kubenswrapper[4853]: I1122 07:52:39.781645 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 22 07:52:39 crc kubenswrapper[4853]: I1122 07:52:39.823922 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"2db00bbf-b98a-40ab-b648-5acdcc430bad","Type":"ContainerStarted","Data":"41e6a633ae00bec1e7d8e0e2217a7519e4a12886890ddfc2d62a6085ebbf4125"} Nov 22 07:52:39 crc kubenswrapper[4853]: I1122 07:52:39.827254 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"8897740c-fa9f-4ecb-83ae-4dc74489745d","Type":"ContainerStarted","Data":"3e2ba9c5d4c1ee640f5df5bd50ac9f25c2dfbba0c82fa0b91935d9967fa4efee"} Nov 22 07:52:39 crc kubenswrapper[4853]: I1122 07:52:39.836177 4853 scope.go:117] "RemoveContainer" containerID="a3861ced43ef558639f77a20e56162a89c124dd3bbfd3b4e531cc643f8fdcea1" Nov 22 07:52:39 crc kubenswrapper[4853]: E1122 07:52:39.836230 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="58a7dcf9-4712-4ffe-90d1-ea827dc02982" Nov 22 07:52:39 crc kubenswrapper[4853]: I1122 07:52:39.911692 4853 scope.go:117] "RemoveContainer" containerID="dafb7503c934853811547f03915e27676375f48e68f08dd7036038b32f63db99" Nov 22 07:52:40 crc kubenswrapper[4853]: E1122 07:52:40.841681 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="58a7dcf9-4712-4ffe-90d1-ea827dc02982" Nov 22 07:52:45 crc kubenswrapper[4853]: I1122 07:52:45.771936 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-594cb89c79-tmrtc" Nov 22 07:52:45 crc kubenswrapper[4853]: I1122 07:52:45.845769 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/dnsmasq-dns-6d99f6bc7f-97bjc"] Nov 22 07:52:45 crc kubenswrapper[4853]: I1122 07:52:45.846279 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6d99f6bc7f-97bjc" podUID="d8982e8e-d6aa-4588-873e-a1853d2b1ff4" containerName="dnsmasq-dns" containerID="cri-o://0e14c3c05834d6313835abc55ebde8795a38b5a094dfb8b1553c60fdb5555ad0" gracePeriod=10 Nov 22 07:52:46 crc kubenswrapper[4853]: I1122 07:52:46.013046 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5596c69fcc-jz5vh"] Nov 22 07:52:46 crc kubenswrapper[4853]: I1122 07:52:46.023964 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5596c69fcc-jz5vh" Nov 22 07:52:46 crc kubenswrapper[4853]: I1122 07:52:46.042093 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/c6e17800-8b9e-4fdd-9b49-a2b48b8cb5a2-openstack-edpm-ipam\") pod \"dnsmasq-dns-5596c69fcc-jz5vh\" (UID: \"c6e17800-8b9e-4fdd-9b49-a2b48b8cb5a2\") " pod="openstack/dnsmasq-dns-5596c69fcc-jz5vh" Nov 22 07:52:46 crc kubenswrapper[4853]: I1122 07:52:46.042218 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c6e17800-8b9e-4fdd-9b49-a2b48b8cb5a2-dns-swift-storage-0\") pod \"dnsmasq-dns-5596c69fcc-jz5vh\" (UID: \"c6e17800-8b9e-4fdd-9b49-a2b48b8cb5a2\") " pod="openstack/dnsmasq-dns-5596c69fcc-jz5vh" Nov 22 07:52:46 crc kubenswrapper[4853]: I1122 07:52:46.042672 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c6e17800-8b9e-4fdd-9b49-a2b48b8cb5a2-ovsdbserver-sb\") pod \"dnsmasq-dns-5596c69fcc-jz5vh\" (UID: \"c6e17800-8b9e-4fdd-9b49-a2b48b8cb5a2\") " pod="openstack/dnsmasq-dns-5596c69fcc-jz5vh" Nov 22 07:52:46 crc kubenswrapper[4853]: I1122 07:52:46.042802 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bksxk\" (UniqueName: \"kubernetes.io/projected/c6e17800-8b9e-4fdd-9b49-a2b48b8cb5a2-kube-api-access-bksxk\") pod \"dnsmasq-dns-5596c69fcc-jz5vh\" (UID: \"c6e17800-8b9e-4fdd-9b49-a2b48b8cb5a2\") " pod="openstack/dnsmasq-dns-5596c69fcc-jz5vh" Nov 22 07:52:46 crc kubenswrapper[4853]: I1122 07:52:46.042848 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c6e17800-8b9e-4fdd-9b49-a2b48b8cb5a2-config\") pod \"dnsmasq-dns-5596c69fcc-jz5vh\" (UID: \"c6e17800-8b9e-4fdd-9b49-a2b48b8cb5a2\") " pod="openstack/dnsmasq-dns-5596c69fcc-jz5vh" Nov 22 07:52:46 crc kubenswrapper[4853]: I1122 07:52:46.042895 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c6e17800-8b9e-4fdd-9b49-a2b48b8cb5a2-ovsdbserver-nb\") pod \"dnsmasq-dns-5596c69fcc-jz5vh\" (UID: \"c6e17800-8b9e-4fdd-9b49-a2b48b8cb5a2\") " pod="openstack/dnsmasq-dns-5596c69fcc-jz5vh" Nov 22 07:52:46 crc kubenswrapper[4853]: I1122 07:52:46.050013 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c6e17800-8b9e-4fdd-9b49-a2b48b8cb5a2-dns-svc\") pod \"dnsmasq-dns-5596c69fcc-jz5vh\" (UID: 
\"c6e17800-8b9e-4fdd-9b49-a2b48b8cb5a2\") " pod="openstack/dnsmasq-dns-5596c69fcc-jz5vh" Nov 22 07:52:46 crc kubenswrapper[4853]: I1122 07:52:46.052337 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5596c69fcc-jz5vh"] Nov 22 07:52:46 crc kubenswrapper[4853]: I1122 07:52:46.152250 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c6e17800-8b9e-4fdd-9b49-a2b48b8cb5a2-config\") pod \"dnsmasq-dns-5596c69fcc-jz5vh\" (UID: \"c6e17800-8b9e-4fdd-9b49-a2b48b8cb5a2\") " pod="openstack/dnsmasq-dns-5596c69fcc-jz5vh" Nov 22 07:52:46 crc kubenswrapper[4853]: I1122 07:52:46.152790 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c6e17800-8b9e-4fdd-9b49-a2b48b8cb5a2-ovsdbserver-nb\") pod \"dnsmasq-dns-5596c69fcc-jz5vh\" (UID: \"c6e17800-8b9e-4fdd-9b49-a2b48b8cb5a2\") " pod="openstack/dnsmasq-dns-5596c69fcc-jz5vh" Nov 22 07:52:46 crc kubenswrapper[4853]: I1122 07:52:46.153990 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c6e17800-8b9e-4fdd-9b49-a2b48b8cb5a2-dns-svc\") pod \"dnsmasq-dns-5596c69fcc-jz5vh\" (UID: \"c6e17800-8b9e-4fdd-9b49-a2b48b8cb5a2\") " pod="openstack/dnsmasq-dns-5596c69fcc-jz5vh" Nov 22 07:52:46 crc kubenswrapper[4853]: I1122 07:52:46.154135 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/c6e17800-8b9e-4fdd-9b49-a2b48b8cb5a2-openstack-edpm-ipam\") pod \"dnsmasq-dns-5596c69fcc-jz5vh\" (UID: \"c6e17800-8b9e-4fdd-9b49-a2b48b8cb5a2\") " pod="openstack/dnsmasq-dns-5596c69fcc-jz5vh" Nov 22 07:52:46 crc kubenswrapper[4853]: I1122 07:52:46.154327 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c6e17800-8b9e-4fdd-9b49-a2b48b8cb5a2-dns-swift-storage-0\") pod \"dnsmasq-dns-5596c69fcc-jz5vh\" (UID: \"c6e17800-8b9e-4fdd-9b49-a2b48b8cb5a2\") " pod="openstack/dnsmasq-dns-5596c69fcc-jz5vh" Nov 22 07:52:46 crc kubenswrapper[4853]: I1122 07:52:46.154493 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c6e17800-8b9e-4fdd-9b49-a2b48b8cb5a2-ovsdbserver-sb\") pod \"dnsmasq-dns-5596c69fcc-jz5vh\" (UID: \"c6e17800-8b9e-4fdd-9b49-a2b48b8cb5a2\") " pod="openstack/dnsmasq-dns-5596c69fcc-jz5vh" Nov 22 07:52:46 crc kubenswrapper[4853]: I1122 07:52:46.154603 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bksxk\" (UniqueName: \"kubernetes.io/projected/c6e17800-8b9e-4fdd-9b49-a2b48b8cb5a2-kube-api-access-bksxk\") pod \"dnsmasq-dns-5596c69fcc-jz5vh\" (UID: \"c6e17800-8b9e-4fdd-9b49-a2b48b8cb5a2\") " pod="openstack/dnsmasq-dns-5596c69fcc-jz5vh" Nov 22 07:52:46 crc kubenswrapper[4853]: I1122 07:52:46.153458 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c6e17800-8b9e-4fdd-9b49-a2b48b8cb5a2-config\") pod \"dnsmasq-dns-5596c69fcc-jz5vh\" (UID: \"c6e17800-8b9e-4fdd-9b49-a2b48b8cb5a2\") " pod="openstack/dnsmasq-dns-5596c69fcc-jz5vh" Nov 22 07:52:46 crc kubenswrapper[4853]: I1122 07:52:46.155669 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/c6e17800-8b9e-4fdd-9b49-a2b48b8cb5a2-dns-svc\") pod \"dnsmasq-dns-5596c69fcc-jz5vh\" (UID: \"c6e17800-8b9e-4fdd-9b49-a2b48b8cb5a2\") " pod="openstack/dnsmasq-dns-5596c69fcc-jz5vh" Nov 22 07:52:46 crc kubenswrapper[4853]: I1122 07:52:46.156346 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c6e17800-8b9e-4fdd-9b49-a2b48b8cb5a2-dns-swift-storage-0\") pod \"dnsmasq-dns-5596c69fcc-jz5vh\" (UID: \"c6e17800-8b9e-4fdd-9b49-a2b48b8cb5a2\") " pod="openstack/dnsmasq-dns-5596c69fcc-jz5vh" Nov 22 07:52:46 crc kubenswrapper[4853]: I1122 07:52:46.156801 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/c6e17800-8b9e-4fdd-9b49-a2b48b8cb5a2-openstack-edpm-ipam\") pod \"dnsmasq-dns-5596c69fcc-jz5vh\" (UID: \"c6e17800-8b9e-4fdd-9b49-a2b48b8cb5a2\") " pod="openstack/dnsmasq-dns-5596c69fcc-jz5vh" Nov 22 07:52:46 crc kubenswrapper[4853]: I1122 07:52:46.153798 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c6e17800-8b9e-4fdd-9b49-a2b48b8cb5a2-ovsdbserver-nb\") pod \"dnsmasq-dns-5596c69fcc-jz5vh\" (UID: \"c6e17800-8b9e-4fdd-9b49-a2b48b8cb5a2\") " pod="openstack/dnsmasq-dns-5596c69fcc-jz5vh" Nov 22 07:52:46 crc kubenswrapper[4853]: I1122 07:52:46.157767 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c6e17800-8b9e-4fdd-9b49-a2b48b8cb5a2-ovsdbserver-sb\") pod \"dnsmasq-dns-5596c69fcc-jz5vh\" (UID: \"c6e17800-8b9e-4fdd-9b49-a2b48b8cb5a2\") " pod="openstack/dnsmasq-dns-5596c69fcc-jz5vh" Nov 22 07:52:46 crc kubenswrapper[4853]: I1122 07:52:46.189737 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bksxk\" (UniqueName: \"kubernetes.io/projected/c6e17800-8b9e-4fdd-9b49-a2b48b8cb5a2-kube-api-access-bksxk\") pod \"dnsmasq-dns-5596c69fcc-jz5vh\" (UID: \"c6e17800-8b9e-4fdd-9b49-a2b48b8cb5a2\") " pod="openstack/dnsmasq-dns-5596c69fcc-jz5vh" Nov 22 07:52:46 crc kubenswrapper[4853]: I1122 07:52:46.408093 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5596c69fcc-jz5vh" Nov 22 07:52:46 crc kubenswrapper[4853]: I1122 07:52:46.748688 4853 scope.go:117] "RemoveContainer" containerID="f1c0ac83a6a857bb9ac64bb048075d436003939992592a9a9b2d45101ef2a2de" Nov 22 07:52:46 crc kubenswrapper[4853]: E1122 07:52:46.749031 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" Nov 22 07:52:46 crc kubenswrapper[4853]: I1122 07:52:46.929930 4853 generic.go:334] "Generic (PLEG): container finished" podID="d8982e8e-d6aa-4588-873e-a1853d2b1ff4" containerID="0e14c3c05834d6313835abc55ebde8795a38b5a094dfb8b1553c60fdb5555ad0" exitCode=0 Nov 22 07:52:46 crc kubenswrapper[4853]: I1122 07:52:46.929994 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d99f6bc7f-97bjc" event={"ID":"d8982e8e-d6aa-4588-873e-a1853d2b1ff4","Type":"ContainerDied","Data":"0e14c3c05834d6313835abc55ebde8795a38b5a094dfb8b1553c60fdb5555ad0"} Nov 22 07:52:47 crc kubenswrapper[4853]: I1122 07:52:47.198638 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6d99f6bc7f-97bjc" Nov 22 07:52:47 crc kubenswrapper[4853]: I1122 07:52:47.329466 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8982e8e-d6aa-4588-873e-a1853d2b1ff4-config\") pod \"d8982e8e-d6aa-4588-873e-a1853d2b1ff4\" (UID: \"d8982e8e-d6aa-4588-873e-a1853d2b1ff4\") " Nov 22 07:52:47 crc kubenswrapper[4853]: I1122 07:52:47.329817 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bz6sn\" (UniqueName: \"kubernetes.io/projected/d8982e8e-d6aa-4588-873e-a1853d2b1ff4-kube-api-access-bz6sn\") pod \"d8982e8e-d6aa-4588-873e-a1853d2b1ff4\" (UID: \"d8982e8e-d6aa-4588-873e-a1853d2b1ff4\") " Nov 22 07:52:47 crc kubenswrapper[4853]: I1122 07:52:47.330006 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d8982e8e-d6aa-4588-873e-a1853d2b1ff4-dns-svc\") pod \"d8982e8e-d6aa-4588-873e-a1853d2b1ff4\" (UID: \"d8982e8e-d6aa-4588-873e-a1853d2b1ff4\") " Nov 22 07:52:47 crc kubenswrapper[4853]: I1122 07:52:47.330067 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d8982e8e-d6aa-4588-873e-a1853d2b1ff4-dns-swift-storage-0\") pod \"d8982e8e-d6aa-4588-873e-a1853d2b1ff4\" (UID: \"d8982e8e-d6aa-4588-873e-a1853d2b1ff4\") " Nov 22 07:52:47 crc kubenswrapper[4853]: I1122 07:52:47.330133 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d8982e8e-d6aa-4588-873e-a1853d2b1ff4-ovsdbserver-sb\") pod \"d8982e8e-d6aa-4588-873e-a1853d2b1ff4\" (UID: \"d8982e8e-d6aa-4588-873e-a1853d2b1ff4\") " Nov 22 07:52:47 crc kubenswrapper[4853]: I1122 07:52:47.330186 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d8982e8e-d6aa-4588-873e-a1853d2b1ff4-ovsdbserver-nb\") pod 
\"d8982e8e-d6aa-4588-873e-a1853d2b1ff4\" (UID: \"d8982e8e-d6aa-4588-873e-a1853d2b1ff4\") " Nov 22 07:52:47 crc kubenswrapper[4853]: I1122 07:52:47.336170 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5596c69fcc-jz5vh"] Nov 22 07:52:47 crc kubenswrapper[4853]: I1122 07:52:47.353052 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8982e8e-d6aa-4588-873e-a1853d2b1ff4-kube-api-access-bz6sn" (OuterVolumeSpecName: "kube-api-access-bz6sn") pod "d8982e8e-d6aa-4588-873e-a1853d2b1ff4" (UID: "d8982e8e-d6aa-4588-873e-a1853d2b1ff4"). InnerVolumeSpecName "kube-api-access-bz6sn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:52:47 crc kubenswrapper[4853]: I1122 07:52:47.421166 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d8982e8e-d6aa-4588-873e-a1853d2b1ff4-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "d8982e8e-d6aa-4588-873e-a1853d2b1ff4" (UID: "d8982e8e-d6aa-4588-873e-a1853d2b1ff4"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:52:47 crc kubenswrapper[4853]: I1122 07:52:47.426980 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d8982e8e-d6aa-4588-873e-a1853d2b1ff4-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d8982e8e-d6aa-4588-873e-a1853d2b1ff4" (UID: "d8982e8e-d6aa-4588-873e-a1853d2b1ff4"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:52:47 crc kubenswrapper[4853]: I1122 07:52:47.428069 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d8982e8e-d6aa-4588-873e-a1853d2b1ff4-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d8982e8e-d6aa-4588-873e-a1853d2b1ff4" (UID: "d8982e8e-d6aa-4588-873e-a1853d2b1ff4"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:52:47 crc kubenswrapper[4853]: I1122 07:52:47.432782 4853 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d8982e8e-d6aa-4588-873e-a1853d2b1ff4-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 22 07:52:47 crc kubenswrapper[4853]: I1122 07:52:47.432813 4853 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d8982e8e-d6aa-4588-873e-a1853d2b1ff4-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 22 07:52:47 crc kubenswrapper[4853]: I1122 07:52:47.432825 4853 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d8982e8e-d6aa-4588-873e-a1853d2b1ff4-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 22 07:52:47 crc kubenswrapper[4853]: I1122 07:52:47.432834 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bz6sn\" (UniqueName: \"kubernetes.io/projected/d8982e8e-d6aa-4588-873e-a1853d2b1ff4-kube-api-access-bz6sn\") on node \"crc\" DevicePath \"\"" Nov 22 07:52:47 crc kubenswrapper[4853]: I1122 07:52:47.433576 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d8982e8e-d6aa-4588-873e-a1853d2b1ff4-config" (OuterVolumeSpecName: "config") pod "d8982e8e-d6aa-4588-873e-a1853d2b1ff4" (UID: "d8982e8e-d6aa-4588-873e-a1853d2b1ff4"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:52:47 crc kubenswrapper[4853]: I1122 07:52:47.434229 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d8982e8e-d6aa-4588-873e-a1853d2b1ff4-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d8982e8e-d6aa-4588-873e-a1853d2b1ff4" (UID: "d8982e8e-d6aa-4588-873e-a1853d2b1ff4"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:52:47 crc kubenswrapper[4853]: I1122 07:52:47.534782 4853 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d8982e8e-d6aa-4588-873e-a1853d2b1ff4-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 22 07:52:47 crc kubenswrapper[4853]: I1122 07:52:47.534821 4853 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8982e8e-d6aa-4588-873e-a1853d2b1ff4-config\") on node \"crc\" DevicePath \"\"" Nov 22 07:52:47 crc kubenswrapper[4853]: I1122 07:52:47.945294 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6d99f6bc7f-97bjc" Nov 22 07:52:47 crc kubenswrapper[4853]: I1122 07:52:47.945275 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d99f6bc7f-97bjc" event={"ID":"d8982e8e-d6aa-4588-873e-a1853d2b1ff4","Type":"ContainerDied","Data":"9299d064e9a29309a1f502edd581f553ac2be79e583d99c8cfa9f30877d096c7"} Nov 22 07:52:47 crc kubenswrapper[4853]: I1122 07:52:47.945712 4853 scope.go:117] "RemoveContainer" containerID="0e14c3c05834d6313835abc55ebde8795a38b5a094dfb8b1553c60fdb5555ad0" Nov 22 07:52:47 crc kubenswrapper[4853]: I1122 07:52:47.947765 4853 generic.go:334] "Generic (PLEG): container finished" podID="c6e17800-8b9e-4fdd-9b49-a2b48b8cb5a2" containerID="db9d14ba1ba1d1fe7aa031aa5ca1c6c64874ffbc5353d13fae7e027c7c5b62d8" exitCode=0 Nov 22 07:52:47 crc kubenswrapper[4853]: I1122 07:52:47.947798 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5596c69fcc-jz5vh" event={"ID":"c6e17800-8b9e-4fdd-9b49-a2b48b8cb5a2","Type":"ContainerDied","Data":"db9d14ba1ba1d1fe7aa031aa5ca1c6c64874ffbc5353d13fae7e027c7c5b62d8"} Nov 22 07:52:47 crc kubenswrapper[4853]: I1122 07:52:47.947823 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5596c69fcc-jz5vh" event={"ID":"c6e17800-8b9e-4fdd-9b49-a2b48b8cb5a2","Type":"ContainerStarted","Data":"f20dbd5737243b2eb42c4dcc729f513e25407ab7a5fd7326ef0e65f50ada34bf"} Nov 22 07:52:47 crc kubenswrapper[4853]: I1122 07:52:47.992841 4853 scope.go:117] "RemoveContainer" containerID="58684fe89d5563d5a54db3adaeff006980bae1933d4f59db5002019d9431936f" Nov 22 07:52:48 crc kubenswrapper[4853]: I1122 07:52:48.014165 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6d99f6bc7f-97bjc"] Nov 22 07:52:48 crc kubenswrapper[4853]: I1122 07:52:48.023313 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6d99f6bc7f-97bjc"] Nov 22 07:52:48 crc kubenswrapper[4853]: I1122 07:52:48.980134 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5596c69fcc-jz5vh" event={"ID":"c6e17800-8b9e-4fdd-9b49-a2b48b8cb5a2","Type":"ContainerStarted","Data":"83f7b4b912b00cd9dd5a00f54d71e19fc4f9d776fe74dec29a7b550222331852"} Nov 22 07:52:48 crc kubenswrapper[4853]: I1122 07:52:48.980642 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/dnsmasq-dns-5596c69fcc-jz5vh" Nov 22 07:52:49 crc kubenswrapper[4853]: I1122 07:52:49.015593 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5596c69fcc-jz5vh" podStartSLOduration=4.015574472 podStartE2EDuration="4.015574472s" podCreationTimestamp="2025-11-22 07:52:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:52:49.005611014 +0000 UTC m=+2567.846233640" watchObservedRunningTime="2025-11-22 07:52:49.015574472 +0000 UTC m=+2567.856197098" Nov 22 07:52:49 crc kubenswrapper[4853]: I1122 07:52:49.773723 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d8982e8e-d6aa-4588-873e-a1853d2b1ff4" path="/var/lib/kubelet/pods/d8982e8e-d6aa-4588-873e-a1853d2b1ff4/volumes" Nov 22 07:52:51 crc kubenswrapper[4853]: I1122 07:52:51.010290 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-sbcxc" event={"ID":"713a48af-8f99-42ce-ba64-25dd0645ef66","Type":"ContainerStarted","Data":"8cf21df1c11c16275c31d8ffdfad399208afd7429ea1d259e91c3b1bbc70cb0d"} Nov 22 07:52:51 crc kubenswrapper[4853]: I1122 07:52:51.035513 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-sync-sbcxc" podStartSLOduration=2.075309452 podStartE2EDuration="51.035488269s" podCreationTimestamp="2025-11-22 07:52:00 +0000 UTC" firstStartedPulling="2025-11-22 07:52:01.01300328 +0000 UTC m=+2519.853625906" lastFinishedPulling="2025-11-22 07:52:49.973182097 +0000 UTC m=+2568.813804723" observedRunningTime="2025-11-22 07:52:51.02773407 +0000 UTC m=+2569.868356696" watchObservedRunningTime="2025-11-22 07:52:51.035488269 +0000 UTC m=+2569.876110895" Nov 22 07:52:51 crc kubenswrapper[4853]: I1122 07:52:51.767902 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Nov 22 07:52:53 crc kubenswrapper[4853]: I1122 07:52:53.035758 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"58a7dcf9-4712-4ffe-90d1-ea827dc02982","Type":"ContainerStarted","Data":"2ba3c6e6d3f2f9e73e1bf4340dd1dd9ce1ae870dfb4e51a567cc99925348540c"} Nov 22 07:52:53 crc kubenswrapper[4853]: I1122 07:52:53.070828 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.494351012 podStartE2EDuration="46.070788729s" podCreationTimestamp="2025-11-22 07:52:07 +0000 UTC" firstStartedPulling="2025-11-22 07:52:08.752955835 +0000 UTC m=+2527.593578461" lastFinishedPulling="2025-11-22 07:52:52.329393552 +0000 UTC m=+2571.170016178" observedRunningTime="2025-11-22 07:52:53.062193388 +0000 UTC m=+2571.902816014" watchObservedRunningTime="2025-11-22 07:52:53.070788729 +0000 UTC m=+2571.911411355" Nov 22 07:52:54 crc kubenswrapper[4853]: I1122 07:52:54.050396 4853 generic.go:334] "Generic (PLEG): container finished" podID="713a48af-8f99-42ce-ba64-25dd0645ef66" containerID="8cf21df1c11c16275c31d8ffdfad399208afd7429ea1d259e91c3b1bbc70cb0d" exitCode=0 Nov 22 07:52:54 crc kubenswrapper[4853]: I1122 07:52:54.050483 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-sbcxc" event={"ID":"713a48af-8f99-42ce-ba64-25dd0645ef66","Type":"ContainerDied","Data":"8cf21df1c11c16275c31d8ffdfad399208afd7429ea1d259e91c3b1bbc70cb0d"} Nov 22 07:52:55 crc kubenswrapper[4853]: I1122 07:52:55.556508 4853 util.go:48] "No ready sandbox for pod can be found. 
Nov 22 07:52:55 crc kubenswrapper[4853]: I1122 07:52:55.661510 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mb9kr\" (UniqueName: \"kubernetes.io/projected/713a48af-8f99-42ce-ba64-25dd0645ef66-kube-api-access-mb9kr\") pod \"713a48af-8f99-42ce-ba64-25dd0645ef66\" (UID: \"713a48af-8f99-42ce-ba64-25dd0645ef66\") "
Nov 22 07:52:55 crc kubenswrapper[4853]: I1122 07:52:55.661561 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/713a48af-8f99-42ce-ba64-25dd0645ef66-config-data\") pod \"713a48af-8f99-42ce-ba64-25dd0645ef66\" (UID: \"713a48af-8f99-42ce-ba64-25dd0645ef66\") "
Nov 22 07:52:55 crc kubenswrapper[4853]: I1122 07:52:55.661708 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/713a48af-8f99-42ce-ba64-25dd0645ef66-combined-ca-bundle\") pod \"713a48af-8f99-42ce-ba64-25dd0645ef66\" (UID: \"713a48af-8f99-42ce-ba64-25dd0645ef66\") "
Nov 22 07:52:55 crc kubenswrapper[4853]: I1122 07:52:55.674121 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/713a48af-8f99-42ce-ba64-25dd0645ef66-kube-api-access-mb9kr" (OuterVolumeSpecName: "kube-api-access-mb9kr") pod "713a48af-8f99-42ce-ba64-25dd0645ef66" (UID: "713a48af-8f99-42ce-ba64-25dd0645ef66"). InnerVolumeSpecName "kube-api-access-mb9kr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 22 07:52:55 crc kubenswrapper[4853]: I1122 07:52:55.701180 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/713a48af-8f99-42ce-ba64-25dd0645ef66-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "713a48af-8f99-42ce-ba64-25dd0645ef66" (UID: "713a48af-8f99-42ce-ba64-25dd0645ef66"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 22 07:52:55 crc kubenswrapper[4853]: I1122 07:52:55.763253 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/713a48af-8f99-42ce-ba64-25dd0645ef66-config-data" (OuterVolumeSpecName: "config-data") pod "713a48af-8f99-42ce-ba64-25dd0645ef66" (UID: "713a48af-8f99-42ce-ba64-25dd0645ef66"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 22 07:52:55 crc kubenswrapper[4853]: I1122 07:52:55.765599 4853 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/713a48af-8f99-42ce-ba64-25dd0645ef66-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 22 07:52:55 crc kubenswrapper[4853]: I1122 07:52:55.766035 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mb9kr\" (UniqueName: \"kubernetes.io/projected/713a48af-8f99-42ce-ba64-25dd0645ef66-kube-api-access-mb9kr\") on node \"crc\" DevicePath \"\""
Nov 22 07:52:55 crc kubenswrapper[4853]: I1122 07:52:55.766055 4853 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/713a48af-8f99-42ce-ba64-25dd0645ef66-config-data\") on node \"crc\" DevicePath \"\""
Nov 22 07:52:56 crc kubenswrapper[4853]: I1122 07:52:56.075254 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-sbcxc" event={"ID":"713a48af-8f99-42ce-ba64-25dd0645ef66","Type":"ContainerDied","Data":"3c2a6ab62ddc2dd17a2d0feda0f02a91b8ebaf232719c2c86cc17b9960a98ea2"}
Nov 22 07:52:56 crc kubenswrapper[4853]: I1122 07:52:56.075293 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3c2a6ab62ddc2dd17a2d0feda0f02a91b8ebaf232719c2c86cc17b9960a98ea2"
Nov 22 07:52:56 crc kubenswrapper[4853]: I1122 07:52:56.075799 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-sbcxc"
Nov 22 07:52:56 crc kubenswrapper[4853]: I1122 07:52:56.410091 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5596c69fcc-jz5vh"
Nov 22 07:52:56 crc kubenswrapper[4853]: I1122 07:52:56.481473 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-594cb89c79-tmrtc"]
Nov 22 07:52:56 crc kubenswrapper[4853]: I1122 07:52:56.481724 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-594cb89c79-tmrtc" podUID="726a4e37-efbe-463e-b9a6-5fd93a1f0dc2" containerName="dnsmasq-dns" containerID="cri-o://13155b2038e6ccf6690a3e9dbecef5d0a44f01103e135d001a89d36b10a89d21" gracePeriod=10
Nov 22 07:52:57 crc kubenswrapper[4853]: I1122 07:52:57.238367 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-697c44f7b5-9vpfm"]
Nov 22 07:52:57 crc kubenswrapper[4853]: E1122 07:52:57.238928 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="713a48af-8f99-42ce-ba64-25dd0645ef66" containerName="heat-db-sync"
Nov 22 07:52:57 crc kubenswrapper[4853]: I1122 07:52:57.238945 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="713a48af-8f99-42ce-ba64-25dd0645ef66" containerName="heat-db-sync"
Nov 22 07:52:57 crc kubenswrapper[4853]: E1122 07:52:57.238956 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8982e8e-d6aa-4588-873e-a1853d2b1ff4" containerName="init"
Nov 22 07:52:57 crc kubenswrapper[4853]: I1122 07:52:57.238962 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8982e8e-d6aa-4588-873e-a1853d2b1ff4" containerName="init"
Nov 22 07:52:57 crc kubenswrapper[4853]: E1122 07:52:57.238993 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8982e8e-d6aa-4588-873e-a1853d2b1ff4" containerName="dnsmasq-dns"
Nov 22 07:52:57 crc kubenswrapper[4853]: I1122 07:52:57.239000 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8982e8e-d6aa-4588-873e-a1853d2b1ff4" containerName="dnsmasq-dns"
podUID="d8982e8e-d6aa-4588-873e-a1853d2b1ff4" containerName="dnsmasq-dns" Nov 22 07:52:57 crc kubenswrapper[4853]: I1122 07:52:57.239262 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="713a48af-8f99-42ce-ba64-25dd0645ef66" containerName="heat-db-sync" Nov 22 07:52:57 crc kubenswrapper[4853]: I1122 07:52:57.239286 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8982e8e-d6aa-4588-873e-a1853d2b1ff4" containerName="dnsmasq-dns" Nov 22 07:52:57 crc kubenswrapper[4853]: I1122 07:52:57.240242 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-697c44f7b5-9vpfm" Nov 22 07:52:57 crc kubenswrapper[4853]: I1122 07:52:57.255447 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-697c44f7b5-9vpfm"] Nov 22 07:52:57 crc kubenswrapper[4853]: I1122 07:52:57.283107 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-6d654b9979-5pkjs"] Nov 22 07:52:57 crc kubenswrapper[4853]: I1122 07:52:57.285951 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-6d654b9979-5pkjs" Nov 22 07:52:57 crc kubenswrapper[4853]: I1122 07:52:57.307117 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cb5c6ce8-f8af-4ad3-a004-04c188ba6c92-internal-tls-certs\") pod \"heat-api-6d654b9979-5pkjs\" (UID: \"cb5c6ce8-f8af-4ad3-a004-04c188ba6c92\") " pod="openstack/heat-api-6d654b9979-5pkjs" Nov 22 07:52:57 crc kubenswrapper[4853]: I1122 07:52:57.307172 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cb5c6ce8-f8af-4ad3-a004-04c188ba6c92-config-data-custom\") pod \"heat-api-6d654b9979-5pkjs\" (UID: \"cb5c6ce8-f8af-4ad3-a004-04c188ba6c92\") " pod="openstack/heat-api-6d654b9979-5pkjs" Nov 22 07:52:57 crc kubenswrapper[4853]: I1122 07:52:57.307205 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f5b4c3b6-9c73-4976-b412-341704301db3-config-data\") pod \"heat-engine-697c44f7b5-9vpfm\" (UID: \"f5b4c3b6-9c73-4976-b412-341704301db3\") " pod="openstack/heat-engine-697c44f7b5-9vpfm" Nov 22 07:52:57 crc kubenswrapper[4853]: I1122 07:52:57.307257 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cb5c6ce8-f8af-4ad3-a004-04c188ba6c92-public-tls-certs\") pod \"heat-api-6d654b9979-5pkjs\" (UID: \"cb5c6ce8-f8af-4ad3-a004-04c188ba6c92\") " pod="openstack/heat-api-6d654b9979-5pkjs" Nov 22 07:52:57 crc kubenswrapper[4853]: I1122 07:52:57.307283 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-86227\" (UniqueName: \"kubernetes.io/projected/cb5c6ce8-f8af-4ad3-a004-04c188ba6c92-kube-api-access-86227\") pod \"heat-api-6d654b9979-5pkjs\" (UID: \"cb5c6ce8-f8af-4ad3-a004-04c188ba6c92\") " pod="openstack/heat-api-6d654b9979-5pkjs" Nov 22 07:52:57 crc kubenswrapper[4853]: I1122 07:52:57.307346 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5b4c3b6-9c73-4976-b412-341704301db3-combined-ca-bundle\") pod \"heat-engine-697c44f7b5-9vpfm\" (UID: 
\"f5b4c3b6-9c73-4976-b412-341704301db3\") " pod="openstack/heat-engine-697c44f7b5-9vpfm" Nov 22 07:52:57 crc kubenswrapper[4853]: I1122 07:52:57.307435 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cb5c6ce8-f8af-4ad3-a004-04c188ba6c92-config-data\") pod \"heat-api-6d654b9979-5pkjs\" (UID: \"cb5c6ce8-f8af-4ad3-a004-04c188ba6c92\") " pod="openstack/heat-api-6d654b9979-5pkjs" Nov 22 07:52:57 crc kubenswrapper[4853]: I1122 07:52:57.307463 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb5c6ce8-f8af-4ad3-a004-04c188ba6c92-combined-ca-bundle\") pod \"heat-api-6d654b9979-5pkjs\" (UID: \"cb5c6ce8-f8af-4ad3-a004-04c188ba6c92\") " pod="openstack/heat-api-6d654b9979-5pkjs" Nov 22 07:52:57 crc kubenswrapper[4853]: I1122 07:52:57.307533 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nwndd\" (UniqueName: \"kubernetes.io/projected/f5b4c3b6-9c73-4976-b412-341704301db3-kube-api-access-nwndd\") pod \"heat-engine-697c44f7b5-9vpfm\" (UID: \"f5b4c3b6-9c73-4976-b412-341704301db3\") " pod="openstack/heat-engine-697c44f7b5-9vpfm" Nov 22 07:52:57 crc kubenswrapper[4853]: I1122 07:52:57.307570 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f5b4c3b6-9c73-4976-b412-341704301db3-config-data-custom\") pod \"heat-engine-697c44f7b5-9vpfm\" (UID: \"f5b4c3b6-9c73-4976-b412-341704301db3\") " pod="openstack/heat-engine-697c44f7b5-9vpfm" Nov 22 07:52:57 crc kubenswrapper[4853]: I1122 07:52:57.352139 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-6d654b9979-5pkjs"] Nov 22 07:52:57 crc kubenswrapper[4853]: I1122 07:52:57.392842 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-c66cc79fb-w5kgp"] Nov 22 07:52:57 crc kubenswrapper[4853]: I1122 07:52:57.398062 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-c66cc79fb-w5kgp" Nov 22 07:52:57 crc kubenswrapper[4853]: I1122 07:52:57.414113 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-c66cc79fb-w5kgp"] Nov 22 07:52:57 crc kubenswrapper[4853]: I1122 07:52:57.416297 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nwndd\" (UniqueName: \"kubernetes.io/projected/f5b4c3b6-9c73-4976-b412-341704301db3-kube-api-access-nwndd\") pod \"heat-engine-697c44f7b5-9vpfm\" (UID: \"f5b4c3b6-9c73-4976-b412-341704301db3\") " pod="openstack/heat-engine-697c44f7b5-9vpfm" Nov 22 07:52:57 crc kubenswrapper[4853]: I1122 07:52:57.416408 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f5b4c3b6-9c73-4976-b412-341704301db3-config-data-custom\") pod \"heat-engine-697c44f7b5-9vpfm\" (UID: \"f5b4c3b6-9c73-4976-b412-341704301db3\") " pod="openstack/heat-engine-697c44f7b5-9vpfm" Nov 22 07:52:57 crc kubenswrapper[4853]: I1122 07:52:57.416603 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0bdc440c-227d-43dd-9e9d-500ba10fc239-config-data\") pod \"heat-cfnapi-c66cc79fb-w5kgp\" (UID: \"0bdc440c-227d-43dd-9e9d-500ba10fc239\") " pod="openstack/heat-cfnapi-c66cc79fb-w5kgp" Nov 22 07:52:57 crc kubenswrapper[4853]: I1122 07:52:57.417041 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0bdc440c-227d-43dd-9e9d-500ba10fc239-combined-ca-bundle\") pod \"heat-cfnapi-c66cc79fb-w5kgp\" (UID: \"0bdc440c-227d-43dd-9e9d-500ba10fc239\") " pod="openstack/heat-cfnapi-c66cc79fb-w5kgp" Nov 22 07:52:57 crc kubenswrapper[4853]: I1122 07:52:57.417122 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0bdc440c-227d-43dd-9e9d-500ba10fc239-internal-tls-certs\") pod \"heat-cfnapi-c66cc79fb-w5kgp\" (UID: \"0bdc440c-227d-43dd-9e9d-500ba10fc239\") " pod="openstack/heat-cfnapi-c66cc79fb-w5kgp" Nov 22 07:52:57 crc kubenswrapper[4853]: I1122 07:52:57.417377 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cb5c6ce8-f8af-4ad3-a004-04c188ba6c92-internal-tls-certs\") pod \"heat-api-6d654b9979-5pkjs\" (UID: \"cb5c6ce8-f8af-4ad3-a004-04c188ba6c92\") " pod="openstack/heat-api-6d654b9979-5pkjs" Nov 22 07:52:57 crc kubenswrapper[4853]: I1122 07:52:57.417477 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cb5c6ce8-f8af-4ad3-a004-04c188ba6c92-config-data-custom\") pod \"heat-api-6d654b9979-5pkjs\" (UID: \"cb5c6ce8-f8af-4ad3-a004-04c188ba6c92\") " pod="openstack/heat-api-6d654b9979-5pkjs" Nov 22 07:52:57 crc kubenswrapper[4853]: I1122 07:52:57.417533 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f5b4c3b6-9c73-4976-b412-341704301db3-config-data\") pod \"heat-engine-697c44f7b5-9vpfm\" (UID: \"f5b4c3b6-9c73-4976-b412-341704301db3\") " pod="openstack/heat-engine-697c44f7b5-9vpfm" Nov 22 07:52:57 crc kubenswrapper[4853]: I1122 07:52:57.417577 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0bdc440c-227d-43dd-9e9d-500ba10fc239-public-tls-certs\") pod \"heat-cfnapi-c66cc79fb-w5kgp\" (UID: \"0bdc440c-227d-43dd-9e9d-500ba10fc239\") " pod="openstack/heat-cfnapi-c66cc79fb-w5kgp" Nov 22 07:52:57 crc kubenswrapper[4853]: I1122 07:52:57.417699 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cb5c6ce8-f8af-4ad3-a004-04c188ba6c92-public-tls-certs\") pod \"heat-api-6d654b9979-5pkjs\" (UID: \"cb5c6ce8-f8af-4ad3-a004-04c188ba6c92\") " pod="openstack/heat-api-6d654b9979-5pkjs" Nov 22 07:52:57 crc kubenswrapper[4853]: I1122 07:52:57.417741 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-86227\" (UniqueName: \"kubernetes.io/projected/cb5c6ce8-f8af-4ad3-a004-04c188ba6c92-kube-api-access-86227\") pod \"heat-api-6d654b9979-5pkjs\" (UID: \"cb5c6ce8-f8af-4ad3-a004-04c188ba6c92\") " pod="openstack/heat-api-6d654b9979-5pkjs" Nov 22 07:52:57 crc kubenswrapper[4853]: I1122 07:52:57.418133 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5b4c3b6-9c73-4976-b412-341704301db3-combined-ca-bundle\") pod \"heat-engine-697c44f7b5-9vpfm\" (UID: \"f5b4c3b6-9c73-4976-b412-341704301db3\") " pod="openstack/heat-engine-697c44f7b5-9vpfm" Nov 22 07:52:57 crc kubenswrapper[4853]: I1122 07:52:57.418240 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-chs2m\" (UniqueName: \"kubernetes.io/projected/0bdc440c-227d-43dd-9e9d-500ba10fc239-kube-api-access-chs2m\") pod \"heat-cfnapi-c66cc79fb-w5kgp\" (UID: \"0bdc440c-227d-43dd-9e9d-500ba10fc239\") " pod="openstack/heat-cfnapi-c66cc79fb-w5kgp" Nov 22 07:52:57 crc kubenswrapper[4853]: I1122 07:52:57.418287 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0bdc440c-227d-43dd-9e9d-500ba10fc239-config-data-custom\") pod \"heat-cfnapi-c66cc79fb-w5kgp\" (UID: \"0bdc440c-227d-43dd-9e9d-500ba10fc239\") " pod="openstack/heat-cfnapi-c66cc79fb-w5kgp" Nov 22 07:52:57 crc kubenswrapper[4853]: I1122 07:52:57.418830 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cb5c6ce8-f8af-4ad3-a004-04c188ba6c92-config-data\") pod \"heat-api-6d654b9979-5pkjs\" (UID: \"cb5c6ce8-f8af-4ad3-a004-04c188ba6c92\") " pod="openstack/heat-api-6d654b9979-5pkjs" Nov 22 07:52:57 crc kubenswrapper[4853]: I1122 07:52:57.418929 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb5c6ce8-f8af-4ad3-a004-04c188ba6c92-combined-ca-bundle\") pod \"heat-api-6d654b9979-5pkjs\" (UID: \"cb5c6ce8-f8af-4ad3-a004-04c188ba6c92\") " pod="openstack/heat-api-6d654b9979-5pkjs" Nov 22 07:52:57 crc kubenswrapper[4853]: I1122 07:52:57.428001 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f5b4c3b6-9c73-4976-b412-341704301db3-config-data-custom\") pod \"heat-engine-697c44f7b5-9vpfm\" (UID: \"f5b4c3b6-9c73-4976-b412-341704301db3\") " pod="openstack/heat-engine-697c44f7b5-9vpfm" Nov 22 07:52:57 crc kubenswrapper[4853]: I1122 07:52:57.431371 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cb5c6ce8-f8af-4ad3-a004-04c188ba6c92-internal-tls-certs\") pod \"heat-api-6d654b9979-5pkjs\" (UID: \"cb5c6ce8-f8af-4ad3-a004-04c188ba6c92\") " pod="openstack/heat-api-6d654b9979-5pkjs" Nov 22 07:52:57 crc kubenswrapper[4853]: I1122 07:52:57.442501 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-86227\" (UniqueName: \"kubernetes.io/projected/cb5c6ce8-f8af-4ad3-a004-04c188ba6c92-kube-api-access-86227\") pod \"heat-api-6d654b9979-5pkjs\" (UID: \"cb5c6ce8-f8af-4ad3-a004-04c188ba6c92\") " pod="openstack/heat-api-6d654b9979-5pkjs" Nov 22 07:52:57 crc kubenswrapper[4853]: I1122 07:52:57.452770 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nwndd\" (UniqueName: \"kubernetes.io/projected/f5b4c3b6-9c73-4976-b412-341704301db3-kube-api-access-nwndd\") pod \"heat-engine-697c44f7b5-9vpfm\" (UID: \"f5b4c3b6-9c73-4976-b412-341704301db3\") " pod="openstack/heat-engine-697c44f7b5-9vpfm" Nov 22 07:52:57 crc kubenswrapper[4853]: I1122 07:52:57.506302 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cb5c6ce8-f8af-4ad3-a004-04c188ba6c92-public-tls-certs\") pod \"heat-api-6d654b9979-5pkjs\" (UID: \"cb5c6ce8-f8af-4ad3-a004-04c188ba6c92\") " pod="openstack/heat-api-6d654b9979-5pkjs" Nov 22 07:52:57 crc kubenswrapper[4853]: I1122 07:52:57.506872 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5b4c3b6-9c73-4976-b412-341704301db3-combined-ca-bundle\") pod \"heat-engine-697c44f7b5-9vpfm\" (UID: \"f5b4c3b6-9c73-4976-b412-341704301db3\") " pod="openstack/heat-engine-697c44f7b5-9vpfm" Nov 22 07:52:57 crc kubenswrapper[4853]: I1122 07:52:57.510895 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb5c6ce8-f8af-4ad3-a004-04c188ba6c92-combined-ca-bundle\") pod \"heat-api-6d654b9979-5pkjs\" (UID: \"cb5c6ce8-f8af-4ad3-a004-04c188ba6c92\") " pod="openstack/heat-api-6d654b9979-5pkjs" Nov 22 07:52:57 crc kubenswrapper[4853]: I1122 07:52:57.511267 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cb5c6ce8-f8af-4ad3-a004-04c188ba6c92-config-data-custom\") pod \"heat-api-6d654b9979-5pkjs\" (UID: \"cb5c6ce8-f8af-4ad3-a004-04c188ba6c92\") " pod="openstack/heat-api-6d654b9979-5pkjs" Nov 22 07:52:57 crc kubenswrapper[4853]: I1122 07:52:57.511482 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cb5c6ce8-f8af-4ad3-a004-04c188ba6c92-config-data\") pod \"heat-api-6d654b9979-5pkjs\" (UID: \"cb5c6ce8-f8af-4ad3-a004-04c188ba6c92\") " pod="openstack/heat-api-6d654b9979-5pkjs" Nov 22 07:52:57 crc kubenswrapper[4853]: I1122 07:52:57.511979 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f5b4c3b6-9c73-4976-b412-341704301db3-config-data\") pod \"heat-engine-697c44f7b5-9vpfm\" (UID: \"f5b4c3b6-9c73-4976-b412-341704301db3\") " pod="openstack/heat-engine-697c44f7b5-9vpfm" Nov 22 07:52:57 crc kubenswrapper[4853]: I1122 07:52:57.527737 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0bdc440c-227d-43dd-9e9d-500ba10fc239-config-data\") pod 
\"heat-cfnapi-c66cc79fb-w5kgp\" (UID: \"0bdc440c-227d-43dd-9e9d-500ba10fc239\") " pod="openstack/heat-cfnapi-c66cc79fb-w5kgp" Nov 22 07:52:57 crc kubenswrapper[4853]: I1122 07:52:57.527797 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0bdc440c-227d-43dd-9e9d-500ba10fc239-combined-ca-bundle\") pod \"heat-cfnapi-c66cc79fb-w5kgp\" (UID: \"0bdc440c-227d-43dd-9e9d-500ba10fc239\") " pod="openstack/heat-cfnapi-c66cc79fb-w5kgp" Nov 22 07:52:57 crc kubenswrapper[4853]: I1122 07:52:57.528242 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0bdc440c-227d-43dd-9e9d-500ba10fc239-internal-tls-certs\") pod \"heat-cfnapi-c66cc79fb-w5kgp\" (UID: \"0bdc440c-227d-43dd-9e9d-500ba10fc239\") " pod="openstack/heat-cfnapi-c66cc79fb-w5kgp" Nov 22 07:52:57 crc kubenswrapper[4853]: I1122 07:52:57.528308 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0bdc440c-227d-43dd-9e9d-500ba10fc239-public-tls-certs\") pod \"heat-cfnapi-c66cc79fb-w5kgp\" (UID: \"0bdc440c-227d-43dd-9e9d-500ba10fc239\") " pod="openstack/heat-cfnapi-c66cc79fb-w5kgp" Nov 22 07:52:57 crc kubenswrapper[4853]: I1122 07:52:57.528385 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0bdc440c-227d-43dd-9e9d-500ba10fc239-config-data-custom\") pod \"heat-cfnapi-c66cc79fb-w5kgp\" (UID: \"0bdc440c-227d-43dd-9e9d-500ba10fc239\") " pod="openstack/heat-cfnapi-c66cc79fb-w5kgp" Nov 22 07:52:57 crc kubenswrapper[4853]: I1122 07:52:57.528403 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-chs2m\" (UniqueName: \"kubernetes.io/projected/0bdc440c-227d-43dd-9e9d-500ba10fc239-kube-api-access-chs2m\") pod \"heat-cfnapi-c66cc79fb-w5kgp\" (UID: \"0bdc440c-227d-43dd-9e9d-500ba10fc239\") " pod="openstack/heat-cfnapi-c66cc79fb-w5kgp" Nov 22 07:52:57 crc kubenswrapper[4853]: I1122 07:52:57.532521 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0bdc440c-227d-43dd-9e9d-500ba10fc239-combined-ca-bundle\") pod \"heat-cfnapi-c66cc79fb-w5kgp\" (UID: \"0bdc440c-227d-43dd-9e9d-500ba10fc239\") " pod="openstack/heat-cfnapi-c66cc79fb-w5kgp" Nov 22 07:52:57 crc kubenswrapper[4853]: I1122 07:52:57.533885 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0bdc440c-227d-43dd-9e9d-500ba10fc239-public-tls-certs\") pod \"heat-cfnapi-c66cc79fb-w5kgp\" (UID: \"0bdc440c-227d-43dd-9e9d-500ba10fc239\") " pod="openstack/heat-cfnapi-c66cc79fb-w5kgp" Nov 22 07:52:57 crc kubenswrapper[4853]: I1122 07:52:57.534403 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0bdc440c-227d-43dd-9e9d-500ba10fc239-config-data\") pod \"heat-cfnapi-c66cc79fb-w5kgp\" (UID: \"0bdc440c-227d-43dd-9e9d-500ba10fc239\") " pod="openstack/heat-cfnapi-c66cc79fb-w5kgp" Nov 22 07:52:57 crc kubenswrapper[4853]: I1122 07:52:57.534482 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0bdc440c-227d-43dd-9e9d-500ba10fc239-config-data-custom\") pod \"heat-cfnapi-c66cc79fb-w5kgp\" (UID: 
\"0bdc440c-227d-43dd-9e9d-500ba10fc239\") " pod="openstack/heat-cfnapi-c66cc79fb-w5kgp" Nov 22 07:52:57 crc kubenswrapper[4853]: I1122 07:52:57.535141 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0bdc440c-227d-43dd-9e9d-500ba10fc239-internal-tls-certs\") pod \"heat-cfnapi-c66cc79fb-w5kgp\" (UID: \"0bdc440c-227d-43dd-9e9d-500ba10fc239\") " pod="openstack/heat-cfnapi-c66cc79fb-w5kgp" Nov 22 07:52:57 crc kubenswrapper[4853]: I1122 07:52:57.545852 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-chs2m\" (UniqueName: \"kubernetes.io/projected/0bdc440c-227d-43dd-9e9d-500ba10fc239-kube-api-access-chs2m\") pod \"heat-cfnapi-c66cc79fb-w5kgp\" (UID: \"0bdc440c-227d-43dd-9e9d-500ba10fc239\") " pod="openstack/heat-cfnapi-c66cc79fb-w5kgp" Nov 22 07:52:57 crc kubenswrapper[4853]: I1122 07:52:57.567678 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-697c44f7b5-9vpfm" Nov 22 07:52:57 crc kubenswrapper[4853]: I1122 07:52:57.609457 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-6d654b9979-5pkjs" Nov 22 07:52:57 crc kubenswrapper[4853]: I1122 07:52:57.732636 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-c66cc79fb-w5kgp" Nov 22 07:52:58 crc kubenswrapper[4853]: I1122 07:52:58.108387 4853 generic.go:334] "Generic (PLEG): container finished" podID="726a4e37-efbe-463e-b9a6-5fd93a1f0dc2" containerID="13155b2038e6ccf6690a3e9dbecef5d0a44f01103e135d001a89d36b10a89d21" exitCode=0 Nov 22 07:52:58 crc kubenswrapper[4853]: I1122 07:52:58.108474 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-594cb89c79-tmrtc" event={"ID":"726a4e37-efbe-463e-b9a6-5fd93a1f0dc2","Type":"ContainerDied","Data":"13155b2038e6ccf6690a3e9dbecef5d0a44f01103e135d001a89d36b10a89d21"} Nov 22 07:52:58 crc kubenswrapper[4853]: W1122 07:52:58.201483 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcb5c6ce8_f8af_4ad3_a004_04c188ba6c92.slice/crio-8c3052828f680ecc62bb2e12784a7f6146dfb8684960ed237018d282e59cf11f WatchSource:0}: Error finding container 8c3052828f680ecc62bb2e12784a7f6146dfb8684960ed237018d282e59cf11f: Status 404 returned error can't find the container with id 8c3052828f680ecc62bb2e12784a7f6146dfb8684960ed237018d282e59cf11f Nov 22 07:52:58 crc kubenswrapper[4853]: I1122 07:52:58.204275 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-6d654b9979-5pkjs"] Nov 22 07:52:58 crc kubenswrapper[4853]: I1122 07:52:58.299305 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-697c44f7b5-9vpfm"] Nov 22 07:52:58 crc kubenswrapper[4853]: I1122 07:52:58.431336 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-c66cc79fb-w5kgp"] Nov 22 07:52:58 crc kubenswrapper[4853]: W1122 07:52:58.451288 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0bdc440c_227d_43dd_9e9d_500ba10fc239.slice/crio-e29f71215baea14c8fc0b86b1c455c327c66a76b39659d0e20be2530e0b33143 WatchSource:0}: Error finding container e29f71215baea14c8fc0b86b1c455c327c66a76b39659d0e20be2530e0b33143: Status 404 returned error can't find the container with id e29f71215baea14c8fc0b86b1c455c327c66a76b39659d0e20be2530e0b33143 Nov 
22 07:52:58 crc kubenswrapper[4853]: I1122 07:52:58.949902 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-594cb89c79-tmrtc" Nov 22 07:52:58 crc kubenswrapper[4853]: I1122 07:52:58.972922 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/726a4e37-efbe-463e-b9a6-5fd93a1f0dc2-openstack-edpm-ipam\") pod \"726a4e37-efbe-463e-b9a6-5fd93a1f0dc2\" (UID: \"726a4e37-efbe-463e-b9a6-5fd93a1f0dc2\") " Nov 22 07:52:58 crc kubenswrapper[4853]: I1122 07:52:58.973069 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/726a4e37-efbe-463e-b9a6-5fd93a1f0dc2-config\") pod \"726a4e37-efbe-463e-b9a6-5fd93a1f0dc2\" (UID: \"726a4e37-efbe-463e-b9a6-5fd93a1f0dc2\") " Nov 22 07:52:58 crc kubenswrapper[4853]: I1122 07:52:58.973107 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/726a4e37-efbe-463e-b9a6-5fd93a1f0dc2-dns-swift-storage-0\") pod \"726a4e37-efbe-463e-b9a6-5fd93a1f0dc2\" (UID: \"726a4e37-efbe-463e-b9a6-5fd93a1f0dc2\") " Nov 22 07:52:58 crc kubenswrapper[4853]: I1122 07:52:58.973145 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/726a4e37-efbe-463e-b9a6-5fd93a1f0dc2-dns-svc\") pod \"726a4e37-efbe-463e-b9a6-5fd93a1f0dc2\" (UID: \"726a4e37-efbe-463e-b9a6-5fd93a1f0dc2\") " Nov 22 07:52:58 crc kubenswrapper[4853]: I1122 07:52:58.973227 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/726a4e37-efbe-463e-b9a6-5fd93a1f0dc2-ovsdbserver-nb\") pod \"726a4e37-efbe-463e-b9a6-5fd93a1f0dc2\" (UID: \"726a4e37-efbe-463e-b9a6-5fd93a1f0dc2\") " Nov 22 07:52:58 crc kubenswrapper[4853]: I1122 07:52:58.973257 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8bns6\" (UniqueName: \"kubernetes.io/projected/726a4e37-efbe-463e-b9a6-5fd93a1f0dc2-kube-api-access-8bns6\") pod \"726a4e37-efbe-463e-b9a6-5fd93a1f0dc2\" (UID: \"726a4e37-efbe-463e-b9a6-5fd93a1f0dc2\") " Nov 22 07:52:58 crc kubenswrapper[4853]: I1122 07:52:58.973294 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/726a4e37-efbe-463e-b9a6-5fd93a1f0dc2-ovsdbserver-sb\") pod \"726a4e37-efbe-463e-b9a6-5fd93a1f0dc2\" (UID: \"726a4e37-efbe-463e-b9a6-5fd93a1f0dc2\") " Nov 22 07:52:59 crc kubenswrapper[4853]: I1122 07:52:59.028009 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/726a4e37-efbe-463e-b9a6-5fd93a1f0dc2-kube-api-access-8bns6" (OuterVolumeSpecName: "kube-api-access-8bns6") pod "726a4e37-efbe-463e-b9a6-5fd93a1f0dc2" (UID: "726a4e37-efbe-463e-b9a6-5fd93a1f0dc2"). InnerVolumeSpecName "kube-api-access-8bns6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:52:59 crc kubenswrapper[4853]: I1122 07:52:59.038845 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/726a4e37-efbe-463e-b9a6-5fd93a1f0dc2-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "726a4e37-efbe-463e-b9a6-5fd93a1f0dc2" (UID: "726a4e37-efbe-463e-b9a6-5fd93a1f0dc2"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:52:59 crc kubenswrapper[4853]: I1122 07:52:59.077486 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8bns6\" (UniqueName: \"kubernetes.io/projected/726a4e37-efbe-463e-b9a6-5fd93a1f0dc2-kube-api-access-8bns6\") on node \"crc\" DevicePath \"\"" Nov 22 07:52:59 crc kubenswrapper[4853]: I1122 07:52:59.077515 4853 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/726a4e37-efbe-463e-b9a6-5fd93a1f0dc2-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 22 07:52:59 crc kubenswrapper[4853]: I1122 07:52:59.090394 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/726a4e37-efbe-463e-b9a6-5fd93a1f0dc2-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "726a4e37-efbe-463e-b9a6-5fd93a1f0dc2" (UID: "726a4e37-efbe-463e-b9a6-5fd93a1f0dc2"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:52:59 crc kubenswrapper[4853]: I1122 07:52:59.095531 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/726a4e37-efbe-463e-b9a6-5fd93a1f0dc2-config" (OuterVolumeSpecName: "config") pod "726a4e37-efbe-463e-b9a6-5fd93a1f0dc2" (UID: "726a4e37-efbe-463e-b9a6-5fd93a1f0dc2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:52:59 crc kubenswrapper[4853]: I1122 07:52:59.126905 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/726a4e37-efbe-463e-b9a6-5fd93a1f0dc2-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "726a4e37-efbe-463e-b9a6-5fd93a1f0dc2" (UID: "726a4e37-efbe-463e-b9a6-5fd93a1f0dc2"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:52:59 crc kubenswrapper[4853]: I1122 07:52:59.138056 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-594cb89c79-tmrtc" event={"ID":"726a4e37-efbe-463e-b9a6-5fd93a1f0dc2","Type":"ContainerDied","Data":"cf2ebf004dcacc3a79f1038c004e82404d365cdb7bf3d0953a24983dd64633af"} Nov 22 07:52:59 crc kubenswrapper[4853]: I1122 07:52:59.138182 4853 scope.go:117] "RemoveContainer" containerID="13155b2038e6ccf6690a3e9dbecef5d0a44f01103e135d001a89d36b10a89d21" Nov 22 07:52:59 crc kubenswrapper[4853]: I1122 07:52:59.138375 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-594cb89c79-tmrtc" Nov 22 07:52:59 crc kubenswrapper[4853]: I1122 07:52:59.143071 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-697c44f7b5-9vpfm" event={"ID":"f5b4c3b6-9c73-4976-b412-341704301db3","Type":"ContainerStarted","Data":"10058a51997dc3d8b46e09c0d38ed7e387cacd802147635b2f8210635d149bc0"} Nov 22 07:52:59 crc kubenswrapper[4853]: I1122 07:52:59.143127 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-697c44f7b5-9vpfm" event={"ID":"f5b4c3b6-9c73-4976-b412-341704301db3","Type":"ContainerStarted","Data":"5038a67c1c8c9480e8f4cfb5272b7fa49be8ec3dc680ab7f604329b01d89382a"} Nov 22 07:52:59 crc kubenswrapper[4853]: I1122 07:52:59.143949 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-697c44f7b5-9vpfm" Nov 22 07:52:59 crc kubenswrapper[4853]: I1122 07:52:59.145636 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6d654b9979-5pkjs" event={"ID":"cb5c6ce8-f8af-4ad3-a004-04c188ba6c92","Type":"ContainerStarted","Data":"8c3052828f680ecc62bb2e12784a7f6146dfb8684960ed237018d282e59cf11f"} Nov 22 07:52:59 crc kubenswrapper[4853]: I1122 07:52:59.147002 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-c66cc79fb-w5kgp" event={"ID":"0bdc440c-227d-43dd-9e9d-500ba10fc239","Type":"ContainerStarted","Data":"e29f71215baea14c8fc0b86b1c455c327c66a76b39659d0e20be2530e0b33143"} Nov 22 07:52:59 crc kubenswrapper[4853]: I1122 07:52:59.157920 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/726a4e37-efbe-463e-b9a6-5fd93a1f0dc2-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "726a4e37-efbe-463e-b9a6-5fd93a1f0dc2" (UID: "726a4e37-efbe-463e-b9a6-5fd93a1f0dc2"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:52:59 crc kubenswrapper[4853]: I1122 07:52:59.168648 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/726a4e37-efbe-463e-b9a6-5fd93a1f0dc2-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "726a4e37-efbe-463e-b9a6-5fd93a1f0dc2" (UID: "726a4e37-efbe-463e-b9a6-5fd93a1f0dc2"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:52:59 crc kubenswrapper[4853]: I1122 07:52:59.171893 4853 scope.go:117] "RemoveContainer" containerID="7a1f1a797a5efd47210c483d8f094077c609a992b407f25bf369b9a6df003e87" Nov 22 07:52:59 crc kubenswrapper[4853]: I1122 07:52:59.175796 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-697c44f7b5-9vpfm" podStartSLOduration=2.175775758 podStartE2EDuration="2.175775758s" podCreationTimestamp="2025-11-22 07:52:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:52:59.166864617 +0000 UTC m=+2578.007487243" watchObservedRunningTime="2025-11-22 07:52:59.175775758 +0000 UTC m=+2578.016398374" Nov 22 07:52:59 crc kubenswrapper[4853]: I1122 07:52:59.180249 4853 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/726a4e37-efbe-463e-b9a6-5fd93a1f0dc2-config\") on node \"crc\" DevicePath \"\"" Nov 22 07:52:59 crc kubenswrapper[4853]: I1122 07:52:59.180276 4853 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/726a4e37-efbe-463e-b9a6-5fd93a1f0dc2-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 22 07:52:59 crc kubenswrapper[4853]: I1122 07:52:59.180288 4853 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/726a4e37-efbe-463e-b9a6-5fd93a1f0dc2-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 22 07:52:59 crc kubenswrapper[4853]: I1122 07:52:59.180413 4853 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/726a4e37-efbe-463e-b9a6-5fd93a1f0dc2-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 22 07:52:59 crc kubenswrapper[4853]: I1122 07:52:59.180424 4853 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/726a4e37-efbe-463e-b9a6-5fd93a1f0dc2-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Nov 22 07:52:59 crc kubenswrapper[4853]: I1122 07:52:59.472812 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-594cb89c79-tmrtc"] Nov 22 07:52:59 crc kubenswrapper[4853]: I1122 07:52:59.488057 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-594cb89c79-tmrtc"] Nov 22 07:52:59 crc kubenswrapper[4853]: I1122 07:52:59.751419 4853 scope.go:117] "RemoveContainer" containerID="f1c0ac83a6a857bb9ac64bb048075d436003939992592a9a9b2d45101ef2a2de" Nov 22 07:52:59 crc kubenswrapper[4853]: E1122 07:52:59.752042 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" Nov 22 07:52:59 crc kubenswrapper[4853]: I1122 07:52:59.766514 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="726a4e37-efbe-463e-b9a6-5fd93a1f0dc2" path="/var/lib/kubelet/pods/726a4e37-efbe-463e-b9a6-5fd93a1f0dc2/volumes" Nov 22 07:53:03 crc kubenswrapper[4853]: I1122 07:53:03.210174 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6d654b9979-5pkjs" 
event={"ID":"cb5c6ce8-f8af-4ad3-a004-04c188ba6c92","Type":"ContainerStarted","Data":"64790835ff7c8b62c7db77a13951cd25fa0abc342ab6879b68fdf54bc69c2d89"} Nov 22 07:53:03 crc kubenswrapper[4853]: I1122 07:53:03.210714 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-6d654b9979-5pkjs" Nov 22 07:53:03 crc kubenswrapper[4853]: I1122 07:53:03.212157 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-c66cc79fb-w5kgp" event={"ID":"0bdc440c-227d-43dd-9e9d-500ba10fc239","Type":"ContainerStarted","Data":"0d2f9229ea9e8ee154e0f443167693bd9f82509a3eb0206eb8df6dffec7ba336"} Nov 22 07:53:03 crc kubenswrapper[4853]: I1122 07:53:03.212321 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-c66cc79fb-w5kgp" Nov 22 07:53:03 crc kubenswrapper[4853]: I1122 07:53:03.242164 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-6d654b9979-5pkjs" podStartSLOduration=2.981823334 podStartE2EDuration="6.242138695s" podCreationTimestamp="2025-11-22 07:52:57 +0000 UTC" firstStartedPulling="2025-11-22 07:52:58.207597078 +0000 UTC m=+2577.048219704" lastFinishedPulling="2025-11-22 07:53:01.467912439 +0000 UTC m=+2580.308535065" observedRunningTime="2025-11-22 07:53:03.229629978 +0000 UTC m=+2582.070252624" watchObservedRunningTime="2025-11-22 07:53:03.242138695 +0000 UTC m=+2582.082761321" Nov 22 07:53:06 crc kubenswrapper[4853]: I1122 07:53:06.846997 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-c66cc79fb-w5kgp" podStartSLOduration=6.402532841 podStartE2EDuration="9.846970565s" podCreationTimestamp="2025-11-22 07:52:57 +0000 UTC" firstStartedPulling="2025-11-22 07:52:58.454982804 +0000 UTC m=+2577.295605430" lastFinishedPulling="2025-11-22 07:53:01.899420528 +0000 UTC m=+2580.740043154" observedRunningTime="2025-11-22 07:53:03.255279708 +0000 UTC m=+2582.095902334" watchObservedRunningTime="2025-11-22 07:53:06.846970565 +0000 UTC m=+2585.687593191" Nov 22 07:53:06 crc kubenswrapper[4853]: I1122 07:53:06.869825 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2xqrx"] Nov 22 07:53:06 crc kubenswrapper[4853]: E1122 07:53:06.870712 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="726a4e37-efbe-463e-b9a6-5fd93a1f0dc2" containerName="dnsmasq-dns" Nov 22 07:53:06 crc kubenswrapper[4853]: I1122 07:53:06.870735 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="726a4e37-efbe-463e-b9a6-5fd93a1f0dc2" containerName="dnsmasq-dns" Nov 22 07:53:06 crc kubenswrapper[4853]: E1122 07:53:06.870793 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="726a4e37-efbe-463e-b9a6-5fd93a1f0dc2" containerName="init" Nov 22 07:53:06 crc kubenswrapper[4853]: I1122 07:53:06.870803 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="726a4e37-efbe-463e-b9a6-5fd93a1f0dc2" containerName="init" Nov 22 07:53:06 crc kubenswrapper[4853]: I1122 07:53:06.871096 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="726a4e37-efbe-463e-b9a6-5fd93a1f0dc2" containerName="dnsmasq-dns" Nov 22 07:53:06 crc kubenswrapper[4853]: I1122 07:53:06.872439 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2xqrx" Nov 22 07:53:06 crc kubenswrapper[4853]: I1122 07:53:06.887082 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-km5tw" Nov 22 07:53:06 crc kubenswrapper[4853]: I1122 07:53:06.887288 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 22 07:53:06 crc kubenswrapper[4853]: I1122 07:53:06.887364 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 22 07:53:06 crc kubenswrapper[4853]: I1122 07:53:06.887457 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 22 07:53:06 crc kubenswrapper[4853]: I1122 07:53:06.951538 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tl8pt\" (UniqueName: \"kubernetes.io/projected/35498c08-898b-477d-88eb-3cf82e3696e7-kube-api-access-tl8pt\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-2xqrx\" (UID: \"35498c08-898b-477d-88eb-3cf82e3696e7\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2xqrx" Nov 22 07:53:06 crc kubenswrapper[4853]: I1122 07:53:06.951640 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35498c08-898b-477d-88eb-3cf82e3696e7-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-2xqrx\" (UID: \"35498c08-898b-477d-88eb-3cf82e3696e7\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2xqrx" Nov 22 07:53:06 crc kubenswrapper[4853]: I1122 07:53:06.951685 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/35498c08-898b-477d-88eb-3cf82e3696e7-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-2xqrx\" (UID: \"35498c08-898b-477d-88eb-3cf82e3696e7\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2xqrx" Nov 22 07:53:06 crc kubenswrapper[4853]: I1122 07:53:06.952217 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/35498c08-898b-477d-88eb-3cf82e3696e7-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-2xqrx\" (UID: \"35498c08-898b-477d-88eb-3cf82e3696e7\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2xqrx" Nov 22 07:53:06 crc kubenswrapper[4853]: I1122 07:53:06.974827 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2xqrx"] Nov 22 07:53:07 crc kubenswrapper[4853]: I1122 07:53:07.056237 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/35498c08-898b-477d-88eb-3cf82e3696e7-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-2xqrx\" (UID: \"35498c08-898b-477d-88eb-3cf82e3696e7\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2xqrx" Nov 22 07:53:07 crc kubenswrapper[4853]: I1122 07:53:07.056708 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tl8pt\" (UniqueName: \"kubernetes.io/projected/35498c08-898b-477d-88eb-3cf82e3696e7-kube-api-access-tl8pt\") pod 
\"repo-setup-edpm-deployment-openstack-edpm-ipam-2xqrx\" (UID: \"35498c08-898b-477d-88eb-3cf82e3696e7\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2xqrx" Nov 22 07:53:07 crc kubenswrapper[4853]: I1122 07:53:07.056784 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35498c08-898b-477d-88eb-3cf82e3696e7-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-2xqrx\" (UID: \"35498c08-898b-477d-88eb-3cf82e3696e7\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2xqrx" Nov 22 07:53:07 crc kubenswrapper[4853]: I1122 07:53:07.056814 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/35498c08-898b-477d-88eb-3cf82e3696e7-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-2xqrx\" (UID: \"35498c08-898b-477d-88eb-3cf82e3696e7\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2xqrx" Nov 22 07:53:07 crc kubenswrapper[4853]: I1122 07:53:07.063330 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/35498c08-898b-477d-88eb-3cf82e3696e7-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-2xqrx\" (UID: \"35498c08-898b-477d-88eb-3cf82e3696e7\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2xqrx" Nov 22 07:53:07 crc kubenswrapper[4853]: I1122 07:53:07.063378 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/35498c08-898b-477d-88eb-3cf82e3696e7-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-2xqrx\" (UID: \"35498c08-898b-477d-88eb-3cf82e3696e7\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2xqrx" Nov 22 07:53:07 crc kubenswrapper[4853]: I1122 07:53:07.063708 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35498c08-898b-477d-88eb-3cf82e3696e7-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-2xqrx\" (UID: \"35498c08-898b-477d-88eb-3cf82e3696e7\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2xqrx" Nov 22 07:53:07 crc kubenswrapper[4853]: I1122 07:53:07.075550 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tl8pt\" (UniqueName: \"kubernetes.io/projected/35498c08-898b-477d-88eb-3cf82e3696e7-kube-api-access-tl8pt\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-2xqrx\" (UID: \"35498c08-898b-477d-88eb-3cf82e3696e7\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2xqrx" Nov 22 07:53:07 crc kubenswrapper[4853]: I1122 07:53:07.203707 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2xqrx" Nov 22 07:53:11 crc kubenswrapper[4853]: I1122 07:53:11.749711 4853 scope.go:117] "RemoveContainer" containerID="f1c0ac83a6a857bb9ac64bb048075d436003939992592a9a9b2d45101ef2a2de" Nov 22 07:53:11 crc kubenswrapper[4853]: E1122 07:53:11.751371 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" Nov 22 07:53:11 crc kubenswrapper[4853]: I1122 07:53:11.951637 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2xqrx"] Nov 22 07:53:11 crc kubenswrapper[4853]: W1122 07:53:11.955159 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod35498c08_898b_477d_88eb_3cf82e3696e7.slice/crio-0de0a70301eea1110024e54340c85fdadab0848608d7b34de03c6e5f345d151b WatchSource:0}: Error finding container 0de0a70301eea1110024e54340c85fdadab0848608d7b34de03c6e5f345d151b: Status 404 returned error can't find the container with id 0de0a70301eea1110024e54340c85fdadab0848608d7b34de03c6e5f345d151b Nov 22 07:53:12 crc kubenswrapper[4853]: I1122 07:53:12.053344 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-2vhq9"] Nov 22 07:53:12 crc kubenswrapper[4853]: I1122 07:53:12.068309 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-2vhq9"] Nov 22 07:53:12 crc kubenswrapper[4853]: I1122 07:53:12.327401 4853 generic.go:334] "Generic (PLEG): container finished" podID="2db00bbf-b98a-40ab-b648-5acdcc430bad" containerID="41e6a633ae00bec1e7d8e0e2217a7519e4a12886890ddfc2d62a6085ebbf4125" exitCode=0 Nov 22 07:53:12 crc kubenswrapper[4853]: I1122 07:53:12.327485 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"2db00bbf-b98a-40ab-b648-5acdcc430bad","Type":"ContainerDied","Data":"41e6a633ae00bec1e7d8e0e2217a7519e4a12886890ddfc2d62a6085ebbf4125"} Nov 22 07:53:12 crc kubenswrapper[4853]: I1122 07:53:12.329957 4853 generic.go:334] "Generic (PLEG): container finished" podID="8897740c-fa9f-4ecb-83ae-4dc74489745d" containerID="3e2ba9c5d4c1ee640f5df5bd50ac9f25c2dfbba0c82fa0b91935d9967fa4efee" exitCode=0 Nov 22 07:53:12 crc kubenswrapper[4853]: I1122 07:53:12.330042 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"8897740c-fa9f-4ecb-83ae-4dc74489745d","Type":"ContainerDied","Data":"3e2ba9c5d4c1ee640f5df5bd50ac9f25c2dfbba0c82fa0b91935d9967fa4efee"} Nov 22 07:53:12 crc kubenswrapper[4853]: I1122 07:53:12.332971 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2xqrx" event={"ID":"35498c08-898b-477d-88eb-3cf82e3696e7","Type":"ContainerStarted","Data":"0de0a70301eea1110024e54340c85fdadab0848608d7b34de03c6e5f345d151b"} Nov 22 07:53:13 crc kubenswrapper[4853]: I1122 07:53:13.787252 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="937b4e80-b6f5-4e62-8053-05ce38b1b105" path="/var/lib/kubelet/pods/937b4e80-b6f5-4e62-8053-05ce38b1b105/volumes" Nov 22 07:53:14 crc 
kubenswrapper[4853]: I1122 07:53:14.358499 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"2db00bbf-b98a-40ab-b648-5acdcc430bad","Type":"ContainerStarted","Data":"ed5f7e38fa15e86f98e52bd124b81be92c097297cb1211a021d62007ddbe9467"} Nov 22 07:53:14 crc kubenswrapper[4853]: I1122 07:53:14.361202 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"8897740c-fa9f-4ecb-83ae-4dc74489745d","Type":"ContainerStarted","Data":"fe648769174cf495170b29a07eccf84beee3801d18c1e7dd8a5240686f0a91dc"} Nov 22 07:53:14 crc kubenswrapper[4853]: I1122 07:53:14.361587 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Nov 22 07:53:14 crc kubenswrapper[4853]: I1122 07:53:14.393104 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=39.393086267 podStartE2EDuration="39.393086267s" podCreationTimestamp="2025-11-22 07:52:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:53:14.389410278 +0000 UTC m=+2593.230032994" watchObservedRunningTime="2025-11-22 07:53:14.393086267 +0000 UTC m=+2593.233708893" Nov 22 07:53:15 crc kubenswrapper[4853]: I1122 07:53:15.411298 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=40.411248051 podStartE2EDuration="40.411248051s" podCreationTimestamp="2025-11-22 07:52:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:53:15.39371886 +0000 UTC m=+2594.234341476" watchObservedRunningTime="2025-11-22 07:53:15.411248051 +0000 UTC m=+2594.251870677" Nov 22 07:53:16 crc kubenswrapper[4853]: I1122 07:53:16.353082 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Nov 22 07:53:17 crc kubenswrapper[4853]: I1122 07:53:17.640061 4853 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-api-6d654b9979-5pkjs" podUID="cb5c6ce8-f8af-4ad3-a004-04c188ba6c92" containerName="heat-api" probeResult="failure" output="Get \"https://10.217.1.26:8004/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 22 07:53:17 crc kubenswrapper[4853]: I1122 07:53:17.640057 4853 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/heat-api-6d654b9979-5pkjs" podUID="cb5c6ce8-f8af-4ad3-a004-04c188ba6c92" containerName="heat-api" probeResult="failure" output="Get \"https://10.217.1.26:8004/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 22 07:53:17 crc kubenswrapper[4853]: I1122 07:53:17.784020 4853 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-cfnapi-c66cc79fb-w5kgp" podUID="0bdc440c-227d-43dd-9e9d-500ba10fc239" containerName="heat-cfnapi" probeResult="failure" output="Get \"https://10.217.1.27:8000/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 22 07:53:17 crc kubenswrapper[4853]: I1122 07:53:17.784026 4853 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/heat-cfnapi-c66cc79fb-w5kgp" podUID="0bdc440c-227d-43dd-9e9d-500ba10fc239" containerName="heat-cfnapi" probeResult="failure" output="Get \"https://10.217.1.27:8000/healthcheck\": net/http: request 
canceled (Client.Timeout exceeded while awaiting headers)" Nov 22 07:53:17 crc kubenswrapper[4853]: I1122 07:53:17.827650 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-697c44f7b5-9vpfm" Nov 22 07:53:17 crc kubenswrapper[4853]: I1122 07:53:17.908602 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-846454b756-2r7vp"] Nov 22 07:53:17 crc kubenswrapper[4853]: I1122 07:53:17.908874 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-engine-846454b756-2r7vp" podUID="4194e8cf-31be-421c-9cac-b89a8a47f004" containerName="heat-engine" containerID="cri-o://303907ed9d345885bb4cacf1ce533287bbc02acd9d0628cee6d8b5726c9dceee" gracePeriod=60 Nov 22 07:53:23 crc kubenswrapper[4853]: E1122 07:53:23.789572 4853 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="303907ed9d345885bb4cacf1ce533287bbc02acd9d0628cee6d8b5726c9dceee" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Nov 22 07:53:23 crc kubenswrapper[4853]: E1122 07:53:23.796077 4853 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="303907ed9d345885bb4cacf1ce533287bbc02acd9d0628cee6d8b5726c9dceee" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Nov 22 07:53:23 crc kubenswrapper[4853]: E1122 07:53:23.798640 4853 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="303907ed9d345885bb4cacf1ce533287bbc02acd9d0628cee6d8b5726c9dceee" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Nov 22 07:53:23 crc kubenswrapper[4853]: E1122 07:53:23.798714 4853 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-846454b756-2r7vp" podUID="4194e8cf-31be-421c-9cac-b89a8a47f004" containerName="heat-engine" Nov 22 07:53:24 crc kubenswrapper[4853]: I1122 07:53:24.748518 4853 scope.go:117] "RemoveContainer" containerID="f1c0ac83a6a857bb9ac64bb048075d436003939992592a9a9b2d45101ef2a2de" Nov 22 07:53:24 crc kubenswrapper[4853]: E1122 07:53:24.748932 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" Nov 22 07:53:26 crc kubenswrapper[4853]: I1122 07:53:26.356165 4853 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="2db00bbf-b98a-40ab-b648-5acdcc430bad" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.1.22:5671: connect: connection refused" Nov 22 07:53:26 crc kubenswrapper[4853]: I1122 07:53:26.503284 4853 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="8897740c-fa9f-4ecb-83ae-4dc74489745d" containerName="rabbitmq" probeResult="failure" 
output="dial tcp 10.217.1.23:5671: connect: connection refused" Nov 22 07:53:27 crc kubenswrapper[4853]: I1122 07:53:27.648906 4853 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/heat-api-6d654b9979-5pkjs" podUID="cb5c6ce8-f8af-4ad3-a004-04c188ba6c92" containerName="heat-api" probeResult="failure" output="Get \"https://10.217.1.26:8004/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 22 07:53:27 crc kubenswrapper[4853]: I1122 07:53:27.648936 4853 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-api-6d654b9979-5pkjs" podUID="cb5c6ce8-f8af-4ad3-a004-04c188ba6c92" containerName="heat-api" probeResult="failure" output="Get \"https://10.217.1.26:8004/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 22 07:53:27 crc kubenswrapper[4853]: I1122 07:53:27.791938 4853 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/heat-cfnapi-c66cc79fb-w5kgp" podUID="0bdc440c-227d-43dd-9e9d-500ba10fc239" containerName="heat-cfnapi" probeResult="failure" output="Get \"https://10.217.1.27:8000/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 22 07:53:27 crc kubenswrapper[4853]: I1122 07:53:27.791982 4853 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-cfnapi-c66cc79fb-w5kgp" podUID="0bdc440c-227d-43dd-9e9d-500ba10fc239" containerName="heat-cfnapi" probeResult="failure" output="Get \"https://10.217.1.27:8000/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 22 07:53:30 crc kubenswrapper[4853]: I1122 07:53:30.057935 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-cfnapi-c66cc79fb-w5kgp" Nov 22 07:53:30 crc kubenswrapper[4853]: I1122 07:53:30.058945 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-api-6d654b9979-5pkjs" Nov 22 07:53:30 crc kubenswrapper[4853]: I1122 07:53:30.126242 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-94775ccf-w92qr"] Nov 22 07:53:30 crc kubenswrapper[4853]: I1122 07:53:30.126588 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-cfnapi-94775ccf-w92qr" podUID="8ecb0aa1-3b8f-4b40-88ce-4bf61cb8ef3a" containerName="heat-cfnapi" containerID="cri-o://a0a2240efd83bffb731360afe41c6667be021904b9096fcf4ba2d21449c5b662" gracePeriod=60 Nov 22 07:53:30 crc kubenswrapper[4853]: I1122 07:53:30.190795 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-57b59697c4-2frrp"] Nov 22 07:53:30 crc kubenswrapper[4853]: I1122 07:53:30.191042 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-api-57b59697c4-2frrp" podUID="5a4daab2-d15b-4492-9eea-05a2f6b753ef" containerName="heat-api" containerID="cri-o://77895d132867c7e5e6a8436ef2372ee3f5927332df11fad5474e7068d2d9768e" gracePeriod=60 Nov 22 07:53:33 crc kubenswrapper[4853]: I1122 07:53:33.643322 4853 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-api-57b59697c4-2frrp" podUID="5a4daab2-d15b-4492-9eea-05a2f6b753ef" containerName="heat-api" probeResult="failure" output="Get \"https://10.217.0.231:8004/healthcheck\": read tcp 10.217.0.2:58376->10.217.0.231:8004: read: connection reset by peer" Nov 22 07:53:33 crc kubenswrapper[4853]: I1122 07:53:33.661952 4853 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openstack/heat-cfnapi-94775ccf-w92qr" podUID="8ecb0aa1-3b8f-4b40-88ce-4bf61cb8ef3a" containerName="heat-cfnapi" probeResult="failure" output="Get \"https://10.217.0.232:8000/healthcheck\": read tcp 10.217.0.2:45944->10.217.0.232:8000: read: connection reset by peer" Nov 22 07:53:33 crc kubenswrapper[4853]: E1122 07:53:33.789165 4853 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="303907ed9d345885bb4cacf1ce533287bbc02acd9d0628cee6d8b5726c9dceee" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Nov 22 07:53:33 crc kubenswrapper[4853]: E1122 07:53:33.790673 4853 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="303907ed9d345885bb4cacf1ce533287bbc02acd9d0628cee6d8b5726c9dceee" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Nov 22 07:53:33 crc kubenswrapper[4853]: E1122 07:53:33.792372 4853 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="303907ed9d345885bb4cacf1ce533287bbc02acd9d0628cee6d8b5726c9dceee" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Nov 22 07:53:33 crc kubenswrapper[4853]: E1122 07:53:33.792415 4853 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-846454b756-2r7vp" podUID="4194e8cf-31be-421c-9cac-b89a8a47f004" containerName="heat-engine" Nov 22 07:53:35 crc kubenswrapper[4853]: I1122 07:53:35.641175 4853 generic.go:334] "Generic (PLEG): container finished" podID="5a4daab2-d15b-4492-9eea-05a2f6b753ef" containerID="77895d132867c7e5e6a8436ef2372ee3f5927332df11fad5474e7068d2d9768e" exitCode=0 Nov 22 07:53:35 crc kubenswrapper[4853]: I1122 07:53:35.641297 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-57b59697c4-2frrp" event={"ID":"5a4daab2-d15b-4492-9eea-05a2f6b753ef","Type":"ContainerDied","Data":"77895d132867c7e5e6a8436ef2372ee3f5927332df11fad5474e7068d2d9768e"} Nov 22 07:53:35 crc kubenswrapper[4853]: I1122 07:53:35.643871 4853 generic.go:334] "Generic (PLEG): container finished" podID="8ecb0aa1-3b8f-4b40-88ce-4bf61cb8ef3a" containerID="a0a2240efd83bffb731360afe41c6667be021904b9096fcf4ba2d21449c5b662" exitCode=0 Nov 22 07:53:35 crc kubenswrapper[4853]: I1122 07:53:35.643933 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-94775ccf-w92qr" event={"ID":"8ecb0aa1-3b8f-4b40-88ce-4bf61cb8ef3a","Type":"ContainerDied","Data":"a0a2240efd83bffb731360afe41c6667be021904b9096fcf4ba2d21449c5b662"} Nov 22 07:53:36 crc kubenswrapper[4853]: I1122 07:53:36.353526 4853 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="2db00bbf-b98a-40ab-b648-5acdcc430bad" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.1.22:5671: connect: connection refused" Nov 22 07:53:36 crc kubenswrapper[4853]: I1122 07:53:36.500696 4853 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="8897740c-fa9f-4ecb-83ae-4dc74489745d" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.1.23:5671: 
connect: connection refused" Nov 22 07:53:37 crc kubenswrapper[4853]: I1122 07:53:37.752676 4853 scope.go:117] "RemoveContainer" containerID="f1c0ac83a6a857bb9ac64bb048075d436003939992592a9a9b2d45101ef2a2de" Nov 22 07:53:37 crc kubenswrapper[4853]: E1122 07:53:37.753636 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" Nov 22 07:53:38 crc kubenswrapper[4853]: I1122 07:53:38.117167 4853 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-api-57b59697c4-2frrp" podUID="5a4daab2-d15b-4492-9eea-05a2f6b753ef" containerName="heat-api" probeResult="failure" output="Get \"https://10.217.0.231:8004/healthcheck\": dial tcp 10.217.0.231:8004: connect: connection refused" Nov 22 07:53:38 crc kubenswrapper[4853]: I1122 07:53:38.139034 4853 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-cfnapi-94775ccf-w92qr" podUID="8ecb0aa1-3b8f-4b40-88ce-4bf61cb8ef3a" containerName="heat-cfnapi" probeResult="failure" output="Get \"https://10.217.0.232:8000/healthcheck\": dial tcp 10.217.0.232:8000: connect: connection refused" Nov 22 07:53:40 crc kubenswrapper[4853]: I1122 07:53:40.225981 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-sync-bk8mb"] Nov 22 07:53:40 crc kubenswrapper[4853]: I1122 07:53:40.235775 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-sync-bk8mb"] Nov 22 07:53:40 crc kubenswrapper[4853]: I1122 07:53:40.343057 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-sync-4z67v"] Nov 22 07:53:40 crc kubenswrapper[4853]: I1122 07:53:40.345055 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-4z67v" Nov 22 07:53:40 crc kubenswrapper[4853]: I1122 07:53:40.348551 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Nov 22 07:53:40 crc kubenswrapper[4853]: I1122 07:53:40.369068 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9358cca5-2c9a-4ada-b9df-58fc71aa8fed-config-data\") pod \"aodh-db-sync-4z67v\" (UID: \"9358cca5-2c9a-4ada-b9df-58fc71aa8fed\") " pod="openstack/aodh-db-sync-4z67v" Nov 22 07:53:40 crc kubenswrapper[4853]: I1122 07:53:40.369233 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9358cca5-2c9a-4ada-b9df-58fc71aa8fed-combined-ca-bundle\") pod \"aodh-db-sync-4z67v\" (UID: \"9358cca5-2c9a-4ada-b9df-58fc71aa8fed\") " pod="openstack/aodh-db-sync-4z67v" Nov 22 07:53:40 crc kubenswrapper[4853]: I1122 07:53:40.369372 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q5wxc\" (UniqueName: \"kubernetes.io/projected/9358cca5-2c9a-4ada-b9df-58fc71aa8fed-kube-api-access-q5wxc\") pod \"aodh-db-sync-4z67v\" (UID: \"9358cca5-2c9a-4ada-b9df-58fc71aa8fed\") " pod="openstack/aodh-db-sync-4z67v" Nov 22 07:53:40 crc kubenswrapper[4853]: I1122 07:53:40.369500 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9358cca5-2c9a-4ada-b9df-58fc71aa8fed-scripts\") pod \"aodh-db-sync-4z67v\" (UID: \"9358cca5-2c9a-4ada-b9df-58fc71aa8fed\") " pod="openstack/aodh-db-sync-4z67v" Nov 22 07:53:40 crc kubenswrapper[4853]: I1122 07:53:40.380321 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-4z67v"] Nov 22 07:53:40 crc kubenswrapper[4853]: I1122 07:53:40.455137 4853 scope.go:117] "RemoveContainer" containerID="af05b69ba1759fb5694a74d89b42cb95012e46c016f6361b98b3f3c9c5d64838" Nov 22 07:53:40 crc kubenswrapper[4853]: I1122 07:53:40.471784 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9358cca5-2c9a-4ada-b9df-58fc71aa8fed-scripts\") pod \"aodh-db-sync-4z67v\" (UID: \"9358cca5-2c9a-4ada-b9df-58fc71aa8fed\") " pod="openstack/aodh-db-sync-4z67v" Nov 22 07:53:40 crc kubenswrapper[4853]: I1122 07:53:40.471934 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9358cca5-2c9a-4ada-b9df-58fc71aa8fed-config-data\") pod \"aodh-db-sync-4z67v\" (UID: \"9358cca5-2c9a-4ada-b9df-58fc71aa8fed\") " pod="openstack/aodh-db-sync-4z67v" Nov 22 07:53:40 crc kubenswrapper[4853]: I1122 07:53:40.472051 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9358cca5-2c9a-4ada-b9df-58fc71aa8fed-combined-ca-bundle\") pod \"aodh-db-sync-4z67v\" (UID: \"9358cca5-2c9a-4ada-b9df-58fc71aa8fed\") " pod="openstack/aodh-db-sync-4z67v" Nov 22 07:53:40 crc kubenswrapper[4853]: I1122 07:53:40.472202 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q5wxc\" (UniqueName: \"kubernetes.io/projected/9358cca5-2c9a-4ada-b9df-58fc71aa8fed-kube-api-access-q5wxc\") pod \"aodh-db-sync-4z67v\" (UID: \"9358cca5-2c9a-4ada-b9df-58fc71aa8fed\") " 
pod="openstack/aodh-db-sync-4z67v" Nov 22 07:53:40 crc kubenswrapper[4853]: I1122 07:53:40.490057 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9358cca5-2c9a-4ada-b9df-58fc71aa8fed-scripts\") pod \"aodh-db-sync-4z67v\" (UID: \"9358cca5-2c9a-4ada-b9df-58fc71aa8fed\") " pod="openstack/aodh-db-sync-4z67v" Nov 22 07:53:40 crc kubenswrapper[4853]: I1122 07:53:40.490325 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9358cca5-2c9a-4ada-b9df-58fc71aa8fed-combined-ca-bundle\") pod \"aodh-db-sync-4z67v\" (UID: \"9358cca5-2c9a-4ada-b9df-58fc71aa8fed\") " pod="openstack/aodh-db-sync-4z67v" Nov 22 07:53:40 crc kubenswrapper[4853]: I1122 07:53:40.491077 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9358cca5-2c9a-4ada-b9df-58fc71aa8fed-config-data\") pod \"aodh-db-sync-4z67v\" (UID: \"9358cca5-2c9a-4ada-b9df-58fc71aa8fed\") " pod="openstack/aodh-db-sync-4z67v" Nov 22 07:53:40 crc kubenswrapper[4853]: I1122 07:53:40.491550 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q5wxc\" (UniqueName: \"kubernetes.io/projected/9358cca5-2c9a-4ada-b9df-58fc71aa8fed-kube-api-access-q5wxc\") pod \"aodh-db-sync-4z67v\" (UID: \"9358cca5-2c9a-4ada-b9df-58fc71aa8fed\") " pod="openstack/aodh-db-sync-4z67v" Nov 22 07:53:40 crc kubenswrapper[4853]: I1122 07:53:40.690700 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-4z67v" Nov 22 07:53:40 crc kubenswrapper[4853]: I1122 07:53:40.709263 4853 generic.go:334] "Generic (PLEG): container finished" podID="4194e8cf-31be-421c-9cac-b89a8a47f004" containerID="303907ed9d345885bb4cacf1ce533287bbc02acd9d0628cee6d8b5726c9dceee" exitCode=0 Nov 22 07:53:40 crc kubenswrapper[4853]: I1122 07:53:40.709310 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-846454b756-2r7vp" event={"ID":"4194e8cf-31be-421c-9cac-b89a8a47f004","Type":"ContainerDied","Data":"303907ed9d345885bb4cacf1ce533287bbc02acd9d0628cee6d8b5726c9dceee"} Nov 22 07:53:41 crc kubenswrapper[4853]: I1122 07:53:41.763524 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2538cfd0-3cda-47f6-83ef-c0fab178a95c" path="/var/lib/kubelet/pods/2538cfd0-3cda-47f6-83ef-c0fab178a95c/volumes" Nov 22 07:53:43 crc kubenswrapper[4853]: I1122 07:53:43.116810 4853 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-api-57b59697c4-2frrp" podUID="5a4daab2-d15b-4492-9eea-05a2f6b753ef" containerName="heat-api" probeResult="failure" output="Get \"https://10.217.0.231:8004/healthcheck\": dial tcp 10.217.0.231:8004: connect: connection refused" Nov 22 07:53:43 crc kubenswrapper[4853]: I1122 07:53:43.117253 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-57b59697c4-2frrp" Nov 22 07:53:43 crc kubenswrapper[4853]: I1122 07:53:43.136063 4853 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-cfnapi-94775ccf-w92qr" podUID="8ecb0aa1-3b8f-4b40-88ce-4bf61cb8ef3a" containerName="heat-cfnapi" probeResult="failure" output="Get \"https://10.217.0.232:8000/healthcheck\": dial tcp 10.217.0.232:8000: connect: connection refused" Nov 22 07:53:43 crc kubenswrapper[4853]: I1122 07:53:43.136168 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-94775ccf-w92qr" Nov 
Nov 22 07:53:43 crc kubenswrapper[4853]: E1122 07:53:43.788455 4853 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 303907ed9d345885bb4cacf1ce533287bbc02acd9d0628cee6d8b5726c9dceee is running failed: container process not found" containerID="303907ed9d345885bb4cacf1ce533287bbc02acd9d0628cee6d8b5726c9dceee" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"]
Nov 22 07:53:43 crc kubenswrapper[4853]: E1122 07:53:43.789036 4853 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 303907ed9d345885bb4cacf1ce533287bbc02acd9d0628cee6d8b5726c9dceee is running failed: container process not found" containerID="303907ed9d345885bb4cacf1ce533287bbc02acd9d0628cee6d8b5726c9dceee" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"]
Nov 22 07:53:43 crc kubenswrapper[4853]: E1122 07:53:43.789409 4853 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 303907ed9d345885bb4cacf1ce533287bbc02acd9d0628cee6d8b5726c9dceee is running failed: container process not found" containerID="303907ed9d345885bb4cacf1ce533287bbc02acd9d0628cee6d8b5726c9dceee" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"]
Nov 22 07:53:43 crc kubenswrapper[4853]: E1122 07:53:43.789457 4853 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 303907ed9d345885bb4cacf1ce533287bbc02acd9d0628cee6d8b5726c9dceee is running failed: container process not found" probeType="Readiness" pod="openstack/heat-engine-846454b756-2r7vp" podUID="4194e8cf-31be-421c-9cac-b89a8a47f004" containerName="heat-engine"
Nov 22 07:53:46 crc kubenswrapper[4853]: I1122 07:53:46.353543 4853 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="2db00bbf-b98a-40ab-b648-5acdcc430bad" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.1.22:5671: connect: connection refused"
Nov 22 07:53:46 crc kubenswrapper[4853]: I1122 07:53:46.501218 4853 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="8897740c-fa9f-4ecb-83ae-4dc74489745d" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.1.23:5671: connect: connection refused"
Nov 22 07:53:48 crc kubenswrapper[4853]: I1122 07:53:48.133633 4853 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-cfnapi-94775ccf-w92qr" podUID="8ecb0aa1-3b8f-4b40-88ce-4bf61cb8ef3a" containerName="heat-cfnapi" probeResult="failure" output="Get \"https://10.217.0.232:8000/healthcheck\": dial tcp 10.217.0.232:8000: connect: connection refused"
Nov 22 07:53:48 crc kubenswrapper[4853]: I1122 07:53:48.145465 4853 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-api-57b59697c4-2frrp" podUID="5a4daab2-d15b-4492-9eea-05a2f6b753ef" containerName="heat-api" probeResult="failure" output="Get \"https://10.217.0.231:8004/healthcheck\": dial tcp 10.217.0.231:8004: connect: connection refused"
Nov 22 07:53:48 crc kubenswrapper[4853]: I1122 07:53:48.747599 4853 scope.go:117] "RemoveContainer" containerID="f1c0ac83a6a857bb9ac64bb048075d436003939992592a9a9b2d45101ef2a2de"
Nov 22 07:53:48 crc kubenswrapper[4853]: E1122 07:53:48.748295 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8"
Nov 22 07:53:53 crc kubenswrapper[4853]: I1122 07:53:53.116735 4853 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-api-57b59697c4-2frrp" podUID="5a4daab2-d15b-4492-9eea-05a2f6b753ef" containerName="heat-api" probeResult="failure" output="Get \"https://10.217.0.231:8004/healthcheck\": dial tcp 10.217.0.231:8004: connect: connection refused"
Nov 22 07:53:53 crc kubenswrapper[4853]: I1122 07:53:53.133693 4853 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-cfnapi-94775ccf-w92qr" podUID="8ecb0aa1-3b8f-4b40-88ce-4bf61cb8ef3a" containerName="heat-cfnapi" probeResult="failure" output="Get \"https://10.217.0.232:8000/healthcheck\": dial tcp 10.217.0.232:8000: connect: connection refused"
Nov 22 07:53:53 crc kubenswrapper[4853]: E1122 07:53:53.788970 4853 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 303907ed9d345885bb4cacf1ce533287bbc02acd9d0628cee6d8b5726c9dceee is running failed: container process not found" containerID="303907ed9d345885bb4cacf1ce533287bbc02acd9d0628cee6d8b5726c9dceee" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"]
Nov 22 07:53:53 crc kubenswrapper[4853]: E1122 07:53:53.789515 4853 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 303907ed9d345885bb4cacf1ce533287bbc02acd9d0628cee6d8b5726c9dceee is running failed: container process not found" containerID="303907ed9d345885bb4cacf1ce533287bbc02acd9d0628cee6d8b5726c9dceee" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"]
Nov 22 07:53:53 crc kubenswrapper[4853]: E1122 07:53:53.790114 4853 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 303907ed9d345885bb4cacf1ce533287bbc02acd9d0628cee6d8b5726c9dceee is running failed: container process not found" containerID="303907ed9d345885bb4cacf1ce533287bbc02acd9d0628cee6d8b5726c9dceee" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"]
Nov 22 07:53:53 crc kubenswrapper[4853]: E1122 07:53:53.790160 4853 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 303907ed9d345885bb4cacf1ce533287bbc02acd9d0628cee6d8b5726c9dceee is running failed: container process not found" probeType="Readiness" pod="openstack/heat-engine-846454b756-2r7vp" podUID="4194e8cf-31be-421c-9cac-b89a8a47f004" containerName="heat-engine"
Nov 22 07:53:56 crc kubenswrapper[4853]: I1122 07:53:56.354140 4853 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="2db00bbf-b98a-40ab-b648-5acdcc430bad" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.1.22:5671: connect: connection refused"
Nov 22 07:53:56 crc kubenswrapper[4853]: I1122 07:53:56.501766 4853 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="8897740c-fa9f-4ecb-83ae-4dc74489745d" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.1.23:5671: connect: connection refused"
Nov 22 07:53:57 crc kubenswrapper[4853]: E1122 07:53:57.911116 4853 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest"
Nov 22 07:53:57 crc kubenswrapper[4853]: E1122 07:53:57.912578 4853 kuberuntime_manager.go:1274] "Unhandled Error" err=<
Nov 22 07:53:57 crc kubenswrapper[4853]: container &Container{Name:repo-setup-edpm-deployment-openstack-edpm-ipam,Image:quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest,Command:[],Args:[ansible-runner run /runner -p playbook.yaml -i repo-setup-edpm-deployment-openstack-edpm-ipam],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ANSIBLE_VERBOSITY,Value:2,ValueFrom:nil,},EnvVar{Name:RUNNER_PLAYBOOK,Value:
Nov 22 07:53:57 crc kubenswrapper[4853]: - hosts: all
Nov 22 07:53:57 crc kubenswrapper[4853]: strategy: linear
Nov 22 07:53:57 crc kubenswrapper[4853]: tasks:
Nov 22 07:53:57 crc kubenswrapper[4853]: - name: Enable podified-repos
Nov 22 07:53:57 crc kubenswrapper[4853]: become: true
Nov 22 07:53:57 crc kubenswrapper[4853]: ansible.builtin.shell: |
Nov 22 07:53:57 crc kubenswrapper[4853]: set -euxo pipefail
Nov 22 07:53:57 crc kubenswrapper[4853]: pushd /var/tmp
Nov 22 07:53:57 crc kubenswrapper[4853]: curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz
Nov 22 07:53:57 crc kubenswrapper[4853]: pushd repo-setup-main
Nov 22 07:53:57 crc kubenswrapper[4853]: python3 -m venv ./venv
Nov 22 07:53:57 crc kubenswrapper[4853]: PBR_VERSION=0.0.0 ./venv/bin/pip install ./
Nov 22 07:53:57 crc kubenswrapper[4853]: ./venv/bin/repo-setup current-podified -b antelope
Nov 22 07:53:57 crc kubenswrapper[4853]: popd
Nov 22 07:53:57 crc kubenswrapper[4853]: rm -rf repo-setup-main
Nov 22 07:53:57 crc kubenswrapper[4853]:
Nov 22 07:53:57 crc kubenswrapper[4853]:
Nov 22 07:53:57 crc kubenswrapper[4853]: ,ValueFrom:nil,},EnvVar{Name:RUNNER_EXTRA_VARS,Value:
Nov 22 07:53:57 crc kubenswrapper[4853]: edpm_override_hosts: openstack-edpm-ipam
Nov 22 07:53:57 crc kubenswrapper[4853]: edpm_service_type: repo-setup
Nov 22 07:53:57 crc kubenswrapper[4853]:
Nov 22 07:53:57 crc kubenswrapper[4853]:
Nov 22 07:53:57 crc kubenswrapper[4853]: ,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:repo-setup-combined-ca-bundle,ReadOnly:false,MountPath:/var/lib/openstack/cacerts/repo-setup,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/runner/env/ssh_key,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:inventory,ReadOnly:false,MountPath:/runner/inventory/hosts,SubPath:inventory,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tl8pt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:openstack-aee-default-env,},Optional:*true,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod repo-setup-edpm-deployment-openstack-edpm-ipam-2xqrx_openstack(35498c08-898b-477d-88eb-3cf82e3696e7): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled
Nov 22 07:53:57 crc kubenswrapper[4853]: > logger="UnhandledError"
Nov 22 07:53:57 crc kubenswrapper[4853]: E1122 07:53:57.915116 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"repo-setup-edpm-deployment-openstack-edpm-ipam\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2xqrx" podUID="35498c08-898b-477d-88eb-3cf82e3696e7"
Nov 22 07:53:58 crc kubenswrapper[4853]: I1122 07:53:58.118424 4853 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-api-57b59697c4-2frrp" podUID="5a4daab2-d15b-4492-9eea-05a2f6b753ef" containerName="heat-api" probeResult="failure" output="Get \"https://10.217.0.231:8004/healthcheck\": dial tcp 10.217.0.231:8004: connect: connection refused"
Nov 22 07:53:58 crc kubenswrapper[4853]: I1122 07:53:58.134365 4853 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-cfnapi-94775ccf-w92qr" podUID="8ecb0aa1-3b8f-4b40-88ce-4bf61cb8ef3a" containerName="heat-cfnapi" probeResult="failure" output="Get \"https://10.217.0.232:8000/healthcheck\": dial tcp 10.217.0.232:8000: connect: connection refused"
Nov 22 07:53:58 crc kubenswrapper[4853]: I1122 07:53:58.734384 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-846454b756-2r7vp"
Nov 22 07:53:58 crc kubenswrapper[4853]: I1122 07:53:58.737217 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-94775ccf-w92qr"
Nov 22 07:53:58 crc kubenswrapper[4853]: I1122 07:53:58.744145 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-57b59697c4-2frrp"
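The "Unhandled Error" dump above prints the failing container's spec, including its RUNNER_PLAYBOOK environment variable one journal line at a time, with the YAML indentation lost in the flattened log. Reassembled for readability (content verbatim from the lines above; the indentation is reconstructed and therefore approximate):

    - hosts: all
      strategy: linear
      tasks:
        - name: Enable podified-repos
          become: true
          ansible.builtin.shell: |
            set -euxo pipefail
            pushd /var/tmp
            curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz
            pushd repo-setup-main
            python3 -m venv ./venv
            PBR_VERSION=0.0.0 ./venv/bin/pip install ./
            ./venv/bin/repo-setup current-podified -b antelope
            popd
            rm -rf repo-setup-main

RUNNER_EXTRA_VARS from the same spec reassembles to:

    edpm_override_hosts: openstack-edpm-ipam
    edpm_service_type: repo-setup

The pull of quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest was cancelled mid-copy, so the container never started; the playbook appears here only because the kubelet logs the full container spec on this error path.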
Need to start a new one" pod="openstack/heat-api-57b59697c4-2frrp" Nov 22 07:53:58 crc kubenswrapper[4853]: I1122 07:53:58.844829 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-4z67v"] Nov 22 07:53:58 crc kubenswrapper[4853]: I1122 07:53:58.863241 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4194e8cf-31be-421c-9cac-b89a8a47f004-config-data\") pod \"4194e8cf-31be-421c-9cac-b89a8a47f004\" (UID: \"4194e8cf-31be-421c-9cac-b89a8a47f004\") " Nov 22 07:53:58 crc kubenswrapper[4853]: I1122 07:53:58.863284 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ecb0aa1-3b8f-4b40-88ce-4bf61cb8ef3a-public-tls-certs\") pod \"8ecb0aa1-3b8f-4b40-88ce-4bf61cb8ef3a\" (UID: \"8ecb0aa1-3b8f-4b40-88ce-4bf61cb8ef3a\") " Nov 22 07:53:58 crc kubenswrapper[4853]: I1122 07:53:58.863323 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8ecb0aa1-3b8f-4b40-88ce-4bf61cb8ef3a-config-data-custom\") pod \"8ecb0aa1-3b8f-4b40-88ce-4bf61cb8ef3a\" (UID: \"8ecb0aa1-3b8f-4b40-88ce-4bf61cb8ef3a\") " Nov 22 07:53:58 crc kubenswrapper[4853]: I1122 07:53:58.863346 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dnw84\" (UniqueName: \"kubernetes.io/projected/8ecb0aa1-3b8f-4b40-88ce-4bf61cb8ef3a-kube-api-access-dnw84\") pod \"8ecb0aa1-3b8f-4b40-88ce-4bf61cb8ef3a\" (UID: \"8ecb0aa1-3b8f-4b40-88ce-4bf61cb8ef3a\") " Nov 22 07:53:58 crc kubenswrapper[4853]: I1122 07:53:58.863410 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ecb0aa1-3b8f-4b40-88ce-4bf61cb8ef3a-combined-ca-bundle\") pod \"8ecb0aa1-3b8f-4b40-88ce-4bf61cb8ef3a\" (UID: \"8ecb0aa1-3b8f-4b40-88ce-4bf61cb8ef3a\") " Nov 22 07:53:58 crc kubenswrapper[4853]: I1122 07:53:58.863445 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ecb0aa1-3b8f-4b40-88ce-4bf61cb8ef3a-internal-tls-certs\") pod \"8ecb0aa1-3b8f-4b40-88ce-4bf61cb8ef3a\" (UID: \"8ecb0aa1-3b8f-4b40-88ce-4bf61cb8ef3a\") " Nov 22 07:53:58 crc kubenswrapper[4853]: I1122 07:53:58.863460 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5a4daab2-d15b-4492-9eea-05a2f6b753ef-config-data-custom\") pod \"5a4daab2-d15b-4492-9eea-05a2f6b753ef\" (UID: \"5a4daab2-d15b-4492-9eea-05a2f6b753ef\") " Nov 22 07:53:58 crc kubenswrapper[4853]: I1122 07:53:58.863477 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4194e8cf-31be-421c-9cac-b89a8a47f004-config-data-custom\") pod \"4194e8cf-31be-421c-9cac-b89a8a47f004\" (UID: \"4194e8cf-31be-421c-9cac-b89a8a47f004\") " Nov 22 07:53:58 crc kubenswrapper[4853]: I1122 07:53:58.863607 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5a4daab2-d15b-4492-9eea-05a2f6b753ef-internal-tls-certs\") pod \"5a4daab2-d15b-4492-9eea-05a2f6b753ef\" (UID: \"5a4daab2-d15b-4492-9eea-05a2f6b753ef\") " Nov 22 07:53:58 crc kubenswrapper[4853]: I1122 07:53:58.863623 4853 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a4daab2-d15b-4492-9eea-05a2f6b753ef-config-data\") pod \"5a4daab2-d15b-4492-9eea-05a2f6b753ef\" (UID: \"5a4daab2-d15b-4492-9eea-05a2f6b753ef\") " Nov 22 07:53:58 crc kubenswrapper[4853]: I1122 07:53:58.863698 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4194e8cf-31be-421c-9cac-b89a8a47f004-combined-ca-bundle\") pod \"4194e8cf-31be-421c-9cac-b89a8a47f004\" (UID: \"4194e8cf-31be-421c-9cac-b89a8a47f004\") " Nov 22 07:53:58 crc kubenswrapper[4853]: I1122 07:53:58.864443 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ecb0aa1-3b8f-4b40-88ce-4bf61cb8ef3a-config-data\") pod \"8ecb0aa1-3b8f-4b40-88ce-4bf61cb8ef3a\" (UID: \"8ecb0aa1-3b8f-4b40-88ce-4bf61cb8ef3a\") " Nov 22 07:53:58 crc kubenswrapper[4853]: I1122 07:53:58.864478 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fhzrq\" (UniqueName: \"kubernetes.io/projected/5a4daab2-d15b-4492-9eea-05a2f6b753ef-kube-api-access-fhzrq\") pod \"5a4daab2-d15b-4492-9eea-05a2f6b753ef\" (UID: \"5a4daab2-d15b-4492-9eea-05a2f6b753ef\") " Nov 22 07:53:58 crc kubenswrapper[4853]: I1122 07:53:58.864521 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zpkbd\" (UniqueName: \"kubernetes.io/projected/4194e8cf-31be-421c-9cac-b89a8a47f004-kube-api-access-zpkbd\") pod \"4194e8cf-31be-421c-9cac-b89a8a47f004\" (UID: \"4194e8cf-31be-421c-9cac-b89a8a47f004\") " Nov 22 07:53:58 crc kubenswrapper[4853]: I1122 07:53:58.864540 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a4daab2-d15b-4492-9eea-05a2f6b753ef-combined-ca-bundle\") pod \"5a4daab2-d15b-4492-9eea-05a2f6b753ef\" (UID: \"5a4daab2-d15b-4492-9eea-05a2f6b753ef\") " Nov 22 07:53:58 crc kubenswrapper[4853]: I1122 07:53:58.864556 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5a4daab2-d15b-4492-9eea-05a2f6b753ef-public-tls-certs\") pod \"5a4daab2-d15b-4492-9eea-05a2f6b753ef\" (UID: \"5a4daab2-d15b-4492-9eea-05a2f6b753ef\") " Nov 22 07:53:58 crc kubenswrapper[4853]: I1122 07:53:58.876004 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a4daab2-d15b-4492-9eea-05a2f6b753ef-kube-api-access-fhzrq" (OuterVolumeSpecName: "kube-api-access-fhzrq") pod "5a4daab2-d15b-4492-9eea-05a2f6b753ef" (UID: "5a4daab2-d15b-4492-9eea-05a2f6b753ef"). InnerVolumeSpecName "kube-api-access-fhzrq". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:53:58 crc kubenswrapper[4853]: I1122 07:53:58.882059 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a4daab2-d15b-4492-9eea-05a2f6b753ef-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "5a4daab2-d15b-4492-9eea-05a2f6b753ef" (UID: "5a4daab2-d15b-4492-9eea-05a2f6b753ef"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:53:58 crc kubenswrapper[4853]: I1122 07:53:58.889387 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4194e8cf-31be-421c-9cac-b89a8a47f004-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "4194e8cf-31be-421c-9cac-b89a8a47f004" (UID: "4194e8cf-31be-421c-9cac-b89a8a47f004"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:53:58 crc kubenswrapper[4853]: I1122 07:53:58.889653 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ecb0aa1-3b8f-4b40-88ce-4bf61cb8ef3a-kube-api-access-dnw84" (OuterVolumeSpecName: "kube-api-access-dnw84") pod "8ecb0aa1-3b8f-4b40-88ce-4bf61cb8ef3a" (UID: "8ecb0aa1-3b8f-4b40-88ce-4bf61cb8ef3a"). InnerVolumeSpecName "kube-api-access-dnw84". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:53:58 crc kubenswrapper[4853]: I1122 07:53:58.911083 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ecb0aa1-3b8f-4b40-88ce-4bf61cb8ef3a-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "8ecb0aa1-3b8f-4b40-88ce-4bf61cb8ef3a" (UID: "8ecb0aa1-3b8f-4b40-88ce-4bf61cb8ef3a"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:53:58 crc kubenswrapper[4853]: I1122 07:53:58.945071 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4194e8cf-31be-421c-9cac-b89a8a47f004-kube-api-access-zpkbd" (OuterVolumeSpecName: "kube-api-access-zpkbd") pod "4194e8cf-31be-421c-9cac-b89a8a47f004" (UID: "4194e8cf-31be-421c-9cac-b89a8a47f004"). InnerVolumeSpecName "kube-api-access-zpkbd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:53:58 crc kubenswrapper[4853]: I1122 07:53:58.956393 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-94775ccf-w92qr" event={"ID":"8ecb0aa1-3b8f-4b40-88ce-4bf61cb8ef3a","Type":"ContainerDied","Data":"c13ae05eb09b76a3627bcd988f1ac698bebb16365a5f90fb06bde132c7ad8ab0"} Nov 22 07:53:58 crc kubenswrapper[4853]: I1122 07:53:58.956451 4853 scope.go:117] "RemoveContainer" containerID="a0a2240efd83bffb731360afe41c6667be021904b9096fcf4ba2d21449c5b662" Nov 22 07:53:58 crc kubenswrapper[4853]: I1122 07:53:58.956684 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-94775ccf-w92qr" Nov 22 07:53:58 crc kubenswrapper[4853]: I1122 07:53:58.959635 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4194e8cf-31be-421c-9cac-b89a8a47f004-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4194e8cf-31be-421c-9cac-b89a8a47f004" (UID: "4194e8cf-31be-421c-9cac-b89a8a47f004"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:53:58 crc kubenswrapper[4853]: I1122 07:53:58.961717 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-846454b756-2r7vp" event={"ID":"4194e8cf-31be-421c-9cac-b89a8a47f004","Type":"ContainerDied","Data":"c1f15078a54bd909b74dec593fe583b2aaad33e01d03599a9431a3d6ab268787"} Nov 22 07:53:58 crc kubenswrapper[4853]: I1122 07:53:58.961842 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-846454b756-2r7vp" Nov 22 07:53:58 crc kubenswrapper[4853]: I1122 07:53:58.967244 4853 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4194e8cf-31be-421c-9cac-b89a8a47f004-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:53:58 crc kubenswrapper[4853]: I1122 07:53:58.967275 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fhzrq\" (UniqueName: \"kubernetes.io/projected/5a4daab2-d15b-4492-9eea-05a2f6b753ef-kube-api-access-fhzrq\") on node \"crc\" DevicePath \"\"" Nov 22 07:53:58 crc kubenswrapper[4853]: I1122 07:53:58.967284 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zpkbd\" (UniqueName: \"kubernetes.io/projected/4194e8cf-31be-421c-9cac-b89a8a47f004-kube-api-access-zpkbd\") on node \"crc\" DevicePath \"\"" Nov 22 07:53:58 crc kubenswrapper[4853]: I1122 07:53:58.967294 4853 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8ecb0aa1-3b8f-4b40-88ce-4bf61cb8ef3a-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 22 07:53:58 crc kubenswrapper[4853]: I1122 07:53:58.967305 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dnw84\" (UniqueName: \"kubernetes.io/projected/8ecb0aa1-3b8f-4b40-88ce-4bf61cb8ef3a-kube-api-access-dnw84\") on node \"crc\" DevicePath \"\"" Nov 22 07:53:58 crc kubenswrapper[4853]: I1122 07:53:58.967314 4853 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5a4daab2-d15b-4492-9eea-05a2f6b753ef-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 22 07:53:58 crc kubenswrapper[4853]: I1122 07:53:58.967322 4853 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4194e8cf-31be-421c-9cac-b89a8a47f004-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 22 07:53:58 crc kubenswrapper[4853]: I1122 07:53:58.971979 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-57b59697c4-2frrp" event={"ID":"5a4daab2-d15b-4492-9eea-05a2f6b753ef","Type":"ContainerDied","Data":"13fe258265ef315905c6a6d3db3461379141e7b6166ccca2a2182f6017bab6cc"} Nov 22 07:53:58 crc kubenswrapper[4853]: I1122 07:53:58.972063 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-57b59697c4-2frrp" Nov 22 07:53:58 crc kubenswrapper[4853]: I1122 07:53:58.977387 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-4z67v" event={"ID":"9358cca5-2c9a-4ada-b9df-58fc71aa8fed","Type":"ContainerStarted","Data":"1f8f5e72bb8e6ccc341f1664c7adbfe19c662034f27081da87e21a376d1c1992"} Nov 22 07:53:58 crc kubenswrapper[4853]: I1122 07:53:58.991060 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ecb0aa1-3b8f-4b40-88ce-4bf61cb8ef3a-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "8ecb0aa1-3b8f-4b40-88ce-4bf61cb8ef3a" (UID: "8ecb0aa1-3b8f-4b40-88ce-4bf61cb8ef3a"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:53:59 crc kubenswrapper[4853]: I1122 07:53:59.000087 4853 scope.go:117] "RemoveContainer" containerID="303907ed9d345885bb4cacf1ce533287bbc02acd9d0628cee6d8b5726c9dceee" Nov 22 07:53:59 crc kubenswrapper[4853]: E1122 07:53:59.000333 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"repo-setup-edpm-deployment-openstack-edpm-ipam\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest\\\"\"" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2xqrx" podUID="35498c08-898b-477d-88eb-3cf82e3696e7" Nov 22 07:53:59 crc kubenswrapper[4853]: I1122 07:53:59.007833 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ecb0aa1-3b8f-4b40-88ce-4bf61cb8ef3a-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "8ecb0aa1-3b8f-4b40-88ce-4bf61cb8ef3a" (UID: "8ecb0aa1-3b8f-4b40-88ce-4bf61cb8ef3a"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:53:59 crc kubenswrapper[4853]: I1122 07:53:59.009116 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ecb0aa1-3b8f-4b40-88ce-4bf61cb8ef3a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8ecb0aa1-3b8f-4b40-88ce-4bf61cb8ef3a" (UID: "8ecb0aa1-3b8f-4b40-88ce-4bf61cb8ef3a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:53:59 crc kubenswrapper[4853]: I1122 07:53:59.021086 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a4daab2-d15b-4492-9eea-05a2f6b753ef-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5a4daab2-d15b-4492-9eea-05a2f6b753ef" (UID: "5a4daab2-d15b-4492-9eea-05a2f6b753ef"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:53:59 crc kubenswrapper[4853]: I1122 07:53:59.028489 4853 scope.go:117] "RemoveContainer" containerID="77895d132867c7e5e6a8436ef2372ee3f5927332df11fad5474e7068d2d9768e" Nov 22 07:53:59 crc kubenswrapper[4853]: I1122 07:53:59.046825 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a4daab2-d15b-4492-9eea-05a2f6b753ef-config-data" (OuterVolumeSpecName: "config-data") pod "5a4daab2-d15b-4492-9eea-05a2f6b753ef" (UID: "5a4daab2-d15b-4492-9eea-05a2f6b753ef"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:53:59 crc kubenswrapper[4853]: I1122 07:53:59.046980 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4194e8cf-31be-421c-9cac-b89a8a47f004-config-data" (OuterVolumeSpecName: "config-data") pod "4194e8cf-31be-421c-9cac-b89a8a47f004" (UID: "4194e8cf-31be-421c-9cac-b89a8a47f004"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:53:59 crc kubenswrapper[4853]: I1122 07:53:59.057452 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a4daab2-d15b-4492-9eea-05a2f6b753ef-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "5a4daab2-d15b-4492-9eea-05a2f6b753ef" (UID: "5a4daab2-d15b-4492-9eea-05a2f6b753ef"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:53:59 crc kubenswrapper[4853]: I1122 07:53:59.060280 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a4daab2-d15b-4492-9eea-05a2f6b753ef-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "5a4daab2-d15b-4492-9eea-05a2f6b753ef" (UID: "5a4daab2-d15b-4492-9eea-05a2f6b753ef"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:53:59 crc kubenswrapper[4853]: I1122 07:53:59.069040 4853 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a4daab2-d15b-4492-9eea-05a2f6b753ef-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:53:59 crc kubenswrapper[4853]: I1122 07:53:59.069081 4853 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5a4daab2-d15b-4492-9eea-05a2f6b753ef-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 22 07:53:59 crc kubenswrapper[4853]: I1122 07:53:59.069094 4853 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ecb0aa1-3b8f-4b40-88ce-4bf61cb8ef3a-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 22 07:53:59 crc kubenswrapper[4853]: I1122 07:53:59.069107 4853 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4194e8cf-31be-421c-9cac-b89a8a47f004-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 07:53:59 crc kubenswrapper[4853]: I1122 07:53:59.069119 4853 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ecb0aa1-3b8f-4b40-88ce-4bf61cb8ef3a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:53:59 crc kubenswrapper[4853]: I1122 07:53:59.069132 4853 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ecb0aa1-3b8f-4b40-88ce-4bf61cb8ef3a-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 22 07:53:59 crc kubenswrapper[4853]: I1122 07:53:59.069140 4853 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5a4daab2-d15b-4492-9eea-05a2f6b753ef-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 22 07:53:59 crc kubenswrapper[4853]: I1122 07:53:59.069150 4853 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a4daab2-d15b-4492-9eea-05a2f6b753ef-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 07:53:59 crc kubenswrapper[4853]: I1122 07:53:59.072840 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ecb0aa1-3b8f-4b40-88ce-4bf61cb8ef3a-config-data" (OuterVolumeSpecName: "config-data") pod "8ecb0aa1-3b8f-4b40-88ce-4bf61cb8ef3a" (UID: "8ecb0aa1-3b8f-4b40-88ce-4bf61cb8ef3a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:53:59 crc kubenswrapper[4853]: I1122 07:53:59.170410 4853 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ecb0aa1-3b8f-4b40-88ce-4bf61cb8ef3a-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 07:53:59 crc kubenswrapper[4853]: I1122 07:53:59.310208 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-94775ccf-w92qr"] Nov 22 07:53:59 crc kubenswrapper[4853]: I1122 07:53:59.332297 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-94775ccf-w92qr"] Nov 22 07:53:59 crc kubenswrapper[4853]: I1122 07:53:59.350096 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-57b59697c4-2frrp"] Nov 22 07:53:59 crc kubenswrapper[4853]: I1122 07:53:59.362803 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-57b59697c4-2frrp"] Nov 22 07:53:59 crc kubenswrapper[4853]: I1122 07:53:59.374060 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-846454b756-2r7vp"] Nov 22 07:53:59 crc kubenswrapper[4853]: I1122 07:53:59.393380 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-engine-846454b756-2r7vp"] Nov 22 07:53:59 crc kubenswrapper[4853]: I1122 07:53:59.762736 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4194e8cf-31be-421c-9cac-b89a8a47f004" path="/var/lib/kubelet/pods/4194e8cf-31be-421c-9cac-b89a8a47f004/volumes" Nov 22 07:53:59 crc kubenswrapper[4853]: I1122 07:53:59.763505 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5a4daab2-d15b-4492-9eea-05a2f6b753ef" path="/var/lib/kubelet/pods/5a4daab2-d15b-4492-9eea-05a2f6b753ef/volumes" Nov 22 07:53:59 crc kubenswrapper[4853]: I1122 07:53:59.764166 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8ecb0aa1-3b8f-4b40-88ce-4bf61cb8ef3a" path="/var/lib/kubelet/pods/8ecb0aa1-3b8f-4b40-88ce-4bf61cb8ef3a/volumes" Nov 22 07:54:00 crc kubenswrapper[4853]: I1122 07:54:00.748429 4853 scope.go:117] "RemoveContainer" containerID="f1c0ac83a6a857bb9ac64bb048075d436003939992592a9a9b2d45101ef2a2de" Nov 22 07:54:00 crc kubenswrapper[4853]: E1122 07:54:00.749071 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" Nov 22 07:54:06 crc kubenswrapper[4853]: I1122 07:54:06.355944 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Nov 22 07:54:06 crc kubenswrapper[4853]: I1122 07:54:06.502500 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Nov 22 07:54:13 crc kubenswrapper[4853]: I1122 07:54:13.748622 4853 scope.go:117] "RemoveContainer" containerID="f1c0ac83a6a857bb9ac64bb048075d436003939992592a9a9b2d45101ef2a2de" Nov 22 07:54:13 crc kubenswrapper[4853]: E1122 07:54:13.749442 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
Nov 22 07:54:21 crc kubenswrapper[4853]: I1122 07:54:21.077912 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret"
Nov 22 07:54:21 crc kubenswrapper[4853]: I1122 07:54:21.080344 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Nov 22 07:54:21 crc kubenswrapper[4853]: I1122 07:54:21.958882 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2xqrx" event={"ID":"35498c08-898b-477d-88eb-3cf82e3696e7","Type":"ContainerStarted","Data":"6049b542c82717a5526bbd869effd2108bf992b78a3837e0c4565afafcc57030"}
Nov 22 07:54:21 crc kubenswrapper[4853]: I1122 07:54:21.962156 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-4z67v" event={"ID":"9358cca5-2c9a-4ada-b9df-58fc71aa8fed","Type":"ContainerStarted","Data":"2425633fa17944a5e7544c55faaf263fcb0cc2d659672a869344cc36058c1ef2"}
Nov 22 07:54:21 crc kubenswrapper[4853]: I1122 07:54:21.984054 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2xqrx" podStartSLOduration=6.864146539 podStartE2EDuration="1m15.984035101s" podCreationTimestamp="2025-11-22 07:53:06 +0000 UTC" firstStartedPulling="2025-11-22 07:53:11.956991842 +0000 UTC m=+2590.797614468" lastFinishedPulling="2025-11-22 07:54:21.076880404 +0000 UTC m=+2659.917503030" observedRunningTime="2025-11-22 07:54:21.979256383 +0000 UTC m=+2660.819879009" watchObservedRunningTime="2025-11-22 07:54:21.984035101 +0000 UTC m=+2660.824657727"
Nov 22 07:54:22 crc kubenswrapper[4853]: I1122 07:54:22.006885 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-db-sync-4z67v" podStartSLOduration=19.76870663 podStartE2EDuration="42.006864095s" podCreationTimestamp="2025-11-22 07:53:40 +0000 UTC" firstStartedPulling="2025-11-22 07:53:58.836905011 +0000 UTC m=+2637.677527637" lastFinishedPulling="2025-11-22 07:54:21.075062476 +0000 UTC m=+2659.915685102" observedRunningTime="2025-11-22 07:54:21.99587357 +0000 UTC m=+2660.836496206" watchObservedRunningTime="2025-11-22 07:54:22.006864095 +0000 UTC m=+2660.847486721"
Nov 22 07:54:25 crc kubenswrapper[4853]: I1122 07:54:25.022395 4853 generic.go:334] "Generic (PLEG): container finished" podID="9358cca5-2c9a-4ada-b9df-58fc71aa8fed" containerID="2425633fa17944a5e7544c55faaf263fcb0cc2d659672a869344cc36058c1ef2" exitCode=0
Nov 22 07:54:25 crc kubenswrapper[4853]: I1122 07:54:25.022797 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-4z67v" event={"ID":"9358cca5-2c9a-4ada-b9df-58fc71aa8fed","Type":"ContainerDied","Data":"2425633fa17944a5e7544c55faaf263fcb0cc2d659672a869344cc36058c1ef2"}
Nov 22 07:54:26 crc kubenswrapper[4853]: I1122 07:54:26.576123 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-4z67v"
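The two "Observed pod startup duration" entries above are internally consistent: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration is that figure minus the image-pull window (lastFinishedPulling minus firstStartedPulling). Checking the repo-setup pod's numbers:

    E2E:  07:54:21.984035101 - 07:53:06 = 75.984035101 s   (logged as 1m15.984035101s)
    pull: 07:54:21.076880404 - 07:53:11.956991842 = 69.119888562 s
    SLO:  75.984035101 - 69.119888562 = 6.864146539 s      (the logged value)

The aodh-db-sync-4z67v entry checks out the same way: 42.006864095 s end to end minus a 22.238157465 s pull leaves the logged 19.76870663 s. In other words, nearly all of both pods' startup time went to pulling images.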
Need to start a new one" pod="openstack/aodh-db-sync-4z67v" Nov 22 07:54:26 crc kubenswrapper[4853]: I1122 07:54:26.627224 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9358cca5-2c9a-4ada-b9df-58fc71aa8fed-scripts\") pod \"9358cca5-2c9a-4ada-b9df-58fc71aa8fed\" (UID: \"9358cca5-2c9a-4ada-b9df-58fc71aa8fed\") " Nov 22 07:54:26 crc kubenswrapper[4853]: I1122 07:54:26.627283 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9358cca5-2c9a-4ada-b9df-58fc71aa8fed-combined-ca-bundle\") pod \"9358cca5-2c9a-4ada-b9df-58fc71aa8fed\" (UID: \"9358cca5-2c9a-4ada-b9df-58fc71aa8fed\") " Nov 22 07:54:26 crc kubenswrapper[4853]: I1122 07:54:26.627459 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q5wxc\" (UniqueName: \"kubernetes.io/projected/9358cca5-2c9a-4ada-b9df-58fc71aa8fed-kube-api-access-q5wxc\") pod \"9358cca5-2c9a-4ada-b9df-58fc71aa8fed\" (UID: \"9358cca5-2c9a-4ada-b9df-58fc71aa8fed\") " Nov 22 07:54:26 crc kubenswrapper[4853]: I1122 07:54:26.627561 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9358cca5-2c9a-4ada-b9df-58fc71aa8fed-config-data\") pod \"9358cca5-2c9a-4ada-b9df-58fc71aa8fed\" (UID: \"9358cca5-2c9a-4ada-b9df-58fc71aa8fed\") " Nov 22 07:54:26 crc kubenswrapper[4853]: I1122 07:54:26.635975 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9358cca5-2c9a-4ada-b9df-58fc71aa8fed-scripts" (OuterVolumeSpecName: "scripts") pod "9358cca5-2c9a-4ada-b9df-58fc71aa8fed" (UID: "9358cca5-2c9a-4ada-b9df-58fc71aa8fed"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:54:26 crc kubenswrapper[4853]: I1122 07:54:26.636096 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9358cca5-2c9a-4ada-b9df-58fc71aa8fed-kube-api-access-q5wxc" (OuterVolumeSpecName: "kube-api-access-q5wxc") pod "9358cca5-2c9a-4ada-b9df-58fc71aa8fed" (UID: "9358cca5-2c9a-4ada-b9df-58fc71aa8fed"). InnerVolumeSpecName "kube-api-access-q5wxc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:54:26 crc kubenswrapper[4853]: I1122 07:54:26.667730 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9358cca5-2c9a-4ada-b9df-58fc71aa8fed-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9358cca5-2c9a-4ada-b9df-58fc71aa8fed" (UID: "9358cca5-2c9a-4ada-b9df-58fc71aa8fed"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:54:26 crc kubenswrapper[4853]: I1122 07:54:26.683695 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9358cca5-2c9a-4ada-b9df-58fc71aa8fed-config-data" (OuterVolumeSpecName: "config-data") pod "9358cca5-2c9a-4ada-b9df-58fc71aa8fed" (UID: "9358cca5-2c9a-4ada-b9df-58fc71aa8fed"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:54:26 crc kubenswrapper[4853]: I1122 07:54:26.730128 4853 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9358cca5-2c9a-4ada-b9df-58fc71aa8fed-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 07:54:26 crc kubenswrapper[4853]: I1122 07:54:26.730172 4853 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9358cca5-2c9a-4ada-b9df-58fc71aa8fed-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:54:26 crc kubenswrapper[4853]: I1122 07:54:26.730188 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q5wxc\" (UniqueName: \"kubernetes.io/projected/9358cca5-2c9a-4ada-b9df-58fc71aa8fed-kube-api-access-q5wxc\") on node \"crc\" DevicePath \"\"" Nov 22 07:54:26 crc kubenswrapper[4853]: I1122 07:54:26.730201 4853 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9358cca5-2c9a-4ada-b9df-58fc71aa8fed-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 07:54:27 crc kubenswrapper[4853]: I1122 07:54:27.047746 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-4z67v" event={"ID":"9358cca5-2c9a-4ada-b9df-58fc71aa8fed","Type":"ContainerDied","Data":"1f8f5e72bb8e6ccc341f1664c7adbfe19c662034f27081da87e21a376d1c1992"} Nov 22 07:54:27 crc kubenswrapper[4853]: I1122 07:54:27.047809 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1f8f5e72bb8e6ccc341f1664c7adbfe19c662034f27081da87e21a376d1c1992" Nov 22 07:54:27 crc kubenswrapper[4853]: I1122 07:54:27.047812 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-4z67v" Nov 22 07:54:27 crc kubenswrapper[4853]: I1122 07:54:27.749552 4853 scope.go:117] "RemoveContainer" containerID="f1c0ac83a6a857bb9ac64bb048075d436003939992592a9a9b2d45101ef2a2de" Nov 22 07:54:27 crc kubenswrapper[4853]: E1122 07:54:27.750577 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" Nov 22 07:54:30 crc kubenswrapper[4853]: I1122 07:54:30.501348 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Nov 22 07:54:30 crc kubenswrapper[4853]: I1122 07:54:30.502388 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="c7cd6aaf-637c-4a17-b1ff-fe51acdd2641" containerName="aodh-notifier" containerID="cri-o://00a8ee6e0ebba9439c54d670b6ee4fdd78d818cd734291e2bc74b1d5f9e1919a" gracePeriod=30 Nov 22 07:54:30 crc kubenswrapper[4853]: I1122 07:54:30.502431 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="c7cd6aaf-637c-4a17-b1ff-fe51acdd2641" containerName="aodh-evaluator" containerID="cri-o://119076ac656be12ef9cbd92c02245e92909b2fd32c67e9bb318a59e7449657e3" gracePeriod=30 Nov 22 07:54:30 crc kubenswrapper[4853]: I1122 07:54:30.502387 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="c7cd6aaf-637c-4a17-b1ff-fe51acdd2641" containerName="aodh-listener" 
containerID="cri-o://1c25e8cabcdd6fd992c03e3b1b652252a0ce167545fa9b8be201b4fc99f726dd" gracePeriod=30 Nov 22 07:54:30 crc kubenswrapper[4853]: I1122 07:54:30.502266 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="c7cd6aaf-637c-4a17-b1ff-fe51acdd2641" containerName="aodh-api" containerID="cri-o://73e660bb306fc1c79ba24154aac45e062fa34e65e8ff6fe2c6d7d8f494f7ecaf" gracePeriod=30 Nov 22 07:54:31 crc kubenswrapper[4853]: I1122 07:54:31.094780 4853 generic.go:334] "Generic (PLEG): container finished" podID="c7cd6aaf-637c-4a17-b1ff-fe51acdd2641" containerID="119076ac656be12ef9cbd92c02245e92909b2fd32c67e9bb318a59e7449657e3" exitCode=0 Nov 22 07:54:31 crc kubenswrapper[4853]: I1122 07:54:31.095223 4853 generic.go:334] "Generic (PLEG): container finished" podID="c7cd6aaf-637c-4a17-b1ff-fe51acdd2641" containerID="73e660bb306fc1c79ba24154aac45e062fa34e65e8ff6fe2c6d7d8f494f7ecaf" exitCode=0 Nov 22 07:54:31 crc kubenswrapper[4853]: I1122 07:54:31.094876 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"c7cd6aaf-637c-4a17-b1ff-fe51acdd2641","Type":"ContainerDied","Data":"119076ac656be12ef9cbd92c02245e92909b2fd32c67e9bb318a59e7449657e3"} Nov 22 07:54:31 crc kubenswrapper[4853]: I1122 07:54:31.095334 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"c7cd6aaf-637c-4a17-b1ff-fe51acdd2641","Type":"ContainerDied","Data":"73e660bb306fc1c79ba24154aac45e062fa34e65e8ff6fe2c6d7d8f494f7ecaf"} Nov 22 07:54:34 crc kubenswrapper[4853]: I1122 07:54:34.138402 4853 generic.go:334] "Generic (PLEG): container finished" podID="35498c08-898b-477d-88eb-3cf82e3696e7" containerID="6049b542c82717a5526bbd869effd2108bf992b78a3837e0c4565afafcc57030" exitCode=0 Nov 22 07:54:34 crc kubenswrapper[4853]: I1122 07:54:34.138499 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2xqrx" event={"ID":"35498c08-898b-477d-88eb-3cf82e3696e7","Type":"ContainerDied","Data":"6049b542c82717a5526bbd869effd2108bf992b78a3837e0c4565afafcc57030"} Nov 22 07:54:35 crc kubenswrapper[4853]: E1122 07:54:35.105279 4853 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc7cd6aaf_637c_4a17_b1ff_fe51acdd2641.slice/crio-conmon-00a8ee6e0ebba9439c54d670b6ee4fdd78d818cd734291e2bc74b1d5f9e1919a.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc7cd6aaf_637c_4a17_b1ff_fe51acdd2641.slice/crio-00a8ee6e0ebba9439c54d670b6ee4fdd78d818cd734291e2bc74b1d5f9e1919a.scope\": RecentStats: unable to find data in memory cache]" Nov 22 07:54:35 crc kubenswrapper[4853]: I1122 07:54:35.175415 4853 generic.go:334] "Generic (PLEG): container finished" podID="c7cd6aaf-637c-4a17-b1ff-fe51acdd2641" containerID="1c25e8cabcdd6fd992c03e3b1b652252a0ce167545fa9b8be201b4fc99f726dd" exitCode=0 Nov 22 07:54:35 crc kubenswrapper[4853]: I1122 07:54:35.175454 4853 generic.go:334] "Generic (PLEG): container finished" podID="c7cd6aaf-637c-4a17-b1ff-fe51acdd2641" containerID="00a8ee6e0ebba9439c54d670b6ee4fdd78d818cd734291e2bc74b1d5f9e1919a" exitCode=0 Nov 22 07:54:35 crc kubenswrapper[4853]: I1122 07:54:35.175483 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" 
event={"ID":"c7cd6aaf-637c-4a17-b1ff-fe51acdd2641","Type":"ContainerDied","Data":"1c25e8cabcdd6fd992c03e3b1b652252a0ce167545fa9b8be201b4fc99f726dd"} Nov 22 07:54:35 crc kubenswrapper[4853]: I1122 07:54:35.175525 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"c7cd6aaf-637c-4a17-b1ff-fe51acdd2641","Type":"ContainerDied","Data":"00a8ee6e0ebba9439c54d670b6ee4fdd78d818cd734291e2bc74b1d5f9e1919a"} Nov 22 07:54:35 crc kubenswrapper[4853]: I1122 07:54:35.279391 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Nov 22 07:54:35 crc kubenswrapper[4853]: I1122 07:54:35.367248 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7cd6aaf-637c-4a17-b1ff-fe51acdd2641-public-tls-certs\") pod \"c7cd6aaf-637c-4a17-b1ff-fe51acdd2641\" (UID: \"c7cd6aaf-637c-4a17-b1ff-fe51acdd2641\") " Nov 22 07:54:35 crc kubenswrapper[4853]: I1122 07:54:35.367768 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7cd6aaf-637c-4a17-b1ff-fe51acdd2641-config-data\") pod \"c7cd6aaf-637c-4a17-b1ff-fe51acdd2641\" (UID: \"c7cd6aaf-637c-4a17-b1ff-fe51acdd2641\") " Nov 22 07:54:35 crc kubenswrapper[4853]: I1122 07:54:35.367925 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hwd8b\" (UniqueName: \"kubernetes.io/projected/c7cd6aaf-637c-4a17-b1ff-fe51acdd2641-kube-api-access-hwd8b\") pod \"c7cd6aaf-637c-4a17-b1ff-fe51acdd2641\" (UID: \"c7cd6aaf-637c-4a17-b1ff-fe51acdd2641\") " Nov 22 07:54:35 crc kubenswrapper[4853]: I1122 07:54:35.367975 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7cd6aaf-637c-4a17-b1ff-fe51acdd2641-combined-ca-bundle\") pod \"c7cd6aaf-637c-4a17-b1ff-fe51acdd2641\" (UID: \"c7cd6aaf-637c-4a17-b1ff-fe51acdd2641\") " Nov 22 07:54:35 crc kubenswrapper[4853]: I1122 07:54:35.368011 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c7cd6aaf-637c-4a17-b1ff-fe51acdd2641-scripts\") pod \"c7cd6aaf-637c-4a17-b1ff-fe51acdd2641\" (UID: \"c7cd6aaf-637c-4a17-b1ff-fe51acdd2641\") " Nov 22 07:54:35 crc kubenswrapper[4853]: I1122 07:54:35.368094 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7cd6aaf-637c-4a17-b1ff-fe51acdd2641-internal-tls-certs\") pod \"c7cd6aaf-637c-4a17-b1ff-fe51acdd2641\" (UID: \"c7cd6aaf-637c-4a17-b1ff-fe51acdd2641\") " Nov 22 07:54:35 crc kubenswrapper[4853]: I1122 07:54:35.375930 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7cd6aaf-637c-4a17-b1ff-fe51acdd2641-kube-api-access-hwd8b" (OuterVolumeSpecName: "kube-api-access-hwd8b") pod "c7cd6aaf-637c-4a17-b1ff-fe51acdd2641" (UID: "c7cd6aaf-637c-4a17-b1ff-fe51acdd2641"). InnerVolumeSpecName "kube-api-access-hwd8b". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:54:35 crc kubenswrapper[4853]: I1122 07:54:35.376240 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7cd6aaf-637c-4a17-b1ff-fe51acdd2641-scripts" (OuterVolumeSpecName: "scripts") pod "c7cd6aaf-637c-4a17-b1ff-fe51acdd2641" (UID: "c7cd6aaf-637c-4a17-b1ff-fe51acdd2641"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:54:35 crc kubenswrapper[4853]: I1122 07:54:35.464094 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7cd6aaf-637c-4a17-b1ff-fe51acdd2641-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "c7cd6aaf-637c-4a17-b1ff-fe51acdd2641" (UID: "c7cd6aaf-637c-4a17-b1ff-fe51acdd2641"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:54:35 crc kubenswrapper[4853]: I1122 07:54:35.471598 4853 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7cd6aaf-637c-4a17-b1ff-fe51acdd2641-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 22 07:54:35 crc kubenswrapper[4853]: I1122 07:54:35.471625 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hwd8b\" (UniqueName: \"kubernetes.io/projected/c7cd6aaf-637c-4a17-b1ff-fe51acdd2641-kube-api-access-hwd8b\") on node \"crc\" DevicePath \"\"" Nov 22 07:54:35 crc kubenswrapper[4853]: I1122 07:54:35.471638 4853 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c7cd6aaf-637c-4a17-b1ff-fe51acdd2641-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 07:54:35 crc kubenswrapper[4853]: I1122 07:54:35.515348 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7cd6aaf-637c-4a17-b1ff-fe51acdd2641-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "c7cd6aaf-637c-4a17-b1ff-fe51acdd2641" (UID: "c7cd6aaf-637c-4a17-b1ff-fe51acdd2641"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:54:35 crc kubenswrapper[4853]: I1122 07:54:35.536052 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7cd6aaf-637c-4a17-b1ff-fe51acdd2641-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c7cd6aaf-637c-4a17-b1ff-fe51acdd2641" (UID: "c7cd6aaf-637c-4a17-b1ff-fe51acdd2641"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:54:35 crc kubenswrapper[4853]: I1122 07:54:35.574308 4853 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7cd6aaf-637c-4a17-b1ff-fe51acdd2641-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:54:35 crc kubenswrapper[4853]: I1122 07:54:35.574354 4853 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7cd6aaf-637c-4a17-b1ff-fe51acdd2641-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 22 07:54:35 crc kubenswrapper[4853]: I1122 07:54:35.582331 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7cd6aaf-637c-4a17-b1ff-fe51acdd2641-config-data" (OuterVolumeSpecName: "config-data") pod "c7cd6aaf-637c-4a17-b1ff-fe51acdd2641" (UID: "c7cd6aaf-637c-4a17-b1ff-fe51acdd2641"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:54:35 crc kubenswrapper[4853]: I1122 07:54:35.676618 4853 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7cd6aaf-637c-4a17-b1ff-fe51acdd2641-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 07:54:35 crc kubenswrapper[4853]: I1122 07:54:35.688208 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2xqrx" Nov 22 07:54:35 crc kubenswrapper[4853]: I1122 07:54:35.778231 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tl8pt\" (UniqueName: \"kubernetes.io/projected/35498c08-898b-477d-88eb-3cf82e3696e7-kube-api-access-tl8pt\") pod \"35498c08-898b-477d-88eb-3cf82e3696e7\" (UID: \"35498c08-898b-477d-88eb-3cf82e3696e7\") " Nov 22 07:54:35 crc kubenswrapper[4853]: I1122 07:54:35.778345 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35498c08-898b-477d-88eb-3cf82e3696e7-repo-setup-combined-ca-bundle\") pod \"35498c08-898b-477d-88eb-3cf82e3696e7\" (UID: \"35498c08-898b-477d-88eb-3cf82e3696e7\") " Nov 22 07:54:35 crc kubenswrapper[4853]: I1122 07:54:35.778411 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/35498c08-898b-477d-88eb-3cf82e3696e7-ssh-key\") pod \"35498c08-898b-477d-88eb-3cf82e3696e7\" (UID: \"35498c08-898b-477d-88eb-3cf82e3696e7\") " Nov 22 07:54:35 crc kubenswrapper[4853]: I1122 07:54:35.778711 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/35498c08-898b-477d-88eb-3cf82e3696e7-inventory\") pod \"35498c08-898b-477d-88eb-3cf82e3696e7\" (UID: \"35498c08-898b-477d-88eb-3cf82e3696e7\") " Nov 22 07:54:35 crc kubenswrapper[4853]: I1122 07:54:35.784159 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35498c08-898b-477d-88eb-3cf82e3696e7-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "35498c08-898b-477d-88eb-3cf82e3696e7" (UID: "35498c08-898b-477d-88eb-3cf82e3696e7"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:54:35 crc kubenswrapper[4853]: I1122 07:54:35.785104 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/35498c08-898b-477d-88eb-3cf82e3696e7-kube-api-access-tl8pt" (OuterVolumeSpecName: "kube-api-access-tl8pt") pod "35498c08-898b-477d-88eb-3cf82e3696e7" (UID: "35498c08-898b-477d-88eb-3cf82e3696e7"). InnerVolumeSpecName "kube-api-access-tl8pt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:54:35 crc kubenswrapper[4853]: I1122 07:54:35.818333 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35498c08-898b-477d-88eb-3cf82e3696e7-inventory" (OuterVolumeSpecName: "inventory") pod "35498c08-898b-477d-88eb-3cf82e3696e7" (UID: "35498c08-898b-477d-88eb-3cf82e3696e7"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:54:35 crc kubenswrapper[4853]: I1122 07:54:35.820857 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35498c08-898b-477d-88eb-3cf82e3696e7-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "35498c08-898b-477d-88eb-3cf82e3696e7" (UID: "35498c08-898b-477d-88eb-3cf82e3696e7"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:54:35 crc kubenswrapper[4853]: I1122 07:54:35.885599 4853 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/35498c08-898b-477d-88eb-3cf82e3696e7-inventory\") on node \"crc\" DevicePath \"\"" Nov 22 07:54:35 crc kubenswrapper[4853]: I1122 07:54:35.885632 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tl8pt\" (UniqueName: \"kubernetes.io/projected/35498c08-898b-477d-88eb-3cf82e3696e7-kube-api-access-tl8pt\") on node \"crc\" DevicePath \"\"" Nov 22 07:54:35 crc kubenswrapper[4853]: I1122 07:54:35.885646 4853 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35498c08-898b-477d-88eb-3cf82e3696e7-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:54:35 crc kubenswrapper[4853]: I1122 07:54:35.885656 4853 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/35498c08-898b-477d-88eb-3cf82e3696e7-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 22 07:54:36 crc kubenswrapper[4853]: I1122 07:54:36.187550 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2xqrx" Nov 22 07:54:36 crc kubenswrapper[4853]: I1122 07:54:36.187547 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2xqrx" event={"ID":"35498c08-898b-477d-88eb-3cf82e3696e7","Type":"ContainerDied","Data":"0de0a70301eea1110024e54340c85fdadab0848608d7b34de03c6e5f345d151b"} Nov 22 07:54:36 crc kubenswrapper[4853]: I1122 07:54:36.187677 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0de0a70301eea1110024e54340c85fdadab0848608d7b34de03c6e5f345d151b" Nov 22 07:54:36 crc kubenswrapper[4853]: I1122 07:54:36.190543 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"c7cd6aaf-637c-4a17-b1ff-fe51acdd2641","Type":"ContainerDied","Data":"e5afdd11d36a1121a4de580b9e9dc191d1f0e290957fb13646fd8be338abba19"} Nov 22 07:54:36 crc kubenswrapper[4853]: I1122 07:54:36.190601 4853 scope.go:117] "RemoveContainer" containerID="1c25e8cabcdd6fd992c03e3b1b652252a0ce167545fa9b8be201b4fc99f726dd" Nov 22 07:54:36 crc kubenswrapper[4853]: I1122 07:54:36.190659 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Nov 22 07:54:36 crc kubenswrapper[4853]: I1122 07:54:36.246291 4853 scope.go:117] "RemoveContainer" containerID="00a8ee6e0ebba9439c54d670b6ee4fdd78d818cd734291e2bc74b1d5f9e1919a" Nov 22 07:54:36 crc kubenswrapper[4853]: I1122 07:54:36.251661 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Nov 22 07:54:36 crc kubenswrapper[4853]: I1122 07:54:36.271543 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-0"] Nov 22 07:54:36 crc kubenswrapper[4853]: I1122 07:54:36.294736 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Nov 22 07:54:36 crc kubenswrapper[4853]: E1122 07:54:36.295248 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7cd6aaf-637c-4a17-b1ff-fe51acdd2641" containerName="aodh-notifier" Nov 22 07:54:36 crc kubenswrapper[4853]: I1122 07:54:36.295259 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7cd6aaf-637c-4a17-b1ff-fe51acdd2641" containerName="aodh-notifier" Nov 22 07:54:36 crc kubenswrapper[4853]: E1122 07:54:36.295273 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9358cca5-2c9a-4ada-b9df-58fc71aa8fed" containerName="aodh-db-sync" Nov 22 07:54:36 crc kubenswrapper[4853]: I1122 07:54:36.295278 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="9358cca5-2c9a-4ada-b9df-58fc71aa8fed" containerName="aodh-db-sync" Nov 22 07:54:36 crc kubenswrapper[4853]: E1122 07:54:36.295299 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7cd6aaf-637c-4a17-b1ff-fe51acdd2641" containerName="aodh-evaluator" Nov 22 07:54:36 crc kubenswrapper[4853]: I1122 07:54:36.295304 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7cd6aaf-637c-4a17-b1ff-fe51acdd2641" containerName="aodh-evaluator" Nov 22 07:54:36 crc kubenswrapper[4853]: E1122 07:54:36.295321 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a4daab2-d15b-4492-9eea-05a2f6b753ef" containerName="heat-api" Nov 22 07:54:36 crc kubenswrapper[4853]: I1122 07:54:36.295329 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a4daab2-d15b-4492-9eea-05a2f6b753ef" containerName="heat-api" Nov 22 07:54:36 crc kubenswrapper[4853]: E1122 07:54:36.295347 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4194e8cf-31be-421c-9cac-b89a8a47f004" containerName="heat-engine" Nov 22 07:54:36 crc kubenswrapper[4853]: I1122 07:54:36.295354 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="4194e8cf-31be-421c-9cac-b89a8a47f004" containerName="heat-engine" Nov 22 07:54:36 crc kubenswrapper[4853]: E1122 07:54:36.295371 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7cd6aaf-637c-4a17-b1ff-fe51acdd2641" containerName="aodh-listener" Nov 22 07:54:36 crc kubenswrapper[4853]: I1122 07:54:36.295377 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7cd6aaf-637c-4a17-b1ff-fe51acdd2641" containerName="aodh-listener" Nov 22 07:54:36 crc kubenswrapper[4853]: E1122 07:54:36.295390 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35498c08-898b-477d-88eb-3cf82e3696e7" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Nov 22 07:54:36 crc kubenswrapper[4853]: I1122 07:54:36.295396 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="35498c08-898b-477d-88eb-3cf82e3696e7" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Nov 22 07:54:36 crc kubenswrapper[4853]: E1122 07:54:36.295407 4853 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="8ecb0aa1-3b8f-4b40-88ce-4bf61cb8ef3a" containerName="heat-cfnapi" Nov 22 07:54:36 crc kubenswrapper[4853]: I1122 07:54:36.295413 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ecb0aa1-3b8f-4b40-88ce-4bf61cb8ef3a" containerName="heat-cfnapi" Nov 22 07:54:36 crc kubenswrapper[4853]: E1122 07:54:36.295421 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7cd6aaf-637c-4a17-b1ff-fe51acdd2641" containerName="aodh-api" Nov 22 07:54:36 crc kubenswrapper[4853]: I1122 07:54:36.295426 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7cd6aaf-637c-4a17-b1ff-fe51acdd2641" containerName="aodh-api" Nov 22 07:54:36 crc kubenswrapper[4853]: I1122 07:54:36.295638 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="35498c08-898b-477d-88eb-3cf82e3696e7" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Nov 22 07:54:36 crc kubenswrapper[4853]: I1122 07:54:36.295658 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ecb0aa1-3b8f-4b40-88ce-4bf61cb8ef3a" containerName="heat-cfnapi" Nov 22 07:54:36 crc kubenswrapper[4853]: I1122 07:54:36.295666 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="9358cca5-2c9a-4ada-b9df-58fc71aa8fed" containerName="aodh-db-sync" Nov 22 07:54:36 crc kubenswrapper[4853]: I1122 07:54:36.295676 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a4daab2-d15b-4492-9eea-05a2f6b753ef" containerName="heat-api" Nov 22 07:54:36 crc kubenswrapper[4853]: I1122 07:54:36.295684 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7cd6aaf-637c-4a17-b1ff-fe51acdd2641" containerName="aodh-listener" Nov 22 07:54:36 crc kubenswrapper[4853]: I1122 07:54:36.295697 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7cd6aaf-637c-4a17-b1ff-fe51acdd2641" containerName="aodh-evaluator" Nov 22 07:54:36 crc kubenswrapper[4853]: I1122 07:54:36.295706 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7cd6aaf-637c-4a17-b1ff-fe51acdd2641" containerName="aodh-notifier" Nov 22 07:54:36 crc kubenswrapper[4853]: I1122 07:54:36.295719 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7cd6aaf-637c-4a17-b1ff-fe51acdd2641" containerName="aodh-api" Nov 22 07:54:36 crc kubenswrapper[4853]: I1122 07:54:36.295729 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="4194e8cf-31be-421c-9cac-b89a8a47f004" containerName="heat-engine" Nov 22 07:54:36 crc kubenswrapper[4853]: I1122 07:54:36.297896 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Nov 22 07:54:36 crc kubenswrapper[4853]: I1122 07:54:36.306208 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-hh6z8"] Nov 22 07:54:36 crc kubenswrapper[4853]: I1122 07:54:36.307947 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Nov 22 07:54:36 crc kubenswrapper[4853]: I1122 07:54:36.308062 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-jm7rg" Nov 22 07:54:36 crc kubenswrapper[4853]: I1122 07:54:36.308180 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Nov 22 07:54:36 crc kubenswrapper[4853]: I1122 07:54:36.308535 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-hh6z8" Nov 22 07:54:36 crc kubenswrapper[4853]: I1122 07:54:36.311035 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-internal-svc" Nov 22 07:54:36 crc kubenswrapper[4853]: I1122 07:54:36.311125 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-public-svc" Nov 22 07:54:36 crc kubenswrapper[4853]: I1122 07:54:36.311350 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 22 07:54:36 crc kubenswrapper[4853]: I1122 07:54:36.311820 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-km5tw" Nov 22 07:54:36 crc kubenswrapper[4853]: I1122 07:54:36.312151 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 22 07:54:36 crc kubenswrapper[4853]: I1122 07:54:36.312162 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 22 07:54:36 crc kubenswrapper[4853]: I1122 07:54:36.317964 4853 scope.go:117] "RemoveContainer" containerID="119076ac656be12ef9cbd92c02245e92909b2fd32c67e9bb318a59e7449657e3" Nov 22 07:54:36 crc kubenswrapper[4853]: I1122 07:54:36.318141 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Nov 22 07:54:36 crc kubenswrapper[4853]: I1122 07:54:36.349470 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-hh6z8"] Nov 22 07:54:36 crc kubenswrapper[4853]: I1122 07:54:36.364832 4853 scope.go:117] "RemoveContainer" containerID="73e660bb306fc1c79ba24154aac45e062fa34e65e8ff6fe2c6d7d8f494f7ecaf" Nov 22 07:54:36 crc kubenswrapper[4853]: I1122 07:54:36.398384 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/34172255-8ec0-4d57-97ab-0ec632e7ae64-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-hh6z8\" (UID: \"34172255-8ec0-4d57-97ab-0ec632e7ae64\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-hh6z8" Nov 22 07:54:36 crc kubenswrapper[4853]: I1122 07:54:36.398492 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9lcvn\" (UniqueName: \"kubernetes.io/projected/01a252c2-19bf-4c3d-83d6-685e0c49606d-kube-api-access-9lcvn\") pod \"aodh-0\" (UID: \"01a252c2-19bf-4c3d-83d6-685e0c49606d\") " pod="openstack/aodh-0" Nov 22 07:54:36 crc kubenswrapper[4853]: I1122 07:54:36.398532 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/01a252c2-19bf-4c3d-83d6-685e0c49606d-public-tls-certs\") pod \"aodh-0\" (UID: \"01a252c2-19bf-4c3d-83d6-685e0c49606d\") " pod="openstack/aodh-0" Nov 22 07:54:36 crc kubenswrapper[4853]: I1122 07:54:36.398567 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01a252c2-19bf-4c3d-83d6-685e0c49606d-combined-ca-bundle\") pod \"aodh-0\" (UID: \"01a252c2-19bf-4c3d-83d6-685e0c49606d\") " pod="openstack/aodh-0" Nov 22 07:54:36 crc kubenswrapper[4853]: I1122 07:54:36.399499 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-l9bdh\" (UniqueName: \"kubernetes.io/projected/34172255-8ec0-4d57-97ab-0ec632e7ae64-kube-api-access-l9bdh\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-hh6z8\" (UID: \"34172255-8ec0-4d57-97ab-0ec632e7ae64\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-hh6z8" Nov 22 07:54:36 crc kubenswrapper[4853]: I1122 07:54:36.399609 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/01a252c2-19bf-4c3d-83d6-685e0c49606d-internal-tls-certs\") pod \"aodh-0\" (UID: \"01a252c2-19bf-4c3d-83d6-685e0c49606d\") " pod="openstack/aodh-0" Nov 22 07:54:36 crc kubenswrapper[4853]: I1122 07:54:36.399703 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01a252c2-19bf-4c3d-83d6-685e0c49606d-config-data\") pod \"aodh-0\" (UID: \"01a252c2-19bf-4c3d-83d6-685e0c49606d\") " pod="openstack/aodh-0" Nov 22 07:54:36 crc kubenswrapper[4853]: I1122 07:54:36.399826 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/34172255-8ec0-4d57-97ab-0ec632e7ae64-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-hh6z8\" (UID: \"34172255-8ec0-4d57-97ab-0ec632e7ae64\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-hh6z8" Nov 22 07:54:36 crc kubenswrapper[4853]: I1122 07:54:36.399931 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/01a252c2-19bf-4c3d-83d6-685e0c49606d-scripts\") pod \"aodh-0\" (UID: \"01a252c2-19bf-4c3d-83d6-685e0c49606d\") " pod="openstack/aodh-0" Nov 22 07:54:36 crc kubenswrapper[4853]: I1122 07:54:36.502424 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l9bdh\" (UniqueName: \"kubernetes.io/projected/34172255-8ec0-4d57-97ab-0ec632e7ae64-kube-api-access-l9bdh\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-hh6z8\" (UID: \"34172255-8ec0-4d57-97ab-0ec632e7ae64\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-hh6z8" Nov 22 07:54:36 crc kubenswrapper[4853]: I1122 07:54:36.502482 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/01a252c2-19bf-4c3d-83d6-685e0c49606d-internal-tls-certs\") pod \"aodh-0\" (UID: \"01a252c2-19bf-4c3d-83d6-685e0c49606d\") " pod="openstack/aodh-0" Nov 22 07:54:36 crc kubenswrapper[4853]: I1122 07:54:36.502521 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01a252c2-19bf-4c3d-83d6-685e0c49606d-config-data\") pod \"aodh-0\" (UID: \"01a252c2-19bf-4c3d-83d6-685e0c49606d\") " pod="openstack/aodh-0" Nov 22 07:54:36 crc kubenswrapper[4853]: I1122 07:54:36.502559 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/34172255-8ec0-4d57-97ab-0ec632e7ae64-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-hh6z8\" (UID: \"34172255-8ec0-4d57-97ab-0ec632e7ae64\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-hh6z8" Nov 22 07:54:36 crc kubenswrapper[4853]: I1122 07:54:36.502597 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/01a252c2-19bf-4c3d-83d6-685e0c49606d-scripts\") pod \"aodh-0\" (UID: \"01a252c2-19bf-4c3d-83d6-685e0c49606d\") " pod="openstack/aodh-0" Nov 22 07:54:36 crc kubenswrapper[4853]: I1122 07:54:36.502655 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/34172255-8ec0-4d57-97ab-0ec632e7ae64-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-hh6z8\" (UID: \"34172255-8ec0-4d57-97ab-0ec632e7ae64\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-hh6z8" Nov 22 07:54:36 crc kubenswrapper[4853]: I1122 07:54:36.502704 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9lcvn\" (UniqueName: \"kubernetes.io/projected/01a252c2-19bf-4c3d-83d6-685e0c49606d-kube-api-access-9lcvn\") pod \"aodh-0\" (UID: \"01a252c2-19bf-4c3d-83d6-685e0c49606d\") " pod="openstack/aodh-0" Nov 22 07:54:36 crc kubenswrapper[4853]: I1122 07:54:36.502722 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/01a252c2-19bf-4c3d-83d6-685e0c49606d-public-tls-certs\") pod \"aodh-0\" (UID: \"01a252c2-19bf-4c3d-83d6-685e0c49606d\") " pod="openstack/aodh-0" Nov 22 07:54:36 crc kubenswrapper[4853]: I1122 07:54:36.502840 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01a252c2-19bf-4c3d-83d6-685e0c49606d-combined-ca-bundle\") pod \"aodh-0\" (UID: \"01a252c2-19bf-4c3d-83d6-685e0c49606d\") " pod="openstack/aodh-0" Nov 22 07:54:36 crc kubenswrapper[4853]: I1122 07:54:36.509403 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/01a252c2-19bf-4c3d-83d6-685e0c49606d-scripts\") pod \"aodh-0\" (UID: \"01a252c2-19bf-4c3d-83d6-685e0c49606d\") " pod="openstack/aodh-0" Nov 22 07:54:36 crc kubenswrapper[4853]: I1122 07:54:36.511631 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01a252c2-19bf-4c3d-83d6-685e0c49606d-combined-ca-bundle\") pod \"aodh-0\" (UID: \"01a252c2-19bf-4c3d-83d6-685e0c49606d\") " pod="openstack/aodh-0" Nov 22 07:54:36 crc kubenswrapper[4853]: I1122 07:54:36.512085 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/34172255-8ec0-4d57-97ab-0ec632e7ae64-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-hh6z8\" (UID: \"34172255-8ec0-4d57-97ab-0ec632e7ae64\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-hh6z8" Nov 22 07:54:36 crc kubenswrapper[4853]: I1122 07:54:36.512453 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/34172255-8ec0-4d57-97ab-0ec632e7ae64-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-hh6z8\" (UID: \"34172255-8ec0-4d57-97ab-0ec632e7ae64\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-hh6z8" Nov 22 07:54:36 crc kubenswrapper[4853]: I1122 07:54:36.514320 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/01a252c2-19bf-4c3d-83d6-685e0c49606d-public-tls-certs\") pod \"aodh-0\" (UID: \"01a252c2-19bf-4c3d-83d6-685e0c49606d\") " pod="openstack/aodh-0" Nov 22 07:54:36 crc kubenswrapper[4853]: I1122 07:54:36.514812 4853 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01a252c2-19bf-4c3d-83d6-685e0c49606d-config-data\") pod \"aodh-0\" (UID: \"01a252c2-19bf-4c3d-83d6-685e0c49606d\") " pod="openstack/aodh-0" Nov 22 07:54:36 crc kubenswrapper[4853]: I1122 07:54:36.517740 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/01a252c2-19bf-4c3d-83d6-685e0c49606d-internal-tls-certs\") pod \"aodh-0\" (UID: \"01a252c2-19bf-4c3d-83d6-685e0c49606d\") " pod="openstack/aodh-0" Nov 22 07:54:36 crc kubenswrapper[4853]: I1122 07:54:36.528801 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l9bdh\" (UniqueName: \"kubernetes.io/projected/34172255-8ec0-4d57-97ab-0ec632e7ae64-kube-api-access-l9bdh\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-hh6z8\" (UID: \"34172255-8ec0-4d57-97ab-0ec632e7ae64\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-hh6z8" Nov 22 07:54:36 crc kubenswrapper[4853]: I1122 07:54:36.529393 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9lcvn\" (UniqueName: \"kubernetes.io/projected/01a252c2-19bf-4c3d-83d6-685e0c49606d-kube-api-access-9lcvn\") pod \"aodh-0\" (UID: \"01a252c2-19bf-4c3d-83d6-685e0c49606d\") " pod="openstack/aodh-0" Nov 22 07:54:36 crc kubenswrapper[4853]: I1122 07:54:36.638612 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Nov 22 07:54:36 crc kubenswrapper[4853]: I1122 07:54:36.656044 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-hh6z8" Nov 22 07:54:37 crc kubenswrapper[4853]: W1122 07:54:37.200381 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod01a252c2_19bf_4c3d_83d6_685e0c49606d.slice/crio-d92ce32d63d7cc46a5476d4a0eee5687ef7bd40d7063a5a0df640aa652de763d WatchSource:0}: Error finding container d92ce32d63d7cc46a5476d4a0eee5687ef7bd40d7063a5a0df640aa652de763d: Status 404 returned error can't find the container with id d92ce32d63d7cc46a5476d4a0eee5687ef7bd40d7063a5a0df640aa652de763d Nov 22 07:54:37 crc kubenswrapper[4853]: I1122 07:54:37.208474 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Nov 22 07:54:37 crc kubenswrapper[4853]: I1122 07:54:37.319227 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-hh6z8"] Nov 22 07:54:37 crc kubenswrapper[4853]: I1122 07:54:37.763856 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c7cd6aaf-637c-4a17-b1ff-fe51acdd2641" path="/var/lib/kubelet/pods/c7cd6aaf-637c-4a17-b1ff-fe51acdd2641/volumes" Nov 22 07:54:38 crc kubenswrapper[4853]: I1122 07:54:38.221936 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"01a252c2-19bf-4c3d-83d6-685e0c49606d","Type":"ContainerStarted","Data":"91f0e376f6f8e64972b561b9d1b32099444c440dc0046eacbc338a32322120dd"} Nov 22 07:54:38 crc kubenswrapper[4853]: I1122 07:54:38.222370 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"01a252c2-19bf-4c3d-83d6-685e0c49606d","Type":"ContainerStarted","Data":"d92ce32d63d7cc46a5476d4a0eee5687ef7bd40d7063a5a0df640aa652de763d"} Nov 22 07:54:38 crc kubenswrapper[4853]: I1122 07:54:38.224196 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-hh6z8" event={"ID":"34172255-8ec0-4d57-97ab-0ec632e7ae64","Type":"ContainerStarted","Data":"6ba6121b7c124c894b01ed80e15082c451311a1f4f9aa7580551bfe308b03c66"} Nov 22 07:54:38 crc kubenswrapper[4853]: I1122 07:54:38.224256 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-hh6z8" event={"ID":"34172255-8ec0-4d57-97ab-0ec632e7ae64","Type":"ContainerStarted","Data":"a433c5959cec8acabca2bf634bf609d6b195df030f141e99d12382dda418c3d3"} Nov 22 07:54:38 crc kubenswrapper[4853]: I1122 07:54:38.276664 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-hh6z8" podStartSLOduration=1.695447911 podStartE2EDuration="2.276642996s" podCreationTimestamp="2025-11-22 07:54:36 +0000 UTC" firstStartedPulling="2025-11-22 07:54:37.321668005 +0000 UTC m=+2676.162290631" lastFinishedPulling="2025-11-22 07:54:37.90286309 +0000 UTC m=+2676.743485716" observedRunningTime="2025-11-22 07:54:38.243938617 +0000 UTC m=+2677.084561253" watchObservedRunningTime="2025-11-22 07:54:38.276642996 +0000 UTC m=+2677.117265642" Nov 22 07:54:38 crc kubenswrapper[4853]: I1122 07:54:38.748314 4853 scope.go:117] "RemoveContainer" containerID="f1c0ac83a6a857bb9ac64bb048075d436003939992592a9a9b2d45101ef2a2de" Nov 22 07:54:38 crc kubenswrapper[4853]: E1122 07:54:38.748702 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" Nov 22 07:54:42 crc kubenswrapper[4853]: I1122 07:54:42.271798 4853 generic.go:334] "Generic (PLEG): container finished" podID="34172255-8ec0-4d57-97ab-0ec632e7ae64" containerID="6ba6121b7c124c894b01ed80e15082c451311a1f4f9aa7580551bfe308b03c66" exitCode=0 Nov 22 07:54:42 crc kubenswrapper[4853]: I1122 07:54:42.271861 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-hh6z8" event={"ID":"34172255-8ec0-4d57-97ab-0ec632e7ae64","Type":"ContainerDied","Data":"6ba6121b7c124c894b01ed80e15082c451311a1f4f9aa7580551bfe308b03c66"} Nov 22 07:54:43 crc kubenswrapper[4853]: I1122 07:54:43.287007 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"01a252c2-19bf-4c3d-83d6-685e0c49606d","Type":"ContainerStarted","Data":"6a58e889778de8715585b273bb62cb1903ec2a8c16946aed51820c417b4684c2"} Nov 22 07:54:44 crc kubenswrapper[4853]: I1122 07:54:44.230293 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-hh6z8" Nov 22 07:54:44 crc kubenswrapper[4853]: I1122 07:54:44.301469 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-hh6z8" event={"ID":"34172255-8ec0-4d57-97ab-0ec632e7ae64","Type":"ContainerDied","Data":"a433c5959cec8acabca2bf634bf609d6b195df030f141e99d12382dda418c3d3"} Nov 22 07:54:44 crc kubenswrapper[4853]: I1122 07:54:44.302425 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a433c5959cec8acabca2bf634bf609d6b195df030f141e99d12382dda418c3d3" Nov 22 07:54:44 crc kubenswrapper[4853]: I1122 07:54:44.301581 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-hh6z8" Nov 22 07:54:44 crc kubenswrapper[4853]: I1122 07:54:44.338942 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/34172255-8ec0-4d57-97ab-0ec632e7ae64-ssh-key\") pod \"34172255-8ec0-4d57-97ab-0ec632e7ae64\" (UID: \"34172255-8ec0-4d57-97ab-0ec632e7ae64\") " Nov 22 07:54:44 crc kubenswrapper[4853]: I1122 07:54:44.339191 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l9bdh\" (UniqueName: \"kubernetes.io/projected/34172255-8ec0-4d57-97ab-0ec632e7ae64-kube-api-access-l9bdh\") pod \"34172255-8ec0-4d57-97ab-0ec632e7ae64\" (UID: \"34172255-8ec0-4d57-97ab-0ec632e7ae64\") " Nov 22 07:54:44 crc kubenswrapper[4853]: I1122 07:54:44.339364 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/34172255-8ec0-4d57-97ab-0ec632e7ae64-inventory\") pod \"34172255-8ec0-4d57-97ab-0ec632e7ae64\" (UID: \"34172255-8ec0-4d57-97ab-0ec632e7ae64\") " Nov 22 07:54:44 crc kubenswrapper[4853]: I1122 07:54:44.353808 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34172255-8ec0-4d57-97ab-0ec632e7ae64-kube-api-access-l9bdh" (OuterVolumeSpecName: "kube-api-access-l9bdh") pod "34172255-8ec0-4d57-97ab-0ec632e7ae64" (UID: "34172255-8ec0-4d57-97ab-0ec632e7ae64"). InnerVolumeSpecName "kube-api-access-l9bdh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:54:44 crc kubenswrapper[4853]: I1122 07:54:44.374353 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bdlzq"] Nov 22 07:54:44 crc kubenswrapper[4853]: E1122 07:54:44.378341 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34172255-8ec0-4d57-97ab-0ec632e7ae64" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Nov 22 07:54:44 crc kubenswrapper[4853]: I1122 07:54:44.378374 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="34172255-8ec0-4d57-97ab-0ec632e7ae64" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Nov 22 07:54:44 crc kubenswrapper[4853]: I1122 07:54:44.378608 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="34172255-8ec0-4d57-97ab-0ec632e7ae64" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Nov 22 07:54:44 crc kubenswrapper[4853]: I1122 07:54:44.379435 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bdlzq" Nov 22 07:54:44 crc kubenswrapper[4853]: I1122 07:54:44.400507 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34172255-8ec0-4d57-97ab-0ec632e7ae64-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "34172255-8ec0-4d57-97ab-0ec632e7ae64" (UID: "34172255-8ec0-4d57-97ab-0ec632e7ae64"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:54:44 crc kubenswrapper[4853]: I1122 07:54:44.406841 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bdlzq"] Nov 22 07:54:44 crc kubenswrapper[4853]: I1122 07:54:44.416929 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34172255-8ec0-4d57-97ab-0ec632e7ae64-inventory" (OuterVolumeSpecName: "inventory") pod "34172255-8ec0-4d57-97ab-0ec632e7ae64" (UID: "34172255-8ec0-4d57-97ab-0ec632e7ae64"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:54:44 crc kubenswrapper[4853]: I1122 07:54:44.442265 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/134d3ebf-3b18-46f5-b30e-7856a1a6bc6a-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-bdlzq\" (UID: \"134d3ebf-3b18-46f5-b30e-7856a1a6bc6a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bdlzq" Nov 22 07:54:44 crc kubenswrapper[4853]: I1122 07:54:44.442490 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wp5l9\" (UniqueName: \"kubernetes.io/projected/134d3ebf-3b18-46f5-b30e-7856a1a6bc6a-kube-api-access-wp5l9\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-bdlzq\" (UID: \"134d3ebf-3b18-46f5-b30e-7856a1a6bc6a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bdlzq" Nov 22 07:54:44 crc kubenswrapper[4853]: I1122 07:54:44.442515 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/134d3ebf-3b18-46f5-b30e-7856a1a6bc6a-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-bdlzq\" (UID: \"134d3ebf-3b18-46f5-b30e-7856a1a6bc6a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bdlzq" Nov 22 07:54:44 crc kubenswrapper[4853]: I1122 07:54:44.442565 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/134d3ebf-3b18-46f5-b30e-7856a1a6bc6a-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-bdlzq\" (UID: \"134d3ebf-3b18-46f5-b30e-7856a1a6bc6a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bdlzq" Nov 22 07:54:44 crc kubenswrapper[4853]: I1122 07:54:44.442693 4853 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/34172255-8ec0-4d57-97ab-0ec632e7ae64-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 22 07:54:44 crc kubenswrapper[4853]: I1122 07:54:44.442717 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l9bdh\" (UniqueName: \"kubernetes.io/projected/34172255-8ec0-4d57-97ab-0ec632e7ae64-kube-api-access-l9bdh\") on node \"crc\" DevicePath \"\"" Nov 22 07:54:44 crc kubenswrapper[4853]: I1122 07:54:44.442729 4853 
reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/34172255-8ec0-4d57-97ab-0ec632e7ae64-inventory\") on node \"crc\" DevicePath \"\"" Nov 22 07:54:44 crc kubenswrapper[4853]: I1122 07:54:44.547007 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wp5l9\" (UniqueName: \"kubernetes.io/projected/134d3ebf-3b18-46f5-b30e-7856a1a6bc6a-kube-api-access-wp5l9\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-bdlzq\" (UID: \"134d3ebf-3b18-46f5-b30e-7856a1a6bc6a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bdlzq" Nov 22 07:54:44 crc kubenswrapper[4853]: I1122 07:54:44.547099 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/134d3ebf-3b18-46f5-b30e-7856a1a6bc6a-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-bdlzq\" (UID: \"134d3ebf-3b18-46f5-b30e-7856a1a6bc6a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bdlzq" Nov 22 07:54:44 crc kubenswrapper[4853]: I1122 07:54:44.547391 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/134d3ebf-3b18-46f5-b30e-7856a1a6bc6a-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-bdlzq\" (UID: \"134d3ebf-3b18-46f5-b30e-7856a1a6bc6a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bdlzq" Nov 22 07:54:44 crc kubenswrapper[4853]: I1122 07:54:44.547574 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/134d3ebf-3b18-46f5-b30e-7856a1a6bc6a-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-bdlzq\" (UID: \"134d3ebf-3b18-46f5-b30e-7856a1a6bc6a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bdlzq" Nov 22 07:54:44 crc kubenswrapper[4853]: I1122 07:54:44.552438 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/134d3ebf-3b18-46f5-b30e-7856a1a6bc6a-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-bdlzq\" (UID: \"134d3ebf-3b18-46f5-b30e-7856a1a6bc6a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bdlzq" Nov 22 07:54:44 crc kubenswrapper[4853]: I1122 07:54:44.556341 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/134d3ebf-3b18-46f5-b30e-7856a1a6bc6a-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-bdlzq\" (UID: \"134d3ebf-3b18-46f5-b30e-7856a1a6bc6a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bdlzq" Nov 22 07:54:44 crc kubenswrapper[4853]: I1122 07:54:44.561780 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/134d3ebf-3b18-46f5-b30e-7856a1a6bc6a-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-bdlzq\" (UID: \"134d3ebf-3b18-46f5-b30e-7856a1a6bc6a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bdlzq" Nov 22 07:54:44 crc kubenswrapper[4853]: I1122 07:54:44.570510 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wp5l9\" (UniqueName: \"kubernetes.io/projected/134d3ebf-3b18-46f5-b30e-7856a1a6bc6a-kube-api-access-wp5l9\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-bdlzq\" (UID: 
\"134d3ebf-3b18-46f5-b30e-7856a1a6bc6a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bdlzq" Nov 22 07:54:44 crc kubenswrapper[4853]: I1122 07:54:44.590704 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bdlzq" Nov 22 07:54:45 crc kubenswrapper[4853]: I1122 07:54:45.197406 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bdlzq"] Nov 22 07:54:46 crc kubenswrapper[4853]: I1122 07:54:45.314470 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bdlzq" event={"ID":"134d3ebf-3b18-46f5-b30e-7856a1a6bc6a","Type":"ContainerStarted","Data":"72046e3a1578750148c1107dc143ed80fb9382f6585d0bb2b800f69ea0e9b4fa"} Nov 22 07:54:46 crc kubenswrapper[4853]: I1122 07:54:45.317216 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"01a252c2-19bf-4c3d-83d6-685e0c49606d","Type":"ContainerStarted","Data":"e1e6f9bc0c0e56097e556cc6eb170af9ddef2fe2bf8051c8684180cb6527146e"} Nov 22 07:54:47 crc kubenswrapper[4853]: I1122 07:54:47.354628 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bdlzq" event={"ID":"134d3ebf-3b18-46f5-b30e-7856a1a6bc6a","Type":"ContainerStarted","Data":"b3550429ae564b456e2917cfdf157241636d1d641b31af4c32bb1b3df4482c1d"} Nov 22 07:54:47 crc kubenswrapper[4853]: I1122 07:54:47.367916 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"01a252c2-19bf-4c3d-83d6-685e0c49606d","Type":"ContainerStarted","Data":"74be184da14ced7ee3a112112d7ee551590bd4077aea7feb323c9acc27e62d58"} Nov 22 07:54:47 crc kubenswrapper[4853]: I1122 07:54:47.387635 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bdlzq" podStartSLOduration=2.130822321 podStartE2EDuration="3.387613995s" podCreationTimestamp="2025-11-22 07:54:44 +0000 UTC" firstStartedPulling="2025-11-22 07:54:45.204820857 +0000 UTC m=+2684.045443493" lastFinishedPulling="2025-11-22 07:54:46.461612541 +0000 UTC m=+2685.302235167" observedRunningTime="2025-11-22 07:54:47.37926982 +0000 UTC m=+2686.219892446" watchObservedRunningTime="2025-11-22 07:54:47.387613995 +0000 UTC m=+2686.228236621" Nov 22 07:54:47 crc kubenswrapper[4853]: I1122 07:54:47.412163 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=2.158801842 podStartE2EDuration="11.412143213s" podCreationTimestamp="2025-11-22 07:54:36 +0000 UTC" firstStartedPulling="2025-11-22 07:54:37.208930748 +0000 UTC m=+2676.049553374" lastFinishedPulling="2025-11-22 07:54:46.462272109 +0000 UTC m=+2685.302894745" observedRunningTime="2025-11-22 07:54:47.407618001 +0000 UTC m=+2686.248240637" watchObservedRunningTime="2025-11-22 07:54:47.412143213 +0000 UTC m=+2686.252765839" Nov 22 07:54:50 crc kubenswrapper[4853]: I1122 07:54:50.748133 4853 scope.go:117] "RemoveContainer" containerID="f1c0ac83a6a857bb9ac64bb048075d436003939992592a9a9b2d45101ef2a2de" Nov 22 07:54:50 crc kubenswrapper[4853]: E1122 07:54:50.749134 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" Nov 22 07:54:57 crc kubenswrapper[4853]: I1122 07:54:57.863366 4853 scope.go:117] "RemoveContainer" containerID="151cef4835ee16ee65aa991da3fb752907af13b30ebe8f99644e65ab26befcf2" Nov 22 07:54:58 crc kubenswrapper[4853]: I1122 07:54:58.003853 4853 scope.go:117] "RemoveContainer" containerID="5154f2248032d3b204066e6d3d4b29c26f050e50ddfa355693e793433a3a28e6" Nov 22 07:54:58 crc kubenswrapper[4853]: I1122 07:54:58.265845 4853 scope.go:117] "RemoveContainer" containerID="6687c913de7d55c949e6989f4e689e8419afbc7e2e9a9c1870d27fdcc48c5932" Nov 22 07:54:58 crc kubenswrapper[4853]: I1122 07:54:58.370471 4853 scope.go:117] "RemoveContainer" containerID="ace2b700e3d3a46d1d7ea675ab99d8a413032344f6da2811f7ecc40159e7e333" Nov 22 07:54:58 crc kubenswrapper[4853]: I1122 07:54:58.520249 4853 scope.go:117] "RemoveContainer" containerID="d918d370ed863051395cd254e6a85946d7ea846df44e4c12a1bb09910d401cf5" Nov 22 07:54:58 crc kubenswrapper[4853]: I1122 07:54:58.563509 4853 scope.go:117] "RemoveContainer" containerID="2a0ae2704879a3a68f2954be6b02bdf86ba699bf28df386ae8933fb182211f48" Nov 22 07:55:02 crc kubenswrapper[4853]: I1122 07:55:02.747900 4853 scope.go:117] "RemoveContainer" containerID="f1c0ac83a6a857bb9ac64bb048075d436003939992592a9a9b2d45101ef2a2de" Nov 22 07:55:02 crc kubenswrapper[4853]: E1122 07:55:02.748814 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" Nov 22 07:55:04 crc kubenswrapper[4853]: I1122 07:55:04.066697 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-twqq5"] Nov 22 07:55:04 crc kubenswrapper[4853]: I1122 07:55:04.081542 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-twqq5"] Nov 22 07:55:05 crc kubenswrapper[4853]: I1122 07:55:05.766046 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e2dc7c1e-0083-4eab-80f2-eec435f5c97a" path="/var/lib/kubelet/pods/e2dc7c1e-0083-4eab-80f2-eec435f5c97a/volumes" Nov 22 07:55:17 crc kubenswrapper[4853]: I1122 07:55:17.748288 4853 scope.go:117] "RemoveContainer" containerID="f1c0ac83a6a857bb9ac64bb048075d436003939992592a9a9b2d45101ef2a2de" Nov 22 07:55:17 crc kubenswrapper[4853]: E1122 07:55:17.750765 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" Nov 22 07:55:25 crc kubenswrapper[4853]: I1122 07:55:25.039294 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-mnxvk"] Nov 22 07:55:25 crc kubenswrapper[4853]: I1122 07:55:25.049541 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-mnxvk"] Nov 22 07:55:25 crc 
kubenswrapper[4853]: I1122 07:55:25.775104 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a62df165-8b5f-48a0-823f-91a3517b8082" path="/var/lib/kubelet/pods/a62df165-8b5f-48a0-823f-91a3517b8082/volumes" Nov 22 07:55:32 crc kubenswrapper[4853]: I1122 07:55:32.748224 4853 scope.go:117] "RemoveContainer" containerID="f1c0ac83a6a857bb9ac64bb048075d436003939992592a9a9b2d45101ef2a2de" Nov 22 07:55:32 crc kubenswrapper[4853]: E1122 07:55:32.749045 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" Nov 22 07:55:47 crc kubenswrapper[4853]: I1122 07:55:47.747866 4853 scope.go:117] "RemoveContainer" containerID="f1c0ac83a6a857bb9ac64bb048075d436003939992592a9a9b2d45101ef2a2de" Nov 22 07:55:47 crc kubenswrapper[4853]: E1122 07:55:47.749011 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" Nov 22 07:55:58 crc kubenswrapper[4853]: I1122 07:55:58.834006 4853 scope.go:117] "RemoveContainer" containerID="20af5ec328f6909943bbc0870f254b43a734f0691be83697769a97b8f6d3ddd2" Nov 22 07:55:58 crc kubenswrapper[4853]: I1122 07:55:58.859960 4853 scope.go:117] "RemoveContainer" containerID="9802cd6c826da0ea11aa1ae79ac99b721e6b1b46faba4d37eab52a33a3957907" Nov 22 07:55:58 crc kubenswrapper[4853]: I1122 07:55:58.920313 4853 scope.go:117] "RemoveContainer" containerID="8e402bba5063452336a420c74fd7026f9c23745dbe2a7f14a8f11f4f18d9b651" Nov 22 07:55:58 crc kubenswrapper[4853]: I1122 07:55:58.950912 4853 scope.go:117] "RemoveContainer" containerID="555ddf966aa4207870cf3c77619e1ded7dc1792697c2a7dce08ae0fc0db92841" Nov 22 07:56:00 crc kubenswrapper[4853]: I1122 07:56:00.748549 4853 scope.go:117] "RemoveContainer" containerID="f1c0ac83a6a857bb9ac64bb048075d436003939992592a9a9b2d45101ef2a2de" Nov 22 07:56:00 crc kubenswrapper[4853]: E1122 07:56:00.749212 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" Nov 22 07:56:07 crc kubenswrapper[4853]: I1122 07:56:07.122807 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-9pcqw"] Nov 22 07:56:07 crc kubenswrapper[4853]: I1122 07:56:07.126950 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9pcqw" Nov 22 07:56:07 crc kubenswrapper[4853]: I1122 07:56:07.167383 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7f224b03-de77-4355-8d54-0d13344ea5cb-catalog-content\") pod \"redhat-marketplace-9pcqw\" (UID: \"7f224b03-de77-4355-8d54-0d13344ea5cb\") " pod="openshift-marketplace/redhat-marketplace-9pcqw" Nov 22 07:56:07 crc kubenswrapper[4853]: I1122 07:56:07.167881 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7f224b03-de77-4355-8d54-0d13344ea5cb-utilities\") pod \"redhat-marketplace-9pcqw\" (UID: \"7f224b03-de77-4355-8d54-0d13344ea5cb\") " pod="openshift-marketplace/redhat-marketplace-9pcqw" Nov 22 07:56:07 crc kubenswrapper[4853]: I1122 07:56:07.167957 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zkbj2\" (UniqueName: \"kubernetes.io/projected/7f224b03-de77-4355-8d54-0d13344ea5cb-kube-api-access-zkbj2\") pod \"redhat-marketplace-9pcqw\" (UID: \"7f224b03-de77-4355-8d54-0d13344ea5cb\") " pod="openshift-marketplace/redhat-marketplace-9pcqw" Nov 22 07:56:07 crc kubenswrapper[4853]: I1122 07:56:07.211833 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9pcqw"] Nov 22 07:56:07 crc kubenswrapper[4853]: I1122 07:56:07.269410 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7f224b03-de77-4355-8d54-0d13344ea5cb-utilities\") pod \"redhat-marketplace-9pcqw\" (UID: \"7f224b03-de77-4355-8d54-0d13344ea5cb\") " pod="openshift-marketplace/redhat-marketplace-9pcqw" Nov 22 07:56:07 crc kubenswrapper[4853]: I1122 07:56:07.269707 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zkbj2\" (UniqueName: \"kubernetes.io/projected/7f224b03-de77-4355-8d54-0d13344ea5cb-kube-api-access-zkbj2\") pod \"redhat-marketplace-9pcqw\" (UID: \"7f224b03-de77-4355-8d54-0d13344ea5cb\") " pod="openshift-marketplace/redhat-marketplace-9pcqw" Nov 22 07:56:07 crc kubenswrapper[4853]: I1122 07:56:07.269966 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7f224b03-de77-4355-8d54-0d13344ea5cb-catalog-content\") pod \"redhat-marketplace-9pcqw\" (UID: \"7f224b03-de77-4355-8d54-0d13344ea5cb\") " pod="openshift-marketplace/redhat-marketplace-9pcqw" Nov 22 07:56:07 crc kubenswrapper[4853]: I1122 07:56:07.269987 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7f224b03-de77-4355-8d54-0d13344ea5cb-utilities\") pod \"redhat-marketplace-9pcqw\" (UID: \"7f224b03-de77-4355-8d54-0d13344ea5cb\") " pod="openshift-marketplace/redhat-marketplace-9pcqw" Nov 22 07:56:07 crc kubenswrapper[4853]: I1122 07:56:07.270241 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7f224b03-de77-4355-8d54-0d13344ea5cb-catalog-content\") pod \"redhat-marketplace-9pcqw\" (UID: \"7f224b03-de77-4355-8d54-0d13344ea5cb\") " pod="openshift-marketplace/redhat-marketplace-9pcqw" Nov 22 07:56:07 crc kubenswrapper[4853]: I1122 07:56:07.299983 4853 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-zkbj2\" (UniqueName: \"kubernetes.io/projected/7f224b03-de77-4355-8d54-0d13344ea5cb-kube-api-access-zkbj2\") pod \"redhat-marketplace-9pcqw\" (UID: \"7f224b03-de77-4355-8d54-0d13344ea5cb\") " pod="openshift-marketplace/redhat-marketplace-9pcqw" Nov 22 07:56:07 crc kubenswrapper[4853]: I1122 07:56:07.457890 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9pcqw" Nov 22 07:56:07 crc kubenswrapper[4853]: I1122 07:56:07.988360 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9pcqw"] Nov 22 07:56:07 crc kubenswrapper[4853]: W1122 07:56:07.990385 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7f224b03_de77_4355_8d54_0d13344ea5cb.slice/crio-4f37c6902acaf60bc25fdc919446af7fb8166b3d9197dab84c7849532171ac08 WatchSource:0}: Error finding container 4f37c6902acaf60bc25fdc919446af7fb8166b3d9197dab84c7849532171ac08: Status 404 returned error can't find the container with id 4f37c6902acaf60bc25fdc919446af7fb8166b3d9197dab84c7849532171ac08 Nov 22 07:56:08 crc kubenswrapper[4853]: I1122 07:56:08.416953 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9pcqw" event={"ID":"7f224b03-de77-4355-8d54-0d13344ea5cb","Type":"ContainerStarted","Data":"b9a02ec9c71e36b5a5fea60e2b71703db3d0a7cbde2e95361087fd3570d5b017"} Nov 22 07:56:08 crc kubenswrapper[4853]: I1122 07:56:08.416996 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9pcqw" event={"ID":"7f224b03-de77-4355-8d54-0d13344ea5cb","Type":"ContainerStarted","Data":"4f37c6902acaf60bc25fdc919446af7fb8166b3d9197dab84c7849532171ac08"} Nov 22 07:56:09 crc kubenswrapper[4853]: I1122 07:56:09.429350 4853 generic.go:334] "Generic (PLEG): container finished" podID="7f224b03-de77-4355-8d54-0d13344ea5cb" containerID="b9a02ec9c71e36b5a5fea60e2b71703db3d0a7cbde2e95361087fd3570d5b017" exitCode=0 Nov 22 07:56:09 crc kubenswrapper[4853]: I1122 07:56:09.429428 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9pcqw" event={"ID":"7f224b03-de77-4355-8d54-0d13344ea5cb","Type":"ContainerDied","Data":"b9a02ec9c71e36b5a5fea60e2b71703db3d0a7cbde2e95361087fd3570d5b017"} Nov 22 07:56:09 crc kubenswrapper[4853]: I1122 07:56:09.432063 4853 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 22 07:56:11 crc kubenswrapper[4853]: I1122 07:56:11.455616 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9pcqw" event={"ID":"7f224b03-de77-4355-8d54-0d13344ea5cb","Type":"ContainerStarted","Data":"220bf2c31587e11941cd379228418c10aad5a78d087483097057f4e2beabf8e7"} Nov 22 07:56:12 crc kubenswrapper[4853]: I1122 07:56:12.472052 4853 generic.go:334] "Generic (PLEG): container finished" podID="7f224b03-de77-4355-8d54-0d13344ea5cb" containerID="220bf2c31587e11941cd379228418c10aad5a78d087483097057f4e2beabf8e7" exitCode=0 Nov 22 07:56:12 crc kubenswrapper[4853]: I1122 07:56:12.472113 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9pcqw" event={"ID":"7f224b03-de77-4355-8d54-0d13344ea5cb","Type":"ContainerDied","Data":"220bf2c31587e11941cd379228418c10aad5a78d087483097057f4e2beabf8e7"} Nov 22 07:56:13 crc kubenswrapper[4853]: I1122 07:56:13.489472 4853 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9pcqw" event={"ID":"7f224b03-de77-4355-8d54-0d13344ea5cb","Type":"ContainerStarted","Data":"fb916ec7786ebd4be526a731fceb782c5e0a6aef7a17c519b0e436df98f4bc38"} Nov 22 07:56:13 crc kubenswrapper[4853]: I1122 07:56:13.511967 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-9pcqw" podStartSLOduration=2.97327306 podStartE2EDuration="6.511946443s" podCreationTimestamp="2025-11-22 07:56:07 +0000 UTC" firstStartedPulling="2025-11-22 07:56:09.431796461 +0000 UTC m=+2768.272419087" lastFinishedPulling="2025-11-22 07:56:12.970469834 +0000 UTC m=+2771.811092470" observedRunningTime="2025-11-22 07:56:13.50888074 +0000 UTC m=+2772.349503386" watchObservedRunningTime="2025-11-22 07:56:13.511946443 +0000 UTC m=+2772.352569069" Nov 22 07:56:13 crc kubenswrapper[4853]: I1122 07:56:13.748864 4853 scope.go:117] "RemoveContainer" containerID="f1c0ac83a6a857bb9ac64bb048075d436003939992592a9a9b2d45101ef2a2de" Nov 22 07:56:13 crc kubenswrapper[4853]: E1122 07:56:13.749556 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" Nov 22 07:56:17 crc kubenswrapper[4853]: I1122 07:56:17.458189 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-9pcqw" Nov 22 07:56:17 crc kubenswrapper[4853]: I1122 07:56:17.460072 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-9pcqw" Nov 22 07:56:17 crc kubenswrapper[4853]: I1122 07:56:17.520129 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-9pcqw" Nov 22 07:56:24 crc kubenswrapper[4853]: I1122 07:56:24.747812 4853 scope.go:117] "RemoveContainer" containerID="f1c0ac83a6a857bb9ac64bb048075d436003939992592a9a9b2d45101ef2a2de" Nov 22 07:56:24 crc kubenswrapper[4853]: E1122 07:56:24.748589 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" Nov 22 07:56:27 crc kubenswrapper[4853]: I1122 07:56:27.539009 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-9pcqw" Nov 22 07:56:27 crc kubenswrapper[4853]: I1122 07:56:27.605646 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9pcqw"] Nov 22 07:56:27 crc kubenswrapper[4853]: I1122 07:56:27.655247 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-9pcqw" podUID="7f224b03-de77-4355-8d54-0d13344ea5cb" containerName="registry-server" containerID="cri-o://fb916ec7786ebd4be526a731fceb782c5e0a6aef7a17c519b0e436df98f4bc38" gracePeriod=2 Nov 22 07:56:29 crc 
kubenswrapper[4853]: I1122 07:56:29.680013 4853 generic.go:334] "Generic (PLEG): container finished" podID="7f224b03-de77-4355-8d54-0d13344ea5cb" containerID="fb916ec7786ebd4be526a731fceb782c5e0a6aef7a17c519b0e436df98f4bc38" exitCode=0 Nov 22 07:56:29 crc kubenswrapper[4853]: I1122 07:56:29.680054 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9pcqw" event={"ID":"7f224b03-de77-4355-8d54-0d13344ea5cb","Type":"ContainerDied","Data":"fb916ec7786ebd4be526a731fceb782c5e0a6aef7a17c519b0e436df98f4bc38"} Nov 22 07:56:30 crc kubenswrapper[4853]: I1122 07:56:30.695163 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9pcqw" event={"ID":"7f224b03-de77-4355-8d54-0d13344ea5cb","Type":"ContainerDied","Data":"4f37c6902acaf60bc25fdc919446af7fb8166b3d9197dab84c7849532171ac08"} Nov 22 07:56:30 crc kubenswrapper[4853]: I1122 07:56:30.695571 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4f37c6902acaf60bc25fdc919446af7fb8166b3d9197dab84c7849532171ac08" Nov 22 07:56:30 crc kubenswrapper[4853]: I1122 07:56:30.752400 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9pcqw" Nov 22 07:56:30 crc kubenswrapper[4853]: I1122 07:56:30.861628 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkbj2\" (UniqueName: \"kubernetes.io/projected/7f224b03-de77-4355-8d54-0d13344ea5cb-kube-api-access-zkbj2\") pod \"7f224b03-de77-4355-8d54-0d13344ea5cb\" (UID: \"7f224b03-de77-4355-8d54-0d13344ea5cb\") " Nov 22 07:56:30 crc kubenswrapper[4853]: I1122 07:56:30.861784 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7f224b03-de77-4355-8d54-0d13344ea5cb-catalog-content\") pod \"7f224b03-de77-4355-8d54-0d13344ea5cb\" (UID: \"7f224b03-de77-4355-8d54-0d13344ea5cb\") " Nov 22 07:56:30 crc kubenswrapper[4853]: I1122 07:56:30.861896 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7f224b03-de77-4355-8d54-0d13344ea5cb-utilities\") pod \"7f224b03-de77-4355-8d54-0d13344ea5cb\" (UID: \"7f224b03-de77-4355-8d54-0d13344ea5cb\") " Nov 22 07:56:30 crc kubenswrapper[4853]: I1122 07:56:30.862788 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7f224b03-de77-4355-8d54-0d13344ea5cb-utilities" (OuterVolumeSpecName: "utilities") pod "7f224b03-de77-4355-8d54-0d13344ea5cb" (UID: "7f224b03-de77-4355-8d54-0d13344ea5cb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:56:30 crc kubenswrapper[4853]: I1122 07:56:30.871537 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f224b03-de77-4355-8d54-0d13344ea5cb-kube-api-access-zkbj2" (OuterVolumeSpecName: "kube-api-access-zkbj2") pod "7f224b03-de77-4355-8d54-0d13344ea5cb" (UID: "7f224b03-de77-4355-8d54-0d13344ea5cb"). InnerVolumeSpecName "kube-api-access-zkbj2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:56:30 crc kubenswrapper[4853]: I1122 07:56:30.880059 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7f224b03-de77-4355-8d54-0d13344ea5cb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7f224b03-de77-4355-8d54-0d13344ea5cb" (UID: "7f224b03-de77-4355-8d54-0d13344ea5cb"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:56:30 crc kubenswrapper[4853]: I1122 07:56:30.964963 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkbj2\" (UniqueName: \"kubernetes.io/projected/7f224b03-de77-4355-8d54-0d13344ea5cb-kube-api-access-zkbj2\") on node \"crc\" DevicePath \"\"" Nov 22 07:56:30 crc kubenswrapper[4853]: I1122 07:56:30.965259 4853 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7f224b03-de77-4355-8d54-0d13344ea5cb-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 07:56:30 crc kubenswrapper[4853]: I1122 07:56:30.965270 4853 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7f224b03-de77-4355-8d54-0d13344ea5cb-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 07:56:31 crc kubenswrapper[4853]: I1122 07:56:31.708040 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9pcqw" Nov 22 07:56:31 crc kubenswrapper[4853]: I1122 07:56:31.765869 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9pcqw"] Nov 22 07:56:31 crc kubenswrapper[4853]: I1122 07:56:31.774430 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-9pcqw"] Nov 22 07:56:33 crc kubenswrapper[4853]: I1122 07:56:33.762114 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7f224b03-de77-4355-8d54-0d13344ea5cb" path="/var/lib/kubelet/pods/7f224b03-de77-4355-8d54-0d13344ea5cb/volumes" Nov 22 07:56:36 crc kubenswrapper[4853]: I1122 07:56:36.747794 4853 scope.go:117] "RemoveContainer" containerID="f1c0ac83a6a857bb9ac64bb048075d436003939992592a9a9b2d45101ef2a2de" Nov 22 07:56:37 crc kubenswrapper[4853]: I1122 07:56:37.776505 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" event={"ID":"476c875a-2b87-419a-8042-0ba059620fd8","Type":"ContainerStarted","Data":"5b5dbaa1649c53a81854e516978dad56264c7a832b92f5fd324ac74aac9f63cd"} Nov 22 07:56:52 crc kubenswrapper[4853]: I1122 07:56:52.100771 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-qdbdm"] Nov 22 07:56:52 crc kubenswrapper[4853]: I1122 07:56:52.149889 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-qdbdm"] Nov 22 07:56:53 crc kubenswrapper[4853]: I1122 07:56:53.766583 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f1598c90-266c-4607-b491-e9927d76469c" path="/var/lib/kubelet/pods/f1598c90-266c-4607-b491-e9927d76469c/volumes" Nov 22 07:56:59 crc kubenswrapper[4853]: I1122 07:56:59.119062 4853 scope.go:117] "RemoveContainer" containerID="19817544de55f39096555895f24dbd6c2507c39adcdeef2f57827e5f888eeacd" Nov 22 07:56:59 crc kubenswrapper[4853]: I1122 07:56:59.153628 4853 scope.go:117] "RemoveContainer" containerID="12fd72d7205251492e634a8695f8737b21dea1378b41aed6d23d4c5fedf9533c" Nov 22 
07:56:59 crc kubenswrapper[4853]: I1122 07:56:59.199047 4853 scope.go:117] "RemoveContainer" containerID="f7d5cd861b3b67a74a8876e94a1aecaeaf9677879dfc1bfa17d9f86ded1f579a" Nov 22 07:56:59 crc kubenswrapper[4853]: I1122 07:56:59.259456 4853 scope.go:117] "RemoveContainer" containerID="0b3c3ed175688b10da46a60441edc272d82d13538ec20d2af50756c305f74227" Nov 22 07:57:02 crc kubenswrapper[4853]: I1122 07:57:02.045853 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-s6dqt"] Nov 22 07:57:02 crc kubenswrapper[4853]: I1122 07:57:02.057338 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-4ca2-account-create-twtvw"] Nov 22 07:57:02 crc kubenswrapper[4853]: I1122 07:57:02.070332 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-s6dqt"] Nov 22 07:57:02 crc kubenswrapper[4853]: I1122 07:57:02.081808 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-4ca2-account-create-twtvw"] Nov 22 07:57:03 crc kubenswrapper[4853]: I1122 07:57:03.765393 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b2beefde-5354-4376-8cf2-5f3bd9cde859" path="/var/lib/kubelet/pods/b2beefde-5354-4376-8cf2-5f3bd9cde859/volumes" Nov 22 07:57:03 crc kubenswrapper[4853]: I1122 07:57:03.766361 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e0fa86b0-ac83-432d-884c-c906c2b47a12" path="/var/lib/kubelet/pods/e0fa86b0-ac83-432d-884c-c906c2b47a12/volumes" Nov 22 07:57:12 crc kubenswrapper[4853]: I1122 07:57:12.033678 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-46aa-account-create-vvxzs"] Nov 22 07:57:12 crc kubenswrapper[4853]: I1122 07:57:12.047635 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-fsblj"] Nov 22 07:57:12 crc kubenswrapper[4853]: I1122 07:57:12.059225 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-0ebe-account-create-9lcbc"] Nov 22 07:57:12 crc kubenswrapper[4853]: I1122 07:57:12.068271 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-46aa-account-create-vvxzs"] Nov 22 07:57:12 crc kubenswrapper[4853]: I1122 07:57:12.077004 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-0ebe-account-create-9lcbc"] Nov 22 07:57:12 crc kubenswrapper[4853]: I1122 07:57:12.094881 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-sn986"] Nov 22 07:57:12 crc kubenswrapper[4853]: I1122 07:57:12.104512 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-fsblj"] Nov 22 07:57:12 crc kubenswrapper[4853]: I1122 07:57:12.113464 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-sn986"] Nov 22 07:57:13 crc kubenswrapper[4853]: I1122 07:57:13.762142 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="10eb3c0c-487e-4c7c-b422-fc41587f2b3e" path="/var/lib/kubelet/pods/10eb3c0c-487e-4c7c-b422-fc41587f2b3e/volumes" Nov 22 07:57:13 crc kubenswrapper[4853]: I1122 07:57:13.763081 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1a7997d1-57a6-4a25-a55c-e56a641573e3" path="/var/lib/kubelet/pods/1a7997d1-57a6-4a25-a55c-e56a641573e3/volumes" Nov 22 07:57:13 crc kubenswrapper[4853]: I1122 07:57:13.763861 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="267c0415-28fa-43de-a7e0-c64254b85fee" 
path="/var/lib/kubelet/pods/267c0415-28fa-43de-a7e0-c64254b85fee/volumes" Nov 22 07:57:13 crc kubenswrapper[4853]: I1122 07:57:13.764603 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="74817a4c-27ab-46b7-8ec8-5663379dc5f8" path="/var/lib/kubelet/pods/74817a4c-27ab-46b7-8ec8-5663379dc5f8/volumes" Nov 22 07:57:56 crc kubenswrapper[4853]: I1122 07:57:56.051616 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-dzsj4"] Nov 22 07:57:56 crc kubenswrapper[4853]: I1122 07:57:56.062200 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-dzsj4"] Nov 22 07:57:57 crc kubenswrapper[4853]: I1122 07:57:57.884168 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="289fadd4-7721-4d8e-b33e-35606c18eedb" path="/var/lib/kubelet/pods/289fadd4-7721-4d8e-b33e-35606c18eedb/volumes" Nov 22 07:57:59 crc kubenswrapper[4853]: I1122 07:57:59.392411 4853 scope.go:117] "RemoveContainer" containerID="b9be95a7708f8d0980296a0ab791c5dbfff5e3999bc3d58292b999c49ac4cbe7" Nov 22 07:57:59 crc kubenswrapper[4853]: I1122 07:57:59.699227 4853 scope.go:117] "RemoveContainer" containerID="b468857845241d2a97ac6d4a96ce7db29071c3d8dac09d53fe2f6aa71460f5dd" Nov 22 07:57:59 crc kubenswrapper[4853]: I1122 07:57:59.774006 4853 scope.go:117] "RemoveContainer" containerID="98fbd7ad3838c1218d32364d752993489d8cd741011e0960648e5f8b5cac6738" Nov 22 07:57:59 crc kubenswrapper[4853]: I1122 07:57:59.819245 4853 scope.go:117] "RemoveContainer" containerID="71ae5750a9bbcf0314c93aa2a6aeeac589c7931877e6a375aa06e511f19c6ec5" Nov 22 07:58:00 crc kubenswrapper[4853]: I1122 07:58:00.046317 4853 scope.go:117] "RemoveContainer" containerID="855251b0bb1e555239ae2e2a9f138ea4ee1ac41896cd63eea37e083431d83617" Nov 22 07:58:00 crc kubenswrapper[4853]: I1122 07:58:00.086516 4853 scope.go:117] "RemoveContainer" containerID="80eb8ff1e48c44f0475a6f267423257298028dc682231762a19f30d7b3f88196" Nov 22 07:58:00 crc kubenswrapper[4853]: I1122 07:58:00.150019 4853 scope.go:117] "RemoveContainer" containerID="3a2d15629ae8583586b2aa002e0ac69cf6fb75389e39085740b0f4e577b2f16e" Nov 22 07:58:00 crc kubenswrapper[4853]: I1122 07:58:00.245816 4853 scope.go:117] "RemoveContainer" containerID="3ea1c8cba2063368c899e188ed94d06adf47ec3a97c4c306cfc1629fe0b5bf25" Nov 22 07:58:00 crc kubenswrapper[4853]: I1122 07:58:00.290162 4853 scope.go:117] "RemoveContainer" containerID="b3825da8513fcf522abc07c633c3157bee2e3fc873e526dee20a07d81b83340e" Nov 22 07:58:03 crc kubenswrapper[4853]: I1122 07:58:03.042651 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-nnfsq"] Nov 22 07:58:03 crc kubenswrapper[4853]: I1122 07:58:03.056954 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-nnfsq"] Nov 22 07:58:03 crc kubenswrapper[4853]: I1122 07:58:03.762962 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="29d503fd-37f2-453c-aba9-5d2fb2c6aad0" path="/var/lib/kubelet/pods/29d503fd-37f2-453c-aba9-5d2fb2c6aad0/volumes" Nov 22 07:58:12 crc kubenswrapper[4853]: I1122 07:58:12.314340 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-4dj4d"] Nov 22 07:58:12 crc kubenswrapper[4853]: E1122 07:58:12.315572 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f224b03-de77-4355-8d54-0d13344ea5cb" containerName="registry-server" Nov 22 07:58:12 crc kubenswrapper[4853]: I1122 07:58:12.315589 4853 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="7f224b03-de77-4355-8d54-0d13344ea5cb" containerName="registry-server" Nov 22 07:58:12 crc kubenswrapper[4853]: E1122 07:58:12.315614 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f224b03-de77-4355-8d54-0d13344ea5cb" containerName="extract-content" Nov 22 07:58:12 crc kubenswrapper[4853]: I1122 07:58:12.315620 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f224b03-de77-4355-8d54-0d13344ea5cb" containerName="extract-content" Nov 22 07:58:12 crc kubenswrapper[4853]: E1122 07:58:12.315635 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f224b03-de77-4355-8d54-0d13344ea5cb" containerName="extract-utilities" Nov 22 07:58:12 crc kubenswrapper[4853]: I1122 07:58:12.315644 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f224b03-de77-4355-8d54-0d13344ea5cb" containerName="extract-utilities" Nov 22 07:58:12 crc kubenswrapper[4853]: I1122 07:58:12.316564 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f224b03-de77-4355-8d54-0d13344ea5cb" containerName="registry-server" Nov 22 07:58:12 crc kubenswrapper[4853]: I1122 07:58:12.342986 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4dj4d"] Nov 22 07:58:12 crc kubenswrapper[4853]: I1122 07:58:12.343220 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4dj4d" Nov 22 07:58:12 crc kubenswrapper[4853]: I1122 07:58:12.398662 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/782dde53-293a-409a-9516-f6e8ef463be0-catalog-content\") pod \"redhat-operators-4dj4d\" (UID: \"782dde53-293a-409a-9516-f6e8ef463be0\") " pod="openshift-marketplace/redhat-operators-4dj4d" Nov 22 07:58:12 crc kubenswrapper[4853]: I1122 07:58:12.398982 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fkg4w\" (UniqueName: \"kubernetes.io/projected/782dde53-293a-409a-9516-f6e8ef463be0-kube-api-access-fkg4w\") pod \"redhat-operators-4dj4d\" (UID: \"782dde53-293a-409a-9516-f6e8ef463be0\") " pod="openshift-marketplace/redhat-operators-4dj4d" Nov 22 07:58:12 crc kubenswrapper[4853]: I1122 07:58:12.399142 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/782dde53-293a-409a-9516-f6e8ef463be0-utilities\") pod \"redhat-operators-4dj4d\" (UID: \"782dde53-293a-409a-9516-f6e8ef463be0\") " pod="openshift-marketplace/redhat-operators-4dj4d" Nov 22 07:58:12 crc kubenswrapper[4853]: I1122 07:58:12.502005 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fkg4w\" (UniqueName: \"kubernetes.io/projected/782dde53-293a-409a-9516-f6e8ef463be0-kube-api-access-fkg4w\") pod \"redhat-operators-4dj4d\" (UID: \"782dde53-293a-409a-9516-f6e8ef463be0\") " pod="openshift-marketplace/redhat-operators-4dj4d" Nov 22 07:58:12 crc kubenswrapper[4853]: I1122 07:58:12.502118 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/782dde53-293a-409a-9516-f6e8ef463be0-utilities\") pod \"redhat-operators-4dj4d\" (UID: \"782dde53-293a-409a-9516-f6e8ef463be0\") " pod="openshift-marketplace/redhat-operators-4dj4d" Nov 22 07:58:12 crc kubenswrapper[4853]: I1122 07:58:12.502306 4853 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/782dde53-293a-409a-9516-f6e8ef463be0-catalog-content\") pod \"redhat-operators-4dj4d\" (UID: \"782dde53-293a-409a-9516-f6e8ef463be0\") " pod="openshift-marketplace/redhat-operators-4dj4d" Nov 22 07:58:12 crc kubenswrapper[4853]: I1122 07:58:12.502786 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/782dde53-293a-409a-9516-f6e8ef463be0-utilities\") pod \"redhat-operators-4dj4d\" (UID: \"782dde53-293a-409a-9516-f6e8ef463be0\") " pod="openshift-marketplace/redhat-operators-4dj4d" Nov 22 07:58:12 crc kubenswrapper[4853]: I1122 07:58:12.502896 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/782dde53-293a-409a-9516-f6e8ef463be0-catalog-content\") pod \"redhat-operators-4dj4d\" (UID: \"782dde53-293a-409a-9516-f6e8ef463be0\") " pod="openshift-marketplace/redhat-operators-4dj4d" Nov 22 07:58:12 crc kubenswrapper[4853]: I1122 07:58:12.536847 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fkg4w\" (UniqueName: \"kubernetes.io/projected/782dde53-293a-409a-9516-f6e8ef463be0-kube-api-access-fkg4w\") pod \"redhat-operators-4dj4d\" (UID: \"782dde53-293a-409a-9516-f6e8ef463be0\") " pod="openshift-marketplace/redhat-operators-4dj4d" Nov 22 07:58:12 crc kubenswrapper[4853]: I1122 07:58:12.712944 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4dj4d" Nov 22 07:58:13 crc kubenswrapper[4853]: I1122 07:58:13.254816 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4dj4d"] Nov 22 07:58:14 crc kubenswrapper[4853]: I1122 07:58:14.014183 4853 generic.go:334] "Generic (PLEG): container finished" podID="782dde53-293a-409a-9516-f6e8ef463be0" containerID="d4597517d975aba49a5fba9ea05fc588bb937259177cc24a2a5ac0d821ef174e" exitCode=0 Nov 22 07:58:14 crc kubenswrapper[4853]: I1122 07:58:14.014340 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4dj4d" event={"ID":"782dde53-293a-409a-9516-f6e8ef463be0","Type":"ContainerDied","Data":"d4597517d975aba49a5fba9ea05fc588bb937259177cc24a2a5ac0d821ef174e"} Nov 22 07:58:14 crc kubenswrapper[4853]: I1122 07:58:14.014511 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4dj4d" event={"ID":"782dde53-293a-409a-9516-f6e8ef463be0","Type":"ContainerStarted","Data":"30211609d494c76c862b0e07745fea8af9dcedd56693c134adcf989f361aad9f"} Nov 22 07:58:17 crc kubenswrapper[4853]: I1122 07:58:17.053939 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4dj4d" event={"ID":"782dde53-293a-409a-9516-f6e8ef463be0","Type":"ContainerStarted","Data":"3b3dd7b622e9ea3d43cadd65811c0810faf31bb8fdc3e772d175e6180fc83d77"} Nov 22 07:58:49 crc kubenswrapper[4853]: I1122 07:58:49.459374 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-d8mrh"] Nov 22 07:58:49 crc kubenswrapper[4853]: I1122 07:58:49.462867 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-d8mrh" Nov 22 07:58:49 crc kubenswrapper[4853]: I1122 07:58:49.477157 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-d8mrh"] Nov 22 07:58:49 crc kubenswrapper[4853]: I1122 07:58:49.630408 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/680a9639-5f25-4e1f-9041-5cf0988d05de-utilities\") pod \"community-operators-d8mrh\" (UID: \"680a9639-5f25-4e1f-9041-5cf0988d05de\") " pod="openshift-marketplace/community-operators-d8mrh" Nov 22 07:58:49 crc kubenswrapper[4853]: I1122 07:58:49.630551 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/680a9639-5f25-4e1f-9041-5cf0988d05de-catalog-content\") pod \"community-operators-d8mrh\" (UID: \"680a9639-5f25-4e1f-9041-5cf0988d05de\") " pod="openshift-marketplace/community-operators-d8mrh" Nov 22 07:58:49 crc kubenswrapper[4853]: I1122 07:58:49.630617 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7qf8j\" (UniqueName: \"kubernetes.io/projected/680a9639-5f25-4e1f-9041-5cf0988d05de-kube-api-access-7qf8j\") pod \"community-operators-d8mrh\" (UID: \"680a9639-5f25-4e1f-9041-5cf0988d05de\") " pod="openshift-marketplace/community-operators-d8mrh" Nov 22 07:58:49 crc kubenswrapper[4853]: I1122 07:58:49.733488 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/680a9639-5f25-4e1f-9041-5cf0988d05de-catalog-content\") pod \"community-operators-d8mrh\" (UID: \"680a9639-5f25-4e1f-9041-5cf0988d05de\") " pod="openshift-marketplace/community-operators-d8mrh" Nov 22 07:58:49 crc kubenswrapper[4853]: I1122 07:58:49.733587 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7qf8j\" (UniqueName: \"kubernetes.io/projected/680a9639-5f25-4e1f-9041-5cf0988d05de-kube-api-access-7qf8j\") pod \"community-operators-d8mrh\" (UID: \"680a9639-5f25-4e1f-9041-5cf0988d05de\") " pod="openshift-marketplace/community-operators-d8mrh" Nov 22 07:58:49 crc kubenswrapper[4853]: I1122 07:58:49.733842 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/680a9639-5f25-4e1f-9041-5cf0988d05de-utilities\") pod \"community-operators-d8mrh\" (UID: \"680a9639-5f25-4e1f-9041-5cf0988d05de\") " pod="openshift-marketplace/community-operators-d8mrh" Nov 22 07:58:49 crc kubenswrapper[4853]: I1122 07:58:49.734269 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/680a9639-5f25-4e1f-9041-5cf0988d05de-catalog-content\") pod \"community-operators-d8mrh\" (UID: \"680a9639-5f25-4e1f-9041-5cf0988d05de\") " pod="openshift-marketplace/community-operators-d8mrh" Nov 22 07:58:49 crc kubenswrapper[4853]: I1122 07:58:49.734314 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/680a9639-5f25-4e1f-9041-5cf0988d05de-utilities\") pod \"community-operators-d8mrh\" (UID: \"680a9639-5f25-4e1f-9041-5cf0988d05de\") " pod="openshift-marketplace/community-operators-d8mrh" Nov 22 07:58:49 crc kubenswrapper[4853]: I1122 07:58:49.757526 4853 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-7qf8j\" (UniqueName: \"kubernetes.io/projected/680a9639-5f25-4e1f-9041-5cf0988d05de-kube-api-access-7qf8j\") pod \"community-operators-d8mrh\" (UID: \"680a9639-5f25-4e1f-9041-5cf0988d05de\") " pod="openshift-marketplace/community-operators-d8mrh" Nov 22 07:58:49 crc kubenswrapper[4853]: I1122 07:58:49.947697 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-d8mrh" Nov 22 07:58:50 crc kubenswrapper[4853]: I1122 07:58:50.558186 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-d8mrh"] Nov 22 07:58:51 crc kubenswrapper[4853]: I1122 07:58:51.523591 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d8mrh" event={"ID":"680a9639-5f25-4e1f-9041-5cf0988d05de","Type":"ContainerStarted","Data":"e9eb4579aa052c1d1ddd0973142a279edfdbe7f62b1ce4f8d33e159ca6aea51e"} Nov 22 07:58:52 crc kubenswrapper[4853]: I1122 07:58:52.533144 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d8mrh" event={"ID":"680a9639-5f25-4e1f-9041-5cf0988d05de","Type":"ContainerStarted","Data":"cba687a952828b847b7958944ddd14cc03fefee705b66b91816c54aee12c92eb"} Nov 22 07:58:53 crc kubenswrapper[4853]: I1122 07:58:53.546013 4853 generic.go:334] "Generic (PLEG): container finished" podID="680a9639-5f25-4e1f-9041-5cf0988d05de" containerID="cba687a952828b847b7958944ddd14cc03fefee705b66b91816c54aee12c92eb" exitCode=0 Nov 22 07:58:53 crc kubenswrapper[4853]: I1122 07:58:53.546078 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d8mrh" event={"ID":"680a9639-5f25-4e1f-9041-5cf0988d05de","Type":"ContainerDied","Data":"cba687a952828b847b7958944ddd14cc03fefee705b66b91816c54aee12c92eb"} Nov 22 07:58:57 crc kubenswrapper[4853]: I1122 07:58:57.589503 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d8mrh" event={"ID":"680a9639-5f25-4e1f-9041-5cf0988d05de","Type":"ContainerStarted","Data":"80fc60562f8ea401a5ba8a7109bbea555931ff85196b4f4dbbb9e4742b10ac9c"} Nov 22 07:58:58 crc kubenswrapper[4853]: I1122 07:58:58.602036 4853 generic.go:334] "Generic (PLEG): container finished" podID="782dde53-293a-409a-9516-f6e8ef463be0" containerID="3b3dd7b622e9ea3d43cadd65811c0810faf31bb8fdc3e772d175e6180fc83d77" exitCode=0 Nov 22 07:58:58 crc kubenswrapper[4853]: I1122 07:58:58.602116 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4dj4d" event={"ID":"782dde53-293a-409a-9516-f6e8ef463be0","Type":"ContainerDied","Data":"3b3dd7b622e9ea3d43cadd65811c0810faf31bb8fdc3e772d175e6180fc83d77"} Nov 22 07:59:00 crc kubenswrapper[4853]: I1122 07:59:00.084134 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-c5tjs"] Nov 22 07:59:00 crc kubenswrapper[4853]: I1122 07:59:00.094968 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-c5tjs"] Nov 22 07:59:01 crc kubenswrapper[4853]: I1122 07:59:01.064482 4853 scope.go:117] "RemoveContainer" containerID="9b54259d55869e27ba8f9e308f53955791c1604ec2b276eeb471d9425fefad38" Nov 22 07:59:01 crc kubenswrapper[4853]: I1122 07:59:01.297534 4853 patch_prober.go:28] interesting pod/machine-config-daemon-fflvd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure 
output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 07:59:01 crc kubenswrapper[4853]: I1122 07:59:01.297595 4853 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 07:59:01 crc kubenswrapper[4853]: I1122 07:59:01.635906 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4dj4d" event={"ID":"782dde53-293a-409a-9516-f6e8ef463be0","Type":"ContainerStarted","Data":"fc95b151a35d41def3bdb691b3896b318a2932437b6186a43ec451f19e4d572b"} Nov 22 07:59:01 crc kubenswrapper[4853]: I1122 07:59:01.670388 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-4dj4d" podStartSLOduration=2.361543611 podStartE2EDuration="49.670363024s" podCreationTimestamp="2025-11-22 07:58:12 +0000 UTC" firstStartedPulling="2025-11-22 07:58:14.016786749 +0000 UTC m=+2892.857409375" lastFinishedPulling="2025-11-22 07:59:01.325606162 +0000 UTC m=+2940.166228788" observedRunningTime="2025-11-22 07:59:01.668403141 +0000 UTC m=+2940.509025767" watchObservedRunningTime="2025-11-22 07:59:01.670363024 +0000 UTC m=+2940.510985650" Nov 22 07:59:01 crc kubenswrapper[4853]: I1122 07:59:01.774532 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c7bb7e8f-c36e-4027-b953-384bff85680b" path="/var/lib/kubelet/pods/c7bb7e8f-c36e-4027-b953-384bff85680b/volumes" Nov 22 07:59:02 crc kubenswrapper[4853]: I1122 07:59:02.669562 4853 generic.go:334] "Generic (PLEG): container finished" podID="134d3ebf-3b18-46f5-b30e-7856a1a6bc6a" containerID="b3550429ae564b456e2917cfdf157241636d1d641b31af4c32bb1b3df4482c1d" exitCode=0 Nov 22 07:59:02 crc kubenswrapper[4853]: I1122 07:59:02.670170 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bdlzq" event={"ID":"134d3ebf-3b18-46f5-b30e-7856a1a6bc6a","Type":"ContainerDied","Data":"b3550429ae564b456e2917cfdf157241636d1d641b31af4c32bb1b3df4482c1d"} Nov 22 07:59:02 crc kubenswrapper[4853]: I1122 07:59:02.713088 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-4dj4d" Nov 22 07:59:02 crc kubenswrapper[4853]: I1122 07:59:02.713359 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-4dj4d" Nov 22 07:59:03 crc kubenswrapper[4853]: I1122 07:59:03.782502 4853 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-4dj4d" podUID="782dde53-293a-409a-9516-f6e8ef463be0" containerName="registry-server" probeResult="failure" output=< Nov 22 07:59:03 crc kubenswrapper[4853]: timeout: failed to connect service ":50051" within 1s Nov 22 07:59:03 crc kubenswrapper[4853]: > Nov 22 07:59:04 crc kubenswrapper[4853]: I1122 07:59:04.316396 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bdlzq" Nov 22 07:59:04 crc kubenswrapper[4853]: I1122 07:59:04.463308 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/134d3ebf-3b18-46f5-b30e-7856a1a6bc6a-bootstrap-combined-ca-bundle\") pod \"134d3ebf-3b18-46f5-b30e-7856a1a6bc6a\" (UID: \"134d3ebf-3b18-46f5-b30e-7856a1a6bc6a\") " Nov 22 07:59:04 crc kubenswrapper[4853]: I1122 07:59:04.463701 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/134d3ebf-3b18-46f5-b30e-7856a1a6bc6a-inventory\") pod \"134d3ebf-3b18-46f5-b30e-7856a1a6bc6a\" (UID: \"134d3ebf-3b18-46f5-b30e-7856a1a6bc6a\") " Nov 22 07:59:04 crc kubenswrapper[4853]: I1122 07:59:04.463806 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/134d3ebf-3b18-46f5-b30e-7856a1a6bc6a-ssh-key\") pod \"134d3ebf-3b18-46f5-b30e-7856a1a6bc6a\" (UID: \"134d3ebf-3b18-46f5-b30e-7856a1a6bc6a\") " Nov 22 07:59:04 crc kubenswrapper[4853]: I1122 07:59:04.463949 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wp5l9\" (UniqueName: \"kubernetes.io/projected/134d3ebf-3b18-46f5-b30e-7856a1a6bc6a-kube-api-access-wp5l9\") pod \"134d3ebf-3b18-46f5-b30e-7856a1a6bc6a\" (UID: \"134d3ebf-3b18-46f5-b30e-7856a1a6bc6a\") " Nov 22 07:59:04 crc kubenswrapper[4853]: I1122 07:59:04.508684 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/134d3ebf-3b18-46f5-b30e-7856a1a6bc6a-kube-api-access-wp5l9" (OuterVolumeSpecName: "kube-api-access-wp5l9") pod "134d3ebf-3b18-46f5-b30e-7856a1a6bc6a" (UID: "134d3ebf-3b18-46f5-b30e-7856a1a6bc6a"). InnerVolumeSpecName "kube-api-access-wp5l9". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:59:04 crc kubenswrapper[4853]: I1122 07:59:04.509144 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/134d3ebf-3b18-46f5-b30e-7856a1a6bc6a-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "134d3ebf-3b18-46f5-b30e-7856a1a6bc6a" (UID: "134d3ebf-3b18-46f5-b30e-7856a1a6bc6a"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:59:04 crc kubenswrapper[4853]: I1122 07:59:04.567017 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wp5l9\" (UniqueName: \"kubernetes.io/projected/134d3ebf-3b18-46f5-b30e-7856a1a6bc6a-kube-api-access-wp5l9\") on node \"crc\" DevicePath \"\"" Nov 22 07:59:04 crc kubenswrapper[4853]: I1122 07:59:04.567069 4853 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/134d3ebf-3b18-46f5-b30e-7856a1a6bc6a-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:59:04 crc kubenswrapper[4853]: I1122 07:59:04.625961 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/134d3ebf-3b18-46f5-b30e-7856a1a6bc6a-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "134d3ebf-3b18-46f5-b30e-7856a1a6bc6a" (UID: "134d3ebf-3b18-46f5-b30e-7856a1a6bc6a"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:59:04 crc kubenswrapper[4853]: I1122 07:59:04.628941 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/134d3ebf-3b18-46f5-b30e-7856a1a6bc6a-inventory" (OuterVolumeSpecName: "inventory") pod "134d3ebf-3b18-46f5-b30e-7856a1a6bc6a" (UID: "134d3ebf-3b18-46f5-b30e-7856a1a6bc6a"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:59:04 crc kubenswrapper[4853]: I1122 07:59:04.672703 4853 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/134d3ebf-3b18-46f5-b30e-7856a1a6bc6a-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 22 07:59:04 crc kubenswrapper[4853]: I1122 07:59:04.672761 4853 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/134d3ebf-3b18-46f5-b30e-7856a1a6bc6a-inventory\") on node \"crc\" DevicePath \"\"" Nov 22 07:59:04 crc kubenswrapper[4853]: I1122 07:59:04.705698 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bdlzq" event={"ID":"134d3ebf-3b18-46f5-b30e-7856a1a6bc6a","Type":"ContainerDied","Data":"72046e3a1578750148c1107dc143ed80fb9382f6585d0bb2b800f69ea0e9b4fa"} Nov 22 07:59:04 crc kubenswrapper[4853]: I1122 07:59:04.705760 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="72046e3a1578750148c1107dc143ed80fb9382f6585d0bb2b800f69ea0e9b4fa" Nov 22 07:59:04 crc kubenswrapper[4853]: I1122 07:59:04.705844 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bdlzq" Nov 22 07:59:04 crc kubenswrapper[4853]: I1122 07:59:04.873181 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-gspf4"] Nov 22 07:59:04 crc kubenswrapper[4853]: E1122 07:59:04.873767 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="134d3ebf-3b18-46f5-b30e-7856a1a6bc6a" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Nov 22 07:59:04 crc kubenswrapper[4853]: I1122 07:59:04.873782 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="134d3ebf-3b18-46f5-b30e-7856a1a6bc6a" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Nov 22 07:59:04 crc kubenswrapper[4853]: I1122 07:59:04.874030 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="134d3ebf-3b18-46f5-b30e-7856a1a6bc6a" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Nov 22 07:59:04 crc kubenswrapper[4853]: I1122 07:59:04.875032 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-gspf4" Nov 22 07:59:04 crc kubenswrapper[4853]: I1122 07:59:04.879450 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 22 07:59:04 crc kubenswrapper[4853]: I1122 07:59:04.879592 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 22 07:59:04 crc kubenswrapper[4853]: I1122 07:59:04.879672 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 22 07:59:04 crc kubenswrapper[4853]: I1122 07:59:04.884284 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-km5tw" Nov 22 07:59:04 crc kubenswrapper[4853]: I1122 07:59:04.900937 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-gspf4"] Nov 22 07:59:04 crc kubenswrapper[4853]: I1122 07:59:04.995767 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/eb7c2a78-a864-4f26-ae10-e2f64ff95b0d-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-gspf4\" (UID: \"eb7c2a78-a864-4f26-ae10-e2f64ff95b0d\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-gspf4" Nov 22 07:59:04 crc kubenswrapper[4853]: I1122 07:59:04.995874 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-svrcr\" (UniqueName: \"kubernetes.io/projected/eb7c2a78-a864-4f26-ae10-e2f64ff95b0d-kube-api-access-svrcr\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-gspf4\" (UID: \"eb7c2a78-a864-4f26-ae10-e2f64ff95b0d\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-gspf4" Nov 22 07:59:04 crc kubenswrapper[4853]: I1122 07:59:04.996032 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/eb7c2a78-a864-4f26-ae10-e2f64ff95b0d-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-gspf4\" (UID: \"eb7c2a78-a864-4f26-ae10-e2f64ff95b0d\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-gspf4" Nov 22 07:59:05 crc kubenswrapper[4853]: I1122 07:59:05.099028 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/eb7c2a78-a864-4f26-ae10-e2f64ff95b0d-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-gspf4\" (UID: \"eb7c2a78-a864-4f26-ae10-e2f64ff95b0d\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-gspf4" Nov 22 07:59:05 crc kubenswrapper[4853]: I1122 07:59:05.099121 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/eb7c2a78-a864-4f26-ae10-e2f64ff95b0d-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-gspf4\" (UID: \"eb7c2a78-a864-4f26-ae10-e2f64ff95b0d\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-gspf4" Nov 22 07:59:05 crc kubenswrapper[4853]: I1122 07:59:05.099191 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-svrcr\" (UniqueName: \"kubernetes.io/projected/eb7c2a78-a864-4f26-ae10-e2f64ff95b0d-kube-api-access-svrcr\") pod 
\"download-cache-edpm-deployment-openstack-edpm-ipam-gspf4\" (UID: \"eb7c2a78-a864-4f26-ae10-e2f64ff95b0d\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-gspf4" Nov 22 07:59:05 crc kubenswrapper[4853]: I1122 07:59:05.102538 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/eb7c2a78-a864-4f26-ae10-e2f64ff95b0d-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-gspf4\" (UID: \"eb7c2a78-a864-4f26-ae10-e2f64ff95b0d\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-gspf4" Nov 22 07:59:05 crc kubenswrapper[4853]: I1122 07:59:05.103601 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/eb7c2a78-a864-4f26-ae10-e2f64ff95b0d-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-gspf4\" (UID: \"eb7c2a78-a864-4f26-ae10-e2f64ff95b0d\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-gspf4" Nov 22 07:59:05 crc kubenswrapper[4853]: I1122 07:59:05.118869 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-svrcr\" (UniqueName: \"kubernetes.io/projected/eb7c2a78-a864-4f26-ae10-e2f64ff95b0d-kube-api-access-svrcr\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-gspf4\" (UID: \"eb7c2a78-a864-4f26-ae10-e2f64ff95b0d\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-gspf4" Nov 22 07:59:05 crc kubenswrapper[4853]: I1122 07:59:05.220415 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-gspf4" Nov 22 07:59:05 crc kubenswrapper[4853]: I1122 07:59:05.853194 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-gspf4"] Nov 22 07:59:06 crc kubenswrapper[4853]: I1122 07:59:06.729883 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-gspf4" event={"ID":"eb7c2a78-a864-4f26-ae10-e2f64ff95b0d","Type":"ContainerStarted","Data":"59d83401d7a0df3934f7a8edb93770814a9b5dec549d373eaa9e48db4f1f4819"} Nov 22 07:59:07 crc kubenswrapper[4853]: I1122 07:59:07.741446 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-gspf4" event={"ID":"eb7c2a78-a864-4f26-ae10-e2f64ff95b0d","Type":"ContainerStarted","Data":"6651391e7cc6608594307b83dfee964c31fbe950a4f8111143b5e691d447bfd9"} Nov 22 07:59:07 crc kubenswrapper[4853]: I1122 07:59:07.779841 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-gspf4" podStartSLOduration=2.629303704 podStartE2EDuration="3.77982056s" podCreationTimestamp="2025-11-22 07:59:04 +0000 UTC" firstStartedPulling="2025-11-22 07:59:05.879581675 +0000 UTC m=+2944.720204301" lastFinishedPulling="2025-11-22 07:59:07.030098541 +0000 UTC m=+2945.870721157" observedRunningTime="2025-11-22 07:59:07.768600389 +0000 UTC m=+2946.609223015" watchObservedRunningTime="2025-11-22 07:59:07.77982056 +0000 UTC m=+2946.620443206" Nov 22 07:59:09 crc kubenswrapper[4853]: I1122 07:59:09.055935 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-4htd6"] Nov 22 07:59:09 crc kubenswrapper[4853]: I1122 07:59:09.108897 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-4htd6"] Nov 22 07:59:09 crc 
kubenswrapper[4853]: I1122 07:59:09.773499 4853 generic.go:334] "Generic (PLEG): container finished" podID="680a9639-5f25-4e1f-9041-5cf0988d05de" containerID="80fc60562f8ea401a5ba8a7109bbea555931ff85196b4f4dbbb9e4742b10ac9c" exitCode=0 Nov 22 07:59:09 crc kubenswrapper[4853]: I1122 07:59:09.775031 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="297f89ac-14c3-4918-bd7e-776cc229298c" path="/var/lib/kubelet/pods/297f89ac-14c3-4918-bd7e-776cc229298c/volumes" Nov 22 07:59:09 crc kubenswrapper[4853]: I1122 07:59:09.776208 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d8mrh" event={"ID":"680a9639-5f25-4e1f-9041-5cf0988d05de","Type":"ContainerDied","Data":"80fc60562f8ea401a5ba8a7109bbea555931ff85196b4f4dbbb9e4742b10ac9c"} Nov 22 07:59:11 crc kubenswrapper[4853]: I1122 07:59:11.802834 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d8mrh" event={"ID":"680a9639-5f25-4e1f-9041-5cf0988d05de","Type":"ContainerStarted","Data":"79edab9bc545397cc5a5c93a073c7432a4d7eed4502e7223e1695ec5998a417a"} Nov 22 07:59:11 crc kubenswrapper[4853]: I1122 07:59:11.836782 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-d8mrh" podStartSLOduration=5.480163248 podStartE2EDuration="22.836734529s" podCreationTimestamp="2025-11-22 07:58:49 +0000 UTC" firstStartedPulling="2025-11-22 07:58:53.548668265 +0000 UTC m=+2932.389290901" lastFinishedPulling="2025-11-22 07:59:10.905239556 +0000 UTC m=+2949.745862182" observedRunningTime="2025-11-22 07:59:11.823929905 +0000 UTC m=+2950.664552551" watchObservedRunningTime="2025-11-22 07:59:11.836734529 +0000 UTC m=+2950.677357155" Nov 22 07:59:13 crc kubenswrapper[4853]: I1122 07:59:13.771883 4853 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-4dj4d" podUID="782dde53-293a-409a-9516-f6e8ef463be0" containerName="registry-server" probeResult="failure" output=< Nov 22 07:59:13 crc kubenswrapper[4853]: timeout: failed to connect service ":50051" within 1s Nov 22 07:59:13 crc kubenswrapper[4853]: > Nov 22 07:59:19 crc kubenswrapper[4853]: I1122 07:59:19.948265 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-d8mrh" Nov 22 07:59:19 crc kubenswrapper[4853]: I1122 07:59:19.949489 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-d8mrh" Nov 22 07:59:21 crc kubenswrapper[4853]: I1122 07:59:21.007468 4853 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-d8mrh" podUID="680a9639-5f25-4e1f-9041-5cf0988d05de" containerName="registry-server" probeResult="failure" output=< Nov 22 07:59:21 crc kubenswrapper[4853]: timeout: failed to connect service ":50051" within 1s Nov 22 07:59:21 crc kubenswrapper[4853]: > Nov 22 07:59:22 crc kubenswrapper[4853]: I1122 07:59:22.780320 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-4dj4d" Nov 22 07:59:22 crc kubenswrapper[4853]: I1122 07:59:22.852737 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-4dj4d" Nov 22 07:59:23 crc kubenswrapper[4853]: I1122 07:59:23.041522 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4dj4d"] Nov 22 07:59:23 crc 
kubenswrapper[4853]: I1122 07:59:23.995822 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-4dj4d" podUID="782dde53-293a-409a-9516-f6e8ef463be0" containerName="registry-server" containerID="cri-o://fc95b151a35d41def3bdb691b3896b318a2932437b6186a43ec451f19e4d572b" gracePeriod=2 Nov 22 07:59:25 crc kubenswrapper[4853]: I1122 07:59:25.073239 4853 generic.go:334] "Generic (PLEG): container finished" podID="782dde53-293a-409a-9516-f6e8ef463be0" containerID="fc95b151a35d41def3bdb691b3896b318a2932437b6186a43ec451f19e4d572b" exitCode=0 Nov 22 07:59:25 crc kubenswrapper[4853]: I1122 07:59:25.073638 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4dj4d" event={"ID":"782dde53-293a-409a-9516-f6e8ef463be0","Type":"ContainerDied","Data":"fc95b151a35d41def3bdb691b3896b318a2932437b6186a43ec451f19e4d572b"} Nov 22 07:59:25 crc kubenswrapper[4853]: I1122 07:59:25.229165 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4dj4d" Nov 22 07:59:25 crc kubenswrapper[4853]: I1122 07:59:25.399147 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/782dde53-293a-409a-9516-f6e8ef463be0-utilities\") pod \"782dde53-293a-409a-9516-f6e8ef463be0\" (UID: \"782dde53-293a-409a-9516-f6e8ef463be0\") " Nov 22 07:59:25 crc kubenswrapper[4853]: I1122 07:59:25.399655 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fkg4w\" (UniqueName: \"kubernetes.io/projected/782dde53-293a-409a-9516-f6e8ef463be0-kube-api-access-fkg4w\") pod \"782dde53-293a-409a-9516-f6e8ef463be0\" (UID: \"782dde53-293a-409a-9516-f6e8ef463be0\") " Nov 22 07:59:25 crc kubenswrapper[4853]: I1122 07:59:25.399998 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/782dde53-293a-409a-9516-f6e8ef463be0-catalog-content\") pod \"782dde53-293a-409a-9516-f6e8ef463be0\" (UID: \"782dde53-293a-409a-9516-f6e8ef463be0\") " Nov 22 07:59:25 crc kubenswrapper[4853]: I1122 07:59:25.400315 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/782dde53-293a-409a-9516-f6e8ef463be0-utilities" (OuterVolumeSpecName: "utilities") pod "782dde53-293a-409a-9516-f6e8ef463be0" (UID: "782dde53-293a-409a-9516-f6e8ef463be0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:59:25 crc kubenswrapper[4853]: I1122 07:59:25.401495 4853 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/782dde53-293a-409a-9516-f6e8ef463be0-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 07:59:25 crc kubenswrapper[4853]: I1122 07:59:25.408402 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/782dde53-293a-409a-9516-f6e8ef463be0-kube-api-access-fkg4w" (OuterVolumeSpecName: "kube-api-access-fkg4w") pod "782dde53-293a-409a-9516-f6e8ef463be0" (UID: "782dde53-293a-409a-9516-f6e8ef463be0"). InnerVolumeSpecName "kube-api-access-fkg4w". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:59:25 crc kubenswrapper[4853]: I1122 07:59:25.503648 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fkg4w\" (UniqueName: \"kubernetes.io/projected/782dde53-293a-409a-9516-f6e8ef463be0-kube-api-access-fkg4w\") on node \"crc\" DevicePath \"\"" Nov 22 07:59:25 crc kubenswrapper[4853]: I1122 07:59:25.531237 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/782dde53-293a-409a-9516-f6e8ef463be0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "782dde53-293a-409a-9516-f6e8ef463be0" (UID: "782dde53-293a-409a-9516-f6e8ef463be0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:59:25 crc kubenswrapper[4853]: I1122 07:59:25.605919 4853 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/782dde53-293a-409a-9516-f6e8ef463be0-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 07:59:26 crc kubenswrapper[4853]: I1122 07:59:26.091317 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4dj4d" event={"ID":"782dde53-293a-409a-9516-f6e8ef463be0","Type":"ContainerDied","Data":"30211609d494c76c862b0e07745fea8af9dcedd56693c134adcf989f361aad9f"} Nov 22 07:59:26 crc kubenswrapper[4853]: I1122 07:59:26.092072 4853 scope.go:117] "RemoveContainer" containerID="fc95b151a35d41def3bdb691b3896b318a2932437b6186a43ec451f19e4d572b" Nov 22 07:59:26 crc kubenswrapper[4853]: I1122 07:59:26.091431 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4dj4d" Nov 22 07:59:26 crc kubenswrapper[4853]: I1122 07:59:26.120472 4853 scope.go:117] "RemoveContainer" containerID="3b3dd7b622e9ea3d43cadd65811c0810faf31bb8fdc3e772d175e6180fc83d77" Nov 22 07:59:26 crc kubenswrapper[4853]: I1122 07:59:26.133150 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4dj4d"] Nov 22 07:59:26 crc kubenswrapper[4853]: I1122 07:59:26.145373 4853 scope.go:117] "RemoveContainer" containerID="d4597517d975aba49a5fba9ea05fc588bb937259177cc24a2a5ac0d821ef174e" Nov 22 07:59:26 crc kubenswrapper[4853]: I1122 07:59:26.146909 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-4dj4d"] Nov 22 07:59:27 crc kubenswrapper[4853]: I1122 07:59:27.771554 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="782dde53-293a-409a-9516-f6e8ef463be0" path="/var/lib/kubelet/pods/782dde53-293a-409a-9516-f6e8ef463be0/volumes" Nov 22 07:59:31 crc kubenswrapper[4853]: I1122 07:59:31.007536 4853 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-d8mrh" podUID="680a9639-5f25-4e1f-9041-5cf0988d05de" containerName="registry-server" probeResult="failure" output=< Nov 22 07:59:31 crc kubenswrapper[4853]: timeout: failed to connect service ":50051" within 1s Nov 22 07:59:31 crc kubenswrapper[4853]: > Nov 22 07:59:31 crc kubenswrapper[4853]: I1122 07:59:31.297635 4853 patch_prober.go:28] interesting pod/machine-config-daemon-fflvd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 07:59:31 crc kubenswrapper[4853]: I1122 07:59:31.298229 4853 prober.go:107] 
"Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 07:59:36 crc kubenswrapper[4853]: I1122 07:59:36.049541 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-cbsz9"] Nov 22 07:59:36 crc kubenswrapper[4853]: I1122 07:59:36.066784 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-cbsz9"] Nov 22 07:59:37 crc kubenswrapper[4853]: I1122 07:59:37.763999 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42ee627d-63e1-4a7f-9da3-aca02dcd4cec" path="/var/lib/kubelet/pods/42ee627d-63e1-4a7f-9da3-aca02dcd4cec/volumes" Nov 22 07:59:41 crc kubenswrapper[4853]: I1122 07:59:41.009517 4853 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-d8mrh" podUID="680a9639-5f25-4e1f-9041-5cf0988d05de" containerName="registry-server" probeResult="failure" output=< Nov 22 07:59:41 crc kubenswrapper[4853]: timeout: failed to connect service ":50051" within 1s Nov 22 07:59:41 crc kubenswrapper[4853]: > Nov 22 07:59:42 crc kubenswrapper[4853]: I1122 07:59:42.039428 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-blcfh"] Nov 22 07:59:42 crc kubenswrapper[4853]: I1122 07:59:42.052290 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-blcfh"] Nov 22 07:59:43 crc kubenswrapper[4853]: I1122 07:59:43.763633 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="41c890fc-832a-4ab4-ad0f-5f41153efa12" path="/var/lib/kubelet/pods/41c890fc-832a-4ab4-ad0f-5f41153efa12/volumes" Nov 22 07:59:49 crc kubenswrapper[4853]: I1122 07:59:49.071123 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-create-c7fbj"] Nov 22 07:59:49 crc kubenswrapper[4853]: I1122 07:59:49.093824 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-create-c7fbj"] Nov 22 07:59:49 crc kubenswrapper[4853]: I1122 07:59:49.766318 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d23544dc-a7ee-4c28-8c1f-8d2faeaed66d" path="/var/lib/kubelet/pods/d23544dc-a7ee-4c28-8c1f-8d2faeaed66d/volumes" Nov 22 07:59:50 crc kubenswrapper[4853]: I1122 07:59:50.002693 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-d8mrh" Nov 22 07:59:50 crc kubenswrapper[4853]: I1122 07:59:50.061643 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-4e8a-account-create-jjgdk"] Nov 22 07:59:50 crc kubenswrapper[4853]: I1122 07:59:50.068559 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-d8mrh" Nov 22 07:59:50 crc kubenswrapper[4853]: I1122 07:59:50.075148 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-4e8a-account-create-jjgdk"] Nov 22 07:59:50 crc kubenswrapper[4853]: I1122 07:59:50.682170 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-d8mrh"] Nov 22 07:59:51 crc kubenswrapper[4853]: I1122 07:59:51.402551 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-d8mrh" 
podUID="680a9639-5f25-4e1f-9041-5cf0988d05de" containerName="registry-server" containerID="cri-o://79edab9bc545397cc5a5c93a073c7432a4d7eed4502e7223e1695ec5998a417a" gracePeriod=2 Nov 22 07:59:51 crc kubenswrapper[4853]: I1122 07:59:51.765497 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c6f5b166-148f-4c68-b444-40babca8ba03" path="/var/lib/kubelet/pods/c6f5b166-148f-4c68-b444-40babca8ba03/volumes" Nov 22 07:59:52 crc kubenswrapper[4853]: I1122 07:59:52.031919 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-d8mrh" Nov 22 07:59:52 crc kubenswrapper[4853]: I1122 07:59:52.120434 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7qf8j\" (UniqueName: \"kubernetes.io/projected/680a9639-5f25-4e1f-9041-5cf0988d05de-kube-api-access-7qf8j\") pod \"680a9639-5f25-4e1f-9041-5cf0988d05de\" (UID: \"680a9639-5f25-4e1f-9041-5cf0988d05de\") " Nov 22 07:59:52 crc kubenswrapper[4853]: I1122 07:59:52.120589 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/680a9639-5f25-4e1f-9041-5cf0988d05de-catalog-content\") pod \"680a9639-5f25-4e1f-9041-5cf0988d05de\" (UID: \"680a9639-5f25-4e1f-9041-5cf0988d05de\") " Nov 22 07:59:52 crc kubenswrapper[4853]: I1122 07:59:52.120728 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/680a9639-5f25-4e1f-9041-5cf0988d05de-utilities\") pod \"680a9639-5f25-4e1f-9041-5cf0988d05de\" (UID: \"680a9639-5f25-4e1f-9041-5cf0988d05de\") " Nov 22 07:59:52 crc kubenswrapper[4853]: I1122 07:59:52.122029 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/680a9639-5f25-4e1f-9041-5cf0988d05de-utilities" (OuterVolumeSpecName: "utilities") pod "680a9639-5f25-4e1f-9041-5cf0988d05de" (UID: "680a9639-5f25-4e1f-9041-5cf0988d05de"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:59:52 crc kubenswrapper[4853]: I1122 07:59:52.127259 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/680a9639-5f25-4e1f-9041-5cf0988d05de-kube-api-access-7qf8j" (OuterVolumeSpecName: "kube-api-access-7qf8j") pod "680a9639-5f25-4e1f-9041-5cf0988d05de" (UID: "680a9639-5f25-4e1f-9041-5cf0988d05de"). InnerVolumeSpecName "kube-api-access-7qf8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:59:52 crc kubenswrapper[4853]: I1122 07:59:52.195640 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/680a9639-5f25-4e1f-9041-5cf0988d05de-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "680a9639-5f25-4e1f-9041-5cf0988d05de" (UID: "680a9639-5f25-4e1f-9041-5cf0988d05de"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:59:52 crc kubenswrapper[4853]: I1122 07:59:52.224378 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7qf8j\" (UniqueName: \"kubernetes.io/projected/680a9639-5f25-4e1f-9041-5cf0988d05de-kube-api-access-7qf8j\") on node \"crc\" DevicePath \"\"" Nov 22 07:59:52 crc kubenswrapper[4853]: I1122 07:59:52.224441 4853 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/680a9639-5f25-4e1f-9041-5cf0988d05de-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 07:59:52 crc kubenswrapper[4853]: I1122 07:59:52.224452 4853 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/680a9639-5f25-4e1f-9041-5cf0988d05de-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 07:59:52 crc kubenswrapper[4853]: I1122 07:59:52.418087 4853 generic.go:334] "Generic (PLEG): container finished" podID="680a9639-5f25-4e1f-9041-5cf0988d05de" containerID="79edab9bc545397cc5a5c93a073c7432a4d7eed4502e7223e1695ec5998a417a" exitCode=0 Nov 22 07:59:52 crc kubenswrapper[4853]: I1122 07:59:52.418165 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-d8mrh" Nov 22 07:59:52 crc kubenswrapper[4853]: I1122 07:59:52.418190 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d8mrh" event={"ID":"680a9639-5f25-4e1f-9041-5cf0988d05de","Type":"ContainerDied","Data":"79edab9bc545397cc5a5c93a073c7432a4d7eed4502e7223e1695ec5998a417a"} Nov 22 07:59:52 crc kubenswrapper[4853]: I1122 07:59:52.419002 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d8mrh" event={"ID":"680a9639-5f25-4e1f-9041-5cf0988d05de","Type":"ContainerDied","Data":"e9eb4579aa052c1d1ddd0973142a279edfdbe7f62b1ce4f8d33e159ca6aea51e"} Nov 22 07:59:52 crc kubenswrapper[4853]: I1122 07:59:52.419022 4853 scope.go:117] "RemoveContainer" containerID="79edab9bc545397cc5a5c93a073c7432a4d7eed4502e7223e1695ec5998a417a" Nov 22 07:59:52 crc kubenswrapper[4853]: I1122 07:59:52.455202 4853 scope.go:117] "RemoveContainer" containerID="80fc60562f8ea401a5ba8a7109bbea555931ff85196b4f4dbbb9e4742b10ac9c" Nov 22 07:59:52 crc kubenswrapper[4853]: I1122 07:59:52.464722 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-d8mrh"] Nov 22 07:59:52 crc kubenswrapper[4853]: I1122 07:59:52.473788 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-d8mrh"] Nov 22 07:59:52 crc kubenswrapper[4853]: I1122 07:59:52.487696 4853 scope.go:117] "RemoveContainer" containerID="cba687a952828b847b7958944ddd14cc03fefee705b66b91816c54aee12c92eb" Nov 22 07:59:52 crc kubenswrapper[4853]: I1122 07:59:52.542273 4853 scope.go:117] "RemoveContainer" containerID="79edab9bc545397cc5a5c93a073c7432a4d7eed4502e7223e1695ec5998a417a" Nov 22 07:59:52 crc kubenswrapper[4853]: E1122 07:59:52.543019 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"79edab9bc545397cc5a5c93a073c7432a4d7eed4502e7223e1695ec5998a417a\": container with ID starting with 79edab9bc545397cc5a5c93a073c7432a4d7eed4502e7223e1695ec5998a417a not found: ID does not exist" containerID="79edab9bc545397cc5a5c93a073c7432a4d7eed4502e7223e1695ec5998a417a" Nov 22 07:59:52 crc kubenswrapper[4853]: I1122 07:59:52.543105 
4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"79edab9bc545397cc5a5c93a073c7432a4d7eed4502e7223e1695ec5998a417a"} err="failed to get container status \"79edab9bc545397cc5a5c93a073c7432a4d7eed4502e7223e1695ec5998a417a\": rpc error: code = NotFound desc = could not find container \"79edab9bc545397cc5a5c93a073c7432a4d7eed4502e7223e1695ec5998a417a\": container with ID starting with 79edab9bc545397cc5a5c93a073c7432a4d7eed4502e7223e1695ec5998a417a not found: ID does not exist" Nov 22 07:59:52 crc kubenswrapper[4853]: I1122 07:59:52.543147 4853 scope.go:117] "RemoveContainer" containerID="80fc60562f8ea401a5ba8a7109bbea555931ff85196b4f4dbbb9e4742b10ac9c" Nov 22 07:59:52 crc kubenswrapper[4853]: E1122 07:59:52.543796 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"80fc60562f8ea401a5ba8a7109bbea555931ff85196b4f4dbbb9e4742b10ac9c\": container with ID starting with 80fc60562f8ea401a5ba8a7109bbea555931ff85196b4f4dbbb9e4742b10ac9c not found: ID does not exist" containerID="80fc60562f8ea401a5ba8a7109bbea555931ff85196b4f4dbbb9e4742b10ac9c" Nov 22 07:59:52 crc kubenswrapper[4853]: I1122 07:59:52.543841 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"80fc60562f8ea401a5ba8a7109bbea555931ff85196b4f4dbbb9e4742b10ac9c"} err="failed to get container status \"80fc60562f8ea401a5ba8a7109bbea555931ff85196b4f4dbbb9e4742b10ac9c\": rpc error: code = NotFound desc = could not find container \"80fc60562f8ea401a5ba8a7109bbea555931ff85196b4f4dbbb9e4742b10ac9c\": container with ID starting with 80fc60562f8ea401a5ba8a7109bbea555931ff85196b4f4dbbb9e4742b10ac9c not found: ID does not exist" Nov 22 07:59:52 crc kubenswrapper[4853]: I1122 07:59:52.543877 4853 scope.go:117] "RemoveContainer" containerID="cba687a952828b847b7958944ddd14cc03fefee705b66b91816c54aee12c92eb" Nov 22 07:59:52 crc kubenswrapper[4853]: E1122 07:59:52.544243 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cba687a952828b847b7958944ddd14cc03fefee705b66b91816c54aee12c92eb\": container with ID starting with cba687a952828b847b7958944ddd14cc03fefee705b66b91816c54aee12c92eb not found: ID does not exist" containerID="cba687a952828b847b7958944ddd14cc03fefee705b66b91816c54aee12c92eb" Nov 22 07:59:52 crc kubenswrapper[4853]: I1122 07:59:52.544282 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cba687a952828b847b7958944ddd14cc03fefee705b66b91816c54aee12c92eb"} err="failed to get container status \"cba687a952828b847b7958944ddd14cc03fefee705b66b91816c54aee12c92eb\": rpc error: code = NotFound desc = could not find container \"cba687a952828b847b7958944ddd14cc03fefee705b66b91816c54aee12c92eb\": container with ID starting with cba687a952828b847b7958944ddd14cc03fefee705b66b91816c54aee12c92eb not found: ID does not exist" Nov 22 07:59:53 crc kubenswrapper[4853]: I1122 07:59:53.763535 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="680a9639-5f25-4e1f-9041-5cf0988d05de" path="/var/lib/kubelet/pods/680a9639-5f25-4e1f-9041-5cf0988d05de/volumes" Nov 22 08:00:00 crc kubenswrapper[4853]: I1122 08:00:00.166567 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396640-t6gcs"] Nov 22 08:00:00 crc kubenswrapper[4853]: E1122 08:00:00.167488 4853 cpu_manager.go:410] "RemoveStaleState: 
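The E-level "ContainerStatus from runtime service failed ... NotFound" lines above look alarming but are benign: the kubelet is clearing bookkeeping for containers CRI-O has already deleted, so a NotFound reply simply means there is nothing left to remove. A sketch of that idempotent-delete pattern with grpc-go's status/codes helpers (removeContainer here is a hypothetical stand-in for the CRI RemoveContainer RPC, and the container ID is truncated for illustration):

```go
package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// removeContainer is a hypothetical stand-in for the CRI RemoveContainer call;
// here it always reports the container as missing, like the log lines above.
func removeContainer(id string) error {
	return status.Errorf(codes.NotFound, "could not find container %q", id)
}

func main() {
	err := removeContainer("79edab9bc545...")
	if status.Code(err) == codes.NotFound {
		// Already gone: treat as success, which is why these entries are harmless.
		fmt.Println("already removed; nothing to do")
		return
	}
	if err != nil {
		fmt.Println("removal failed:", err)
	}
}
```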
removing container" podUID="782dde53-293a-409a-9516-f6e8ef463be0" containerName="extract-utilities" Nov 22 08:00:00 crc kubenswrapper[4853]: I1122 08:00:00.167500 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="782dde53-293a-409a-9516-f6e8ef463be0" containerName="extract-utilities" Nov 22 08:00:00 crc kubenswrapper[4853]: E1122 08:00:00.167544 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="680a9639-5f25-4e1f-9041-5cf0988d05de" containerName="extract-utilities" Nov 22 08:00:00 crc kubenswrapper[4853]: I1122 08:00:00.167550 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="680a9639-5f25-4e1f-9041-5cf0988d05de" containerName="extract-utilities" Nov 22 08:00:00 crc kubenswrapper[4853]: E1122 08:00:00.167564 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="680a9639-5f25-4e1f-9041-5cf0988d05de" containerName="registry-server" Nov 22 08:00:00 crc kubenswrapper[4853]: I1122 08:00:00.167570 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="680a9639-5f25-4e1f-9041-5cf0988d05de" containerName="registry-server" Nov 22 08:00:00 crc kubenswrapper[4853]: E1122 08:00:00.167587 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="782dde53-293a-409a-9516-f6e8ef463be0" containerName="extract-content" Nov 22 08:00:00 crc kubenswrapper[4853]: I1122 08:00:00.167593 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="782dde53-293a-409a-9516-f6e8ef463be0" containerName="extract-content" Nov 22 08:00:00 crc kubenswrapper[4853]: E1122 08:00:00.167604 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="680a9639-5f25-4e1f-9041-5cf0988d05de" containerName="extract-content" Nov 22 08:00:00 crc kubenswrapper[4853]: I1122 08:00:00.167610 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="680a9639-5f25-4e1f-9041-5cf0988d05de" containerName="extract-content" Nov 22 08:00:00 crc kubenswrapper[4853]: E1122 08:00:00.167624 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="782dde53-293a-409a-9516-f6e8ef463be0" containerName="registry-server" Nov 22 08:00:00 crc kubenswrapper[4853]: I1122 08:00:00.167631 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="782dde53-293a-409a-9516-f6e8ef463be0" containerName="registry-server" Nov 22 08:00:00 crc kubenswrapper[4853]: I1122 08:00:00.167840 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="782dde53-293a-409a-9516-f6e8ef463be0" containerName="registry-server" Nov 22 08:00:00 crc kubenswrapper[4853]: I1122 08:00:00.167863 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="680a9639-5f25-4e1f-9041-5cf0988d05de" containerName="registry-server" Nov 22 08:00:00 crc kubenswrapper[4853]: I1122 08:00:00.168631 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396640-t6gcs" Nov 22 08:00:00 crc kubenswrapper[4853]: I1122 08:00:00.172047 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 22 08:00:00 crc kubenswrapper[4853]: I1122 08:00:00.173618 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 22 08:00:00 crc kubenswrapper[4853]: I1122 08:00:00.188604 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396640-t6gcs"] Nov 22 08:00:00 crc kubenswrapper[4853]: I1122 08:00:00.345225 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c5aafdf2-b9e2-4c2a-b418-2493c8352c40-config-volume\") pod \"collect-profiles-29396640-t6gcs\" (UID: \"c5aafdf2-b9e2-4c2a-b418-2493c8352c40\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396640-t6gcs" Nov 22 08:00:00 crc kubenswrapper[4853]: I1122 08:00:00.345829 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c5aafdf2-b9e2-4c2a-b418-2493c8352c40-secret-volume\") pod \"collect-profiles-29396640-t6gcs\" (UID: \"c5aafdf2-b9e2-4c2a-b418-2493c8352c40\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396640-t6gcs" Nov 22 08:00:00 crc kubenswrapper[4853]: I1122 08:00:00.346061 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2zdnz\" (UniqueName: \"kubernetes.io/projected/c5aafdf2-b9e2-4c2a-b418-2493c8352c40-kube-api-access-2zdnz\") pod \"collect-profiles-29396640-t6gcs\" (UID: \"c5aafdf2-b9e2-4c2a-b418-2493c8352c40\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396640-t6gcs" Nov 22 08:00:00 crc kubenswrapper[4853]: I1122 08:00:00.449578 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c5aafdf2-b9e2-4c2a-b418-2493c8352c40-config-volume\") pod \"collect-profiles-29396640-t6gcs\" (UID: \"c5aafdf2-b9e2-4c2a-b418-2493c8352c40\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396640-t6gcs" Nov 22 08:00:00 crc kubenswrapper[4853]: I1122 08:00:00.449845 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c5aafdf2-b9e2-4c2a-b418-2493c8352c40-secret-volume\") pod \"collect-profiles-29396640-t6gcs\" (UID: \"c5aafdf2-b9e2-4c2a-b418-2493c8352c40\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396640-t6gcs" Nov 22 08:00:00 crc kubenswrapper[4853]: I1122 08:00:00.449918 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2zdnz\" (UniqueName: \"kubernetes.io/projected/c5aafdf2-b9e2-4c2a-b418-2493c8352c40-kube-api-access-2zdnz\") pod \"collect-profiles-29396640-t6gcs\" (UID: \"c5aafdf2-b9e2-4c2a-b418-2493c8352c40\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396640-t6gcs" Nov 22 08:00:00 crc kubenswrapper[4853]: I1122 08:00:00.450668 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c5aafdf2-b9e2-4c2a-b418-2493c8352c40-config-volume\") pod 
\"collect-profiles-29396640-t6gcs\" (UID: \"c5aafdf2-b9e2-4c2a-b418-2493c8352c40\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396640-t6gcs" Nov 22 08:00:00 crc kubenswrapper[4853]: I1122 08:00:00.468709 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c5aafdf2-b9e2-4c2a-b418-2493c8352c40-secret-volume\") pod \"collect-profiles-29396640-t6gcs\" (UID: \"c5aafdf2-b9e2-4c2a-b418-2493c8352c40\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396640-t6gcs" Nov 22 08:00:00 crc kubenswrapper[4853]: I1122 08:00:00.469386 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2zdnz\" (UniqueName: \"kubernetes.io/projected/c5aafdf2-b9e2-4c2a-b418-2493c8352c40-kube-api-access-2zdnz\") pod \"collect-profiles-29396640-t6gcs\" (UID: \"c5aafdf2-b9e2-4c2a-b418-2493c8352c40\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396640-t6gcs" Nov 22 08:00:00 crc kubenswrapper[4853]: I1122 08:00:00.507481 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396640-t6gcs" Nov 22 08:00:01 crc kubenswrapper[4853]: I1122 08:00:01.019402 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396640-t6gcs"] Nov 22 08:00:01 crc kubenswrapper[4853]: I1122 08:00:01.297393 4853 patch_prober.go:28] interesting pod/machine-config-daemon-fflvd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 08:00:01 crc kubenswrapper[4853]: I1122 08:00:01.297516 4853 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 08:00:01 crc kubenswrapper[4853]: I1122 08:00:01.297587 4853 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" Nov 22 08:00:01 crc kubenswrapper[4853]: I1122 08:00:01.299193 4853 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5b5dbaa1649c53a81854e516978dad56264c7a832b92f5fd324ac74aac9f63cd"} pod="openshift-machine-config-operator/machine-config-daemon-fflvd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 22 08:00:01 crc kubenswrapper[4853]: I1122 08:00:01.299270 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" containerName="machine-config-daemon" containerID="cri-o://5b5dbaa1649c53a81854e516978dad56264c7a832b92f5fd324ac74aac9f63cd" gracePeriod=600 Nov 22 08:00:01 crc kubenswrapper[4853]: I1122 08:00:01.336960 4853 scope.go:117] "RemoveContainer" containerID="a00fb8d47d57f5167eb191ed1e61f773c885900c90935674ac55ac783b8af83d" Nov 22 08:00:01 crc kubenswrapper[4853]: I1122 08:00:01.545372 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29396640-t6gcs" event={"ID":"c5aafdf2-b9e2-4c2a-b418-2493c8352c40","Type":"ContainerStarted","Data":"2429e3f8d55cf2d56eee4b2a997e06ecd13f382891e14d56063311e3d9a78eef"} Nov 22 08:00:01 crc kubenswrapper[4853]: I1122 08:00:01.654952 4853 scope.go:117] "RemoveContainer" containerID="1c79bd2ec6606952eab447e586e1ed425169fd030fa8dd3ae56c467638c0a2d0" Nov 22 08:00:01 crc kubenswrapper[4853]: I1122 08:00:01.691764 4853 scope.go:117] "RemoveContainer" containerID="a4fce85f953a48f363537a181cfb1a4384fb876c100e2e32d58fa35ad92b866b" Nov 22 08:00:01 crc kubenswrapper[4853]: I1122 08:00:01.787673 4853 scope.go:117] "RemoveContainer" containerID="0a3d507cb8a93880955404c4d57ab7a986df4e07de719fdbad427bf9d98346f6" Nov 22 08:00:01 crc kubenswrapper[4853]: I1122 08:00:01.855320 4853 scope.go:117] "RemoveContainer" containerID="dcf56e335ecbb41ee55bd67167913c1cce60d9282cd45960933a48273bac10c8" Nov 22 08:00:01 crc kubenswrapper[4853]: I1122 08:00:01.906363 4853 scope.go:117] "RemoveContainer" containerID="968c3a1f5258b8690034ea040c86d74876232989b4ae9da48f84786963210af2" Nov 22 08:00:02 crc kubenswrapper[4853]: I1122 08:00:02.571875 4853 generic.go:334] "Generic (PLEG): container finished" podID="476c875a-2b87-419a-8042-0ba059620fd8" containerID="5b5dbaa1649c53a81854e516978dad56264c7a832b92f5fd324ac74aac9f63cd" exitCode=0 Nov 22 08:00:02 crc kubenswrapper[4853]: I1122 08:00:02.572334 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" event={"ID":"476c875a-2b87-419a-8042-0ba059620fd8","Type":"ContainerDied","Data":"5b5dbaa1649c53a81854e516978dad56264c7a832b92f5fd324ac74aac9f63cd"} Nov 22 08:00:02 crc kubenswrapper[4853]: I1122 08:00:02.572375 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" event={"ID":"476c875a-2b87-419a-8042-0ba059620fd8","Type":"ContainerStarted","Data":"3ec8165bab4fb75b436ac780db4c9acf7acf3a80877e56710bdae29a7122f42c"} Nov 22 08:00:02 crc kubenswrapper[4853]: I1122 08:00:02.572399 4853 scope.go:117] "RemoveContainer" containerID="f1c0ac83a6a857bb9ac64bb048075d436003939992592a9a9b2d45101ef2a2de" Nov 22 08:00:02 crc kubenswrapper[4853]: I1122 08:00:02.576949 4853 generic.go:334] "Generic (PLEG): container finished" podID="c5aafdf2-b9e2-4c2a-b418-2493c8352c40" containerID="bc4347ff09ef41b0779c4a9b2e5ed2b5b08e7cab79a1bbfa90f1e173fd5d464b" exitCode=0 Nov 22 08:00:02 crc kubenswrapper[4853]: I1122 08:00:02.577003 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29396640-t6gcs" event={"ID":"c5aafdf2-b9e2-4c2a-b418-2493c8352c40","Type":"ContainerDied","Data":"bc4347ff09ef41b0779c4a9b2e5ed2b5b08e7cab79a1bbfa90f1e173fd5d464b"} Nov 22 08:00:04 crc kubenswrapper[4853]: I1122 08:00:04.061403 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396640-t6gcs" Nov 22 08:00:04 crc kubenswrapper[4853]: I1122 08:00:04.168365 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c5aafdf2-b9e2-4c2a-b418-2493c8352c40-secret-volume\") pod \"c5aafdf2-b9e2-4c2a-b418-2493c8352c40\" (UID: \"c5aafdf2-b9e2-4c2a-b418-2493c8352c40\") " Nov 22 08:00:04 crc kubenswrapper[4853]: I1122 08:00:04.168449 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c5aafdf2-b9e2-4c2a-b418-2493c8352c40-config-volume\") pod \"c5aafdf2-b9e2-4c2a-b418-2493c8352c40\" (UID: \"c5aafdf2-b9e2-4c2a-b418-2493c8352c40\") " Nov 22 08:00:04 crc kubenswrapper[4853]: I1122 08:00:04.168509 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2zdnz\" (UniqueName: \"kubernetes.io/projected/c5aafdf2-b9e2-4c2a-b418-2493c8352c40-kube-api-access-2zdnz\") pod \"c5aafdf2-b9e2-4c2a-b418-2493c8352c40\" (UID: \"c5aafdf2-b9e2-4c2a-b418-2493c8352c40\") " Nov 22 08:00:04 crc kubenswrapper[4853]: I1122 08:00:04.169362 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5aafdf2-b9e2-4c2a-b418-2493c8352c40-config-volume" (OuterVolumeSpecName: "config-volume") pod "c5aafdf2-b9e2-4c2a-b418-2493c8352c40" (UID: "c5aafdf2-b9e2-4c2a-b418-2493c8352c40"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 08:00:04 crc kubenswrapper[4853]: I1122 08:00:04.177251 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5aafdf2-b9e2-4c2a-b418-2493c8352c40-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "c5aafdf2-b9e2-4c2a-b418-2493c8352c40" (UID: "c5aafdf2-b9e2-4c2a-b418-2493c8352c40"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:00:04 crc kubenswrapper[4853]: I1122 08:00:04.177309 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5aafdf2-b9e2-4c2a-b418-2493c8352c40-kube-api-access-2zdnz" (OuterVolumeSpecName: "kube-api-access-2zdnz") pod "c5aafdf2-b9e2-4c2a-b418-2493c8352c40" (UID: "c5aafdf2-b9e2-4c2a-b418-2493c8352c40"). InnerVolumeSpecName "kube-api-access-2zdnz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 08:00:04 crc kubenswrapper[4853]: I1122 08:00:04.272567 4853 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c5aafdf2-b9e2-4c2a-b418-2493c8352c40-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 22 08:00:04 crc kubenswrapper[4853]: I1122 08:00:04.273121 4853 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c5aafdf2-b9e2-4c2a-b418-2493c8352c40-config-volume\") on node \"crc\" DevicePath \"\"" Nov 22 08:00:04 crc kubenswrapper[4853]: I1122 08:00:04.273140 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2zdnz\" (UniqueName: \"kubernetes.io/projected/c5aafdf2-b9e2-4c2a-b418-2493c8352c40-kube-api-access-2zdnz\") on node \"crc\" DevicePath \"\"" Nov 22 08:00:04 crc kubenswrapper[4853]: I1122 08:00:04.608990 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29396640-t6gcs" event={"ID":"c5aafdf2-b9e2-4c2a-b418-2493c8352c40","Type":"ContainerDied","Data":"2429e3f8d55cf2d56eee4b2a997e06ecd13f382891e14d56063311e3d9a78eef"} Nov 22 08:00:04 crc kubenswrapper[4853]: I1122 08:00:04.609029 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2429e3f8d55cf2d56eee4b2a997e06ecd13f382891e14d56063311e3d9a78eef" Nov 22 08:00:04 crc kubenswrapper[4853]: I1122 08:00:04.609063 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396640-t6gcs" Nov 22 08:00:05 crc kubenswrapper[4853]: I1122 08:00:05.149178 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396595-r8svh"] Nov 22 08:00:05 crc kubenswrapper[4853]: I1122 08:00:05.158764 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396595-r8svh"] Nov 22 08:00:05 crc kubenswrapper[4853]: I1122 08:00:05.778685 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8d65d22f-c53e-4a25-9571-3bbb65e04d66" path="/var/lib/kubelet/pods/8d65d22f-c53e-4a25-9571-3bbb65e04d66/volumes" Nov 22 08:00:34 crc kubenswrapper[4853]: I1122 08:00:34.829243 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-7qr5d"] Nov 22 08:00:34 crc kubenswrapper[4853]: E1122 08:00:34.830157 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5aafdf2-b9e2-4c2a-b418-2493c8352c40" containerName="collect-profiles" Nov 22 08:00:34 crc kubenswrapper[4853]: I1122 08:00:34.830169 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5aafdf2-b9e2-4c2a-b418-2493c8352c40" containerName="collect-profiles" Nov 22 08:00:34 crc kubenswrapper[4853]: I1122 08:00:34.830399 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5aafdf2-b9e2-4c2a-b418-2493c8352c40" containerName="collect-profiles" Nov 22 08:00:34 crc kubenswrapper[4853]: I1122 08:00:34.832890 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7qr5d" Nov 22 08:00:34 crc kubenswrapper[4853]: I1122 08:00:34.875560 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7qr5d"] Nov 22 08:00:34 crc kubenswrapper[4853]: I1122 08:00:34.991021 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c4cc19c-c819-4da9-b3ab-be9c7e3872dc-catalog-content\") pod \"certified-operators-7qr5d\" (UID: \"0c4cc19c-c819-4da9-b3ab-be9c7e3872dc\") " pod="openshift-marketplace/certified-operators-7qr5d" Nov 22 08:00:34 crc kubenswrapper[4853]: I1122 08:00:34.991283 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5wf7z\" (UniqueName: \"kubernetes.io/projected/0c4cc19c-c819-4da9-b3ab-be9c7e3872dc-kube-api-access-5wf7z\") pod \"certified-operators-7qr5d\" (UID: \"0c4cc19c-c819-4da9-b3ab-be9c7e3872dc\") " pod="openshift-marketplace/certified-operators-7qr5d" Nov 22 08:00:34 crc kubenswrapper[4853]: I1122 08:00:34.991646 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c4cc19c-c819-4da9-b3ab-be9c7e3872dc-utilities\") pod \"certified-operators-7qr5d\" (UID: \"0c4cc19c-c819-4da9-b3ab-be9c7e3872dc\") " pod="openshift-marketplace/certified-operators-7qr5d" Nov 22 08:00:35 crc kubenswrapper[4853]: I1122 08:00:35.093809 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5wf7z\" (UniqueName: \"kubernetes.io/projected/0c4cc19c-c819-4da9-b3ab-be9c7e3872dc-kube-api-access-5wf7z\") pod \"certified-operators-7qr5d\" (UID: \"0c4cc19c-c819-4da9-b3ab-be9c7e3872dc\") " pod="openshift-marketplace/certified-operators-7qr5d" Nov 22 08:00:35 crc kubenswrapper[4853]: I1122 08:00:35.094020 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c4cc19c-c819-4da9-b3ab-be9c7e3872dc-utilities\") pod \"certified-operators-7qr5d\" (UID: \"0c4cc19c-c819-4da9-b3ab-be9c7e3872dc\") " pod="openshift-marketplace/certified-operators-7qr5d" Nov 22 08:00:35 crc kubenswrapper[4853]: I1122 08:00:35.094108 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c4cc19c-c819-4da9-b3ab-be9c7e3872dc-catalog-content\") pod \"certified-operators-7qr5d\" (UID: \"0c4cc19c-c819-4da9-b3ab-be9c7e3872dc\") " pod="openshift-marketplace/certified-operators-7qr5d" Nov 22 08:00:35 crc kubenswrapper[4853]: I1122 08:00:35.094549 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c4cc19c-c819-4da9-b3ab-be9c7e3872dc-utilities\") pod \"certified-operators-7qr5d\" (UID: \"0c4cc19c-c819-4da9-b3ab-be9c7e3872dc\") " pod="openshift-marketplace/certified-operators-7qr5d" Nov 22 08:00:35 crc kubenswrapper[4853]: I1122 08:00:35.094867 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c4cc19c-c819-4da9-b3ab-be9c7e3872dc-catalog-content\") pod \"certified-operators-7qr5d\" (UID: \"0c4cc19c-c819-4da9-b3ab-be9c7e3872dc\") " pod="openshift-marketplace/certified-operators-7qr5d" Nov 22 08:00:35 crc kubenswrapper[4853]: I1122 08:00:35.132010 4853 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-5wf7z\" (UniqueName: \"kubernetes.io/projected/0c4cc19c-c819-4da9-b3ab-be9c7e3872dc-kube-api-access-5wf7z\") pod \"certified-operators-7qr5d\" (UID: \"0c4cc19c-c819-4da9-b3ab-be9c7e3872dc\") " pod="openshift-marketplace/certified-operators-7qr5d" Nov 22 08:00:35 crc kubenswrapper[4853]: I1122 08:00:35.194610 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7qr5d" Nov 22 08:00:35 crc kubenswrapper[4853]: I1122 08:00:35.811365 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7qr5d"] Nov 22 08:00:35 crc kubenswrapper[4853]: I1122 08:00:35.979126 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7qr5d" event={"ID":"0c4cc19c-c819-4da9-b3ab-be9c7e3872dc","Type":"ContainerStarted","Data":"0248f908807eca82176a6832097654f754aef87b2a056ab64323f581ec711643"} Nov 22 08:00:36 crc kubenswrapper[4853]: I1122 08:00:36.998166 4853 generic.go:334] "Generic (PLEG): container finished" podID="0c4cc19c-c819-4da9-b3ab-be9c7e3872dc" containerID="8067a5221a2eb31ca55ba599c6579f3643e2bc22a9b34a3e99c421c5cd79a46e" exitCode=0 Nov 22 08:00:36 crc kubenswrapper[4853]: I1122 08:00:36.998293 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7qr5d" event={"ID":"0c4cc19c-c819-4da9-b3ab-be9c7e3872dc","Type":"ContainerDied","Data":"8067a5221a2eb31ca55ba599c6579f3643e2bc22a9b34a3e99c421c5cd79a46e"} Nov 22 08:00:39 crc kubenswrapper[4853]: I1122 08:00:39.036304 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7qr5d" event={"ID":"0c4cc19c-c819-4da9-b3ab-be9c7e3872dc","Type":"ContainerStarted","Data":"efd64b8dd44d5ae48d2a9d15de7f92a173a4d73c48eea146e5f1fd3fbbde86ec"} Nov 22 08:00:42 crc kubenswrapper[4853]: I1122 08:00:42.071737 4853 generic.go:334] "Generic (PLEG): container finished" podID="0c4cc19c-c819-4da9-b3ab-be9c7e3872dc" containerID="efd64b8dd44d5ae48d2a9d15de7f92a173a4d73c48eea146e5f1fd3fbbde86ec" exitCode=0 Nov 22 08:00:42 crc kubenswrapper[4853]: I1122 08:00:42.071799 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7qr5d" event={"ID":"0c4cc19c-c819-4da9-b3ab-be9c7e3872dc","Type":"ContainerDied","Data":"efd64b8dd44d5ae48d2a9d15de7f92a173a4d73c48eea146e5f1fd3fbbde86ec"} Nov 22 08:00:44 crc kubenswrapper[4853]: I1122 08:00:44.102533 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7qr5d" event={"ID":"0c4cc19c-c819-4da9-b3ab-be9c7e3872dc","Type":"ContainerStarted","Data":"204d18efe9e7db9b19227dbfe4c49edc72de6bb1e770fdb4ec4752e314f67b71"} Nov 22 08:00:44 crc kubenswrapper[4853]: I1122 08:00:44.128520 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-7qr5d" podStartSLOduration=4.390230325 podStartE2EDuration="10.12847381s" podCreationTimestamp="2025-11-22 08:00:34 +0000 UTC" firstStartedPulling="2025-11-22 08:00:37.001096643 +0000 UTC m=+3035.841719269" lastFinishedPulling="2025-11-22 08:00:42.739340128 +0000 UTC m=+3041.579962754" observedRunningTime="2025-11-22 08:00:44.128003777 +0000 UTC m=+3042.968626413" watchObservedRunningTime="2025-11-22 08:00:44.12847381 +0000 UTC m=+3042.969096436" Nov 22 08:00:45 crc kubenswrapper[4853]: I1122 08:00:45.195510 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-marketplace/certified-operators-7qr5d" Nov 22 08:00:45 crc kubenswrapper[4853]: I1122 08:00:45.195942 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-7qr5d" Nov 22 08:00:45 crc kubenswrapper[4853]: I1122 08:00:45.245505 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-7qr5d" Nov 22 08:00:55 crc kubenswrapper[4853]: I1122 08:00:55.258442 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-7qr5d" Nov 22 08:00:55 crc kubenswrapper[4853]: I1122 08:00:55.327459 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7qr5d"] Nov 22 08:00:56 crc kubenswrapper[4853]: I1122 08:00:56.250909 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-7qr5d" podUID="0c4cc19c-c819-4da9-b3ab-be9c7e3872dc" containerName="registry-server" containerID="cri-o://204d18efe9e7db9b19227dbfe4c49edc72de6bb1e770fdb4ec4752e314f67b71" gracePeriod=2 Nov 22 08:00:57 crc kubenswrapper[4853]: I1122 08:00:57.273801 4853 generic.go:334] "Generic (PLEG): container finished" podID="0c4cc19c-c819-4da9-b3ab-be9c7e3872dc" containerID="204d18efe9e7db9b19227dbfe4c49edc72de6bb1e770fdb4ec4752e314f67b71" exitCode=0 Nov 22 08:00:57 crc kubenswrapper[4853]: I1122 08:00:57.273924 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7qr5d" event={"ID":"0c4cc19c-c819-4da9-b3ab-be9c7e3872dc","Type":"ContainerDied","Data":"204d18efe9e7db9b19227dbfe4c49edc72de6bb1e770fdb4ec4752e314f67b71"} Nov 22 08:00:57 crc kubenswrapper[4853]: I1122 08:00:57.665647 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7qr5d" Nov 22 08:00:57 crc kubenswrapper[4853]: I1122 08:00:57.791576 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c4cc19c-c819-4da9-b3ab-be9c7e3872dc-catalog-content\") pod \"0c4cc19c-c819-4da9-b3ab-be9c7e3872dc\" (UID: \"0c4cc19c-c819-4da9-b3ab-be9c7e3872dc\") " Nov 22 08:00:57 crc kubenswrapper[4853]: I1122 08:00:57.792030 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c4cc19c-c819-4da9-b3ab-be9c7e3872dc-utilities\") pod \"0c4cc19c-c819-4da9-b3ab-be9c7e3872dc\" (UID: \"0c4cc19c-c819-4da9-b3ab-be9c7e3872dc\") " Nov 22 08:00:57 crc kubenswrapper[4853]: I1122 08:00:57.792106 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5wf7z\" (UniqueName: \"kubernetes.io/projected/0c4cc19c-c819-4da9-b3ab-be9c7e3872dc-kube-api-access-5wf7z\") pod \"0c4cc19c-c819-4da9-b3ab-be9c7e3872dc\" (UID: \"0c4cc19c-c819-4da9-b3ab-be9c7e3872dc\") " Nov 22 08:00:57 crc kubenswrapper[4853]: I1122 08:00:57.793461 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0c4cc19c-c819-4da9-b3ab-be9c7e3872dc-utilities" (OuterVolumeSpecName: "utilities") pod "0c4cc19c-c819-4da9-b3ab-be9c7e3872dc" (UID: "0c4cc19c-c819-4da9-b3ab-be9c7e3872dc"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 08:00:57 crc kubenswrapper[4853]: I1122 08:00:57.800233 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c4cc19c-c819-4da9-b3ab-be9c7e3872dc-kube-api-access-5wf7z" (OuterVolumeSpecName: "kube-api-access-5wf7z") pod "0c4cc19c-c819-4da9-b3ab-be9c7e3872dc" (UID: "0c4cc19c-c819-4da9-b3ab-be9c7e3872dc"). InnerVolumeSpecName "kube-api-access-5wf7z". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 08:00:57 crc kubenswrapper[4853]: I1122 08:00:57.849819 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0c4cc19c-c819-4da9-b3ab-be9c7e3872dc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0c4cc19c-c819-4da9-b3ab-be9c7e3872dc" (UID: "0c4cc19c-c819-4da9-b3ab-be9c7e3872dc"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 08:00:57 crc kubenswrapper[4853]: I1122 08:00:57.896817 4853 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c4cc19c-c819-4da9-b3ab-be9c7e3872dc-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 08:00:57 crc kubenswrapper[4853]: I1122 08:00:57.896867 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5wf7z\" (UniqueName: \"kubernetes.io/projected/0c4cc19c-c819-4da9-b3ab-be9c7e3872dc-kube-api-access-5wf7z\") on node \"crc\" DevicePath \"\"" Nov 22 08:00:57 crc kubenswrapper[4853]: I1122 08:00:57.896901 4853 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c4cc19c-c819-4da9-b3ab-be9c7e3872dc-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 08:00:58 crc kubenswrapper[4853]: I1122 08:00:58.288016 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7qr5d" event={"ID":"0c4cc19c-c819-4da9-b3ab-be9c7e3872dc","Type":"ContainerDied","Data":"0248f908807eca82176a6832097654f754aef87b2a056ab64323f581ec711643"} Nov 22 08:00:58 crc kubenswrapper[4853]: I1122 08:00:58.288391 4853 scope.go:117] "RemoveContainer" containerID="204d18efe9e7db9b19227dbfe4c49edc72de6bb1e770fdb4ec4752e314f67b71" Nov 22 08:00:58 crc kubenswrapper[4853]: I1122 08:00:58.288179 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7qr5d" Nov 22 08:00:58 crc kubenswrapper[4853]: I1122 08:00:58.315720 4853 scope.go:117] "RemoveContainer" containerID="efd64b8dd44d5ae48d2a9d15de7f92a173a4d73c48eea146e5f1fd3fbbde86ec" Nov 22 08:00:58 crc kubenswrapper[4853]: I1122 08:00:58.339098 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7qr5d"] Nov 22 08:00:58 crc kubenswrapper[4853]: I1122 08:00:58.349005 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-7qr5d"] Nov 22 08:00:58 crc kubenswrapper[4853]: I1122 08:00:58.359779 4853 scope.go:117] "RemoveContainer" containerID="8067a5221a2eb31ca55ba599c6579f3643e2bc22a9b34a3e99c421c5cd79a46e" Nov 22 08:00:59 crc kubenswrapper[4853]: I1122 08:00:59.769473 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0c4cc19c-c819-4da9-b3ab-be9c7e3872dc" path="/var/lib/kubelet/pods/0c4cc19c-c819-4da9-b3ab-be9c7e3872dc/volumes" Nov 22 08:01:00 crc kubenswrapper[4853]: I1122 08:01:00.151488 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29396641-szmsx"] Nov 22 08:01:00 crc kubenswrapper[4853]: E1122 08:01:00.152151 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c4cc19c-c819-4da9-b3ab-be9c7e3872dc" containerName="registry-server" Nov 22 08:01:00 crc kubenswrapper[4853]: I1122 08:01:00.152184 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c4cc19c-c819-4da9-b3ab-be9c7e3872dc" containerName="registry-server" Nov 22 08:01:00 crc kubenswrapper[4853]: E1122 08:01:00.152210 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c4cc19c-c819-4da9-b3ab-be9c7e3872dc" containerName="extract-content" Nov 22 08:01:00 crc kubenswrapper[4853]: I1122 08:01:00.152219 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c4cc19c-c819-4da9-b3ab-be9c7e3872dc" containerName="extract-content" Nov 22 08:01:00 crc kubenswrapper[4853]: E1122 08:01:00.152239 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c4cc19c-c819-4da9-b3ab-be9c7e3872dc" containerName="extract-utilities" Nov 22 08:01:00 crc kubenswrapper[4853]: I1122 08:01:00.152247 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c4cc19c-c819-4da9-b3ab-be9c7e3872dc" containerName="extract-utilities" Nov 22 08:01:00 crc kubenswrapper[4853]: I1122 08:01:00.152555 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c4cc19c-c819-4da9-b3ab-be9c7e3872dc" containerName="registry-server" Nov 22 08:01:00 crc kubenswrapper[4853]: I1122 08:01:00.153565 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29396641-szmsx" Nov 22 08:01:00 crc kubenswrapper[4853]: I1122 08:01:00.176240 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29396641-szmsx"] Nov 22 08:01:00 crc kubenswrapper[4853]: I1122 08:01:00.254384 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7nsks\" (UniqueName: \"kubernetes.io/projected/12bcd8e0-a04b-49b7-a158-46e8da15bc48-kube-api-access-7nsks\") pod \"keystone-cron-29396641-szmsx\" (UID: \"12bcd8e0-a04b-49b7-a158-46e8da15bc48\") " pod="openstack/keystone-cron-29396641-szmsx" Nov 22 08:01:00 crc kubenswrapper[4853]: I1122 08:01:00.254799 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/12bcd8e0-a04b-49b7-a158-46e8da15bc48-config-data\") pod \"keystone-cron-29396641-szmsx\" (UID: \"12bcd8e0-a04b-49b7-a158-46e8da15bc48\") " pod="openstack/keystone-cron-29396641-szmsx" Nov 22 08:01:00 crc kubenswrapper[4853]: I1122 08:01:00.254872 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/12bcd8e0-a04b-49b7-a158-46e8da15bc48-fernet-keys\") pod \"keystone-cron-29396641-szmsx\" (UID: \"12bcd8e0-a04b-49b7-a158-46e8da15bc48\") " pod="openstack/keystone-cron-29396641-szmsx" Nov 22 08:01:00 crc kubenswrapper[4853]: I1122 08:01:00.254975 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12bcd8e0-a04b-49b7-a158-46e8da15bc48-combined-ca-bundle\") pod \"keystone-cron-29396641-szmsx\" (UID: \"12bcd8e0-a04b-49b7-a158-46e8da15bc48\") " pod="openstack/keystone-cron-29396641-szmsx" Nov 22 08:01:00 crc kubenswrapper[4853]: I1122 08:01:00.357252 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7nsks\" (UniqueName: \"kubernetes.io/projected/12bcd8e0-a04b-49b7-a158-46e8da15bc48-kube-api-access-7nsks\") pod \"keystone-cron-29396641-szmsx\" (UID: \"12bcd8e0-a04b-49b7-a158-46e8da15bc48\") " pod="openstack/keystone-cron-29396641-szmsx" Nov 22 08:01:00 crc kubenswrapper[4853]: I1122 08:01:00.357372 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/12bcd8e0-a04b-49b7-a158-46e8da15bc48-config-data\") pod \"keystone-cron-29396641-szmsx\" (UID: \"12bcd8e0-a04b-49b7-a158-46e8da15bc48\") " pod="openstack/keystone-cron-29396641-szmsx" Nov 22 08:01:00 crc kubenswrapper[4853]: I1122 08:01:00.357407 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/12bcd8e0-a04b-49b7-a158-46e8da15bc48-fernet-keys\") pod \"keystone-cron-29396641-szmsx\" (UID: \"12bcd8e0-a04b-49b7-a158-46e8da15bc48\") " pod="openstack/keystone-cron-29396641-szmsx" Nov 22 08:01:00 crc kubenswrapper[4853]: I1122 08:01:00.357445 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12bcd8e0-a04b-49b7-a158-46e8da15bc48-combined-ca-bundle\") pod \"keystone-cron-29396641-szmsx\" (UID: \"12bcd8e0-a04b-49b7-a158-46e8da15bc48\") " pod="openstack/keystone-cron-29396641-szmsx" Nov 22 08:01:00 crc kubenswrapper[4853]: I1122 08:01:00.363513 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12bcd8e0-a04b-49b7-a158-46e8da15bc48-combined-ca-bundle\") pod \"keystone-cron-29396641-szmsx\" (UID: \"12bcd8e0-a04b-49b7-a158-46e8da15bc48\") " pod="openstack/keystone-cron-29396641-szmsx" Nov 22 08:01:00 crc kubenswrapper[4853]: I1122 08:01:00.364148 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/12bcd8e0-a04b-49b7-a158-46e8da15bc48-fernet-keys\") pod \"keystone-cron-29396641-szmsx\" (UID: \"12bcd8e0-a04b-49b7-a158-46e8da15bc48\") " pod="openstack/keystone-cron-29396641-szmsx" Nov 22 08:01:00 crc kubenswrapper[4853]: I1122 08:01:00.370942 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/12bcd8e0-a04b-49b7-a158-46e8da15bc48-config-data\") pod \"keystone-cron-29396641-szmsx\" (UID: \"12bcd8e0-a04b-49b7-a158-46e8da15bc48\") " pod="openstack/keystone-cron-29396641-szmsx" Nov 22 08:01:00 crc kubenswrapper[4853]: I1122 08:01:00.375677 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7nsks\" (UniqueName: \"kubernetes.io/projected/12bcd8e0-a04b-49b7-a158-46e8da15bc48-kube-api-access-7nsks\") pod \"keystone-cron-29396641-szmsx\" (UID: \"12bcd8e0-a04b-49b7-a158-46e8da15bc48\") " pod="openstack/keystone-cron-29396641-szmsx" Nov 22 08:01:00 crc kubenswrapper[4853]: I1122 08:01:00.497098 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29396641-szmsx" Nov 22 08:01:01 crc kubenswrapper[4853]: I1122 08:01:01.095664 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29396641-szmsx"] Nov 22 08:01:01 crc kubenswrapper[4853]: I1122 08:01:01.325724 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29396641-szmsx" event={"ID":"12bcd8e0-a04b-49b7-a158-46e8da15bc48","Type":"ContainerStarted","Data":"64a1ea505eb721ba4c84a2e5133700334425ac153c8f2223ba72529eebcac35e"} Nov 22 08:01:02 crc kubenswrapper[4853]: I1122 08:01:02.134732 4853 scope.go:117] "RemoveContainer" containerID="88e958cfcf7fe586bd93929691c7dd38d777f4d5878426723e44157c988b40e7" Nov 22 08:01:02 crc kubenswrapper[4853]: I1122 08:01:02.336523 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29396641-szmsx" event={"ID":"12bcd8e0-a04b-49b7-a158-46e8da15bc48","Type":"ContainerStarted","Data":"23c684ef1dffff88a123872bfae26d99cdbb2792af714dca17ffe46b7432d64b"} Nov 22 08:01:02 crc kubenswrapper[4853]: I1122 08:01:02.362569 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29396641-szmsx" podStartSLOduration=2.362549371 podStartE2EDuration="2.362549371s" podCreationTimestamp="2025-11-22 08:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 08:01:02.349219081 +0000 UTC m=+3061.189841707" watchObservedRunningTime="2025-11-22 08:01:02.362549371 +0000 UTC m=+3061.203171997" Nov 22 08:01:08 crc kubenswrapper[4853]: I1122 08:01:08.414608 4853 generic.go:334] "Generic (PLEG): container finished" podID="12bcd8e0-a04b-49b7-a158-46e8da15bc48" containerID="23c684ef1dffff88a123872bfae26d99cdbb2792af714dca17ffe46b7432d64b" exitCode=0 Nov 22 08:01:08 crc kubenswrapper[4853]: I1122 08:01:08.414701 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29396641-szmsx" 
event={"ID":"12bcd8e0-a04b-49b7-a158-46e8da15bc48","Type":"ContainerDied","Data":"23c684ef1dffff88a123872bfae26d99cdbb2792af714dca17ffe46b7432d64b"} Nov 22 08:01:09 crc kubenswrapper[4853]: I1122 08:01:09.052794 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-2vkjv"] Nov 22 08:01:09 crc kubenswrapper[4853]: I1122 08:01:09.065021 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-2vkjv"] Nov 22 08:01:09 crc kubenswrapper[4853]: I1122 08:01:09.765165 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="66ddea4d-3125-44f0-8855-75935dc4b640" path="/var/lib/kubelet/pods/66ddea4d-3125-44f0-8855-75935dc4b640/volumes" Nov 22 08:01:09 crc kubenswrapper[4853]: I1122 08:01:09.846183 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29396641-szmsx" Nov 22 08:01:09 crc kubenswrapper[4853]: I1122 08:01:09.933273 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7nsks\" (UniqueName: \"kubernetes.io/projected/12bcd8e0-a04b-49b7-a158-46e8da15bc48-kube-api-access-7nsks\") pod \"12bcd8e0-a04b-49b7-a158-46e8da15bc48\" (UID: \"12bcd8e0-a04b-49b7-a158-46e8da15bc48\") " Nov 22 08:01:09 crc kubenswrapper[4853]: I1122 08:01:09.933354 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12bcd8e0-a04b-49b7-a158-46e8da15bc48-combined-ca-bundle\") pod \"12bcd8e0-a04b-49b7-a158-46e8da15bc48\" (UID: \"12bcd8e0-a04b-49b7-a158-46e8da15bc48\") " Nov 22 08:01:09 crc kubenswrapper[4853]: I1122 08:01:09.933496 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/12bcd8e0-a04b-49b7-a158-46e8da15bc48-config-data\") pod \"12bcd8e0-a04b-49b7-a158-46e8da15bc48\" (UID: \"12bcd8e0-a04b-49b7-a158-46e8da15bc48\") " Nov 22 08:01:09 crc kubenswrapper[4853]: I1122 08:01:09.933814 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/12bcd8e0-a04b-49b7-a158-46e8da15bc48-fernet-keys\") pod \"12bcd8e0-a04b-49b7-a158-46e8da15bc48\" (UID: \"12bcd8e0-a04b-49b7-a158-46e8da15bc48\") " Nov 22 08:01:09 crc kubenswrapper[4853]: I1122 08:01:09.941600 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/12bcd8e0-a04b-49b7-a158-46e8da15bc48-kube-api-access-7nsks" (OuterVolumeSpecName: "kube-api-access-7nsks") pod "12bcd8e0-a04b-49b7-a158-46e8da15bc48" (UID: "12bcd8e0-a04b-49b7-a158-46e8da15bc48"). InnerVolumeSpecName "kube-api-access-7nsks". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 08:01:09 crc kubenswrapper[4853]: I1122 08:01:09.941732 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/12bcd8e0-a04b-49b7-a158-46e8da15bc48-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "12bcd8e0-a04b-49b7-a158-46e8da15bc48" (UID: "12bcd8e0-a04b-49b7-a158-46e8da15bc48"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:01:09 crc kubenswrapper[4853]: I1122 08:01:09.969164 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/12bcd8e0-a04b-49b7-a158-46e8da15bc48-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "12bcd8e0-a04b-49b7-a158-46e8da15bc48" (UID: "12bcd8e0-a04b-49b7-a158-46e8da15bc48"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:01:10 crc kubenswrapper[4853]: I1122 08:01:10.015678 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/12bcd8e0-a04b-49b7-a158-46e8da15bc48-config-data" (OuterVolumeSpecName: "config-data") pod "12bcd8e0-a04b-49b7-a158-46e8da15bc48" (UID: "12bcd8e0-a04b-49b7-a158-46e8da15bc48"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:01:10 crc kubenswrapper[4853]: I1122 08:01:10.038274 4853 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/12bcd8e0-a04b-49b7-a158-46e8da15bc48-fernet-keys\") on node \"crc\" DevicePath \"\"" Nov 22 08:01:10 crc kubenswrapper[4853]: I1122 08:01:10.038526 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7nsks\" (UniqueName: \"kubernetes.io/projected/12bcd8e0-a04b-49b7-a158-46e8da15bc48-kube-api-access-7nsks\") on node \"crc\" DevicePath \"\"" Nov 22 08:01:10 crc kubenswrapper[4853]: I1122 08:01:10.038545 4853 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12bcd8e0-a04b-49b7-a158-46e8da15bc48-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 08:01:10 crc kubenswrapper[4853]: I1122 08:01:10.038558 4853 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/12bcd8e0-a04b-49b7-a158-46e8da15bc48-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 08:01:10 crc kubenswrapper[4853]: I1122 08:01:10.441359 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29396641-szmsx" event={"ID":"12bcd8e0-a04b-49b7-a158-46e8da15bc48","Type":"ContainerDied","Data":"64a1ea505eb721ba4c84a2e5133700334425ac153c8f2223ba72529eebcac35e"} Nov 22 08:01:10 crc kubenswrapper[4853]: I1122 08:01:10.441690 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="64a1ea505eb721ba4c84a2e5133700334425ac153c8f2223ba72529eebcac35e" Nov 22 08:01:10 crc kubenswrapper[4853]: I1122 08:01:10.441495 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29396641-szmsx" Nov 22 08:01:41 crc kubenswrapper[4853]: I1122 08:01:41.830933 4853 generic.go:334] "Generic (PLEG): container finished" podID="eb7c2a78-a864-4f26-ae10-e2f64ff95b0d" containerID="6651391e7cc6608594307b83dfee964c31fbe950a4f8111143b5e691d447bfd9" exitCode=0 Nov 22 08:01:41 crc kubenswrapper[4853]: I1122 08:01:41.831013 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-gspf4" event={"ID":"eb7c2a78-a864-4f26-ae10-e2f64ff95b0d","Type":"ContainerDied","Data":"6651391e7cc6608594307b83dfee964c31fbe950a4f8111143b5e691d447bfd9"} Nov 22 08:01:43 crc kubenswrapper[4853]: I1122 08:01:43.288056 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-gspf4" Nov 22 08:01:43 crc kubenswrapper[4853]: I1122 08:01:43.436415 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-svrcr\" (UniqueName: \"kubernetes.io/projected/eb7c2a78-a864-4f26-ae10-e2f64ff95b0d-kube-api-access-svrcr\") pod \"eb7c2a78-a864-4f26-ae10-e2f64ff95b0d\" (UID: \"eb7c2a78-a864-4f26-ae10-e2f64ff95b0d\") " Nov 22 08:01:43 crc kubenswrapper[4853]: I1122 08:01:43.436655 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/eb7c2a78-a864-4f26-ae10-e2f64ff95b0d-inventory\") pod \"eb7c2a78-a864-4f26-ae10-e2f64ff95b0d\" (UID: \"eb7c2a78-a864-4f26-ae10-e2f64ff95b0d\") " Nov 22 08:01:43 crc kubenswrapper[4853]: I1122 08:01:43.436900 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/eb7c2a78-a864-4f26-ae10-e2f64ff95b0d-ssh-key\") pod \"eb7c2a78-a864-4f26-ae10-e2f64ff95b0d\" (UID: \"eb7c2a78-a864-4f26-ae10-e2f64ff95b0d\") " Nov 22 08:01:43 crc kubenswrapper[4853]: I1122 08:01:43.452065 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb7c2a78-a864-4f26-ae10-e2f64ff95b0d-kube-api-access-svrcr" (OuterVolumeSpecName: "kube-api-access-svrcr") pod "eb7c2a78-a864-4f26-ae10-e2f64ff95b0d" (UID: "eb7c2a78-a864-4f26-ae10-e2f64ff95b0d"). InnerVolumeSpecName "kube-api-access-svrcr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 08:01:43 crc kubenswrapper[4853]: I1122 08:01:43.470506 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb7c2a78-a864-4f26-ae10-e2f64ff95b0d-inventory" (OuterVolumeSpecName: "inventory") pod "eb7c2a78-a864-4f26-ae10-e2f64ff95b0d" (UID: "eb7c2a78-a864-4f26-ae10-e2f64ff95b0d"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:01:43 crc kubenswrapper[4853]: I1122 08:01:43.490598 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb7c2a78-a864-4f26-ae10-e2f64ff95b0d-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "eb7c2a78-a864-4f26-ae10-e2f64ff95b0d" (UID: "eb7c2a78-a864-4f26-ae10-e2f64ff95b0d"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:01:43 crc kubenswrapper[4853]: I1122 08:01:43.541486 4853 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/eb7c2a78-a864-4f26-ae10-e2f64ff95b0d-inventory\") on node \"crc\" DevicePath \"\"" Nov 22 08:01:43 crc kubenswrapper[4853]: I1122 08:01:43.541531 4853 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/eb7c2a78-a864-4f26-ae10-e2f64ff95b0d-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 22 08:01:43 crc kubenswrapper[4853]: I1122 08:01:43.541545 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-svrcr\" (UniqueName: \"kubernetes.io/projected/eb7c2a78-a864-4f26-ae10-e2f64ff95b0d-kube-api-access-svrcr\") on node \"crc\" DevicePath \"\"" Nov 22 08:01:43 crc kubenswrapper[4853]: I1122 08:01:43.858587 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-gspf4" event={"ID":"eb7c2a78-a864-4f26-ae10-e2f64ff95b0d","Type":"ContainerDied","Data":"59d83401d7a0df3934f7a8edb93770814a9b5dec549d373eaa9e48db4f1f4819"} Nov 22 08:01:43 crc kubenswrapper[4853]: I1122 08:01:43.858642 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="59d83401d7a0df3934f7a8edb93770814a9b5dec549d373eaa9e48db4f1f4819" Nov 22 08:01:43 crc kubenswrapper[4853]: I1122 08:01:43.858707 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-gspf4" Nov 22 08:01:43 crc kubenswrapper[4853]: I1122 08:01:43.960000 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-b44f2"] Nov 22 08:01:43 crc kubenswrapper[4853]: E1122 08:01:43.960500 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb7c2a78-a864-4f26-ae10-e2f64ff95b0d" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Nov 22 08:01:43 crc kubenswrapper[4853]: I1122 08:01:43.960518 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb7c2a78-a864-4f26-ae10-e2f64ff95b0d" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Nov 22 08:01:43 crc kubenswrapper[4853]: E1122 08:01:43.960577 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12bcd8e0-a04b-49b7-a158-46e8da15bc48" containerName="keystone-cron" Nov 22 08:01:43 crc kubenswrapper[4853]: I1122 08:01:43.960585 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="12bcd8e0-a04b-49b7-a158-46e8da15bc48" containerName="keystone-cron" Nov 22 08:01:43 crc kubenswrapper[4853]: I1122 08:01:43.960847 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb7c2a78-a864-4f26-ae10-e2f64ff95b0d" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Nov 22 08:01:43 crc kubenswrapper[4853]: I1122 08:01:43.960899 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="12bcd8e0-a04b-49b7-a158-46e8da15bc48" containerName="keystone-cron" Nov 22 08:01:43 crc kubenswrapper[4853]: I1122 08:01:43.969099 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-b44f2" Nov 22 08:01:43 crc kubenswrapper[4853]: I1122 08:01:43.972251 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-b44f2"] Nov 22 08:01:43 crc kubenswrapper[4853]: I1122 08:01:43.982854 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 22 08:01:43 crc kubenswrapper[4853]: I1122 08:01:43.983368 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 22 08:01:43 crc kubenswrapper[4853]: I1122 08:01:43.983529 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-km5tw" Nov 22 08:01:43 crc kubenswrapper[4853]: I1122 08:01:43.986456 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 22 08:01:44 crc kubenswrapper[4853]: I1122 08:01:44.060767 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/35333b8f-d62c-4e4d-b8c1-1f0add8b1ec6-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-b44f2\" (UID: \"35333b8f-d62c-4e4d-b8c1-1f0add8b1ec6\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-b44f2" Nov 22 08:01:44 crc kubenswrapper[4853]: I1122 08:01:44.060913 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/35333b8f-d62c-4e4d-b8c1-1f0add8b1ec6-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-b44f2\" (UID: \"35333b8f-d62c-4e4d-b8c1-1f0add8b1ec6\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-b44f2" Nov 22 08:01:44 crc kubenswrapper[4853]: I1122 08:01:44.060935 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4pd8l\" (UniqueName: \"kubernetes.io/projected/35333b8f-d62c-4e4d-b8c1-1f0add8b1ec6-kube-api-access-4pd8l\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-b44f2\" (UID: \"35333b8f-d62c-4e4d-b8c1-1f0add8b1ec6\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-b44f2" Nov 22 08:01:44 crc kubenswrapper[4853]: I1122 08:01:44.164257 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/35333b8f-d62c-4e4d-b8c1-1f0add8b1ec6-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-b44f2\" (UID: \"35333b8f-d62c-4e4d-b8c1-1f0add8b1ec6\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-b44f2" Nov 22 08:01:44 crc kubenswrapper[4853]: I1122 08:01:44.164475 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/35333b8f-d62c-4e4d-b8c1-1f0add8b1ec6-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-b44f2\" (UID: \"35333b8f-d62c-4e4d-b8c1-1f0add8b1ec6\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-b44f2" Nov 22 08:01:44 crc kubenswrapper[4853]: I1122 08:01:44.165121 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4pd8l\" (UniqueName: \"kubernetes.io/projected/35333b8f-d62c-4e4d-b8c1-1f0add8b1ec6-kube-api-access-4pd8l\") 
pod \"configure-network-edpm-deployment-openstack-edpm-ipam-b44f2\" (UID: \"35333b8f-d62c-4e4d-b8c1-1f0add8b1ec6\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-b44f2" Nov 22 08:01:44 crc kubenswrapper[4853]: I1122 08:01:44.170772 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/35333b8f-d62c-4e4d-b8c1-1f0add8b1ec6-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-b44f2\" (UID: \"35333b8f-d62c-4e4d-b8c1-1f0add8b1ec6\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-b44f2" Nov 22 08:01:44 crc kubenswrapper[4853]: I1122 08:01:44.171668 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/35333b8f-d62c-4e4d-b8c1-1f0add8b1ec6-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-b44f2\" (UID: \"35333b8f-d62c-4e4d-b8c1-1f0add8b1ec6\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-b44f2" Nov 22 08:01:44 crc kubenswrapper[4853]: I1122 08:01:44.187050 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4pd8l\" (UniqueName: \"kubernetes.io/projected/35333b8f-d62c-4e4d-b8c1-1f0add8b1ec6-kube-api-access-4pd8l\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-b44f2\" (UID: \"35333b8f-d62c-4e4d-b8c1-1f0add8b1ec6\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-b44f2" Nov 22 08:01:44 crc kubenswrapper[4853]: I1122 08:01:44.314904 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-b44f2" Nov 22 08:01:44 crc kubenswrapper[4853]: I1122 08:01:44.848902 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-b44f2"] Nov 22 08:01:44 crc kubenswrapper[4853]: I1122 08:01:44.851229 4853 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 22 08:01:44 crc kubenswrapper[4853]: I1122 08:01:44.869419 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-b44f2" event={"ID":"35333b8f-d62c-4e4d-b8c1-1f0add8b1ec6","Type":"ContainerStarted","Data":"ca7503dad82da49c516df7a714e9ceb91e79b21e26cd5b47de9c894fb05b6cf4"} Nov 22 08:01:46 crc kubenswrapper[4853]: I1122 08:01:46.892360 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-b44f2" event={"ID":"35333b8f-d62c-4e4d-b8c1-1f0add8b1ec6","Type":"ContainerStarted","Data":"83148ee62afce37094f7886c9e09a11133b840d0a4ebf73eb0576df129d6d21b"} Nov 22 08:02:02 crc kubenswrapper[4853]: I1122 08:02:02.240991 4853 scope.go:117] "RemoveContainer" containerID="7c1a164720513825e952fb82423aca4793f6511c28309701ca502012b05992cb" Nov 22 08:02:31 crc kubenswrapper[4853]: I1122 08:02:31.298176 4853 patch_prober.go:28] interesting pod/machine-config-daemon-fflvd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 08:02:31 crc kubenswrapper[4853]: I1122 08:02:31.299304 4853 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 08:02:56 crc kubenswrapper[4853]: I1122 08:02:56.045601 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-b44f2" podStartSLOduration=71.717869564 podStartE2EDuration="1m13.045579808s" podCreationTimestamp="2025-11-22 08:01:43 +0000 UTC" firstStartedPulling="2025-11-22 08:01:44.851006856 +0000 UTC m=+3103.691629482" lastFinishedPulling="2025-11-22 08:01:46.1787171 +0000 UTC m=+3105.019339726" observedRunningTime="2025-11-22 08:01:46.917009717 +0000 UTC m=+3105.757632333" watchObservedRunningTime="2025-11-22 08:02:56.045579808 +0000 UTC m=+3174.886202434" Nov 22 08:02:56 crc kubenswrapper[4853]: I1122 08:02:56.049441 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-sync-sbcxc"] Nov 22 08:02:56 crc kubenswrapper[4853]: I1122 08:02:56.059082 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-sync-sbcxc"] Nov 22 08:02:57 crc kubenswrapper[4853]: I1122 08:02:57.774525 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="713a48af-8f99-42ce-ba64-25dd0645ef66" path="/var/lib/kubelet/pods/713a48af-8f99-42ce-ba64-25dd0645ef66/volumes" Nov 22 08:02:59 crc kubenswrapper[4853]: I1122 08:02:59.703888 4853 generic.go:334] "Generic (PLEG): container finished" podID="35333b8f-d62c-4e4d-b8c1-1f0add8b1ec6" containerID="83148ee62afce37094f7886c9e09a11133b840d0a4ebf73eb0576df129d6d21b" exitCode=0 Nov 22 08:02:59 crc kubenswrapper[4853]: I1122 08:02:59.703983 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-b44f2" event={"ID":"35333b8f-d62c-4e4d-b8c1-1f0add8b1ec6","Type":"ContainerDied","Data":"83148ee62afce37094f7886c9e09a11133b840d0a4ebf73eb0576df129d6d21b"} Nov 22 08:03:01 crc kubenswrapper[4853]: I1122 08:03:01.181073 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-b44f2" Nov 22 08:03:01 crc kubenswrapper[4853]: I1122 08:03:01.297804 4853 patch_prober.go:28] interesting pod/machine-config-daemon-fflvd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 08:03:01 crc kubenswrapper[4853]: I1122 08:03:01.297870 4853 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 08:03:01 crc kubenswrapper[4853]: I1122 08:03:01.309671 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/35333b8f-d62c-4e4d-b8c1-1f0add8b1ec6-ssh-key\") pod \"35333b8f-d62c-4e4d-b8c1-1f0add8b1ec6\" (UID: \"35333b8f-d62c-4e4d-b8c1-1f0add8b1ec6\") " Nov 22 08:03:01 crc kubenswrapper[4853]: I1122 08:03:01.309840 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/35333b8f-d62c-4e4d-b8c1-1f0add8b1ec6-inventory\") pod \"35333b8f-d62c-4e4d-b8c1-1f0add8b1ec6\" (UID: \"35333b8f-d62c-4e4d-b8c1-1f0add8b1ec6\") " Nov 22 08:03:01 crc kubenswrapper[4853]: I1122 08:03:01.309952 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4pd8l\" (UniqueName: \"kubernetes.io/projected/35333b8f-d62c-4e4d-b8c1-1f0add8b1ec6-kube-api-access-4pd8l\") pod \"35333b8f-d62c-4e4d-b8c1-1f0add8b1ec6\" (UID: \"35333b8f-d62c-4e4d-b8c1-1f0add8b1ec6\") " Nov 22 08:03:01 crc kubenswrapper[4853]: I1122 08:03:01.320126 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/35333b8f-d62c-4e4d-b8c1-1f0add8b1ec6-kube-api-access-4pd8l" (OuterVolumeSpecName: "kube-api-access-4pd8l") pod "35333b8f-d62c-4e4d-b8c1-1f0add8b1ec6" (UID: "35333b8f-d62c-4e4d-b8c1-1f0add8b1ec6"). InnerVolumeSpecName "kube-api-access-4pd8l". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 08:03:01 crc kubenswrapper[4853]: E1122 08:03:01.344760 4853 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/35333b8f-d62c-4e4d-b8c1-1f0add8b1ec6-inventory podName:35333b8f-d62c-4e4d-b8c1-1f0add8b1ec6 nodeName:}" failed. No retries permitted until 2025-11-22 08:03:01.844704386 +0000 UTC m=+3180.685327002 (durationBeforeRetry 500ms). Error: error cleaning subPath mounts for volume "inventory" (UniqueName: "kubernetes.io/secret/35333b8f-d62c-4e4d-b8c1-1f0add8b1ec6-inventory") pod "35333b8f-d62c-4e4d-b8c1-1f0add8b1ec6" (UID: "35333b8f-d62c-4e4d-b8c1-1f0add8b1ec6") : error deleting /var/lib/kubelet/pods/35333b8f-d62c-4e4d-b8c1-1f0add8b1ec6/volume-subpaths: remove /var/lib/kubelet/pods/35333b8f-d62c-4e4d-b8c1-1f0add8b1ec6/volume-subpaths: no such file or directory Nov 22 08:03:01 crc kubenswrapper[4853]: I1122 08:03:01.348201 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35333b8f-d62c-4e4d-b8c1-1f0add8b1ec6-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "35333b8f-d62c-4e4d-b8c1-1f0add8b1ec6" (UID: "35333b8f-d62c-4e4d-b8c1-1f0add8b1ec6"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:03:01 crc kubenswrapper[4853]: I1122 08:03:01.413721 4853 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/35333b8f-d62c-4e4d-b8c1-1f0add8b1ec6-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 22 08:03:01 crc kubenswrapper[4853]: I1122 08:03:01.413766 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4pd8l\" (UniqueName: \"kubernetes.io/projected/35333b8f-d62c-4e4d-b8c1-1f0add8b1ec6-kube-api-access-4pd8l\") on node \"crc\" DevicePath \"\"" Nov 22 08:03:01 crc kubenswrapper[4853]: I1122 08:03:01.724996 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-b44f2" event={"ID":"35333b8f-d62c-4e4d-b8c1-1f0add8b1ec6","Type":"ContainerDied","Data":"ca7503dad82da49c516df7a714e9ceb91e79b21e26cd5b47de9c894fb05b6cf4"} Nov 22 08:03:01 crc kubenswrapper[4853]: I1122 08:03:01.725296 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ca7503dad82da49c516df7a714e9ceb91e79b21e26cd5b47de9c894fb05b6cf4" Nov 22 08:03:01 crc kubenswrapper[4853]: I1122 08:03:01.725087 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-b44f2" Nov 22 08:03:01 crc kubenswrapper[4853]: I1122 08:03:01.841820 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vg8lx"] Nov 22 08:03:01 crc kubenswrapper[4853]: E1122 08:03:01.842402 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35333b8f-d62c-4e4d-b8c1-1f0add8b1ec6" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Nov 22 08:03:01 crc kubenswrapper[4853]: I1122 08:03:01.842426 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="35333b8f-d62c-4e4d-b8c1-1f0add8b1ec6" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Nov 22 08:03:01 crc kubenswrapper[4853]: I1122 08:03:01.842729 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="35333b8f-d62c-4e4d-b8c1-1f0add8b1ec6" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Nov 22 08:03:01 crc kubenswrapper[4853]: I1122 08:03:01.843866 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vg8lx" Nov 22 08:03:01 crc kubenswrapper[4853]: I1122 08:03:01.851278 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vg8lx"] Nov 22 08:03:01 crc kubenswrapper[4853]: I1122 08:03:01.926040 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/35333b8f-d62c-4e4d-b8c1-1f0add8b1ec6-inventory\") pod \"35333b8f-d62c-4e4d-b8c1-1f0add8b1ec6\" (UID: \"35333b8f-d62c-4e4d-b8c1-1f0add8b1ec6\") " Nov 22 08:03:01 crc kubenswrapper[4853]: I1122 08:03:01.926708 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/04ca2a66-41e4-4a7b-8df7-8fcf34adeb8b-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-vg8lx\" (UID: \"04ca2a66-41e4-4a7b-8df7-8fcf34adeb8b\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vg8lx" Nov 22 08:03:01 crc kubenswrapper[4853]: I1122 08:03:01.926816 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/04ca2a66-41e4-4a7b-8df7-8fcf34adeb8b-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-vg8lx\" (UID: \"04ca2a66-41e4-4a7b-8df7-8fcf34adeb8b\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vg8lx" Nov 22 08:03:01 crc kubenswrapper[4853]: I1122 08:03:01.926905 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24bqq\" (UniqueName: \"kubernetes.io/projected/04ca2a66-41e4-4a7b-8df7-8fcf34adeb8b-kube-api-access-24bqq\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-vg8lx\" (UID: \"04ca2a66-41e4-4a7b-8df7-8fcf34adeb8b\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vg8lx" Nov 22 08:03:01 crc kubenswrapper[4853]: I1122 08:03:01.929629 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35333b8f-d62c-4e4d-b8c1-1f0add8b1ec6-inventory" (OuterVolumeSpecName: "inventory") pod "35333b8f-d62c-4e4d-b8c1-1f0add8b1ec6" (UID: "35333b8f-d62c-4e4d-b8c1-1f0add8b1ec6"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:03:02 crc kubenswrapper[4853]: I1122 08:03:02.029500 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/04ca2a66-41e4-4a7b-8df7-8fcf34adeb8b-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-vg8lx\" (UID: \"04ca2a66-41e4-4a7b-8df7-8fcf34adeb8b\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vg8lx" Nov 22 08:03:02 crc kubenswrapper[4853]: I1122 08:03:02.029571 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-24bqq\" (UniqueName: \"kubernetes.io/projected/04ca2a66-41e4-4a7b-8df7-8fcf34adeb8b-kube-api-access-24bqq\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-vg8lx\" (UID: \"04ca2a66-41e4-4a7b-8df7-8fcf34adeb8b\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vg8lx" Nov 22 08:03:02 crc kubenswrapper[4853]: I1122 08:03:02.029821 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/04ca2a66-41e4-4a7b-8df7-8fcf34adeb8b-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-vg8lx\" (UID: \"04ca2a66-41e4-4a7b-8df7-8fcf34adeb8b\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vg8lx" Nov 22 08:03:02 crc kubenswrapper[4853]: I1122 08:03:02.030574 4853 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/35333b8f-d62c-4e4d-b8c1-1f0add8b1ec6-inventory\") on node \"crc\" DevicePath \"\"" Nov 22 08:03:02 crc kubenswrapper[4853]: I1122 08:03:02.033491 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/04ca2a66-41e4-4a7b-8df7-8fcf34adeb8b-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-vg8lx\" (UID: \"04ca2a66-41e4-4a7b-8df7-8fcf34adeb8b\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vg8lx" Nov 22 08:03:02 crc kubenswrapper[4853]: I1122 08:03:02.033865 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/04ca2a66-41e4-4a7b-8df7-8fcf34adeb8b-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-vg8lx\" (UID: \"04ca2a66-41e4-4a7b-8df7-8fcf34adeb8b\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vg8lx" Nov 22 08:03:02 crc kubenswrapper[4853]: I1122 08:03:02.048224 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-24bqq\" (UniqueName: \"kubernetes.io/projected/04ca2a66-41e4-4a7b-8df7-8fcf34adeb8b-kube-api-access-24bqq\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-vg8lx\" (UID: \"04ca2a66-41e4-4a7b-8df7-8fcf34adeb8b\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vg8lx" Nov 22 08:03:02 crc kubenswrapper[4853]: I1122 08:03:02.175157 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vg8lx" Nov 22 08:03:02 crc kubenswrapper[4853]: I1122 08:03:02.309734 4853 scope.go:117] "RemoveContainer" containerID="220bf2c31587e11941cd379228418c10aad5a78d087483097057f4e2beabf8e7" Nov 22 08:03:02 crc kubenswrapper[4853]: I1122 08:03:02.332011 4853 scope.go:117] "RemoveContainer" containerID="fb916ec7786ebd4be526a731fceb782c5e0a6aef7a17c519b0e436df98f4bc38" Nov 22 08:03:02 crc kubenswrapper[4853]: I1122 08:03:02.391343 4853 scope.go:117] "RemoveContainer" containerID="8cf21df1c11c16275c31d8ffdfad399208afd7429ea1d259e91c3b1bbc70cb0d" Nov 22 08:03:02 crc kubenswrapper[4853]: I1122 08:03:02.473241 4853 scope.go:117] "RemoveContainer" containerID="b9a02ec9c71e36b5a5fea60e2b71703db3d0a7cbde2e95361087fd3570d5b017" Nov 22 08:03:02 crc kubenswrapper[4853]: W1122 08:03:02.687497 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod04ca2a66_41e4_4a7b_8df7_8fcf34adeb8b.slice/crio-6cd32664c9c2b51f9c18a429aaa3c9515360b5c23cab643c705fbc467f528e63 WatchSource:0}: Error finding container 6cd32664c9c2b51f9c18a429aaa3c9515360b5c23cab643c705fbc467f528e63: Status 404 returned error can't find the container with id 6cd32664c9c2b51f9c18a429aaa3c9515360b5c23cab643c705fbc467f528e63 Nov 22 08:03:02 crc kubenswrapper[4853]: I1122 08:03:02.691245 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vg8lx"] Nov 22 08:03:02 crc kubenswrapper[4853]: I1122 08:03:02.736545 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vg8lx" event={"ID":"04ca2a66-41e4-4a7b-8df7-8fcf34adeb8b","Type":"ContainerStarted","Data":"6cd32664c9c2b51f9c18a429aaa3c9515360b5c23cab643c705fbc467f528e63"} Nov 22 08:03:03 crc kubenswrapper[4853]: I1122 08:03:03.770629 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vg8lx" event={"ID":"04ca2a66-41e4-4a7b-8df7-8fcf34adeb8b","Type":"ContainerStarted","Data":"574fe8a6ccdeae7c205af5ad61c1a10b690f69b086b95d9b117dcc3e5bf7882e"} Nov 22 08:03:03 crc kubenswrapper[4853]: I1122 08:03:03.788329 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vg8lx" podStartSLOduration=2.144665702 podStartE2EDuration="2.788282872s" podCreationTimestamp="2025-11-22 08:03:01 +0000 UTC" firstStartedPulling="2025-11-22 08:03:02.68986028 +0000 UTC m=+3181.530482906" lastFinishedPulling="2025-11-22 08:03:03.33347745 +0000 UTC m=+3182.174100076" observedRunningTime="2025-11-22 08:03:03.777207512 +0000 UTC m=+3182.617830148" watchObservedRunningTime="2025-11-22 08:03:03.788282872 +0000 UTC m=+3182.628905498" Nov 22 08:03:08 crc kubenswrapper[4853]: I1122 08:03:08.837704 4853 generic.go:334] "Generic (PLEG): container finished" podID="04ca2a66-41e4-4a7b-8df7-8fcf34adeb8b" containerID="574fe8a6ccdeae7c205af5ad61c1a10b690f69b086b95d9b117dcc3e5bf7882e" exitCode=0 Nov 22 08:03:08 crc kubenswrapper[4853]: I1122 08:03:08.838345 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vg8lx" event={"ID":"04ca2a66-41e4-4a7b-8df7-8fcf34adeb8b","Type":"ContainerDied","Data":"574fe8a6ccdeae7c205af5ad61c1a10b690f69b086b95d9b117dcc3e5bf7882e"} Nov 22 08:03:10 crc kubenswrapper[4853]: I1122 
08:03:10.305643 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vg8lx" Nov 22 08:03:10 crc kubenswrapper[4853]: I1122 08:03:10.345943 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/04ca2a66-41e4-4a7b-8df7-8fcf34adeb8b-inventory\") pod \"04ca2a66-41e4-4a7b-8df7-8fcf34adeb8b\" (UID: \"04ca2a66-41e4-4a7b-8df7-8fcf34adeb8b\") " Nov 22 08:03:10 crc kubenswrapper[4853]: I1122 08:03:10.346296 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-24bqq\" (UniqueName: \"kubernetes.io/projected/04ca2a66-41e4-4a7b-8df7-8fcf34adeb8b-kube-api-access-24bqq\") pod \"04ca2a66-41e4-4a7b-8df7-8fcf34adeb8b\" (UID: \"04ca2a66-41e4-4a7b-8df7-8fcf34adeb8b\") " Nov 22 08:03:10 crc kubenswrapper[4853]: I1122 08:03:10.346345 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/04ca2a66-41e4-4a7b-8df7-8fcf34adeb8b-ssh-key\") pod \"04ca2a66-41e4-4a7b-8df7-8fcf34adeb8b\" (UID: \"04ca2a66-41e4-4a7b-8df7-8fcf34adeb8b\") " Nov 22 08:03:10 crc kubenswrapper[4853]: I1122 08:03:10.361081 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04ca2a66-41e4-4a7b-8df7-8fcf34adeb8b-kube-api-access-24bqq" (OuterVolumeSpecName: "kube-api-access-24bqq") pod "04ca2a66-41e4-4a7b-8df7-8fcf34adeb8b" (UID: "04ca2a66-41e4-4a7b-8df7-8fcf34adeb8b"). InnerVolumeSpecName "kube-api-access-24bqq". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 08:03:10 crc kubenswrapper[4853]: I1122 08:03:10.384080 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04ca2a66-41e4-4a7b-8df7-8fcf34adeb8b-inventory" (OuterVolumeSpecName: "inventory") pod "04ca2a66-41e4-4a7b-8df7-8fcf34adeb8b" (UID: "04ca2a66-41e4-4a7b-8df7-8fcf34adeb8b"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:03:10 crc kubenswrapper[4853]: I1122 08:03:10.385816 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04ca2a66-41e4-4a7b-8df7-8fcf34adeb8b-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "04ca2a66-41e4-4a7b-8df7-8fcf34adeb8b" (UID: "04ca2a66-41e4-4a7b-8df7-8fcf34adeb8b"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:03:10 crc kubenswrapper[4853]: I1122 08:03:10.448652 4853 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/04ca2a66-41e4-4a7b-8df7-8fcf34adeb8b-inventory\") on node \"crc\" DevicePath \"\"" Nov 22 08:03:10 crc kubenswrapper[4853]: I1122 08:03:10.448688 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-24bqq\" (UniqueName: \"kubernetes.io/projected/04ca2a66-41e4-4a7b-8df7-8fcf34adeb8b-kube-api-access-24bqq\") on node \"crc\" DevicePath \"\"" Nov 22 08:03:10 crc kubenswrapper[4853]: I1122 08:03:10.448698 4853 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/04ca2a66-41e4-4a7b-8df7-8fcf34adeb8b-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 22 08:03:10 crc kubenswrapper[4853]: I1122 08:03:10.863885 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vg8lx" event={"ID":"04ca2a66-41e4-4a7b-8df7-8fcf34adeb8b","Type":"ContainerDied","Data":"6cd32664c9c2b51f9c18a429aaa3c9515360b5c23cab643c705fbc467f528e63"} Nov 22 08:03:10 crc kubenswrapper[4853]: I1122 08:03:10.863935 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6cd32664c9c2b51f9c18a429aaa3c9515360b5c23cab643c705fbc467f528e63" Nov 22 08:03:10 crc kubenswrapper[4853]: I1122 08:03:10.863984 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-vg8lx" Nov 22 08:03:10 crc kubenswrapper[4853]: I1122 08:03:10.936005 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-6n78r"] Nov 22 08:03:10 crc kubenswrapper[4853]: E1122 08:03:10.936685 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04ca2a66-41e4-4a7b-8df7-8fcf34adeb8b" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Nov 22 08:03:10 crc kubenswrapper[4853]: I1122 08:03:10.936817 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="04ca2a66-41e4-4a7b-8df7-8fcf34adeb8b" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Nov 22 08:03:10 crc kubenswrapper[4853]: I1122 08:03:10.937155 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="04ca2a66-41e4-4a7b-8df7-8fcf34adeb8b" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Nov 22 08:03:10 crc kubenswrapper[4853]: I1122 08:03:10.938122 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-6n78r" Nov 22 08:03:10 crc kubenswrapper[4853]: I1122 08:03:10.940085 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 22 08:03:10 crc kubenswrapper[4853]: I1122 08:03:10.940430 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 22 08:03:10 crc kubenswrapper[4853]: I1122 08:03:10.940524 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-km5tw" Nov 22 08:03:10 crc kubenswrapper[4853]: I1122 08:03:10.944822 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 22 08:03:10 crc kubenswrapper[4853]: I1122 08:03:10.950539 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-6n78r"] Nov 22 08:03:10 crc kubenswrapper[4853]: I1122 08:03:10.961666 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/77023bdf-69ac-4065-b6de-af12e3477fd9-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-6n78r\" (UID: \"77023bdf-69ac-4065-b6de-af12e3477fd9\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-6n78r" Nov 22 08:03:10 crc kubenswrapper[4853]: I1122 08:03:10.961718 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dr4n9\" (UniqueName: \"kubernetes.io/projected/77023bdf-69ac-4065-b6de-af12e3477fd9-kube-api-access-dr4n9\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-6n78r\" (UID: \"77023bdf-69ac-4065-b6de-af12e3477fd9\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-6n78r" Nov 22 08:03:10 crc kubenswrapper[4853]: I1122 08:03:10.962130 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/77023bdf-69ac-4065-b6de-af12e3477fd9-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-6n78r\" (UID: \"77023bdf-69ac-4065-b6de-af12e3477fd9\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-6n78r" Nov 22 08:03:11 crc kubenswrapper[4853]: I1122 08:03:11.065415 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/77023bdf-69ac-4065-b6de-af12e3477fd9-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-6n78r\" (UID: \"77023bdf-69ac-4065-b6de-af12e3477fd9\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-6n78r" Nov 22 08:03:11 crc kubenswrapper[4853]: I1122 08:03:11.065457 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dr4n9\" (UniqueName: \"kubernetes.io/projected/77023bdf-69ac-4065-b6de-af12e3477fd9-kube-api-access-dr4n9\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-6n78r\" (UID: \"77023bdf-69ac-4065-b6de-af12e3477fd9\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-6n78r" Nov 22 08:03:11 crc kubenswrapper[4853]: I1122 08:03:11.065529 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/77023bdf-69ac-4065-b6de-af12e3477fd9-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-6n78r\" (UID: 
\"77023bdf-69ac-4065-b6de-af12e3477fd9\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-6n78r" Nov 22 08:03:11 crc kubenswrapper[4853]: I1122 08:03:11.069145 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/77023bdf-69ac-4065-b6de-af12e3477fd9-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-6n78r\" (UID: \"77023bdf-69ac-4065-b6de-af12e3477fd9\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-6n78r" Nov 22 08:03:11 crc kubenswrapper[4853]: I1122 08:03:11.082210 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/77023bdf-69ac-4065-b6de-af12e3477fd9-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-6n78r\" (UID: \"77023bdf-69ac-4065-b6de-af12e3477fd9\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-6n78r" Nov 22 08:03:11 crc kubenswrapper[4853]: I1122 08:03:11.083254 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dr4n9\" (UniqueName: \"kubernetes.io/projected/77023bdf-69ac-4065-b6de-af12e3477fd9-kube-api-access-dr4n9\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-6n78r\" (UID: \"77023bdf-69ac-4065-b6de-af12e3477fd9\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-6n78r" Nov 22 08:03:11 crc kubenswrapper[4853]: I1122 08:03:11.261257 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-6n78r" Nov 22 08:03:11 crc kubenswrapper[4853]: I1122 08:03:11.793207 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-6n78r"] Nov 22 08:03:11 crc kubenswrapper[4853]: I1122 08:03:11.877989 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-6n78r" event={"ID":"77023bdf-69ac-4065-b6de-af12e3477fd9","Type":"ContainerStarted","Data":"c03967dd2a18694e4001ecd979cdcb3f902af2cfa48e1495b91e7e61e13c0df8"} Nov 22 08:03:13 crc kubenswrapper[4853]: I1122 08:03:13.901471 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-6n78r" event={"ID":"77023bdf-69ac-4065-b6de-af12e3477fd9","Type":"ContainerStarted","Data":"0ece8aabc5b39ba37b80d19e4c7977106061991f6563e6f16ed594e1fb256c6b"} Nov 22 08:03:13 crc kubenswrapper[4853]: I1122 08:03:13.926522 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-6n78r" podStartSLOduration=3.064360123 podStartE2EDuration="3.926499404s" podCreationTimestamp="2025-11-22 08:03:10 +0000 UTC" firstStartedPulling="2025-11-22 08:03:11.801574362 +0000 UTC m=+3190.642196978" lastFinishedPulling="2025-11-22 08:03:12.663713633 +0000 UTC m=+3191.504336259" observedRunningTime="2025-11-22 08:03:13.923322808 +0000 UTC m=+3192.763945434" watchObservedRunningTime="2025-11-22 08:03:13.926499404 +0000 UTC m=+3192.767122030" Nov 22 08:03:31 crc kubenswrapper[4853]: I1122 08:03:31.297232 4853 patch_prober.go:28] interesting pod/machine-config-daemon-fflvd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 08:03:31 crc kubenswrapper[4853]: I1122 08:03:31.297904 4853 prober.go:107] "Probe 
failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 08:03:31 crc kubenswrapper[4853]: I1122 08:03:31.297957 4853 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" Nov 22 08:03:31 crc kubenswrapper[4853]: I1122 08:03:31.298669 4853 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3ec8165bab4fb75b436ac780db4c9acf7acf3a80877e56710bdae29a7122f42c"} pod="openshift-machine-config-operator/machine-config-daemon-fflvd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 22 08:03:31 crc kubenswrapper[4853]: I1122 08:03:31.298724 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" containerName="machine-config-daemon" containerID="cri-o://3ec8165bab4fb75b436ac780db4c9acf7acf3a80877e56710bdae29a7122f42c" gracePeriod=600 Nov 22 08:03:31 crc kubenswrapper[4853]: E1122 08:03:31.432203 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" Nov 22 08:03:32 crc kubenswrapper[4853]: I1122 08:03:32.103184 4853 generic.go:334] "Generic (PLEG): container finished" podID="476c875a-2b87-419a-8042-0ba059620fd8" containerID="3ec8165bab4fb75b436ac780db4c9acf7acf3a80877e56710bdae29a7122f42c" exitCode=0 Nov 22 08:03:32 crc kubenswrapper[4853]: I1122 08:03:32.103276 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" event={"ID":"476c875a-2b87-419a-8042-0ba059620fd8","Type":"ContainerDied","Data":"3ec8165bab4fb75b436ac780db4c9acf7acf3a80877e56710bdae29a7122f42c"} Nov 22 08:03:32 crc kubenswrapper[4853]: I1122 08:03:32.103582 4853 scope.go:117] "RemoveContainer" containerID="5b5dbaa1649c53a81854e516978dad56264c7a832b92f5fd324ac74aac9f63cd" Nov 22 08:03:32 crc kubenswrapper[4853]: I1122 08:03:32.106482 4853 scope.go:117] "RemoveContainer" containerID="3ec8165bab4fb75b436ac780db4c9acf7acf3a80877e56710bdae29a7122f42c" Nov 22 08:03:32 crc kubenswrapper[4853]: E1122 08:03:32.106954 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" Nov 22 08:03:42 crc kubenswrapper[4853]: I1122 08:03:42.748349 4853 scope.go:117] "RemoveContainer" containerID="3ec8165bab4fb75b436ac780db4c9acf7acf3a80877e56710bdae29a7122f42c" Nov 22 08:03:42 crc kubenswrapper[4853]: E1122 08:03:42.749267 4853 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" Nov 22 08:03:52 crc kubenswrapper[4853]: I1122 08:03:52.371579 4853 generic.go:334] "Generic (PLEG): container finished" podID="77023bdf-69ac-4065-b6de-af12e3477fd9" containerID="0ece8aabc5b39ba37b80d19e4c7977106061991f6563e6f16ed594e1fb256c6b" exitCode=0 Nov 22 08:03:52 crc kubenswrapper[4853]: I1122 08:03:52.371979 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-6n78r" event={"ID":"77023bdf-69ac-4065-b6de-af12e3477fd9","Type":"ContainerDied","Data":"0ece8aabc5b39ba37b80d19e4c7977106061991f6563e6f16ed594e1fb256c6b"} Nov 22 08:03:53 crc kubenswrapper[4853]: I1122 08:03:53.905994 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-6n78r" Nov 22 08:03:53 crc kubenswrapper[4853]: I1122 08:03:53.949941 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/77023bdf-69ac-4065-b6de-af12e3477fd9-ssh-key\") pod \"77023bdf-69ac-4065-b6de-af12e3477fd9\" (UID: \"77023bdf-69ac-4065-b6de-af12e3477fd9\") " Nov 22 08:03:53 crc kubenswrapper[4853]: I1122 08:03:53.950097 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/77023bdf-69ac-4065-b6de-af12e3477fd9-inventory\") pod \"77023bdf-69ac-4065-b6de-af12e3477fd9\" (UID: \"77023bdf-69ac-4065-b6de-af12e3477fd9\") " Nov 22 08:03:53 crc kubenswrapper[4853]: I1122 08:03:53.950325 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dr4n9\" (UniqueName: \"kubernetes.io/projected/77023bdf-69ac-4065-b6de-af12e3477fd9-kube-api-access-dr4n9\") pod \"77023bdf-69ac-4065-b6de-af12e3477fd9\" (UID: \"77023bdf-69ac-4065-b6de-af12e3477fd9\") " Nov 22 08:03:53 crc kubenswrapper[4853]: I1122 08:03:53.969094 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/77023bdf-69ac-4065-b6de-af12e3477fd9-kube-api-access-dr4n9" (OuterVolumeSpecName: "kube-api-access-dr4n9") pod "77023bdf-69ac-4065-b6de-af12e3477fd9" (UID: "77023bdf-69ac-4065-b6de-af12e3477fd9"). InnerVolumeSpecName "kube-api-access-dr4n9". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 08:03:53 crc kubenswrapper[4853]: I1122 08:03:53.987638 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77023bdf-69ac-4065-b6de-af12e3477fd9-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "77023bdf-69ac-4065-b6de-af12e3477fd9" (UID: "77023bdf-69ac-4065-b6de-af12e3477fd9"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:03:53 crc kubenswrapper[4853]: I1122 08:03:53.999369 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77023bdf-69ac-4065-b6de-af12e3477fd9-inventory" (OuterVolumeSpecName: "inventory") pod "77023bdf-69ac-4065-b6de-af12e3477fd9" (UID: "77023bdf-69ac-4065-b6de-af12e3477fd9"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:03:54 crc kubenswrapper[4853]: I1122 08:03:54.052482 4853 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/77023bdf-69ac-4065-b6de-af12e3477fd9-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 22 08:03:54 crc kubenswrapper[4853]: I1122 08:03:54.052522 4853 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/77023bdf-69ac-4065-b6de-af12e3477fd9-inventory\") on node \"crc\" DevicePath \"\"" Nov 22 08:03:54 crc kubenswrapper[4853]: I1122 08:03:54.052532 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dr4n9\" (UniqueName: \"kubernetes.io/projected/77023bdf-69ac-4065-b6de-af12e3477fd9-kube-api-access-dr4n9\") on node \"crc\" DevicePath \"\"" Nov 22 08:03:54 crc kubenswrapper[4853]: I1122 08:03:54.398274 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-6n78r" event={"ID":"77023bdf-69ac-4065-b6de-af12e3477fd9","Type":"ContainerDied","Data":"c03967dd2a18694e4001ecd979cdcb3f902af2cfa48e1495b91e7e61e13c0df8"} Nov 22 08:03:54 crc kubenswrapper[4853]: I1122 08:03:54.398541 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c03967dd2a18694e4001ecd979cdcb3f902af2cfa48e1495b91e7e61e13c0df8" Nov 22 08:03:54 crc kubenswrapper[4853]: I1122 08:03:54.398420 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-6n78r" Nov 22 08:03:54 crc kubenswrapper[4853]: I1122 08:03:54.485073 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-t6wjk"] Nov 22 08:03:54 crc kubenswrapper[4853]: E1122 08:03:54.485642 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77023bdf-69ac-4065-b6de-af12e3477fd9" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Nov 22 08:03:54 crc kubenswrapper[4853]: I1122 08:03:54.485663 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="77023bdf-69ac-4065-b6de-af12e3477fd9" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Nov 22 08:03:54 crc kubenswrapper[4853]: I1122 08:03:54.485932 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="77023bdf-69ac-4065-b6de-af12e3477fd9" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Nov 22 08:03:54 crc kubenswrapper[4853]: I1122 08:03:54.486774 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-t6wjk"
Nov 22 08:03:54 crc kubenswrapper[4853]: I1122 08:03:54.488872 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Nov 22 08:03:54 crc kubenswrapper[4853]: I1122 08:03:54.493564 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Nov 22 08:03:54 crc kubenswrapper[4853]: I1122 08:03:54.494048 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Nov 22 08:03:54 crc kubenswrapper[4853]: I1122 08:03:54.494161 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-km5tw"
Nov 22 08:03:54 crc kubenswrapper[4853]: I1122 08:03:54.498317 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-t6wjk"]
Nov 22 08:03:54 crc kubenswrapper[4853]: I1122 08:03:54.568946 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9a33fbd8-6d28-4cc6-b1f1-5d90c247f992-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-t6wjk\" (UID: \"9a33fbd8-6d28-4cc6-b1f1-5d90c247f992\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-t6wjk"
Nov 22 08:03:54 crc kubenswrapper[4853]: I1122 08:03:54.569021 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9a33fbd8-6d28-4cc6-b1f1-5d90c247f992-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-t6wjk\" (UID: \"9a33fbd8-6d28-4cc6-b1f1-5d90c247f992\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-t6wjk"
Nov 22 08:03:54 crc kubenswrapper[4853]: I1122 08:03:54.569096 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rkwt\" (UniqueName: \"kubernetes.io/projected/9a33fbd8-6d28-4cc6-b1f1-5d90c247f992-kube-api-access-8rkwt\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-t6wjk\" (UID: \"9a33fbd8-6d28-4cc6-b1f1-5d90c247f992\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-t6wjk"
Nov 22 08:03:54 crc kubenswrapper[4853]: I1122 08:03:54.671604 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9a33fbd8-6d28-4cc6-b1f1-5d90c247f992-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-t6wjk\" (UID: \"9a33fbd8-6d28-4cc6-b1f1-5d90c247f992\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-t6wjk"
Nov 22 08:03:54 crc kubenswrapper[4853]: I1122 08:03:54.671900 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9a33fbd8-6d28-4cc6-b1f1-5d90c247f992-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-t6wjk\" (UID: \"9a33fbd8-6d28-4cc6-b1f1-5d90c247f992\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-t6wjk"
Nov 22 08:03:54 crc kubenswrapper[4853]: I1122 08:03:54.671977 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8rkwt\" (UniqueName: \"kubernetes.io/projected/9a33fbd8-6d28-4cc6-b1f1-5d90c247f992-kube-api-access-8rkwt\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-t6wjk\" (UID: \"9a33fbd8-6d28-4cc6-b1f1-5d90c247f992\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-t6wjk"
Nov 22 08:03:54 crc kubenswrapper[4853]: I1122 08:03:54.676146 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9a33fbd8-6d28-4cc6-b1f1-5d90c247f992-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-t6wjk\" (UID: \"9a33fbd8-6d28-4cc6-b1f1-5d90c247f992\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-t6wjk"
Nov 22 08:03:54 crc kubenswrapper[4853]: I1122 08:03:54.676153 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9a33fbd8-6d28-4cc6-b1f1-5d90c247f992-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-t6wjk\" (UID: \"9a33fbd8-6d28-4cc6-b1f1-5d90c247f992\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-t6wjk"
Nov 22 08:03:54 crc kubenswrapper[4853]: I1122 08:03:54.693710 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8rkwt\" (UniqueName: \"kubernetes.io/projected/9a33fbd8-6d28-4cc6-b1f1-5d90c247f992-kube-api-access-8rkwt\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-t6wjk\" (UID: \"9a33fbd8-6d28-4cc6-b1f1-5d90c247f992\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-t6wjk"
Nov 22 08:03:54 crc kubenswrapper[4853]: I1122 08:03:54.806322 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-t6wjk"
Nov 22 08:03:55 crc kubenswrapper[4853]: I1122 08:03:55.386294 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-t6wjk"]
Nov 22 08:03:55 crc kubenswrapper[4853]: I1122 08:03:55.418604 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-t6wjk" event={"ID":"9a33fbd8-6d28-4cc6-b1f1-5d90c247f992","Type":"ContainerStarted","Data":"42e2d24461ea5ecee6cf9aa79686cb08a75f023abf8c2ac3f26716e8ca5b71cc"}
Nov 22 08:03:56 crc kubenswrapper[4853]: I1122 08:03:56.437371 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-t6wjk" event={"ID":"9a33fbd8-6d28-4cc6-b1f1-5d90c247f992","Type":"ContainerStarted","Data":"14df0282a46ecbe3e2529ebdc69a13ea22f1c763d27cbf4351ef9c4e71dfd9d7"}
Nov 22 08:03:56 crc kubenswrapper[4853]: I1122 08:03:56.457949 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-t6wjk" podStartSLOduration=1.979713614 podStartE2EDuration="2.457913148s" podCreationTimestamp="2025-11-22 08:03:54 +0000 UTC" firstStartedPulling="2025-11-22 08:03:55.397591855 +0000 UTC m=+3234.238214481" lastFinishedPulling="2025-11-22 08:03:55.875791389 +0000 UTC m=+3234.716414015" observedRunningTime="2025-11-22 08:03:56.45503481 +0000 UTC m=+3235.295657456" watchObservedRunningTime="2025-11-22 08:03:56.457913148 +0000 UTC m=+3235.298535864"
Nov 22 08:03:56 crc kubenswrapper[4853]: I1122 08:03:56.750263 4853 scope.go:117] "RemoveContainer" containerID="3ec8165bab4fb75b436ac780db4c9acf7acf3a80877e56710bdae29a7122f42c"
Nov 22 08:03:56 crc kubenswrapper[4853]: E1122 08:03:56.750944 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8"
Nov 22 08:04:10 crc kubenswrapper[4853]: I1122 08:04:10.749426 4853 scope.go:117] "RemoveContainer" containerID="3ec8165bab4fb75b436ac780db4c9acf7acf3a80877e56710bdae29a7122f42c"
Nov 22 08:04:10 crc kubenswrapper[4853]: E1122 08:04:10.750440 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8"
Nov 22 08:04:24 crc kubenswrapper[4853]: I1122 08:04:24.747392 4853 scope.go:117] "RemoveContainer" containerID="3ec8165bab4fb75b436ac780db4c9acf7acf3a80877e56710bdae29a7122f42c"
Nov 22 08:04:24 crc kubenswrapper[4853]: E1122 08:04:24.748429 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8"
Nov 22 08:04:27 crc kubenswrapper[4853]: I1122 08:04:27.046677 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-sync-4z67v"]
Nov 22 08:04:27 crc kubenswrapper[4853]: I1122 08:04:27.058063 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-sync-4z67v"]
Nov 22 08:04:27 crc kubenswrapper[4853]: I1122 08:04:27.775176 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9358cca5-2c9a-4ada-b9df-58fc71aa8fed" path="/var/lib/kubelet/pods/9358cca5-2c9a-4ada-b9df-58fc71aa8fed/volumes"
Nov 22 08:04:38 crc kubenswrapper[4853]: I1122 08:04:38.748680 4853 scope.go:117] "RemoveContainer" containerID="3ec8165bab4fb75b436ac780db4c9acf7acf3a80877e56710bdae29a7122f42c"
Nov 22 08:04:38 crc kubenswrapper[4853]: E1122 08:04:38.749491 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8"
Nov 22 08:04:44 crc kubenswrapper[4853]: I1122 08:04:44.986409 4853 generic.go:334] "Generic (PLEG): container finished" podID="9a33fbd8-6d28-4cc6-b1f1-5d90c247f992" containerID="14df0282a46ecbe3e2529ebdc69a13ea22f1c763d27cbf4351ef9c4e71dfd9d7" exitCode=0
Nov 22 08:04:44 crc kubenswrapper[4853]: I1122 08:04:44.986481 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-t6wjk" event={"ID":"9a33fbd8-6d28-4cc6-b1f1-5d90c247f992","Type":"ContainerDied","Data":"14df0282a46ecbe3e2529ebdc69a13ea22f1c763d27cbf4351ef9c4e71dfd9d7"}
Nov 22 08:04:46 crc kubenswrapper[4853]: I1122 08:04:46.496607 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-t6wjk"
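
The machine-config-daemon lines that repeat through this window (08:03:56, 08:04:10, 08:04:24, 08:04:38, ...) are one crash-looping container, not a stream of new failures: once a container has failed repeatedly, the kubelet doubles the restart delay after each failure up to a cap, and every sync attempt inside that window logs the same "back-off 5m0s restarting failed container" error. A sketch of the doubling-with-cap policy; the 10s base and 5m cap match upstream kubelet defaults, but the function itself is illustrative:

    // Crash-loop restart delay: double per failure, cap at 5 minutes.
    package main

    import (
        "fmt"
        "time"
    )

    func crashLoopDelay(restarts int) time.Duration {
        d := 10 * time.Second // kubelet's default initial backoff
        for i := 0; i < restarts; i++ {
            d *= 2
            if d >= 5*time.Minute {
                return 5 * time.Minute // "back-off 5m0s restarting failed container"
            }
        }
        return d
    }

    func main() {
        for r := 0; r <= 6; r++ {
            fmt.Printf("restart %d -> wait %v\n", r, crashLoopDelay(r))
        }
    }
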
Nov 22 08:04:46 crc kubenswrapper[4853]: I1122 08:04:46.686780 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9a33fbd8-6d28-4cc6-b1f1-5d90c247f992-ssh-key\") pod \"9a33fbd8-6d28-4cc6-b1f1-5d90c247f992\" (UID: \"9a33fbd8-6d28-4cc6-b1f1-5d90c247f992\") "
Nov 22 08:04:46 crc kubenswrapper[4853]: I1122 08:04:46.687476 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8rkwt\" (UniqueName: \"kubernetes.io/projected/9a33fbd8-6d28-4cc6-b1f1-5d90c247f992-kube-api-access-8rkwt\") pod \"9a33fbd8-6d28-4cc6-b1f1-5d90c247f992\" (UID: \"9a33fbd8-6d28-4cc6-b1f1-5d90c247f992\") "
Nov 22 08:04:46 crc kubenswrapper[4853]: I1122 08:04:46.687670 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9a33fbd8-6d28-4cc6-b1f1-5d90c247f992-inventory\") pod \"9a33fbd8-6d28-4cc6-b1f1-5d90c247f992\" (UID: \"9a33fbd8-6d28-4cc6-b1f1-5d90c247f992\") "
Nov 22 08:04:46 crc kubenswrapper[4853]: I1122 08:04:46.692522 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a33fbd8-6d28-4cc6-b1f1-5d90c247f992-kube-api-access-8rkwt" (OuterVolumeSpecName: "kube-api-access-8rkwt") pod "9a33fbd8-6d28-4cc6-b1f1-5d90c247f992" (UID: "9a33fbd8-6d28-4cc6-b1f1-5d90c247f992"). InnerVolumeSpecName "kube-api-access-8rkwt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 22 08:04:46 crc kubenswrapper[4853]: I1122 08:04:46.719120 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a33fbd8-6d28-4cc6-b1f1-5d90c247f992-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "9a33fbd8-6d28-4cc6-b1f1-5d90c247f992" (UID: "9a33fbd8-6d28-4cc6-b1f1-5d90c247f992"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 22 08:04:46 crc kubenswrapper[4853]: I1122 08:04:46.721971 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a33fbd8-6d28-4cc6-b1f1-5d90c247f992-inventory" (OuterVolumeSpecName: "inventory") pod "9a33fbd8-6d28-4cc6-b1f1-5d90c247f992" (UID: "9a33fbd8-6d28-4cc6-b1f1-5d90c247f992"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 22 08:04:46 crc kubenswrapper[4853]: I1122 08:04:46.792895 4853 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9a33fbd8-6d28-4cc6-b1f1-5d90c247f992-inventory\") on node \"crc\" DevicePath \"\""
Nov 22 08:04:46 crc kubenswrapper[4853]: I1122 08:04:46.792933 4853 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9a33fbd8-6d28-4cc6-b1f1-5d90c247f992-ssh-key\") on node \"crc\" DevicePath \"\""
Nov 22 08:04:46 crc kubenswrapper[4853]: I1122 08:04:46.792946 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8rkwt\" (UniqueName: \"kubernetes.io/projected/9a33fbd8-6d28-4cc6-b1f1-5d90c247f992-kube-api-access-8rkwt\") on node \"crc\" DevicePath \"\""
Nov 22 08:04:47 crc kubenswrapper[4853]: I1122 08:04:47.008453 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-t6wjk" event={"ID":"9a33fbd8-6d28-4cc6-b1f1-5d90c247f992","Type":"ContainerDied","Data":"42e2d24461ea5ecee6cf9aa79686cb08a75f023abf8c2ac3f26716e8ca5b71cc"}
Nov 22 08:04:47 crc kubenswrapper[4853]: I1122 08:04:47.008495 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="42e2d24461ea5ecee6cf9aa79686cb08a75f023abf8c2ac3f26716e8ca5b71cc"
Nov 22 08:04:47 crc kubenswrapper[4853]: I1122 08:04:47.008557 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-t6wjk"
Nov 22 08:04:47 crc kubenswrapper[4853]: I1122 08:04:47.089879 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-84bxn"]
Nov 22 08:04:47 crc kubenswrapper[4853]: E1122 08:04:47.090521 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a33fbd8-6d28-4cc6-b1f1-5d90c247f992" containerName="configure-os-edpm-deployment-openstack-edpm-ipam"
Nov 22 08:04:47 crc kubenswrapper[4853]: I1122 08:04:47.090551 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a33fbd8-6d28-4cc6-b1f1-5d90c247f992" containerName="configure-os-edpm-deployment-openstack-edpm-ipam"
Nov 22 08:04:47 crc kubenswrapper[4853]: I1122 08:04:47.090852 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a33fbd8-6d28-4cc6-b1f1-5d90c247f992" containerName="configure-os-edpm-deployment-openstack-edpm-ipam"
Nov 22 08:04:47 crc kubenswrapper[4853]: I1122 08:04:47.091860 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-84bxn"
Nov 22 08:04:47 crc kubenswrapper[4853]: I1122 08:04:47.094644 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Nov 22 08:04:47 crc kubenswrapper[4853]: I1122 08:04:47.094798 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Nov 22 08:04:47 crc kubenswrapper[4853]: I1122 08:04:47.094831 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-km5tw"
Nov 22 08:04:47 crc kubenswrapper[4853]: I1122 08:04:47.095519 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Nov 22 08:04:47 crc kubenswrapper[4853]: I1122 08:04:47.103181 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-84bxn"]
Nov 22 08:04:47 crc kubenswrapper[4853]: I1122 08:04:47.202798 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/e0ae265d-0731-4195-9f31-7bf77627fadd-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-84bxn\" (UID: \"e0ae265d-0731-4195-9f31-7bf77627fadd\") " pod="openstack/ssh-known-hosts-edpm-deployment-84bxn"
Nov 22 08:04:47 crc kubenswrapper[4853]: I1122 08:04:47.202964 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e0ae265d-0731-4195-9f31-7bf77627fadd-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-84bxn\" (UID: \"e0ae265d-0731-4195-9f31-7bf77627fadd\") " pod="openstack/ssh-known-hosts-edpm-deployment-84bxn"
Nov 22 08:04:47 crc kubenswrapper[4853]: I1122 08:04:47.205249 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b2nj2\" (UniqueName: \"kubernetes.io/projected/e0ae265d-0731-4195-9f31-7bf77627fadd-kube-api-access-b2nj2\") pod \"ssh-known-hosts-edpm-deployment-84bxn\" (UID: \"e0ae265d-0731-4195-9f31-7bf77627fadd\") " pod="openstack/ssh-known-hosts-edpm-deployment-84bxn"
Nov 22 08:04:47 crc kubenswrapper[4853]: I1122 08:04:47.307899 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e0ae265d-0731-4195-9f31-7bf77627fadd-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-84bxn\" (UID: \"e0ae265d-0731-4195-9f31-7bf77627fadd\") " pod="openstack/ssh-known-hosts-edpm-deployment-84bxn"
Nov 22 08:04:47 crc kubenswrapper[4853]: I1122 08:04:47.308238 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b2nj2\" (UniqueName: \"kubernetes.io/projected/e0ae265d-0731-4195-9f31-7bf77627fadd-kube-api-access-b2nj2\") pod \"ssh-known-hosts-edpm-deployment-84bxn\" (UID: \"e0ae265d-0731-4195-9f31-7bf77627fadd\") " pod="openstack/ssh-known-hosts-edpm-deployment-84bxn"
Nov 22 08:04:47 crc kubenswrapper[4853]: I1122 08:04:47.308408 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/e0ae265d-0731-4195-9f31-7bf77627fadd-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-84bxn\" (UID: \"e0ae265d-0731-4195-9f31-7bf77627fadd\") " pod="openstack/ssh-known-hosts-edpm-deployment-84bxn"
Nov 22 08:04:47 crc kubenswrapper[4853]: I1122 08:04:47.312008 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/e0ae265d-0731-4195-9f31-7bf77627fadd-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-84bxn\" (UID: \"e0ae265d-0731-4195-9f31-7bf77627fadd\") " pod="openstack/ssh-known-hosts-edpm-deployment-84bxn"
Nov 22 08:04:47 crc kubenswrapper[4853]: I1122 08:04:47.314044 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e0ae265d-0731-4195-9f31-7bf77627fadd-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-84bxn\" (UID: \"e0ae265d-0731-4195-9f31-7bf77627fadd\") " pod="openstack/ssh-known-hosts-edpm-deployment-84bxn"
Nov 22 08:04:47 crc kubenswrapper[4853]: I1122 08:04:47.327017 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b2nj2\" (UniqueName: \"kubernetes.io/projected/e0ae265d-0731-4195-9f31-7bf77627fadd-kube-api-access-b2nj2\") pod \"ssh-known-hosts-edpm-deployment-84bxn\" (UID: \"e0ae265d-0731-4195-9f31-7bf77627fadd\") " pod="openstack/ssh-known-hosts-edpm-deployment-84bxn"
Nov 22 08:04:47 crc kubenswrapper[4853]: I1122 08:04:47.414870 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-84bxn"
Nov 22 08:04:47 crc kubenswrapper[4853]: I1122 08:04:47.768845 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-84bxn"]
Nov 22 08:04:48 crc kubenswrapper[4853]: I1122 08:04:48.020638 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-84bxn" event={"ID":"e0ae265d-0731-4195-9f31-7bf77627fadd","Type":"ContainerStarted","Data":"df6f71889bf9da12f95e51dd100d4fdb4d42eb5a820771f33aa19d159baa94a5"}
Nov 22 08:04:49 crc kubenswrapper[4853]: I1122 08:04:49.048049 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-84bxn" event={"ID":"e0ae265d-0731-4195-9f31-7bf77627fadd","Type":"ContainerStarted","Data":"3c3eda6da52c946ed16cd2443cb68eb72fe937d86e4f964a1659084b06d20c7a"}
Nov 22 08:04:49 crc kubenswrapper[4853]: I1122 08:04:49.080118 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-84bxn" podStartSLOduration=1.708990453 podStartE2EDuration="2.080099764s" podCreationTimestamp="2025-11-22 08:04:47 +0000 UTC" firstStartedPulling="2025-11-22 08:04:47.775561906 +0000 UTC m=+3286.616184532" lastFinishedPulling="2025-11-22 08:04:48.146671217 +0000 UTC m=+3286.987293843" observedRunningTime="2025-11-22 08:04:49.072114548 +0000 UTC m=+3287.912737194" watchObservedRunningTime="2025-11-22 08:04:49.080099764 +0000 UTC m=+3287.920722390"
Nov 22 08:04:53 crc kubenswrapper[4853]: I1122 08:04:53.750625 4853 scope.go:117] "RemoveContainer" containerID="3ec8165bab4fb75b436ac780db4c9acf7acf3a80877e56710bdae29a7122f42c"
Nov 22 08:04:53 crc kubenswrapper[4853]: E1122 08:04:53.752411 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8"
Nov 22 08:04:55 crc kubenswrapper[4853]: I1122 08:04:55.110007 4853 generic.go:334] "Generic (PLEG): container finished" podID="e0ae265d-0731-4195-9f31-7bf77627fadd" containerID="3c3eda6da52c946ed16cd2443cb68eb72fe937d86e4f964a1659084b06d20c7a" exitCode=0
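
The "Observed pod startup duration" line above encodes a simple relationship: podStartE2EDuration is observedRunningTime minus podCreationTimestamp, and podStartSLOduration is that figure with the image-pull window (firstStartedPulling to lastFinishedPulling) subtracted. For ssh-known-hosts-edpm-deployment-84bxn: 2.080099764s - (08:04:48.146671217 - 08:04:47.775561906) = 2.080099764s - 0.371109311s = 1.708990453s, exactly the logged value. The same check in Go, using the timestamps from the line:

    // Reproduce podStartSLOduration from the fields of the log line above.
    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
        firstPull, _ := time.Parse(layout, "2025-11-22 08:04:47.775561906 +0000 UTC")
        lastPull, _ := time.Parse(layout, "2025-11-22 08:04:48.146671217 +0000 UTC")
        e2e, _ := time.ParseDuration("2.080099764s") // podStartE2EDuration
        fmt.Println(e2e - lastPull.Sub(firstPull))   // 1.708990453s
    }
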
Nov 22 08:04:55 crc kubenswrapper[4853]: I1122 08:04:55.110135 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-84bxn" event={"ID":"e0ae265d-0731-4195-9f31-7bf77627fadd","Type":"ContainerDied","Data":"3c3eda6da52c946ed16cd2443cb68eb72fe937d86e4f964a1659084b06d20c7a"}
Nov 22 08:04:56 crc kubenswrapper[4853]: I1122 08:04:56.601196 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-84bxn"
Nov 22 08:04:56 crc kubenswrapper[4853]: I1122 08:04:56.657708 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/e0ae265d-0731-4195-9f31-7bf77627fadd-inventory-0\") pod \"e0ae265d-0731-4195-9f31-7bf77627fadd\" (UID: \"e0ae265d-0731-4195-9f31-7bf77627fadd\") "
Nov 22 08:04:56 crc kubenswrapper[4853]: I1122 08:04:56.657802 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b2nj2\" (UniqueName: \"kubernetes.io/projected/e0ae265d-0731-4195-9f31-7bf77627fadd-kube-api-access-b2nj2\") pod \"e0ae265d-0731-4195-9f31-7bf77627fadd\" (UID: \"e0ae265d-0731-4195-9f31-7bf77627fadd\") "
Nov 22 08:04:56 crc kubenswrapper[4853]: I1122 08:04:56.666198 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0ae265d-0731-4195-9f31-7bf77627fadd-kube-api-access-b2nj2" (OuterVolumeSpecName: "kube-api-access-b2nj2") pod "e0ae265d-0731-4195-9f31-7bf77627fadd" (UID: "e0ae265d-0731-4195-9f31-7bf77627fadd"). InnerVolumeSpecName "kube-api-access-b2nj2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 22 08:04:56 crc kubenswrapper[4853]: I1122 08:04:56.695550 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0ae265d-0731-4195-9f31-7bf77627fadd-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "e0ae265d-0731-4195-9f31-7bf77627fadd" (UID: "e0ae265d-0731-4195-9f31-7bf77627fadd"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 22 08:04:56 crc kubenswrapper[4853]: I1122 08:04:56.760196 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e0ae265d-0731-4195-9f31-7bf77627fadd-ssh-key-openstack-edpm-ipam\") pod \"e0ae265d-0731-4195-9f31-7bf77627fadd\" (UID: \"e0ae265d-0731-4195-9f31-7bf77627fadd\") "
Nov 22 08:04:56 crc kubenswrapper[4853]: I1122 08:04:56.761370 4853 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/e0ae265d-0731-4195-9f31-7bf77627fadd-inventory-0\") on node \"crc\" DevicePath \"\""
Nov 22 08:04:56 crc kubenswrapper[4853]: I1122 08:04:56.761412 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b2nj2\" (UniqueName: \"kubernetes.io/projected/e0ae265d-0731-4195-9f31-7bf77627fadd-kube-api-access-b2nj2\") on node \"crc\" DevicePath \"\""
Nov 22 08:04:56 crc kubenswrapper[4853]: I1122 08:04:56.789550 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0ae265d-0731-4195-9f31-7bf77627fadd-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "e0ae265d-0731-4195-9f31-7bf77627fadd" (UID: "e0ae265d-0731-4195-9f31-7bf77627fadd"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 22 08:04:56 crc kubenswrapper[4853]: I1122 08:04:56.862585 4853 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e0ae265d-0731-4195-9f31-7bf77627fadd-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Nov 22 08:04:57 crc kubenswrapper[4853]: I1122 08:04:57.130961 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-84bxn" event={"ID":"e0ae265d-0731-4195-9f31-7bf77627fadd","Type":"ContainerDied","Data":"df6f71889bf9da12f95e51dd100d4fdb4d42eb5a820771f33aa19d159baa94a5"}
Nov 22 08:04:57 crc kubenswrapper[4853]: I1122 08:04:57.131012 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="df6f71889bf9da12f95e51dd100d4fdb4d42eb5a820771f33aa19d159baa94a5"
Nov 22 08:04:57 crc kubenswrapper[4853]: I1122 08:04:57.131044 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-84bxn"
Nov 22 08:04:57 crc kubenswrapper[4853]: I1122 08:04:57.211939 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-zc5md"]
Nov 22 08:04:57 crc kubenswrapper[4853]: E1122 08:04:57.212602 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0ae265d-0731-4195-9f31-7bf77627fadd" containerName="ssh-known-hosts-edpm-deployment"
Nov 22 08:04:57 crc kubenswrapper[4853]: I1122 08:04:57.212625 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0ae265d-0731-4195-9f31-7bf77627fadd" containerName="ssh-known-hosts-edpm-deployment"
Nov 22 08:04:57 crc kubenswrapper[4853]: I1122 08:04:57.212902 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0ae265d-0731-4195-9f31-7bf77627fadd" containerName="ssh-known-hosts-edpm-deployment"
Nov 22 08:04:57 crc kubenswrapper[4853]: I1122 08:04:57.213875 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-zc5md"
Nov 22 08:04:57 crc kubenswrapper[4853]: I1122 08:04:57.215997 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-km5tw"
Nov 22 08:04:57 crc kubenswrapper[4853]: I1122 08:04:57.216000 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Nov 22 08:04:57 crc kubenswrapper[4853]: I1122 08:04:57.216346 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Nov 22 08:04:57 crc kubenswrapper[4853]: I1122 08:04:57.216821 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Nov 22 08:04:57 crc kubenswrapper[4853]: I1122 08:04:57.226404 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-zc5md"]
Nov 22 08:04:57 crc kubenswrapper[4853]: I1122 08:04:57.272011 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8nts\" (UniqueName: \"kubernetes.io/projected/1ae307fc-7b82-4d0c-8bdf-1af3c349634b-kube-api-access-z8nts\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-zc5md\" (UID: \"1ae307fc-7b82-4d0c-8bdf-1af3c349634b\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-zc5md"
Nov 22 08:04:57 crc kubenswrapper[4853]: I1122 08:04:57.272288 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1ae307fc-7b82-4d0c-8bdf-1af3c349634b-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-zc5md\" (UID: \"1ae307fc-7b82-4d0c-8bdf-1af3c349634b\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-zc5md"
Nov 22 08:04:57 crc kubenswrapper[4853]: I1122 08:04:57.272349 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1ae307fc-7b82-4d0c-8bdf-1af3c349634b-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-zc5md\" (UID: \"1ae307fc-7b82-4d0c-8bdf-1af3c349634b\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-zc5md"
Nov 22 08:04:57 crc kubenswrapper[4853]: I1122 08:04:57.376953 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1ae307fc-7b82-4d0c-8bdf-1af3c349634b-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-zc5md\" (UID: \"1ae307fc-7b82-4d0c-8bdf-1af3c349634b\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-zc5md"
Nov 22 08:04:57 crc kubenswrapper[4853]: I1122 08:04:57.377689 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1ae307fc-7b82-4d0c-8bdf-1af3c349634b-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-zc5md\" (UID: \"1ae307fc-7b82-4d0c-8bdf-1af3c349634b\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-zc5md"
Nov 22 08:04:57 crc kubenswrapper[4853]: I1122 08:04:57.377943 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z8nts\" (UniqueName: \"kubernetes.io/projected/1ae307fc-7b82-4d0c-8bdf-1af3c349634b-kube-api-access-z8nts\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-zc5md\" (UID: \"1ae307fc-7b82-4d0c-8bdf-1af3c349634b\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-zc5md"
Nov 22 08:04:57 crc kubenswrapper[4853]: I1122 08:04:57.382284 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1ae307fc-7b82-4d0c-8bdf-1af3c349634b-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-zc5md\" (UID: \"1ae307fc-7b82-4d0c-8bdf-1af3c349634b\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-zc5md"
Nov 22 08:04:57 crc kubenswrapper[4853]: I1122 08:04:57.382841 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1ae307fc-7b82-4d0c-8bdf-1af3c349634b-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-zc5md\" (UID: \"1ae307fc-7b82-4d0c-8bdf-1af3c349634b\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-zc5md"
Nov 22 08:04:57 crc kubenswrapper[4853]: I1122 08:04:57.395685 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z8nts\" (UniqueName: \"kubernetes.io/projected/1ae307fc-7b82-4d0c-8bdf-1af3c349634b-kube-api-access-z8nts\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-zc5md\" (UID: \"1ae307fc-7b82-4d0c-8bdf-1af3c349634b\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-zc5md"
Nov 22 08:04:57 crc kubenswrapper[4853]: I1122 08:04:57.537429 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-zc5md"
Nov 22 08:04:58 crc kubenswrapper[4853]: I1122 08:04:58.151695 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-zc5md"]
Nov 22 08:04:59 crc kubenswrapper[4853]: I1122 08:04:59.154919 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-zc5md" event={"ID":"1ae307fc-7b82-4d0c-8bdf-1af3c349634b","Type":"ContainerStarted","Data":"ee271f59473b1bc46361d8cdfdffd79748147eb79471c57ee186b3d85237d3fc"}
Nov 22 08:04:59 crc kubenswrapper[4853]: I1122 08:04:59.155315 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-zc5md" event={"ID":"1ae307fc-7b82-4d0c-8bdf-1af3c349634b","Type":"ContainerStarted","Data":"479daed734d9dc43c46b7f7c14f2f1ca11f6869530f893d2cf459e6daee63627"}
Nov 22 08:05:02 crc kubenswrapper[4853]: I1122 08:05:02.571253 4853 scope.go:117] "RemoveContainer" containerID="2425633fa17944a5e7544c55faaf263fcb0cc2d659672a869344cc36058c1ef2"
Nov 22 08:05:04 crc kubenswrapper[4853]: I1122 08:05:04.749250 4853 scope.go:117] "RemoveContainer" containerID="3ec8165bab4fb75b436ac780db4c9acf7acf3a80877e56710bdae29a7122f42c"
Nov 22 08:05:04 crc kubenswrapper[4853]: E1122 08:05:04.750221 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8"
Nov 22 08:05:07 crc kubenswrapper[4853]: I1122 08:05:07.431265 4853 generic.go:334] "Generic (PLEG): container finished" podID="1ae307fc-7b82-4d0c-8bdf-1af3c349634b" containerID="ee271f59473b1bc46361d8cdfdffd79748147eb79471c57ee186b3d85237d3fc" exitCode=0
Nov 22 08:05:07 crc kubenswrapper[4853]: I1122 08:05:07.431340 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-zc5md" event={"ID":"1ae307fc-7b82-4d0c-8bdf-1af3c349634b","Type":"ContainerDied","Data":"ee271f59473b1bc46361d8cdfdffd79748147eb79471c57ee186b3d85237d3fc"}
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-zc5md" event={"ID":"1ae307fc-7b82-4d0c-8bdf-1af3c349634b","Type":"ContainerDied","Data":"ee271f59473b1bc46361d8cdfdffd79748147eb79471c57ee186b3d85237d3fc"} Nov 22 08:05:08 crc kubenswrapper[4853]: I1122 08:05:08.906548 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-zc5md" Nov 22 08:05:08 crc kubenswrapper[4853]: I1122 08:05:08.920445 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1ae307fc-7b82-4d0c-8bdf-1af3c349634b-inventory\") pod \"1ae307fc-7b82-4d0c-8bdf-1af3c349634b\" (UID: \"1ae307fc-7b82-4d0c-8bdf-1af3c349634b\") " Nov 22 08:05:08 crc kubenswrapper[4853]: I1122 08:05:08.920490 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1ae307fc-7b82-4d0c-8bdf-1af3c349634b-ssh-key\") pod \"1ae307fc-7b82-4d0c-8bdf-1af3c349634b\" (UID: \"1ae307fc-7b82-4d0c-8bdf-1af3c349634b\") " Nov 22 08:05:08 crc kubenswrapper[4853]: I1122 08:05:08.920606 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z8nts\" (UniqueName: \"kubernetes.io/projected/1ae307fc-7b82-4d0c-8bdf-1af3c349634b-kube-api-access-z8nts\") pod \"1ae307fc-7b82-4d0c-8bdf-1af3c349634b\" (UID: \"1ae307fc-7b82-4d0c-8bdf-1af3c349634b\") " Nov 22 08:05:08 crc kubenswrapper[4853]: I1122 08:05:08.931048 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ae307fc-7b82-4d0c-8bdf-1af3c349634b-kube-api-access-z8nts" (OuterVolumeSpecName: "kube-api-access-z8nts") pod "1ae307fc-7b82-4d0c-8bdf-1af3c349634b" (UID: "1ae307fc-7b82-4d0c-8bdf-1af3c349634b"). InnerVolumeSpecName "kube-api-access-z8nts". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 08:05:08 crc kubenswrapper[4853]: I1122 08:05:08.972004 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ae307fc-7b82-4d0c-8bdf-1af3c349634b-inventory" (OuterVolumeSpecName: "inventory") pod "1ae307fc-7b82-4d0c-8bdf-1af3c349634b" (UID: "1ae307fc-7b82-4d0c-8bdf-1af3c349634b"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:05:08 crc kubenswrapper[4853]: I1122 08:05:08.972431 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ae307fc-7b82-4d0c-8bdf-1af3c349634b-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "1ae307fc-7b82-4d0c-8bdf-1af3c349634b" (UID: "1ae307fc-7b82-4d0c-8bdf-1af3c349634b"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:05:09 crc kubenswrapper[4853]: I1122 08:05:09.025180 4853 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1ae307fc-7b82-4d0c-8bdf-1af3c349634b-inventory\") on node \"crc\" DevicePath \"\"" Nov 22 08:05:09 crc kubenswrapper[4853]: I1122 08:05:09.027091 4853 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1ae307fc-7b82-4d0c-8bdf-1af3c349634b-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 22 08:05:09 crc kubenswrapper[4853]: I1122 08:05:09.027146 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z8nts\" (UniqueName: \"kubernetes.io/projected/1ae307fc-7b82-4d0c-8bdf-1af3c349634b-kube-api-access-z8nts\") on node \"crc\" DevicePath \"\"" Nov 22 08:05:09 crc kubenswrapper[4853]: I1122 08:05:09.456506 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-zc5md" event={"ID":"1ae307fc-7b82-4d0c-8bdf-1af3c349634b","Type":"ContainerDied","Data":"479daed734d9dc43c46b7f7c14f2f1ca11f6869530f893d2cf459e6daee63627"} Nov 22 08:05:09 crc kubenswrapper[4853]: I1122 08:05:09.456965 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="479daed734d9dc43c46b7f7c14f2f1ca11f6869530f893d2cf459e6daee63627" Nov 22 08:05:09 crc kubenswrapper[4853]: I1122 08:05:09.456627 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-zc5md" Nov 22 08:05:09 crc kubenswrapper[4853]: I1122 08:05:09.536935 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-wbvm4"] Nov 22 08:05:09 crc kubenswrapper[4853]: E1122 08:05:09.538149 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ae307fc-7b82-4d0c-8bdf-1af3c349634b" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Nov 22 08:05:09 crc kubenswrapper[4853]: I1122 08:05:09.538188 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ae307fc-7b82-4d0c-8bdf-1af3c349634b" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Nov 22 08:05:09 crc kubenswrapper[4853]: I1122 08:05:09.538917 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ae307fc-7b82-4d0c-8bdf-1af3c349634b" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Nov 22 08:05:09 crc kubenswrapper[4853]: I1122 08:05:09.540616 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-wbvm4" Nov 22 08:05:09 crc kubenswrapper[4853]: I1122 08:05:09.546734 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 22 08:05:09 crc kubenswrapper[4853]: I1122 08:05:09.548314 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 22 08:05:09 crc kubenswrapper[4853]: I1122 08:05:09.550494 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-km5tw" Nov 22 08:05:09 crc kubenswrapper[4853]: I1122 08:05:09.551355 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-wbvm4"] Nov 22 08:05:09 crc kubenswrapper[4853]: I1122 08:05:09.552076 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 22 08:05:09 crc kubenswrapper[4853]: I1122 08:05:09.643247 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/bc8cf0db-b7d8-49ca-9936-6a31e37bdcf3-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-wbvm4\" (UID: \"bc8cf0db-b7d8-49ca-9936-6a31e37bdcf3\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-wbvm4" Nov 22 08:05:09 crc kubenswrapper[4853]: I1122 08:05:09.643668 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bc8cf0db-b7d8-49ca-9936-6a31e37bdcf3-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-wbvm4\" (UID: \"bc8cf0db-b7d8-49ca-9936-6a31e37bdcf3\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-wbvm4" Nov 22 08:05:09 crc kubenswrapper[4853]: I1122 08:05:09.644557 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxcwn\" (UniqueName: \"kubernetes.io/projected/bc8cf0db-b7d8-49ca-9936-6a31e37bdcf3-kube-api-access-dxcwn\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-wbvm4\" (UID: \"bc8cf0db-b7d8-49ca-9936-6a31e37bdcf3\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-wbvm4" Nov 22 08:05:09 crc kubenswrapper[4853]: E1122 08:05:09.713822 4853 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1ae307fc_7b82_4d0c_8bdf_1af3c349634b.slice/crio-479daed734d9dc43c46b7f7c14f2f1ca11f6869530f893d2cf459e6daee63627\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1ae307fc_7b82_4d0c_8bdf_1af3c349634b.slice\": RecentStats: unable to find data in memory cache]" Nov 22 08:05:09 crc kubenswrapper[4853]: I1122 08:05:09.747218 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dxcwn\" (UniqueName: \"kubernetes.io/projected/bc8cf0db-b7d8-49ca-9936-6a31e37bdcf3-kube-api-access-dxcwn\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-wbvm4\" (UID: \"bc8cf0db-b7d8-49ca-9936-6a31e37bdcf3\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-wbvm4" Nov 22 08:05:09 crc kubenswrapper[4853]: I1122 08:05:09.747428 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: 
\"kubernetes.io/secret/bc8cf0db-b7d8-49ca-9936-6a31e37bdcf3-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-wbvm4\" (UID: \"bc8cf0db-b7d8-49ca-9936-6a31e37bdcf3\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-wbvm4" Nov 22 08:05:09 crc kubenswrapper[4853]: I1122 08:05:09.747536 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bc8cf0db-b7d8-49ca-9936-6a31e37bdcf3-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-wbvm4\" (UID: \"bc8cf0db-b7d8-49ca-9936-6a31e37bdcf3\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-wbvm4" Nov 22 08:05:09 crc kubenswrapper[4853]: I1122 08:05:09.753917 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/bc8cf0db-b7d8-49ca-9936-6a31e37bdcf3-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-wbvm4\" (UID: \"bc8cf0db-b7d8-49ca-9936-6a31e37bdcf3\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-wbvm4" Nov 22 08:05:09 crc kubenswrapper[4853]: I1122 08:05:09.765284 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bc8cf0db-b7d8-49ca-9936-6a31e37bdcf3-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-wbvm4\" (UID: \"bc8cf0db-b7d8-49ca-9936-6a31e37bdcf3\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-wbvm4" Nov 22 08:05:09 crc kubenswrapper[4853]: I1122 08:05:09.766119 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dxcwn\" (UniqueName: \"kubernetes.io/projected/bc8cf0db-b7d8-49ca-9936-6a31e37bdcf3-kube-api-access-dxcwn\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-wbvm4\" (UID: \"bc8cf0db-b7d8-49ca-9936-6a31e37bdcf3\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-wbvm4" Nov 22 08:05:09 crc kubenswrapper[4853]: I1122 08:05:09.868450 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-wbvm4" Nov 22 08:05:10 crc kubenswrapper[4853]: I1122 08:05:10.412353 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-wbvm4"] Nov 22 08:05:10 crc kubenswrapper[4853]: I1122 08:05:10.469968 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-wbvm4" event={"ID":"bc8cf0db-b7d8-49ca-9936-6a31e37bdcf3","Type":"ContainerStarted","Data":"4b1e7d31bb38bb7c138a665360421ff903d898a95b15df36f6fd58dfbb5a0220"} Nov 22 08:05:13 crc kubenswrapper[4853]: I1122 08:05:13.500972 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-wbvm4" event={"ID":"bc8cf0db-b7d8-49ca-9936-6a31e37bdcf3","Type":"ContainerStarted","Data":"1c4711b98c4f1dd13fd712a2968b3cba872eb63f0ddb783b65efb234c95cb2fd"} Nov 22 08:05:13 crc kubenswrapper[4853]: I1122 08:05:13.544246 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-wbvm4" podStartSLOduration=3.096054953 podStartE2EDuration="4.544227609s" podCreationTimestamp="2025-11-22 08:05:09 +0000 UTC" firstStartedPulling="2025-11-22 08:05:10.424765632 +0000 UTC m=+3309.265388258" lastFinishedPulling="2025-11-22 08:05:11.872938278 +0000 UTC m=+3310.713560914" observedRunningTime="2025-11-22 08:05:13.534767155 +0000 UTC m=+3312.375389771" watchObservedRunningTime="2025-11-22 08:05:13.544227609 +0000 UTC m=+3312.384850235" Nov 22 08:05:19 crc kubenswrapper[4853]: I1122 08:05:19.748183 4853 scope.go:117] "RemoveContainer" containerID="3ec8165bab4fb75b436ac780db4c9acf7acf3a80877e56710bdae29a7122f42c" Nov 22 08:05:19 crc kubenswrapper[4853]: E1122 08:05:19.749094 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" Nov 22 08:05:23 crc kubenswrapper[4853]: I1122 08:05:23.615661 4853 generic.go:334] "Generic (PLEG): container finished" podID="bc8cf0db-b7d8-49ca-9936-6a31e37bdcf3" containerID="1c4711b98c4f1dd13fd712a2968b3cba872eb63f0ddb783b65efb234c95cb2fd" exitCode=0 Nov 22 08:05:23 crc kubenswrapper[4853]: I1122 08:05:23.615794 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-wbvm4" event={"ID":"bc8cf0db-b7d8-49ca-9936-6a31e37bdcf3","Type":"ContainerDied","Data":"1c4711b98c4f1dd13fd712a2968b3cba872eb63f0ddb783b65efb234c95cb2fd"} Nov 22 08:05:25 crc kubenswrapper[4853]: I1122 08:05:25.070623 4853 util.go:48] "No ready sandbox for pod can be found. 
Nov 22 08:05:25 crc kubenswrapper[4853]: I1122 08:05:25.138145 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/bc8cf0db-b7d8-49ca-9936-6a31e37bdcf3-ssh-key\") pod \"bc8cf0db-b7d8-49ca-9936-6a31e37bdcf3\" (UID: \"bc8cf0db-b7d8-49ca-9936-6a31e37bdcf3\") "
Nov 22 08:05:25 crc kubenswrapper[4853]: I1122 08:05:25.138625 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dxcwn\" (UniqueName: \"kubernetes.io/projected/bc8cf0db-b7d8-49ca-9936-6a31e37bdcf3-kube-api-access-dxcwn\") pod \"bc8cf0db-b7d8-49ca-9936-6a31e37bdcf3\" (UID: \"bc8cf0db-b7d8-49ca-9936-6a31e37bdcf3\") "
Nov 22 08:05:25 crc kubenswrapper[4853]: I1122 08:05:25.139137 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bc8cf0db-b7d8-49ca-9936-6a31e37bdcf3-inventory\") pod \"bc8cf0db-b7d8-49ca-9936-6a31e37bdcf3\" (UID: \"bc8cf0db-b7d8-49ca-9936-6a31e37bdcf3\") "
Nov 22 08:05:25 crc kubenswrapper[4853]: I1122 08:05:25.146662 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc8cf0db-b7d8-49ca-9936-6a31e37bdcf3-kube-api-access-dxcwn" (OuterVolumeSpecName: "kube-api-access-dxcwn") pod "bc8cf0db-b7d8-49ca-9936-6a31e37bdcf3" (UID: "bc8cf0db-b7d8-49ca-9936-6a31e37bdcf3"). InnerVolumeSpecName "kube-api-access-dxcwn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 22 08:05:25 crc kubenswrapper[4853]: I1122 08:05:25.172594 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc8cf0db-b7d8-49ca-9936-6a31e37bdcf3-inventory" (OuterVolumeSpecName: "inventory") pod "bc8cf0db-b7d8-49ca-9936-6a31e37bdcf3" (UID: "bc8cf0db-b7d8-49ca-9936-6a31e37bdcf3"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 22 08:05:25 crc kubenswrapper[4853]: I1122 08:05:25.174813 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc8cf0db-b7d8-49ca-9936-6a31e37bdcf3-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "bc8cf0db-b7d8-49ca-9936-6a31e37bdcf3" (UID: "bc8cf0db-b7d8-49ca-9936-6a31e37bdcf3"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 22 08:05:25 crc kubenswrapper[4853]: I1122 08:05:25.243189 4853 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bc8cf0db-b7d8-49ca-9936-6a31e37bdcf3-inventory\") on node \"crc\" DevicePath \"\""
Nov 22 08:05:25 crc kubenswrapper[4853]: I1122 08:05:25.243234 4853 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/bc8cf0db-b7d8-49ca-9936-6a31e37bdcf3-ssh-key\") on node \"crc\" DevicePath \"\""
Nov 22 08:05:25 crc kubenswrapper[4853]: I1122 08:05:25.243244 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dxcwn\" (UniqueName: \"kubernetes.io/projected/bc8cf0db-b7d8-49ca-9936-6a31e37bdcf3-kube-api-access-dxcwn\") on node \"crc\" DevicePath \"\""
Nov 22 08:05:25 crc kubenswrapper[4853]: I1122 08:05:25.637252 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-wbvm4" event={"ID":"bc8cf0db-b7d8-49ca-9936-6a31e37bdcf3","Type":"ContainerDied","Data":"4b1e7d31bb38bb7c138a665360421ff903d898a95b15df36f6fd58dfbb5a0220"}
Nov 22 08:05:25 crc kubenswrapper[4853]: I1122 08:05:25.637294 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4b1e7d31bb38bb7c138a665360421ff903d898a95b15df36f6fd58dfbb5a0220"
Nov 22 08:05:25 crc kubenswrapper[4853]: I1122 08:05:25.637319 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-wbvm4"
Nov 22 08:05:25 crc kubenswrapper[4853]: I1122 08:05:25.733289 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mg62g"]
Nov 22 08:05:25 crc kubenswrapper[4853]: E1122 08:05:25.733838 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc8cf0db-b7d8-49ca-9936-6a31e37bdcf3" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam"
Nov 22 08:05:25 crc kubenswrapper[4853]: I1122 08:05:25.733858 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc8cf0db-b7d8-49ca-9936-6a31e37bdcf3" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam"
Nov 22 08:05:25 crc kubenswrapper[4853]: I1122 08:05:25.734108 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc8cf0db-b7d8-49ca-9936-6a31e37bdcf3" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam"
Nov 22 08:05:25 crc kubenswrapper[4853]: I1122 08:05:25.735030 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mg62g"
Nov 22 08:05:25 crc kubenswrapper[4853]: I1122 08:05:25.737712 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0"
Nov 22 08:05:25 crc kubenswrapper[4853]: I1122 08:05:25.737974 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Nov 22 08:05:25 crc kubenswrapper[4853]: I1122 08:05:25.738397 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-km5tw"
Nov 22 08:05:25 crc kubenswrapper[4853]: I1122 08:05:25.738812 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Nov 22 08:05:25 crc kubenswrapper[4853]: I1122 08:05:25.739116 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0"
Nov 22 08:05:25 crc kubenswrapper[4853]: I1122 08:05:25.739437 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0"
Nov 22 08:05:25 crc kubenswrapper[4853]: I1122 08:05:25.739610 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0"
Nov 22 08:05:25 crc kubenswrapper[4853]: I1122 08:05:25.739820 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Nov 22 08:05:25 crc kubenswrapper[4853]: I1122 08:05:25.741721 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0"
Nov 22 08:05:25 crc kubenswrapper[4853]: I1122 08:05:25.762946 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mg62g"]
Nov 22 08:05:25 crc kubenswrapper[4853]: I1122 08:05:25.873862 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48806bf3-8709-441a-bf45-7a89c6ce9b32-telemetry-power-monitoring-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mg62g\" (UID: \"48806bf3-8709-441a-bf45-7a89c6ce9b32\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mg62g"
Nov 22 08:05:25 crc kubenswrapper[4853]: I1122 08:05:25.873974 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48806bf3-8709-441a-bf45-7a89c6ce9b32-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mg62g\" (UID: \"48806bf3-8709-441a-bf45-7a89c6ce9b32\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mg62g"
Nov 22 08:05:25 crc kubenswrapper[4853]: I1122 08:05:25.874007 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48806bf3-8709-441a-bf45-7a89c6ce9b32-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mg62g\" (UID: \"48806bf3-8709-441a-bf45-7a89c6ce9b32\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mg62g"
Nov 22 08:05:25 crc kubenswrapper[4853]: I1122 08:05:25.874039 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48806bf3-8709-441a-bf45-7a89c6ce9b32-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mg62g\" (UID: \"48806bf3-8709-441a-bf45-7a89c6ce9b32\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mg62g"
Nov 22 08:05:25 crc kubenswrapper[4853]: I1122 08:05:25.874071 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/48806bf3-8709-441a-bf45-7a89c6ce9b32-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mg62g\" (UID: \"48806bf3-8709-441a-bf45-7a89c6ce9b32\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mg62g"
Nov 22 08:05:25 crc kubenswrapper[4853]: I1122 08:05:25.874140 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kgjpl\" (UniqueName: \"kubernetes.io/projected/48806bf3-8709-441a-bf45-7a89c6ce9b32-kube-api-access-kgjpl\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mg62g\" (UID: \"48806bf3-8709-441a-bf45-7a89c6ce9b32\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mg62g"
Nov 22 08:05:25 crc kubenswrapper[4853]: I1122 08:05:25.874205 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/48806bf3-8709-441a-bf45-7a89c6ce9b32-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mg62g\" (UID: \"48806bf3-8709-441a-bf45-7a89c6ce9b32\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mg62g"
Nov 22 08:05:25 crc kubenswrapper[4853]: I1122 08:05:25.874250 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/48806bf3-8709-441a-bf45-7a89c6ce9b32-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mg62g\" (UID: \"48806bf3-8709-441a-bf45-7a89c6ce9b32\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mg62g"
Nov 22 08:05:25 crc kubenswrapper[4853]: I1122 08:05:25.874277 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/48806bf3-8709-441a-bf45-7a89c6ce9b32-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mg62g\" (UID: \"48806bf3-8709-441a-bf45-7a89c6ce9b32\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mg62g"
Nov 22 08:05:25 crc kubenswrapper[4853]: I1122 08:05:25.874329 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48806bf3-8709-441a-bf45-7a89c6ce9b32-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mg62g\" (UID: \"48806bf3-8709-441a-bf45-7a89c6ce9b32\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mg62g"
Nov 22 08:05:25 crc kubenswrapper[4853]: I1122 08:05:25.874377 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/48806bf3-8709-441a-bf45-7a89c6ce9b32-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mg62g\" (UID: \"48806bf3-8709-441a-bf45-7a89c6ce9b32\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mg62g"
Nov 22 08:05:25 crc kubenswrapper[4853]: I1122 08:05:25.874440 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48806bf3-8709-441a-bf45-7a89c6ce9b32-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mg62g\" (UID: \"48806bf3-8709-441a-bf45-7a89c6ce9b32\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mg62g"
Nov 22 08:05:25 crc kubenswrapper[4853]: I1122 08:05:25.874460 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/48806bf3-8709-441a-bf45-7a89c6ce9b32-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mg62g\" (UID: \"48806bf3-8709-441a-bf45-7a89c6ce9b32\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mg62g"
Nov 22 08:05:25 crc kubenswrapper[4853]: I1122 08:05:25.874619 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48806bf3-8709-441a-bf45-7a89c6ce9b32-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mg62g\" (UID: \"48806bf3-8709-441a-bf45-7a89c6ce9b32\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mg62g"
Nov 22 08:05:25 crc kubenswrapper[4853]: I1122 08:05:25.874669 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/48806bf3-8709-441a-bf45-7a89c6ce9b32-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mg62g\" (UID: \"48806bf3-8709-441a-bf45-7a89c6ce9b32\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mg62g"
Nov 22 08:05:25 crc kubenswrapper[4853]: I1122 08:05:25.874712 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48806bf3-8709-441a-bf45-7a89c6ce9b32-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mg62g\" (UID: \"48806bf3-8709-441a-bf45-7a89c6ce9b32\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mg62g"
Nov 22 08:05:25 crc kubenswrapper[4853]: I1122 08:05:25.977400 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/48806bf3-8709-441a-bf45-7a89c6ce9b32-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mg62g\" (UID: \"48806bf3-8709-441a-bf45-7a89c6ce9b32\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mg62g"
Nov 22 08:05:25 crc kubenswrapper[4853]: I1122 08:05:25.977471 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/48806bf3-8709-441a-bf45-7a89c6ce9b32-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mg62g\" (UID: \"48806bf3-8709-441a-bf45-7a89c6ce9b32\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mg62g"
Nov 22 08:05:25 crc kubenswrapper[4853]: I1122 08:05:25.977520 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48806bf3-8709-441a-bf45-7a89c6ce9b32-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mg62g\" (UID: \"48806bf3-8709-441a-bf45-7a89c6ce9b32\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mg62g"
Nov 22 08:05:25 crc kubenswrapper[4853]: I1122 08:05:25.977571 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/48806bf3-8709-441a-bf45-7a89c6ce9b32-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mg62g\" (UID: \"48806bf3-8709-441a-bf45-7a89c6ce9b32\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mg62g"
Nov 22 08:05:25 crc kubenswrapper[4853]: I1122 08:05:25.977617 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48806bf3-8709-441a-bf45-7a89c6ce9b32-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mg62g\" (UID: \"48806bf3-8709-441a-bf45-7a89c6ce9b32\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mg62g"
Nov 22 08:05:25 crc kubenswrapper[4853]: I1122 08:05:25.977637 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/48806bf3-8709-441a-bf45-7a89c6ce9b32-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mg62g\" (UID: \"48806bf3-8709-441a-bf45-7a89c6ce9b32\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mg62g"
Nov 22 08:05:25 crc kubenswrapper[4853]: I1122 08:05:25.977694 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48806bf3-8709-441a-bf45-7a89c6ce9b32-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mg62g\" (UID: \"48806bf3-8709-441a-bf45-7a89c6ce9b32\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mg62g"
Nov 22 08:05:25 crc kubenswrapper[4853]: I1122 08:05:25.977725 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/48806bf3-8709-441a-bf45-7a89c6ce9b32-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mg62g\" (UID: \"48806bf3-8709-441a-bf45-7a89c6ce9b32\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mg62g"
Nov 22 08:05:25 crc kubenswrapper[4853]: I1122 08:05:25.977970 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48806bf3-8709-441a-bf45-7a89c6ce9b32-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mg62g\" (UID:
\"48806bf3-8709-441a-bf45-7a89c6ce9b32\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mg62g" Nov 22 08:05:25 crc kubenswrapper[4853]: I1122 08:05:25.978362 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48806bf3-8709-441a-bf45-7a89c6ce9b32-telemetry-power-monitoring-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mg62g\" (UID: \"48806bf3-8709-441a-bf45-7a89c6ce9b32\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mg62g" Nov 22 08:05:25 crc kubenswrapper[4853]: I1122 08:05:25.978410 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48806bf3-8709-441a-bf45-7a89c6ce9b32-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mg62g\" (UID: \"48806bf3-8709-441a-bf45-7a89c6ce9b32\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mg62g" Nov 22 08:05:25 crc kubenswrapper[4853]: I1122 08:05:25.978432 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48806bf3-8709-441a-bf45-7a89c6ce9b32-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mg62g\" (UID: \"48806bf3-8709-441a-bf45-7a89c6ce9b32\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mg62g" Nov 22 08:05:25 crc kubenswrapper[4853]: I1122 08:05:25.978460 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48806bf3-8709-441a-bf45-7a89c6ce9b32-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mg62g\" (UID: \"48806bf3-8709-441a-bf45-7a89c6ce9b32\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mg62g" Nov 22 08:05:25 crc kubenswrapper[4853]: I1122 08:05:25.978488 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/48806bf3-8709-441a-bf45-7a89c6ce9b32-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mg62g\" (UID: \"48806bf3-8709-441a-bf45-7a89c6ce9b32\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mg62g" Nov 22 08:05:25 crc kubenswrapper[4853]: I1122 08:05:25.978618 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kgjpl\" (UniqueName: \"kubernetes.io/projected/48806bf3-8709-441a-bf45-7a89c6ce9b32-kube-api-access-kgjpl\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mg62g\" (UID: \"48806bf3-8709-441a-bf45-7a89c6ce9b32\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mg62g" Nov 22 08:05:25 crc kubenswrapper[4853]: I1122 08:05:25.978657 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/48806bf3-8709-441a-bf45-7a89c6ce9b32-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mg62g\" (UID: \"48806bf3-8709-441a-bf45-7a89c6ce9b32\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mg62g" Nov 22 08:05:25 crc kubenswrapper[4853]: I1122 08:05:25.982714 4853 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48806bf3-8709-441a-bf45-7a89c6ce9b32-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mg62g\" (UID: \"48806bf3-8709-441a-bf45-7a89c6ce9b32\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mg62g" Nov 22 08:05:25 crc kubenswrapper[4853]: I1122 08:05:25.982923 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/48806bf3-8709-441a-bf45-7a89c6ce9b32-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mg62g\" (UID: \"48806bf3-8709-441a-bf45-7a89c6ce9b32\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mg62g" Nov 22 08:05:25 crc kubenswrapper[4853]: I1122 08:05:25.982966 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/48806bf3-8709-441a-bf45-7a89c6ce9b32-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mg62g\" (UID: \"48806bf3-8709-441a-bf45-7a89c6ce9b32\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mg62g" Nov 22 08:05:25 crc kubenswrapper[4853]: I1122 08:05:25.982966 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48806bf3-8709-441a-bf45-7a89c6ce9b32-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mg62g\" (UID: \"48806bf3-8709-441a-bf45-7a89c6ce9b32\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mg62g" Nov 22 08:05:25 crc kubenswrapper[4853]: I1122 08:05:25.983592 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48806bf3-8709-441a-bf45-7a89c6ce9b32-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mg62g\" (UID: \"48806bf3-8709-441a-bf45-7a89c6ce9b32\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mg62g" Nov 22 08:05:25 crc kubenswrapper[4853]: I1122 08:05:25.983945 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/48806bf3-8709-441a-bf45-7a89c6ce9b32-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mg62g\" (UID: \"48806bf3-8709-441a-bf45-7a89c6ce9b32\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mg62g" Nov 22 08:05:25 crc kubenswrapper[4853]: I1122 08:05:25.984313 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48806bf3-8709-441a-bf45-7a89c6ce9b32-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mg62g\" (UID: \"48806bf3-8709-441a-bf45-7a89c6ce9b32\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mg62g" Nov 22 08:05:25 crc kubenswrapper[4853]: I1122 08:05:25.984492 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48806bf3-8709-441a-bf45-7a89c6ce9b32-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mg62g\" (UID: \"48806bf3-8709-441a-bf45-7a89c6ce9b32\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mg62g" Nov 22 08:05:25 crc kubenswrapper[4853]: I1122 08:05:25.984509 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/48806bf3-8709-441a-bf45-7a89c6ce9b32-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mg62g\" (UID: \"48806bf3-8709-441a-bf45-7a89c6ce9b32\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mg62g" Nov 22 08:05:25 crc kubenswrapper[4853]: I1122 08:05:25.985262 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48806bf3-8709-441a-bf45-7a89c6ce9b32-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mg62g\" (UID: \"48806bf3-8709-441a-bf45-7a89c6ce9b32\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mg62g" Nov 22 08:05:25 crc kubenswrapper[4853]: I1122 08:05:25.985519 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/48806bf3-8709-441a-bf45-7a89c6ce9b32-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mg62g\" (UID: \"48806bf3-8709-441a-bf45-7a89c6ce9b32\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mg62g" Nov 22 08:05:25 crc kubenswrapper[4853]: I1122 08:05:25.986161 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/48806bf3-8709-441a-bf45-7a89c6ce9b32-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mg62g\" (UID: \"48806bf3-8709-441a-bf45-7a89c6ce9b32\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mg62g" Nov 22 08:05:25 crc kubenswrapper[4853]: I1122 08:05:25.987319 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/48806bf3-8709-441a-bf45-7a89c6ce9b32-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mg62g\" (UID: \"48806bf3-8709-441a-bf45-7a89c6ce9b32\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mg62g" Nov 22 08:05:25 crc kubenswrapper[4853]: I1122 08:05:25.988083 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48806bf3-8709-441a-bf45-7a89c6ce9b32-telemetry-power-monitoring-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mg62g\" (UID: \"48806bf3-8709-441a-bf45-7a89c6ce9b32\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mg62g" Nov 22 08:05:25 crc kubenswrapper[4853]: I1122 08:05:25.990855 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48806bf3-8709-441a-bf45-7a89c6ce9b32-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mg62g\" (UID: \"48806bf3-8709-441a-bf45-7a89c6ce9b32\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mg62g" Nov 22 08:05:25 crc kubenswrapper[4853]: I1122 08:05:25.999039 4853 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-kgjpl\" (UniqueName: \"kubernetes.io/projected/48806bf3-8709-441a-bf45-7a89c6ce9b32-kube-api-access-kgjpl\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-mg62g\" (UID: \"48806bf3-8709-441a-bf45-7a89c6ce9b32\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mg62g" Nov 22 08:05:26 crc kubenswrapper[4853]: I1122 08:05:26.070374 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mg62g" Nov 22 08:05:26 crc kubenswrapper[4853]: I1122 08:05:26.651658 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mg62g"] Nov 22 08:05:27 crc kubenswrapper[4853]: I1122 08:05:27.670110 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mg62g" event={"ID":"48806bf3-8709-441a-bf45-7a89c6ce9b32","Type":"ContainerStarted","Data":"c53b0077b8618607bf1c3ad357fc6eca5d7f3a88e425d4aecd8e3592596cda83"} Nov 22 08:05:29 crc kubenswrapper[4853]: I1122 08:05:29.691972 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mg62g" event={"ID":"48806bf3-8709-441a-bf45-7a89c6ce9b32","Type":"ContainerStarted","Data":"826e1fe05d54dabd563c57cfda666e39baea14238f749b3d7d6b6b645effc44d"} Nov 22 08:05:29 crc kubenswrapper[4853]: I1122 08:05:29.723386 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mg62g" podStartSLOduration=2.136960767 podStartE2EDuration="4.72336346s" podCreationTimestamp="2025-11-22 08:05:25 +0000 UTC" firstStartedPulling="2025-11-22 08:05:26.657103039 +0000 UTC m=+3325.497725665" lastFinishedPulling="2025-11-22 08:05:29.243505732 +0000 UTC m=+3328.084128358" observedRunningTime="2025-11-22 08:05:29.711429637 +0000 UTC m=+3328.552052273" watchObservedRunningTime="2025-11-22 08:05:29.72336346 +0000 UTC m=+3328.563986086" Nov 22 08:05:30 crc kubenswrapper[4853]: I1122 08:05:30.750006 4853 scope.go:117] "RemoveContainer" containerID="3ec8165bab4fb75b436ac780db4c9acf7acf3a80877e56710bdae29a7122f42c" Nov 22 08:05:30 crc kubenswrapper[4853]: E1122 08:05:30.750745 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" Nov 22 08:05:43 crc kubenswrapper[4853]: I1122 08:05:43.748029 4853 scope.go:117] "RemoveContainer" containerID="3ec8165bab4fb75b436ac780db4c9acf7acf3a80877e56710bdae29a7122f42c" Nov 22 08:05:43 crc kubenswrapper[4853]: E1122 08:05:43.748880 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" Nov 22 08:05:57 crc kubenswrapper[4853]: I1122 08:05:57.748580 4853 scope.go:117] "RemoveContainer" 
containerID="3ec8165bab4fb75b436ac780db4c9acf7acf3a80877e56710bdae29a7122f42c" Nov 22 08:05:57 crc kubenswrapper[4853]: E1122 08:05:57.749529 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" Nov 22 08:06:10 crc kubenswrapper[4853]: I1122 08:06:10.748068 4853 scope.go:117] "RemoveContainer" containerID="3ec8165bab4fb75b436ac780db4c9acf7acf3a80877e56710bdae29a7122f42c" Nov 22 08:06:10 crc kubenswrapper[4853]: E1122 08:06:10.748962 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" Nov 22 08:06:14 crc kubenswrapper[4853]: I1122 08:06:14.211393 4853 generic.go:334] "Generic (PLEG): container finished" podID="48806bf3-8709-441a-bf45-7a89c6ce9b32" containerID="826e1fe05d54dabd563c57cfda666e39baea14238f749b3d7d6b6b645effc44d" exitCode=0 Nov 22 08:06:14 crc kubenswrapper[4853]: I1122 08:06:14.211489 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mg62g" event={"ID":"48806bf3-8709-441a-bf45-7a89c6ce9b32","Type":"ContainerDied","Data":"826e1fe05d54dabd563c57cfda666e39baea14238f749b3d7d6b6b645effc44d"} Nov 22 08:06:15 crc kubenswrapper[4853]: I1122 08:06:15.719187 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mg62g" Nov 22 08:06:15 crc kubenswrapper[4853]: I1122 08:06:15.830709 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48806bf3-8709-441a-bf45-7a89c6ce9b32-nova-combined-ca-bundle\") pod \"48806bf3-8709-441a-bf45-7a89c6ce9b32\" (UID: \"48806bf3-8709-441a-bf45-7a89c6ce9b32\") " Nov 22 08:06:15 crc kubenswrapper[4853]: I1122 08:06:15.830913 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/48806bf3-8709-441a-bf45-7a89c6ce9b32-openstack-edpm-ipam-ovn-default-certs-0\") pod \"48806bf3-8709-441a-bf45-7a89c6ce9b32\" (UID: \"48806bf3-8709-441a-bf45-7a89c6ce9b32\") " Nov 22 08:06:15 crc kubenswrapper[4853]: I1122 08:06:15.831008 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48806bf3-8709-441a-bf45-7a89c6ce9b32-telemetry-combined-ca-bundle\") pod \"48806bf3-8709-441a-bf45-7a89c6ce9b32\" (UID: \"48806bf3-8709-441a-bf45-7a89c6ce9b32\") " Nov 22 08:06:15 crc kubenswrapper[4853]: I1122 08:06:15.831168 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48806bf3-8709-441a-bf45-7a89c6ce9b32-libvirt-combined-ca-bundle\") pod \"48806bf3-8709-441a-bf45-7a89c6ce9b32\" (UID: \"48806bf3-8709-441a-bf45-7a89c6ce9b32\") " Nov 22 08:06:15 crc kubenswrapper[4853]: I1122 08:06:15.831219 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48806bf3-8709-441a-bf45-7a89c6ce9b32-telemetry-power-monitoring-combined-ca-bundle\") pod \"48806bf3-8709-441a-bf45-7a89c6ce9b32\" (UID: \"48806bf3-8709-441a-bf45-7a89c6ce9b32\") " Nov 22 08:06:15 crc kubenswrapper[4853]: I1122 08:06:15.831244 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/48806bf3-8709-441a-bf45-7a89c6ce9b32-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") pod \"48806bf3-8709-441a-bf45-7a89c6ce9b32\" (UID: \"48806bf3-8709-441a-bf45-7a89c6ce9b32\") " Nov 22 08:06:15 crc kubenswrapper[4853]: I1122 08:06:15.831270 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kgjpl\" (UniqueName: \"kubernetes.io/projected/48806bf3-8709-441a-bf45-7a89c6ce9b32-kube-api-access-kgjpl\") pod \"48806bf3-8709-441a-bf45-7a89c6ce9b32\" (UID: \"48806bf3-8709-441a-bf45-7a89c6ce9b32\") " Nov 22 08:06:15 crc kubenswrapper[4853]: I1122 08:06:15.831310 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/48806bf3-8709-441a-bf45-7a89c6ce9b32-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"48806bf3-8709-441a-bf45-7a89c6ce9b32\" (UID: \"48806bf3-8709-441a-bf45-7a89c6ce9b32\") " Nov 22 08:06:15 crc kubenswrapper[4853]: I1122 08:06:15.831333 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/48806bf3-8709-441a-bf45-7a89c6ce9b32-neutron-metadata-combined-ca-bundle\") pod \"48806bf3-8709-441a-bf45-7a89c6ce9b32\" (UID: \"48806bf3-8709-441a-bf45-7a89c6ce9b32\") " Nov 22 08:06:15 crc kubenswrapper[4853]: I1122 08:06:15.831359 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/48806bf3-8709-441a-bf45-7a89c6ce9b32-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"48806bf3-8709-441a-bf45-7a89c6ce9b32\" (UID: \"48806bf3-8709-441a-bf45-7a89c6ce9b32\") " Nov 22 08:06:15 crc kubenswrapper[4853]: I1122 08:06:15.831400 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48806bf3-8709-441a-bf45-7a89c6ce9b32-repo-setup-combined-ca-bundle\") pod \"48806bf3-8709-441a-bf45-7a89c6ce9b32\" (UID: \"48806bf3-8709-441a-bf45-7a89c6ce9b32\") " Nov 22 08:06:15 crc kubenswrapper[4853]: I1122 08:06:15.831455 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/48806bf3-8709-441a-bf45-7a89c6ce9b32-ssh-key\") pod \"48806bf3-8709-441a-bf45-7a89c6ce9b32\" (UID: \"48806bf3-8709-441a-bf45-7a89c6ce9b32\") " Nov 22 08:06:15 crc kubenswrapper[4853]: I1122 08:06:15.831525 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/48806bf3-8709-441a-bf45-7a89c6ce9b32-inventory\") pod \"48806bf3-8709-441a-bf45-7a89c6ce9b32\" (UID: \"48806bf3-8709-441a-bf45-7a89c6ce9b32\") " Nov 22 08:06:15 crc kubenswrapper[4853]: I1122 08:06:15.831583 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/48806bf3-8709-441a-bf45-7a89c6ce9b32-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"48806bf3-8709-441a-bf45-7a89c6ce9b32\" (UID: \"48806bf3-8709-441a-bf45-7a89c6ce9b32\") " Nov 22 08:06:15 crc kubenswrapper[4853]: I1122 08:06:15.831631 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48806bf3-8709-441a-bf45-7a89c6ce9b32-ovn-combined-ca-bundle\") pod \"48806bf3-8709-441a-bf45-7a89c6ce9b32\" (UID: \"48806bf3-8709-441a-bf45-7a89c6ce9b32\") " Nov 22 08:06:15 crc kubenswrapper[4853]: I1122 08:06:15.831677 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48806bf3-8709-441a-bf45-7a89c6ce9b32-bootstrap-combined-ca-bundle\") pod \"48806bf3-8709-441a-bf45-7a89c6ce9b32\" (UID: \"48806bf3-8709-441a-bf45-7a89c6ce9b32\") " Nov 22 08:06:15 crc kubenswrapper[4853]: I1122 08:06:15.838572 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48806bf3-8709-441a-bf45-7a89c6ce9b32-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "48806bf3-8709-441a-bf45-7a89c6ce9b32" (UID: "48806bf3-8709-441a-bf45-7a89c6ce9b32"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:06:15 crc kubenswrapper[4853]: I1122 08:06:15.839072 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48806bf3-8709-441a-bf45-7a89c6ce9b32-telemetry-power-monitoring-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-power-monitoring-combined-ca-bundle") pod "48806bf3-8709-441a-bf45-7a89c6ce9b32" (UID: "48806bf3-8709-441a-bf45-7a89c6ce9b32"). InnerVolumeSpecName "telemetry-power-monitoring-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:06:15 crc kubenswrapper[4853]: I1122 08:06:15.839393 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48806bf3-8709-441a-bf45-7a89c6ce9b32-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "48806bf3-8709-441a-bf45-7a89c6ce9b32" (UID: "48806bf3-8709-441a-bf45-7a89c6ce9b32"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 08:06:15 crc kubenswrapper[4853]: I1122 08:06:15.839691 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48806bf3-8709-441a-bf45-7a89c6ce9b32-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "48806bf3-8709-441a-bf45-7a89c6ce9b32" (UID: "48806bf3-8709-441a-bf45-7a89c6ce9b32"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:06:15 crc kubenswrapper[4853]: I1122 08:06:15.839964 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48806bf3-8709-441a-bf45-7a89c6ce9b32-kube-api-access-kgjpl" (OuterVolumeSpecName: "kube-api-access-kgjpl") pod "48806bf3-8709-441a-bf45-7a89c6ce9b32" (UID: "48806bf3-8709-441a-bf45-7a89c6ce9b32"). InnerVolumeSpecName "kube-api-access-kgjpl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 08:06:15 crc kubenswrapper[4853]: I1122 08:06:15.841347 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48806bf3-8709-441a-bf45-7a89c6ce9b32-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "48806bf3-8709-441a-bf45-7a89c6ce9b32" (UID: "48806bf3-8709-441a-bf45-7a89c6ce9b32"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:06:15 crc kubenswrapper[4853]: I1122 08:06:15.841913 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48806bf3-8709-441a-bf45-7a89c6ce9b32-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "48806bf3-8709-441a-bf45-7a89c6ce9b32" (UID: "48806bf3-8709-441a-bf45-7a89c6ce9b32"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:06:15 crc kubenswrapper[4853]: I1122 08:06:15.842069 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48806bf3-8709-441a-bf45-7a89c6ce9b32-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "48806bf3-8709-441a-bf45-7a89c6ce9b32" (UID: "48806bf3-8709-441a-bf45-7a89c6ce9b32"). InnerVolumeSpecName "libvirt-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:06:15 crc kubenswrapper[4853]: I1122 08:06:15.842513 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48806bf3-8709-441a-bf45-7a89c6ce9b32-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "48806bf3-8709-441a-bf45-7a89c6ce9b32" (UID: "48806bf3-8709-441a-bf45-7a89c6ce9b32"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:06:15 crc kubenswrapper[4853]: I1122 08:06:15.842553 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48806bf3-8709-441a-bf45-7a89c6ce9b32-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "48806bf3-8709-441a-bf45-7a89c6ce9b32" (UID: "48806bf3-8709-441a-bf45-7a89c6ce9b32"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 08:06:15 crc kubenswrapper[4853]: I1122 08:06:15.843500 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48806bf3-8709-441a-bf45-7a89c6ce9b32-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "48806bf3-8709-441a-bf45-7a89c6ce9b32" (UID: "48806bf3-8709-441a-bf45-7a89c6ce9b32"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:06:15 crc kubenswrapper[4853]: I1122 08:06:15.844373 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48806bf3-8709-441a-bf45-7a89c6ce9b32-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0") pod "48806bf3-8709-441a-bf45-7a89c6ce9b32" (UID: "48806bf3-8709-441a-bf45-7a89c6ce9b32"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 08:06:15 crc kubenswrapper[4853]: I1122 08:06:15.844953 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48806bf3-8709-441a-bf45-7a89c6ce9b32-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "48806bf3-8709-441a-bf45-7a89c6ce9b32" (UID: "48806bf3-8709-441a-bf45-7a89c6ce9b32"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 08:06:15 crc kubenswrapper[4853]: I1122 08:06:15.845683 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48806bf3-8709-441a-bf45-7a89c6ce9b32-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "48806bf3-8709-441a-bf45-7a89c6ce9b32" (UID: "48806bf3-8709-441a-bf45-7a89c6ce9b32"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 08:06:15 crc kubenswrapper[4853]: I1122 08:06:15.880682 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48806bf3-8709-441a-bf45-7a89c6ce9b32-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "48806bf3-8709-441a-bf45-7a89c6ce9b32" (UID: "48806bf3-8709-441a-bf45-7a89c6ce9b32"). 
InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:06:15 crc kubenswrapper[4853]: I1122 08:06:15.881999 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48806bf3-8709-441a-bf45-7a89c6ce9b32-inventory" (OuterVolumeSpecName: "inventory") pod "48806bf3-8709-441a-bf45-7a89c6ce9b32" (UID: "48806bf3-8709-441a-bf45-7a89c6ce9b32"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:06:15 crc kubenswrapper[4853]: I1122 08:06:15.936066 4853 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/48806bf3-8709-441a-bf45-7a89c6ce9b32-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Nov 22 08:06:15 crc kubenswrapper[4853]: I1122 08:06:15.936987 4853 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48806bf3-8709-441a-bf45-7a89c6ce9b32-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 08:06:15 crc kubenswrapper[4853]: I1122 08:06:15.937116 4853 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/48806bf3-8709-441a-bf45-7a89c6ce9b32-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Nov 22 08:06:15 crc kubenswrapper[4853]: I1122 08:06:15.937141 4853 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48806bf3-8709-441a-bf45-7a89c6ce9b32-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 08:06:15 crc kubenswrapper[4853]: I1122 08:06:15.937161 4853 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/48806bf3-8709-441a-bf45-7a89c6ce9b32-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 22 08:06:15 crc kubenswrapper[4853]: I1122 08:06:15.937183 4853 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/48806bf3-8709-441a-bf45-7a89c6ce9b32-inventory\") on node \"crc\" DevicePath \"\"" Nov 22 08:06:15 crc kubenswrapper[4853]: I1122 08:06:15.937198 4853 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/48806bf3-8709-441a-bf45-7a89c6ce9b32-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Nov 22 08:06:15 crc kubenswrapper[4853]: I1122 08:06:15.937219 4853 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48806bf3-8709-441a-bf45-7a89c6ce9b32-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 08:06:15 crc kubenswrapper[4853]: I1122 08:06:15.937234 4853 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48806bf3-8709-441a-bf45-7a89c6ce9b32-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 08:06:15 crc kubenswrapper[4853]: I1122 08:06:15.937248 4853 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48806bf3-8709-441a-bf45-7a89c6ce9b32-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 08:06:15 crc kubenswrapper[4853]: I1122 08:06:15.937268 4853 
reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/48806bf3-8709-441a-bf45-7a89c6ce9b32-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Nov 22 08:06:15 crc kubenswrapper[4853]: I1122 08:06:15.937282 4853 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48806bf3-8709-441a-bf45-7a89c6ce9b32-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 08:06:15 crc kubenswrapper[4853]: I1122 08:06:15.937293 4853 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48806bf3-8709-441a-bf45-7a89c6ce9b32-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 08:06:15 crc kubenswrapper[4853]: I1122 08:06:15.937307 4853 reconciler_common.go:293] "Volume detached for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48806bf3-8709-441a-bf45-7a89c6ce9b32-telemetry-power-monitoring-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 08:06:15 crc kubenswrapper[4853]: I1122 08:06:15.937324 4853 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/48806bf3-8709-441a-bf45-7a89c6ce9b32-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") on node \"crc\" DevicePath \"\"" Nov 22 08:06:15 crc kubenswrapper[4853]: I1122 08:06:15.937425 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kgjpl\" (UniqueName: \"kubernetes.io/projected/48806bf3-8709-441a-bf45-7a89c6ce9b32-kube-api-access-kgjpl\") on node \"crc\" DevicePath \"\"" Nov 22 08:06:16 crc kubenswrapper[4853]: I1122 08:06:16.234022 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mg62g" event={"ID":"48806bf3-8709-441a-bf45-7a89c6ce9b32","Type":"ContainerDied","Data":"c53b0077b8618607bf1c3ad357fc6eca5d7f3a88e425d4aecd8e3592596cda83"} Nov 22 08:06:16 crc kubenswrapper[4853]: I1122 08:06:16.234070 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c53b0077b8618607bf1c3ad357fc6eca5d7f3a88e425d4aecd8e3592596cda83" Nov 22 08:06:16 crc kubenswrapper[4853]: I1122 08:06:16.234091 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-mg62g" Nov 22 08:06:16 crc kubenswrapper[4853]: I1122 08:06:16.337067 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-8hs7r"] Nov 22 08:06:16 crc kubenswrapper[4853]: E1122 08:06:16.337687 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48806bf3-8709-441a-bf45-7a89c6ce9b32" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Nov 22 08:06:16 crc kubenswrapper[4853]: I1122 08:06:16.337710 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="48806bf3-8709-441a-bf45-7a89c6ce9b32" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Nov 22 08:06:16 crc kubenswrapper[4853]: I1122 08:06:16.337971 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="48806bf3-8709-441a-bf45-7a89c6ce9b32" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Nov 22 08:06:16 crc kubenswrapper[4853]: I1122 08:06:16.338907 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8hs7r" Nov 22 08:06:16 crc kubenswrapper[4853]: I1122 08:06:16.341698 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 22 08:06:16 crc kubenswrapper[4853]: I1122 08:06:16.341996 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 22 08:06:16 crc kubenswrapper[4853]: I1122 08:06:16.342017 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-km5tw" Nov 22 08:06:16 crc kubenswrapper[4853]: I1122 08:06:16.342027 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Nov 22 08:06:16 crc kubenswrapper[4853]: I1122 08:06:16.344489 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f7zrx\" (UniqueName: \"kubernetes.io/projected/27dca404-f54c-4f96-9ae3-e517c2de3033-kube-api-access-f7zrx\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-8hs7r\" (UID: \"27dca404-f54c-4f96-9ae3-e517c2de3033\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8hs7r" Nov 22 08:06:16 crc kubenswrapper[4853]: I1122 08:06:16.344531 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/27dca404-f54c-4f96-9ae3-e517c2de3033-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-8hs7r\" (UID: \"27dca404-f54c-4f96-9ae3-e517c2de3033\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8hs7r" Nov 22 08:06:16 crc kubenswrapper[4853]: I1122 08:06:16.344833 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27dca404-f54c-4f96-9ae3-e517c2de3033-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-8hs7r\" (UID: \"27dca404-f54c-4f96-9ae3-e517c2de3033\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8hs7r" Nov 22 08:06:16 crc kubenswrapper[4853]: I1122 08:06:16.345306 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/27dca404-f54c-4f96-9ae3-e517c2de3033-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-8hs7r\" (UID: 
\"27dca404-f54c-4f96-9ae3-e517c2de3033\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8hs7r" Nov 22 08:06:16 crc kubenswrapper[4853]: I1122 08:06:16.345483 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/27dca404-f54c-4f96-9ae3-e517c2de3033-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-8hs7r\" (UID: \"27dca404-f54c-4f96-9ae3-e517c2de3033\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8hs7r" Nov 22 08:06:16 crc kubenswrapper[4853]: I1122 08:06:16.348952 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-8hs7r"] Nov 22 08:06:16 crc kubenswrapper[4853]: I1122 08:06:16.351304 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 22 08:06:16 crc kubenswrapper[4853]: I1122 08:06:16.448197 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27dca404-f54c-4f96-9ae3-e517c2de3033-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-8hs7r\" (UID: \"27dca404-f54c-4f96-9ae3-e517c2de3033\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8hs7r" Nov 22 08:06:16 crc kubenswrapper[4853]: I1122 08:06:16.448365 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/27dca404-f54c-4f96-9ae3-e517c2de3033-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-8hs7r\" (UID: \"27dca404-f54c-4f96-9ae3-e517c2de3033\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8hs7r" Nov 22 08:06:16 crc kubenswrapper[4853]: I1122 08:06:16.448399 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/27dca404-f54c-4f96-9ae3-e517c2de3033-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-8hs7r\" (UID: \"27dca404-f54c-4f96-9ae3-e517c2de3033\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8hs7r" Nov 22 08:06:16 crc kubenswrapper[4853]: I1122 08:06:16.448477 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f7zrx\" (UniqueName: \"kubernetes.io/projected/27dca404-f54c-4f96-9ae3-e517c2de3033-kube-api-access-f7zrx\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-8hs7r\" (UID: \"27dca404-f54c-4f96-9ae3-e517c2de3033\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8hs7r" Nov 22 08:06:16 crc kubenswrapper[4853]: I1122 08:06:16.448501 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/27dca404-f54c-4f96-9ae3-e517c2de3033-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-8hs7r\" (UID: \"27dca404-f54c-4f96-9ae3-e517c2de3033\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8hs7r" Nov 22 08:06:16 crc kubenswrapper[4853]: I1122 08:06:16.450268 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/27dca404-f54c-4f96-9ae3-e517c2de3033-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-8hs7r\" (UID: \"27dca404-f54c-4f96-9ae3-e517c2de3033\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8hs7r" Nov 22 08:06:16 crc kubenswrapper[4853]: I1122 08:06:16.453062 4853 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27dca404-f54c-4f96-9ae3-e517c2de3033-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-8hs7r\" (UID: \"27dca404-f54c-4f96-9ae3-e517c2de3033\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8hs7r" Nov 22 08:06:16 crc kubenswrapper[4853]: I1122 08:06:16.464279 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/27dca404-f54c-4f96-9ae3-e517c2de3033-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-8hs7r\" (UID: \"27dca404-f54c-4f96-9ae3-e517c2de3033\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8hs7r" Nov 22 08:06:16 crc kubenswrapper[4853]: I1122 08:06:16.464352 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/27dca404-f54c-4f96-9ae3-e517c2de3033-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-8hs7r\" (UID: \"27dca404-f54c-4f96-9ae3-e517c2de3033\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8hs7r" Nov 22 08:06:16 crc kubenswrapper[4853]: I1122 08:06:16.470184 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f7zrx\" (UniqueName: \"kubernetes.io/projected/27dca404-f54c-4f96-9ae3-e517c2de3033-kube-api-access-f7zrx\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-8hs7r\" (UID: \"27dca404-f54c-4f96-9ae3-e517c2de3033\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8hs7r" Nov 22 08:06:16 crc kubenswrapper[4853]: I1122 08:06:16.662250 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8hs7r" Nov 22 08:06:17 crc kubenswrapper[4853]: I1122 08:06:17.213133 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-8hs7r"] Nov 22 08:06:17 crc kubenswrapper[4853]: I1122 08:06:17.246842 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8hs7r" event={"ID":"27dca404-f54c-4f96-9ae3-e517c2de3033","Type":"ContainerStarted","Data":"59808f26bab0a7c8b5b9a95398c8ae0cb72ad24a39050be1c9d8425b131e1faf"} Nov 22 08:06:19 crc kubenswrapper[4853]: I1122 08:06:19.272883 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8hs7r" event={"ID":"27dca404-f54c-4f96-9ae3-e517c2de3033","Type":"ContainerStarted","Data":"0a76a29e4f36405629ca52c1fcaf3ecd1da3f4757c419ea5c4da15c476d9c7e8"} Nov 22 08:06:19 crc kubenswrapper[4853]: I1122 08:06:19.296916 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8hs7r" podStartSLOduration=2.315401104 podStartE2EDuration="3.296874517s" podCreationTimestamp="2025-11-22 08:06:16 +0000 UTC" firstStartedPulling="2025-11-22 08:06:17.218229525 +0000 UTC m=+3376.058852151" lastFinishedPulling="2025-11-22 08:06:18.199702938 +0000 UTC m=+3377.040325564" observedRunningTime="2025-11-22 08:06:19.288436219 +0000 UTC m=+3378.129058855" watchObservedRunningTime="2025-11-22 08:06:19.296874517 +0000 UTC m=+3378.137497153" Nov 22 08:06:24 crc kubenswrapper[4853]: I1122 08:06:24.748273 4853 scope.go:117] "RemoveContainer" containerID="3ec8165bab4fb75b436ac780db4c9acf7acf3a80877e56710bdae29a7122f42c" Nov 22 08:06:24 crc kubenswrapper[4853]: E1122 08:06:24.749155 4853 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" Nov 22 08:06:38 crc kubenswrapper[4853]: I1122 08:06:38.748429 4853 scope.go:117] "RemoveContainer" containerID="3ec8165bab4fb75b436ac780db4c9acf7acf3a80877e56710bdae29a7122f42c" Nov 22 08:06:38 crc kubenswrapper[4853]: E1122 08:06:38.749221 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" Nov 22 08:06:49 crc kubenswrapper[4853]: I1122 08:06:49.748530 4853 scope.go:117] "RemoveContainer" containerID="3ec8165bab4fb75b436ac780db4c9acf7acf3a80877e56710bdae29a7122f42c" Nov 22 08:06:49 crc kubenswrapper[4853]: E1122 08:06:49.749577 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" Nov 22 08:07:02 crc kubenswrapper[4853]: I1122 08:07:02.747430 4853 scope.go:117] "RemoveContainer" containerID="3ec8165bab4fb75b436ac780db4c9acf7acf3a80877e56710bdae29a7122f42c" Nov 22 08:07:02 crc kubenswrapper[4853]: E1122 08:07:02.748290 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" Nov 22 08:07:05 crc kubenswrapper[4853]: I1122 08:07:05.267141 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-r7bgk"] Nov 22 08:07:05 crc kubenswrapper[4853]: I1122 08:07:05.270392 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-r7bgk" Nov 22 08:07:05 crc kubenswrapper[4853]: I1122 08:07:05.288659 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-r7bgk"] Nov 22 08:07:05 crc kubenswrapper[4853]: I1122 08:07:05.410615 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/49f2dc6c-a0a4-4f33-845b-d149d0fae0a8-utilities\") pod \"redhat-marketplace-r7bgk\" (UID: \"49f2dc6c-a0a4-4f33-845b-d149d0fae0a8\") " pod="openshift-marketplace/redhat-marketplace-r7bgk" Nov 22 08:07:05 crc kubenswrapper[4853]: I1122 08:07:05.410729 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/49f2dc6c-a0a4-4f33-845b-d149d0fae0a8-catalog-content\") pod \"redhat-marketplace-r7bgk\" (UID: \"49f2dc6c-a0a4-4f33-845b-d149d0fae0a8\") " pod="openshift-marketplace/redhat-marketplace-r7bgk" Nov 22 08:07:05 crc kubenswrapper[4853]: I1122 08:07:05.410899 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4nkmv\" (UniqueName: \"kubernetes.io/projected/49f2dc6c-a0a4-4f33-845b-d149d0fae0a8-kube-api-access-4nkmv\") pod \"redhat-marketplace-r7bgk\" (UID: \"49f2dc6c-a0a4-4f33-845b-d149d0fae0a8\") " pod="openshift-marketplace/redhat-marketplace-r7bgk" Nov 22 08:07:05 crc kubenswrapper[4853]: I1122 08:07:05.514357 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/49f2dc6c-a0a4-4f33-845b-d149d0fae0a8-catalog-content\") pod \"redhat-marketplace-r7bgk\" (UID: \"49f2dc6c-a0a4-4f33-845b-d149d0fae0a8\") " pod="openshift-marketplace/redhat-marketplace-r7bgk" Nov 22 08:07:05 crc kubenswrapper[4853]: I1122 08:07:05.514461 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4nkmv\" (UniqueName: \"kubernetes.io/projected/49f2dc6c-a0a4-4f33-845b-d149d0fae0a8-kube-api-access-4nkmv\") pod \"redhat-marketplace-r7bgk\" (UID: \"49f2dc6c-a0a4-4f33-845b-d149d0fae0a8\") " pod="openshift-marketplace/redhat-marketplace-r7bgk" Nov 22 08:07:05 crc kubenswrapper[4853]: I1122 08:07:05.514666 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/49f2dc6c-a0a4-4f33-845b-d149d0fae0a8-utilities\") pod \"redhat-marketplace-r7bgk\" (UID: \"49f2dc6c-a0a4-4f33-845b-d149d0fae0a8\") " pod="openshift-marketplace/redhat-marketplace-r7bgk" Nov 22 08:07:05 crc kubenswrapper[4853]: I1122 08:07:05.514922 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/49f2dc6c-a0a4-4f33-845b-d149d0fae0a8-catalog-content\") pod \"redhat-marketplace-r7bgk\" (UID: \"49f2dc6c-a0a4-4f33-845b-d149d0fae0a8\") " pod="openshift-marketplace/redhat-marketplace-r7bgk" Nov 22 08:07:05 crc kubenswrapper[4853]: I1122 08:07:05.515194 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/49f2dc6c-a0a4-4f33-845b-d149d0fae0a8-utilities\") pod \"redhat-marketplace-r7bgk\" (UID: \"49f2dc6c-a0a4-4f33-845b-d149d0fae0a8\") " pod="openshift-marketplace/redhat-marketplace-r7bgk" Nov 22 08:07:05 crc kubenswrapper[4853]: I1122 08:07:05.536502 4853 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-4nkmv\" (UniqueName: \"kubernetes.io/projected/49f2dc6c-a0a4-4f33-845b-d149d0fae0a8-kube-api-access-4nkmv\") pod \"redhat-marketplace-r7bgk\" (UID: \"49f2dc6c-a0a4-4f33-845b-d149d0fae0a8\") " pod="openshift-marketplace/redhat-marketplace-r7bgk" Nov 22 08:07:05 crc kubenswrapper[4853]: I1122 08:07:05.592564 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-r7bgk" Nov 22 08:07:06 crc kubenswrapper[4853]: I1122 08:07:06.227553 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-r7bgk"] Nov 22 08:07:06 crc kubenswrapper[4853]: W1122 08:07:06.257406 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod49f2dc6c_a0a4_4f33_845b_d149d0fae0a8.slice/crio-a5c29dc47223ccd5874095df6180b4ffe5f94936358b08449fd2e0f2dab35a19 WatchSource:0}: Error finding container a5c29dc47223ccd5874095df6180b4ffe5f94936358b08449fd2e0f2dab35a19: Status 404 returned error can't find the container with id a5c29dc47223ccd5874095df6180b4ffe5f94936358b08449fd2e0f2dab35a19 Nov 22 08:07:06 crc kubenswrapper[4853]: I1122 08:07:06.811492 4853 generic.go:334] "Generic (PLEG): container finished" podID="49f2dc6c-a0a4-4f33-845b-d149d0fae0a8" containerID="69e235ab5f49e5ffd28692909c020f041df0ebf8bad618034bf8a5ae7ab9dda9" exitCode=0 Nov 22 08:07:06 crc kubenswrapper[4853]: I1122 08:07:06.811549 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r7bgk" event={"ID":"49f2dc6c-a0a4-4f33-845b-d149d0fae0a8","Type":"ContainerDied","Data":"69e235ab5f49e5ffd28692909c020f041df0ebf8bad618034bf8a5ae7ab9dda9"} Nov 22 08:07:06 crc kubenswrapper[4853]: I1122 08:07:06.811580 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r7bgk" event={"ID":"49f2dc6c-a0a4-4f33-845b-d149d0fae0a8","Type":"ContainerStarted","Data":"a5c29dc47223ccd5874095df6180b4ffe5f94936358b08449fd2e0f2dab35a19"} Nov 22 08:07:06 crc kubenswrapper[4853]: I1122 08:07:06.814315 4853 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 22 08:07:08 crc kubenswrapper[4853]: I1122 08:07:08.833724 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r7bgk" event={"ID":"49f2dc6c-a0a4-4f33-845b-d149d0fae0a8","Type":"ContainerStarted","Data":"5b98cf6569db086b8c1dd2cda068b7048e77055d2c9c4af72d752a969ca1e831"} Nov 22 08:07:09 crc kubenswrapper[4853]: I1122 08:07:09.848853 4853 generic.go:334] "Generic (PLEG): container finished" podID="49f2dc6c-a0a4-4f33-845b-d149d0fae0a8" containerID="5b98cf6569db086b8c1dd2cda068b7048e77055d2c9c4af72d752a969ca1e831" exitCode=0 Nov 22 08:07:09 crc kubenswrapper[4853]: I1122 08:07:09.848966 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r7bgk" event={"ID":"49f2dc6c-a0a4-4f33-845b-d149d0fae0a8","Type":"ContainerDied","Data":"5b98cf6569db086b8c1dd2cda068b7048e77055d2c9c4af72d752a969ca1e831"} Nov 22 08:07:10 crc kubenswrapper[4853]: I1122 08:07:10.875684 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r7bgk" event={"ID":"49f2dc6c-a0a4-4f33-845b-d149d0fae0a8","Type":"ContainerStarted","Data":"1d2c24faa407f39394800604c15f61091c9ee0d4b06e5d3bc98c0d2fbdd585d9"} Nov 22 08:07:10 crc kubenswrapper[4853]: I1122 08:07:10.896025 4853 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-r7bgk" podStartSLOduration=2.418966471 podStartE2EDuration="5.896003514s" podCreationTimestamp="2025-11-22 08:07:05 +0000 UTC" firstStartedPulling="2025-11-22 08:07:06.814049356 +0000 UTC m=+3425.654672002" lastFinishedPulling="2025-11-22 08:07:10.291086419 +0000 UTC m=+3429.131709045" observedRunningTime="2025-11-22 08:07:10.891668358 +0000 UTC m=+3429.732290984" watchObservedRunningTime="2025-11-22 08:07:10.896003514 +0000 UTC m=+3429.736626140" Nov 22 08:07:15 crc kubenswrapper[4853]: I1122 08:07:15.593419 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-r7bgk" Nov 22 08:07:15 crc kubenswrapper[4853]: I1122 08:07:15.593829 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-r7bgk" Nov 22 08:07:15 crc kubenswrapper[4853]: I1122 08:07:15.643818 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-r7bgk" Nov 22 08:07:15 crc kubenswrapper[4853]: I1122 08:07:15.979640 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-r7bgk" Nov 22 08:07:16 crc kubenswrapper[4853]: I1122 08:07:16.032225 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-r7bgk"] Nov 22 08:07:16 crc kubenswrapper[4853]: I1122 08:07:16.747974 4853 scope.go:117] "RemoveContainer" containerID="3ec8165bab4fb75b436ac780db4c9acf7acf3a80877e56710bdae29a7122f42c" Nov 22 08:07:16 crc kubenswrapper[4853]: E1122 08:07:16.748408 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" Nov 22 08:07:17 crc kubenswrapper[4853]: I1122 08:07:17.955937 4853 generic.go:334] "Generic (PLEG): container finished" podID="27dca404-f54c-4f96-9ae3-e517c2de3033" containerID="0a76a29e4f36405629ca52c1fcaf3ecd1da3f4757c419ea5c4da15c476d9c7e8" exitCode=0 Nov 22 08:07:17 crc kubenswrapper[4853]: I1122 08:07:17.956021 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8hs7r" event={"ID":"27dca404-f54c-4f96-9ae3-e517c2de3033","Type":"ContainerDied","Data":"0a76a29e4f36405629ca52c1fcaf3ecd1da3f4757c419ea5c4da15c476d9c7e8"} Nov 22 08:07:17 crc kubenswrapper[4853]: I1122 08:07:17.956676 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-r7bgk" podUID="49f2dc6c-a0a4-4f33-845b-d149d0fae0a8" containerName="registry-server" containerID="cri-o://1d2c24faa407f39394800604c15f61091c9ee0d4b06e5d3bc98c0d2fbdd585d9" gracePeriod=2 Nov 22 08:07:18 crc kubenswrapper[4853]: I1122 08:07:18.484342 4853 util.go:48] "No ready sandbox for pod can be found. 
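The pod_startup_latency_tracker entry above carries enough data to check its own arithmetic: podStartE2EDuration is the observed running time minus podCreationTimestamp, and podStartSLOduration additionally subtracts the image-pull window (lastFinishedPulling minus firstStartedPulling). A minimal Go sketch using the timestamps reported for redhat-marketplace-r7bgk; this reconstructs the reported numbers and is not kubelet's actual tracker code:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	parse := func(s string) time.Time {
		t, err := time.Parse("2006-01-02 15:04:05.999999999", s)
		if err != nil {
			panic(err)
		}
		return t
	}
	// Values copied from the "Observed pod startup duration" entry above.
	created := parse("2025-11-22 08:07:05")
	firstPull := parse("2025-11-22 08:07:06.814049356")
	lastPull := parse("2025-11-22 08:07:10.291086419")
	running := parse("2025-11-22 08:07:10.896003514") // watchObservedRunningTime

	e2e := running.Sub(created)          // podStartE2EDuration
	slo := e2e - lastPull.Sub(firstPull) // image-pull window excluded
	fmt.Println(e2e) // 5.896003514s, matching podStartE2EDuration
	// Prints ~2.418966451s; kubelet subtracts using the monotonic (m=+...)
	// readings, hence the last digits differ slightly from
	// podStartSLOduration=2.418966471 in the log.
	fmt.Println(slo)
}
```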
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-r7bgk" Nov 22 08:07:18 crc kubenswrapper[4853]: I1122 08:07:18.541185 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4nkmv\" (UniqueName: \"kubernetes.io/projected/49f2dc6c-a0a4-4f33-845b-d149d0fae0a8-kube-api-access-4nkmv\") pod \"49f2dc6c-a0a4-4f33-845b-d149d0fae0a8\" (UID: \"49f2dc6c-a0a4-4f33-845b-d149d0fae0a8\") " Nov 22 08:07:18 crc kubenswrapper[4853]: I1122 08:07:18.541285 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/49f2dc6c-a0a4-4f33-845b-d149d0fae0a8-catalog-content\") pod \"49f2dc6c-a0a4-4f33-845b-d149d0fae0a8\" (UID: \"49f2dc6c-a0a4-4f33-845b-d149d0fae0a8\") " Nov 22 08:07:18 crc kubenswrapper[4853]: I1122 08:07:18.541446 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/49f2dc6c-a0a4-4f33-845b-d149d0fae0a8-utilities\") pod \"49f2dc6c-a0a4-4f33-845b-d149d0fae0a8\" (UID: \"49f2dc6c-a0a4-4f33-845b-d149d0fae0a8\") " Nov 22 08:07:18 crc kubenswrapper[4853]: I1122 08:07:18.542222 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/49f2dc6c-a0a4-4f33-845b-d149d0fae0a8-utilities" (OuterVolumeSpecName: "utilities") pod "49f2dc6c-a0a4-4f33-845b-d149d0fae0a8" (UID: "49f2dc6c-a0a4-4f33-845b-d149d0fae0a8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 08:07:18 crc kubenswrapper[4853]: I1122 08:07:18.542573 4853 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/49f2dc6c-a0a4-4f33-845b-d149d0fae0a8-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 08:07:18 crc kubenswrapper[4853]: I1122 08:07:18.551081 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49f2dc6c-a0a4-4f33-845b-d149d0fae0a8-kube-api-access-4nkmv" (OuterVolumeSpecName: "kube-api-access-4nkmv") pod "49f2dc6c-a0a4-4f33-845b-d149d0fae0a8" (UID: "49f2dc6c-a0a4-4f33-845b-d149d0fae0a8"). InnerVolumeSpecName "kube-api-access-4nkmv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 08:07:18 crc kubenswrapper[4853]: I1122 08:07:18.560203 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/49f2dc6c-a0a4-4f33-845b-d149d0fae0a8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "49f2dc6c-a0a4-4f33-845b-d149d0fae0a8" (UID: "49f2dc6c-a0a4-4f33-845b-d149d0fae0a8"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 08:07:18 crc kubenswrapper[4853]: I1122 08:07:18.646867 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4nkmv\" (UniqueName: \"kubernetes.io/projected/49f2dc6c-a0a4-4f33-845b-d149d0fae0a8-kube-api-access-4nkmv\") on node \"crc\" DevicePath \"\"" Nov 22 08:07:18 crc kubenswrapper[4853]: I1122 08:07:18.646901 4853 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/49f2dc6c-a0a4-4f33-845b-d149d0fae0a8-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 08:07:19 crc kubenswrapper[4853]: I1122 08:07:19.039991 4853 generic.go:334] "Generic (PLEG): container finished" podID="49f2dc6c-a0a4-4f33-845b-d149d0fae0a8" containerID="1d2c24faa407f39394800604c15f61091c9ee0d4b06e5d3bc98c0d2fbdd585d9" exitCode=0 Nov 22 08:07:19 crc kubenswrapper[4853]: I1122 08:07:19.040235 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-r7bgk" Nov 22 08:07:19 crc kubenswrapper[4853]: I1122 08:07:19.042336 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r7bgk" event={"ID":"49f2dc6c-a0a4-4f33-845b-d149d0fae0a8","Type":"ContainerDied","Data":"1d2c24faa407f39394800604c15f61091c9ee0d4b06e5d3bc98c0d2fbdd585d9"} Nov 22 08:07:19 crc kubenswrapper[4853]: I1122 08:07:19.042425 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r7bgk" event={"ID":"49f2dc6c-a0a4-4f33-845b-d149d0fae0a8","Type":"ContainerDied","Data":"a5c29dc47223ccd5874095df6180b4ffe5f94936358b08449fd2e0f2dab35a19"} Nov 22 08:07:19 crc kubenswrapper[4853]: I1122 08:07:19.042445 4853 scope.go:117] "RemoveContainer" containerID="1d2c24faa407f39394800604c15f61091c9ee0d4b06e5d3bc98c0d2fbdd585d9" Nov 22 08:07:19 crc kubenswrapper[4853]: I1122 08:07:19.096953 4853 scope.go:117] "RemoveContainer" containerID="5b98cf6569db086b8c1dd2cda068b7048e77055d2c9c4af72d752a969ca1e831" Nov 22 08:07:19 crc kubenswrapper[4853]: I1122 08:07:19.100903 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-r7bgk"] Nov 22 08:07:19 crc kubenswrapper[4853]: I1122 08:07:19.121155 4853 scope.go:117] "RemoveContainer" containerID="69e235ab5f49e5ffd28692909c020f041df0ebf8bad618034bf8a5ae7ab9dda9" Nov 22 08:07:19 crc kubenswrapper[4853]: I1122 08:07:19.132497 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-r7bgk"] Nov 22 08:07:19 crc kubenswrapper[4853]: I1122 08:07:19.184101 4853 scope.go:117] "RemoveContainer" containerID="1d2c24faa407f39394800604c15f61091c9ee0d4b06e5d3bc98c0d2fbdd585d9" Nov 22 08:07:19 crc kubenswrapper[4853]: E1122 08:07:19.185127 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1d2c24faa407f39394800604c15f61091c9ee0d4b06e5d3bc98c0d2fbdd585d9\": container with ID starting with 1d2c24faa407f39394800604c15f61091c9ee0d4b06e5d3bc98c0d2fbdd585d9 not found: ID does not exist" containerID="1d2c24faa407f39394800604c15f61091c9ee0d4b06e5d3bc98c0d2fbdd585d9" Nov 22 08:07:19 crc kubenswrapper[4853]: I1122 08:07:19.185181 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d2c24faa407f39394800604c15f61091c9ee0d4b06e5d3bc98c0d2fbdd585d9"} err="failed to get container status 
\"1d2c24faa407f39394800604c15f61091c9ee0d4b06e5d3bc98c0d2fbdd585d9\": rpc error: code = NotFound desc = could not find container \"1d2c24faa407f39394800604c15f61091c9ee0d4b06e5d3bc98c0d2fbdd585d9\": container with ID starting with 1d2c24faa407f39394800604c15f61091c9ee0d4b06e5d3bc98c0d2fbdd585d9 not found: ID does not exist" Nov 22 08:07:19 crc kubenswrapper[4853]: I1122 08:07:19.185214 4853 scope.go:117] "RemoveContainer" containerID="5b98cf6569db086b8c1dd2cda068b7048e77055d2c9c4af72d752a969ca1e831" Nov 22 08:07:19 crc kubenswrapper[4853]: E1122 08:07:19.185736 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5b98cf6569db086b8c1dd2cda068b7048e77055d2c9c4af72d752a969ca1e831\": container with ID starting with 5b98cf6569db086b8c1dd2cda068b7048e77055d2c9c4af72d752a969ca1e831 not found: ID does not exist" containerID="5b98cf6569db086b8c1dd2cda068b7048e77055d2c9c4af72d752a969ca1e831" Nov 22 08:07:19 crc kubenswrapper[4853]: I1122 08:07:19.185802 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b98cf6569db086b8c1dd2cda068b7048e77055d2c9c4af72d752a969ca1e831"} err="failed to get container status \"5b98cf6569db086b8c1dd2cda068b7048e77055d2c9c4af72d752a969ca1e831\": rpc error: code = NotFound desc = could not find container \"5b98cf6569db086b8c1dd2cda068b7048e77055d2c9c4af72d752a969ca1e831\": container with ID starting with 5b98cf6569db086b8c1dd2cda068b7048e77055d2c9c4af72d752a969ca1e831 not found: ID does not exist" Nov 22 08:07:19 crc kubenswrapper[4853]: I1122 08:07:19.185834 4853 scope.go:117] "RemoveContainer" containerID="69e235ab5f49e5ffd28692909c020f041df0ebf8bad618034bf8a5ae7ab9dda9" Nov 22 08:07:19 crc kubenswrapper[4853]: E1122 08:07:19.186170 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"69e235ab5f49e5ffd28692909c020f041df0ebf8bad618034bf8a5ae7ab9dda9\": container with ID starting with 69e235ab5f49e5ffd28692909c020f041df0ebf8bad618034bf8a5ae7ab9dda9 not found: ID does not exist" containerID="69e235ab5f49e5ffd28692909c020f041df0ebf8bad618034bf8a5ae7ab9dda9" Nov 22 08:07:19 crc kubenswrapper[4853]: I1122 08:07:19.186198 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"69e235ab5f49e5ffd28692909c020f041df0ebf8bad618034bf8a5ae7ab9dda9"} err="failed to get container status \"69e235ab5f49e5ffd28692909c020f041df0ebf8bad618034bf8a5ae7ab9dda9\": rpc error: code = NotFound desc = could not find container \"69e235ab5f49e5ffd28692909c020f041df0ebf8bad618034bf8a5ae7ab9dda9\": container with ID starting with 69e235ab5f49e5ffd28692909c020f041df0ebf8bad618034bf8a5ae7ab9dda9 not found: ID does not exist" Nov 22 08:07:19 crc kubenswrapper[4853]: I1122 08:07:19.666131 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8hs7r" Nov 22 08:07:19 crc kubenswrapper[4853]: I1122 08:07:19.764035 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49f2dc6c-a0a4-4f33-845b-d149d0fae0a8" path="/var/lib/kubelet/pods/49f2dc6c-a0a4-4f33-845b-d149d0fae0a8/volumes" Nov 22 08:07:19 crc kubenswrapper[4853]: I1122 08:07:19.782798 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27dca404-f54c-4f96-9ae3-e517c2de3033-ovn-combined-ca-bundle\") pod \"27dca404-f54c-4f96-9ae3-e517c2de3033\" (UID: \"27dca404-f54c-4f96-9ae3-e517c2de3033\") " Nov 22 08:07:19 crc kubenswrapper[4853]: I1122 08:07:19.783014 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/27dca404-f54c-4f96-9ae3-e517c2de3033-ovncontroller-config-0\") pod \"27dca404-f54c-4f96-9ae3-e517c2de3033\" (UID: \"27dca404-f54c-4f96-9ae3-e517c2de3033\") " Nov 22 08:07:19 crc kubenswrapper[4853]: I1122 08:07:19.783064 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/27dca404-f54c-4f96-9ae3-e517c2de3033-ssh-key\") pod \"27dca404-f54c-4f96-9ae3-e517c2de3033\" (UID: \"27dca404-f54c-4f96-9ae3-e517c2de3033\") " Nov 22 08:07:19 crc kubenswrapper[4853]: I1122 08:07:19.783172 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/27dca404-f54c-4f96-9ae3-e517c2de3033-inventory\") pod \"27dca404-f54c-4f96-9ae3-e517c2de3033\" (UID: \"27dca404-f54c-4f96-9ae3-e517c2de3033\") " Nov 22 08:07:19 crc kubenswrapper[4853]: I1122 08:07:19.783241 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f7zrx\" (UniqueName: \"kubernetes.io/projected/27dca404-f54c-4f96-9ae3-e517c2de3033-kube-api-access-f7zrx\") pod \"27dca404-f54c-4f96-9ae3-e517c2de3033\" (UID: \"27dca404-f54c-4f96-9ae3-e517c2de3033\") " Nov 22 08:07:19 crc kubenswrapper[4853]: I1122 08:07:19.789245 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27dca404-f54c-4f96-9ae3-e517c2de3033-kube-api-access-f7zrx" (OuterVolumeSpecName: "kube-api-access-f7zrx") pod "27dca404-f54c-4f96-9ae3-e517c2de3033" (UID: "27dca404-f54c-4f96-9ae3-e517c2de3033"). InnerVolumeSpecName "kube-api-access-f7zrx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 08:07:19 crc kubenswrapper[4853]: I1122 08:07:19.791586 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27dca404-f54c-4f96-9ae3-e517c2de3033-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "27dca404-f54c-4f96-9ae3-e517c2de3033" (UID: "27dca404-f54c-4f96-9ae3-e517c2de3033"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:07:19 crc kubenswrapper[4853]: I1122 08:07:19.816883 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27dca404-f54c-4f96-9ae3-e517c2de3033-inventory" (OuterVolumeSpecName: "inventory") pod "27dca404-f54c-4f96-9ae3-e517c2de3033" (UID: "27dca404-f54c-4f96-9ae3-e517c2de3033"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:07:19 crc kubenswrapper[4853]: I1122 08:07:19.819236 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27dca404-f54c-4f96-9ae3-e517c2de3033-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "27dca404-f54c-4f96-9ae3-e517c2de3033" (UID: "27dca404-f54c-4f96-9ae3-e517c2de3033"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:07:19 crc kubenswrapper[4853]: I1122 08:07:19.819903 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/27dca404-f54c-4f96-9ae3-e517c2de3033-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "27dca404-f54c-4f96-9ae3-e517c2de3033" (UID: "27dca404-f54c-4f96-9ae3-e517c2de3033"). InnerVolumeSpecName "ovncontroller-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 08:07:19 crc kubenswrapper[4853]: I1122 08:07:19.886821 4853 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27dca404-f54c-4f96-9ae3-e517c2de3033-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 08:07:19 crc kubenswrapper[4853]: I1122 08:07:19.886856 4853 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/27dca404-f54c-4f96-9ae3-e517c2de3033-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Nov 22 08:07:19 crc kubenswrapper[4853]: I1122 08:07:19.886867 4853 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/27dca404-f54c-4f96-9ae3-e517c2de3033-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 22 08:07:19 crc kubenswrapper[4853]: I1122 08:07:19.886880 4853 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/27dca404-f54c-4f96-9ae3-e517c2de3033-inventory\") on node \"crc\" DevicePath \"\"" Nov 22 08:07:19 crc kubenswrapper[4853]: I1122 08:07:19.886890 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f7zrx\" (UniqueName: \"kubernetes.io/projected/27dca404-f54c-4f96-9ae3-e517c2de3033-kube-api-access-f7zrx\") on node \"crc\" DevicePath \"\"" Nov 22 08:07:20 crc kubenswrapper[4853]: I1122 08:07:20.074902 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8hs7r" event={"ID":"27dca404-f54c-4f96-9ae3-e517c2de3033","Type":"ContainerDied","Data":"59808f26bab0a7c8b5b9a95398c8ae0cb72ad24a39050be1c9d8425b131e1faf"} Nov 22 08:07:20 crc kubenswrapper[4853]: I1122 08:07:20.075295 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="59808f26bab0a7c8b5b9a95398c8ae0cb72ad24a39050be1c9d8425b131e1faf" Nov 22 08:07:20 crc kubenswrapper[4853]: I1122 08:07:20.075029 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-8hs7r" Nov 22 08:07:20 crc kubenswrapper[4853]: I1122 08:07:20.157282 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-h6qf2"] Nov 22 08:07:20 crc kubenswrapper[4853]: E1122 08:07:20.158902 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49f2dc6c-a0a4-4f33-845b-d149d0fae0a8" containerName="registry-server" Nov 22 08:07:20 crc kubenswrapper[4853]: I1122 08:07:20.158954 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="49f2dc6c-a0a4-4f33-845b-d149d0fae0a8" containerName="registry-server" Nov 22 08:07:20 crc kubenswrapper[4853]: E1122 08:07:20.158981 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49f2dc6c-a0a4-4f33-845b-d149d0fae0a8" containerName="extract-content" Nov 22 08:07:20 crc kubenswrapper[4853]: I1122 08:07:20.158990 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="49f2dc6c-a0a4-4f33-845b-d149d0fae0a8" containerName="extract-content" Nov 22 08:07:20 crc kubenswrapper[4853]: E1122 08:07:20.159047 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27dca404-f54c-4f96-9ae3-e517c2de3033" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Nov 22 08:07:20 crc kubenswrapper[4853]: I1122 08:07:20.159055 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="27dca404-f54c-4f96-9ae3-e517c2de3033" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Nov 22 08:07:20 crc kubenswrapper[4853]: E1122 08:07:20.159091 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49f2dc6c-a0a4-4f33-845b-d149d0fae0a8" containerName="extract-utilities" Nov 22 08:07:20 crc kubenswrapper[4853]: I1122 08:07:20.159099 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="49f2dc6c-a0a4-4f33-845b-d149d0fae0a8" containerName="extract-utilities" Nov 22 08:07:20 crc kubenswrapper[4853]: I1122 08:07:20.159508 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="49f2dc6c-a0a4-4f33-845b-d149d0fae0a8" containerName="registry-server" Nov 22 08:07:20 crc kubenswrapper[4853]: I1122 08:07:20.159564 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="27dca404-f54c-4f96-9ae3-e517c2de3033" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Nov 22 08:07:20 crc kubenswrapper[4853]: I1122 08:07:20.164145 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-h6qf2" Nov 22 08:07:20 crc kubenswrapper[4853]: I1122 08:07:20.166629 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 22 08:07:20 crc kubenswrapper[4853]: I1122 08:07:20.167057 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Nov 22 08:07:20 crc kubenswrapper[4853]: I1122 08:07:20.167132 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 22 08:07:20 crc kubenswrapper[4853]: I1122 08:07:20.167265 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Nov 22 08:07:20 crc kubenswrapper[4853]: I1122 08:07:20.167554 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 22 08:07:20 crc kubenswrapper[4853]: I1122 08:07:20.168920 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-km5tw" Nov 22 08:07:20 crc kubenswrapper[4853]: I1122 08:07:20.196404 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-h6qf2"] Nov 22 08:07:20 crc kubenswrapper[4853]: I1122 08:07:20.201933 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/988b3ef5-b991-4375-870a-67b6f2beaeac-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-h6qf2\" (UID: \"988b3ef5-b991-4375-870a-67b6f2beaeac\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-h6qf2" Nov 22 08:07:20 crc kubenswrapper[4853]: I1122 08:07:20.202161 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/988b3ef5-b991-4375-870a-67b6f2beaeac-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-h6qf2\" (UID: \"988b3ef5-b991-4375-870a-67b6f2beaeac\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-h6qf2" Nov 22 08:07:20 crc kubenswrapper[4853]: I1122 08:07:20.202251 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xtwsq\" (UniqueName: \"kubernetes.io/projected/988b3ef5-b991-4375-870a-67b6f2beaeac-kube-api-access-xtwsq\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-h6qf2\" (UID: \"988b3ef5-b991-4375-870a-67b6f2beaeac\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-h6qf2" Nov 22 08:07:20 crc kubenswrapper[4853]: I1122 08:07:20.202328 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/988b3ef5-b991-4375-870a-67b6f2beaeac-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-h6qf2\" (UID: \"988b3ef5-b991-4375-870a-67b6f2beaeac\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-h6qf2" Nov 22 08:07:20 crc kubenswrapper[4853]: I1122 08:07:20.202575 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/988b3ef5-b991-4375-870a-67b6f2beaeac-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-h6qf2\" (UID: \"988b3ef5-b991-4375-870a-67b6f2beaeac\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-h6qf2" Nov 22 08:07:20 crc kubenswrapper[4853]: I1122 08:07:20.202716 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/988b3ef5-b991-4375-870a-67b6f2beaeac-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-h6qf2\" (UID: \"988b3ef5-b991-4375-870a-67b6f2beaeac\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-h6qf2" Nov 22 08:07:20 crc kubenswrapper[4853]: I1122 08:07:20.305043 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/988b3ef5-b991-4375-870a-67b6f2beaeac-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-h6qf2\" (UID: \"988b3ef5-b991-4375-870a-67b6f2beaeac\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-h6qf2" Nov 22 08:07:20 crc kubenswrapper[4853]: I1122 08:07:20.305168 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/988b3ef5-b991-4375-870a-67b6f2beaeac-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-h6qf2\" (UID: \"988b3ef5-b991-4375-870a-67b6f2beaeac\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-h6qf2" Nov 22 08:07:20 crc kubenswrapper[4853]: I1122 08:07:20.305243 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/988b3ef5-b991-4375-870a-67b6f2beaeac-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-h6qf2\" (UID: \"988b3ef5-b991-4375-870a-67b6f2beaeac\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-h6qf2" Nov 22 08:07:20 crc kubenswrapper[4853]: I1122 08:07:20.305263 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xtwsq\" (UniqueName: \"kubernetes.io/projected/988b3ef5-b991-4375-870a-67b6f2beaeac-kube-api-access-xtwsq\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-h6qf2\" (UID: \"988b3ef5-b991-4375-870a-67b6f2beaeac\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-h6qf2" Nov 22 08:07:20 crc kubenswrapper[4853]: I1122 08:07:20.305283 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/988b3ef5-b991-4375-870a-67b6f2beaeac-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-h6qf2\" (UID: \"988b3ef5-b991-4375-870a-67b6f2beaeac\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-h6qf2" Nov 22 08:07:20 crc kubenswrapper[4853]: I1122 08:07:20.305395 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/988b3ef5-b991-4375-870a-67b6f2beaeac-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-h6qf2\" (UID: \"988b3ef5-b991-4375-870a-67b6f2beaeac\") " 
pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-h6qf2" Nov 22 08:07:20 crc kubenswrapper[4853]: I1122 08:07:20.310026 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/988b3ef5-b991-4375-870a-67b6f2beaeac-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-h6qf2\" (UID: \"988b3ef5-b991-4375-870a-67b6f2beaeac\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-h6qf2" Nov 22 08:07:20 crc kubenswrapper[4853]: I1122 08:07:20.310047 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/988b3ef5-b991-4375-870a-67b6f2beaeac-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-h6qf2\" (UID: \"988b3ef5-b991-4375-870a-67b6f2beaeac\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-h6qf2" Nov 22 08:07:20 crc kubenswrapper[4853]: I1122 08:07:20.310525 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/988b3ef5-b991-4375-870a-67b6f2beaeac-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-h6qf2\" (UID: \"988b3ef5-b991-4375-870a-67b6f2beaeac\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-h6qf2" Nov 22 08:07:20 crc kubenswrapper[4853]: I1122 08:07:20.310673 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/988b3ef5-b991-4375-870a-67b6f2beaeac-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-h6qf2\" (UID: \"988b3ef5-b991-4375-870a-67b6f2beaeac\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-h6qf2" Nov 22 08:07:20 crc kubenswrapper[4853]: I1122 08:07:20.310979 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/988b3ef5-b991-4375-870a-67b6f2beaeac-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-h6qf2\" (UID: \"988b3ef5-b991-4375-870a-67b6f2beaeac\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-h6qf2" Nov 22 08:07:20 crc kubenswrapper[4853]: I1122 08:07:20.329416 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xtwsq\" (UniqueName: \"kubernetes.io/projected/988b3ef5-b991-4375-870a-67b6f2beaeac-kube-api-access-xtwsq\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-h6qf2\" (UID: \"988b3ef5-b991-4375-870a-67b6f2beaeac\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-h6qf2" Nov 22 08:07:20 crc kubenswrapper[4853]: I1122 08:07:20.507159 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-h6qf2" Nov 22 08:07:21 crc kubenswrapper[4853]: I1122 08:07:21.055433 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-h6qf2"] Nov 22 08:07:21 crc kubenswrapper[4853]: I1122 08:07:21.088501 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-h6qf2" event={"ID":"988b3ef5-b991-4375-870a-67b6f2beaeac","Type":"ContainerStarted","Data":"2671e47151dac70f7e84075acae95057b9ec596f1eb680d7ef64c82c2191f225"} Nov 22 08:07:22 crc kubenswrapper[4853]: I1122 08:07:22.100225 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-h6qf2" event={"ID":"988b3ef5-b991-4375-870a-67b6f2beaeac","Type":"ContainerStarted","Data":"ca07998976ee23d759355500718111b01c552326ef3f67fa1bb6f61925a6ad46"} Nov 22 08:07:22 crc kubenswrapper[4853]: I1122 08:07:22.131964 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-h6qf2" podStartSLOduration=1.705017499 podStartE2EDuration="2.131943028s" podCreationTimestamp="2025-11-22 08:07:20 +0000 UTC" firstStartedPulling="2025-11-22 08:07:21.057433092 +0000 UTC m=+3439.898055718" lastFinishedPulling="2025-11-22 08:07:21.484358611 +0000 UTC m=+3440.324981247" observedRunningTime="2025-11-22 08:07:22.121386733 +0000 UTC m=+3440.962009369" watchObservedRunningTime="2025-11-22 08:07:22.131943028 +0000 UTC m=+3440.972565664" Nov 22 08:07:31 crc kubenswrapper[4853]: I1122 08:07:31.748138 4853 scope.go:117] "RemoveContainer" containerID="3ec8165bab4fb75b436ac780db4c9acf7acf3a80877e56710bdae29a7122f42c" Nov 22 08:07:31 crc kubenswrapper[4853]: E1122 08:07:31.749050 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" Nov 22 08:07:44 crc kubenswrapper[4853]: I1122 08:07:44.748541 4853 scope.go:117] "RemoveContainer" containerID="3ec8165bab4fb75b436ac780db4c9acf7acf3a80877e56710bdae29a7122f42c" Nov 22 08:07:44 crc kubenswrapper[4853]: E1122 08:07:44.749212 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" Nov 22 08:07:55 crc kubenswrapper[4853]: I1122 08:07:55.771326 4853 scope.go:117] "RemoveContainer" containerID="3ec8165bab4fb75b436ac780db4c9acf7acf3a80877e56710bdae29a7122f42c" Nov 22 08:07:55 crc kubenswrapper[4853]: E1122 08:07:55.772662 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" Nov 22 08:08:08 crc kubenswrapper[4853]: I1122 08:08:08.672215 4853 generic.go:334] "Generic (PLEG): container finished" podID="988b3ef5-b991-4375-870a-67b6f2beaeac" containerID="ca07998976ee23d759355500718111b01c552326ef3f67fa1bb6f61925a6ad46" exitCode=0 Nov 22 08:08:08 crc kubenswrapper[4853]: I1122 08:08:08.672287 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-h6qf2" event={"ID":"988b3ef5-b991-4375-870a-67b6f2beaeac","Type":"ContainerDied","Data":"ca07998976ee23d759355500718111b01c552326ef3f67fa1bb6f61925a6ad46"} Nov 22 08:08:08 crc kubenswrapper[4853]: I1122 08:08:08.748380 4853 scope.go:117] "RemoveContainer" containerID="3ec8165bab4fb75b436ac780db4c9acf7acf3a80877e56710bdae29a7122f42c" Nov 22 08:08:08 crc kubenswrapper[4853]: E1122 08:08:08.748998 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" Nov 22 08:08:10 crc kubenswrapper[4853]: I1122 08:08:10.159598 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-h6qf2" Nov 22 08:08:10 crc kubenswrapper[4853]: I1122 08:08:10.206374 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/988b3ef5-b991-4375-870a-67b6f2beaeac-nova-metadata-neutron-config-0\") pod \"988b3ef5-b991-4375-870a-67b6f2beaeac\" (UID: \"988b3ef5-b991-4375-870a-67b6f2beaeac\") " Nov 22 08:08:10 crc kubenswrapper[4853]: I1122 08:08:10.206458 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/988b3ef5-b991-4375-870a-67b6f2beaeac-neutron-ovn-metadata-agent-neutron-config-0\") pod \"988b3ef5-b991-4375-870a-67b6f2beaeac\" (UID: \"988b3ef5-b991-4375-870a-67b6f2beaeac\") " Nov 22 08:08:10 crc kubenswrapper[4853]: I1122 08:08:10.206508 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/988b3ef5-b991-4375-870a-67b6f2beaeac-ssh-key\") pod \"988b3ef5-b991-4375-870a-67b6f2beaeac\" (UID: \"988b3ef5-b991-4375-870a-67b6f2beaeac\") " Nov 22 08:08:10 crc kubenswrapper[4853]: I1122 08:08:10.206739 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/988b3ef5-b991-4375-870a-67b6f2beaeac-inventory\") pod \"988b3ef5-b991-4375-870a-67b6f2beaeac\" (UID: \"988b3ef5-b991-4375-870a-67b6f2beaeac\") " Nov 22 08:08:10 crc kubenswrapper[4853]: I1122 08:08:10.206806 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/988b3ef5-b991-4375-870a-67b6f2beaeac-neutron-metadata-combined-ca-bundle\") pod \"988b3ef5-b991-4375-870a-67b6f2beaeac\" (UID: \"988b3ef5-b991-4375-870a-67b6f2beaeac\") " Nov 22 08:08:10 crc kubenswrapper[4853]: I1122 08:08:10.206864 4853 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xtwsq\" (UniqueName: \"kubernetes.io/projected/988b3ef5-b991-4375-870a-67b6f2beaeac-kube-api-access-xtwsq\") pod \"988b3ef5-b991-4375-870a-67b6f2beaeac\" (UID: \"988b3ef5-b991-4375-870a-67b6f2beaeac\") " Nov 22 08:08:10 crc kubenswrapper[4853]: I1122 08:08:10.220032 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/988b3ef5-b991-4375-870a-67b6f2beaeac-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "988b3ef5-b991-4375-870a-67b6f2beaeac" (UID: "988b3ef5-b991-4375-870a-67b6f2beaeac"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:08:10 crc kubenswrapper[4853]: I1122 08:08:10.230201 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/988b3ef5-b991-4375-870a-67b6f2beaeac-kube-api-access-xtwsq" (OuterVolumeSpecName: "kube-api-access-xtwsq") pod "988b3ef5-b991-4375-870a-67b6f2beaeac" (UID: "988b3ef5-b991-4375-870a-67b6f2beaeac"). InnerVolumeSpecName "kube-api-access-xtwsq". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 08:08:10 crc kubenswrapper[4853]: I1122 08:08:10.247310 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/988b3ef5-b991-4375-870a-67b6f2beaeac-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "988b3ef5-b991-4375-870a-67b6f2beaeac" (UID: "988b3ef5-b991-4375-870a-67b6f2beaeac"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:08:10 crc kubenswrapper[4853]: I1122 08:08:10.249325 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/988b3ef5-b991-4375-870a-67b6f2beaeac-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "988b3ef5-b991-4375-870a-67b6f2beaeac" (UID: "988b3ef5-b991-4375-870a-67b6f2beaeac"). InnerVolumeSpecName "nova-metadata-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:08:10 crc kubenswrapper[4853]: I1122 08:08:10.253968 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/988b3ef5-b991-4375-870a-67b6f2beaeac-inventory" (OuterVolumeSpecName: "inventory") pod "988b3ef5-b991-4375-870a-67b6f2beaeac" (UID: "988b3ef5-b991-4375-870a-67b6f2beaeac"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:08:10 crc kubenswrapper[4853]: I1122 08:08:10.256889 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/988b3ef5-b991-4375-870a-67b6f2beaeac-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "988b3ef5-b991-4375-870a-67b6f2beaeac" (UID: "988b3ef5-b991-4375-870a-67b6f2beaeac"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". 
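machine-config-daemon keeps logging "back-off 5m0s restarting failed container" above because kubelet's crash-loop back-off doubles the restart delay per consecutive failure until it saturates at a cap; only at 08:08:35 below does a restart finally go through. A sketch of capped exponential back-off; the base and cap here assume kubelet's usual 10s and 5m (the 5m cap is what the message reports), and the helper name is illustrative:

```go
package main

import (
	"fmt"
	"time"
)

// crashLoopDelay models the restart delay after n consecutive failures:
// the base delay doubles per failure until it saturates at limit. The
// "back-off 5m0s" in the messages above is that saturated value.
func crashLoopDelay(n int, base, limit time.Duration) time.Duration {
	d := base
	for i := 1; i < n; i++ {
		d *= 2
		if d >= limit {
			return limit
		}
	}
	return d
}

func main() {
	for n := 1; n <= 7; n++ {
		fmt.Printf("failure %d -> wait %v\n", n, crashLoopDelay(n, 10*time.Second, 5*time.Minute))
	}
	// failure 1 -> 10s, 2 -> 20s, ..., 6 and beyond -> 5m0s
}
```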
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:08:10 crc kubenswrapper[4853]: I1122 08:08:10.309649 4853 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/988b3ef5-b991-4375-870a-67b6f2beaeac-inventory\") on node \"crc\" DevicePath \"\"" Nov 22 08:08:10 crc kubenswrapper[4853]: I1122 08:08:10.309688 4853 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/988b3ef5-b991-4375-870a-67b6f2beaeac-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 08:08:10 crc kubenswrapper[4853]: I1122 08:08:10.309704 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xtwsq\" (UniqueName: \"kubernetes.io/projected/988b3ef5-b991-4375-870a-67b6f2beaeac-kube-api-access-xtwsq\") on node \"crc\" DevicePath \"\"" Nov 22 08:08:10 crc kubenswrapper[4853]: I1122 08:08:10.309714 4853 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/988b3ef5-b991-4375-870a-67b6f2beaeac-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Nov 22 08:08:10 crc kubenswrapper[4853]: I1122 08:08:10.309728 4853 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/988b3ef5-b991-4375-870a-67b6f2beaeac-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Nov 22 08:08:10 crc kubenswrapper[4853]: I1122 08:08:10.309738 4853 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/988b3ef5-b991-4375-870a-67b6f2beaeac-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 22 08:08:10 crc kubenswrapper[4853]: I1122 08:08:10.695174 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-h6qf2" event={"ID":"988b3ef5-b991-4375-870a-67b6f2beaeac","Type":"ContainerDied","Data":"2671e47151dac70f7e84075acae95057b9ec596f1eb680d7ef64c82c2191f225"} Nov 22 08:08:10 crc kubenswrapper[4853]: I1122 08:08:10.695585 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2671e47151dac70f7e84075acae95057b9ec596f1eb680d7ef64c82c2191f225" Nov 22 08:08:10 crc kubenswrapper[4853]: I1122 08:08:10.695217 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-h6qf2" Nov 22 08:08:10 crc kubenswrapper[4853]: I1122 08:08:10.825127 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-h8gmz"] Nov 22 08:08:10 crc kubenswrapper[4853]: E1122 08:08:10.828102 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="988b3ef5-b991-4375-870a-67b6f2beaeac" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Nov 22 08:08:10 crc kubenswrapper[4853]: I1122 08:08:10.828128 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="988b3ef5-b991-4375-870a-67b6f2beaeac" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Nov 22 08:08:10 crc kubenswrapper[4853]: I1122 08:08:10.829118 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="988b3ef5-b991-4375-870a-67b6f2beaeac" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Nov 22 08:08:10 crc kubenswrapper[4853]: I1122 08:08:10.831220 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-h8gmz" Nov 22 08:08:10 crc kubenswrapper[4853]: I1122 08:08:10.833174 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Nov 22 08:08:10 crc kubenswrapper[4853]: I1122 08:08:10.833364 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 22 08:08:10 crc kubenswrapper[4853]: I1122 08:08:10.836776 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-km5tw" Nov 22 08:08:10 crc kubenswrapper[4853]: I1122 08:08:10.839209 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 22 08:08:10 crc kubenswrapper[4853]: I1122 08:08:10.839883 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 22 08:08:10 crc kubenswrapper[4853]: I1122 08:08:10.848836 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-h8gmz"] Nov 22 08:08:10 crc kubenswrapper[4853]: I1122 08:08:10.946723 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7e358cd-fbd6-411d-9231-73e533bbda3b-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-h8gmz\" (UID: \"c7e358cd-fbd6-411d-9231-73e533bbda3b\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-h8gmz" Nov 22 08:08:10 crc kubenswrapper[4853]: I1122 08:08:10.947217 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c7e358cd-fbd6-411d-9231-73e533bbda3b-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-h8gmz\" (UID: \"c7e358cd-fbd6-411d-9231-73e533bbda3b\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-h8gmz" Nov 22 08:08:10 crc kubenswrapper[4853]: I1122 08:08:10.947352 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c7e358cd-fbd6-411d-9231-73e533bbda3b-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-h8gmz\" (UID: \"c7e358cd-fbd6-411d-9231-73e533bbda3b\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-h8gmz" Nov 22 08:08:10 crc kubenswrapper[4853]: I1122 08:08:10.947566 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kzgbh\" (UniqueName: \"kubernetes.io/projected/c7e358cd-fbd6-411d-9231-73e533bbda3b-kube-api-access-kzgbh\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-h8gmz\" (UID: \"c7e358cd-fbd6-411d-9231-73e533bbda3b\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-h8gmz" Nov 22 08:08:10 crc kubenswrapper[4853]: I1122 08:08:10.947904 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/c7e358cd-fbd6-411d-9231-73e533bbda3b-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-h8gmz\" (UID: \"c7e358cd-fbd6-411d-9231-73e533bbda3b\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-h8gmz" Nov 22 08:08:11 crc kubenswrapper[4853]: I1122 08:08:11.050273 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/c7e358cd-fbd6-411d-9231-73e533bbda3b-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-h8gmz\" (UID: \"c7e358cd-fbd6-411d-9231-73e533bbda3b\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-h8gmz" Nov 22 08:08:11 crc kubenswrapper[4853]: I1122 08:08:11.050386 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7e358cd-fbd6-411d-9231-73e533bbda3b-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-h8gmz\" (UID: \"c7e358cd-fbd6-411d-9231-73e533bbda3b\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-h8gmz" Nov 22 08:08:11 crc kubenswrapper[4853]: I1122 08:08:11.050417 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c7e358cd-fbd6-411d-9231-73e533bbda3b-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-h8gmz\" (UID: \"c7e358cd-fbd6-411d-9231-73e533bbda3b\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-h8gmz" Nov 22 08:08:11 crc kubenswrapper[4853]: I1122 08:08:11.050437 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c7e358cd-fbd6-411d-9231-73e533bbda3b-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-h8gmz\" (UID: \"c7e358cd-fbd6-411d-9231-73e533bbda3b\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-h8gmz" Nov 22 08:08:11 crc kubenswrapper[4853]: I1122 08:08:11.050516 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kzgbh\" (UniqueName: \"kubernetes.io/projected/c7e358cd-fbd6-411d-9231-73e533bbda3b-kube-api-access-kzgbh\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-h8gmz\" (UID: \"c7e358cd-fbd6-411d-9231-73e533bbda3b\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-h8gmz" Nov 22 08:08:11 crc kubenswrapper[4853]: I1122 08:08:11.055194 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/c7e358cd-fbd6-411d-9231-73e533bbda3b-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-h8gmz\" (UID: \"c7e358cd-fbd6-411d-9231-73e533bbda3b\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-h8gmz" Nov 22 08:08:11 crc kubenswrapper[4853]: I1122 08:08:11.055204 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c7e358cd-fbd6-411d-9231-73e533bbda3b-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-h8gmz\" (UID: \"c7e358cd-fbd6-411d-9231-73e533bbda3b\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-h8gmz" Nov 22 08:08:11 crc kubenswrapper[4853]: I1122 08:08:11.056655 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c7e358cd-fbd6-411d-9231-73e533bbda3b-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-h8gmz\" (UID: \"c7e358cd-fbd6-411d-9231-73e533bbda3b\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-h8gmz" Nov 22 08:08:11 crc kubenswrapper[4853]: I1122 08:08:11.057139 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7e358cd-fbd6-411d-9231-73e533bbda3b-libvirt-combined-ca-bundle\") pod 
\"libvirt-edpm-deployment-openstack-edpm-ipam-h8gmz\" (UID: \"c7e358cd-fbd6-411d-9231-73e533bbda3b\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-h8gmz" Nov 22 08:08:11 crc kubenswrapper[4853]: I1122 08:08:11.071357 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kzgbh\" (UniqueName: \"kubernetes.io/projected/c7e358cd-fbd6-411d-9231-73e533bbda3b-kube-api-access-kzgbh\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-h8gmz\" (UID: \"c7e358cd-fbd6-411d-9231-73e533bbda3b\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-h8gmz" Nov 22 08:08:11 crc kubenswrapper[4853]: I1122 08:08:11.177341 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-h8gmz" Nov 22 08:08:11 crc kubenswrapper[4853]: I1122 08:08:11.730684 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-h8gmz"] Nov 22 08:08:12 crc kubenswrapper[4853]: I1122 08:08:12.718710 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-h8gmz" event={"ID":"c7e358cd-fbd6-411d-9231-73e533bbda3b","Type":"ContainerStarted","Data":"54765e929e21fb2795af31a2817ffc7dd25f7a6be3f25750f2ebf84aa29ac981"} Nov 22 08:08:12 crc kubenswrapper[4853]: I1122 08:08:12.718811 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-h8gmz" event={"ID":"c7e358cd-fbd6-411d-9231-73e533bbda3b","Type":"ContainerStarted","Data":"863e24649b803d676a3e776f464e1ca28f99f59d8fe3e2850bfbea25b091e157"} Nov 22 08:08:20 crc kubenswrapper[4853]: I1122 08:08:20.748155 4853 scope.go:117] "RemoveContainer" containerID="3ec8165bab4fb75b436ac780db4c9acf7acf3a80877e56710bdae29a7122f42c" Nov 22 08:08:20 crc kubenswrapper[4853]: E1122 08:08:20.749064 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" Nov 22 08:08:35 crc kubenswrapper[4853]: I1122 08:08:35.757368 4853 scope.go:117] "RemoveContainer" containerID="3ec8165bab4fb75b436ac780db4c9acf7acf3a80877e56710bdae29a7122f42c" Nov 22 08:08:36 crc kubenswrapper[4853]: I1122 08:08:36.019934 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" event={"ID":"476c875a-2b87-419a-8042-0ba059620fd8","Type":"ContainerStarted","Data":"b5c95fc5e1ba497c01e2e8e3690b9c21c741f0883f87a9c7ab06d100befb50f5"} Nov 22 08:08:36 crc kubenswrapper[4853]: I1122 08:08:36.046482 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-h8gmz" podStartSLOduration=25.638990877 podStartE2EDuration="26.04646157s" podCreationTimestamp="2025-11-22 08:08:10 +0000 UTC" firstStartedPulling="2025-11-22 08:08:11.732787255 +0000 UTC m=+3490.573409881" lastFinishedPulling="2025-11-22 08:08:12.140257948 +0000 UTC m=+3490.980880574" observedRunningTime="2025-11-22 08:08:12.742645205 +0000 UTC m=+3491.583267831" watchObservedRunningTime="2025-11-22 08:08:36.04646157 +0000 UTC m=+3514.887084196" Nov 22 08:10:07 crc kubenswrapper[4853]: I1122 
08:10:07.387642 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-8b2kt"] Nov 22 08:10:07 crc kubenswrapper[4853]: I1122 08:10:07.390934 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8b2kt" Nov 22 08:10:07 crc kubenswrapper[4853]: I1122 08:10:07.437922 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8b2kt"] Nov 22 08:10:07 crc kubenswrapper[4853]: I1122 08:10:07.571439 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d9c4a73-0b6a-496b-b6d1-71f9cf468aca-utilities\") pod \"redhat-operators-8b2kt\" (UID: \"3d9c4a73-0b6a-496b-b6d1-71f9cf468aca\") " pod="openshift-marketplace/redhat-operators-8b2kt" Nov 22 08:10:07 crc kubenswrapper[4853]: I1122 08:10:07.571871 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d9c4a73-0b6a-496b-b6d1-71f9cf468aca-catalog-content\") pod \"redhat-operators-8b2kt\" (UID: \"3d9c4a73-0b6a-496b-b6d1-71f9cf468aca\") " pod="openshift-marketplace/redhat-operators-8b2kt" Nov 22 08:10:07 crc kubenswrapper[4853]: I1122 08:10:07.571946 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dvgjp\" (UniqueName: \"kubernetes.io/projected/3d9c4a73-0b6a-496b-b6d1-71f9cf468aca-kube-api-access-dvgjp\") pod \"redhat-operators-8b2kt\" (UID: \"3d9c4a73-0b6a-496b-b6d1-71f9cf468aca\") " pod="openshift-marketplace/redhat-operators-8b2kt" Nov 22 08:10:07 crc kubenswrapper[4853]: I1122 08:10:07.675830 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d9c4a73-0b6a-496b-b6d1-71f9cf468aca-utilities\") pod \"redhat-operators-8b2kt\" (UID: \"3d9c4a73-0b6a-496b-b6d1-71f9cf468aca\") " pod="openshift-marketplace/redhat-operators-8b2kt" Nov 22 08:10:07 crc kubenswrapper[4853]: I1122 08:10:07.676006 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d9c4a73-0b6a-496b-b6d1-71f9cf468aca-catalog-content\") pod \"redhat-operators-8b2kt\" (UID: \"3d9c4a73-0b6a-496b-b6d1-71f9cf468aca\") " pod="openshift-marketplace/redhat-operators-8b2kt" Nov 22 08:10:07 crc kubenswrapper[4853]: I1122 08:10:07.676465 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d9c4a73-0b6a-496b-b6d1-71f9cf468aca-catalog-content\") pod \"redhat-operators-8b2kt\" (UID: \"3d9c4a73-0b6a-496b-b6d1-71f9cf468aca\") " pod="openshift-marketplace/redhat-operators-8b2kt" Nov 22 08:10:07 crc kubenswrapper[4853]: I1122 08:10:07.676407 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d9c4a73-0b6a-496b-b6d1-71f9cf468aca-utilities\") pod \"redhat-operators-8b2kt\" (UID: \"3d9c4a73-0b6a-496b-b6d1-71f9cf468aca\") " pod="openshift-marketplace/redhat-operators-8b2kt" Nov 22 08:10:07 crc kubenswrapper[4853]: I1122 08:10:07.676526 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dvgjp\" (UniqueName: \"kubernetes.io/projected/3d9c4a73-0b6a-496b-b6d1-71f9cf468aca-kube-api-access-dvgjp\") pod \"redhat-operators-8b2kt\" (UID: 
\"3d9c4a73-0b6a-496b-b6d1-71f9cf468aca\") " pod="openshift-marketplace/redhat-operators-8b2kt" Nov 22 08:10:07 crc kubenswrapper[4853]: I1122 08:10:07.697375 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dvgjp\" (UniqueName: \"kubernetes.io/projected/3d9c4a73-0b6a-496b-b6d1-71f9cf468aca-kube-api-access-dvgjp\") pod \"redhat-operators-8b2kt\" (UID: \"3d9c4a73-0b6a-496b-b6d1-71f9cf468aca\") " pod="openshift-marketplace/redhat-operators-8b2kt" Nov 22 08:10:07 crc kubenswrapper[4853]: I1122 08:10:07.724637 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8b2kt" Nov 22 08:10:08 crc kubenswrapper[4853]: I1122 08:10:08.335251 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8b2kt"] Nov 22 08:10:09 crc kubenswrapper[4853]: I1122 08:10:09.169202 4853 generic.go:334] "Generic (PLEG): container finished" podID="3d9c4a73-0b6a-496b-b6d1-71f9cf468aca" containerID="761f49d14ebaec132009a6956e5633eae33ca2a55e2e3c27327f62ad143994c7" exitCode=0 Nov 22 08:10:09 crc kubenswrapper[4853]: I1122 08:10:09.169289 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8b2kt" event={"ID":"3d9c4a73-0b6a-496b-b6d1-71f9cf468aca","Type":"ContainerDied","Data":"761f49d14ebaec132009a6956e5633eae33ca2a55e2e3c27327f62ad143994c7"} Nov 22 08:10:09 crc kubenswrapper[4853]: I1122 08:10:09.169731 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8b2kt" event={"ID":"3d9c4a73-0b6a-496b-b6d1-71f9cf468aca","Type":"ContainerStarted","Data":"72f2b94f2b37e258d77a8aecb3f9d6508942456db77043d8066dd938b986a562"} Nov 22 08:10:11 crc kubenswrapper[4853]: I1122 08:10:11.263654 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8b2kt" event={"ID":"3d9c4a73-0b6a-496b-b6d1-71f9cf468aca","Type":"ContainerStarted","Data":"686b7f2cd20e6f701b597340b9b856466acb858cdb3ec97c62a4e40485942ae8"} Nov 22 08:10:17 crc kubenswrapper[4853]: I1122 08:10:17.341836 4853 generic.go:334] "Generic (PLEG): container finished" podID="3d9c4a73-0b6a-496b-b6d1-71f9cf468aca" containerID="686b7f2cd20e6f701b597340b9b856466acb858cdb3ec97c62a4e40485942ae8" exitCode=0 Nov 22 08:10:17 crc kubenswrapper[4853]: I1122 08:10:17.341928 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8b2kt" event={"ID":"3d9c4a73-0b6a-496b-b6d1-71f9cf468aca","Type":"ContainerDied","Data":"686b7f2cd20e6f701b597340b9b856466acb858cdb3ec97c62a4e40485942ae8"} Nov 22 08:10:18 crc kubenswrapper[4853]: I1122 08:10:18.357414 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8b2kt" event={"ID":"3d9c4a73-0b6a-496b-b6d1-71f9cf468aca","Type":"ContainerStarted","Data":"ac3166b86010f9080029dc55fe5f1212e9132ec6319432525a1ab1b4b2b2a445"} Nov 22 08:10:18 crc kubenswrapper[4853]: I1122 08:10:18.385999 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-8b2kt" podStartSLOduration=2.794599354 podStartE2EDuration="11.385979009s" podCreationTimestamp="2025-11-22 08:10:07 +0000 UTC" firstStartedPulling="2025-11-22 08:10:09.172030775 +0000 UTC m=+3608.012653401" lastFinishedPulling="2025-11-22 08:10:17.76341043 +0000 UTC m=+3616.604033056" observedRunningTime="2025-11-22 08:10:18.375707074 +0000 UTC m=+3617.216329710" watchObservedRunningTime="2025-11-22 
08:10:18.385979009 +0000 UTC m=+3617.226601635" Nov 22 08:10:25 crc kubenswrapper[4853]: I1122 08:10:25.713230 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-78ph4"] Nov 22 08:10:25 crc kubenswrapper[4853]: I1122 08:10:25.716609 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-78ph4" Nov 22 08:10:25 crc kubenswrapper[4853]: I1122 08:10:25.724556 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-78ph4"] Nov 22 08:10:25 crc kubenswrapper[4853]: I1122 08:10:25.835908 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/62b069d5-119a-4e87-b697-86e11053ec1b-utilities\") pod \"community-operators-78ph4\" (UID: \"62b069d5-119a-4e87-b697-86e11053ec1b\") " pod="openshift-marketplace/community-operators-78ph4" Nov 22 08:10:25 crc kubenswrapper[4853]: I1122 08:10:25.836215 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/62b069d5-119a-4e87-b697-86e11053ec1b-catalog-content\") pod \"community-operators-78ph4\" (UID: \"62b069d5-119a-4e87-b697-86e11053ec1b\") " pod="openshift-marketplace/community-operators-78ph4" Nov 22 08:10:25 crc kubenswrapper[4853]: I1122 08:10:25.836429 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zcp4m\" (UniqueName: \"kubernetes.io/projected/62b069d5-119a-4e87-b697-86e11053ec1b-kube-api-access-zcp4m\") pod \"community-operators-78ph4\" (UID: \"62b069d5-119a-4e87-b697-86e11053ec1b\") " pod="openshift-marketplace/community-operators-78ph4" Nov 22 08:10:25 crc kubenswrapper[4853]: I1122 08:10:25.939007 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zcp4m\" (UniqueName: \"kubernetes.io/projected/62b069d5-119a-4e87-b697-86e11053ec1b-kube-api-access-zcp4m\") pod \"community-operators-78ph4\" (UID: \"62b069d5-119a-4e87-b697-86e11053ec1b\") " pod="openshift-marketplace/community-operators-78ph4" Nov 22 08:10:25 crc kubenswrapper[4853]: I1122 08:10:25.939350 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/62b069d5-119a-4e87-b697-86e11053ec1b-utilities\") pod \"community-operators-78ph4\" (UID: \"62b069d5-119a-4e87-b697-86e11053ec1b\") " pod="openshift-marketplace/community-operators-78ph4" Nov 22 08:10:25 crc kubenswrapper[4853]: I1122 08:10:25.939389 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/62b069d5-119a-4e87-b697-86e11053ec1b-catalog-content\") pod \"community-operators-78ph4\" (UID: \"62b069d5-119a-4e87-b697-86e11053ec1b\") " pod="openshift-marketplace/community-operators-78ph4" Nov 22 08:10:25 crc kubenswrapper[4853]: I1122 08:10:25.940047 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/62b069d5-119a-4e87-b697-86e11053ec1b-utilities\") pod \"community-operators-78ph4\" (UID: \"62b069d5-119a-4e87-b697-86e11053ec1b\") " pod="openshift-marketplace/community-operators-78ph4" Nov 22 08:10:25 crc kubenswrapper[4853]: I1122 08:10:25.940136 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/62b069d5-119a-4e87-b697-86e11053ec1b-catalog-content\") pod \"community-operators-78ph4\" (UID: \"62b069d5-119a-4e87-b697-86e11053ec1b\") " pod="openshift-marketplace/community-operators-78ph4" Nov 22 08:10:25 crc kubenswrapper[4853]: I1122 08:10:25.959078 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zcp4m\" (UniqueName: \"kubernetes.io/projected/62b069d5-119a-4e87-b697-86e11053ec1b-kube-api-access-zcp4m\") pod \"community-operators-78ph4\" (UID: \"62b069d5-119a-4e87-b697-86e11053ec1b\") " pod="openshift-marketplace/community-operators-78ph4" Nov 22 08:10:26 crc kubenswrapper[4853]: I1122 08:10:26.103975 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-78ph4" Nov 22 08:10:26 crc kubenswrapper[4853]: W1122 08:10:26.693761 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod62b069d5_119a_4e87_b697_86e11053ec1b.slice/crio-184dcb12430e1f0c2d2edd372ecdbd5bd48ed8db0281c46baf43e769c874c0c5 WatchSource:0}: Error finding container 184dcb12430e1f0c2d2edd372ecdbd5bd48ed8db0281c46baf43e769c874c0c5: Status 404 returned error can't find the container with id 184dcb12430e1f0c2d2edd372ecdbd5bd48ed8db0281c46baf43e769c874c0c5 Nov 22 08:10:26 crc kubenswrapper[4853]: I1122 08:10:26.700421 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-78ph4"] Nov 22 08:10:27 crc kubenswrapper[4853]: I1122 08:10:27.472139 4853 generic.go:334] "Generic (PLEG): container finished" podID="62b069d5-119a-4e87-b697-86e11053ec1b" containerID="f8162621434111900119f4f25dfa10b297ad69ad3c54d0669dbd2b3cf29ee7a6" exitCode=0 Nov 22 08:10:27 crc kubenswrapper[4853]: I1122 08:10:27.472203 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-78ph4" event={"ID":"62b069d5-119a-4e87-b697-86e11053ec1b","Type":"ContainerDied","Data":"f8162621434111900119f4f25dfa10b297ad69ad3c54d0669dbd2b3cf29ee7a6"} Nov 22 08:10:27 crc kubenswrapper[4853]: I1122 08:10:27.472691 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-78ph4" event={"ID":"62b069d5-119a-4e87-b697-86e11053ec1b","Type":"ContainerStarted","Data":"184dcb12430e1f0c2d2edd372ecdbd5bd48ed8db0281c46baf43e769c874c0c5"} Nov 22 08:10:27 crc kubenswrapper[4853]: I1122 08:10:27.725043 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-8b2kt" Nov 22 08:10:27 crc kubenswrapper[4853]: I1122 08:10:27.725101 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-8b2kt" Nov 22 08:10:28 crc kubenswrapper[4853]: I1122 08:10:28.487033 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-78ph4" event={"ID":"62b069d5-119a-4e87-b697-86e11053ec1b","Type":"ContainerStarted","Data":"d4c206e1687656026a7f707f019f8aca25904ff37186107c0ce3a0c4cb009118"} Nov 22 08:10:28 crc kubenswrapper[4853]: I1122 08:10:28.785969 4853 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-8b2kt" podUID="3d9c4a73-0b6a-496b-b6d1-71f9cf468aca" containerName="registry-server" probeResult="failure" output=< Nov 22 08:10:28 crc kubenswrapper[4853]: timeout: failed to connect service ":50051" within 1s Nov 22 08:10:28 crc 
kubenswrapper[4853]: > Nov 22 08:10:32 crc kubenswrapper[4853]: I1122 08:10:32.532887 4853 generic.go:334] "Generic (PLEG): container finished" podID="62b069d5-119a-4e87-b697-86e11053ec1b" containerID="d4c206e1687656026a7f707f019f8aca25904ff37186107c0ce3a0c4cb009118" exitCode=0 Nov 22 08:10:32 crc kubenswrapper[4853]: I1122 08:10:32.533104 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-78ph4" event={"ID":"62b069d5-119a-4e87-b697-86e11053ec1b","Type":"ContainerDied","Data":"d4c206e1687656026a7f707f019f8aca25904ff37186107c0ce3a0c4cb009118"} Nov 22 08:10:33 crc kubenswrapper[4853]: I1122 08:10:33.546428 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-78ph4" event={"ID":"62b069d5-119a-4e87-b697-86e11053ec1b","Type":"ContainerStarted","Data":"31e87e6fefbcda201db100aaaec1989715ce578b7d87d1d5ebc00b2a6ff7d9ea"} Nov 22 08:10:33 crc kubenswrapper[4853]: I1122 08:10:33.567780 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-78ph4" podStartSLOduration=2.8656058030000002 podStartE2EDuration="8.567746347s" podCreationTimestamp="2025-11-22 08:10:25 +0000 UTC" firstStartedPulling="2025-11-22 08:10:27.476095959 +0000 UTC m=+3626.316718605" lastFinishedPulling="2025-11-22 08:10:33.178236523 +0000 UTC m=+3632.018859149" observedRunningTime="2025-11-22 08:10:33.566982037 +0000 UTC m=+3632.407604683" watchObservedRunningTime="2025-11-22 08:10:33.567746347 +0000 UTC m=+3632.408368973" Nov 22 08:10:36 crc kubenswrapper[4853]: I1122 08:10:36.105442 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-78ph4" Nov 22 08:10:36 crc kubenswrapper[4853]: I1122 08:10:36.105916 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-78ph4" Nov 22 08:10:37 crc kubenswrapper[4853]: I1122 08:10:37.156052 4853 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-78ph4" podUID="62b069d5-119a-4e87-b697-86e11053ec1b" containerName="registry-server" probeResult="failure" output=< Nov 22 08:10:37 crc kubenswrapper[4853]: timeout: failed to connect service ":50051" within 1s Nov 22 08:10:37 crc kubenswrapper[4853]: > Nov 22 08:10:38 crc kubenswrapper[4853]: I1122 08:10:38.775973 4853 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-8b2kt" podUID="3d9c4a73-0b6a-496b-b6d1-71f9cf468aca" containerName="registry-server" probeResult="failure" output=< Nov 22 08:10:38 crc kubenswrapper[4853]: timeout: failed to connect service ":50051" within 1s Nov 22 08:10:38 crc kubenswrapper[4853]: > Nov 22 08:10:46 crc kubenswrapper[4853]: I1122 08:10:46.162730 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-78ph4" Nov 22 08:10:46 crc kubenswrapper[4853]: I1122 08:10:46.233692 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-78ph4" Nov 22 08:10:46 crc kubenswrapper[4853]: I1122 08:10:46.405463 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-78ph4"] Nov 22 08:10:47 crc kubenswrapper[4853]: I1122 08:10:47.721508 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-78ph4" 
podUID="62b069d5-119a-4e87-b697-86e11053ec1b" containerName="registry-server" containerID="cri-o://31e87e6fefbcda201db100aaaec1989715ce578b7d87d1d5ebc00b2a6ff7d9ea" gracePeriod=2 Nov 22 08:10:48 crc kubenswrapper[4853]: I1122 08:10:48.300784 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-78ph4" Nov 22 08:10:48 crc kubenswrapper[4853]: I1122 08:10:48.370073 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/62b069d5-119a-4e87-b697-86e11053ec1b-catalog-content\") pod \"62b069d5-119a-4e87-b697-86e11053ec1b\" (UID: \"62b069d5-119a-4e87-b697-86e11053ec1b\") " Nov 22 08:10:48 crc kubenswrapper[4853]: I1122 08:10:48.370192 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zcp4m\" (UniqueName: \"kubernetes.io/projected/62b069d5-119a-4e87-b697-86e11053ec1b-kube-api-access-zcp4m\") pod \"62b069d5-119a-4e87-b697-86e11053ec1b\" (UID: \"62b069d5-119a-4e87-b697-86e11053ec1b\") " Nov 22 08:10:48 crc kubenswrapper[4853]: I1122 08:10:48.370359 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/62b069d5-119a-4e87-b697-86e11053ec1b-utilities\") pod \"62b069d5-119a-4e87-b697-86e11053ec1b\" (UID: \"62b069d5-119a-4e87-b697-86e11053ec1b\") " Nov 22 08:10:48 crc kubenswrapper[4853]: I1122 08:10:48.371366 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/62b069d5-119a-4e87-b697-86e11053ec1b-utilities" (OuterVolumeSpecName: "utilities") pod "62b069d5-119a-4e87-b697-86e11053ec1b" (UID: "62b069d5-119a-4e87-b697-86e11053ec1b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 08:10:48 crc kubenswrapper[4853]: I1122 08:10:48.380282 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/62b069d5-119a-4e87-b697-86e11053ec1b-kube-api-access-zcp4m" (OuterVolumeSpecName: "kube-api-access-zcp4m") pod "62b069d5-119a-4e87-b697-86e11053ec1b" (UID: "62b069d5-119a-4e87-b697-86e11053ec1b"). InnerVolumeSpecName "kube-api-access-zcp4m". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 08:10:48 crc kubenswrapper[4853]: I1122 08:10:48.432373 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/62b069d5-119a-4e87-b697-86e11053ec1b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "62b069d5-119a-4e87-b697-86e11053ec1b" (UID: "62b069d5-119a-4e87-b697-86e11053ec1b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 08:10:48 crc kubenswrapper[4853]: I1122 08:10:48.472480 4853 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/62b069d5-119a-4e87-b697-86e11053ec1b-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 08:10:48 crc kubenswrapper[4853]: I1122 08:10:48.472529 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zcp4m\" (UniqueName: \"kubernetes.io/projected/62b069d5-119a-4e87-b697-86e11053ec1b-kube-api-access-zcp4m\") on node \"crc\" DevicePath \"\"" Nov 22 08:10:48 crc kubenswrapper[4853]: I1122 08:10:48.472541 4853 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/62b069d5-119a-4e87-b697-86e11053ec1b-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 08:10:48 crc kubenswrapper[4853]: I1122 08:10:48.738517 4853 generic.go:334] "Generic (PLEG): container finished" podID="62b069d5-119a-4e87-b697-86e11053ec1b" containerID="31e87e6fefbcda201db100aaaec1989715ce578b7d87d1d5ebc00b2a6ff7d9ea" exitCode=0 Nov 22 08:10:48 crc kubenswrapper[4853]: I1122 08:10:48.738570 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-78ph4" event={"ID":"62b069d5-119a-4e87-b697-86e11053ec1b","Type":"ContainerDied","Data":"31e87e6fefbcda201db100aaaec1989715ce578b7d87d1d5ebc00b2a6ff7d9ea"} Nov 22 08:10:48 crc kubenswrapper[4853]: I1122 08:10:48.738591 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-78ph4" Nov 22 08:10:48 crc kubenswrapper[4853]: I1122 08:10:48.738609 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-78ph4" event={"ID":"62b069d5-119a-4e87-b697-86e11053ec1b","Type":"ContainerDied","Data":"184dcb12430e1f0c2d2edd372ecdbd5bd48ed8db0281c46baf43e769c874c0c5"} Nov 22 08:10:48 crc kubenswrapper[4853]: I1122 08:10:48.738639 4853 scope.go:117] "RemoveContainer" containerID="31e87e6fefbcda201db100aaaec1989715ce578b7d87d1d5ebc00b2a6ff7d9ea" Nov 22 08:10:48 crc kubenswrapper[4853]: I1122 08:10:48.776591 4853 scope.go:117] "RemoveContainer" containerID="d4c206e1687656026a7f707f019f8aca25904ff37186107c0ce3a0c4cb009118" Nov 22 08:10:48 crc kubenswrapper[4853]: I1122 08:10:48.783833 4853 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-8b2kt" podUID="3d9c4a73-0b6a-496b-b6d1-71f9cf468aca" containerName="registry-server" probeResult="failure" output=< Nov 22 08:10:48 crc kubenswrapper[4853]: timeout: failed to connect service ":50051" within 1s Nov 22 08:10:48 crc kubenswrapper[4853]: > Nov 22 08:10:48 crc kubenswrapper[4853]: I1122 08:10:48.783976 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-78ph4"] Nov 22 08:10:48 crc kubenswrapper[4853]: I1122 08:10:48.798742 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-78ph4"] Nov 22 08:10:48 crc kubenswrapper[4853]: I1122 08:10:48.842480 4853 scope.go:117] "RemoveContainer" containerID="f8162621434111900119f4f25dfa10b297ad69ad3c54d0669dbd2b3cf29ee7a6" Nov 22 08:10:48 crc kubenswrapper[4853]: I1122 08:10:48.884607 4853 scope.go:117] "RemoveContainer" containerID="31e87e6fefbcda201db100aaaec1989715ce578b7d87d1d5ebc00b2a6ff7d9ea" Nov 22 08:10:48 crc kubenswrapper[4853]: E1122 08:10:48.888979 4853 log.go:32] "ContainerStatus from runtime 
service failed" err="rpc error: code = NotFound desc = could not find container \"31e87e6fefbcda201db100aaaec1989715ce578b7d87d1d5ebc00b2a6ff7d9ea\": container with ID starting with 31e87e6fefbcda201db100aaaec1989715ce578b7d87d1d5ebc00b2a6ff7d9ea not found: ID does not exist" containerID="31e87e6fefbcda201db100aaaec1989715ce578b7d87d1d5ebc00b2a6ff7d9ea" Nov 22 08:10:48 crc kubenswrapper[4853]: I1122 08:10:48.889055 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"31e87e6fefbcda201db100aaaec1989715ce578b7d87d1d5ebc00b2a6ff7d9ea"} err="failed to get container status \"31e87e6fefbcda201db100aaaec1989715ce578b7d87d1d5ebc00b2a6ff7d9ea\": rpc error: code = NotFound desc = could not find container \"31e87e6fefbcda201db100aaaec1989715ce578b7d87d1d5ebc00b2a6ff7d9ea\": container with ID starting with 31e87e6fefbcda201db100aaaec1989715ce578b7d87d1d5ebc00b2a6ff7d9ea not found: ID does not exist" Nov 22 08:10:48 crc kubenswrapper[4853]: I1122 08:10:48.889099 4853 scope.go:117] "RemoveContainer" containerID="d4c206e1687656026a7f707f019f8aca25904ff37186107c0ce3a0c4cb009118" Nov 22 08:10:48 crc kubenswrapper[4853]: E1122 08:10:48.890024 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d4c206e1687656026a7f707f019f8aca25904ff37186107c0ce3a0c4cb009118\": container with ID starting with d4c206e1687656026a7f707f019f8aca25904ff37186107c0ce3a0c4cb009118 not found: ID does not exist" containerID="d4c206e1687656026a7f707f019f8aca25904ff37186107c0ce3a0c4cb009118" Nov 22 08:10:48 crc kubenswrapper[4853]: I1122 08:10:48.890095 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d4c206e1687656026a7f707f019f8aca25904ff37186107c0ce3a0c4cb009118"} err="failed to get container status \"d4c206e1687656026a7f707f019f8aca25904ff37186107c0ce3a0c4cb009118\": rpc error: code = NotFound desc = could not find container \"d4c206e1687656026a7f707f019f8aca25904ff37186107c0ce3a0c4cb009118\": container with ID starting with d4c206e1687656026a7f707f019f8aca25904ff37186107c0ce3a0c4cb009118 not found: ID does not exist" Nov 22 08:10:48 crc kubenswrapper[4853]: I1122 08:10:48.890129 4853 scope.go:117] "RemoveContainer" containerID="f8162621434111900119f4f25dfa10b297ad69ad3c54d0669dbd2b3cf29ee7a6" Nov 22 08:10:48 crc kubenswrapper[4853]: E1122 08:10:48.890827 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f8162621434111900119f4f25dfa10b297ad69ad3c54d0669dbd2b3cf29ee7a6\": container with ID starting with f8162621434111900119f4f25dfa10b297ad69ad3c54d0669dbd2b3cf29ee7a6 not found: ID does not exist" containerID="f8162621434111900119f4f25dfa10b297ad69ad3c54d0669dbd2b3cf29ee7a6" Nov 22 08:10:48 crc kubenswrapper[4853]: I1122 08:10:48.890862 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f8162621434111900119f4f25dfa10b297ad69ad3c54d0669dbd2b3cf29ee7a6"} err="failed to get container status \"f8162621434111900119f4f25dfa10b297ad69ad3c54d0669dbd2b3cf29ee7a6\": rpc error: code = NotFound desc = could not find container \"f8162621434111900119f4f25dfa10b297ad69ad3c54d0669dbd2b3cf29ee7a6\": container with ID starting with f8162621434111900119f4f25dfa10b297ad69ad3c54d0669dbd2b3cf29ee7a6 not found: ID does not exist" Nov 22 08:10:49 crc kubenswrapper[4853]: I1122 08:10:49.764473 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="62b069d5-119a-4e87-b697-86e11053ec1b" path="/var/lib/kubelet/pods/62b069d5-119a-4e87-b697-86e11053ec1b/volumes" Nov 22 08:10:57 crc kubenswrapper[4853]: I1122 08:10:57.806235 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-8b2kt" Nov 22 08:10:57 crc kubenswrapper[4853]: I1122 08:10:57.873649 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-8b2kt" Nov 22 08:10:58 crc kubenswrapper[4853]: I1122 08:10:58.055301 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8b2kt"] Nov 22 08:10:58 crc kubenswrapper[4853]: I1122 08:10:58.879700 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-8b2kt" podUID="3d9c4a73-0b6a-496b-b6d1-71f9cf468aca" containerName="registry-server" containerID="cri-o://ac3166b86010f9080029dc55fe5f1212e9132ec6319432525a1ab1b4b2b2a445" gracePeriod=2 Nov 22 08:10:59 crc kubenswrapper[4853]: I1122 08:10:59.391110 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8b2kt" Nov 22 08:10:59 crc kubenswrapper[4853]: I1122 08:10:59.493487 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dvgjp\" (UniqueName: \"kubernetes.io/projected/3d9c4a73-0b6a-496b-b6d1-71f9cf468aca-kube-api-access-dvgjp\") pod \"3d9c4a73-0b6a-496b-b6d1-71f9cf468aca\" (UID: \"3d9c4a73-0b6a-496b-b6d1-71f9cf468aca\") " Nov 22 08:10:59 crc kubenswrapper[4853]: I1122 08:10:59.494089 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d9c4a73-0b6a-496b-b6d1-71f9cf468aca-utilities\") pod \"3d9c4a73-0b6a-496b-b6d1-71f9cf468aca\" (UID: \"3d9c4a73-0b6a-496b-b6d1-71f9cf468aca\") " Nov 22 08:10:59 crc kubenswrapper[4853]: I1122 08:10:59.494127 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d9c4a73-0b6a-496b-b6d1-71f9cf468aca-catalog-content\") pod \"3d9c4a73-0b6a-496b-b6d1-71f9cf468aca\" (UID: \"3d9c4a73-0b6a-496b-b6d1-71f9cf468aca\") " Nov 22 08:10:59 crc kubenswrapper[4853]: I1122 08:10:59.494689 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3d9c4a73-0b6a-496b-b6d1-71f9cf468aca-utilities" (OuterVolumeSpecName: "utilities") pod "3d9c4a73-0b6a-496b-b6d1-71f9cf468aca" (UID: "3d9c4a73-0b6a-496b-b6d1-71f9cf468aca"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 08:10:59 crc kubenswrapper[4853]: I1122 08:10:59.496146 4853 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d9c4a73-0b6a-496b-b6d1-71f9cf468aca-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 08:10:59 crc kubenswrapper[4853]: I1122 08:10:59.501485 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d9c4a73-0b6a-496b-b6d1-71f9cf468aca-kube-api-access-dvgjp" (OuterVolumeSpecName: "kube-api-access-dvgjp") pod "3d9c4a73-0b6a-496b-b6d1-71f9cf468aca" (UID: "3d9c4a73-0b6a-496b-b6d1-71f9cf468aca"). InnerVolumeSpecName "kube-api-access-dvgjp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 08:10:59 crc kubenswrapper[4853]: I1122 08:10:59.571141 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3d9c4a73-0b6a-496b-b6d1-71f9cf468aca-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3d9c4a73-0b6a-496b-b6d1-71f9cf468aca" (UID: "3d9c4a73-0b6a-496b-b6d1-71f9cf468aca"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 08:10:59 crc kubenswrapper[4853]: I1122 08:10:59.599507 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dvgjp\" (UniqueName: \"kubernetes.io/projected/3d9c4a73-0b6a-496b-b6d1-71f9cf468aca-kube-api-access-dvgjp\") on node \"crc\" DevicePath \"\"" Nov 22 08:10:59 crc kubenswrapper[4853]: I1122 08:10:59.599601 4853 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d9c4a73-0b6a-496b-b6d1-71f9cf468aca-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 08:10:59 crc kubenswrapper[4853]: I1122 08:10:59.890228 4853 generic.go:334] "Generic (PLEG): container finished" podID="3d9c4a73-0b6a-496b-b6d1-71f9cf468aca" containerID="ac3166b86010f9080029dc55fe5f1212e9132ec6319432525a1ab1b4b2b2a445" exitCode=0 Nov 22 08:10:59 crc kubenswrapper[4853]: I1122 08:10:59.890259 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8b2kt" Nov 22 08:10:59 crc kubenswrapper[4853]: I1122 08:10:59.890276 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8b2kt" event={"ID":"3d9c4a73-0b6a-496b-b6d1-71f9cf468aca","Type":"ContainerDied","Data":"ac3166b86010f9080029dc55fe5f1212e9132ec6319432525a1ab1b4b2b2a445"} Nov 22 08:10:59 crc kubenswrapper[4853]: I1122 08:10:59.890311 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8b2kt" event={"ID":"3d9c4a73-0b6a-496b-b6d1-71f9cf468aca","Type":"ContainerDied","Data":"72f2b94f2b37e258d77a8aecb3f9d6508942456db77043d8066dd938b986a562"} Nov 22 08:10:59 crc kubenswrapper[4853]: I1122 08:10:59.890326 4853 scope.go:117] "RemoveContainer" containerID="ac3166b86010f9080029dc55fe5f1212e9132ec6319432525a1ab1b4b2b2a445" Nov 22 08:10:59 crc kubenswrapper[4853]: I1122 08:10:59.914496 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8b2kt"] Nov 22 08:10:59 crc kubenswrapper[4853]: I1122 08:10:59.917178 4853 scope.go:117] "RemoveContainer" containerID="686b7f2cd20e6f701b597340b9b856466acb858cdb3ec97c62a4e40485942ae8" Nov 22 08:10:59 crc kubenswrapper[4853]: I1122 08:10:59.924337 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-8b2kt"] Nov 22 08:10:59 crc kubenswrapper[4853]: I1122 08:10:59.948970 4853 scope.go:117] "RemoveContainer" containerID="761f49d14ebaec132009a6956e5633eae33ca2a55e2e3c27327f62ad143994c7" Nov 22 08:10:59 crc kubenswrapper[4853]: I1122 08:10:59.999451 4853 scope.go:117] "RemoveContainer" containerID="ac3166b86010f9080029dc55fe5f1212e9132ec6319432525a1ab1b4b2b2a445" Nov 22 08:10:59 crc kubenswrapper[4853]: E1122 08:10:59.999914 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ac3166b86010f9080029dc55fe5f1212e9132ec6319432525a1ab1b4b2b2a445\": container with ID starting with ac3166b86010f9080029dc55fe5f1212e9132ec6319432525a1ab1b4b2b2a445 
not found: ID does not exist" containerID="ac3166b86010f9080029dc55fe5f1212e9132ec6319432525a1ab1b4b2b2a445" Nov 22 08:11:00 crc kubenswrapper[4853]: I1122 08:10:59.999948 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ac3166b86010f9080029dc55fe5f1212e9132ec6319432525a1ab1b4b2b2a445"} err="failed to get container status \"ac3166b86010f9080029dc55fe5f1212e9132ec6319432525a1ab1b4b2b2a445\": rpc error: code = NotFound desc = could not find container \"ac3166b86010f9080029dc55fe5f1212e9132ec6319432525a1ab1b4b2b2a445\": container with ID starting with ac3166b86010f9080029dc55fe5f1212e9132ec6319432525a1ab1b4b2b2a445 not found: ID does not exist" Nov 22 08:11:00 crc kubenswrapper[4853]: I1122 08:10:59.999971 4853 scope.go:117] "RemoveContainer" containerID="686b7f2cd20e6f701b597340b9b856466acb858cdb3ec97c62a4e40485942ae8" Nov 22 08:11:00 crc kubenswrapper[4853]: E1122 08:11:00.000458 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"686b7f2cd20e6f701b597340b9b856466acb858cdb3ec97c62a4e40485942ae8\": container with ID starting with 686b7f2cd20e6f701b597340b9b856466acb858cdb3ec97c62a4e40485942ae8 not found: ID does not exist" containerID="686b7f2cd20e6f701b597340b9b856466acb858cdb3ec97c62a4e40485942ae8" Nov 22 08:11:00 crc kubenswrapper[4853]: I1122 08:11:00.000485 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"686b7f2cd20e6f701b597340b9b856466acb858cdb3ec97c62a4e40485942ae8"} err="failed to get container status \"686b7f2cd20e6f701b597340b9b856466acb858cdb3ec97c62a4e40485942ae8\": rpc error: code = NotFound desc = could not find container \"686b7f2cd20e6f701b597340b9b856466acb858cdb3ec97c62a4e40485942ae8\": container with ID starting with 686b7f2cd20e6f701b597340b9b856466acb858cdb3ec97c62a4e40485942ae8 not found: ID does not exist" Nov 22 08:11:00 crc kubenswrapper[4853]: I1122 08:11:00.000502 4853 scope.go:117] "RemoveContainer" containerID="761f49d14ebaec132009a6956e5633eae33ca2a55e2e3c27327f62ad143994c7" Nov 22 08:11:00 crc kubenswrapper[4853]: E1122 08:11:00.000797 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"761f49d14ebaec132009a6956e5633eae33ca2a55e2e3c27327f62ad143994c7\": container with ID starting with 761f49d14ebaec132009a6956e5633eae33ca2a55e2e3c27327f62ad143994c7 not found: ID does not exist" containerID="761f49d14ebaec132009a6956e5633eae33ca2a55e2e3c27327f62ad143994c7" Nov 22 08:11:00 crc kubenswrapper[4853]: I1122 08:11:00.000826 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"761f49d14ebaec132009a6956e5633eae33ca2a55e2e3c27327f62ad143994c7"} err="failed to get container status \"761f49d14ebaec132009a6956e5633eae33ca2a55e2e3c27327f62ad143994c7\": rpc error: code = NotFound desc = could not find container \"761f49d14ebaec132009a6956e5633eae33ca2a55e2e3c27327f62ad143994c7\": container with ID starting with 761f49d14ebaec132009a6956e5633eae33ca2a55e2e3c27327f62ad143994c7 not found: ID does not exist" Nov 22 08:11:01 crc kubenswrapper[4853]: I1122 08:11:01.297345 4853 patch_prober.go:28] interesting pod/machine-config-daemon-fflvd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 08:11:01 crc 
kubenswrapper[4853]: I1122 08:11:01.297707 4853 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 08:11:01 crc kubenswrapper[4853]: I1122 08:11:01.764236 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d9c4a73-0b6a-496b-b6d1-71f9cf468aca" path="/var/lib/kubelet/pods/3d9c4a73-0b6a-496b-b6d1-71f9cf468aca/volumes" Nov 22 08:11:31 crc kubenswrapper[4853]: I1122 08:11:31.297543 4853 patch_prober.go:28] interesting pod/machine-config-daemon-fflvd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 08:11:31 crc kubenswrapper[4853]: I1122 08:11:31.298215 4853 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 08:12:01 crc kubenswrapper[4853]: I1122 08:12:01.297723 4853 patch_prober.go:28] interesting pod/machine-config-daemon-fflvd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 08:12:01 crc kubenswrapper[4853]: I1122 08:12:01.298391 4853 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 08:12:01 crc kubenswrapper[4853]: I1122 08:12:01.298451 4853 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" Nov 22 08:12:01 crc kubenswrapper[4853]: I1122 08:12:01.299485 4853 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b5c95fc5e1ba497c01e2e8e3690b9c21c741f0883f87a9c7ab06d100befb50f5"} pod="openshift-machine-config-operator/machine-config-daemon-fflvd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 22 08:12:01 crc kubenswrapper[4853]: I1122 08:12:01.299544 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" containerName="machine-config-daemon" containerID="cri-o://b5c95fc5e1ba497c01e2e8e3690b9c21c741f0883f87a9c7ab06d100befb50f5" gracePeriod=600 Nov 22 08:12:01 crc kubenswrapper[4853]: I1122 08:12:01.576109 4853 generic.go:334] "Generic (PLEG): container finished" podID="476c875a-2b87-419a-8042-0ba059620fd8" containerID="b5c95fc5e1ba497c01e2e8e3690b9c21c741f0883f87a9c7ab06d100befb50f5" exitCode=0 Nov 22 08:12:01 crc kubenswrapper[4853]: I1122 08:12:01.576214 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-fflvd" event={"ID":"476c875a-2b87-419a-8042-0ba059620fd8","Type":"ContainerDied","Data":"b5c95fc5e1ba497c01e2e8e3690b9c21c741f0883f87a9c7ab06d100befb50f5"} Nov 22 08:12:01 crc kubenswrapper[4853]: I1122 08:12:01.576607 4853 scope.go:117] "RemoveContainer" containerID="3ec8165bab4fb75b436ac780db4c9acf7acf3a80877e56710bdae29a7122f42c" Nov 22 08:12:02 crc kubenswrapper[4853]: I1122 08:12:02.592266 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" event={"ID":"476c875a-2b87-419a-8042-0ba059620fd8","Type":"ContainerStarted","Data":"43c489ef7c014ec88f2022965726354fdac660d1573666e320842974a56f8ca4"} Nov 22 08:12:09 crc kubenswrapper[4853]: I1122 08:12:09.224005 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-2w2tc"] Nov 22 08:12:09 crc kubenswrapper[4853]: E1122 08:12:09.224988 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62b069d5-119a-4e87-b697-86e11053ec1b" containerName="extract-content" Nov 22 08:12:09 crc kubenswrapper[4853]: I1122 08:12:09.225002 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="62b069d5-119a-4e87-b697-86e11053ec1b" containerName="extract-content" Nov 22 08:12:09 crc kubenswrapper[4853]: E1122 08:12:09.225011 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d9c4a73-0b6a-496b-b6d1-71f9cf468aca" containerName="extract-utilities" Nov 22 08:12:09 crc kubenswrapper[4853]: I1122 08:12:09.225017 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d9c4a73-0b6a-496b-b6d1-71f9cf468aca" containerName="extract-utilities" Nov 22 08:12:09 crc kubenswrapper[4853]: E1122 08:12:09.225026 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62b069d5-119a-4e87-b697-86e11053ec1b" containerName="extract-utilities" Nov 22 08:12:09 crc kubenswrapper[4853]: I1122 08:12:09.225032 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="62b069d5-119a-4e87-b697-86e11053ec1b" containerName="extract-utilities" Nov 22 08:12:09 crc kubenswrapper[4853]: E1122 08:12:09.225057 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d9c4a73-0b6a-496b-b6d1-71f9cf468aca" containerName="extract-content" Nov 22 08:12:09 crc kubenswrapper[4853]: I1122 08:12:09.225064 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d9c4a73-0b6a-496b-b6d1-71f9cf468aca" containerName="extract-content" Nov 22 08:12:09 crc kubenswrapper[4853]: E1122 08:12:09.225087 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62b069d5-119a-4e87-b697-86e11053ec1b" containerName="registry-server" Nov 22 08:12:09 crc kubenswrapper[4853]: I1122 08:12:09.225093 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="62b069d5-119a-4e87-b697-86e11053ec1b" containerName="registry-server" Nov 22 08:12:09 crc kubenswrapper[4853]: E1122 08:12:09.225108 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d9c4a73-0b6a-496b-b6d1-71f9cf468aca" containerName="registry-server" Nov 22 08:12:09 crc kubenswrapper[4853]: I1122 08:12:09.225113 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d9c4a73-0b6a-496b-b6d1-71f9cf468aca" containerName="registry-server" Nov 22 08:12:09 crc kubenswrapper[4853]: I1122 08:12:09.225321 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="62b069d5-119a-4e87-b697-86e11053ec1b" containerName="registry-server" Nov 22 08:12:09 crc kubenswrapper[4853]: I1122 
08:12:09.225339 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d9c4a73-0b6a-496b-b6d1-71f9cf468aca" containerName="registry-server" Nov 22 08:12:09 crc kubenswrapper[4853]: I1122 08:12:09.226966 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2w2tc" Nov 22 08:12:09 crc kubenswrapper[4853]: I1122 08:12:09.239741 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2w2tc"] Nov 22 08:12:09 crc kubenswrapper[4853]: I1122 08:12:09.370383 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xvrl\" (UniqueName: \"kubernetes.io/projected/503e673e-922e-4f5e-ba39-19c6d9b0c804-kube-api-access-5xvrl\") pod \"certified-operators-2w2tc\" (UID: \"503e673e-922e-4f5e-ba39-19c6d9b0c804\") " pod="openshift-marketplace/certified-operators-2w2tc" Nov 22 08:12:09 crc kubenswrapper[4853]: I1122 08:12:09.370773 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/503e673e-922e-4f5e-ba39-19c6d9b0c804-catalog-content\") pod \"certified-operators-2w2tc\" (UID: \"503e673e-922e-4f5e-ba39-19c6d9b0c804\") " pod="openshift-marketplace/certified-operators-2w2tc" Nov 22 08:12:09 crc kubenswrapper[4853]: I1122 08:12:09.371035 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/503e673e-922e-4f5e-ba39-19c6d9b0c804-utilities\") pod \"certified-operators-2w2tc\" (UID: \"503e673e-922e-4f5e-ba39-19c6d9b0c804\") " pod="openshift-marketplace/certified-operators-2w2tc" Nov 22 08:12:09 crc kubenswrapper[4853]: I1122 08:12:09.474047 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5xvrl\" (UniqueName: \"kubernetes.io/projected/503e673e-922e-4f5e-ba39-19c6d9b0c804-kube-api-access-5xvrl\") pod \"certified-operators-2w2tc\" (UID: \"503e673e-922e-4f5e-ba39-19c6d9b0c804\") " pod="openshift-marketplace/certified-operators-2w2tc" Nov 22 08:12:09 crc kubenswrapper[4853]: I1122 08:12:09.474266 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/503e673e-922e-4f5e-ba39-19c6d9b0c804-catalog-content\") pod \"certified-operators-2w2tc\" (UID: \"503e673e-922e-4f5e-ba39-19c6d9b0c804\") " pod="openshift-marketplace/certified-operators-2w2tc" Nov 22 08:12:09 crc kubenswrapper[4853]: I1122 08:12:09.474335 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/503e673e-922e-4f5e-ba39-19c6d9b0c804-utilities\") pod \"certified-operators-2w2tc\" (UID: \"503e673e-922e-4f5e-ba39-19c6d9b0c804\") " pod="openshift-marketplace/certified-operators-2w2tc" Nov 22 08:12:09 crc kubenswrapper[4853]: I1122 08:12:09.474894 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/503e673e-922e-4f5e-ba39-19c6d9b0c804-catalog-content\") pod \"certified-operators-2w2tc\" (UID: \"503e673e-922e-4f5e-ba39-19c6d9b0c804\") " pod="openshift-marketplace/certified-operators-2w2tc" Nov 22 08:12:09 crc kubenswrapper[4853]: I1122 08:12:09.474936 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/503e673e-922e-4f5e-ba39-19c6d9b0c804-utilities\") pod \"certified-operators-2w2tc\" (UID: \"503e673e-922e-4f5e-ba39-19c6d9b0c804\") " pod="openshift-marketplace/certified-operators-2w2tc" Nov 22 08:12:09 crc kubenswrapper[4853]: I1122 08:12:09.493583 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5xvrl\" (UniqueName: \"kubernetes.io/projected/503e673e-922e-4f5e-ba39-19c6d9b0c804-kube-api-access-5xvrl\") pod \"certified-operators-2w2tc\" (UID: \"503e673e-922e-4f5e-ba39-19c6d9b0c804\") " pod="openshift-marketplace/certified-operators-2w2tc" Nov 22 08:12:09 crc kubenswrapper[4853]: I1122 08:12:09.552757 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2w2tc" Nov 22 08:12:10 crc kubenswrapper[4853]: I1122 08:12:10.151065 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2w2tc"] Nov 22 08:12:10 crc kubenswrapper[4853]: I1122 08:12:10.703289 4853 generic.go:334] "Generic (PLEG): container finished" podID="503e673e-922e-4f5e-ba39-19c6d9b0c804" containerID="1a641b91c20f7aaf279b3dfcb48fbcf2589984e3edfbc922d8dbc1925aaeb256" exitCode=0 Nov 22 08:12:10 crc kubenswrapper[4853]: I1122 08:12:10.703350 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2w2tc" event={"ID":"503e673e-922e-4f5e-ba39-19c6d9b0c804","Type":"ContainerDied","Data":"1a641b91c20f7aaf279b3dfcb48fbcf2589984e3edfbc922d8dbc1925aaeb256"} Nov 22 08:12:10 crc kubenswrapper[4853]: I1122 08:12:10.703422 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2w2tc" event={"ID":"503e673e-922e-4f5e-ba39-19c6d9b0c804","Type":"ContainerStarted","Data":"6dc5c83d4b73a5db1f359729c5e21353ba4e3579f6a6f4cc6a0aeb0826dbeeab"} Nov 22 08:12:10 crc kubenswrapper[4853]: I1122 08:12:10.705877 4853 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 22 08:12:11 crc kubenswrapper[4853]: I1122 08:12:11.727116 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2w2tc" event={"ID":"503e673e-922e-4f5e-ba39-19c6d9b0c804","Type":"ContainerStarted","Data":"cc8749bf8ec39a47dbf6d9429107304ccf293129c996e361627551d92e00e771"} Nov 22 08:12:13 crc kubenswrapper[4853]: I1122 08:12:13.754130 4853 generic.go:334] "Generic (PLEG): container finished" podID="503e673e-922e-4f5e-ba39-19c6d9b0c804" containerID="cc8749bf8ec39a47dbf6d9429107304ccf293129c996e361627551d92e00e771" exitCode=0 Nov 22 08:12:13 crc kubenswrapper[4853]: I1122 08:12:13.763913 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2w2tc" event={"ID":"503e673e-922e-4f5e-ba39-19c6d9b0c804","Type":"ContainerDied","Data":"cc8749bf8ec39a47dbf6d9429107304ccf293129c996e361627551d92e00e771"} Nov 22 08:12:14 crc kubenswrapper[4853]: I1122 08:12:14.768559 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2w2tc" event={"ID":"503e673e-922e-4f5e-ba39-19c6d9b0c804","Type":"ContainerStarted","Data":"961e8bcae39f84c5220a59058fc6316a5a5ce965b77ab52286993b01a9e0e0e2"} Nov 22 08:12:14 crc kubenswrapper[4853]: I1122 08:12:14.798818 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-2w2tc" podStartSLOduration=2.305552052 podStartE2EDuration="5.798792967s" 
podCreationTimestamp="2025-11-22 08:12:09 +0000 UTC" firstStartedPulling="2025-11-22 08:12:10.7056418 +0000 UTC m=+3729.546264426" lastFinishedPulling="2025-11-22 08:12:14.198882715 +0000 UTC m=+3733.039505341" observedRunningTime="2025-11-22 08:12:14.787957226 +0000 UTC m=+3733.628579862" watchObservedRunningTime="2025-11-22 08:12:14.798792967 +0000 UTC m=+3733.639415593" Nov 22 08:12:19 crc kubenswrapper[4853]: I1122 08:12:19.553243 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-2w2tc" Nov 22 08:12:19 crc kubenswrapper[4853]: I1122 08:12:19.554373 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-2w2tc" Nov 22 08:12:19 crc kubenswrapper[4853]: I1122 08:12:19.612025 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-2w2tc" Nov 22 08:12:19 crc kubenswrapper[4853]: I1122 08:12:19.894986 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-2w2tc" Nov 22 08:12:19 crc kubenswrapper[4853]: I1122 08:12:19.944392 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-2w2tc"] Nov 22 08:12:21 crc kubenswrapper[4853]: I1122 08:12:21.853523 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-2w2tc" podUID="503e673e-922e-4f5e-ba39-19c6d9b0c804" containerName="registry-server" containerID="cri-o://961e8bcae39f84c5220a59058fc6316a5a5ce965b77ab52286993b01a9e0e0e2" gracePeriod=2 Nov 22 08:12:22 crc kubenswrapper[4853]: I1122 08:12:22.363286 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2w2tc" Nov 22 08:12:22 crc kubenswrapper[4853]: I1122 08:12:22.548156 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/503e673e-922e-4f5e-ba39-19c6d9b0c804-utilities\") pod \"503e673e-922e-4f5e-ba39-19c6d9b0c804\" (UID: \"503e673e-922e-4f5e-ba39-19c6d9b0c804\") " Nov 22 08:12:22 crc kubenswrapper[4853]: I1122 08:12:22.548329 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5xvrl\" (UniqueName: \"kubernetes.io/projected/503e673e-922e-4f5e-ba39-19c6d9b0c804-kube-api-access-5xvrl\") pod \"503e673e-922e-4f5e-ba39-19c6d9b0c804\" (UID: \"503e673e-922e-4f5e-ba39-19c6d9b0c804\") " Nov 22 08:12:22 crc kubenswrapper[4853]: I1122 08:12:22.548594 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/503e673e-922e-4f5e-ba39-19c6d9b0c804-catalog-content\") pod \"503e673e-922e-4f5e-ba39-19c6d9b0c804\" (UID: \"503e673e-922e-4f5e-ba39-19c6d9b0c804\") " Nov 22 08:12:22 crc kubenswrapper[4853]: I1122 08:12:22.549247 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/503e673e-922e-4f5e-ba39-19c6d9b0c804-utilities" (OuterVolumeSpecName: "utilities") pod "503e673e-922e-4f5e-ba39-19c6d9b0c804" (UID: "503e673e-922e-4f5e-ba39-19c6d9b0c804"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 08:12:22 crc kubenswrapper[4853]: I1122 08:12:22.551539 4853 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/503e673e-922e-4f5e-ba39-19c6d9b0c804-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 08:12:22 crc kubenswrapper[4853]: I1122 08:12:22.557100 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/503e673e-922e-4f5e-ba39-19c6d9b0c804-kube-api-access-5xvrl" (OuterVolumeSpecName: "kube-api-access-5xvrl") pod "503e673e-922e-4f5e-ba39-19c6d9b0c804" (UID: "503e673e-922e-4f5e-ba39-19c6d9b0c804"). InnerVolumeSpecName "kube-api-access-5xvrl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 08:12:22 crc kubenswrapper[4853]: I1122 08:12:22.654291 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5xvrl\" (UniqueName: \"kubernetes.io/projected/503e673e-922e-4f5e-ba39-19c6d9b0c804-kube-api-access-5xvrl\") on node \"crc\" DevicePath \"\"" Nov 22 08:12:22 crc kubenswrapper[4853]: I1122 08:12:22.868687 4853 generic.go:334] "Generic (PLEG): container finished" podID="503e673e-922e-4f5e-ba39-19c6d9b0c804" containerID="961e8bcae39f84c5220a59058fc6316a5a5ce965b77ab52286993b01a9e0e0e2" exitCode=0 Nov 22 08:12:22 crc kubenswrapper[4853]: I1122 08:12:22.868800 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2w2tc" Nov 22 08:12:22 crc kubenswrapper[4853]: I1122 08:12:22.868818 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2w2tc" event={"ID":"503e673e-922e-4f5e-ba39-19c6d9b0c804","Type":"ContainerDied","Data":"961e8bcae39f84c5220a59058fc6316a5a5ce965b77ab52286993b01a9e0e0e2"} Nov 22 08:12:22 crc kubenswrapper[4853]: I1122 08:12:22.870130 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2w2tc" event={"ID":"503e673e-922e-4f5e-ba39-19c6d9b0c804","Type":"ContainerDied","Data":"6dc5c83d4b73a5db1f359729c5e21353ba4e3579f6a6f4cc6a0aeb0826dbeeab"} Nov 22 08:12:22 crc kubenswrapper[4853]: I1122 08:12:22.870178 4853 scope.go:117] "RemoveContainer" containerID="961e8bcae39f84c5220a59058fc6316a5a5ce965b77ab52286993b01a9e0e0e2" Nov 22 08:12:22 crc kubenswrapper[4853]: I1122 08:12:22.896341 4853 scope.go:117] "RemoveContainer" containerID="cc8749bf8ec39a47dbf6d9429107304ccf293129c996e361627551d92e00e771" Nov 22 08:12:22 crc kubenswrapper[4853]: I1122 08:12:22.928283 4853 scope.go:117] "RemoveContainer" containerID="1a641b91c20f7aaf279b3dfcb48fbcf2589984e3edfbc922d8dbc1925aaeb256" Nov 22 08:12:22 crc kubenswrapper[4853]: I1122 08:12:22.997628 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/503e673e-922e-4f5e-ba39-19c6d9b0c804-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "503e673e-922e-4f5e-ba39-19c6d9b0c804" (UID: "503e673e-922e-4f5e-ba39-19c6d9b0c804"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 08:12:22 crc kubenswrapper[4853]: I1122 08:12:22.998672 4853 scope.go:117] "RemoveContainer" containerID="961e8bcae39f84c5220a59058fc6316a5a5ce965b77ab52286993b01a9e0e0e2" Nov 22 08:12:22 crc kubenswrapper[4853]: E1122 08:12:22.999258 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"961e8bcae39f84c5220a59058fc6316a5a5ce965b77ab52286993b01a9e0e0e2\": container with ID starting with 961e8bcae39f84c5220a59058fc6316a5a5ce965b77ab52286993b01a9e0e0e2 not found: ID does not exist" containerID="961e8bcae39f84c5220a59058fc6316a5a5ce965b77ab52286993b01a9e0e0e2" Nov 22 08:12:22 crc kubenswrapper[4853]: I1122 08:12:22.999338 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"961e8bcae39f84c5220a59058fc6316a5a5ce965b77ab52286993b01a9e0e0e2"} err="failed to get container status \"961e8bcae39f84c5220a59058fc6316a5a5ce965b77ab52286993b01a9e0e0e2\": rpc error: code = NotFound desc = could not find container \"961e8bcae39f84c5220a59058fc6316a5a5ce965b77ab52286993b01a9e0e0e2\": container with ID starting with 961e8bcae39f84c5220a59058fc6316a5a5ce965b77ab52286993b01a9e0e0e2 not found: ID does not exist" Nov 22 08:12:22 crc kubenswrapper[4853]: I1122 08:12:22.999365 4853 scope.go:117] "RemoveContainer" containerID="cc8749bf8ec39a47dbf6d9429107304ccf293129c996e361627551d92e00e771" Nov 22 08:12:23 crc kubenswrapper[4853]: E1122 08:12:22.999963 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cc8749bf8ec39a47dbf6d9429107304ccf293129c996e361627551d92e00e771\": container with ID starting with cc8749bf8ec39a47dbf6d9429107304ccf293129c996e361627551d92e00e771 not found: ID does not exist" containerID="cc8749bf8ec39a47dbf6d9429107304ccf293129c996e361627551d92e00e771" Nov 22 08:12:23 crc kubenswrapper[4853]: I1122 08:12:22.999998 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cc8749bf8ec39a47dbf6d9429107304ccf293129c996e361627551d92e00e771"} err="failed to get container status \"cc8749bf8ec39a47dbf6d9429107304ccf293129c996e361627551d92e00e771\": rpc error: code = NotFound desc = could not find container \"cc8749bf8ec39a47dbf6d9429107304ccf293129c996e361627551d92e00e771\": container with ID starting with cc8749bf8ec39a47dbf6d9429107304ccf293129c996e361627551d92e00e771 not found: ID does not exist" Nov 22 08:12:23 crc kubenswrapper[4853]: I1122 08:12:23.000028 4853 scope.go:117] "RemoveContainer" containerID="1a641b91c20f7aaf279b3dfcb48fbcf2589984e3edfbc922d8dbc1925aaeb256" Nov 22 08:12:23 crc kubenswrapper[4853]: E1122 08:12:23.000361 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1a641b91c20f7aaf279b3dfcb48fbcf2589984e3edfbc922d8dbc1925aaeb256\": container with ID starting with 1a641b91c20f7aaf279b3dfcb48fbcf2589984e3edfbc922d8dbc1925aaeb256 not found: ID does not exist" containerID="1a641b91c20f7aaf279b3dfcb48fbcf2589984e3edfbc922d8dbc1925aaeb256" Nov 22 08:12:23 crc kubenswrapper[4853]: I1122 08:12:23.000399 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1a641b91c20f7aaf279b3dfcb48fbcf2589984e3edfbc922d8dbc1925aaeb256"} err="failed to get container status \"1a641b91c20f7aaf279b3dfcb48fbcf2589984e3edfbc922d8dbc1925aaeb256\": rpc error: code = NotFound desc = could not 
find container \"1a641b91c20f7aaf279b3dfcb48fbcf2589984e3edfbc922d8dbc1925aaeb256\": container with ID starting with 1a641b91c20f7aaf279b3dfcb48fbcf2589984e3edfbc922d8dbc1925aaeb256 not found: ID does not exist" Nov 22 08:12:23 crc kubenswrapper[4853]: I1122 08:12:23.065654 4853 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/503e673e-922e-4f5e-ba39-19c6d9b0c804-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 08:12:23 crc kubenswrapper[4853]: I1122 08:12:23.206213 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-2w2tc"] Nov 22 08:12:23 crc kubenswrapper[4853]: I1122 08:12:23.217285 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-2w2tc"] Nov 22 08:12:23 crc kubenswrapper[4853]: I1122 08:12:23.764510 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="503e673e-922e-4f5e-ba39-19c6d9b0c804" path="/var/lib/kubelet/pods/503e673e-922e-4f5e-ba39-19c6d9b0c804/volumes" Nov 22 08:12:24 crc kubenswrapper[4853]: I1122 08:12:24.908383 4853 generic.go:334] "Generic (PLEG): container finished" podID="c7e358cd-fbd6-411d-9231-73e533bbda3b" containerID="54765e929e21fb2795af31a2817ffc7dd25f7a6be3f25750f2ebf84aa29ac981" exitCode=0 Nov 22 08:12:24 crc kubenswrapper[4853]: I1122 08:12:24.908478 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-h8gmz" event={"ID":"c7e358cd-fbd6-411d-9231-73e533bbda3b","Type":"ContainerDied","Data":"54765e929e21fb2795af31a2817ffc7dd25f7a6be3f25750f2ebf84aa29ac981"} Nov 22 08:12:26 crc kubenswrapper[4853]: I1122 08:12:26.427893 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-h8gmz" Nov 22 08:12:26 crc kubenswrapper[4853]: I1122 08:12:26.565310 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7e358cd-fbd6-411d-9231-73e533bbda3b-libvirt-combined-ca-bundle\") pod \"c7e358cd-fbd6-411d-9231-73e533bbda3b\" (UID: \"c7e358cd-fbd6-411d-9231-73e533bbda3b\") " Nov 22 08:12:26 crc kubenswrapper[4853]: I1122 08:12:26.565381 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/c7e358cd-fbd6-411d-9231-73e533bbda3b-libvirt-secret-0\") pod \"c7e358cd-fbd6-411d-9231-73e533bbda3b\" (UID: \"c7e358cd-fbd6-411d-9231-73e533bbda3b\") " Nov 22 08:12:26 crc kubenswrapper[4853]: I1122 08:12:26.565497 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c7e358cd-fbd6-411d-9231-73e533bbda3b-ssh-key\") pod \"c7e358cd-fbd6-411d-9231-73e533bbda3b\" (UID: \"c7e358cd-fbd6-411d-9231-73e533bbda3b\") " Nov 22 08:12:26 crc kubenswrapper[4853]: I1122 08:12:26.565540 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kzgbh\" (UniqueName: \"kubernetes.io/projected/c7e358cd-fbd6-411d-9231-73e533bbda3b-kube-api-access-kzgbh\") pod \"c7e358cd-fbd6-411d-9231-73e533bbda3b\" (UID: \"c7e358cd-fbd6-411d-9231-73e533bbda3b\") " Nov 22 08:12:26 crc kubenswrapper[4853]: I1122 08:12:26.565622 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c7e358cd-fbd6-411d-9231-73e533bbda3b-inventory\") pod \"c7e358cd-fbd6-411d-9231-73e533bbda3b\" (UID: \"c7e358cd-fbd6-411d-9231-73e533bbda3b\") " Nov 22 08:12:26 crc kubenswrapper[4853]: I1122 08:12:26.573927 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7e358cd-fbd6-411d-9231-73e533bbda3b-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "c7e358cd-fbd6-411d-9231-73e533bbda3b" (UID: "c7e358cd-fbd6-411d-9231-73e533bbda3b"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:12:26 crc kubenswrapper[4853]: I1122 08:12:26.574480 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7e358cd-fbd6-411d-9231-73e533bbda3b-kube-api-access-kzgbh" (OuterVolumeSpecName: "kube-api-access-kzgbh") pod "c7e358cd-fbd6-411d-9231-73e533bbda3b" (UID: "c7e358cd-fbd6-411d-9231-73e533bbda3b"). InnerVolumeSpecName "kube-api-access-kzgbh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 08:12:26 crc kubenswrapper[4853]: I1122 08:12:26.608535 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7e358cd-fbd6-411d-9231-73e533bbda3b-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "c7e358cd-fbd6-411d-9231-73e533bbda3b" (UID: "c7e358cd-fbd6-411d-9231-73e533bbda3b"). InnerVolumeSpecName "libvirt-secret-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:12:26 crc kubenswrapper[4853]: I1122 08:12:26.608687 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7e358cd-fbd6-411d-9231-73e533bbda3b-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "c7e358cd-fbd6-411d-9231-73e533bbda3b" (UID: "c7e358cd-fbd6-411d-9231-73e533bbda3b"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:12:26 crc kubenswrapper[4853]: I1122 08:12:26.613979 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7e358cd-fbd6-411d-9231-73e533bbda3b-inventory" (OuterVolumeSpecName: "inventory") pod "c7e358cd-fbd6-411d-9231-73e533bbda3b" (UID: "c7e358cd-fbd6-411d-9231-73e533bbda3b"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:12:26 crc kubenswrapper[4853]: I1122 08:12:26.670163 4853 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7e358cd-fbd6-411d-9231-73e533bbda3b-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 08:12:26 crc kubenswrapper[4853]: I1122 08:12:26.670227 4853 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/c7e358cd-fbd6-411d-9231-73e533bbda3b-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Nov 22 08:12:26 crc kubenswrapper[4853]: I1122 08:12:26.670239 4853 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c7e358cd-fbd6-411d-9231-73e533bbda3b-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 22 08:12:26 crc kubenswrapper[4853]: I1122 08:12:26.670253 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kzgbh\" (UniqueName: \"kubernetes.io/projected/c7e358cd-fbd6-411d-9231-73e533bbda3b-kube-api-access-kzgbh\") on node \"crc\" DevicePath \"\"" Nov 22 08:12:26 crc kubenswrapper[4853]: I1122 08:12:26.670265 4853 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c7e358cd-fbd6-411d-9231-73e533bbda3b-inventory\") on node \"crc\" DevicePath \"\"" Nov 22 08:12:26 crc kubenswrapper[4853]: I1122 08:12:26.937196 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-h8gmz" event={"ID":"c7e358cd-fbd6-411d-9231-73e533bbda3b","Type":"ContainerDied","Data":"863e24649b803d676a3e776f464e1ca28f99f59d8fe3e2850bfbea25b091e157"} Nov 22 08:12:26 crc kubenswrapper[4853]: I1122 08:12:26.937237 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-h8gmz" Nov 22 08:12:26 crc kubenswrapper[4853]: I1122 08:12:26.937247 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="863e24649b803d676a3e776f464e1ca28f99f59d8fe3e2850bfbea25b091e157" Nov 22 08:12:27 crc kubenswrapper[4853]: I1122 08:12:27.036944 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-2fbqm"] Nov 22 08:12:27 crc kubenswrapper[4853]: E1122 08:12:27.037921 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="503e673e-922e-4f5e-ba39-19c6d9b0c804" containerName="registry-server" Nov 22 08:12:27 crc kubenswrapper[4853]: I1122 08:12:27.037951 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="503e673e-922e-4f5e-ba39-19c6d9b0c804" containerName="registry-server" Nov 22 08:12:27 crc kubenswrapper[4853]: E1122 08:12:27.037979 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7e358cd-fbd6-411d-9231-73e533bbda3b" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Nov 22 08:12:27 crc kubenswrapper[4853]: I1122 08:12:27.037997 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7e358cd-fbd6-411d-9231-73e533bbda3b" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Nov 22 08:12:27 crc kubenswrapper[4853]: E1122 08:12:27.038012 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="503e673e-922e-4f5e-ba39-19c6d9b0c804" containerName="extract-utilities" Nov 22 08:12:27 crc kubenswrapper[4853]: I1122 08:12:27.038022 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="503e673e-922e-4f5e-ba39-19c6d9b0c804" containerName="extract-utilities" Nov 22 08:12:27 crc kubenswrapper[4853]: E1122 08:12:27.038057 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="503e673e-922e-4f5e-ba39-19c6d9b0c804" containerName="extract-content" Nov 22 08:12:27 crc kubenswrapper[4853]: I1122 08:12:27.038064 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="503e673e-922e-4f5e-ba39-19c6d9b0c804" containerName="extract-content" Nov 22 08:12:27 crc kubenswrapper[4853]: I1122 08:12:27.039026 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7e358cd-fbd6-411d-9231-73e533bbda3b" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Nov 22 08:12:27 crc kubenswrapper[4853]: I1122 08:12:27.039105 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="503e673e-922e-4f5e-ba39-19c6d9b0c804" containerName="registry-server" Nov 22 08:12:27 crc kubenswrapper[4853]: I1122 08:12:27.040645 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-2fbqm" Nov 22 08:12:27 crc kubenswrapper[4853]: I1122 08:12:27.044422 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Nov 22 08:12:27 crc kubenswrapper[4853]: I1122 08:12:27.044861 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 22 08:12:27 crc kubenswrapper[4853]: I1122 08:12:27.045082 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 22 08:12:27 crc kubenswrapper[4853]: I1122 08:12:27.045788 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Nov 22 08:12:27 crc kubenswrapper[4853]: I1122 08:12:27.046032 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-km5tw" Nov 22 08:12:27 crc kubenswrapper[4853]: I1122 08:12:27.046854 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 22 08:12:27 crc kubenswrapper[4853]: I1122 08:12:27.047061 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Nov 22 08:12:27 crc kubenswrapper[4853]: I1122 08:12:27.055097 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-2fbqm"] Nov 22 08:12:27 crc kubenswrapper[4853]: I1122 08:12:27.189677 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5ec783e2-47c2-4362-84ac-cdaa7f0b75e5-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-2fbqm\" (UID: \"5ec783e2-47c2-4362-84ac-cdaa7f0b75e5\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-2fbqm" Nov 22 08:12:27 crc kubenswrapper[4853]: I1122 08:12:27.189793 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5ec783e2-47c2-4362-84ac-cdaa7f0b75e5-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-2fbqm\" (UID: \"5ec783e2-47c2-4362-84ac-cdaa7f0b75e5\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-2fbqm" Nov 22 08:12:27 crc kubenswrapper[4853]: I1122 08:12:27.190556 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/5ec783e2-47c2-4362-84ac-cdaa7f0b75e5-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-2fbqm\" (UID: \"5ec783e2-47c2-4362-84ac-cdaa7f0b75e5\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-2fbqm" Nov 22 08:12:27 crc kubenswrapper[4853]: I1122 08:12:27.190600 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/5ec783e2-47c2-4362-84ac-cdaa7f0b75e5-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-2fbqm\" (UID: \"5ec783e2-47c2-4362-84ac-cdaa7f0b75e5\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-2fbqm" Nov 22 08:12:27 crc kubenswrapper[4853]: I1122 08:12:27.191149 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: 
\"kubernetes.io/secret/5ec783e2-47c2-4362-84ac-cdaa7f0b75e5-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-2fbqm\" (UID: \"5ec783e2-47c2-4362-84ac-cdaa7f0b75e5\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-2fbqm" Nov 22 08:12:27 crc kubenswrapper[4853]: I1122 08:12:27.191324 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/5ec783e2-47c2-4362-84ac-cdaa7f0b75e5-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-2fbqm\" (UID: \"5ec783e2-47c2-4362-84ac-cdaa7f0b75e5\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-2fbqm" Nov 22 08:12:27 crc kubenswrapper[4853]: I1122 08:12:27.191427 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jdtk7\" (UniqueName: \"kubernetes.io/projected/5ec783e2-47c2-4362-84ac-cdaa7f0b75e5-kube-api-access-jdtk7\") pod \"nova-edpm-deployment-openstack-edpm-ipam-2fbqm\" (UID: \"5ec783e2-47c2-4362-84ac-cdaa7f0b75e5\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-2fbqm" Nov 22 08:12:27 crc kubenswrapper[4853]: I1122 08:12:27.191597 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ec783e2-47c2-4362-84ac-cdaa7f0b75e5-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-2fbqm\" (UID: \"5ec783e2-47c2-4362-84ac-cdaa7f0b75e5\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-2fbqm" Nov 22 08:12:27 crc kubenswrapper[4853]: I1122 08:12:27.191706 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/5ec783e2-47c2-4362-84ac-cdaa7f0b75e5-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-2fbqm\" (UID: \"5ec783e2-47c2-4362-84ac-cdaa7f0b75e5\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-2fbqm" Nov 22 08:12:27 crc kubenswrapper[4853]: I1122 08:12:27.294470 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/5ec783e2-47c2-4362-84ac-cdaa7f0b75e5-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-2fbqm\" (UID: \"5ec783e2-47c2-4362-84ac-cdaa7f0b75e5\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-2fbqm" Nov 22 08:12:27 crc kubenswrapper[4853]: I1122 08:12:27.294561 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/5ec783e2-47c2-4362-84ac-cdaa7f0b75e5-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-2fbqm\" (UID: \"5ec783e2-47c2-4362-84ac-cdaa7f0b75e5\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-2fbqm" Nov 22 08:12:27 crc kubenswrapper[4853]: I1122 08:12:27.294628 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jdtk7\" (UniqueName: \"kubernetes.io/projected/5ec783e2-47c2-4362-84ac-cdaa7f0b75e5-kube-api-access-jdtk7\") pod \"nova-edpm-deployment-openstack-edpm-ipam-2fbqm\" (UID: \"5ec783e2-47c2-4362-84ac-cdaa7f0b75e5\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-2fbqm" Nov 22 08:12:27 crc kubenswrapper[4853]: I1122 08:12:27.294719 4853 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ec783e2-47c2-4362-84ac-cdaa7f0b75e5-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-2fbqm\" (UID: \"5ec783e2-47c2-4362-84ac-cdaa7f0b75e5\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-2fbqm" Nov 22 08:12:27 crc kubenswrapper[4853]: I1122 08:12:27.294818 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/5ec783e2-47c2-4362-84ac-cdaa7f0b75e5-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-2fbqm\" (UID: \"5ec783e2-47c2-4362-84ac-cdaa7f0b75e5\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-2fbqm" Nov 22 08:12:27 crc kubenswrapper[4853]: I1122 08:12:27.295006 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5ec783e2-47c2-4362-84ac-cdaa7f0b75e5-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-2fbqm\" (UID: \"5ec783e2-47c2-4362-84ac-cdaa7f0b75e5\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-2fbqm" Nov 22 08:12:27 crc kubenswrapper[4853]: I1122 08:12:27.295052 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5ec783e2-47c2-4362-84ac-cdaa7f0b75e5-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-2fbqm\" (UID: \"5ec783e2-47c2-4362-84ac-cdaa7f0b75e5\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-2fbqm" Nov 22 08:12:27 crc kubenswrapper[4853]: I1122 08:12:27.295141 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/5ec783e2-47c2-4362-84ac-cdaa7f0b75e5-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-2fbqm\" (UID: \"5ec783e2-47c2-4362-84ac-cdaa7f0b75e5\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-2fbqm" Nov 22 08:12:27 crc kubenswrapper[4853]: I1122 08:12:27.295175 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/5ec783e2-47c2-4362-84ac-cdaa7f0b75e5-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-2fbqm\" (UID: \"5ec783e2-47c2-4362-84ac-cdaa7f0b75e5\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-2fbqm" Nov 22 08:12:27 crc kubenswrapper[4853]: I1122 08:12:27.296089 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/5ec783e2-47c2-4362-84ac-cdaa7f0b75e5-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-2fbqm\" (UID: \"5ec783e2-47c2-4362-84ac-cdaa7f0b75e5\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-2fbqm" Nov 22 08:12:27 crc kubenswrapper[4853]: I1122 08:12:27.298865 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ec783e2-47c2-4362-84ac-cdaa7f0b75e5-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-2fbqm\" (UID: \"5ec783e2-47c2-4362-84ac-cdaa7f0b75e5\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-2fbqm" Nov 22 08:12:27 crc kubenswrapper[4853]: I1122 08:12:27.301935 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: 
\"kubernetes.io/secret/5ec783e2-47c2-4362-84ac-cdaa7f0b75e5-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-2fbqm\" (UID: \"5ec783e2-47c2-4362-84ac-cdaa7f0b75e5\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-2fbqm" Nov 22 08:12:27 crc kubenswrapper[4853]: I1122 08:12:27.303070 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/5ec783e2-47c2-4362-84ac-cdaa7f0b75e5-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-2fbqm\" (UID: \"5ec783e2-47c2-4362-84ac-cdaa7f0b75e5\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-2fbqm" Nov 22 08:12:27 crc kubenswrapper[4853]: I1122 08:12:27.304038 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5ec783e2-47c2-4362-84ac-cdaa7f0b75e5-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-2fbqm\" (UID: \"5ec783e2-47c2-4362-84ac-cdaa7f0b75e5\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-2fbqm" Nov 22 08:12:27 crc kubenswrapper[4853]: I1122 08:12:27.304164 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5ec783e2-47c2-4362-84ac-cdaa7f0b75e5-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-2fbqm\" (UID: \"5ec783e2-47c2-4362-84ac-cdaa7f0b75e5\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-2fbqm" Nov 22 08:12:27 crc kubenswrapper[4853]: I1122 08:12:27.304203 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/5ec783e2-47c2-4362-84ac-cdaa7f0b75e5-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-2fbqm\" (UID: \"5ec783e2-47c2-4362-84ac-cdaa7f0b75e5\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-2fbqm" Nov 22 08:12:27 crc kubenswrapper[4853]: I1122 08:12:27.305062 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/5ec783e2-47c2-4362-84ac-cdaa7f0b75e5-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-2fbqm\" (UID: \"5ec783e2-47c2-4362-84ac-cdaa7f0b75e5\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-2fbqm" Nov 22 08:12:27 crc kubenswrapper[4853]: I1122 08:12:27.312253 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jdtk7\" (UniqueName: \"kubernetes.io/projected/5ec783e2-47c2-4362-84ac-cdaa7f0b75e5-kube-api-access-jdtk7\") pod \"nova-edpm-deployment-openstack-edpm-ipam-2fbqm\" (UID: \"5ec783e2-47c2-4362-84ac-cdaa7f0b75e5\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-2fbqm" Nov 22 08:12:27 crc kubenswrapper[4853]: I1122 08:12:27.367396 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-2fbqm" Nov 22 08:12:27 crc kubenswrapper[4853]: I1122 08:12:27.936903 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-2fbqm"] Nov 22 08:12:27 crc kubenswrapper[4853]: I1122 08:12:27.948790 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-2fbqm" event={"ID":"5ec783e2-47c2-4362-84ac-cdaa7f0b75e5","Type":"ContainerStarted","Data":"ceffe78b32221bf1d25e4a9041728ab01538eceab7a514b8cefffb5ee278edb8"} Nov 22 08:12:28 crc kubenswrapper[4853]: I1122 08:12:28.966708 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-2fbqm" event={"ID":"5ec783e2-47c2-4362-84ac-cdaa7f0b75e5","Type":"ContainerStarted","Data":"ccc3f15b748f3c849c721fe521d62aacce4508a570a35f17531bc9865faf2658"} Nov 22 08:12:28 crc kubenswrapper[4853]: I1122 08:12:28.999800 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-2fbqm" podStartSLOduration=1.388992234 podStartE2EDuration="1.999738776s" podCreationTimestamp="2025-11-22 08:12:27 +0000 UTC" firstStartedPulling="2025-11-22 08:12:27.941991838 +0000 UTC m=+3746.782614464" lastFinishedPulling="2025-11-22 08:12:28.55273837 +0000 UTC m=+3747.393361006" observedRunningTime="2025-11-22 08:12:28.989484051 +0000 UTC m=+3747.830106697" watchObservedRunningTime="2025-11-22 08:12:28.999738776 +0000 UTC m=+3747.840361402" Nov 22 08:14:01 crc kubenswrapper[4853]: I1122 08:14:01.297436 4853 patch_prober.go:28] interesting pod/machine-config-daemon-fflvd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 08:14:01 crc kubenswrapper[4853]: I1122 08:14:01.298121 4853 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 08:14:31 crc kubenswrapper[4853]: I1122 08:14:31.297264 4853 patch_prober.go:28] interesting pod/machine-config-daemon-fflvd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 08:14:31 crc kubenswrapper[4853]: I1122 08:14:31.297830 4853 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 08:15:00 crc kubenswrapper[4853]: I1122 08:15:00.158131 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396655-5bddx"] Nov 22 08:15:00 crc kubenswrapper[4853]: I1122 08:15:00.160807 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396655-5bddx" Nov 22 08:15:00 crc kubenswrapper[4853]: I1122 08:15:00.163351 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 22 08:15:00 crc kubenswrapper[4853]: I1122 08:15:00.163360 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 22 08:15:00 crc kubenswrapper[4853]: I1122 08:15:00.171939 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396655-5bddx"] Nov 22 08:15:00 crc kubenswrapper[4853]: I1122 08:15:00.285924 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b338e654-d135-4701-86a0-7d543b9fed30-secret-volume\") pod \"collect-profiles-29396655-5bddx\" (UID: \"b338e654-d135-4701-86a0-7d543b9fed30\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396655-5bddx" Nov 22 08:15:00 crc kubenswrapper[4853]: I1122 08:15:00.286515 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b338e654-d135-4701-86a0-7d543b9fed30-config-volume\") pod \"collect-profiles-29396655-5bddx\" (UID: \"b338e654-d135-4701-86a0-7d543b9fed30\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396655-5bddx" Nov 22 08:15:00 crc kubenswrapper[4853]: I1122 08:15:00.286690 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qgvf8\" (UniqueName: \"kubernetes.io/projected/b338e654-d135-4701-86a0-7d543b9fed30-kube-api-access-qgvf8\") pod \"collect-profiles-29396655-5bddx\" (UID: \"b338e654-d135-4701-86a0-7d543b9fed30\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396655-5bddx" Nov 22 08:15:00 crc kubenswrapper[4853]: I1122 08:15:00.389036 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b338e654-d135-4701-86a0-7d543b9fed30-config-volume\") pod \"collect-profiles-29396655-5bddx\" (UID: \"b338e654-d135-4701-86a0-7d543b9fed30\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396655-5bddx" Nov 22 08:15:00 crc kubenswrapper[4853]: I1122 08:15:00.389464 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qgvf8\" (UniqueName: \"kubernetes.io/projected/b338e654-d135-4701-86a0-7d543b9fed30-kube-api-access-qgvf8\") pod \"collect-profiles-29396655-5bddx\" (UID: \"b338e654-d135-4701-86a0-7d543b9fed30\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396655-5bddx" Nov 22 08:15:00 crc kubenswrapper[4853]: I1122 08:15:00.389513 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b338e654-d135-4701-86a0-7d543b9fed30-secret-volume\") pod \"collect-profiles-29396655-5bddx\" (UID: \"b338e654-d135-4701-86a0-7d543b9fed30\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396655-5bddx" Nov 22 08:15:00 crc kubenswrapper[4853]: I1122 08:15:00.389897 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b338e654-d135-4701-86a0-7d543b9fed30-config-volume\") pod 
\"collect-profiles-29396655-5bddx\" (UID: \"b338e654-d135-4701-86a0-7d543b9fed30\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396655-5bddx" Nov 22 08:15:00 crc kubenswrapper[4853]: I1122 08:15:00.395877 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b338e654-d135-4701-86a0-7d543b9fed30-secret-volume\") pod \"collect-profiles-29396655-5bddx\" (UID: \"b338e654-d135-4701-86a0-7d543b9fed30\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396655-5bddx" Nov 22 08:15:00 crc kubenswrapper[4853]: I1122 08:15:00.409212 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qgvf8\" (UniqueName: \"kubernetes.io/projected/b338e654-d135-4701-86a0-7d543b9fed30-kube-api-access-qgvf8\") pod \"collect-profiles-29396655-5bddx\" (UID: \"b338e654-d135-4701-86a0-7d543b9fed30\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396655-5bddx" Nov 22 08:15:00 crc kubenswrapper[4853]: I1122 08:15:00.487070 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396655-5bddx" Nov 22 08:15:00 crc kubenswrapper[4853]: I1122 08:15:00.980027 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396655-5bddx"] Nov 22 08:15:01 crc kubenswrapper[4853]: I1122 08:15:01.297294 4853 patch_prober.go:28] interesting pod/machine-config-daemon-fflvd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 08:15:01 crc kubenswrapper[4853]: I1122 08:15:01.297825 4853 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 08:15:01 crc kubenswrapper[4853]: I1122 08:15:01.297896 4853 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" Nov 22 08:15:01 crc kubenswrapper[4853]: I1122 08:15:01.299199 4853 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"43c489ef7c014ec88f2022965726354fdac660d1573666e320842974a56f8ca4"} pod="openshift-machine-config-operator/machine-config-daemon-fflvd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 22 08:15:01 crc kubenswrapper[4853]: I1122 08:15:01.299282 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" containerName="machine-config-daemon" containerID="cri-o://43c489ef7c014ec88f2022965726354fdac660d1573666e320842974a56f8ca4" gracePeriod=600 Nov 22 08:15:01 crc kubenswrapper[4853]: E1122 08:15:01.425268 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8"
Nov 22 08:15:01 crc kubenswrapper[4853]: I1122 08:15:01.689343 4853 generic.go:334] "Generic (PLEG): container finished" podID="b338e654-d135-4701-86a0-7d543b9fed30" containerID="3f2780b0f8a4b86b22ec0161a79ffb691be2e6637cd1ceeb6719bd311eb7f6a7" exitCode=0
Nov 22 08:15:01 crc kubenswrapper[4853]: I1122 08:15:01.689401 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29396655-5bddx" event={"ID":"b338e654-d135-4701-86a0-7d543b9fed30","Type":"ContainerDied","Data":"3f2780b0f8a4b86b22ec0161a79ffb691be2e6637cd1ceeb6719bd311eb7f6a7"}
Nov 22 08:15:01 crc kubenswrapper[4853]: I1122 08:15:01.689451 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29396655-5bddx" event={"ID":"b338e654-d135-4701-86a0-7d543b9fed30","Type":"ContainerStarted","Data":"0c394afdbfcd99263941b04b2b2a0d9fa96ec6cf329d14c0e7b9e0b0b1f77c93"}
Nov 22 08:15:01 crc kubenswrapper[4853]: I1122 08:15:01.692887 4853 generic.go:334] "Generic (PLEG): container finished" podID="476c875a-2b87-419a-8042-0ba059620fd8" containerID="43c489ef7c014ec88f2022965726354fdac660d1573666e320842974a56f8ca4" exitCode=0
Nov 22 08:15:01 crc kubenswrapper[4853]: I1122 08:15:01.692944 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" event={"ID":"476c875a-2b87-419a-8042-0ba059620fd8","Type":"ContainerDied","Data":"43c489ef7c014ec88f2022965726354fdac660d1573666e320842974a56f8ca4"}
Nov 22 08:15:01 crc kubenswrapper[4853]: I1122 08:15:01.692992 4853 scope.go:117] "RemoveContainer" containerID="b5c95fc5e1ba497c01e2e8e3690b9c21c741f0883f87a9c7ab06d100befb50f5"
Nov 22 08:15:01 crc kubenswrapper[4853]: I1122 08:15:01.693850 4853 scope.go:117] "RemoveContainer" containerID="43c489ef7c014ec88f2022965726354fdac660d1573666e320842974a56f8ca4"
Nov 22 08:15:01 crc kubenswrapper[4853]: E1122 08:15:01.694290 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8"
Nov 22 08:15:03 crc kubenswrapper[4853]: I1122 08:15:03.161737 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396655-5bddx"
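The liveness probes against http://127.0.0.1:8798/health fail at 08:14:01, 08:14:31, and 08:15:01; on the third consecutive failure, consistent with the default failureThreshold of 3, kubelet kills machine-config-daemon with its 600s grace period, and the restart is immediately refused with CrashLoopBackOff, where "back-off 5m0s" is the kubelet's backoff cap for a repeatedly failing container rather than a fixed delay. What the prober does here amounts to an HTTP GET in which a refused TCP connection counts as failure; a standalone Go sketch follows (an approximation for illustration, not kubelet's actual prober implementation).

// healthprobe.go - approximate the HTTP liveness check being run above:
// GET http://127.0.0.1:8798/health, where "connection refused" counts as a failed probe.
package main

import (
    "fmt"
    "net/http"
    "time"
)

func main() {
    client := &http.Client{Timeout: time.Second} // stand-in for the probe's timeoutSeconds
    resp, err := client.Get("http://127.0.0.1:8798/health")
    if err != nil {
        fmt.Println("probe failed:", err) // e.g. "dial tcp 127.0.0.1:8798: connect: connection refused"
        return
    }
    defer resp.Body.Close()
    if resp.StatusCode >= 200 && resp.StatusCode < 400 {
        fmt.Println("probe succeeded:", resp.Status)
    } else {
        fmt.Println("probe failed:", resp.Status)
    }
}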
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396655-5bddx" Nov 22 08:15:03 crc kubenswrapper[4853]: I1122 08:15:03.271630 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b338e654-d135-4701-86a0-7d543b9fed30-config-volume\") pod \"b338e654-d135-4701-86a0-7d543b9fed30\" (UID: \"b338e654-d135-4701-86a0-7d543b9fed30\") " Nov 22 08:15:03 crc kubenswrapper[4853]: I1122 08:15:03.271891 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b338e654-d135-4701-86a0-7d543b9fed30-secret-volume\") pod \"b338e654-d135-4701-86a0-7d543b9fed30\" (UID: \"b338e654-d135-4701-86a0-7d543b9fed30\") " Nov 22 08:15:03 crc kubenswrapper[4853]: I1122 08:15:03.272007 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qgvf8\" (UniqueName: \"kubernetes.io/projected/b338e654-d135-4701-86a0-7d543b9fed30-kube-api-access-qgvf8\") pod \"b338e654-d135-4701-86a0-7d543b9fed30\" (UID: \"b338e654-d135-4701-86a0-7d543b9fed30\") " Nov 22 08:15:03 crc kubenswrapper[4853]: I1122 08:15:03.273008 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b338e654-d135-4701-86a0-7d543b9fed30-config-volume" (OuterVolumeSpecName: "config-volume") pod "b338e654-d135-4701-86a0-7d543b9fed30" (UID: "b338e654-d135-4701-86a0-7d543b9fed30"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 08:15:03 crc kubenswrapper[4853]: I1122 08:15:03.273707 4853 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b338e654-d135-4701-86a0-7d543b9fed30-config-volume\") on node \"crc\" DevicePath \"\"" Nov 22 08:15:03 crc kubenswrapper[4853]: I1122 08:15:03.280389 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b338e654-d135-4701-86a0-7d543b9fed30-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "b338e654-d135-4701-86a0-7d543b9fed30" (UID: "b338e654-d135-4701-86a0-7d543b9fed30"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:15:03 crc kubenswrapper[4853]: I1122 08:15:03.280521 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b338e654-d135-4701-86a0-7d543b9fed30-kube-api-access-qgvf8" (OuterVolumeSpecName: "kube-api-access-qgvf8") pod "b338e654-d135-4701-86a0-7d543b9fed30" (UID: "b338e654-d135-4701-86a0-7d543b9fed30"). InnerVolumeSpecName "kube-api-access-qgvf8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 08:15:03 crc kubenswrapper[4853]: I1122 08:15:03.376452 4853 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b338e654-d135-4701-86a0-7d543b9fed30-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 22 08:15:03 crc kubenswrapper[4853]: I1122 08:15:03.376544 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qgvf8\" (UniqueName: \"kubernetes.io/projected/b338e654-d135-4701-86a0-7d543b9fed30-kube-api-access-qgvf8\") on node \"crc\" DevicePath \"\"" Nov 22 08:15:03 crc kubenswrapper[4853]: I1122 08:15:03.719821 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29396655-5bddx" event={"ID":"b338e654-d135-4701-86a0-7d543b9fed30","Type":"ContainerDied","Data":"0c394afdbfcd99263941b04b2b2a0d9fa96ec6cf329d14c0e7b9e0b0b1f77c93"} Nov 22 08:15:03 crc kubenswrapper[4853]: I1122 08:15:03.720114 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0c394afdbfcd99263941b04b2b2a0d9fa96ec6cf329d14c0e7b9e0b0b1f77c93" Nov 22 08:15:03 crc kubenswrapper[4853]: I1122 08:15:03.719861 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396655-5bddx" Nov 22 08:15:04 crc kubenswrapper[4853]: I1122 08:15:04.244927 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396610-99mxm"] Nov 22 08:15:04 crc kubenswrapper[4853]: I1122 08:15:04.255539 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396610-99mxm"] Nov 22 08:15:05 crc kubenswrapper[4853]: I1122 08:15:05.765672 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="837e951f-77bd-402e-b8b0-3cb6bc2f2e03" path="/var/lib/kubelet/pods/837e951f-77bd-402e-b8b0-3cb6bc2f2e03/volumes" Nov 22 08:15:13 crc kubenswrapper[4853]: I1122 08:15:13.336237 4853 generic.go:334] "Generic (PLEG): container finished" podID="5ec783e2-47c2-4362-84ac-cdaa7f0b75e5" containerID="ccc3f15b748f3c849c721fe521d62aacce4508a570a35f17531bc9865faf2658" exitCode=0 Nov 22 08:15:13 crc kubenswrapper[4853]: I1122 08:15:13.336328 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-2fbqm" event={"ID":"5ec783e2-47c2-4362-84ac-cdaa7f0b75e5","Type":"ContainerDied","Data":"ccc3f15b748f3c849c721fe521d62aacce4508a570a35f17531bc9865faf2658"} Nov 22 08:15:14 crc kubenswrapper[4853]: I1122 08:15:14.850783 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-2fbqm" Nov 22 08:15:14 crc kubenswrapper[4853]: I1122 08:15:14.929829 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/5ec783e2-47c2-4362-84ac-cdaa7f0b75e5-nova-migration-ssh-key-0\") pod \"5ec783e2-47c2-4362-84ac-cdaa7f0b75e5\" (UID: \"5ec783e2-47c2-4362-84ac-cdaa7f0b75e5\") " Nov 22 08:15:14 crc kubenswrapper[4853]: I1122 08:15:14.930018 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ec783e2-47c2-4362-84ac-cdaa7f0b75e5-nova-combined-ca-bundle\") pod \"5ec783e2-47c2-4362-84ac-cdaa7f0b75e5\" (UID: \"5ec783e2-47c2-4362-84ac-cdaa7f0b75e5\") " Nov 22 08:15:14 crc kubenswrapper[4853]: I1122 08:15:14.930050 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5ec783e2-47c2-4362-84ac-cdaa7f0b75e5-inventory\") pod \"5ec783e2-47c2-4362-84ac-cdaa7f0b75e5\" (UID: \"5ec783e2-47c2-4362-84ac-cdaa7f0b75e5\") " Nov 22 08:15:14 crc kubenswrapper[4853]: I1122 08:15:14.930088 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/5ec783e2-47c2-4362-84ac-cdaa7f0b75e5-nova-cell1-compute-config-0\") pod \"5ec783e2-47c2-4362-84ac-cdaa7f0b75e5\" (UID: \"5ec783e2-47c2-4362-84ac-cdaa7f0b75e5\") " Nov 22 08:15:14 crc kubenswrapper[4853]: I1122 08:15:14.930136 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5ec783e2-47c2-4362-84ac-cdaa7f0b75e5-ssh-key\") pod \"5ec783e2-47c2-4362-84ac-cdaa7f0b75e5\" (UID: \"5ec783e2-47c2-4362-84ac-cdaa7f0b75e5\") " Nov 22 08:15:14 crc kubenswrapper[4853]: I1122 08:15:14.930341 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/5ec783e2-47c2-4362-84ac-cdaa7f0b75e5-nova-migration-ssh-key-1\") pod \"5ec783e2-47c2-4362-84ac-cdaa7f0b75e5\" (UID: \"5ec783e2-47c2-4362-84ac-cdaa7f0b75e5\") " Nov 22 08:15:14 crc kubenswrapper[4853]: I1122 08:15:14.930368 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/5ec783e2-47c2-4362-84ac-cdaa7f0b75e5-nova-cell1-compute-config-1\") pod \"5ec783e2-47c2-4362-84ac-cdaa7f0b75e5\" (UID: \"5ec783e2-47c2-4362-84ac-cdaa7f0b75e5\") " Nov 22 08:15:14 crc kubenswrapper[4853]: I1122 08:15:14.931059 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jdtk7\" (UniqueName: \"kubernetes.io/projected/5ec783e2-47c2-4362-84ac-cdaa7f0b75e5-kube-api-access-jdtk7\") pod \"5ec783e2-47c2-4362-84ac-cdaa7f0b75e5\" (UID: \"5ec783e2-47c2-4362-84ac-cdaa7f0b75e5\") " Nov 22 08:15:14 crc kubenswrapper[4853]: I1122 08:15:14.931155 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/5ec783e2-47c2-4362-84ac-cdaa7f0b75e5-nova-extra-config-0\") pod \"5ec783e2-47c2-4362-84ac-cdaa7f0b75e5\" (UID: \"5ec783e2-47c2-4362-84ac-cdaa7f0b75e5\") " Nov 22 08:15:14 crc kubenswrapper[4853]: I1122 08:15:14.936426 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/5ec783e2-47c2-4362-84ac-cdaa7f0b75e5-kube-api-access-jdtk7" (OuterVolumeSpecName: "kube-api-access-jdtk7") pod "5ec783e2-47c2-4362-84ac-cdaa7f0b75e5" (UID: "5ec783e2-47c2-4362-84ac-cdaa7f0b75e5"). InnerVolumeSpecName "kube-api-access-jdtk7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 08:15:14 crc kubenswrapper[4853]: I1122 08:15:14.936475 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ec783e2-47c2-4362-84ac-cdaa7f0b75e5-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "5ec783e2-47c2-4362-84ac-cdaa7f0b75e5" (UID: "5ec783e2-47c2-4362-84ac-cdaa7f0b75e5"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:15:14 crc kubenswrapper[4853]: I1122 08:15:14.963032 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ec783e2-47c2-4362-84ac-cdaa7f0b75e5-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "5ec783e2-47c2-4362-84ac-cdaa7f0b75e5" (UID: "5ec783e2-47c2-4362-84ac-cdaa7f0b75e5"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:15:14 crc kubenswrapper[4853]: I1122 08:15:14.966999 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ec783e2-47c2-4362-84ac-cdaa7f0b75e5-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "5ec783e2-47c2-4362-84ac-cdaa7f0b75e5" (UID: "5ec783e2-47c2-4362-84ac-cdaa7f0b75e5"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 08:15:14 crc kubenswrapper[4853]: I1122 08:15:14.968259 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ec783e2-47c2-4362-84ac-cdaa7f0b75e5-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "5ec783e2-47c2-4362-84ac-cdaa7f0b75e5" (UID: "5ec783e2-47c2-4362-84ac-cdaa7f0b75e5"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:15:14 crc kubenswrapper[4853]: I1122 08:15:14.970418 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ec783e2-47c2-4362-84ac-cdaa7f0b75e5-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "5ec783e2-47c2-4362-84ac-cdaa7f0b75e5" (UID: "5ec783e2-47c2-4362-84ac-cdaa7f0b75e5"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:15:14 crc kubenswrapper[4853]: I1122 08:15:14.972082 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ec783e2-47c2-4362-84ac-cdaa7f0b75e5-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "5ec783e2-47c2-4362-84ac-cdaa7f0b75e5" (UID: "5ec783e2-47c2-4362-84ac-cdaa7f0b75e5"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:15:14 crc kubenswrapper[4853]: I1122 08:15:14.981512 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ec783e2-47c2-4362-84ac-cdaa7f0b75e5-inventory" (OuterVolumeSpecName: "inventory") pod "5ec783e2-47c2-4362-84ac-cdaa7f0b75e5" (UID: "5ec783e2-47c2-4362-84ac-cdaa7f0b75e5"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:15:14 crc kubenswrapper[4853]: I1122 08:15:14.982932 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ec783e2-47c2-4362-84ac-cdaa7f0b75e5-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "5ec783e2-47c2-4362-84ac-cdaa7f0b75e5" (UID: "5ec783e2-47c2-4362-84ac-cdaa7f0b75e5"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:15:15 crc kubenswrapper[4853]: I1122 08:15:15.035355 4853 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ec783e2-47c2-4362-84ac-cdaa7f0b75e5-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 08:15:15 crc kubenswrapper[4853]: I1122 08:15:15.035396 4853 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5ec783e2-47c2-4362-84ac-cdaa7f0b75e5-inventory\") on node \"crc\" DevicePath \"\"" Nov 22 08:15:15 crc kubenswrapper[4853]: I1122 08:15:15.035409 4853 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/5ec783e2-47c2-4362-84ac-cdaa7f0b75e5-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Nov 22 08:15:15 crc kubenswrapper[4853]: I1122 08:15:15.035423 4853 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5ec783e2-47c2-4362-84ac-cdaa7f0b75e5-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 22 08:15:15 crc kubenswrapper[4853]: I1122 08:15:15.035437 4853 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/5ec783e2-47c2-4362-84ac-cdaa7f0b75e5-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Nov 22 08:15:15 crc kubenswrapper[4853]: I1122 08:15:15.035450 4853 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/5ec783e2-47c2-4362-84ac-cdaa7f0b75e5-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Nov 22 08:15:15 crc kubenswrapper[4853]: I1122 08:15:15.035462 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jdtk7\" (UniqueName: \"kubernetes.io/projected/5ec783e2-47c2-4362-84ac-cdaa7f0b75e5-kube-api-access-jdtk7\") on node \"crc\" DevicePath \"\"" Nov 22 08:15:15 crc kubenswrapper[4853]: I1122 08:15:15.035475 4853 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/5ec783e2-47c2-4362-84ac-cdaa7f0b75e5-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Nov 22 08:15:15 crc kubenswrapper[4853]: I1122 08:15:15.035488 4853 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/5ec783e2-47c2-4362-84ac-cdaa7f0b75e5-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Nov 22 08:15:15 crc kubenswrapper[4853]: I1122 08:15:15.360927 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-2fbqm" event={"ID":"5ec783e2-47c2-4362-84ac-cdaa7f0b75e5","Type":"ContainerDied","Data":"ceffe78b32221bf1d25e4a9041728ab01538eceab7a514b8cefffb5ee278edb8"} Nov 22 08:15:15 crc kubenswrapper[4853]: I1122 08:15:15.361241 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-2fbqm" Nov 22 08:15:15 crc kubenswrapper[4853]: I1122 08:15:15.361247 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ceffe78b32221bf1d25e4a9041728ab01538eceab7a514b8cefffb5ee278edb8" Nov 22 08:15:15 crc kubenswrapper[4853]: I1122 08:15:15.504965 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-gg5c9"] Nov 22 08:15:15 crc kubenswrapper[4853]: E1122 08:15:15.505592 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b338e654-d135-4701-86a0-7d543b9fed30" containerName="collect-profiles" Nov 22 08:15:15 crc kubenswrapper[4853]: I1122 08:15:15.505615 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="b338e654-d135-4701-86a0-7d543b9fed30" containerName="collect-profiles" Nov 22 08:15:15 crc kubenswrapper[4853]: E1122 08:15:15.505661 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ec783e2-47c2-4362-84ac-cdaa7f0b75e5" containerName="nova-edpm-deployment-openstack-edpm-ipam" Nov 22 08:15:15 crc kubenswrapper[4853]: I1122 08:15:15.505672 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ec783e2-47c2-4362-84ac-cdaa7f0b75e5" containerName="nova-edpm-deployment-openstack-edpm-ipam" Nov 22 08:15:15 crc kubenswrapper[4853]: I1122 08:15:15.505984 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ec783e2-47c2-4362-84ac-cdaa7f0b75e5" containerName="nova-edpm-deployment-openstack-edpm-ipam" Nov 22 08:15:15 crc kubenswrapper[4853]: I1122 08:15:15.506012 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="b338e654-d135-4701-86a0-7d543b9fed30" containerName="collect-profiles" Nov 22 08:15:15 crc kubenswrapper[4853]: I1122 08:15:15.507144 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-gg5c9" Nov 22 08:15:15 crc kubenswrapper[4853]: I1122 08:15:15.512607 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 22 08:15:15 crc kubenswrapper[4853]: I1122 08:15:15.512869 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 22 08:15:15 crc kubenswrapper[4853]: I1122 08:15:15.513080 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-km5tw" Nov 22 08:15:15 crc kubenswrapper[4853]: I1122 08:15:15.513107 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 22 08:15:15 crc kubenswrapper[4853]: I1122 08:15:15.513294 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Nov 22 08:15:15 crc kubenswrapper[4853]: I1122 08:15:15.522031 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-gg5c9"] Nov 22 08:15:15 crc kubenswrapper[4853]: I1122 08:15:15.549637 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b9d13e92-cc8c-45a2-a122-0af7c97fe7e6-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-gg5c9\" (UID: \"b9d13e92-cc8c-45a2-a122-0af7c97fe7e6\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-gg5c9" Nov 22 08:15:15 crc kubenswrapper[4853]: I1122 08:15:15.549793 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/b9d13e92-cc8c-45a2-a122-0af7c97fe7e6-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-gg5c9\" (UID: \"b9d13e92-cc8c-45a2-a122-0af7c97fe7e6\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-gg5c9" Nov 22 08:15:15 crc kubenswrapper[4853]: I1122 08:15:15.549830 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/b9d13e92-cc8c-45a2-a122-0af7c97fe7e6-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-gg5c9\" (UID: \"b9d13e92-cc8c-45a2-a122-0af7c97fe7e6\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-gg5c9" Nov 22 08:15:15 crc kubenswrapper[4853]: I1122 08:15:15.549855 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/b9d13e92-cc8c-45a2-a122-0af7c97fe7e6-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-gg5c9\" (UID: \"b9d13e92-cc8c-45a2-a122-0af7c97fe7e6\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-gg5c9" Nov 22 08:15:15 crc kubenswrapper[4853]: I1122 08:15:15.549881 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b9d13e92-cc8c-45a2-a122-0af7c97fe7e6-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-gg5c9\" (UID: \"b9d13e92-cc8c-45a2-a122-0af7c97fe7e6\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-gg5c9" Nov 22 08:15:15 crc kubenswrapper[4853]: 
I1122 08:15:15.549915 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9d13e92-cc8c-45a2-a122-0af7c97fe7e6-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-gg5c9\" (UID: \"b9d13e92-cc8c-45a2-a122-0af7c97fe7e6\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-gg5c9" Nov 22 08:15:15 crc kubenswrapper[4853]: I1122 08:15:15.549991 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qtbk4\" (UniqueName: \"kubernetes.io/projected/b9d13e92-cc8c-45a2-a122-0af7c97fe7e6-kube-api-access-qtbk4\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-gg5c9\" (UID: \"b9d13e92-cc8c-45a2-a122-0af7c97fe7e6\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-gg5c9" Nov 22 08:15:15 crc kubenswrapper[4853]: I1122 08:15:15.652386 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/b9d13e92-cc8c-45a2-a122-0af7c97fe7e6-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-gg5c9\" (UID: \"b9d13e92-cc8c-45a2-a122-0af7c97fe7e6\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-gg5c9" Nov 22 08:15:15 crc kubenswrapper[4853]: I1122 08:15:15.652469 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/b9d13e92-cc8c-45a2-a122-0af7c97fe7e6-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-gg5c9\" (UID: \"b9d13e92-cc8c-45a2-a122-0af7c97fe7e6\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-gg5c9" Nov 22 08:15:15 crc kubenswrapper[4853]: I1122 08:15:15.652506 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/b9d13e92-cc8c-45a2-a122-0af7c97fe7e6-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-gg5c9\" (UID: \"b9d13e92-cc8c-45a2-a122-0af7c97fe7e6\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-gg5c9" Nov 22 08:15:15 crc kubenswrapper[4853]: I1122 08:15:15.652540 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b9d13e92-cc8c-45a2-a122-0af7c97fe7e6-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-gg5c9\" (UID: \"b9d13e92-cc8c-45a2-a122-0af7c97fe7e6\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-gg5c9" Nov 22 08:15:15 crc kubenswrapper[4853]: I1122 08:15:15.652585 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9d13e92-cc8c-45a2-a122-0af7c97fe7e6-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-gg5c9\" (UID: \"b9d13e92-cc8c-45a2-a122-0af7c97fe7e6\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-gg5c9" Nov 22 08:15:15 crc kubenswrapper[4853]: I1122 08:15:15.652717 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qtbk4\" (UniqueName: \"kubernetes.io/projected/b9d13e92-cc8c-45a2-a122-0af7c97fe7e6-kube-api-access-qtbk4\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-gg5c9\" (UID: 
\"b9d13e92-cc8c-45a2-a122-0af7c97fe7e6\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-gg5c9" Nov 22 08:15:15 crc kubenswrapper[4853]: I1122 08:15:15.652917 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b9d13e92-cc8c-45a2-a122-0af7c97fe7e6-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-gg5c9\" (UID: \"b9d13e92-cc8c-45a2-a122-0af7c97fe7e6\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-gg5c9" Nov 22 08:15:15 crc kubenswrapper[4853]: I1122 08:15:15.659272 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/b9d13e92-cc8c-45a2-a122-0af7c97fe7e6-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-gg5c9\" (UID: \"b9d13e92-cc8c-45a2-a122-0af7c97fe7e6\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-gg5c9" Nov 22 08:15:15 crc kubenswrapper[4853]: I1122 08:15:15.659271 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b9d13e92-cc8c-45a2-a122-0af7c97fe7e6-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-gg5c9\" (UID: \"b9d13e92-cc8c-45a2-a122-0af7c97fe7e6\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-gg5c9" Nov 22 08:15:15 crc kubenswrapper[4853]: I1122 08:15:15.659640 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/b9d13e92-cc8c-45a2-a122-0af7c97fe7e6-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-gg5c9\" (UID: \"b9d13e92-cc8c-45a2-a122-0af7c97fe7e6\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-gg5c9" Nov 22 08:15:15 crc kubenswrapper[4853]: I1122 08:15:15.660215 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9d13e92-cc8c-45a2-a122-0af7c97fe7e6-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-gg5c9\" (UID: \"b9d13e92-cc8c-45a2-a122-0af7c97fe7e6\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-gg5c9" Nov 22 08:15:15 crc kubenswrapper[4853]: I1122 08:15:15.662365 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b9d13e92-cc8c-45a2-a122-0af7c97fe7e6-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-gg5c9\" (UID: \"b9d13e92-cc8c-45a2-a122-0af7c97fe7e6\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-gg5c9" Nov 22 08:15:15 crc kubenswrapper[4853]: I1122 08:15:15.664000 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/b9d13e92-cc8c-45a2-a122-0af7c97fe7e6-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-gg5c9\" (UID: \"b9d13e92-cc8c-45a2-a122-0af7c97fe7e6\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-gg5c9" Nov 22 08:15:15 crc kubenswrapper[4853]: I1122 08:15:15.670728 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qtbk4\" (UniqueName: \"kubernetes.io/projected/b9d13e92-cc8c-45a2-a122-0af7c97fe7e6-kube-api-access-qtbk4\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-gg5c9\" (UID: 
\"b9d13e92-cc8c-45a2-a122-0af7c97fe7e6\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-gg5c9" Nov 22 08:15:15 crc kubenswrapper[4853]: I1122 08:15:15.760628 4853 scope.go:117] "RemoveContainer" containerID="43c489ef7c014ec88f2022965726354fdac660d1573666e320842974a56f8ca4" Nov 22 08:15:15 crc kubenswrapper[4853]: E1122 08:15:15.764770 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" Nov 22 08:15:15 crc kubenswrapper[4853]: I1122 08:15:15.829128 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-km5tw" Nov 22 08:15:15 crc kubenswrapper[4853]: I1122 08:15:15.837908 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-gg5c9" Nov 22 08:15:16 crc kubenswrapper[4853]: I1122 08:15:16.409414 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-gg5c9"] Nov 22 08:15:16 crc kubenswrapper[4853]: I1122 08:15:16.918226 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 22 08:15:17 crc kubenswrapper[4853]: I1122 08:15:17.384160 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-gg5c9" event={"ID":"b9d13e92-cc8c-45a2-a122-0af7c97fe7e6","Type":"ContainerStarted","Data":"4ef317d2c08b4856268c62a6b06a23a5058bb7d7ac844b154dc1d76edead818c"} Nov 22 08:15:17 crc kubenswrapper[4853]: I1122 08:15:17.384474 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-gg5c9" event={"ID":"b9d13e92-cc8c-45a2-a122-0af7c97fe7e6","Type":"ContainerStarted","Data":"f837b1ccffd89ddf760e999dd6919ab5e28ebbb0a498ef95c1435153ae2e8fd5"} Nov 22 08:15:17 crc kubenswrapper[4853]: I1122 08:15:17.409122 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-gg5c9" podStartSLOduration=1.91075589 podStartE2EDuration="2.409104473s" podCreationTimestamp="2025-11-22 08:15:15 +0000 UTC" firstStartedPulling="2025-11-22 08:15:16.417033512 +0000 UTC m=+3915.257656138" lastFinishedPulling="2025-11-22 08:15:16.915382095 +0000 UTC m=+3915.756004721" observedRunningTime="2025-11-22 08:15:17.409053042 +0000 UTC m=+3916.249675668" watchObservedRunningTime="2025-11-22 08:15:17.409104473 +0000 UTC m=+3916.249727099" Nov 22 08:15:29 crc kubenswrapper[4853]: I1122 08:15:29.749324 4853 scope.go:117] "RemoveContainer" containerID="43c489ef7c014ec88f2022965726354fdac660d1573666e320842974a56f8ca4" Nov 22 08:15:29 crc kubenswrapper[4853]: E1122 08:15:29.751268 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" Nov 22 08:15:43 crc 
kubenswrapper[4853]: I1122 08:15:43.747375 4853 scope.go:117] "RemoveContainer" containerID="43c489ef7c014ec88f2022965726354fdac660d1573666e320842974a56f8ca4" Nov 22 08:15:43 crc kubenswrapper[4853]: E1122 08:15:43.748158 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" Nov 22 08:15:57 crc kubenswrapper[4853]: I1122 08:15:57.747965 4853 scope.go:117] "RemoveContainer" containerID="43c489ef7c014ec88f2022965726354fdac660d1573666e320842974a56f8ca4" Nov 22 08:15:57 crc kubenswrapper[4853]: E1122 08:15:57.748839 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" Nov 22 08:16:03 crc kubenswrapper[4853]: I1122 08:16:03.051824 4853 scope.go:117] "RemoveContainer" containerID="72db45160eeff40ee43a2752e399b2b8a4f122a26ddd84be7500a32483f6ae15" Nov 22 08:16:11 crc kubenswrapper[4853]: I1122 08:16:11.748475 4853 scope.go:117] "RemoveContainer" containerID="43c489ef7c014ec88f2022965726354fdac660d1573666e320842974a56f8ca4" Nov 22 08:16:11 crc kubenswrapper[4853]: E1122 08:16:11.749503 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" Nov 22 08:16:22 crc kubenswrapper[4853]: I1122 08:16:22.748399 4853 scope.go:117] "RemoveContainer" containerID="43c489ef7c014ec88f2022965726354fdac660d1573666e320842974a56f8ca4" Nov 22 08:16:22 crc kubenswrapper[4853]: E1122 08:16:22.749263 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" Nov 22 08:16:36 crc kubenswrapper[4853]: I1122 08:16:36.749369 4853 scope.go:117] "RemoveContainer" containerID="43c489ef7c014ec88f2022965726354fdac660d1573666e320842974a56f8ca4" Nov 22 08:16:36 crc kubenswrapper[4853]: E1122 08:16:36.750130 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" Nov 22 08:16:48 crc 
kubenswrapper[4853]: I1122 08:16:48.748437 4853 scope.go:117] "RemoveContainer" containerID="43c489ef7c014ec88f2022965726354fdac660d1573666e320842974a56f8ca4" Nov 22 08:16:48 crc kubenswrapper[4853]: E1122 08:16:48.749376 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" Nov 22 08:17:02 crc kubenswrapper[4853]: I1122 08:17:02.747663 4853 scope.go:117] "RemoveContainer" containerID="43c489ef7c014ec88f2022965726354fdac660d1573666e320842974a56f8ca4" Nov 22 08:17:02 crc kubenswrapper[4853]: E1122 08:17:02.748491 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" Nov 22 08:17:09 crc kubenswrapper[4853]: I1122 08:17:09.037372 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-z9dzq"] Nov 22 08:17:09 crc kubenswrapper[4853]: I1122 08:17:09.041354 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-z9dzq" Nov 22 08:17:09 crc kubenswrapper[4853]: I1122 08:17:09.067162 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-z9dzq"] Nov 22 08:17:09 crc kubenswrapper[4853]: I1122 08:17:09.191198 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qswvq\" (UniqueName: \"kubernetes.io/projected/9f78dc4e-fc3b-4737-ae70-f471e46c3ed0-kube-api-access-qswvq\") pod \"redhat-marketplace-z9dzq\" (UID: \"9f78dc4e-fc3b-4737-ae70-f471e46c3ed0\") " pod="openshift-marketplace/redhat-marketplace-z9dzq" Nov 22 08:17:09 crc kubenswrapper[4853]: I1122 08:17:09.191600 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f78dc4e-fc3b-4737-ae70-f471e46c3ed0-catalog-content\") pod \"redhat-marketplace-z9dzq\" (UID: \"9f78dc4e-fc3b-4737-ae70-f471e46c3ed0\") " pod="openshift-marketplace/redhat-marketplace-z9dzq" Nov 22 08:17:09 crc kubenswrapper[4853]: I1122 08:17:09.191819 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f78dc4e-fc3b-4737-ae70-f471e46c3ed0-utilities\") pod \"redhat-marketplace-z9dzq\" (UID: \"9f78dc4e-fc3b-4737-ae70-f471e46c3ed0\") " pod="openshift-marketplace/redhat-marketplace-z9dzq" Nov 22 08:17:09 crc kubenswrapper[4853]: I1122 08:17:09.294066 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f78dc4e-fc3b-4737-ae70-f471e46c3ed0-utilities\") pod \"redhat-marketplace-z9dzq\" (UID: \"9f78dc4e-fc3b-4737-ae70-f471e46c3ed0\") " pod="openshift-marketplace/redhat-marketplace-z9dzq" Nov 22 08:17:09 crc kubenswrapper[4853]: I1122 08:17:09.294229 4853 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qswvq\" (UniqueName: \"kubernetes.io/projected/9f78dc4e-fc3b-4737-ae70-f471e46c3ed0-kube-api-access-qswvq\") pod \"redhat-marketplace-z9dzq\" (UID: \"9f78dc4e-fc3b-4737-ae70-f471e46c3ed0\") " pod="openshift-marketplace/redhat-marketplace-z9dzq" Nov 22 08:17:09 crc kubenswrapper[4853]: I1122 08:17:09.294303 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f78dc4e-fc3b-4737-ae70-f471e46c3ed0-catalog-content\") pod \"redhat-marketplace-z9dzq\" (UID: \"9f78dc4e-fc3b-4737-ae70-f471e46c3ed0\") " pod="openshift-marketplace/redhat-marketplace-z9dzq" Nov 22 08:17:09 crc kubenswrapper[4853]: I1122 08:17:09.294628 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f78dc4e-fc3b-4737-ae70-f471e46c3ed0-utilities\") pod \"redhat-marketplace-z9dzq\" (UID: \"9f78dc4e-fc3b-4737-ae70-f471e46c3ed0\") " pod="openshift-marketplace/redhat-marketplace-z9dzq" Nov 22 08:17:09 crc kubenswrapper[4853]: I1122 08:17:09.294772 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f78dc4e-fc3b-4737-ae70-f471e46c3ed0-catalog-content\") pod \"redhat-marketplace-z9dzq\" (UID: \"9f78dc4e-fc3b-4737-ae70-f471e46c3ed0\") " pod="openshift-marketplace/redhat-marketplace-z9dzq" Nov 22 08:17:09 crc kubenswrapper[4853]: I1122 08:17:09.321788 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qswvq\" (UniqueName: \"kubernetes.io/projected/9f78dc4e-fc3b-4737-ae70-f471e46c3ed0-kube-api-access-qswvq\") pod \"redhat-marketplace-z9dzq\" (UID: \"9f78dc4e-fc3b-4737-ae70-f471e46c3ed0\") " pod="openshift-marketplace/redhat-marketplace-z9dzq" Nov 22 08:17:09 crc kubenswrapper[4853]: I1122 08:17:09.364043 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-z9dzq" Nov 22 08:17:09 crc kubenswrapper[4853]: I1122 08:17:09.876973 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-z9dzq"] Nov 22 08:17:10 crc kubenswrapper[4853]: I1122 08:17:10.650533 4853 generic.go:334] "Generic (PLEG): container finished" podID="9f78dc4e-fc3b-4737-ae70-f471e46c3ed0" containerID="caf32faf15fdfe27da5e46d646d05a47832a89b17566f0524302a53acf533f58" exitCode=0 Nov 22 08:17:10 crc kubenswrapper[4853]: I1122 08:17:10.650636 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z9dzq" event={"ID":"9f78dc4e-fc3b-4737-ae70-f471e46c3ed0","Type":"ContainerDied","Data":"caf32faf15fdfe27da5e46d646d05a47832a89b17566f0524302a53acf533f58"} Nov 22 08:17:10 crc kubenswrapper[4853]: I1122 08:17:10.650853 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z9dzq" event={"ID":"9f78dc4e-fc3b-4737-ae70-f471e46c3ed0","Type":"ContainerStarted","Data":"f02f9099f7fb4745b08fb9a55b8381cd4470869e156e12997426d275a4833e0f"} Nov 22 08:17:11 crc kubenswrapper[4853]: I1122 08:17:11.667441 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z9dzq" event={"ID":"9f78dc4e-fc3b-4737-ae70-f471e46c3ed0","Type":"ContainerStarted","Data":"13d23ee6cf454c3ac75a624e8443c15c9f06bc8673d189438d27d64af9461608"} Nov 22 08:17:13 crc kubenswrapper[4853]: I1122 08:17:13.688628 4853 generic.go:334] "Generic (PLEG): container finished" podID="9f78dc4e-fc3b-4737-ae70-f471e46c3ed0" containerID="13d23ee6cf454c3ac75a624e8443c15c9f06bc8673d189438d27d64af9461608" exitCode=0 Nov 22 08:17:13 crc kubenswrapper[4853]: I1122 08:17:13.688705 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z9dzq" event={"ID":"9f78dc4e-fc3b-4737-ae70-f471e46c3ed0","Type":"ContainerDied","Data":"13d23ee6cf454c3ac75a624e8443c15c9f06bc8673d189438d27d64af9461608"} Nov 22 08:17:13 crc kubenswrapper[4853]: I1122 08:17:13.691835 4853 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 22 08:17:13 crc kubenswrapper[4853]: I1122 08:17:13.748947 4853 scope.go:117] "RemoveContainer" containerID="43c489ef7c014ec88f2022965726354fdac660d1573666e320842974a56f8ca4" Nov 22 08:17:13 crc kubenswrapper[4853]: E1122 08:17:13.749273 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" Nov 22 08:17:14 crc kubenswrapper[4853]: I1122 08:17:14.702501 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z9dzq" event={"ID":"9f78dc4e-fc3b-4737-ae70-f471e46c3ed0","Type":"ContainerStarted","Data":"cbb104adf6eb0c57f956d3a247b0018bdf9a47115249ae58adade275a47a6fb8"} Nov 22 08:17:14 crc kubenswrapper[4853]: I1122 08:17:14.727626 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-z9dzq" podStartSLOduration=1.972381231 podStartE2EDuration="5.727608444s" podCreationTimestamp="2025-11-22 08:17:09 +0000 UTC" firstStartedPulling="2025-11-22 
08:17:10.65293078 +0000 UTC m=+4029.493553396" lastFinishedPulling="2025-11-22 08:17:14.408157983 +0000 UTC m=+4033.248780609" observedRunningTime="2025-11-22 08:17:14.720344079 +0000 UTC m=+4033.560966705" watchObservedRunningTime="2025-11-22 08:17:14.727608444 +0000 UTC m=+4033.568231070" Nov 22 08:17:19 crc kubenswrapper[4853]: I1122 08:17:19.364556 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-z9dzq" Nov 22 08:17:19 crc kubenswrapper[4853]: I1122 08:17:19.365149 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-z9dzq" Nov 22 08:17:19 crc kubenswrapper[4853]: I1122 08:17:19.416336 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-z9dzq" Nov 22 08:17:19 crc kubenswrapper[4853]: I1122 08:17:19.803705 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-z9dzq" Nov 22 08:17:19 crc kubenswrapper[4853]: I1122 08:17:19.857633 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-z9dzq"] Nov 22 08:17:21 crc kubenswrapper[4853]: I1122 08:17:21.770967 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-z9dzq" podUID="9f78dc4e-fc3b-4737-ae70-f471e46c3ed0" containerName="registry-server" containerID="cri-o://cbb104adf6eb0c57f956d3a247b0018bdf9a47115249ae58adade275a47a6fb8" gracePeriod=2 Nov 22 08:17:22 crc kubenswrapper[4853]: I1122 08:17:22.307262 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-z9dzq" Nov 22 08:17:22 crc kubenswrapper[4853]: I1122 08:17:22.421211 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f78dc4e-fc3b-4737-ae70-f471e46c3ed0-catalog-content\") pod \"9f78dc4e-fc3b-4737-ae70-f471e46c3ed0\" (UID: \"9f78dc4e-fc3b-4737-ae70-f471e46c3ed0\") " Nov 22 08:17:22 crc kubenswrapper[4853]: I1122 08:17:22.421507 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qswvq\" (UniqueName: \"kubernetes.io/projected/9f78dc4e-fc3b-4737-ae70-f471e46c3ed0-kube-api-access-qswvq\") pod \"9f78dc4e-fc3b-4737-ae70-f471e46c3ed0\" (UID: \"9f78dc4e-fc3b-4737-ae70-f471e46c3ed0\") " Nov 22 08:17:22 crc kubenswrapper[4853]: I1122 08:17:22.421553 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f78dc4e-fc3b-4737-ae70-f471e46c3ed0-utilities\") pod \"9f78dc4e-fc3b-4737-ae70-f471e46c3ed0\" (UID: \"9f78dc4e-fc3b-4737-ae70-f471e46c3ed0\") " Nov 22 08:17:22 crc kubenswrapper[4853]: I1122 08:17:22.422346 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9f78dc4e-fc3b-4737-ae70-f471e46c3ed0-utilities" (OuterVolumeSpecName: "utilities") pod "9f78dc4e-fc3b-4737-ae70-f471e46c3ed0" (UID: "9f78dc4e-fc3b-4737-ae70-f471e46c3ed0"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 08:17:22 crc kubenswrapper[4853]: I1122 08:17:22.428164 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f78dc4e-fc3b-4737-ae70-f471e46c3ed0-kube-api-access-qswvq" (OuterVolumeSpecName: "kube-api-access-qswvq") pod "9f78dc4e-fc3b-4737-ae70-f471e46c3ed0" (UID: "9f78dc4e-fc3b-4737-ae70-f471e46c3ed0"). InnerVolumeSpecName "kube-api-access-qswvq". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 08:17:22 crc kubenswrapper[4853]: I1122 08:17:22.443938 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9f78dc4e-fc3b-4737-ae70-f471e46c3ed0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9f78dc4e-fc3b-4737-ae70-f471e46c3ed0" (UID: "9f78dc4e-fc3b-4737-ae70-f471e46c3ed0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 08:17:22 crc kubenswrapper[4853]: I1122 08:17:22.524519 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qswvq\" (UniqueName: \"kubernetes.io/projected/9f78dc4e-fc3b-4737-ae70-f471e46c3ed0-kube-api-access-qswvq\") on node \"crc\" DevicePath \"\"" Nov 22 08:17:22 crc kubenswrapper[4853]: I1122 08:17:22.524574 4853 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f78dc4e-fc3b-4737-ae70-f471e46c3ed0-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 08:17:22 crc kubenswrapper[4853]: I1122 08:17:22.524585 4853 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f78dc4e-fc3b-4737-ae70-f471e46c3ed0-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 08:17:22 crc kubenswrapper[4853]: I1122 08:17:22.782285 4853 generic.go:334] "Generic (PLEG): container finished" podID="9f78dc4e-fc3b-4737-ae70-f471e46c3ed0" containerID="cbb104adf6eb0c57f956d3a247b0018bdf9a47115249ae58adade275a47a6fb8" exitCode=0 Nov 22 08:17:22 crc kubenswrapper[4853]: I1122 08:17:22.782331 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z9dzq" event={"ID":"9f78dc4e-fc3b-4737-ae70-f471e46c3ed0","Type":"ContainerDied","Data":"cbb104adf6eb0c57f956d3a247b0018bdf9a47115249ae58adade275a47a6fb8"} Nov 22 08:17:22 crc kubenswrapper[4853]: I1122 08:17:22.782362 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z9dzq" event={"ID":"9f78dc4e-fc3b-4737-ae70-f471e46c3ed0","Type":"ContainerDied","Data":"f02f9099f7fb4745b08fb9a55b8381cd4470869e156e12997426d275a4833e0f"} Nov 22 08:17:22 crc kubenswrapper[4853]: I1122 08:17:22.782380 4853 scope.go:117] "RemoveContainer" containerID="cbb104adf6eb0c57f956d3a247b0018bdf9a47115249ae58adade275a47a6fb8" Nov 22 08:17:22 crc kubenswrapper[4853]: I1122 08:17:22.782408 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-z9dzq" Nov 22 08:17:22 crc kubenswrapper[4853]: I1122 08:17:22.805945 4853 scope.go:117] "RemoveContainer" containerID="13d23ee6cf454c3ac75a624e8443c15c9f06bc8673d189438d27d64af9461608" Nov 22 08:17:22 crc kubenswrapper[4853]: I1122 08:17:22.814690 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-z9dzq"] Nov 22 08:17:22 crc kubenswrapper[4853]: I1122 08:17:22.825935 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-z9dzq"] Nov 22 08:17:22 crc kubenswrapper[4853]: I1122 08:17:22.841009 4853 scope.go:117] "RemoveContainer" containerID="caf32faf15fdfe27da5e46d646d05a47832a89b17566f0524302a53acf533f58" Nov 22 08:17:22 crc kubenswrapper[4853]: I1122 08:17:22.894390 4853 scope.go:117] "RemoveContainer" containerID="cbb104adf6eb0c57f956d3a247b0018bdf9a47115249ae58adade275a47a6fb8" Nov 22 08:17:22 crc kubenswrapper[4853]: E1122 08:17:22.894897 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cbb104adf6eb0c57f956d3a247b0018bdf9a47115249ae58adade275a47a6fb8\": container with ID starting with cbb104adf6eb0c57f956d3a247b0018bdf9a47115249ae58adade275a47a6fb8 not found: ID does not exist" containerID="cbb104adf6eb0c57f956d3a247b0018bdf9a47115249ae58adade275a47a6fb8" Nov 22 08:17:22 crc kubenswrapper[4853]: I1122 08:17:22.894933 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cbb104adf6eb0c57f956d3a247b0018bdf9a47115249ae58adade275a47a6fb8"} err="failed to get container status \"cbb104adf6eb0c57f956d3a247b0018bdf9a47115249ae58adade275a47a6fb8\": rpc error: code = NotFound desc = could not find container \"cbb104adf6eb0c57f956d3a247b0018bdf9a47115249ae58adade275a47a6fb8\": container with ID starting with cbb104adf6eb0c57f956d3a247b0018bdf9a47115249ae58adade275a47a6fb8 not found: ID does not exist" Nov 22 08:17:22 crc kubenswrapper[4853]: I1122 08:17:22.894970 4853 scope.go:117] "RemoveContainer" containerID="13d23ee6cf454c3ac75a624e8443c15c9f06bc8673d189438d27d64af9461608" Nov 22 08:17:22 crc kubenswrapper[4853]: E1122 08:17:22.895480 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"13d23ee6cf454c3ac75a624e8443c15c9f06bc8673d189438d27d64af9461608\": container with ID starting with 13d23ee6cf454c3ac75a624e8443c15c9f06bc8673d189438d27d64af9461608 not found: ID does not exist" containerID="13d23ee6cf454c3ac75a624e8443c15c9f06bc8673d189438d27d64af9461608" Nov 22 08:17:22 crc kubenswrapper[4853]: I1122 08:17:22.895527 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"13d23ee6cf454c3ac75a624e8443c15c9f06bc8673d189438d27d64af9461608"} err="failed to get container status \"13d23ee6cf454c3ac75a624e8443c15c9f06bc8673d189438d27d64af9461608\": rpc error: code = NotFound desc = could not find container \"13d23ee6cf454c3ac75a624e8443c15c9f06bc8673d189438d27d64af9461608\": container with ID starting with 13d23ee6cf454c3ac75a624e8443c15c9f06bc8673d189438d27d64af9461608 not found: ID does not exist" Nov 22 08:17:22 crc kubenswrapper[4853]: I1122 08:17:22.895587 4853 scope.go:117] "RemoveContainer" containerID="caf32faf15fdfe27da5e46d646d05a47832a89b17566f0524302a53acf533f58" Nov 22 08:17:22 crc kubenswrapper[4853]: E1122 08:17:22.896051 4853 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"caf32faf15fdfe27da5e46d646d05a47832a89b17566f0524302a53acf533f58\": container with ID starting with caf32faf15fdfe27da5e46d646d05a47832a89b17566f0524302a53acf533f58 not found: ID does not exist" containerID="caf32faf15fdfe27da5e46d646d05a47832a89b17566f0524302a53acf533f58" Nov 22 08:17:22 crc kubenswrapper[4853]: I1122 08:17:22.896077 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"caf32faf15fdfe27da5e46d646d05a47832a89b17566f0524302a53acf533f58"} err="failed to get container status \"caf32faf15fdfe27da5e46d646d05a47832a89b17566f0524302a53acf533f58\": rpc error: code = NotFound desc = could not find container \"caf32faf15fdfe27da5e46d646d05a47832a89b17566f0524302a53acf533f58\": container with ID starting with caf32faf15fdfe27da5e46d646d05a47832a89b17566f0524302a53acf533f58 not found: ID does not exist" Nov 22 08:17:23 crc kubenswrapper[4853]: I1122 08:17:23.766385 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f78dc4e-fc3b-4737-ae70-f471e46c3ed0" path="/var/lib/kubelet/pods/9f78dc4e-fc3b-4737-ae70-f471e46c3ed0/volumes" Nov 22 08:17:26 crc kubenswrapper[4853]: I1122 08:17:26.747650 4853 scope.go:117] "RemoveContainer" containerID="43c489ef7c014ec88f2022965726354fdac660d1573666e320842974a56f8ca4" Nov 22 08:17:26 crc kubenswrapper[4853]: E1122 08:17:26.748538 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" Nov 22 08:17:38 crc kubenswrapper[4853]: I1122 08:17:38.747650 4853 scope.go:117] "RemoveContainer" containerID="43c489ef7c014ec88f2022965726354fdac660d1573666e320842974a56f8ca4" Nov 22 08:17:38 crc kubenswrapper[4853]: E1122 08:17:38.748401 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" Nov 22 08:17:48 crc kubenswrapper[4853]: I1122 08:17:48.071349 4853 generic.go:334] "Generic (PLEG): container finished" podID="b9d13e92-cc8c-45a2-a122-0af7c97fe7e6" containerID="4ef317d2c08b4856268c62a6b06a23a5058bb7d7ac844b154dc1d76edead818c" exitCode=0 Nov 22 08:17:48 crc kubenswrapper[4853]: I1122 08:17:48.071986 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-gg5c9" event={"ID":"b9d13e92-cc8c-45a2-a122-0af7c97fe7e6","Type":"ContainerDied","Data":"4ef317d2c08b4856268c62a6b06a23a5058bb7d7ac844b154dc1d76edead818c"} Nov 22 08:17:49 crc kubenswrapper[4853]: I1122 08:17:49.587379 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-gg5c9" Nov 22 08:17:49 crc kubenswrapper[4853]: I1122 08:17:49.687717 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b9d13e92-cc8c-45a2-a122-0af7c97fe7e6-inventory\") pod \"b9d13e92-cc8c-45a2-a122-0af7c97fe7e6\" (UID: \"b9d13e92-cc8c-45a2-a122-0af7c97fe7e6\") " Nov 22 08:17:49 crc kubenswrapper[4853]: I1122 08:17:49.687807 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b9d13e92-cc8c-45a2-a122-0af7c97fe7e6-ssh-key\") pod \"b9d13e92-cc8c-45a2-a122-0af7c97fe7e6\" (UID: \"b9d13e92-cc8c-45a2-a122-0af7c97fe7e6\") " Nov 22 08:17:49 crc kubenswrapper[4853]: I1122 08:17:49.687854 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/b9d13e92-cc8c-45a2-a122-0af7c97fe7e6-ceilometer-compute-config-data-0\") pod \"b9d13e92-cc8c-45a2-a122-0af7c97fe7e6\" (UID: \"b9d13e92-cc8c-45a2-a122-0af7c97fe7e6\") " Nov 22 08:17:49 crc kubenswrapper[4853]: I1122 08:17:49.687908 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/b9d13e92-cc8c-45a2-a122-0af7c97fe7e6-ceilometer-compute-config-data-1\") pod \"b9d13e92-cc8c-45a2-a122-0af7c97fe7e6\" (UID: \"b9d13e92-cc8c-45a2-a122-0af7c97fe7e6\") " Nov 22 08:17:49 crc kubenswrapper[4853]: I1122 08:17:49.688079 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9d13e92-cc8c-45a2-a122-0af7c97fe7e6-telemetry-combined-ca-bundle\") pod \"b9d13e92-cc8c-45a2-a122-0af7c97fe7e6\" (UID: \"b9d13e92-cc8c-45a2-a122-0af7c97fe7e6\") " Nov 22 08:17:49 crc kubenswrapper[4853]: I1122 08:17:49.688210 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/b9d13e92-cc8c-45a2-a122-0af7c97fe7e6-ceilometer-compute-config-data-2\") pod \"b9d13e92-cc8c-45a2-a122-0af7c97fe7e6\" (UID: \"b9d13e92-cc8c-45a2-a122-0af7c97fe7e6\") " Nov 22 08:17:49 crc kubenswrapper[4853]: I1122 08:17:49.688345 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qtbk4\" (UniqueName: \"kubernetes.io/projected/b9d13e92-cc8c-45a2-a122-0af7c97fe7e6-kube-api-access-qtbk4\") pod \"b9d13e92-cc8c-45a2-a122-0af7c97fe7e6\" (UID: \"b9d13e92-cc8c-45a2-a122-0af7c97fe7e6\") " Nov 22 08:17:49 crc kubenswrapper[4853]: I1122 08:17:49.693404 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9d13e92-cc8c-45a2-a122-0af7c97fe7e6-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "b9d13e92-cc8c-45a2-a122-0af7c97fe7e6" (UID: "b9d13e92-cc8c-45a2-a122-0af7c97fe7e6"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:17:49 crc kubenswrapper[4853]: I1122 08:17:49.693710 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b9d13e92-cc8c-45a2-a122-0af7c97fe7e6-kube-api-access-qtbk4" (OuterVolumeSpecName: "kube-api-access-qtbk4") pod "b9d13e92-cc8c-45a2-a122-0af7c97fe7e6" (UID: "b9d13e92-cc8c-45a2-a122-0af7c97fe7e6"). 
InnerVolumeSpecName "kube-api-access-qtbk4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 08:17:49 crc kubenswrapper[4853]: I1122 08:17:49.720876 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9d13e92-cc8c-45a2-a122-0af7c97fe7e6-inventory" (OuterVolumeSpecName: "inventory") pod "b9d13e92-cc8c-45a2-a122-0af7c97fe7e6" (UID: "b9d13e92-cc8c-45a2-a122-0af7c97fe7e6"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:17:49 crc kubenswrapper[4853]: I1122 08:17:49.727524 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9d13e92-cc8c-45a2-a122-0af7c97fe7e6-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "b9d13e92-cc8c-45a2-a122-0af7c97fe7e6" (UID: "b9d13e92-cc8c-45a2-a122-0af7c97fe7e6"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:17:49 crc kubenswrapper[4853]: I1122 08:17:49.727934 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9d13e92-cc8c-45a2-a122-0af7c97fe7e6-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "b9d13e92-cc8c-45a2-a122-0af7c97fe7e6" (UID: "b9d13e92-cc8c-45a2-a122-0af7c97fe7e6"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:17:49 crc kubenswrapper[4853]: I1122 08:17:49.728261 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9d13e92-cc8c-45a2-a122-0af7c97fe7e6-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "b9d13e92-cc8c-45a2-a122-0af7c97fe7e6" (UID: "b9d13e92-cc8c-45a2-a122-0af7c97fe7e6"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:17:49 crc kubenswrapper[4853]: I1122 08:17:49.729678 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9d13e92-cc8c-45a2-a122-0af7c97fe7e6-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "b9d13e92-cc8c-45a2-a122-0af7c97fe7e6" (UID: "b9d13e92-cc8c-45a2-a122-0af7c97fe7e6"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:17:49 crc kubenswrapper[4853]: I1122 08:17:49.791223 4853 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/b9d13e92-cc8c-45a2-a122-0af7c97fe7e6-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Nov 22 08:17:49 crc kubenswrapper[4853]: I1122 08:17:49.791255 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qtbk4\" (UniqueName: \"kubernetes.io/projected/b9d13e92-cc8c-45a2-a122-0af7c97fe7e6-kube-api-access-qtbk4\") on node \"crc\" DevicePath \"\"" Nov 22 08:17:49 crc kubenswrapper[4853]: I1122 08:17:49.791266 4853 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b9d13e92-cc8c-45a2-a122-0af7c97fe7e6-inventory\") on node \"crc\" DevicePath \"\"" Nov 22 08:17:49 crc kubenswrapper[4853]: I1122 08:17:49.791274 4853 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b9d13e92-cc8c-45a2-a122-0af7c97fe7e6-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 22 08:17:49 crc kubenswrapper[4853]: I1122 08:17:49.791283 4853 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/b9d13e92-cc8c-45a2-a122-0af7c97fe7e6-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Nov 22 08:17:49 crc kubenswrapper[4853]: I1122 08:17:49.791291 4853 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/b9d13e92-cc8c-45a2-a122-0af7c97fe7e6-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Nov 22 08:17:49 crc kubenswrapper[4853]: I1122 08:17:49.791300 4853 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9d13e92-cc8c-45a2-a122-0af7c97fe7e6-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 08:17:50 crc kubenswrapper[4853]: I1122 08:17:50.096702 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-gg5c9" event={"ID":"b9d13e92-cc8c-45a2-a122-0af7c97fe7e6","Type":"ContainerDied","Data":"f837b1ccffd89ddf760e999dd6919ab5e28ebbb0a498ef95c1435153ae2e8fd5"} Nov 22 08:17:50 crc kubenswrapper[4853]: I1122 08:17:50.096740 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f837b1ccffd89ddf760e999dd6919ab5e28ebbb0a498ef95c1435153ae2e8fd5" Nov 22 08:17:50 crc kubenswrapper[4853]: I1122 08:17:50.096759 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-gg5c9" Nov 22 08:17:50 crc kubenswrapper[4853]: I1122 08:17:50.194882 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhmgx"] Nov 22 08:17:50 crc kubenswrapper[4853]: E1122 08:17:50.195589 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f78dc4e-fc3b-4737-ae70-f471e46c3ed0" containerName="registry-server" Nov 22 08:17:50 crc kubenswrapper[4853]: I1122 08:17:50.195674 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f78dc4e-fc3b-4737-ae70-f471e46c3ed0" containerName="registry-server" Nov 22 08:17:50 crc kubenswrapper[4853]: E1122 08:17:50.195891 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f78dc4e-fc3b-4737-ae70-f471e46c3ed0" containerName="extract-utilities" Nov 22 08:17:50 crc kubenswrapper[4853]: I1122 08:17:50.195975 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f78dc4e-fc3b-4737-ae70-f471e46c3ed0" containerName="extract-utilities" Nov 22 08:17:50 crc kubenswrapper[4853]: E1122 08:17:50.196090 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9d13e92-cc8c-45a2-a122-0af7c97fe7e6" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Nov 22 08:17:50 crc kubenswrapper[4853]: I1122 08:17:50.196179 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9d13e92-cc8c-45a2-a122-0af7c97fe7e6" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Nov 22 08:17:50 crc kubenswrapper[4853]: E1122 08:17:50.196267 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f78dc4e-fc3b-4737-ae70-f471e46c3ed0" containerName="extract-content" Nov 22 08:17:50 crc kubenswrapper[4853]: I1122 08:17:50.196357 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f78dc4e-fc3b-4737-ae70-f471e46c3ed0" containerName="extract-content" Nov 22 08:17:50 crc kubenswrapper[4853]: I1122 08:17:50.196651 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f78dc4e-fc3b-4737-ae70-f471e46c3ed0" containerName="registry-server" Nov 22 08:17:50 crc kubenswrapper[4853]: I1122 08:17:50.196784 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="b9d13e92-cc8c-45a2-a122-0af7c97fe7e6" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Nov 22 08:17:50 crc kubenswrapper[4853]: I1122 08:17:50.197914 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhmgx" Nov 22 08:17:50 crc kubenswrapper[4853]: I1122 08:17:50.200373 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 22 08:17:50 crc kubenswrapper[4853]: I1122 08:17:50.200555 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 22 08:17:50 crc kubenswrapper[4853]: I1122 08:17:50.201156 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-ipmi-config-data" Nov 22 08:17:50 crc kubenswrapper[4853]: I1122 08:17:50.201498 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 22 08:17:50 crc kubenswrapper[4853]: I1122 08:17:50.201624 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-km5tw" Nov 22 08:17:50 crc kubenswrapper[4853]: I1122 08:17:50.211393 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhmgx"] Nov 22 08:17:50 crc kubenswrapper[4853]: I1122 08:17:50.303983 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/e68d04f1-7a40-4197-b65b-2be6e53f9ff3-ceilometer-ipmi-config-data-0\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhmgx\" (UID: \"e68d04f1-7a40-4197-b65b-2be6e53f9ff3\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhmgx" Nov 22 08:17:50 crc kubenswrapper[4853]: I1122 08:17:50.304291 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e68d04f1-7a40-4197-b65b-2be6e53f9ff3-ssh-key\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhmgx\" (UID: \"e68d04f1-7a40-4197-b65b-2be6e53f9ff3\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhmgx" Nov 22 08:17:50 crc kubenswrapper[4853]: I1122 08:17:50.304555 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e68d04f1-7a40-4197-b65b-2be6e53f9ff3-inventory\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhmgx\" (UID: \"e68d04f1-7a40-4197-b65b-2be6e53f9ff3\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhmgx" Nov 22 08:17:50 crc kubenswrapper[4853]: I1122 08:17:50.304627 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e68d04f1-7a40-4197-b65b-2be6e53f9ff3-telemetry-power-monitoring-combined-ca-bundle\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhmgx\" (UID: \"e68d04f1-7a40-4197-b65b-2be6e53f9ff3\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhmgx" Nov 22 08:17:50 crc kubenswrapper[4853]: I1122 08:17:50.304715 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-55p6g\" (UniqueName: \"kubernetes.io/projected/e68d04f1-7a40-4197-b65b-2be6e53f9ff3-kube-api-access-55p6g\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhmgx\" (UID: 
\"e68d04f1-7a40-4197-b65b-2be6e53f9ff3\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhmgx" Nov 22 08:17:50 crc kubenswrapper[4853]: I1122 08:17:50.304991 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/e68d04f1-7a40-4197-b65b-2be6e53f9ff3-ceilometer-ipmi-config-data-1\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhmgx\" (UID: \"e68d04f1-7a40-4197-b65b-2be6e53f9ff3\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhmgx" Nov 22 08:17:50 crc kubenswrapper[4853]: I1122 08:17:50.305054 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/e68d04f1-7a40-4197-b65b-2be6e53f9ff3-ceilometer-ipmi-config-data-2\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhmgx\" (UID: \"e68d04f1-7a40-4197-b65b-2be6e53f9ff3\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhmgx" Nov 22 08:17:50 crc kubenswrapper[4853]: I1122 08:17:50.407257 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e68d04f1-7a40-4197-b65b-2be6e53f9ff3-inventory\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhmgx\" (UID: \"e68d04f1-7a40-4197-b65b-2be6e53f9ff3\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhmgx" Nov 22 08:17:50 crc kubenswrapper[4853]: I1122 08:17:50.407330 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e68d04f1-7a40-4197-b65b-2be6e53f9ff3-telemetry-power-monitoring-combined-ca-bundle\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhmgx\" (UID: \"e68d04f1-7a40-4197-b65b-2be6e53f9ff3\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhmgx" Nov 22 08:17:50 crc kubenswrapper[4853]: I1122 08:17:50.407378 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-55p6g\" (UniqueName: \"kubernetes.io/projected/e68d04f1-7a40-4197-b65b-2be6e53f9ff3-kube-api-access-55p6g\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhmgx\" (UID: \"e68d04f1-7a40-4197-b65b-2be6e53f9ff3\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhmgx" Nov 22 08:17:50 crc kubenswrapper[4853]: I1122 08:17:50.407477 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/e68d04f1-7a40-4197-b65b-2be6e53f9ff3-ceilometer-ipmi-config-data-1\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhmgx\" (UID: \"e68d04f1-7a40-4197-b65b-2be6e53f9ff3\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhmgx" Nov 22 08:17:50 crc kubenswrapper[4853]: I1122 08:17:50.407538 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/e68d04f1-7a40-4197-b65b-2be6e53f9ff3-ceilometer-ipmi-config-data-2\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhmgx\" (UID: \"e68d04f1-7a40-4197-b65b-2be6e53f9ff3\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhmgx" Nov 22 
08:17:50 crc kubenswrapper[4853]: I1122 08:17:50.407590 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/e68d04f1-7a40-4197-b65b-2be6e53f9ff3-ceilometer-ipmi-config-data-0\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhmgx\" (UID: \"e68d04f1-7a40-4197-b65b-2be6e53f9ff3\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhmgx" Nov 22 08:17:50 crc kubenswrapper[4853]: I1122 08:17:50.407651 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e68d04f1-7a40-4197-b65b-2be6e53f9ff3-ssh-key\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhmgx\" (UID: \"e68d04f1-7a40-4197-b65b-2be6e53f9ff3\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhmgx" Nov 22 08:17:50 crc kubenswrapper[4853]: I1122 08:17:50.413636 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/e68d04f1-7a40-4197-b65b-2be6e53f9ff3-ceilometer-ipmi-config-data-2\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhmgx\" (UID: \"e68d04f1-7a40-4197-b65b-2be6e53f9ff3\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhmgx" Nov 22 08:17:50 crc kubenswrapper[4853]: I1122 08:17:50.413975 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e68d04f1-7a40-4197-b65b-2be6e53f9ff3-ssh-key\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhmgx\" (UID: \"e68d04f1-7a40-4197-b65b-2be6e53f9ff3\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhmgx" Nov 22 08:17:50 crc kubenswrapper[4853]: I1122 08:17:50.413666 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/e68d04f1-7a40-4197-b65b-2be6e53f9ff3-ceilometer-ipmi-config-data-0\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhmgx\" (UID: \"e68d04f1-7a40-4197-b65b-2be6e53f9ff3\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhmgx" Nov 22 08:17:50 crc kubenswrapper[4853]: I1122 08:17:50.414278 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/e68d04f1-7a40-4197-b65b-2be6e53f9ff3-ceilometer-ipmi-config-data-1\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhmgx\" (UID: \"e68d04f1-7a40-4197-b65b-2be6e53f9ff3\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhmgx" Nov 22 08:17:50 crc kubenswrapper[4853]: I1122 08:17:50.414448 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e68d04f1-7a40-4197-b65b-2be6e53f9ff3-inventory\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhmgx\" (UID: \"e68d04f1-7a40-4197-b65b-2be6e53f9ff3\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhmgx" Nov 22 08:17:50 crc kubenswrapper[4853]: I1122 08:17:50.414554 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e68d04f1-7a40-4197-b65b-2be6e53f9ff3-telemetry-power-monitoring-combined-ca-bundle\") pod 
\"telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhmgx\" (UID: \"e68d04f1-7a40-4197-b65b-2be6e53f9ff3\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhmgx" Nov 22 08:17:50 crc kubenswrapper[4853]: I1122 08:17:50.425297 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-55p6g\" (UniqueName: \"kubernetes.io/projected/e68d04f1-7a40-4197-b65b-2be6e53f9ff3-kube-api-access-55p6g\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhmgx\" (UID: \"e68d04f1-7a40-4197-b65b-2be6e53f9ff3\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhmgx" Nov 22 08:17:50 crc kubenswrapper[4853]: I1122 08:17:50.517946 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhmgx" Nov 22 08:17:51 crc kubenswrapper[4853]: I1122 08:17:51.633960 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhmgx"] Nov 22 08:17:51 crc kubenswrapper[4853]: I1122 08:17:51.748846 4853 scope.go:117] "RemoveContainer" containerID="43c489ef7c014ec88f2022965726354fdac660d1573666e320842974a56f8ca4" Nov 22 08:17:51 crc kubenswrapper[4853]: E1122 08:17:51.749322 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" Nov 22 08:17:52 crc kubenswrapper[4853]: I1122 08:17:52.120676 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhmgx" event={"ID":"e68d04f1-7a40-4197-b65b-2be6e53f9ff3","Type":"ContainerStarted","Data":"cd1548c83028aa6d264baebe497e16c4a7c466cdd7ed274fe9e744cd11c11cd1"} Nov 22 08:17:53 crc kubenswrapper[4853]: I1122 08:17:53.133217 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhmgx" event={"ID":"e68d04f1-7a40-4197-b65b-2be6e53f9ff3","Type":"ContainerStarted","Data":"79faee9e9af3975c83d012aabf5de52349866bacfa9548ab7d9c2b61e0ae05f3"} Nov 22 08:17:53 crc kubenswrapper[4853]: I1122 08:17:53.149265 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhmgx" podStartSLOduration=2.295316306 podStartE2EDuration="3.14923348s" podCreationTimestamp="2025-11-22 08:17:50 +0000 UTC" firstStartedPulling="2025-11-22 08:17:51.631578115 +0000 UTC m=+4070.472200741" lastFinishedPulling="2025-11-22 08:17:52.485495289 +0000 UTC m=+4071.326117915" observedRunningTime="2025-11-22 08:17:53.147158024 +0000 UTC m=+4071.987780650" watchObservedRunningTime="2025-11-22 08:17:53.14923348 +0000 UTC m=+4071.989856106" Nov 22 08:18:03 crc kubenswrapper[4853]: I1122 08:18:03.748375 4853 scope.go:117] "RemoveContainer" containerID="43c489ef7c014ec88f2022965726354fdac660d1573666e320842974a56f8ca4" Nov 22 08:18:03 crc kubenswrapper[4853]: E1122 08:18:03.749278 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" Nov 22 08:18:18 crc kubenswrapper[4853]: I1122 08:18:18.748937 4853 scope.go:117] "RemoveContainer" containerID="43c489ef7c014ec88f2022965726354fdac660d1573666e320842974a56f8ca4" Nov 22 08:18:18 crc kubenswrapper[4853]: E1122 08:18:18.749648 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" Nov 22 08:18:30 crc kubenswrapper[4853]: I1122 08:18:30.748606 4853 scope.go:117] "RemoveContainer" containerID="43c489ef7c014ec88f2022965726354fdac660d1573666e320842974a56f8ca4" Nov 22 08:18:30 crc kubenswrapper[4853]: E1122 08:18:30.749533 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" Nov 22 08:18:42 crc kubenswrapper[4853]: I1122 08:18:42.748104 4853 scope.go:117] "RemoveContainer" containerID="43c489ef7c014ec88f2022965726354fdac660d1573666e320842974a56f8ca4" Nov 22 08:18:42 crc kubenswrapper[4853]: E1122 08:18:42.749022 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" Nov 22 08:18:55 crc kubenswrapper[4853]: I1122 08:18:55.757094 4853 scope.go:117] "RemoveContainer" containerID="43c489ef7c014ec88f2022965726354fdac660d1573666e320842974a56f8ca4" Nov 22 08:18:55 crc kubenswrapper[4853]: E1122 08:18:55.757999 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" Nov 22 08:19:08 crc kubenswrapper[4853]: I1122 08:19:08.748784 4853 scope.go:117] "RemoveContainer" containerID="43c489ef7c014ec88f2022965726354fdac660d1573666e320842974a56f8ca4" Nov 22 08:19:08 crc kubenswrapper[4853]: E1122 08:19:08.749976 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" Nov 22 08:19:21 crc kubenswrapper[4853]: I1122 08:19:21.749078 4853 scope.go:117] "RemoveContainer" containerID="43c489ef7c014ec88f2022965726354fdac660d1573666e320842974a56f8ca4" Nov 22 08:19:21 crc kubenswrapper[4853]: E1122 08:19:21.750068 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" Nov 22 08:19:33 crc kubenswrapper[4853]: I1122 08:19:33.748114 4853 scope.go:117] "RemoveContainer" containerID="43c489ef7c014ec88f2022965726354fdac660d1573666e320842974a56f8ca4" Nov 22 08:19:33 crc kubenswrapper[4853]: E1122 08:19:33.748976 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" Nov 22 08:19:44 crc kubenswrapper[4853]: I1122 08:19:44.748501 4853 scope.go:117] "RemoveContainer" containerID="43c489ef7c014ec88f2022965726354fdac660d1573666e320842974a56f8ca4" Nov 22 08:19:44 crc kubenswrapper[4853]: E1122 08:19:44.749334 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" Nov 22 08:19:58 crc kubenswrapper[4853]: I1122 08:19:58.747866 4853 scope.go:117] "RemoveContainer" containerID="43c489ef7c014ec88f2022965726354fdac660d1573666e320842974a56f8ca4" Nov 22 08:19:58 crc kubenswrapper[4853]: E1122 08:19:58.748700 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" Nov 22 08:20:05 crc kubenswrapper[4853]: I1122 08:20:05.649861 4853 generic.go:334] "Generic (PLEG): container finished" podID="e68d04f1-7a40-4197-b65b-2be6e53f9ff3" containerID="79faee9e9af3975c83d012aabf5de52349866bacfa9548ab7d9c2b61e0ae05f3" exitCode=0 Nov 22 08:20:05 crc kubenswrapper[4853]: I1122 08:20:05.649947 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhmgx" event={"ID":"e68d04f1-7a40-4197-b65b-2be6e53f9ff3","Type":"ContainerDied","Data":"79faee9e9af3975c83d012aabf5de52349866bacfa9548ab7d9c2b61e0ae05f3"} Nov 22 08:20:07 crc kubenswrapper[4853]: I1122 08:20:07.067729 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhmgx" Nov 22 08:20:07 crc kubenswrapper[4853]: I1122 08:20:07.158020 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/e68d04f1-7a40-4197-b65b-2be6e53f9ff3-ceilometer-ipmi-config-data-1\") pod \"e68d04f1-7a40-4197-b65b-2be6e53f9ff3\" (UID: \"e68d04f1-7a40-4197-b65b-2be6e53f9ff3\") " Nov 22 08:20:07 crc kubenswrapper[4853]: I1122 08:20:07.158239 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e68d04f1-7a40-4197-b65b-2be6e53f9ff3-telemetry-power-monitoring-combined-ca-bundle\") pod \"e68d04f1-7a40-4197-b65b-2be6e53f9ff3\" (UID: \"e68d04f1-7a40-4197-b65b-2be6e53f9ff3\") " Nov 22 08:20:07 crc kubenswrapper[4853]: I1122 08:20:07.158322 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-55p6g\" (UniqueName: \"kubernetes.io/projected/e68d04f1-7a40-4197-b65b-2be6e53f9ff3-kube-api-access-55p6g\") pod \"e68d04f1-7a40-4197-b65b-2be6e53f9ff3\" (UID: \"e68d04f1-7a40-4197-b65b-2be6e53f9ff3\") " Nov 22 08:20:07 crc kubenswrapper[4853]: I1122 08:20:07.158396 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e68d04f1-7a40-4197-b65b-2be6e53f9ff3-ssh-key\") pod \"e68d04f1-7a40-4197-b65b-2be6e53f9ff3\" (UID: \"e68d04f1-7a40-4197-b65b-2be6e53f9ff3\") " Nov 22 08:20:07 crc kubenswrapper[4853]: I1122 08:20:07.158446 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/e68d04f1-7a40-4197-b65b-2be6e53f9ff3-ceilometer-ipmi-config-data-0\") pod \"e68d04f1-7a40-4197-b65b-2be6e53f9ff3\" (UID: \"e68d04f1-7a40-4197-b65b-2be6e53f9ff3\") " Nov 22 08:20:07 crc kubenswrapper[4853]: I1122 08:20:07.158479 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e68d04f1-7a40-4197-b65b-2be6e53f9ff3-inventory\") pod \"e68d04f1-7a40-4197-b65b-2be6e53f9ff3\" (UID: \"e68d04f1-7a40-4197-b65b-2be6e53f9ff3\") " Nov 22 08:20:07 crc kubenswrapper[4853]: I1122 08:20:07.158606 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/e68d04f1-7a40-4197-b65b-2be6e53f9ff3-ceilometer-ipmi-config-data-2\") pod \"e68d04f1-7a40-4197-b65b-2be6e53f9ff3\" (UID: \"e68d04f1-7a40-4197-b65b-2be6e53f9ff3\") " Nov 22 08:20:07 crc kubenswrapper[4853]: I1122 08:20:07.163893 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e68d04f1-7a40-4197-b65b-2be6e53f9ff3-telemetry-power-monitoring-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-power-monitoring-combined-ca-bundle") pod "e68d04f1-7a40-4197-b65b-2be6e53f9ff3" (UID: "e68d04f1-7a40-4197-b65b-2be6e53f9ff3"). InnerVolumeSpecName "telemetry-power-monitoring-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:20:07 crc kubenswrapper[4853]: I1122 08:20:07.164523 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e68d04f1-7a40-4197-b65b-2be6e53f9ff3-kube-api-access-55p6g" (OuterVolumeSpecName: "kube-api-access-55p6g") pod "e68d04f1-7a40-4197-b65b-2be6e53f9ff3" (UID: "e68d04f1-7a40-4197-b65b-2be6e53f9ff3"). InnerVolumeSpecName "kube-api-access-55p6g". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 08:20:07 crc kubenswrapper[4853]: I1122 08:20:07.190677 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e68d04f1-7a40-4197-b65b-2be6e53f9ff3-ceilometer-ipmi-config-data-1" (OuterVolumeSpecName: "ceilometer-ipmi-config-data-1") pod "e68d04f1-7a40-4197-b65b-2be6e53f9ff3" (UID: "e68d04f1-7a40-4197-b65b-2be6e53f9ff3"). InnerVolumeSpecName "ceilometer-ipmi-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:20:07 crc kubenswrapper[4853]: I1122 08:20:07.191145 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e68d04f1-7a40-4197-b65b-2be6e53f9ff3-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "e68d04f1-7a40-4197-b65b-2be6e53f9ff3" (UID: "e68d04f1-7a40-4197-b65b-2be6e53f9ff3"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:20:07 crc kubenswrapper[4853]: I1122 08:20:07.191464 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e68d04f1-7a40-4197-b65b-2be6e53f9ff3-ceilometer-ipmi-config-data-0" (OuterVolumeSpecName: "ceilometer-ipmi-config-data-0") pod "e68d04f1-7a40-4197-b65b-2be6e53f9ff3" (UID: "e68d04f1-7a40-4197-b65b-2be6e53f9ff3"). InnerVolumeSpecName "ceilometer-ipmi-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:20:07 crc kubenswrapper[4853]: I1122 08:20:07.193999 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e68d04f1-7a40-4197-b65b-2be6e53f9ff3-inventory" (OuterVolumeSpecName: "inventory") pod "e68d04f1-7a40-4197-b65b-2be6e53f9ff3" (UID: "e68d04f1-7a40-4197-b65b-2be6e53f9ff3"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:20:07 crc kubenswrapper[4853]: I1122 08:20:07.194212 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e68d04f1-7a40-4197-b65b-2be6e53f9ff3-ceilometer-ipmi-config-data-2" (OuterVolumeSpecName: "ceilometer-ipmi-config-data-2") pod "e68d04f1-7a40-4197-b65b-2be6e53f9ff3" (UID: "e68d04f1-7a40-4197-b65b-2be6e53f9ff3"). InnerVolumeSpecName "ceilometer-ipmi-config-data-2". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:20:07 crc kubenswrapper[4853]: I1122 08:20:07.261726 4853 reconciler_common.go:293] "Volume detached for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e68d04f1-7a40-4197-b65b-2be6e53f9ff3-telemetry-power-monitoring-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 08:20:07 crc kubenswrapper[4853]: I1122 08:20:07.261790 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-55p6g\" (UniqueName: \"kubernetes.io/projected/e68d04f1-7a40-4197-b65b-2be6e53f9ff3-kube-api-access-55p6g\") on node \"crc\" DevicePath \"\"" Nov 22 08:20:07 crc kubenswrapper[4853]: I1122 08:20:07.261804 4853 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e68d04f1-7a40-4197-b65b-2be6e53f9ff3-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 22 08:20:07 crc kubenswrapper[4853]: I1122 08:20:07.261815 4853 reconciler_common.go:293] "Volume detached for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/e68d04f1-7a40-4197-b65b-2be6e53f9ff3-ceilometer-ipmi-config-data-0\") on node \"crc\" DevicePath \"\"" Nov 22 08:20:07 crc kubenswrapper[4853]: I1122 08:20:07.261827 4853 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e68d04f1-7a40-4197-b65b-2be6e53f9ff3-inventory\") on node \"crc\" DevicePath \"\"" Nov 22 08:20:07 crc kubenswrapper[4853]: I1122 08:20:07.261837 4853 reconciler_common.go:293] "Volume detached for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/e68d04f1-7a40-4197-b65b-2be6e53f9ff3-ceilometer-ipmi-config-data-2\") on node \"crc\" DevicePath \"\"" Nov 22 08:20:07 crc kubenswrapper[4853]: I1122 08:20:07.261849 4853 reconciler_common.go:293] "Volume detached for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/e68d04f1-7a40-4197-b65b-2be6e53f9ff3-ceilometer-ipmi-config-data-1\") on node \"crc\" DevicePath \"\"" Nov 22 08:20:07 crc kubenswrapper[4853]: I1122 08:20:07.675283 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhmgx" event={"ID":"e68d04f1-7a40-4197-b65b-2be6e53f9ff3","Type":"ContainerDied","Data":"cd1548c83028aa6d264baebe497e16c4a7c466cdd7ed274fe9e744cd11c11cd1"} Nov 22 08:20:07 crc kubenswrapper[4853]: I1122 08:20:07.675322 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cd1548c83028aa6d264baebe497e16c4a7c466cdd7ed274fe9e744cd11c11cd1" Nov 22 08:20:07 crc kubenswrapper[4853]: I1122 08:20:07.675380 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhmgx" Nov 22 08:20:07 crc kubenswrapper[4853]: I1122 08:20:07.761857 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/logging-edpm-deployment-openstack-edpm-ipam-wb9bl"] Nov 22 08:20:07 crc kubenswrapper[4853]: E1122 08:20:07.762345 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e68d04f1-7a40-4197-b65b-2be6e53f9ff3" containerName="telemetry-power-monitoring-edpm-deployment-openstack-edpm-ipam" Nov 22 08:20:07 crc kubenswrapper[4853]: I1122 08:20:07.762371 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="e68d04f1-7a40-4197-b65b-2be6e53f9ff3" containerName="telemetry-power-monitoring-edpm-deployment-openstack-edpm-ipam" Nov 22 08:20:07 crc kubenswrapper[4853]: I1122 08:20:07.762784 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="e68d04f1-7a40-4197-b65b-2be6e53f9ff3" containerName="telemetry-power-monitoring-edpm-deployment-openstack-edpm-ipam" Nov 22 08:20:07 crc kubenswrapper[4853]: I1122 08:20:07.763907 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-wb9bl" Nov 22 08:20:07 crc kubenswrapper[4853]: I1122 08:20:07.767917 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 22 08:20:07 crc kubenswrapper[4853]: I1122 08:20:07.768100 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 22 08:20:07 crc kubenswrapper[4853]: I1122 08:20:07.768157 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-km5tw" Nov 22 08:20:07 crc kubenswrapper[4853]: I1122 08:20:07.768261 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 22 08:20:07 crc kubenswrapper[4853]: I1122 08:20:07.768584 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"logging-compute-config-data" Nov 22 08:20:07 crc kubenswrapper[4853]: I1122 08:20:07.776125 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/logging-edpm-deployment-openstack-edpm-ipam-wb9bl"] Nov 22 08:20:07 crc kubenswrapper[4853]: I1122 08:20:07.879312 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/f6cbf49f-1ec5-4c85-8220-8b569c9aaa83-logging-compute-config-data-0\") pod \"logging-edpm-deployment-openstack-edpm-ipam-wb9bl\" (UID: \"f6cbf49f-1ec5-4c85-8220-8b569c9aaa83\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-wb9bl" Nov 22 08:20:07 crc kubenswrapper[4853]: I1122 08:20:07.879576 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zcphs\" (UniqueName: \"kubernetes.io/projected/f6cbf49f-1ec5-4c85-8220-8b569c9aaa83-kube-api-access-zcphs\") pod \"logging-edpm-deployment-openstack-edpm-ipam-wb9bl\" (UID: \"f6cbf49f-1ec5-4c85-8220-8b569c9aaa83\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-wb9bl" Nov 22 08:20:07 crc kubenswrapper[4853]: I1122 08:20:07.879630 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f6cbf49f-1ec5-4c85-8220-8b569c9aaa83-inventory\") pod 
\"logging-edpm-deployment-openstack-edpm-ipam-wb9bl\" (UID: \"f6cbf49f-1ec5-4c85-8220-8b569c9aaa83\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-wb9bl" Nov 22 08:20:07 crc kubenswrapper[4853]: I1122 08:20:07.879726 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/f6cbf49f-1ec5-4c85-8220-8b569c9aaa83-logging-compute-config-data-1\") pod \"logging-edpm-deployment-openstack-edpm-ipam-wb9bl\" (UID: \"f6cbf49f-1ec5-4c85-8220-8b569c9aaa83\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-wb9bl" Nov 22 08:20:07 crc kubenswrapper[4853]: I1122 08:20:07.879993 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f6cbf49f-1ec5-4c85-8220-8b569c9aaa83-ssh-key\") pod \"logging-edpm-deployment-openstack-edpm-ipam-wb9bl\" (UID: \"f6cbf49f-1ec5-4c85-8220-8b569c9aaa83\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-wb9bl" Nov 22 08:20:07 crc kubenswrapper[4853]: I1122 08:20:07.982092 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zcphs\" (UniqueName: \"kubernetes.io/projected/f6cbf49f-1ec5-4c85-8220-8b569c9aaa83-kube-api-access-zcphs\") pod \"logging-edpm-deployment-openstack-edpm-ipam-wb9bl\" (UID: \"f6cbf49f-1ec5-4c85-8220-8b569c9aaa83\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-wb9bl" Nov 22 08:20:07 crc kubenswrapper[4853]: I1122 08:20:07.982171 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f6cbf49f-1ec5-4c85-8220-8b569c9aaa83-inventory\") pod \"logging-edpm-deployment-openstack-edpm-ipam-wb9bl\" (UID: \"f6cbf49f-1ec5-4c85-8220-8b569c9aaa83\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-wb9bl" Nov 22 08:20:07 crc kubenswrapper[4853]: I1122 08:20:07.982246 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/f6cbf49f-1ec5-4c85-8220-8b569c9aaa83-logging-compute-config-data-1\") pod \"logging-edpm-deployment-openstack-edpm-ipam-wb9bl\" (UID: \"f6cbf49f-1ec5-4c85-8220-8b569c9aaa83\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-wb9bl" Nov 22 08:20:07 crc kubenswrapper[4853]: I1122 08:20:07.982323 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f6cbf49f-1ec5-4c85-8220-8b569c9aaa83-ssh-key\") pod \"logging-edpm-deployment-openstack-edpm-ipam-wb9bl\" (UID: \"f6cbf49f-1ec5-4c85-8220-8b569c9aaa83\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-wb9bl" Nov 22 08:20:07 crc kubenswrapper[4853]: I1122 08:20:07.982408 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/f6cbf49f-1ec5-4c85-8220-8b569c9aaa83-logging-compute-config-data-0\") pod \"logging-edpm-deployment-openstack-edpm-ipam-wb9bl\" (UID: \"f6cbf49f-1ec5-4c85-8220-8b569c9aaa83\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-wb9bl" Nov 22 08:20:07 crc kubenswrapper[4853]: I1122 08:20:07.988785 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/f6cbf49f-1ec5-4c85-8220-8b569c9aaa83-logging-compute-config-data-0\") pod 
\"logging-edpm-deployment-openstack-edpm-ipam-wb9bl\" (UID: \"f6cbf49f-1ec5-4c85-8220-8b569c9aaa83\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-wb9bl" Nov 22 08:20:07 crc kubenswrapper[4853]: I1122 08:20:07.989263 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f6cbf49f-1ec5-4c85-8220-8b569c9aaa83-ssh-key\") pod \"logging-edpm-deployment-openstack-edpm-ipam-wb9bl\" (UID: \"f6cbf49f-1ec5-4c85-8220-8b569c9aaa83\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-wb9bl" Nov 22 08:20:07 crc kubenswrapper[4853]: I1122 08:20:07.991539 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/f6cbf49f-1ec5-4c85-8220-8b569c9aaa83-logging-compute-config-data-1\") pod \"logging-edpm-deployment-openstack-edpm-ipam-wb9bl\" (UID: \"f6cbf49f-1ec5-4c85-8220-8b569c9aaa83\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-wb9bl" Nov 22 08:20:07 crc kubenswrapper[4853]: I1122 08:20:07.992822 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f6cbf49f-1ec5-4c85-8220-8b569c9aaa83-inventory\") pod \"logging-edpm-deployment-openstack-edpm-ipam-wb9bl\" (UID: \"f6cbf49f-1ec5-4c85-8220-8b569c9aaa83\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-wb9bl" Nov 22 08:20:08 crc kubenswrapper[4853]: I1122 08:20:08.005577 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zcphs\" (UniqueName: \"kubernetes.io/projected/f6cbf49f-1ec5-4c85-8220-8b569c9aaa83-kube-api-access-zcphs\") pod \"logging-edpm-deployment-openstack-edpm-ipam-wb9bl\" (UID: \"f6cbf49f-1ec5-4c85-8220-8b569c9aaa83\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-wb9bl" Nov 22 08:20:08 crc kubenswrapper[4853]: I1122 08:20:08.092008 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-wb9bl" Nov 22 08:20:08 crc kubenswrapper[4853]: I1122 08:20:08.654710 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/logging-edpm-deployment-openstack-edpm-ipam-wb9bl"] Nov 22 08:20:08 crc kubenswrapper[4853]: I1122 08:20:08.692602 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-wb9bl" event={"ID":"f6cbf49f-1ec5-4c85-8220-8b569c9aaa83","Type":"ContainerStarted","Data":"4690f804425cac2bdfcc090a156c428c4605e2343de8ed4be36f206dd77e2567"} Nov 22 08:20:09 crc kubenswrapper[4853]: I1122 08:20:09.704027 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-wb9bl" event={"ID":"f6cbf49f-1ec5-4c85-8220-8b569c9aaa83","Type":"ContainerStarted","Data":"4e26413f4d4b87afb323b6d8897bc6161adeb3b2bce26945fb3a08875b6bacd2"} Nov 22 08:20:09 crc kubenswrapper[4853]: I1122 08:20:09.731626 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-wb9bl" podStartSLOduration=2.053384141 podStartE2EDuration="2.731604131s" podCreationTimestamp="2025-11-22 08:20:07 +0000 UTC" firstStartedPulling="2025-11-22 08:20:08.672346924 +0000 UTC m=+4207.512969550" lastFinishedPulling="2025-11-22 08:20:09.350566914 +0000 UTC m=+4208.191189540" observedRunningTime="2025-11-22 08:20:09.719046133 +0000 UTC m=+4208.559668759" watchObservedRunningTime="2025-11-22 08:20:09.731604131 +0000 UTC m=+4208.572226757" Nov 22 08:20:10 crc kubenswrapper[4853]: I1122 08:20:10.747916 4853 scope.go:117] "RemoveContainer" containerID="43c489ef7c014ec88f2022965726354fdac660d1573666e320842974a56f8ca4" Nov 22 08:20:11 crc kubenswrapper[4853]: I1122 08:20:11.730223 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" event={"ID":"476c875a-2b87-419a-8042-0ba059620fd8","Type":"ContainerStarted","Data":"37e23743046a159157cd74e860dd32bd690e8580691cc4451c8549e96b87351a"} Nov 22 08:20:26 crc kubenswrapper[4853]: I1122 08:20:26.896364 4853 generic.go:334] "Generic (PLEG): container finished" podID="f6cbf49f-1ec5-4c85-8220-8b569c9aaa83" containerID="4e26413f4d4b87afb323b6d8897bc6161adeb3b2bce26945fb3a08875b6bacd2" exitCode=0 Nov 22 08:20:26 crc kubenswrapper[4853]: I1122 08:20:26.896424 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-wb9bl" event={"ID":"f6cbf49f-1ec5-4c85-8220-8b569c9aaa83","Type":"ContainerDied","Data":"4e26413f4d4b87afb323b6d8897bc6161adeb3b2bce26945fb3a08875b6bacd2"} Nov 22 08:20:28 crc kubenswrapper[4853]: I1122 08:20:28.413062 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-wb9bl" Nov 22 08:20:28 crc kubenswrapper[4853]: I1122 08:20:28.497422 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zcphs\" (UniqueName: \"kubernetes.io/projected/f6cbf49f-1ec5-4c85-8220-8b569c9aaa83-kube-api-access-zcphs\") pod \"f6cbf49f-1ec5-4c85-8220-8b569c9aaa83\" (UID: \"f6cbf49f-1ec5-4c85-8220-8b569c9aaa83\") " Nov 22 08:20:28 crc kubenswrapper[4853]: I1122 08:20:28.497524 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/f6cbf49f-1ec5-4c85-8220-8b569c9aaa83-logging-compute-config-data-1\") pod \"f6cbf49f-1ec5-4c85-8220-8b569c9aaa83\" (UID: \"f6cbf49f-1ec5-4c85-8220-8b569c9aaa83\") " Nov 22 08:20:28 crc kubenswrapper[4853]: I1122 08:20:28.497771 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/f6cbf49f-1ec5-4c85-8220-8b569c9aaa83-logging-compute-config-data-0\") pod \"f6cbf49f-1ec5-4c85-8220-8b569c9aaa83\" (UID: \"f6cbf49f-1ec5-4c85-8220-8b569c9aaa83\") " Nov 22 08:20:28 crc kubenswrapper[4853]: I1122 08:20:28.497827 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f6cbf49f-1ec5-4c85-8220-8b569c9aaa83-inventory\") pod \"f6cbf49f-1ec5-4c85-8220-8b569c9aaa83\" (UID: \"f6cbf49f-1ec5-4c85-8220-8b569c9aaa83\") " Nov 22 08:20:28 crc kubenswrapper[4853]: I1122 08:20:28.497861 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f6cbf49f-1ec5-4c85-8220-8b569c9aaa83-ssh-key\") pod \"f6cbf49f-1ec5-4c85-8220-8b569c9aaa83\" (UID: \"f6cbf49f-1ec5-4c85-8220-8b569c9aaa83\") " Nov 22 08:20:28 crc kubenswrapper[4853]: I1122 08:20:28.506358 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f6cbf49f-1ec5-4c85-8220-8b569c9aaa83-kube-api-access-zcphs" (OuterVolumeSpecName: "kube-api-access-zcphs") pod "f6cbf49f-1ec5-4c85-8220-8b569c9aaa83" (UID: "f6cbf49f-1ec5-4c85-8220-8b569c9aaa83"). InnerVolumeSpecName "kube-api-access-zcphs". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 08:20:28 crc kubenswrapper[4853]: I1122 08:20:28.535119 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6cbf49f-1ec5-4c85-8220-8b569c9aaa83-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "f6cbf49f-1ec5-4c85-8220-8b569c9aaa83" (UID: "f6cbf49f-1ec5-4c85-8220-8b569c9aaa83"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:20:28 crc kubenswrapper[4853]: I1122 08:20:28.538073 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6cbf49f-1ec5-4c85-8220-8b569c9aaa83-logging-compute-config-data-0" (OuterVolumeSpecName: "logging-compute-config-data-0") pod "f6cbf49f-1ec5-4c85-8220-8b569c9aaa83" (UID: "f6cbf49f-1ec5-4c85-8220-8b569c9aaa83"). InnerVolumeSpecName "logging-compute-config-data-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:20:28 crc kubenswrapper[4853]: I1122 08:20:28.547118 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6cbf49f-1ec5-4c85-8220-8b569c9aaa83-logging-compute-config-data-1" (OuterVolumeSpecName: "logging-compute-config-data-1") pod "f6cbf49f-1ec5-4c85-8220-8b569c9aaa83" (UID: "f6cbf49f-1ec5-4c85-8220-8b569c9aaa83"). InnerVolumeSpecName "logging-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:20:28 crc kubenswrapper[4853]: I1122 08:20:28.549432 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6cbf49f-1ec5-4c85-8220-8b569c9aaa83-inventory" (OuterVolumeSpecName: "inventory") pod "f6cbf49f-1ec5-4c85-8220-8b569c9aaa83" (UID: "f6cbf49f-1ec5-4c85-8220-8b569c9aaa83"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:20:28 crc kubenswrapper[4853]: I1122 08:20:28.601499 4853 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f6cbf49f-1ec5-4c85-8220-8b569c9aaa83-inventory\") on node \"crc\" DevicePath \"\"" Nov 22 08:20:28 crc kubenswrapper[4853]: I1122 08:20:28.601528 4853 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f6cbf49f-1ec5-4c85-8220-8b569c9aaa83-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 22 08:20:28 crc kubenswrapper[4853]: I1122 08:20:28.601538 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zcphs\" (UniqueName: \"kubernetes.io/projected/f6cbf49f-1ec5-4c85-8220-8b569c9aaa83-kube-api-access-zcphs\") on node \"crc\" DevicePath \"\"" Nov 22 08:20:28 crc kubenswrapper[4853]: I1122 08:20:28.601550 4853 reconciler_common.go:293] "Volume detached for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/f6cbf49f-1ec5-4c85-8220-8b569c9aaa83-logging-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Nov 22 08:20:28 crc kubenswrapper[4853]: I1122 08:20:28.601560 4853 reconciler_common.go:293] "Volume detached for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/f6cbf49f-1ec5-4c85-8220-8b569c9aaa83-logging-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Nov 22 08:20:28 crc kubenswrapper[4853]: I1122 08:20:28.918957 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-wb9bl" event={"ID":"f6cbf49f-1ec5-4c85-8220-8b569c9aaa83","Type":"ContainerDied","Data":"4690f804425cac2bdfcc090a156c428c4605e2343de8ed4be36f206dd77e2567"} Nov 22 08:20:28 crc kubenswrapper[4853]: I1122 08:20:28.919355 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4690f804425cac2bdfcc090a156c428c4605e2343de8ed4be36f206dd77e2567" Nov 22 08:20:28 crc kubenswrapper[4853]: I1122 08:20:28.919056 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-wb9bl" Nov 22 08:21:48 crc kubenswrapper[4853]: I1122 08:21:48.775408 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-42h7x"] Nov 22 08:21:48 crc kubenswrapper[4853]: E1122 08:21:48.776636 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6cbf49f-1ec5-4c85-8220-8b569c9aaa83" containerName="logging-edpm-deployment-openstack-edpm-ipam" Nov 22 08:21:48 crc kubenswrapper[4853]: I1122 08:21:48.776663 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6cbf49f-1ec5-4c85-8220-8b569c9aaa83" containerName="logging-edpm-deployment-openstack-edpm-ipam" Nov 22 08:21:48 crc kubenswrapper[4853]: I1122 08:21:48.776983 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="f6cbf49f-1ec5-4c85-8220-8b569c9aaa83" containerName="logging-edpm-deployment-openstack-edpm-ipam" Nov 22 08:21:48 crc kubenswrapper[4853]: I1122 08:21:48.779079 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-42h7x" Nov 22 08:21:48 crc kubenswrapper[4853]: I1122 08:21:48.822037 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-42h7x"] Nov 22 08:21:48 crc kubenswrapper[4853]: I1122 08:21:48.916357 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/357d570b-32db-4a2a-8424-17e15ee2b7a5-catalog-content\") pod \"redhat-operators-42h7x\" (UID: \"357d570b-32db-4a2a-8424-17e15ee2b7a5\") " pod="openshift-marketplace/redhat-operators-42h7x" Nov 22 08:21:48 crc kubenswrapper[4853]: I1122 08:21:48.916527 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/357d570b-32db-4a2a-8424-17e15ee2b7a5-utilities\") pod \"redhat-operators-42h7x\" (UID: \"357d570b-32db-4a2a-8424-17e15ee2b7a5\") " pod="openshift-marketplace/redhat-operators-42h7x" Nov 22 08:21:48 crc kubenswrapper[4853]: I1122 08:21:48.916715 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vgnjs\" (UniqueName: \"kubernetes.io/projected/357d570b-32db-4a2a-8424-17e15ee2b7a5-kube-api-access-vgnjs\") pod \"redhat-operators-42h7x\" (UID: \"357d570b-32db-4a2a-8424-17e15ee2b7a5\") " pod="openshift-marketplace/redhat-operators-42h7x" Nov 22 08:21:49 crc kubenswrapper[4853]: I1122 08:21:49.019476 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vgnjs\" (UniqueName: \"kubernetes.io/projected/357d570b-32db-4a2a-8424-17e15ee2b7a5-kube-api-access-vgnjs\") pod \"redhat-operators-42h7x\" (UID: \"357d570b-32db-4a2a-8424-17e15ee2b7a5\") " pod="openshift-marketplace/redhat-operators-42h7x" Nov 22 08:21:49 crc kubenswrapper[4853]: I1122 08:21:49.019981 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/357d570b-32db-4a2a-8424-17e15ee2b7a5-catalog-content\") pod \"redhat-operators-42h7x\" (UID: \"357d570b-32db-4a2a-8424-17e15ee2b7a5\") " pod="openshift-marketplace/redhat-operators-42h7x" Nov 22 08:21:49 crc kubenswrapper[4853]: I1122 08:21:49.020106 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/357d570b-32db-4a2a-8424-17e15ee2b7a5-utilities\") pod \"redhat-operators-42h7x\" (UID: \"357d570b-32db-4a2a-8424-17e15ee2b7a5\") " pod="openshift-marketplace/redhat-operators-42h7x" Nov 22 08:21:49 crc kubenswrapper[4853]: I1122 08:21:49.020517 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/357d570b-32db-4a2a-8424-17e15ee2b7a5-catalog-content\") pod \"redhat-operators-42h7x\" (UID: \"357d570b-32db-4a2a-8424-17e15ee2b7a5\") " pod="openshift-marketplace/redhat-operators-42h7x" Nov 22 08:21:49 crc kubenswrapper[4853]: I1122 08:21:49.020530 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/357d570b-32db-4a2a-8424-17e15ee2b7a5-utilities\") pod \"redhat-operators-42h7x\" (UID: \"357d570b-32db-4a2a-8424-17e15ee2b7a5\") " pod="openshift-marketplace/redhat-operators-42h7x" Nov 22 08:21:49 crc kubenswrapper[4853]: I1122 08:21:49.041432 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vgnjs\" (UniqueName: \"kubernetes.io/projected/357d570b-32db-4a2a-8424-17e15ee2b7a5-kube-api-access-vgnjs\") pod \"redhat-operators-42h7x\" (UID: \"357d570b-32db-4a2a-8424-17e15ee2b7a5\") " pod="openshift-marketplace/redhat-operators-42h7x" Nov 22 08:21:49 crc kubenswrapper[4853]: I1122 08:21:49.113061 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-42h7x" Nov 22 08:21:49 crc kubenswrapper[4853]: I1122 08:21:49.660778 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-42h7x"] Nov 22 08:21:49 crc kubenswrapper[4853]: I1122 08:21:49.807467 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-42h7x" event={"ID":"357d570b-32db-4a2a-8424-17e15ee2b7a5","Type":"ContainerStarted","Data":"3f00098f19f99549a860ab4dba84dcf1b306e3c3072ddbe33601f9b85c767205"} Nov 22 08:21:50 crc kubenswrapper[4853]: I1122 08:21:50.822901 4853 generic.go:334] "Generic (PLEG): container finished" podID="357d570b-32db-4a2a-8424-17e15ee2b7a5" containerID="6bc3e5c231300ef02d07053578a2fb1232bc1bf9f3824f98062cef57a0a191ef" exitCode=0 Nov 22 08:21:50 crc kubenswrapper[4853]: I1122 08:21:50.823175 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-42h7x" event={"ID":"357d570b-32db-4a2a-8424-17e15ee2b7a5","Type":"ContainerDied","Data":"6bc3e5c231300ef02d07053578a2fb1232bc1bf9f3824f98062cef57a0a191ef"} Nov 22 08:21:51 crc kubenswrapper[4853]: I1122 08:21:51.835114 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-42h7x" event={"ID":"357d570b-32db-4a2a-8424-17e15ee2b7a5","Type":"ContainerStarted","Data":"ddc9500c0fc2e409ea49dfc7a9e9aea7f2d97a2c2747a0b707763295841c4dcb"} Nov 22 08:21:59 crc kubenswrapper[4853]: I1122 08:21:59.934742 4853 generic.go:334] "Generic (PLEG): container finished" podID="357d570b-32db-4a2a-8424-17e15ee2b7a5" containerID="ddc9500c0fc2e409ea49dfc7a9e9aea7f2d97a2c2747a0b707763295841c4dcb" exitCode=0 Nov 22 08:21:59 crc kubenswrapper[4853]: I1122 08:21:59.934864 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-42h7x" event={"ID":"357d570b-32db-4a2a-8424-17e15ee2b7a5","Type":"ContainerDied","Data":"ddc9500c0fc2e409ea49dfc7a9e9aea7f2d97a2c2747a0b707763295841c4dcb"} Nov 22 08:22:01 crc kubenswrapper[4853]: I1122 
08:22:01.965870 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-42h7x" event={"ID":"357d570b-32db-4a2a-8424-17e15ee2b7a5","Type":"ContainerStarted","Data":"39da7d0e2d750b5554b04ac90fa3f1ec8a1b32bea8789af3c40da64a8b70a64d"} Nov 22 08:22:01 crc kubenswrapper[4853]: I1122 08:22:01.992339 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-42h7x" podStartSLOduration=3.750994242 podStartE2EDuration="13.99231626s" podCreationTimestamp="2025-11-22 08:21:48 +0000 UTC" firstStartedPulling="2025-11-22 08:21:50.828239715 +0000 UTC m=+4309.668862341" lastFinishedPulling="2025-11-22 08:22:01.069561733 +0000 UTC m=+4319.910184359" observedRunningTime="2025-11-22 08:22:01.987767328 +0000 UTC m=+4320.828389974" watchObservedRunningTime="2025-11-22 08:22:01.99231626 +0000 UTC m=+4320.832938886" Nov 22 08:22:09 crc kubenswrapper[4853]: I1122 08:22:09.113445 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-42h7x" Nov 22 08:22:09 crc kubenswrapper[4853]: I1122 08:22:09.115292 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-42h7x" Nov 22 08:22:10 crc kubenswrapper[4853]: I1122 08:22:10.163382 4853 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-42h7x" podUID="357d570b-32db-4a2a-8424-17e15ee2b7a5" containerName="registry-server" probeResult="failure" output=< Nov 22 08:22:10 crc kubenswrapper[4853]: timeout: failed to connect service ":50051" within 1s Nov 22 08:22:10 crc kubenswrapper[4853]: > Nov 22 08:22:19 crc kubenswrapper[4853]: I1122 08:22:19.161709 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-42h7x" Nov 22 08:22:19 crc kubenswrapper[4853]: I1122 08:22:19.211384 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-42h7x" Nov 22 08:22:19 crc kubenswrapper[4853]: I1122 08:22:19.977567 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-42h7x"] Nov 22 08:22:21 crc kubenswrapper[4853]: I1122 08:22:21.172934 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-42h7x" podUID="357d570b-32db-4a2a-8424-17e15ee2b7a5" containerName="registry-server" containerID="cri-o://39da7d0e2d750b5554b04ac90fa3f1ec8a1b32bea8789af3c40da64a8b70a64d" gracePeriod=2 Nov 22 08:22:21 crc kubenswrapper[4853]: I1122 08:22:21.667628 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-42h7x" Nov 22 08:22:21 crc kubenswrapper[4853]: I1122 08:22:21.848101 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vgnjs\" (UniqueName: \"kubernetes.io/projected/357d570b-32db-4a2a-8424-17e15ee2b7a5-kube-api-access-vgnjs\") pod \"357d570b-32db-4a2a-8424-17e15ee2b7a5\" (UID: \"357d570b-32db-4a2a-8424-17e15ee2b7a5\") " Nov 22 08:22:21 crc kubenswrapper[4853]: I1122 08:22:21.848855 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/357d570b-32db-4a2a-8424-17e15ee2b7a5-catalog-content\") pod \"357d570b-32db-4a2a-8424-17e15ee2b7a5\" (UID: \"357d570b-32db-4a2a-8424-17e15ee2b7a5\") " Nov 22 08:22:21 crc kubenswrapper[4853]: I1122 08:22:21.849973 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/357d570b-32db-4a2a-8424-17e15ee2b7a5-utilities\") pod \"357d570b-32db-4a2a-8424-17e15ee2b7a5\" (UID: \"357d570b-32db-4a2a-8424-17e15ee2b7a5\") " Nov 22 08:22:21 crc kubenswrapper[4853]: I1122 08:22:21.851189 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/357d570b-32db-4a2a-8424-17e15ee2b7a5-utilities" (OuterVolumeSpecName: "utilities") pod "357d570b-32db-4a2a-8424-17e15ee2b7a5" (UID: "357d570b-32db-4a2a-8424-17e15ee2b7a5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 08:22:21 crc kubenswrapper[4853]: I1122 08:22:21.851694 4853 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/357d570b-32db-4a2a-8424-17e15ee2b7a5-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 08:22:21 crc kubenswrapper[4853]: I1122 08:22:21.864820 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/357d570b-32db-4a2a-8424-17e15ee2b7a5-kube-api-access-vgnjs" (OuterVolumeSpecName: "kube-api-access-vgnjs") pod "357d570b-32db-4a2a-8424-17e15ee2b7a5" (UID: "357d570b-32db-4a2a-8424-17e15ee2b7a5"). InnerVolumeSpecName "kube-api-access-vgnjs". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 08:22:21 crc kubenswrapper[4853]: I1122 08:22:21.946180 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/357d570b-32db-4a2a-8424-17e15ee2b7a5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "357d570b-32db-4a2a-8424-17e15ee2b7a5" (UID: "357d570b-32db-4a2a-8424-17e15ee2b7a5"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 08:22:21 crc kubenswrapper[4853]: I1122 08:22:21.955083 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vgnjs\" (UniqueName: \"kubernetes.io/projected/357d570b-32db-4a2a-8424-17e15ee2b7a5-kube-api-access-vgnjs\") on node \"crc\" DevicePath \"\"" Nov 22 08:22:21 crc kubenswrapper[4853]: I1122 08:22:21.955120 4853 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/357d570b-32db-4a2a-8424-17e15ee2b7a5-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 08:22:22 crc kubenswrapper[4853]: I1122 08:22:22.185193 4853 generic.go:334] "Generic (PLEG): container finished" podID="357d570b-32db-4a2a-8424-17e15ee2b7a5" containerID="39da7d0e2d750b5554b04ac90fa3f1ec8a1b32bea8789af3c40da64a8b70a64d" exitCode=0 Nov 22 08:22:22 crc kubenswrapper[4853]: I1122 08:22:22.185242 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-42h7x" event={"ID":"357d570b-32db-4a2a-8424-17e15ee2b7a5","Type":"ContainerDied","Data":"39da7d0e2d750b5554b04ac90fa3f1ec8a1b32bea8789af3c40da64a8b70a64d"} Nov 22 08:22:22 crc kubenswrapper[4853]: I1122 08:22:22.185276 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-42h7x" event={"ID":"357d570b-32db-4a2a-8424-17e15ee2b7a5","Type":"ContainerDied","Data":"3f00098f19f99549a860ab4dba84dcf1b306e3c3072ddbe33601f9b85c767205"} Nov 22 08:22:22 crc kubenswrapper[4853]: I1122 08:22:22.185296 4853 scope.go:117] "RemoveContainer" containerID="39da7d0e2d750b5554b04ac90fa3f1ec8a1b32bea8789af3c40da64a8b70a64d" Nov 22 08:22:22 crc kubenswrapper[4853]: I1122 08:22:22.185295 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-42h7x" Nov 22 08:22:22 crc kubenswrapper[4853]: I1122 08:22:22.210363 4853 scope.go:117] "RemoveContainer" containerID="ddc9500c0fc2e409ea49dfc7a9e9aea7f2d97a2c2747a0b707763295841c4dcb" Nov 22 08:22:22 crc kubenswrapper[4853]: I1122 08:22:22.231132 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-42h7x"] Nov 22 08:22:22 crc kubenswrapper[4853]: I1122 08:22:22.241684 4853 scope.go:117] "RemoveContainer" containerID="6bc3e5c231300ef02d07053578a2fb1232bc1bf9f3824f98062cef57a0a191ef" Nov 22 08:22:22 crc kubenswrapper[4853]: I1122 08:22:22.242383 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-42h7x"] Nov 22 08:22:22 crc kubenswrapper[4853]: I1122 08:22:22.331302 4853 scope.go:117] "RemoveContainer" containerID="39da7d0e2d750b5554b04ac90fa3f1ec8a1b32bea8789af3c40da64a8b70a64d" Nov 22 08:22:22 crc kubenswrapper[4853]: E1122 08:22:22.332510 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"39da7d0e2d750b5554b04ac90fa3f1ec8a1b32bea8789af3c40da64a8b70a64d\": container with ID starting with 39da7d0e2d750b5554b04ac90fa3f1ec8a1b32bea8789af3c40da64a8b70a64d not found: ID does not exist" containerID="39da7d0e2d750b5554b04ac90fa3f1ec8a1b32bea8789af3c40da64a8b70a64d" Nov 22 08:22:22 crc kubenswrapper[4853]: I1122 08:22:22.332569 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"39da7d0e2d750b5554b04ac90fa3f1ec8a1b32bea8789af3c40da64a8b70a64d"} err="failed to get container status \"39da7d0e2d750b5554b04ac90fa3f1ec8a1b32bea8789af3c40da64a8b70a64d\": rpc error: code = NotFound desc = could not find container \"39da7d0e2d750b5554b04ac90fa3f1ec8a1b32bea8789af3c40da64a8b70a64d\": container with ID starting with 39da7d0e2d750b5554b04ac90fa3f1ec8a1b32bea8789af3c40da64a8b70a64d not found: ID does not exist" Nov 22 08:22:22 crc kubenswrapper[4853]: I1122 08:22:22.332606 4853 scope.go:117] "RemoveContainer" containerID="ddc9500c0fc2e409ea49dfc7a9e9aea7f2d97a2c2747a0b707763295841c4dcb" Nov 22 08:22:22 crc kubenswrapper[4853]: E1122 08:22:22.333219 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ddc9500c0fc2e409ea49dfc7a9e9aea7f2d97a2c2747a0b707763295841c4dcb\": container with ID starting with ddc9500c0fc2e409ea49dfc7a9e9aea7f2d97a2c2747a0b707763295841c4dcb not found: ID does not exist" containerID="ddc9500c0fc2e409ea49dfc7a9e9aea7f2d97a2c2747a0b707763295841c4dcb" Nov 22 08:22:22 crc kubenswrapper[4853]: I1122 08:22:22.333291 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ddc9500c0fc2e409ea49dfc7a9e9aea7f2d97a2c2747a0b707763295841c4dcb"} err="failed to get container status \"ddc9500c0fc2e409ea49dfc7a9e9aea7f2d97a2c2747a0b707763295841c4dcb\": rpc error: code = NotFound desc = could not find container \"ddc9500c0fc2e409ea49dfc7a9e9aea7f2d97a2c2747a0b707763295841c4dcb\": container with ID starting with ddc9500c0fc2e409ea49dfc7a9e9aea7f2d97a2c2747a0b707763295841c4dcb not found: ID does not exist" Nov 22 08:22:22 crc kubenswrapper[4853]: I1122 08:22:22.333331 4853 scope.go:117] "RemoveContainer" containerID="6bc3e5c231300ef02d07053578a2fb1232bc1bf9f3824f98062cef57a0a191ef" Nov 22 08:22:22 crc kubenswrapper[4853]: E1122 08:22:22.333730 4853 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"6bc3e5c231300ef02d07053578a2fb1232bc1bf9f3824f98062cef57a0a191ef\": container with ID starting with 6bc3e5c231300ef02d07053578a2fb1232bc1bf9f3824f98062cef57a0a191ef not found: ID does not exist" containerID="6bc3e5c231300ef02d07053578a2fb1232bc1bf9f3824f98062cef57a0a191ef" Nov 22 08:22:22 crc kubenswrapper[4853]: I1122 08:22:22.333778 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6bc3e5c231300ef02d07053578a2fb1232bc1bf9f3824f98062cef57a0a191ef"} err="failed to get container status \"6bc3e5c231300ef02d07053578a2fb1232bc1bf9f3824f98062cef57a0a191ef\": rpc error: code = NotFound desc = could not find container \"6bc3e5c231300ef02d07053578a2fb1232bc1bf9f3824f98062cef57a0a191ef\": container with ID starting with 6bc3e5c231300ef02d07053578a2fb1232bc1bf9f3824f98062cef57a0a191ef not found: ID does not exist" Nov 22 08:22:23 crc kubenswrapper[4853]: I1122 08:22:23.770419 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="357d570b-32db-4a2a-8424-17e15ee2b7a5" path="/var/lib/kubelet/pods/357d570b-32db-4a2a-8424-17e15ee2b7a5/volumes" Nov 22 08:22:31 crc kubenswrapper[4853]: I1122 08:22:31.296968 4853 patch_prober.go:28] interesting pod/machine-config-daemon-fflvd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 08:22:31 crc kubenswrapper[4853]: I1122 08:22:31.297427 4853 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 08:22:38 crc kubenswrapper[4853]: I1122 08:22:38.149993 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-h74np"] Nov 22 08:22:38 crc kubenswrapper[4853]: E1122 08:22:38.151356 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="357d570b-32db-4a2a-8424-17e15ee2b7a5" containerName="registry-server" Nov 22 08:22:38 crc kubenswrapper[4853]: I1122 08:22:38.151375 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="357d570b-32db-4a2a-8424-17e15ee2b7a5" containerName="registry-server" Nov 22 08:22:38 crc kubenswrapper[4853]: E1122 08:22:38.151449 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="357d570b-32db-4a2a-8424-17e15ee2b7a5" containerName="extract-content" Nov 22 08:22:38 crc kubenswrapper[4853]: I1122 08:22:38.151459 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="357d570b-32db-4a2a-8424-17e15ee2b7a5" containerName="extract-content" Nov 22 08:22:38 crc kubenswrapper[4853]: E1122 08:22:38.151480 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="357d570b-32db-4a2a-8424-17e15ee2b7a5" containerName="extract-utilities" Nov 22 08:22:38 crc kubenswrapper[4853]: I1122 08:22:38.151488 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="357d570b-32db-4a2a-8424-17e15ee2b7a5" containerName="extract-utilities" Nov 22 08:22:38 crc kubenswrapper[4853]: I1122 08:22:38.151783 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="357d570b-32db-4a2a-8424-17e15ee2b7a5" containerName="registry-server" Nov 22 08:22:38 crc kubenswrapper[4853]: I1122 
08:22:38.153998 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-h74np" Nov 22 08:22:38 crc kubenswrapper[4853]: I1122 08:22:38.174251 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-h74np"] Nov 22 08:22:38 crc kubenswrapper[4853]: I1122 08:22:38.265112 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6edd5b9-cef7-42a4-9de8-fe8fbd411082-catalog-content\") pod \"certified-operators-h74np\" (UID: \"c6edd5b9-cef7-42a4-9de8-fe8fbd411082\") " pod="openshift-marketplace/certified-operators-h74np" Nov 22 08:22:38 crc kubenswrapper[4853]: I1122 08:22:38.265180 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6edd5b9-cef7-42a4-9de8-fe8fbd411082-utilities\") pod \"certified-operators-h74np\" (UID: \"c6edd5b9-cef7-42a4-9de8-fe8fbd411082\") " pod="openshift-marketplace/certified-operators-h74np" Nov 22 08:22:38 crc kubenswrapper[4853]: I1122 08:22:38.265250 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v8mv5\" (UniqueName: \"kubernetes.io/projected/c6edd5b9-cef7-42a4-9de8-fe8fbd411082-kube-api-access-v8mv5\") pod \"certified-operators-h74np\" (UID: \"c6edd5b9-cef7-42a4-9de8-fe8fbd411082\") " pod="openshift-marketplace/certified-operators-h74np" Nov 22 08:22:38 crc kubenswrapper[4853]: I1122 08:22:38.367047 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v8mv5\" (UniqueName: \"kubernetes.io/projected/c6edd5b9-cef7-42a4-9de8-fe8fbd411082-kube-api-access-v8mv5\") pod \"certified-operators-h74np\" (UID: \"c6edd5b9-cef7-42a4-9de8-fe8fbd411082\") " pod="openshift-marketplace/certified-operators-h74np" Nov 22 08:22:38 crc kubenswrapper[4853]: I1122 08:22:38.367259 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6edd5b9-cef7-42a4-9de8-fe8fbd411082-catalog-content\") pod \"certified-operators-h74np\" (UID: \"c6edd5b9-cef7-42a4-9de8-fe8fbd411082\") " pod="openshift-marketplace/certified-operators-h74np" Nov 22 08:22:38 crc kubenswrapper[4853]: I1122 08:22:38.367320 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6edd5b9-cef7-42a4-9de8-fe8fbd411082-utilities\") pod \"certified-operators-h74np\" (UID: \"c6edd5b9-cef7-42a4-9de8-fe8fbd411082\") " pod="openshift-marketplace/certified-operators-h74np" Nov 22 08:22:38 crc kubenswrapper[4853]: I1122 08:22:38.367729 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6edd5b9-cef7-42a4-9de8-fe8fbd411082-catalog-content\") pod \"certified-operators-h74np\" (UID: \"c6edd5b9-cef7-42a4-9de8-fe8fbd411082\") " pod="openshift-marketplace/certified-operators-h74np" Nov 22 08:22:38 crc kubenswrapper[4853]: I1122 08:22:38.367802 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6edd5b9-cef7-42a4-9de8-fe8fbd411082-utilities\") pod \"certified-operators-h74np\" (UID: \"c6edd5b9-cef7-42a4-9de8-fe8fbd411082\") " pod="openshift-marketplace/certified-operators-h74np" Nov 22 08:22:38 crc 
kubenswrapper[4853]: I1122 08:22:38.389206 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v8mv5\" (UniqueName: \"kubernetes.io/projected/c6edd5b9-cef7-42a4-9de8-fe8fbd411082-kube-api-access-v8mv5\") pod \"certified-operators-h74np\" (UID: \"c6edd5b9-cef7-42a4-9de8-fe8fbd411082\") " pod="openshift-marketplace/certified-operators-h74np" Nov 22 08:22:38 crc kubenswrapper[4853]: I1122 08:22:38.486721 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-h74np" Nov 22 08:22:39 crc kubenswrapper[4853]: I1122 08:22:39.081235 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-h74np"] Nov 22 08:22:39 crc kubenswrapper[4853]: I1122 08:22:39.360215 4853 generic.go:334] "Generic (PLEG): container finished" podID="c6edd5b9-cef7-42a4-9de8-fe8fbd411082" containerID="b6153fbab7f7ade78305116ae1351cea817e67c3a70765393d8e8b4119bf14aa" exitCode=0 Nov 22 08:22:39 crc kubenswrapper[4853]: I1122 08:22:39.360269 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h74np" event={"ID":"c6edd5b9-cef7-42a4-9de8-fe8fbd411082","Type":"ContainerDied","Data":"b6153fbab7f7ade78305116ae1351cea817e67c3a70765393d8e8b4119bf14aa"} Nov 22 08:22:39 crc kubenswrapper[4853]: I1122 08:22:39.360509 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h74np" event={"ID":"c6edd5b9-cef7-42a4-9de8-fe8fbd411082","Type":"ContainerStarted","Data":"fe8eb34c408540f13cda0a35c0125fedfced71c76bc7fec90ce1bc7bc8ff091b"} Nov 22 08:22:39 crc kubenswrapper[4853]: I1122 08:22:39.362298 4853 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 22 08:22:40 crc kubenswrapper[4853]: I1122 08:22:40.547194 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-446dl"] Nov 22 08:22:40 crc kubenswrapper[4853]: I1122 08:22:40.549852 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-446dl" Nov 22 08:22:40 crc kubenswrapper[4853]: I1122 08:22:40.562877 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-446dl"] Nov 22 08:22:40 crc kubenswrapper[4853]: I1122 08:22:40.724205 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/15095a51-1248-4717-b511-3b0e6b848c51-utilities\") pod \"community-operators-446dl\" (UID: \"15095a51-1248-4717-b511-3b0e6b848c51\") " pod="openshift-marketplace/community-operators-446dl" Nov 22 08:22:40 crc kubenswrapper[4853]: I1122 08:22:40.724627 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/15095a51-1248-4717-b511-3b0e6b848c51-catalog-content\") pod \"community-operators-446dl\" (UID: \"15095a51-1248-4717-b511-3b0e6b848c51\") " pod="openshift-marketplace/community-operators-446dl" Nov 22 08:22:40 crc kubenswrapper[4853]: I1122 08:22:40.724784 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cb922\" (UniqueName: \"kubernetes.io/projected/15095a51-1248-4717-b511-3b0e6b848c51-kube-api-access-cb922\") pod \"community-operators-446dl\" (UID: \"15095a51-1248-4717-b511-3b0e6b848c51\") " pod="openshift-marketplace/community-operators-446dl" Nov 22 08:22:40 crc kubenswrapper[4853]: I1122 08:22:40.826910 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/15095a51-1248-4717-b511-3b0e6b848c51-catalog-content\") pod \"community-operators-446dl\" (UID: \"15095a51-1248-4717-b511-3b0e6b848c51\") " pod="openshift-marketplace/community-operators-446dl" Nov 22 08:22:40 crc kubenswrapper[4853]: I1122 08:22:40.827013 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cb922\" (UniqueName: \"kubernetes.io/projected/15095a51-1248-4717-b511-3b0e6b848c51-kube-api-access-cb922\") pod \"community-operators-446dl\" (UID: \"15095a51-1248-4717-b511-3b0e6b848c51\") " pod="openshift-marketplace/community-operators-446dl" Nov 22 08:22:40 crc kubenswrapper[4853]: I1122 08:22:40.827291 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/15095a51-1248-4717-b511-3b0e6b848c51-utilities\") pod \"community-operators-446dl\" (UID: \"15095a51-1248-4717-b511-3b0e6b848c51\") " pod="openshift-marketplace/community-operators-446dl" Nov 22 08:22:40 crc kubenswrapper[4853]: I1122 08:22:40.827490 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/15095a51-1248-4717-b511-3b0e6b848c51-catalog-content\") pod \"community-operators-446dl\" (UID: \"15095a51-1248-4717-b511-3b0e6b848c51\") " pod="openshift-marketplace/community-operators-446dl" Nov 22 08:22:40 crc kubenswrapper[4853]: I1122 08:22:40.827816 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/15095a51-1248-4717-b511-3b0e6b848c51-utilities\") pod \"community-operators-446dl\" (UID: \"15095a51-1248-4717-b511-3b0e6b848c51\") " pod="openshift-marketplace/community-operators-446dl" Nov 22 08:22:40 crc kubenswrapper[4853]: I1122 08:22:40.852673 4853 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-cb922\" (UniqueName: \"kubernetes.io/projected/15095a51-1248-4717-b511-3b0e6b848c51-kube-api-access-cb922\") pod \"community-operators-446dl\" (UID: \"15095a51-1248-4717-b511-3b0e6b848c51\") " pod="openshift-marketplace/community-operators-446dl" Nov 22 08:22:40 crc kubenswrapper[4853]: I1122 08:22:40.872502 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-446dl" Nov 22 08:22:41 crc kubenswrapper[4853]: I1122 08:22:41.384722 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h74np" event={"ID":"c6edd5b9-cef7-42a4-9de8-fe8fbd411082","Type":"ContainerStarted","Data":"4fc15ec1682dd9af76c02dba2223a2b1b6ccad53868eb905e2864fcb6d342c1b"} Nov 22 08:22:41 crc kubenswrapper[4853]: I1122 08:22:41.495793 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-446dl"] Nov 22 08:22:41 crc kubenswrapper[4853]: W1122 08:22:41.500188 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod15095a51_1248_4717_b511_3b0e6b848c51.slice/crio-dd0490d8cb9a5e5e8f1c32b0752058d9aa85e7f2810baf9fddc12dada8fbe495 WatchSource:0}: Error finding container dd0490d8cb9a5e5e8f1c32b0752058d9aa85e7f2810baf9fddc12dada8fbe495: Status 404 returned error can't find the container with id dd0490d8cb9a5e5e8f1c32b0752058d9aa85e7f2810baf9fddc12dada8fbe495 Nov 22 08:22:42 crc kubenswrapper[4853]: I1122 08:22:42.399910 4853 generic.go:334] "Generic (PLEG): container finished" podID="c6edd5b9-cef7-42a4-9de8-fe8fbd411082" containerID="4fc15ec1682dd9af76c02dba2223a2b1b6ccad53868eb905e2864fcb6d342c1b" exitCode=0 Nov 22 08:22:42 crc kubenswrapper[4853]: I1122 08:22:42.399959 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h74np" event={"ID":"c6edd5b9-cef7-42a4-9de8-fe8fbd411082","Type":"ContainerDied","Data":"4fc15ec1682dd9af76c02dba2223a2b1b6ccad53868eb905e2864fcb6d342c1b"} Nov 22 08:22:42 crc kubenswrapper[4853]: I1122 08:22:42.402375 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-446dl" event={"ID":"15095a51-1248-4717-b511-3b0e6b848c51","Type":"ContainerStarted","Data":"dd0490d8cb9a5e5e8f1c32b0752058d9aa85e7f2810baf9fddc12dada8fbe495"} Nov 22 08:22:43 crc kubenswrapper[4853]: I1122 08:22:43.414639 4853 generic.go:334] "Generic (PLEG): container finished" podID="15095a51-1248-4717-b511-3b0e6b848c51" containerID="214a8b0905c90f2040fb9a7c568ae57882a8f26c69b428815c96fc8a2f9d6088" exitCode=0 Nov 22 08:22:43 crc kubenswrapper[4853]: I1122 08:22:43.414761 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-446dl" event={"ID":"15095a51-1248-4717-b511-3b0e6b848c51","Type":"ContainerDied","Data":"214a8b0905c90f2040fb9a7c568ae57882a8f26c69b428815c96fc8a2f9d6088"} Nov 22 08:22:43 crc kubenswrapper[4853]: I1122 08:22:43.418562 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h74np" event={"ID":"c6edd5b9-cef7-42a4-9de8-fe8fbd411082","Type":"ContainerStarted","Data":"23a6f58a0565f6e1b1fb6c23d8523d4b6b76953be6513dda125ad0c288867917"} Nov 22 08:22:43 crc kubenswrapper[4853]: I1122 08:22:43.453491 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-h74np" 
podStartSLOduration=1.804560173 podStartE2EDuration="5.45347132s" podCreationTimestamp="2025-11-22 08:22:38 +0000 UTC" firstStartedPulling="2025-11-22 08:22:39.362006948 +0000 UTC m=+4358.202629564" lastFinishedPulling="2025-11-22 08:22:43.010918085 +0000 UTC m=+4361.851540711" observedRunningTime="2025-11-22 08:22:43.446822511 +0000 UTC m=+4362.287445157" watchObservedRunningTime="2025-11-22 08:22:43.45347132 +0000 UTC m=+4362.294093946" Nov 22 08:22:44 crc kubenswrapper[4853]: I1122 08:22:44.433357 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-446dl" event={"ID":"15095a51-1248-4717-b511-3b0e6b848c51","Type":"ContainerStarted","Data":"2c7ae5e85709b4432fb7057cad726a757e39f246f0e2357a8aa8924c096c4259"} Nov 22 08:22:46 crc kubenswrapper[4853]: I1122 08:22:46.458371 4853 generic.go:334] "Generic (PLEG): container finished" podID="15095a51-1248-4717-b511-3b0e6b848c51" containerID="2c7ae5e85709b4432fb7057cad726a757e39f246f0e2357a8aa8924c096c4259" exitCode=0 Nov 22 08:22:46 crc kubenswrapper[4853]: I1122 08:22:46.458463 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-446dl" event={"ID":"15095a51-1248-4717-b511-3b0e6b848c51","Type":"ContainerDied","Data":"2c7ae5e85709b4432fb7057cad726a757e39f246f0e2357a8aa8924c096c4259"} Nov 22 08:22:47 crc kubenswrapper[4853]: I1122 08:22:47.473592 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-446dl" event={"ID":"15095a51-1248-4717-b511-3b0e6b848c51","Type":"ContainerStarted","Data":"6377150eb3c5f2e4fef4d800aaba5ad3150afba6473a109469a3c90c88ad80de"} Nov 22 08:22:47 crc kubenswrapper[4853]: I1122 08:22:47.503147 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-446dl" podStartSLOduration=3.845292914 podStartE2EDuration="7.503123603s" podCreationTimestamp="2025-11-22 08:22:40 +0000 UTC" firstStartedPulling="2025-11-22 08:22:43.416808249 +0000 UTC m=+4362.257430875" lastFinishedPulling="2025-11-22 08:22:47.074638908 +0000 UTC m=+4365.915261564" observedRunningTime="2025-11-22 08:22:47.493641576 +0000 UTC m=+4366.334264212" watchObservedRunningTime="2025-11-22 08:22:47.503123603 +0000 UTC m=+4366.343746229"
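
[Editorial note] The two pod_startup_latency_tracker entries above decompose as follows: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp (for certified-operators-h74np: 08:22:43.45347132 - 08:22:38 = 5.45347132s), and podStartSLOduration subtracts the image-pull window, apparently measured on the monotonic readings, the m=+... offsets (m=+4361.851540711 - m=+4358.202629564 = 3.648911147s, giving 5.45347132 - 3.648911147 = 1.804560173). A short Go sketch reproducing the arithmetic; the mapping of timestamps to figures is inferred from the logged values, not quoted from kubelet source:

    package main

    import (
    	"fmt"
    	"time"
    )

    // Reproduces the pod_startup_latency_tracker figures for
    // certified-operators-h74np logged above. The pull window appears to be
    // taken from the monotonic readings (the m=+... offsets), which is why
    // wall-clock subtraction alone would drift in the last digits.
    func main() {
    	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
    	created, _ := time.Parse(layout, "2025-11-22 08:22:38 +0000 UTC")
    	watched, _ := time.Parse(layout, "2025-11-22 08:22:43.45347132 +0000 UTC")

    	firstStartedPulling := 4358.202629564 // m=+... offset, in seconds
    	lastFinishedPulling := 4361.851540711 // m=+... offset, in seconds

    	e2e := watched.Sub(created)                                        // podStartE2EDuration
    	slo := e2e.Seconds() - (lastFinishedPulling - firstStartedPulling) // E2E minus image-pull time

    	fmt.Printf("podStartE2EDuration=%v podStartSLOduration=%.9f\n", e2e, slo)
    	// podStartE2EDuration=5.45347132s podStartSLOduration=1.804560173
    }
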
status="" pod="openshift-marketplace/community-operators-446dl" Nov 22 08:22:50 crc kubenswrapper[4853]: I1122 08:22:50.922144 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-446dl" Nov 22 08:22:51 crc kubenswrapper[4853]: I1122 08:22:51.513092 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-h74np" podUID="c6edd5b9-cef7-42a4-9de8-fe8fbd411082" containerName="registry-server" containerID="cri-o://23a6f58a0565f6e1b1fb6c23d8523d4b6b76953be6513dda125ad0c288867917" gracePeriod=2 Nov 22 08:22:52 crc kubenswrapper[4853]: I1122 08:22:52.017051 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-h74np" Nov 22 08:22:52 crc kubenswrapper[4853]: I1122 08:22:52.096672 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v8mv5\" (UniqueName: \"kubernetes.io/projected/c6edd5b9-cef7-42a4-9de8-fe8fbd411082-kube-api-access-v8mv5\") pod \"c6edd5b9-cef7-42a4-9de8-fe8fbd411082\" (UID: \"c6edd5b9-cef7-42a4-9de8-fe8fbd411082\") " Nov 22 08:22:52 crc kubenswrapper[4853]: I1122 08:22:52.096839 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6edd5b9-cef7-42a4-9de8-fe8fbd411082-catalog-content\") pod \"c6edd5b9-cef7-42a4-9de8-fe8fbd411082\" (UID: \"c6edd5b9-cef7-42a4-9de8-fe8fbd411082\") " Nov 22 08:22:52 crc kubenswrapper[4853]: I1122 08:22:52.146722 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c6edd5b9-cef7-42a4-9de8-fe8fbd411082-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c6edd5b9-cef7-42a4-9de8-fe8fbd411082" (UID: "c6edd5b9-cef7-42a4-9de8-fe8fbd411082"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 08:22:52 crc kubenswrapper[4853]: I1122 08:22:52.198082 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6edd5b9-cef7-42a4-9de8-fe8fbd411082-utilities\") pod \"c6edd5b9-cef7-42a4-9de8-fe8fbd411082\" (UID: \"c6edd5b9-cef7-42a4-9de8-fe8fbd411082\") " Nov 22 08:22:52 crc kubenswrapper[4853]: I1122 08:22:52.198691 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c6edd5b9-cef7-42a4-9de8-fe8fbd411082-utilities" (OuterVolumeSpecName: "utilities") pod "c6edd5b9-cef7-42a4-9de8-fe8fbd411082" (UID: "c6edd5b9-cef7-42a4-9de8-fe8fbd411082"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 08:22:52 crc kubenswrapper[4853]: I1122 08:22:52.199242 4853 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6edd5b9-cef7-42a4-9de8-fe8fbd411082-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 08:22:52 crc kubenswrapper[4853]: I1122 08:22:52.199270 4853 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6edd5b9-cef7-42a4-9de8-fe8fbd411082-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 08:22:52 crc kubenswrapper[4853]: I1122 08:22:52.527552 4853 generic.go:334] "Generic (PLEG): container finished" podID="c6edd5b9-cef7-42a4-9de8-fe8fbd411082" containerID="23a6f58a0565f6e1b1fb6c23d8523d4b6b76953be6513dda125ad0c288867917" exitCode=0 Nov 22 08:22:52 crc kubenswrapper[4853]: I1122 08:22:52.527924 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-h74np" Nov 22 08:22:52 crc kubenswrapper[4853]: I1122 08:22:52.528118 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h74np" event={"ID":"c6edd5b9-cef7-42a4-9de8-fe8fbd411082","Type":"ContainerDied","Data":"23a6f58a0565f6e1b1fb6c23d8523d4b6b76953be6513dda125ad0c288867917"} Nov 22 08:22:52 crc kubenswrapper[4853]: I1122 08:22:52.528712 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h74np" event={"ID":"c6edd5b9-cef7-42a4-9de8-fe8fbd411082","Type":"ContainerDied","Data":"fe8eb34c408540f13cda0a35c0125fedfced71c76bc7fec90ce1bc7bc8ff091b"} Nov 22 08:22:52 crc kubenswrapper[4853]: I1122 08:22:52.528829 4853 scope.go:117] "RemoveContainer" containerID="23a6f58a0565f6e1b1fb6c23d8523d4b6b76953be6513dda125ad0c288867917" Nov 22 08:22:52 crc kubenswrapper[4853]: I1122 08:22:52.558588 4853 scope.go:117] "RemoveContainer" containerID="4fc15ec1682dd9af76c02dba2223a2b1b6ccad53868eb905e2864fcb6d342c1b" Nov 22 08:22:52 crc kubenswrapper[4853]: I1122 08:22:52.758193 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6edd5b9-cef7-42a4-9de8-fe8fbd411082-kube-api-access-v8mv5" (OuterVolumeSpecName: "kube-api-access-v8mv5") pod "c6edd5b9-cef7-42a4-9de8-fe8fbd411082" (UID: "c6edd5b9-cef7-42a4-9de8-fe8fbd411082"). InnerVolumeSpecName "kube-api-access-v8mv5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 08:22:52 crc kubenswrapper[4853]: I1122 08:22:52.771335 4853 scope.go:117] "RemoveContainer" containerID="b6153fbab7f7ade78305116ae1351cea817e67c3a70765393d8e8b4119bf14aa" Nov 22 08:22:52 crc kubenswrapper[4853]: I1122 08:22:52.808677 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-446dl" Nov 22 08:22:52 crc kubenswrapper[4853]: I1122 08:22:52.813053 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v8mv5\" (UniqueName: \"kubernetes.io/projected/c6edd5b9-cef7-42a4-9de8-fe8fbd411082-kube-api-access-v8mv5\") on node \"crc\" DevicePath \"\"" Nov 22 08:22:52 crc kubenswrapper[4853]: I1122 08:22:52.923484 4853 scope.go:117] "RemoveContainer" containerID="23a6f58a0565f6e1b1fb6c23d8523d4b6b76953be6513dda125ad0c288867917" Nov 22 08:22:52 crc kubenswrapper[4853]: E1122 08:22:52.923954 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"23a6f58a0565f6e1b1fb6c23d8523d4b6b76953be6513dda125ad0c288867917\": container with ID starting with 23a6f58a0565f6e1b1fb6c23d8523d4b6b76953be6513dda125ad0c288867917 not found: ID does not exist" containerID="23a6f58a0565f6e1b1fb6c23d8523d4b6b76953be6513dda125ad0c288867917" Nov 22 08:22:52 crc kubenswrapper[4853]: I1122 08:22:52.923989 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"23a6f58a0565f6e1b1fb6c23d8523d4b6b76953be6513dda125ad0c288867917"} err="failed to get container status \"23a6f58a0565f6e1b1fb6c23d8523d4b6b76953be6513dda125ad0c288867917\": rpc error: code = NotFound desc = could not find container \"23a6f58a0565f6e1b1fb6c23d8523d4b6b76953be6513dda125ad0c288867917\": container with ID starting with 23a6f58a0565f6e1b1fb6c23d8523d4b6b76953be6513dda125ad0c288867917 not found: ID does not exist" Nov 22 08:22:52 crc kubenswrapper[4853]: I1122 08:22:52.924012 4853 scope.go:117] "RemoveContainer" containerID="4fc15ec1682dd9af76c02dba2223a2b1b6ccad53868eb905e2864fcb6d342c1b" Nov 22 08:22:52 crc kubenswrapper[4853]: E1122 08:22:52.924251 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4fc15ec1682dd9af76c02dba2223a2b1b6ccad53868eb905e2864fcb6d342c1b\": container with ID starting with 4fc15ec1682dd9af76c02dba2223a2b1b6ccad53868eb905e2864fcb6d342c1b not found: ID does not exist" containerID="4fc15ec1682dd9af76c02dba2223a2b1b6ccad53868eb905e2864fcb6d342c1b" Nov 22 08:22:52 crc kubenswrapper[4853]: I1122 08:22:52.924269 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4fc15ec1682dd9af76c02dba2223a2b1b6ccad53868eb905e2864fcb6d342c1b"} err="failed to get container status \"4fc15ec1682dd9af76c02dba2223a2b1b6ccad53868eb905e2864fcb6d342c1b\": rpc error: code = NotFound desc = could not find container \"4fc15ec1682dd9af76c02dba2223a2b1b6ccad53868eb905e2864fcb6d342c1b\": container with ID starting with 4fc15ec1682dd9af76c02dba2223a2b1b6ccad53868eb905e2864fcb6d342c1b not found: ID does not exist" Nov 22 08:22:52 crc kubenswrapper[4853]: I1122 08:22:52.924281 4853 scope.go:117] "RemoveContainer" containerID="b6153fbab7f7ade78305116ae1351cea817e67c3a70765393d8e8b4119bf14aa" Nov 22 08:22:52 crc kubenswrapper[4853]: E1122 08:22:52.924648 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"b6153fbab7f7ade78305116ae1351cea817e67c3a70765393d8e8b4119bf14aa\": container with ID starting with b6153fbab7f7ade78305116ae1351cea817e67c3a70765393d8e8b4119bf14aa not found: ID does not exist" containerID="b6153fbab7f7ade78305116ae1351cea817e67c3a70765393d8e8b4119bf14aa" Nov 22 08:22:52 crc kubenswrapper[4853]: I1122 08:22:52.924672 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b6153fbab7f7ade78305116ae1351cea817e67c3a70765393d8e8b4119bf14aa"} err="failed to get container status \"b6153fbab7f7ade78305116ae1351cea817e67c3a70765393d8e8b4119bf14aa\": rpc error: code = NotFound desc = could not find container \"b6153fbab7f7ade78305116ae1351cea817e67c3a70765393d8e8b4119bf14aa\": container with ID starting with b6153fbab7f7ade78305116ae1351cea817e67c3a70765393d8e8b4119bf14aa not found: ID does not exist" Nov 22 08:22:52 crc kubenswrapper[4853]: I1122 08:22:52.979793 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-h74np"] Nov 22 08:22:52 crc kubenswrapper[4853]: I1122 08:22:52.989832 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-h74np"] Nov 22 08:22:53 crc kubenswrapper[4853]: I1122 08:22:53.343987 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-446dl"] Nov 22 08:22:53 crc kubenswrapper[4853]: I1122 08:22:53.762343 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c6edd5b9-cef7-42a4-9de8-fe8fbd411082" path="/var/lib/kubelet/pods/c6edd5b9-cef7-42a4-9de8-fe8fbd411082/volumes" Nov 22 08:22:54 crc kubenswrapper[4853]: I1122 08:22:54.554660 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-446dl" podUID="15095a51-1248-4717-b511-3b0e6b848c51" containerName="registry-server" containerID="cri-o://6377150eb3c5f2e4fef4d800aaba5ad3150afba6473a109469a3c90c88ad80de" gracePeriod=2 Nov 22 08:22:55 crc kubenswrapper[4853]: I1122 08:22:55.455680 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-446dl" Nov 22 08:22:55 crc kubenswrapper[4853]: I1122 08:22:55.565860 4853 generic.go:334] "Generic (PLEG): container finished" podID="15095a51-1248-4717-b511-3b0e6b848c51" containerID="6377150eb3c5f2e4fef4d800aaba5ad3150afba6473a109469a3c90c88ad80de" exitCode=0 Nov 22 08:22:55 crc kubenswrapper[4853]: I1122 08:22:55.565902 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-446dl" Nov 22 08:22:55 crc kubenswrapper[4853]: I1122 08:22:55.565901 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-446dl" event={"ID":"15095a51-1248-4717-b511-3b0e6b848c51","Type":"ContainerDied","Data":"6377150eb3c5f2e4fef4d800aaba5ad3150afba6473a109469a3c90c88ad80de"} Nov 22 08:22:55 crc kubenswrapper[4853]: I1122 08:22:55.566878 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-446dl" event={"ID":"15095a51-1248-4717-b511-3b0e6b848c51","Type":"ContainerDied","Data":"dd0490d8cb9a5e5e8f1c32b0752058d9aa85e7f2810baf9fddc12dada8fbe495"} Nov 22 08:22:55 crc kubenswrapper[4853]: I1122 08:22:55.566922 4853 scope.go:117] "RemoveContainer" containerID="6377150eb3c5f2e4fef4d800aaba5ad3150afba6473a109469a3c90c88ad80de" Nov 22 08:22:55 crc kubenswrapper[4853]: I1122 08:22:55.574193 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/15095a51-1248-4717-b511-3b0e6b848c51-utilities\") pod \"15095a51-1248-4717-b511-3b0e6b848c51\" (UID: \"15095a51-1248-4717-b511-3b0e6b848c51\") " Nov 22 08:22:55 crc kubenswrapper[4853]: I1122 08:22:55.574245 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cb922\" (UniqueName: \"kubernetes.io/projected/15095a51-1248-4717-b511-3b0e6b848c51-kube-api-access-cb922\") pod \"15095a51-1248-4717-b511-3b0e6b848c51\" (UID: \"15095a51-1248-4717-b511-3b0e6b848c51\") " Nov 22 08:22:55 crc kubenswrapper[4853]: I1122 08:22:55.574469 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/15095a51-1248-4717-b511-3b0e6b848c51-catalog-content\") pod \"15095a51-1248-4717-b511-3b0e6b848c51\" (UID: \"15095a51-1248-4717-b511-3b0e6b848c51\") " Nov 22 08:22:55 crc kubenswrapper[4853]: I1122 08:22:55.575221 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/15095a51-1248-4717-b511-3b0e6b848c51-utilities" (OuterVolumeSpecName: "utilities") pod "15095a51-1248-4717-b511-3b0e6b848c51" (UID: "15095a51-1248-4717-b511-3b0e6b848c51"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 08:22:55 crc kubenswrapper[4853]: I1122 08:22:55.580267 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15095a51-1248-4717-b511-3b0e6b848c51-kube-api-access-cb922" (OuterVolumeSpecName: "kube-api-access-cb922") pod "15095a51-1248-4717-b511-3b0e6b848c51" (UID: "15095a51-1248-4717-b511-3b0e6b848c51"). InnerVolumeSpecName "kube-api-access-cb922". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 08:22:55 crc kubenswrapper[4853]: I1122 08:22:55.591440 4853 scope.go:117] "RemoveContainer" containerID="2c7ae5e85709b4432fb7057cad726a757e39f246f0e2357a8aa8924c096c4259" Nov 22 08:22:55 crc kubenswrapper[4853]: I1122 08:22:55.627282 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/15095a51-1248-4717-b511-3b0e6b848c51-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "15095a51-1248-4717-b511-3b0e6b848c51" (UID: "15095a51-1248-4717-b511-3b0e6b848c51"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 08:22:55 crc kubenswrapper[4853]: I1122 08:22:55.669152 4853 scope.go:117] "RemoveContainer" containerID="214a8b0905c90f2040fb9a7c568ae57882a8f26c69b428815c96fc8a2f9d6088" Nov 22 08:22:55 crc kubenswrapper[4853]: I1122 08:22:55.677694 4853 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/15095a51-1248-4717-b511-3b0e6b848c51-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 08:22:55 crc kubenswrapper[4853]: I1122 08:22:55.677727 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cb922\" (UniqueName: \"kubernetes.io/projected/15095a51-1248-4717-b511-3b0e6b848c51-kube-api-access-cb922\") on node \"crc\" DevicePath \"\"" Nov 22 08:22:55 crc kubenswrapper[4853]: I1122 08:22:55.677737 4853 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/15095a51-1248-4717-b511-3b0e6b848c51-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 08:22:55 crc kubenswrapper[4853]: I1122 08:22:55.699730 4853 scope.go:117] "RemoveContainer" containerID="6377150eb3c5f2e4fef4d800aaba5ad3150afba6473a109469a3c90c88ad80de" Nov 22 08:22:55 crc kubenswrapper[4853]: E1122 08:22:55.700269 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6377150eb3c5f2e4fef4d800aaba5ad3150afba6473a109469a3c90c88ad80de\": container with ID starting with 6377150eb3c5f2e4fef4d800aaba5ad3150afba6473a109469a3c90c88ad80de not found: ID does not exist" containerID="6377150eb3c5f2e4fef4d800aaba5ad3150afba6473a109469a3c90c88ad80de" Nov 22 08:22:55 crc kubenswrapper[4853]: I1122 08:22:55.700321 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6377150eb3c5f2e4fef4d800aaba5ad3150afba6473a109469a3c90c88ad80de"} err="failed to get container status \"6377150eb3c5f2e4fef4d800aaba5ad3150afba6473a109469a3c90c88ad80de\": rpc error: code = NotFound desc = could not find container \"6377150eb3c5f2e4fef4d800aaba5ad3150afba6473a109469a3c90c88ad80de\": container with ID starting with 6377150eb3c5f2e4fef4d800aaba5ad3150afba6473a109469a3c90c88ad80de not found: ID does not exist" Nov 22 08:22:55 crc kubenswrapper[4853]: I1122 08:22:55.700352 4853 scope.go:117] "RemoveContainer" containerID="2c7ae5e85709b4432fb7057cad726a757e39f246f0e2357a8aa8924c096c4259" Nov 22 08:22:55 crc kubenswrapper[4853]: E1122 08:22:55.700775 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2c7ae5e85709b4432fb7057cad726a757e39f246f0e2357a8aa8924c096c4259\": container with ID starting with 2c7ae5e85709b4432fb7057cad726a757e39f246f0e2357a8aa8924c096c4259 not found: ID does not exist" containerID="2c7ae5e85709b4432fb7057cad726a757e39f246f0e2357a8aa8924c096c4259" Nov 22 08:22:55 crc kubenswrapper[4853]: I1122 08:22:55.700818 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c7ae5e85709b4432fb7057cad726a757e39f246f0e2357a8aa8924c096c4259"} err="failed to get container status \"2c7ae5e85709b4432fb7057cad726a757e39f246f0e2357a8aa8924c096c4259\": rpc error: code = NotFound desc = could not find container \"2c7ae5e85709b4432fb7057cad726a757e39f246f0e2357a8aa8924c096c4259\": container with ID starting with 2c7ae5e85709b4432fb7057cad726a757e39f246f0e2357a8aa8924c096c4259 not found: ID does not exist" Nov 22 08:22:55 crc 
Nov 22 08:22:55 crc kubenswrapper[4853]: I1122 08:22:55.700843 4853 scope.go:117] "RemoveContainer" containerID="214a8b0905c90f2040fb9a7c568ae57882a8f26c69b428815c96fc8a2f9d6088" Nov 22 08:22:55 crc kubenswrapper[4853]: E1122 08:22:55.701119 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"214a8b0905c90f2040fb9a7c568ae57882a8f26c69b428815c96fc8a2f9d6088\": container with ID starting with 214a8b0905c90f2040fb9a7c568ae57882a8f26c69b428815c96fc8a2f9d6088 not found: ID does not exist" containerID="214a8b0905c90f2040fb9a7c568ae57882a8f26c69b428815c96fc8a2f9d6088" Nov 22 08:22:55 crc kubenswrapper[4853]: I1122 08:22:55.701152 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"214a8b0905c90f2040fb9a7c568ae57882a8f26c69b428815c96fc8a2f9d6088"} err="failed to get container status \"214a8b0905c90f2040fb9a7c568ae57882a8f26c69b428815c96fc8a2f9d6088\": rpc error: code = NotFound desc = could not find container \"214a8b0905c90f2040fb9a7c568ae57882a8f26c69b428815c96fc8a2f9d6088\": container with ID starting with 214a8b0905c90f2040fb9a7c568ae57882a8f26c69b428815c96fc8a2f9d6088 not found: ID does not exist" Nov 22 08:22:55 crc kubenswrapper[4853]: I1122 08:22:55.890278 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-446dl"] Nov 22 08:22:55 crc kubenswrapper[4853]: I1122 08:22:55.900880 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-446dl"] Nov 22 08:22:57 crc kubenswrapper[4853]: I1122 08:22:57.764146 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="15095a51-1248-4717-b511-3b0e6b848c51" path="/var/lib/kubelet/pods/15095a51-1248-4717-b511-3b0e6b848c51/volumes" Nov 22 08:23:01 crc kubenswrapper[4853]: I1122 08:23:01.297482 4853 patch_prober.go:28] interesting pod/machine-config-daemon-fflvd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 08:23:01 crc kubenswrapper[4853]: I1122 08:23:01.297777 4853 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 08:23:18 crc kubenswrapper[4853]: E1122 08:23:18.297389 4853 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.251:39092->38.102.83.251:37237: write tcp 38.102.83.251:39092->38.102.83.251:37237: write: broken pipe Nov 22 08:23:31 crc kubenswrapper[4853]: I1122 08:23:31.298022 4853 patch_prober.go:28] interesting pod/machine-config-daemon-fflvd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 08:23:31 crc kubenswrapper[4853]: I1122 08:23:31.298698 4853 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" Nov 22 08:23:31 crc kubenswrapper[4853]: I1122 08:23:31.298773 4853 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" Nov 22 08:23:31 crc kubenswrapper[4853]: I1122 08:23:31.299660 4853 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"37e23743046a159157cd74e860dd32bd690e8580691cc4451c8549e96b87351a"} pod="openshift-machine-config-operator/machine-config-daemon-fflvd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 22 08:23:31 crc kubenswrapper[4853]: I1122 08:23:31.299729 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" containerName="machine-config-daemon" containerID="cri-o://37e23743046a159157cd74e860dd32bd690e8580691cc4451c8549e96b87351a" gracePeriod=600 Nov 22 08:23:32 crc kubenswrapper[4853]: I1122 08:23:32.141278 4853 generic.go:334] "Generic (PLEG): container finished" podID="476c875a-2b87-419a-8042-0ba059620fd8" containerID="37e23743046a159157cd74e860dd32bd690e8580691cc4451c8549e96b87351a" exitCode=0 Nov 22 08:23:32 crc kubenswrapper[4853]: I1122 08:23:32.141356 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" event={"ID":"476c875a-2b87-419a-8042-0ba059620fd8","Type":"ContainerDied","Data":"37e23743046a159157cd74e860dd32bd690e8580691cc4451c8549e96b87351a"} Nov 22 08:23:32 crc kubenswrapper[4853]: I1122 08:23:32.141645 4853 scope.go:117] "RemoveContainer" containerID="43c489ef7c014ec88f2022965726354fdac660d1573666e320842974a56f8ca4" Nov 22 08:23:33 crc kubenswrapper[4853]: I1122 08:23:33.156394 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" event={"ID":"476c875a-2b87-419a-8042-0ba059620fd8","Type":"ContainerStarted","Data":"308a9f775315b75fdc2fc710a50cb8a83149656c872038e11489a5167064fc08"} Nov 22 08:26:01 crc kubenswrapper[4853]: I1122 08:26:01.298129 4853 patch_prober.go:28] interesting pod/machine-config-daemon-fflvd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 08:26:01 crc kubenswrapper[4853]: I1122 08:26:01.298804 4853 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 08:26:31 crc kubenswrapper[4853]: I1122 08:26:31.297709 4853 patch_prober.go:28] interesting pod/machine-config-daemon-fflvd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 08:26:31 crc kubenswrapper[4853]: I1122 08:26:31.298250 4853 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" containerName="machine-config-daemon" 
probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 08:27:01 crc kubenswrapper[4853]: I1122 08:27:01.297989 4853 patch_prober.go:28] interesting pod/machine-config-daemon-fflvd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 08:27:01 crc kubenswrapper[4853]: I1122 08:27:01.298566 4853 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 08:27:01 crc kubenswrapper[4853]: I1122 08:27:01.298626 4853 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" Nov 22 08:27:01 crc kubenswrapper[4853]: I1122 08:27:01.299466 4853 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"308a9f775315b75fdc2fc710a50cb8a83149656c872038e11489a5167064fc08"} pod="openshift-machine-config-operator/machine-config-daemon-fflvd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 22 08:27:01 crc kubenswrapper[4853]: I1122 08:27:01.299570 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" containerName="machine-config-daemon" containerID="cri-o://308a9f775315b75fdc2fc710a50cb8a83149656c872038e11489a5167064fc08" gracePeriod=600 Nov 22 08:27:01 crc kubenswrapper[4853]: E1122 08:27:01.429555 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" Nov 22 08:27:01 crc kubenswrapper[4853]: I1122 08:27:01.480401 4853 generic.go:334] "Generic (PLEG): container finished" podID="476c875a-2b87-419a-8042-0ba059620fd8" containerID="308a9f775315b75fdc2fc710a50cb8a83149656c872038e11489a5167064fc08" exitCode=0 Nov 22 08:27:01 crc kubenswrapper[4853]: I1122 08:27:01.480455 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" event={"ID":"476c875a-2b87-419a-8042-0ba059620fd8","Type":"ContainerDied","Data":"308a9f775315b75fdc2fc710a50cb8a83149656c872038e11489a5167064fc08"} Nov 22 08:27:01 crc kubenswrapper[4853]: I1122 08:27:01.480507 4853 scope.go:117] "RemoveContainer" containerID="37e23743046a159157cd74e860dd32bd690e8580691cc4451c8549e96b87351a" Nov 22 08:27:01 crc kubenswrapper[4853]: I1122 08:27:01.481516 4853 scope.go:117] "RemoveContainer" containerID="308a9f775315b75fdc2fc710a50cb8a83149656c872038e11489a5167064fc08" Nov 22 08:27:01 crc kubenswrapper[4853]: E1122 08:27:01.482104 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" Nov 22 08:27:09 crc kubenswrapper[4853]: I1122 08:27:09.617044 4853 patch_prober.go:28] interesting pod/loki-operator-controller-manager-5bb8bb4577-rspn5 container/manager namespace/openshift-operators-redhat: Readiness probe status=failure output="Get \"http://10.217.0.50:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Nov 22 08:27:09 crc kubenswrapper[4853]: I1122 08:27:09.617656 4853 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators-redhat/loki-operator-controller-manager-5bb8bb4577-rspn5" podUID="50b94c6e-d5b7-4720-af4c-8922035ca146" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.50:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 22 08:27:09 crc kubenswrapper[4853]: I1122 08:27:09.622416 4853 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/neutron-operator-controller-manager-78bd47f458-mxwrm" podUID="05971821-7368-4352-8955-bd9432958c9b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.111:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 22 08:27:16 crc kubenswrapper[4853]: I1122 08:27:16.747478 4853 scope.go:117] "RemoveContainer" containerID="308a9f775315b75fdc2fc710a50cb8a83149656c872038e11489a5167064fc08" Nov 22 08:27:16 crc kubenswrapper[4853]: E1122 08:27:16.748579 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" Nov 22 08:27:31 crc kubenswrapper[4853]: I1122 08:27:31.748160 4853 scope.go:117] "RemoveContainer" containerID="308a9f775315b75fdc2fc710a50cb8a83149656c872038e11489a5167064fc08" Nov 22 08:27:31 crc kubenswrapper[4853]: E1122 08:27:31.749400 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" Nov 22 08:27:45 crc kubenswrapper[4853]: I1122 08:27:45.755321 4853 scope.go:117] "RemoveContainer" containerID="308a9f775315b75fdc2fc710a50cb8a83149656c872038e11489a5167064fc08" Nov 22 08:27:45 crc kubenswrapper[4853]: E1122 08:27:45.756097 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" Nov 22 08:27:51 crc 
Nov 22 08:27:51 crc kubenswrapper[4853]: I1122 08:27:51.664606 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-v8f2k"] Nov 22 08:27:51 crc kubenswrapper[4853]: E1122 08:27:51.665736 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15095a51-1248-4717-b511-3b0e6b848c51" containerName="extract-utilities" Nov 22 08:27:51 crc kubenswrapper[4853]: I1122 08:27:51.665770 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="15095a51-1248-4717-b511-3b0e6b848c51" containerName="extract-utilities" Nov 22 08:27:51 crc kubenswrapper[4853]: E1122 08:27:51.665796 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6edd5b9-cef7-42a4-9de8-fe8fbd411082" containerName="extract-utilities" Nov 22 08:27:51 crc kubenswrapper[4853]: I1122 08:27:51.665886 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6edd5b9-cef7-42a4-9de8-fe8fbd411082" containerName="extract-utilities" Nov 22 08:27:51 crc kubenswrapper[4853]: E1122 08:27:51.665907 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15095a51-1248-4717-b511-3b0e6b848c51" containerName="extract-content" Nov 22 08:27:51 crc kubenswrapper[4853]: I1122 08:27:51.665952 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="15095a51-1248-4717-b511-3b0e6b848c51" containerName="extract-content" Nov 22 08:27:51 crc kubenswrapper[4853]: E1122 08:27:51.665970 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15095a51-1248-4717-b511-3b0e6b848c51" containerName="registry-server" Nov 22 08:27:51 crc kubenswrapper[4853]: I1122 08:27:51.665977 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="15095a51-1248-4717-b511-3b0e6b848c51" containerName="registry-server" Nov 22 08:27:51 crc kubenswrapper[4853]: E1122 08:27:51.666016 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6edd5b9-cef7-42a4-9de8-fe8fbd411082" containerName="registry-server" Nov 22 08:27:51 crc kubenswrapper[4853]: I1122 08:27:51.666023 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6edd5b9-cef7-42a4-9de8-fe8fbd411082" containerName="registry-server" Nov 22 08:27:51 crc kubenswrapper[4853]: E1122 08:27:51.666038 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6edd5b9-cef7-42a4-9de8-fe8fbd411082" containerName="extract-content" Nov 22 08:27:51 crc kubenswrapper[4853]: I1122 08:27:51.666045 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6edd5b9-cef7-42a4-9de8-fe8fbd411082" containerName="extract-content" Nov 22 08:27:51 crc kubenswrapper[4853]: I1122 08:27:51.666329 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="15095a51-1248-4717-b511-3b0e6b848c51" containerName="registry-server" Nov 22 08:27:51 crc kubenswrapper[4853]: I1122 08:27:51.666349 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6edd5b9-cef7-42a4-9de8-fe8fbd411082" containerName="registry-server" Nov 22 08:27:51 crc kubenswrapper[4853]: I1122 08:27:51.668359 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-v8f2k"
Nov 22 08:27:51 crc kubenswrapper[4853]: I1122 08:27:51.675515 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-v8f2k"]
Nov 22 08:27:51 crc kubenswrapper[4853]: I1122 08:27:51.763262 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7t4mr\" (UniqueName: \"kubernetes.io/projected/b41b2fb6-8872-436c-8ccf-bfc93a2e0919-kube-api-access-7t4mr\") pod \"redhat-marketplace-v8f2k\" (UID: \"b41b2fb6-8872-436c-8ccf-bfc93a2e0919\") " pod="openshift-marketplace/redhat-marketplace-v8f2k"
Nov 22 08:27:51 crc kubenswrapper[4853]: I1122 08:27:51.763368 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b41b2fb6-8872-436c-8ccf-bfc93a2e0919-catalog-content\") pod \"redhat-marketplace-v8f2k\" (UID: \"b41b2fb6-8872-436c-8ccf-bfc93a2e0919\") " pod="openshift-marketplace/redhat-marketplace-v8f2k"
Nov 22 08:27:51 crc kubenswrapper[4853]: I1122 08:27:51.763592 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b41b2fb6-8872-436c-8ccf-bfc93a2e0919-utilities\") pod \"redhat-marketplace-v8f2k\" (UID: \"b41b2fb6-8872-436c-8ccf-bfc93a2e0919\") " pod="openshift-marketplace/redhat-marketplace-v8f2k"
Nov 22 08:27:51 crc kubenswrapper[4853]: I1122 08:27:51.865421 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b41b2fb6-8872-436c-8ccf-bfc93a2e0919-catalog-content\") pod \"redhat-marketplace-v8f2k\" (UID: \"b41b2fb6-8872-436c-8ccf-bfc93a2e0919\") " pod="openshift-marketplace/redhat-marketplace-v8f2k"
Nov 22 08:27:51 crc kubenswrapper[4853]: I1122 08:27:51.865593 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b41b2fb6-8872-436c-8ccf-bfc93a2e0919-utilities\") pod \"redhat-marketplace-v8f2k\" (UID: \"b41b2fb6-8872-436c-8ccf-bfc93a2e0919\") " pod="openshift-marketplace/redhat-marketplace-v8f2k"
Nov 22 08:27:51 crc kubenswrapper[4853]: I1122 08:27:51.865679 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7t4mr\" (UniqueName: \"kubernetes.io/projected/b41b2fb6-8872-436c-8ccf-bfc93a2e0919-kube-api-access-7t4mr\") pod \"redhat-marketplace-v8f2k\" (UID: \"b41b2fb6-8872-436c-8ccf-bfc93a2e0919\") " pod="openshift-marketplace/redhat-marketplace-v8f2k"
Nov 22 08:27:51 crc kubenswrapper[4853]: I1122 08:27:51.866045 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b41b2fb6-8872-436c-8ccf-bfc93a2e0919-catalog-content\") pod \"redhat-marketplace-v8f2k\" (UID: \"b41b2fb6-8872-436c-8ccf-bfc93a2e0919\") " pod="openshift-marketplace/redhat-marketplace-v8f2k"
Nov 22 08:27:51 crc kubenswrapper[4853]: I1122 08:27:51.866116 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b41b2fb6-8872-436c-8ccf-bfc93a2e0919-utilities\") pod \"redhat-marketplace-v8f2k\" (UID: \"b41b2fb6-8872-436c-8ccf-bfc93a2e0919\") " pod="openshift-marketplace/redhat-marketplace-v8f2k"
Nov 22 08:27:51 crc kubenswrapper[4853]: I1122 08:27:51.887900 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7t4mr\" (UniqueName: \"kubernetes.io/projected/b41b2fb6-8872-436c-8ccf-bfc93a2e0919-kube-api-access-7t4mr\") pod \"redhat-marketplace-v8f2k\" (UID: \"b41b2fb6-8872-436c-8ccf-bfc93a2e0919\") " pod="openshift-marketplace/redhat-marketplace-v8f2k"
Nov 22 08:27:51 crc kubenswrapper[4853]: I1122 08:27:51.998485 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-v8f2k"
Nov 22 08:27:52 crc kubenswrapper[4853]: I1122 08:27:52.454050 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-v8f2k"]
Nov 22 08:27:52 crc kubenswrapper[4853]: I1122 08:27:52.561009 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v8f2k" event={"ID":"b41b2fb6-8872-436c-8ccf-bfc93a2e0919","Type":"ContainerStarted","Data":"4fc4fc894d9c4dc3c20b05feb98feb0dfe9950a36e973edbdd1b2e8dcae794a5"}
Nov 22 08:27:53 crc kubenswrapper[4853]: I1122 08:27:53.571531 4853 generic.go:334] "Generic (PLEG): container finished" podID="b41b2fb6-8872-436c-8ccf-bfc93a2e0919" containerID="7fc7b5d23ae2633dee6772334c27ff8962b34dafd8df67a61b900aae9bb979b6" exitCode=0
Nov 22 08:27:53 crc kubenswrapper[4853]: I1122 08:27:53.571715 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v8f2k" event={"ID":"b41b2fb6-8872-436c-8ccf-bfc93a2e0919","Type":"ContainerDied","Data":"7fc7b5d23ae2633dee6772334c27ff8962b34dafd8df67a61b900aae9bb979b6"}
Nov 22 08:27:53 crc kubenswrapper[4853]: I1122 08:27:53.574298 4853 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Nov 22 08:27:54 crc kubenswrapper[4853]: I1122 08:27:54.583646 4853 generic.go:334] "Generic (PLEG): container finished" podID="b41b2fb6-8872-436c-8ccf-bfc93a2e0919" containerID="c7c27033cffdf2115d47cfa9023220b253f0766f75ea07e7e4a205223007f6d2" exitCode=0
Nov 22 08:27:54 crc kubenswrapper[4853]: I1122 08:27:54.583741 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v8f2k" event={"ID":"b41b2fb6-8872-436c-8ccf-bfc93a2e0919","Type":"ContainerDied","Data":"c7c27033cffdf2115d47cfa9023220b253f0766f75ea07e7e4a205223007f6d2"}
Nov 22 08:27:55 crc kubenswrapper[4853]: I1122 08:27:55.603631 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v8f2k" event={"ID":"b41b2fb6-8872-436c-8ccf-bfc93a2e0919","Type":"ContainerStarted","Data":"b9551b2dbb8c7f4f80b620b31cd3a995c58fee4d89145cd8383f7998de0736eb"}
Nov 22 08:27:55 crc kubenswrapper[4853]: I1122 08:27:55.634240 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-v8f2k" podStartSLOduration=3.236857656 podStartE2EDuration="4.634204366s" podCreationTimestamp="2025-11-22 08:27:51 +0000 UTC" firstStartedPulling="2025-11-22 08:27:53.574011194 +0000 UTC m=+4672.414633820" lastFinishedPulling="2025-11-22 08:27:54.971357904 +0000 UTC m=+4673.811980530" observedRunningTime="2025-11-22 08:27:55.627062972 +0000 UTC m=+4674.467685598" watchObservedRunningTime="2025-11-22 08:27:55.634204366 +0000 UTC m=+4674.474826992"
Nov 22 08:27:58 crc kubenswrapper[4853]: I1122 08:27:58.748266 4853 scope.go:117] "RemoveContainer" containerID="308a9f775315b75fdc2fc710a50cb8a83149656c872038e11489a5167064fc08"
Nov 22 08:27:58 crc kubenswrapper[4853]: E1122 08:27:58.749559 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8"
Nov 22 08:28:02 crc kubenswrapper[4853]: I1122 08:28:01.999372 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-v8f2k"
Nov 22 08:28:02 crc kubenswrapper[4853]: I1122 08:28:02.000074 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-v8f2k"
Nov 22 08:28:02 crc kubenswrapper[4853]: I1122 08:28:02.054567 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-v8f2k"
Nov 22 08:28:02 crc kubenswrapper[4853]: I1122 08:28:02.725907 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-v8f2k"
Nov 22 08:28:02 crc kubenswrapper[4853]: I1122 08:28:02.799165 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-v8f2k"]
Nov 22 08:28:04 crc kubenswrapper[4853]: I1122 08:28:04.698600 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-v8f2k" podUID="b41b2fb6-8872-436c-8ccf-bfc93a2e0919" containerName="registry-server" containerID="cri-o://b9551b2dbb8c7f4f80b620b31cd3a995c58fee4d89145cd8383f7998de0736eb" gracePeriod=2
Nov 22 08:28:05 crc kubenswrapper[4853]: I1122 08:28:05.710682 4853 generic.go:334] "Generic (PLEG): container finished" podID="b41b2fb6-8872-436c-8ccf-bfc93a2e0919" containerID="b9551b2dbb8c7f4f80b620b31cd3a995c58fee4d89145cd8383f7998de0736eb" exitCode=0
Nov 22 08:28:05 crc kubenswrapper[4853]: I1122 08:28:05.710800 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v8f2k" event={"ID":"b41b2fb6-8872-436c-8ccf-bfc93a2e0919","Type":"ContainerDied","Data":"b9551b2dbb8c7f4f80b620b31cd3a995c58fee4d89145cd8383f7998de0736eb"}
Nov 22 08:28:06 crc kubenswrapper[4853]: I1122 08:28:06.374637 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-v8f2k"
Nov 22 08:28:06 crc kubenswrapper[4853]: I1122 08:28:06.537702 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7t4mr\" (UniqueName: \"kubernetes.io/projected/b41b2fb6-8872-436c-8ccf-bfc93a2e0919-kube-api-access-7t4mr\") pod \"b41b2fb6-8872-436c-8ccf-bfc93a2e0919\" (UID: \"b41b2fb6-8872-436c-8ccf-bfc93a2e0919\") "
Nov 22 08:28:06 crc kubenswrapper[4853]: I1122 08:28:06.537948 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b41b2fb6-8872-436c-8ccf-bfc93a2e0919-catalog-content\") pod \"b41b2fb6-8872-436c-8ccf-bfc93a2e0919\" (UID: \"b41b2fb6-8872-436c-8ccf-bfc93a2e0919\") "
Nov 22 08:28:06 crc kubenswrapper[4853]: I1122 08:28:06.538127 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b41b2fb6-8872-436c-8ccf-bfc93a2e0919-utilities\") pod \"b41b2fb6-8872-436c-8ccf-bfc93a2e0919\" (UID: \"b41b2fb6-8872-436c-8ccf-bfc93a2e0919\") "
Nov 22 08:28:06 crc kubenswrapper[4853]: I1122 08:28:06.539531 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b41b2fb6-8872-436c-8ccf-bfc93a2e0919-utilities" (OuterVolumeSpecName: "utilities") pod "b41b2fb6-8872-436c-8ccf-bfc93a2e0919" (UID: "b41b2fb6-8872-436c-8ccf-bfc93a2e0919"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 22 08:28:06 crc kubenswrapper[4853]: I1122 08:28:06.545420 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b41b2fb6-8872-436c-8ccf-bfc93a2e0919-kube-api-access-7t4mr" (OuterVolumeSpecName: "kube-api-access-7t4mr") pod "b41b2fb6-8872-436c-8ccf-bfc93a2e0919" (UID: "b41b2fb6-8872-436c-8ccf-bfc93a2e0919"). InnerVolumeSpecName "kube-api-access-7t4mr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 22 08:28:06 crc kubenswrapper[4853]: I1122 08:28:06.555975 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b41b2fb6-8872-436c-8ccf-bfc93a2e0919-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b41b2fb6-8872-436c-8ccf-bfc93a2e0919" (UID: "b41b2fb6-8872-436c-8ccf-bfc93a2e0919"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 22 08:28:06 crc kubenswrapper[4853]: I1122 08:28:06.641968 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7t4mr\" (UniqueName: \"kubernetes.io/projected/b41b2fb6-8872-436c-8ccf-bfc93a2e0919-kube-api-access-7t4mr\") on node \"crc\" DevicePath \"\""
Nov 22 08:28:06 crc kubenswrapper[4853]: I1122 08:28:06.642003 4853 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b41b2fb6-8872-436c-8ccf-bfc93a2e0919-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 22 08:28:06 crc kubenswrapper[4853]: I1122 08:28:06.642013 4853 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b41b2fb6-8872-436c-8ccf-bfc93a2e0919-utilities\") on node \"crc\" DevicePath \"\""
Nov 22 08:28:06 crc kubenswrapper[4853]: I1122 08:28:06.723795 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v8f2k" event={"ID":"b41b2fb6-8872-436c-8ccf-bfc93a2e0919","Type":"ContainerDied","Data":"4fc4fc894d9c4dc3c20b05feb98feb0dfe9950a36e973edbdd1b2e8dcae794a5"}
Nov 22 08:28:06 crc kubenswrapper[4853]: I1122 08:28:06.723852 4853 scope.go:117] "RemoveContainer" containerID="b9551b2dbb8c7f4f80b620b31cd3a995c58fee4d89145cd8383f7998de0736eb"
Nov 22 08:28:06 crc kubenswrapper[4853]: I1122 08:28:06.723870 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-v8f2k"
Nov 22 08:28:06 crc kubenswrapper[4853]: I1122 08:28:06.750143 4853 scope.go:117] "RemoveContainer" containerID="c7c27033cffdf2115d47cfa9023220b253f0766f75ea07e7e4a205223007f6d2"
Nov 22 08:28:06 crc kubenswrapper[4853]: I1122 08:28:06.763314 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-v8f2k"]
Nov 22 08:28:06 crc kubenswrapper[4853]: I1122 08:28:06.777186 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-v8f2k"]
Nov 22 08:28:06 crc kubenswrapper[4853]: I1122 08:28:06.778293 4853 scope.go:117] "RemoveContainer" containerID="7fc7b5d23ae2633dee6772334c27ff8962b34dafd8df67a61b900aae9bb979b6"
Nov 22 08:28:07 crc kubenswrapper[4853]: I1122 08:28:07.773596 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b41b2fb6-8872-436c-8ccf-bfc93a2e0919" path="/var/lib/kubelet/pods/b41b2fb6-8872-436c-8ccf-bfc93a2e0919/volumes"
Nov 22 08:28:09 crc kubenswrapper[4853]: I1122 08:28:09.748018 4853 scope.go:117] "RemoveContainer" containerID="308a9f775315b75fdc2fc710a50cb8a83149656c872038e11489a5167064fc08"
Nov 22 08:28:09 crc kubenswrapper[4853]: E1122 08:28:09.748901 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8"
Nov 22 08:28:23 crc kubenswrapper[4853]: I1122 08:28:23.748705 4853 scope.go:117] "RemoveContainer" containerID="308a9f775315b75fdc2fc710a50cb8a83149656c872038e11489a5167064fc08"
Nov 22 08:28:23 crc kubenswrapper[4853]: E1122 08:28:23.749703 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8"
Nov 22 08:28:34 crc kubenswrapper[4853]: I1122 08:28:34.748813 4853 scope.go:117] "RemoveContainer" containerID="308a9f775315b75fdc2fc710a50cb8a83149656c872038e11489a5167064fc08"
Nov 22 08:28:34 crc kubenswrapper[4853]: E1122 08:28:34.749526 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8"
Nov 22 08:28:47 crc kubenswrapper[4853]: I1122 08:28:47.748934 4853 scope.go:117] "RemoveContainer" containerID="308a9f775315b75fdc2fc710a50cb8a83149656c872038e11489a5167064fc08"
Nov 22 08:28:47 crc kubenswrapper[4853]: E1122 08:28:47.749959 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8"
Nov 22 08:29:00 crc kubenswrapper[4853]: I1122 08:29:00.748489 4853 scope.go:117] "RemoveContainer" containerID="308a9f775315b75fdc2fc710a50cb8a83149656c872038e11489a5167064fc08"
Nov 22 08:29:00 crc kubenswrapper[4853]: E1122 08:29:00.749389 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8"
Nov 22 08:29:11 crc kubenswrapper[4853]: I1122 08:29:11.748375 4853 scope.go:117] "RemoveContainer" containerID="308a9f775315b75fdc2fc710a50cb8a83149656c872038e11489a5167064fc08"
Nov 22 08:29:11 crc kubenswrapper[4853]: E1122 08:29:11.749284 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8"
Nov 22 08:29:24 crc kubenswrapper[4853]: I1122 08:29:24.748386 4853 scope.go:117] "RemoveContainer" containerID="308a9f775315b75fdc2fc710a50cb8a83149656c872038e11489a5167064fc08"
Nov 22 08:29:24 crc kubenswrapper[4853]: E1122 08:29:24.749151 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8"
Nov 22 08:29:38 crc kubenswrapper[4853]: I1122 08:29:38.747832 4853 scope.go:117] "RemoveContainer" containerID="308a9f775315b75fdc2fc710a50cb8a83149656c872038e11489a5167064fc08"
Nov 22 08:29:38 crc kubenswrapper[4853]: E1122 08:29:38.748645 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8"
Nov 22 08:29:53 crc kubenswrapper[4853]: I1122 08:29:53.748578 4853 scope.go:117] "RemoveContainer" containerID="308a9f775315b75fdc2fc710a50cb8a83149656c872038e11489a5167064fc08"
Nov 22 08:29:53 crc kubenswrapper[4853]: E1122 08:29:53.749424 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8"
Nov 22 08:30:00 crc kubenswrapper[4853]: I1122 08:30:00.157228 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396670-s6dwg"]
Nov 22 08:30:00 crc kubenswrapper[4853]: E1122 08:30:00.158837 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b41b2fb6-8872-436c-8ccf-bfc93a2e0919" containerName="registry-server"
Nov 22 08:30:00 crc kubenswrapper[4853]: I1122 08:30:00.159141 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="b41b2fb6-8872-436c-8ccf-bfc93a2e0919" containerName="registry-server"
Nov 22 08:30:00 crc kubenswrapper[4853]: E1122 08:30:00.159198 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b41b2fb6-8872-436c-8ccf-bfc93a2e0919" containerName="extract-utilities"
Nov 22 08:30:00 crc kubenswrapper[4853]: I1122 08:30:00.159209 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="b41b2fb6-8872-436c-8ccf-bfc93a2e0919" containerName="extract-utilities"
Nov 22 08:30:00 crc kubenswrapper[4853]: E1122 08:30:00.159228 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b41b2fb6-8872-436c-8ccf-bfc93a2e0919" containerName="extract-content"
Nov 22 08:30:00 crc kubenswrapper[4853]: I1122 08:30:00.159237 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="b41b2fb6-8872-436c-8ccf-bfc93a2e0919" containerName="extract-content"
Nov 22 08:30:00 crc kubenswrapper[4853]: I1122 08:30:00.159561 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="b41b2fb6-8872-436c-8ccf-bfc93a2e0919" containerName="registry-server"
Nov 22 08:30:00 crc kubenswrapper[4853]: I1122 08:30:00.161100 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396670-s6dwg"
Nov 22 08:30:00 crc kubenswrapper[4853]: I1122 08:30:00.163581 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Nov 22 08:30:00 crc kubenswrapper[4853]: I1122 08:30:00.164521 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Nov 22 08:30:00 crc kubenswrapper[4853]: I1122 08:30:00.169437 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396670-s6dwg"]
Nov 22 08:30:00 crc kubenswrapper[4853]: I1122 08:30:00.218625 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7mx6z\" (UniqueName: \"kubernetes.io/projected/7f5019d7-51a0-456b-8e17-4ce585ac6bb9-kube-api-access-7mx6z\") pod \"collect-profiles-29396670-s6dwg\" (UID: \"7f5019d7-51a0-456b-8e17-4ce585ac6bb9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396670-s6dwg"
Nov 22 08:30:00 crc kubenswrapper[4853]: I1122 08:30:00.218825 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7f5019d7-51a0-456b-8e17-4ce585ac6bb9-config-volume\") pod \"collect-profiles-29396670-s6dwg\" (UID: \"7f5019d7-51a0-456b-8e17-4ce585ac6bb9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396670-s6dwg"
Nov 22 08:30:00 crc kubenswrapper[4853]: I1122 08:30:00.219236 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7f5019d7-51a0-456b-8e17-4ce585ac6bb9-secret-volume\") pod \"collect-profiles-29396670-s6dwg\" (UID: \"7f5019d7-51a0-456b-8e17-4ce585ac6bb9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396670-s6dwg"
Nov 22 08:30:00 crc kubenswrapper[4853]: I1122 08:30:00.321818 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7f5019d7-51a0-456b-8e17-4ce585ac6bb9-secret-volume\") pod \"collect-profiles-29396670-s6dwg\" (UID: \"7f5019d7-51a0-456b-8e17-4ce585ac6bb9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396670-s6dwg"
Nov 22 08:30:00 crc kubenswrapper[4853]: I1122 08:30:00.321939 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7mx6z\" (UniqueName: \"kubernetes.io/projected/7f5019d7-51a0-456b-8e17-4ce585ac6bb9-kube-api-access-7mx6z\") pod \"collect-profiles-29396670-s6dwg\" (UID: \"7f5019d7-51a0-456b-8e17-4ce585ac6bb9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396670-s6dwg"
Nov 22 08:30:00 crc kubenswrapper[4853]: I1122 08:30:00.322060 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7f5019d7-51a0-456b-8e17-4ce585ac6bb9-config-volume\") pod \"collect-profiles-29396670-s6dwg\" (UID: \"7f5019d7-51a0-456b-8e17-4ce585ac6bb9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396670-s6dwg"
Nov 22 08:30:00 crc kubenswrapper[4853]: I1122 08:30:00.323320 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7f5019d7-51a0-456b-8e17-4ce585ac6bb9-config-volume\") pod \"collect-profiles-29396670-s6dwg\" (UID: \"7f5019d7-51a0-456b-8e17-4ce585ac6bb9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396670-s6dwg"
Nov 22 08:30:00 crc kubenswrapper[4853]: I1122 08:30:00.330113 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7f5019d7-51a0-456b-8e17-4ce585ac6bb9-secret-volume\") pod \"collect-profiles-29396670-s6dwg\" (UID: \"7f5019d7-51a0-456b-8e17-4ce585ac6bb9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396670-s6dwg"
Nov 22 08:30:00 crc kubenswrapper[4853]: I1122 08:30:00.342800 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7mx6z\" (UniqueName: \"kubernetes.io/projected/7f5019d7-51a0-456b-8e17-4ce585ac6bb9-kube-api-access-7mx6z\") pod \"collect-profiles-29396670-s6dwg\" (UID: \"7f5019d7-51a0-456b-8e17-4ce585ac6bb9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396670-s6dwg"
Nov 22 08:30:00 crc kubenswrapper[4853]: I1122 08:30:00.486303 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396670-s6dwg"
Nov 22 08:30:00 crc kubenswrapper[4853]: I1122 08:30:00.945596 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396670-s6dwg"]
Nov 22 08:30:00 crc kubenswrapper[4853]: I1122 08:30:00.985439 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29396670-s6dwg" event={"ID":"7f5019d7-51a0-456b-8e17-4ce585ac6bb9","Type":"ContainerStarted","Data":"b8aff7ed45e895dc58585b13d517d043b4618c22662ea9c937d2c94ff45ef476"}
Nov 22 08:30:01 crc kubenswrapper[4853]: I1122 08:30:01.997729 4853 generic.go:334] "Generic (PLEG): container finished" podID="7f5019d7-51a0-456b-8e17-4ce585ac6bb9" containerID="c021c2a3e9ca9f9178d5f98ac78f9c18729eff1bca26f9d4f6ac4e0e7162d7a0" exitCode=0
Nov 22 08:30:01 crc kubenswrapper[4853]: I1122 08:30:01.998252 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29396670-s6dwg" event={"ID":"7f5019d7-51a0-456b-8e17-4ce585ac6bb9","Type":"ContainerDied","Data":"c021c2a3e9ca9f9178d5f98ac78f9c18729eff1bca26f9d4f6ac4e0e7162d7a0"}
Nov 22 08:30:03 crc kubenswrapper[4853]: I1122 08:30:03.408116 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396670-s6dwg"
Nov 22 08:30:03 crc kubenswrapper[4853]: I1122 08:30:03.508972 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7f5019d7-51a0-456b-8e17-4ce585ac6bb9-config-volume\") pod \"7f5019d7-51a0-456b-8e17-4ce585ac6bb9\" (UID: \"7f5019d7-51a0-456b-8e17-4ce585ac6bb9\") "
Nov 22 08:30:03 crc kubenswrapper[4853]: I1122 08:30:03.509239 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7f5019d7-51a0-456b-8e17-4ce585ac6bb9-secret-volume\") pod \"7f5019d7-51a0-456b-8e17-4ce585ac6bb9\" (UID: \"7f5019d7-51a0-456b-8e17-4ce585ac6bb9\") "
Nov 22 08:30:03 crc kubenswrapper[4853]: I1122 08:30:03.509411 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7mx6z\" (UniqueName: \"kubernetes.io/projected/7f5019d7-51a0-456b-8e17-4ce585ac6bb9-kube-api-access-7mx6z\") pod \"7f5019d7-51a0-456b-8e17-4ce585ac6bb9\" (UID: \"7f5019d7-51a0-456b-8e17-4ce585ac6bb9\") "
Nov 22 08:30:03 crc kubenswrapper[4853]: I1122 08:30:03.510176 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f5019d7-51a0-456b-8e17-4ce585ac6bb9-config-volume" (OuterVolumeSpecName: "config-volume") pod "7f5019d7-51a0-456b-8e17-4ce585ac6bb9" (UID: "7f5019d7-51a0-456b-8e17-4ce585ac6bb9"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 22 08:30:03 crc kubenswrapper[4853]: I1122 08:30:03.516424 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f5019d7-51a0-456b-8e17-4ce585ac6bb9-kube-api-access-7mx6z" (OuterVolumeSpecName: "kube-api-access-7mx6z") pod "7f5019d7-51a0-456b-8e17-4ce585ac6bb9" (UID: "7f5019d7-51a0-456b-8e17-4ce585ac6bb9"). InnerVolumeSpecName "kube-api-access-7mx6z". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 22 08:30:03 crc kubenswrapper[4853]: I1122 08:30:03.516922 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f5019d7-51a0-456b-8e17-4ce585ac6bb9-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "7f5019d7-51a0-456b-8e17-4ce585ac6bb9" (UID: "7f5019d7-51a0-456b-8e17-4ce585ac6bb9"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 22 08:30:03 crc kubenswrapper[4853]: I1122 08:30:03.611809 4853 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7f5019d7-51a0-456b-8e17-4ce585ac6bb9-config-volume\") on node \"crc\" DevicePath \"\""
Nov 22 08:30:03 crc kubenswrapper[4853]: I1122 08:30:03.611871 4853 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7f5019d7-51a0-456b-8e17-4ce585ac6bb9-secret-volume\") on node \"crc\" DevicePath \"\""
Nov 22 08:30:03 crc kubenswrapper[4853]: I1122 08:30:03.611882 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7mx6z\" (UniqueName: \"kubernetes.io/projected/7f5019d7-51a0-456b-8e17-4ce585ac6bb9-kube-api-access-7mx6z\") on node \"crc\" DevicePath \"\""
Nov 22 08:30:04 crc kubenswrapper[4853]: I1122 08:30:04.026778 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29396670-s6dwg" event={"ID":"7f5019d7-51a0-456b-8e17-4ce585ac6bb9","Type":"ContainerDied","Data":"b8aff7ed45e895dc58585b13d517d043b4618c22662ea9c937d2c94ff45ef476"}
Nov 22 08:30:04 crc kubenswrapper[4853]: I1122 08:30:04.026836 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b8aff7ed45e895dc58585b13d517d043b4618c22662ea9c937d2c94ff45ef476"
Nov 22 08:30:04 crc kubenswrapper[4853]: I1122 08:30:04.026849 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396670-s6dwg"
Nov 22 08:30:04 crc kubenswrapper[4853]: I1122 08:30:04.484715 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396625-f4jwv"]
Nov 22 08:30:04 crc kubenswrapper[4853]: I1122 08:30:04.498825 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396625-f4jwv"]
Nov 22 08:30:05 crc kubenswrapper[4853]: I1122 08:30:05.763885 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16ff1621-679a-42ea-af86-4101058daa35" path="/var/lib/kubelet/pods/16ff1621-679a-42ea-af86-4101058daa35/volumes"
Nov 22 08:30:08 crc kubenswrapper[4853]: I1122 08:30:08.747734 4853 scope.go:117] "RemoveContainer" containerID="308a9f775315b75fdc2fc710a50cb8a83149656c872038e11489a5167064fc08"
Nov 22 08:30:08 crc kubenswrapper[4853]: E1122 08:30:08.748690 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8"
Nov 22 08:30:21 crc kubenswrapper[4853]: I1122 08:30:21.749052 4853 scope.go:117] "RemoveContainer" containerID="308a9f775315b75fdc2fc710a50cb8a83149656c872038e11489a5167064fc08"
Nov 22 08:30:21 crc kubenswrapper[4853]: E1122 08:30:21.751390 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8"
Nov 22 08:30:34 crc kubenswrapper[4853]: I1122 08:30:34.748140 4853 scope.go:117] "RemoveContainer" containerID="308a9f775315b75fdc2fc710a50cb8a83149656c872038e11489a5167064fc08"
Nov 22 08:30:34 crc kubenswrapper[4853]: E1122 08:30:34.749025 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8"
Nov 22 08:30:45 crc kubenswrapper[4853]: I1122 08:30:45.764157 4853 scope.go:117] "RemoveContainer" containerID="308a9f775315b75fdc2fc710a50cb8a83149656c872038e11489a5167064fc08"
Nov 22 08:30:45 crc kubenswrapper[4853]: E1122 08:30:45.765242 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8"
Nov 22 08:30:59 crc kubenswrapper[4853]: I1122 08:30:59.748983 4853 scope.go:117] "RemoveContainer" containerID="308a9f775315b75fdc2fc710a50cb8a83149656c872038e11489a5167064fc08"
Nov 22 08:30:59 crc kubenswrapper[4853]: E1122 08:30:59.750462 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8"
Nov 22 08:31:03 crc kubenswrapper[4853]: I1122 08:31:03.608575 4853 scope.go:117] "RemoveContainer" containerID="6a05e5f54293086212872f0f9acd7a9f5ecbab972ff347fe1f4bbb1ae303b9fa"
Nov 22 08:31:14 crc kubenswrapper[4853]: I1122 08:31:14.747851 4853 scope.go:117] "RemoveContainer" containerID="308a9f775315b75fdc2fc710a50cb8a83149656c872038e11489a5167064fc08"
Nov 22 08:31:14 crc kubenswrapper[4853]: E1122 08:31:14.748639 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8"
Nov 22 08:31:28 crc kubenswrapper[4853]: I1122 08:31:28.748902 4853 scope.go:117] "RemoveContainer" containerID="308a9f775315b75fdc2fc710a50cb8a83149656c872038e11489a5167064fc08"
Nov 22 08:31:28 crc kubenswrapper[4853]: E1122 08:31:28.749801 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8"
Nov 22 08:31:41 crc kubenswrapper[4853]: I1122 08:31:41.748069 4853 scope.go:117] "RemoveContainer" containerID="308a9f775315b75fdc2fc710a50cb8a83149656c872038e11489a5167064fc08"
Nov 22 08:31:41 crc kubenswrapper[4853]: E1122 08:31:41.749113 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8"
Nov 22 08:31:52 crc kubenswrapper[4853]: I1122 08:31:52.749237 4853 scope.go:117] "RemoveContainer" containerID="308a9f775315b75fdc2fc710a50cb8a83149656c872038e11489a5167064fc08"
Nov 22 08:31:52 crc kubenswrapper[4853]: E1122 08:31:52.751414 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8"
Nov 22 08:32:03 crc kubenswrapper[4853]: I1122 08:32:03.748475 4853 scope.go:117] "RemoveContainer" containerID="308a9f775315b75fdc2fc710a50cb8a83149656c872038e11489a5167064fc08"
Nov 22 08:32:04 crc kubenswrapper[4853]: I1122 08:32:04.175351 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-gvjt6"]
Nov 22 08:32:04 crc kubenswrapper[4853]: E1122 08:32:04.176464 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f5019d7-51a0-456b-8e17-4ce585ac6bb9" containerName="collect-profiles"
Nov 22 08:32:04 crc kubenswrapper[4853]: I1122 08:32:04.176502 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f5019d7-51a0-456b-8e17-4ce585ac6bb9" containerName="collect-profiles"
Nov 22 08:32:04 crc kubenswrapper[4853]: I1122 08:32:04.176964 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f5019d7-51a0-456b-8e17-4ce585ac6bb9" containerName="collect-profiles"
Nov 22 08:32:04 crc kubenswrapper[4853]: I1122 08:32:04.188220 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gvjt6"
Nov 22 08:32:04 crc kubenswrapper[4853]: I1122 08:32:04.194531 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gvjt6"]
Nov 22 08:32:04 crc kubenswrapper[4853]: I1122 08:32:04.285363 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tl7gh\" (UniqueName: \"kubernetes.io/projected/47b41e21-db8e-40a0-8d1a-5513faab7690-kube-api-access-tl7gh\") pod \"redhat-operators-gvjt6\" (UID: \"47b41e21-db8e-40a0-8d1a-5513faab7690\") " pod="openshift-marketplace/redhat-operators-gvjt6"
Nov 22 08:32:04 crc kubenswrapper[4853]: I1122 08:32:04.285566 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/47b41e21-db8e-40a0-8d1a-5513faab7690-utilities\") pod \"redhat-operators-gvjt6\" (UID: \"47b41e21-db8e-40a0-8d1a-5513faab7690\") " pod="openshift-marketplace/redhat-operators-gvjt6"
Nov 22 08:32:04 crc kubenswrapper[4853]: I1122 08:32:04.285596 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/47b41e21-db8e-40a0-8d1a-5513faab7690-catalog-content\") pod \"redhat-operators-gvjt6\" (UID: \"47b41e21-db8e-40a0-8d1a-5513faab7690\") " pod="openshift-marketplace/redhat-operators-gvjt6"
Nov 22 08:32:04 crc kubenswrapper[4853]: I1122 08:32:04.387266 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/47b41e21-db8e-40a0-8d1a-5513faab7690-utilities\") pod \"redhat-operators-gvjt6\" (UID: \"47b41e21-db8e-40a0-8d1a-5513faab7690\") " pod="openshift-marketplace/redhat-operators-gvjt6"
Nov 22 08:32:04 crc kubenswrapper[4853]: I1122 08:32:04.387333 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/47b41e21-db8e-40a0-8d1a-5513faab7690-catalog-content\") pod \"redhat-operators-gvjt6\" (UID: \"47b41e21-db8e-40a0-8d1a-5513faab7690\") " pod="openshift-marketplace/redhat-operators-gvjt6"
Nov 22 08:32:04 crc kubenswrapper[4853]: I1122 08:32:04.387424 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tl7gh\" (UniqueName: \"kubernetes.io/projected/47b41e21-db8e-40a0-8d1a-5513faab7690-kube-api-access-tl7gh\") pod \"redhat-operators-gvjt6\" (UID: \"47b41e21-db8e-40a0-8d1a-5513faab7690\") " pod="openshift-marketplace/redhat-operators-gvjt6"
Nov 22 08:32:04 crc kubenswrapper[4853]: I1122 08:32:04.387859 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/47b41e21-db8e-40a0-8d1a-5513faab7690-utilities\") pod \"redhat-operators-gvjt6\" (UID: \"47b41e21-db8e-40a0-8d1a-5513faab7690\") " pod="openshift-marketplace/redhat-operators-gvjt6"
Nov 22 08:32:04 crc kubenswrapper[4853]: I1122 08:32:04.387909 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/47b41e21-db8e-40a0-8d1a-5513faab7690-catalog-content\") pod \"redhat-operators-gvjt6\" (UID: \"47b41e21-db8e-40a0-8d1a-5513faab7690\") " pod="openshift-marketplace/redhat-operators-gvjt6"
Nov 22 08:32:04 crc kubenswrapper[4853]: I1122 08:32:04.405405 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" event={"ID":"476c875a-2b87-419a-8042-0ba059620fd8","Type":"ContainerStarted","Data":"8c3be09df9d116cf9965d5b368358b5325d22a227e0546a23cbd8f67078e5f0d"}
Nov 22 08:32:04 crc kubenswrapper[4853]: I1122 08:32:04.408603 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tl7gh\" (UniqueName: \"kubernetes.io/projected/47b41e21-db8e-40a0-8d1a-5513faab7690-kube-api-access-tl7gh\") pod \"redhat-operators-gvjt6\" (UID: \"47b41e21-db8e-40a0-8d1a-5513faab7690\") " pod="openshift-marketplace/redhat-operators-gvjt6"
Nov 22 08:32:04 crc kubenswrapper[4853]: I1122 08:32:04.507548 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gvjt6"
Nov 22 08:32:05 crc kubenswrapper[4853]: I1122 08:32:05.059362 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gvjt6"]
Nov 22 08:32:05 crc kubenswrapper[4853]: I1122 08:32:05.422456 4853 generic.go:334] "Generic (PLEG): container finished" podID="47b41e21-db8e-40a0-8d1a-5513faab7690" containerID="e38236d5dbe8cc44cc8bacaa4ca399447af7f0c5aa309b9c013170d1d44c68cc" exitCode=0
Nov 22 08:32:05 crc kubenswrapper[4853]: I1122 08:32:05.422942 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gvjt6" event={"ID":"47b41e21-db8e-40a0-8d1a-5513faab7690","Type":"ContainerDied","Data":"e38236d5dbe8cc44cc8bacaa4ca399447af7f0c5aa309b9c013170d1d44c68cc"}
Nov 22 08:32:05 crc kubenswrapper[4853]: I1122 08:32:05.422975 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gvjt6" event={"ID":"47b41e21-db8e-40a0-8d1a-5513faab7690","Type":"ContainerStarted","Data":"6535a0954aeb8c194a3ea3467c5a143c5d317dcee74b59593a1b10d32c5f78d4"}
Nov 22 08:32:07 crc kubenswrapper[4853]: I1122 08:32:07.446890 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gvjt6" event={"ID":"47b41e21-db8e-40a0-8d1a-5513faab7690","Type":"ContainerStarted","Data":"0deb78d7c39babaeda91c85d805d1c1cb674347674b301f1fd21b3b1088afebe"}
Nov 22 08:32:10 crc kubenswrapper[4853]: I1122 08:32:10.483806 4853 generic.go:334] "Generic (PLEG): container finished" podID="47b41e21-db8e-40a0-8d1a-5513faab7690" containerID="0deb78d7c39babaeda91c85d805d1c1cb674347674b301f1fd21b3b1088afebe" exitCode=0
Nov 22 08:32:10 crc kubenswrapper[4853]: I1122 08:32:10.483876 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gvjt6" event={"ID":"47b41e21-db8e-40a0-8d1a-5513faab7690","Type":"ContainerDied","Data":"0deb78d7c39babaeda91c85d805d1c1cb674347674b301f1fd21b3b1088afebe"}
Nov 22 08:32:11 crc kubenswrapper[4853]: I1122 08:32:11.498064 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gvjt6" event={"ID":"47b41e21-db8e-40a0-8d1a-5513faab7690","Type":"ContainerStarted","Data":"64d919052c0d9d10dac7b7a6434a7e62d7db20297d809b91fb613e2218bb9ebd"}
Nov 22 08:32:11 crc kubenswrapper[4853]: I1122 08:32:11.529121 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-gvjt6" podStartSLOduration=2.090246492 podStartE2EDuration="7.529055578s" podCreationTimestamp="2025-11-22 08:32:04 +0000 UTC" firstStartedPulling="2025-11-22 08:32:05.428621709 +0000 UTC m=+4924.269244335" lastFinishedPulling="2025-11-22 08:32:10.867430795 +0000 UTC m=+4929.708053421" observedRunningTime="2025-11-22 08:32:11.521658727 +0000 UTC m=+4930.362281393" watchObservedRunningTime="2025-11-22 08:32:11.529055578 +0000 UTC m=+4930.369678214"
Nov 22 08:32:14 crc kubenswrapper[4853]: I1122 08:32:14.507810 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-gvjt6"
Nov 22 08:32:14 crc kubenswrapper[4853]: I1122 08:32:14.508213 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-gvjt6"
Nov 22 08:32:15 crc kubenswrapper[4853]: I1122 08:32:15.571312 4853 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-gvjt6" podUID="47b41e21-db8e-40a0-8d1a-5513faab7690" containerName="registry-server" probeResult="failure" output=<
Nov 22 08:32:15 crc kubenswrapper[4853]: timeout: failed to connect service ":50051" within 1s
Nov 22 08:32:15 crc kubenswrapper[4853]: >
Nov 22 08:32:24 crc kubenswrapper[4853]: I1122 08:32:24.565608 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-gvjt6"
Nov 22 08:32:24 crc kubenswrapper[4853]: I1122 08:32:24.616815 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-gvjt6"
Nov 22 08:32:24 crc kubenswrapper[4853]: I1122 08:32:24.813164 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-gvjt6"]
Nov 22 08:32:25 crc kubenswrapper[4853]: I1122 08:32:25.675434 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-gvjt6" podUID="47b41e21-db8e-40a0-8d1a-5513faab7690" containerName="registry-server" containerID="cri-o://64d919052c0d9d10dac7b7a6434a7e62d7db20297d809b91fb613e2218bb9ebd" gracePeriod=2
Nov 22 08:32:26 crc kubenswrapper[4853]: I1122 08:32:26.314924 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gvjt6"
Nov 22 08:32:26 crc kubenswrapper[4853]: I1122 08:32:26.473822 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/47b41e21-db8e-40a0-8d1a-5513faab7690-catalog-content\") pod \"47b41e21-db8e-40a0-8d1a-5513faab7690\" (UID: \"47b41e21-db8e-40a0-8d1a-5513faab7690\") "
Nov 22 08:32:26 crc kubenswrapper[4853]: I1122 08:32:26.474247 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tl7gh\" (UniqueName: \"kubernetes.io/projected/47b41e21-db8e-40a0-8d1a-5513faab7690-kube-api-access-tl7gh\") pod \"47b41e21-db8e-40a0-8d1a-5513faab7690\" (UID: \"47b41e21-db8e-40a0-8d1a-5513faab7690\") "
Nov 22 08:32:26 crc kubenswrapper[4853]: I1122 08:32:26.474334 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/47b41e21-db8e-40a0-8d1a-5513faab7690-utilities\") pod \"47b41e21-db8e-40a0-8d1a-5513faab7690\" (UID: \"47b41e21-db8e-40a0-8d1a-5513faab7690\") "
Nov 22 08:32:26 crc kubenswrapper[4853]: I1122 08:32:26.475182 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/47b41e21-db8e-40a0-8d1a-5513faab7690-utilities" (OuterVolumeSpecName: "utilities") pod "47b41e21-db8e-40a0-8d1a-5513faab7690" (UID: "47b41e21-db8e-40a0-8d1a-5513faab7690"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 22 08:32:26 crc kubenswrapper[4853]: I1122 08:32:26.490934 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/47b41e21-db8e-40a0-8d1a-5513faab7690-kube-api-access-tl7gh" (OuterVolumeSpecName: "kube-api-access-tl7gh") pod "47b41e21-db8e-40a0-8d1a-5513faab7690" (UID: "47b41e21-db8e-40a0-8d1a-5513faab7690"). InnerVolumeSpecName "kube-api-access-tl7gh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 22 08:32:26 crc kubenswrapper[4853]: I1122 08:32:26.554247 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/47b41e21-db8e-40a0-8d1a-5513faab7690-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "47b41e21-db8e-40a0-8d1a-5513faab7690" (UID: "47b41e21-db8e-40a0-8d1a-5513faab7690"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 22 08:32:26 crc kubenswrapper[4853]: I1122 08:32:26.577103 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tl7gh\" (UniqueName: \"kubernetes.io/projected/47b41e21-db8e-40a0-8d1a-5513faab7690-kube-api-access-tl7gh\") on node \"crc\" DevicePath \"\""
Nov 22 08:32:26 crc kubenswrapper[4853]: I1122 08:32:26.577152 4853 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/47b41e21-db8e-40a0-8d1a-5513faab7690-utilities\") on node \"crc\" DevicePath \"\""
Nov 22 08:32:26 crc kubenswrapper[4853]: I1122 08:32:26.577165 4853 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/47b41e21-db8e-40a0-8d1a-5513faab7690-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 22 08:32:26 crc kubenswrapper[4853]: I1122 08:32:26.689587 4853 generic.go:334] "Generic (PLEG): container finished" podID="47b41e21-db8e-40a0-8d1a-5513faab7690" containerID="64d919052c0d9d10dac7b7a6434a7e62d7db20297d809b91fb613e2218bb9ebd" exitCode=0
Nov 22 08:32:26 crc kubenswrapper[4853]: I1122 08:32:26.689655 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gvjt6" event={"ID":"47b41e21-db8e-40a0-8d1a-5513faab7690","Type":"ContainerDied","Data":"64d919052c0d9d10dac7b7a6434a7e62d7db20297d809b91fb613e2218bb9ebd"}
Nov 22 08:32:26 crc kubenswrapper[4853]: I1122 08:32:26.689688 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gvjt6" event={"ID":"47b41e21-db8e-40a0-8d1a-5513faab7690","Type":"ContainerDied","Data":"6535a0954aeb8c194a3ea3467c5a143c5d317dcee74b59593a1b10d32c5f78d4"}
Nov 22 08:32:26 crc kubenswrapper[4853]: I1122 08:32:26.689670 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gvjt6"
Nov 22 08:32:26 crc kubenswrapper[4853]: I1122 08:32:26.689709 4853 scope.go:117] "RemoveContainer" containerID="64d919052c0d9d10dac7b7a6434a7e62d7db20297d809b91fb613e2218bb9ebd"
Nov 22 08:32:26 crc kubenswrapper[4853]: I1122 08:32:26.727166 4853 scope.go:117] "RemoveContainer" containerID="0deb78d7c39babaeda91c85d805d1c1cb674347674b301f1fd21b3b1088afebe"
Nov 22 08:32:26 crc kubenswrapper[4853]: I1122 08:32:26.730143 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-gvjt6"]
Nov 22 08:32:26 crc kubenswrapper[4853]: I1122 08:32:26.751403 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-gvjt6"]
Nov 22 08:32:26 crc kubenswrapper[4853]: I1122 08:32:26.756218 4853 scope.go:117] "RemoveContainer" containerID="e38236d5dbe8cc44cc8bacaa4ca399447af7f0c5aa309b9c013170d1d44c68cc"
Nov 22 08:32:26 crc kubenswrapper[4853]: I1122 08:32:26.815424 4853 scope.go:117] "RemoveContainer" containerID="64d919052c0d9d10dac7b7a6434a7e62d7db20297d809b91fb613e2218bb9ebd"
Nov 22 08:32:26 crc kubenswrapper[4853]: E1122 08:32:26.815943 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"64d919052c0d9d10dac7b7a6434a7e62d7db20297d809b91fb613e2218bb9ebd\": container with ID starting with 64d919052c0d9d10dac7b7a6434a7e62d7db20297d809b91fb613e2218bb9ebd not found: ID does not exist" containerID="64d919052c0d9d10dac7b7a6434a7e62d7db20297d809b91fb613e2218bb9ebd"
Nov 22 08:32:26 crc kubenswrapper[4853]: I1122 08:32:26.815991 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"64d919052c0d9d10dac7b7a6434a7e62d7db20297d809b91fb613e2218bb9ebd"} err="failed to get container status \"64d919052c0d9d10dac7b7a6434a7e62d7db20297d809b91fb613e2218bb9ebd\": rpc error: code = NotFound desc = could not find container \"64d919052c0d9d10dac7b7a6434a7e62d7db20297d809b91fb613e2218bb9ebd\": container with ID starting with 64d919052c0d9d10dac7b7a6434a7e62d7db20297d809b91fb613e2218bb9ebd not found: ID does not exist"
Nov 22 08:32:26 crc kubenswrapper[4853]: I1122 08:32:26.816018 4853 scope.go:117] "RemoveContainer" containerID="0deb78d7c39babaeda91c85d805d1c1cb674347674b301f1fd21b3b1088afebe"
Nov 22 08:32:26 crc kubenswrapper[4853]: E1122 08:32:26.816530 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0deb78d7c39babaeda91c85d805d1c1cb674347674b301f1fd21b3b1088afebe\": container with ID starting with 0deb78d7c39babaeda91c85d805d1c1cb674347674b301f1fd21b3b1088afebe not found: ID does not exist" containerID="0deb78d7c39babaeda91c85d805d1c1cb674347674b301f1fd21b3b1088afebe"
Nov 22 08:32:26 crc kubenswrapper[4853]: I1122 08:32:26.816573 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0deb78d7c39babaeda91c85d805d1c1cb674347674b301f1fd21b3b1088afebe"} err="failed to get container status \"0deb78d7c39babaeda91c85d805d1c1cb674347674b301f1fd21b3b1088afebe\": rpc error: code = NotFound desc = could not find container \"0deb78d7c39babaeda91c85d805d1c1cb674347674b301f1fd21b3b1088afebe\": container with ID starting with 0deb78d7c39babaeda91c85d805d1c1cb674347674b301f1fd21b3b1088afebe not found: ID does not exist"
Nov 22 08:32:26 crc kubenswrapper[4853]: I1122 08:32:26.816627 4853 scope.go:117] "RemoveContainer" containerID="e38236d5dbe8cc44cc8bacaa4ca399447af7f0c5aa309b9c013170d1d44c68cc"
Nov 22 08:32:26 crc kubenswrapper[4853]: E1122 08:32:26.817026 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e38236d5dbe8cc44cc8bacaa4ca399447af7f0c5aa309b9c013170d1d44c68cc\": container with ID starting with e38236d5dbe8cc44cc8bacaa4ca399447af7f0c5aa309b9c013170d1d44c68cc not found: ID does not exist" containerID="e38236d5dbe8cc44cc8bacaa4ca399447af7f0c5aa309b9c013170d1d44c68cc"
Nov 22 08:32:26 crc kubenswrapper[4853]: I1122 08:32:26.817051 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e38236d5dbe8cc44cc8bacaa4ca399447af7f0c5aa309b9c013170d1d44c68cc"} err="failed to get container status \"e38236d5dbe8cc44cc8bacaa4ca399447af7f0c5aa309b9c013170d1d44c68cc\": rpc error: code = NotFound desc = could not find container \"e38236d5dbe8cc44cc8bacaa4ca399447af7f0c5aa309b9c013170d1d44c68cc\": container with ID starting with e38236d5dbe8cc44cc8bacaa4ca399447af7f0c5aa309b9c013170d1d44c68cc not found: ID does not exist"
Nov 22 08:32:27 crc kubenswrapper[4853]: I1122 08:32:27.762130 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="47b41e21-db8e-40a0-8d1a-5513faab7690" path="/var/lib/kubelet/pods/47b41e21-db8e-40a0-8d1a-5513faab7690/volumes"
Nov 22 08:32:51 crc kubenswrapper[4853]: I1122 08:32:51.387949 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-j6bhv"]
Nov 22 08:32:51 crc kubenswrapper[4853]: E1122 08:32:51.388953 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47b41e21-db8e-40a0-8d1a-5513faab7690" containerName="extract-content"
Nov 22 08:32:51 crc kubenswrapper[4853]: I1122 08:32:51.388970 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="47b41e21-db8e-40a0-8d1a-5513faab7690" containerName="extract-content"
Nov 22 08:32:51 crc kubenswrapper[4853]: E1122 08:32:51.388990 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47b41e21-db8e-40a0-8d1a-5513faab7690" containerName="extract-utilities"
Nov 22 08:32:51 crc kubenswrapper[4853]: I1122 08:32:51.388996 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="47b41e21-db8e-40a0-8d1a-5513faab7690" containerName="extract-utilities"
Nov 22 08:32:51 crc kubenswrapper[4853]: E1122 08:32:51.389013 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47b41e21-db8e-40a0-8d1a-5513faab7690" containerName="registry-server"
Nov 22 08:32:51 crc kubenswrapper[4853]: I1122 08:32:51.389021 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="47b41e21-db8e-40a0-8d1a-5513faab7690" containerName="registry-server"
Nov 22 08:32:51 crc kubenswrapper[4853]: I1122 08:32:51.389308 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="47b41e21-db8e-40a0-8d1a-5513faab7690" containerName="registry-server"
Nov 22 08:32:51 crc kubenswrapper[4853]: I1122 08:32:51.391146 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-j6bhv"
Nov 22 08:32:51 crc kubenswrapper[4853]: I1122 08:32:51.399600 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-j6bhv"]
Nov 22 08:32:51 crc kubenswrapper[4853]: I1122 08:32:51.509957 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/95cbec14-095a-4f81-a786-db7c2215a57e-catalog-content\") pod \"certified-operators-j6bhv\" (UID: \"95cbec14-095a-4f81-a786-db7c2215a57e\") " pod="openshift-marketplace/certified-operators-j6bhv"
Nov 22 08:32:51 crc kubenswrapper[4853]: I1122 08:32:51.510029 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/95cbec14-095a-4f81-a786-db7c2215a57e-utilities\") pod \"certified-operators-j6bhv\" (UID: \"95cbec14-095a-4f81-a786-db7c2215a57e\") " pod="openshift-marketplace/certified-operators-j6bhv"
Nov 22 08:32:51 crc kubenswrapper[4853]: I1122 08:32:51.510130 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ccgvd\" (UniqueName: \"kubernetes.io/projected/95cbec14-095a-4f81-a786-db7c2215a57e-kube-api-access-ccgvd\") pod \"certified-operators-j6bhv\" (UID: \"95cbec14-095a-4f81-a786-db7c2215a57e\") " pod="openshift-marketplace/certified-operators-j6bhv"
Nov 22 08:32:51 crc kubenswrapper[4853]: I1122 08:32:51.613234 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/95cbec14-095a-4f81-a786-db7c2215a57e-catalog-content\") pod \"certified-operators-j6bhv\" (UID: \"95cbec14-095a-4f81-a786-db7c2215a57e\") " pod="openshift-marketplace/certified-operators-j6bhv"
Nov 22 08:32:51 crc kubenswrapper[4853]: I1122 08:32:51.613321 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/95cbec14-095a-4f81-a786-db7c2215a57e-utilities\") pod \"certified-operators-j6bhv\" (UID: \"95cbec14-095a-4f81-a786-db7c2215a57e\") " pod="openshift-marketplace/certified-operators-j6bhv"
Nov 22 08:32:51 crc kubenswrapper[4853]: I1122 08:32:51.613461 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ccgvd\" (UniqueName: \"kubernetes.io/projected/95cbec14-095a-4f81-a786-db7c2215a57e-kube-api-access-ccgvd\") pod \"certified-operators-j6bhv\" (UID: \"95cbec14-095a-4f81-a786-db7c2215a57e\") " pod="openshift-marketplace/certified-operators-j6bhv"
Nov 22 08:32:51 crc kubenswrapper[4853]: I1122 08:32:51.614069 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/95cbec14-095a-4f81-a786-db7c2215a57e-catalog-content\") pod \"certified-operators-j6bhv\" (UID: \"95cbec14-095a-4f81-a786-db7c2215a57e\") " pod="openshift-marketplace/certified-operators-j6bhv"
Nov 22 08:32:51 crc kubenswrapper[4853]: I1122 08:32:51.614139 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/95cbec14-095a-4f81-a786-db7c2215a57e-utilities\") pod \"certified-operators-j6bhv\" (UID: \"95cbec14-095a-4f81-a786-db7c2215a57e\") " pod="openshift-marketplace/certified-operators-j6bhv"
Nov 22 08:32:52 crc kubenswrapper[4853]: I1122 08:32:52.063384 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ccgvd\" (UniqueName: \"kubernetes.io/projected/95cbec14-095a-4f81-a786-db7c2215a57e-kube-api-access-ccgvd\") pod \"certified-operators-j6bhv\" (UID: \"95cbec14-095a-4f81-a786-db7c2215a57e\") " pod="openshift-marketplace/certified-operators-j6bhv"
Nov 22 08:32:52 crc kubenswrapper[4853]: I1122 08:32:52.320587 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-j6bhv"
Nov 22 08:32:52 crc kubenswrapper[4853]: I1122 08:32:52.866058 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-j6bhv"]
Nov 22 08:32:53 crc kubenswrapper[4853]: I1122 08:32:53.031473 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j6bhv" event={"ID":"95cbec14-095a-4f81-a786-db7c2215a57e","Type":"ContainerStarted","Data":"2789d3e6b52158d3e6753fec093c0e02e0d30e934b3db894b5af9005c6aeb4e4"}
Nov 22 08:32:54 crc kubenswrapper[4853]: I1122 08:32:54.047079 4853 generic.go:334] "Generic (PLEG): container finished" podID="95cbec14-095a-4f81-a786-db7c2215a57e" containerID="bb152e42cd172d7dcafdde197b0284fcef34d6316befec79a79d97c08a7e51cb" exitCode=0
Nov 22 08:32:54 crc kubenswrapper[4853]: I1122 08:32:54.047196 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j6bhv" event={"ID":"95cbec14-095a-4f81-a786-db7c2215a57e","Type":"ContainerDied","Data":"bb152e42cd172d7dcafdde197b0284fcef34d6316befec79a79d97c08a7e51cb"}
Nov 22 08:32:54 crc kubenswrapper[4853]: I1122 08:32:54.052722 4853 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Nov 22 08:32:55 crc kubenswrapper[4853]: I1122 08:32:55.063309 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j6bhv" event={"ID":"95cbec14-095a-4f81-a786-db7c2215a57e","Type":"ContainerStarted","Data":"d931897f80b4b27fd7f00c68000c492d551562e6b3109b390c4b1da8ecf39cec"}
Nov 22 08:32:57 crc kubenswrapper[4853]: I1122 08:32:57.088648 4853 generic.go:334] "Generic (PLEG): container finished" podID="95cbec14-095a-4f81-a786-db7c2215a57e" containerID="d931897f80b4b27fd7f00c68000c492d551562e6b3109b390c4b1da8ecf39cec" exitCode=0
Nov 22 08:32:57 crc kubenswrapper[4853]: I1122 08:32:57.088790 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j6bhv" event={"ID":"95cbec14-095a-4f81-a786-db7c2215a57e","Type":"ContainerDied","Data":"d931897f80b4b27fd7f00c68000c492d551562e6b3109b390c4b1da8ecf39cec"}
Nov 22 08:32:58 crc kubenswrapper[4853]: I1122 08:32:58.102952 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j6bhv" event={"ID":"95cbec14-095a-4f81-a786-db7c2215a57e","Type":"ContainerStarted","Data":"4de0eea71b0d8176b88a63eb6ecb5ca7236c798740aa0558db047d088824b553"}
Nov 22 08:32:58 crc kubenswrapper[4853]: I1122 08:32:58.123184 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-j6bhv" podStartSLOduration=3.450575471 podStartE2EDuration="7.123161043s" podCreationTimestamp="2025-11-22 08:32:51 +0000 UTC" firstStartedPulling="2025-11-22 08:32:54.052034629 +0000 UTC m=+4972.892657285" lastFinishedPulling="2025-11-22 08:32:57.724620231 +0000 UTC m=+4976.565242857" observedRunningTime="2025-11-22 08:32:58.120394698 +0000 UTC m=+4976.961017334" watchObservedRunningTime="2025-11-22 08:32:58.123161043 +0000 UTC m=+4976.963783689"
Nov 22 08:32:58 crc kubenswrapper[4853]: I1122 08:32:58.919381 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-qxl44"]
Nov 22 08:32:58 crc kubenswrapper[4853]: I1122 08:32:58.923135 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qxl44"
Nov 22 08:32:58 crc kubenswrapper[4853]: I1122 08:32:58.940322 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qxl44"]
Nov 22 08:32:59 crc kubenswrapper[4853]: I1122 08:32:59.101229 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf945bbb-417d-4a70-b06f-569e71a5b391-catalog-content\") pod \"community-operators-qxl44\" (UID: \"cf945bbb-417d-4a70-b06f-569e71a5b391\") " pod="openshift-marketplace/community-operators-qxl44"
Nov 22 08:32:59 crc kubenswrapper[4853]: I1122 08:32:59.101295 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dfjck\" (UniqueName: \"kubernetes.io/projected/cf945bbb-417d-4a70-b06f-569e71a5b391-kube-api-access-dfjck\") pod \"community-operators-qxl44\" (UID: \"cf945bbb-417d-4a70-b06f-569e71a5b391\") " pod="openshift-marketplace/community-operators-qxl44"
Nov 22 08:32:59 crc kubenswrapper[4853]: I1122 08:32:59.101331 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf945bbb-417d-4a70-b06f-569e71a5b391-utilities\") pod \"community-operators-qxl44\" (UID: \"cf945bbb-417d-4a70-b06f-569e71a5b391\") " pod="openshift-marketplace/community-operators-qxl44"
Nov 22 08:32:59 crc kubenswrapper[4853]: I1122 08:32:59.203966 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf945bbb-417d-4a70-b06f-569e71a5b391-catalog-content\") pod \"community-operators-qxl44\" (UID: \"cf945bbb-417d-4a70-b06f-569e71a5b391\") " pod="openshift-marketplace/community-operators-qxl44"
Nov 22 08:32:59 crc kubenswrapper[4853]: I1122 08:32:59.204056 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dfjck\" (UniqueName: \"kubernetes.io/projected/cf945bbb-417d-4a70-b06f-569e71a5b391-kube-api-access-dfjck\") pod \"community-operators-qxl44\" (UID: \"cf945bbb-417d-4a70-b06f-569e71a5b391\") " pod="openshift-marketplace/community-operators-qxl44"
Nov 22 08:32:59 crc kubenswrapper[4853]: I1122 08:32:59.204110 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf945bbb-417d-4a70-b06f-569e71a5b391-utilities\") pod \"community-operators-qxl44\" (UID: \"cf945bbb-417d-4a70-b06f-569e71a5b391\") " pod="openshift-marketplace/community-operators-qxl44"
Nov 22 08:32:59 crc kubenswrapper[4853]: I1122 08:32:59.204820 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf945bbb-417d-4a70-b06f-569e71a5b391-utilities\") pod \"community-operators-qxl44\" (UID: \"cf945bbb-417d-4a70-b06f-569e71a5b391\") " pod="openshift-marketplace/community-operators-qxl44"
Nov 22 08:32:59 crc kubenswrapper[4853]: I1122 08:32:59.205088 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf945bbb-417d-4a70-b06f-569e71a5b391-catalog-content\") pod \"community-operators-qxl44\" (UID: \"cf945bbb-417d-4a70-b06f-569e71a5b391\") " pod="openshift-marketplace/community-operators-qxl44" Nov 22 08:32:59 crc kubenswrapper[4853]: I1122 08:32:59.227424 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dfjck\" (UniqueName: \"kubernetes.io/projected/cf945bbb-417d-4a70-b06f-569e71a5b391-kube-api-access-dfjck\") pod \"community-operators-qxl44\" (UID: \"cf945bbb-417d-4a70-b06f-569e71a5b391\") " pod="openshift-marketplace/community-operators-qxl44" Nov 22 08:32:59 crc kubenswrapper[4853]: I1122 08:32:59.260230 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qxl44" Nov 22 08:32:59 crc kubenswrapper[4853]: I1122 08:32:59.790579 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qxl44"] Nov 22 08:32:59 crc kubenswrapper[4853]: W1122 08:32:59.800037 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcf945bbb_417d_4a70_b06f_569e71a5b391.slice/crio-b85ad9bd02d11c803764bed5f2f53ebcd494d16f4a590572a5848faaec531548 WatchSource:0}: Error finding container b85ad9bd02d11c803764bed5f2f53ebcd494d16f4a590572a5848faaec531548: Status 404 returned error can't find the container with id b85ad9bd02d11c803764bed5f2f53ebcd494d16f4a590572a5848faaec531548 Nov 22 08:33:00 crc kubenswrapper[4853]: I1122 08:33:00.129317 4853 generic.go:334] "Generic (PLEG): container finished" podID="cf945bbb-417d-4a70-b06f-569e71a5b391" containerID="8ed47ee75a7f476f28ec63b662bbd8f12abd6dfd8ec20f60eb688893784a7f77" exitCode=0 Nov 22 08:33:00 crc kubenswrapper[4853]: I1122 08:33:00.129394 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qxl44" event={"ID":"cf945bbb-417d-4a70-b06f-569e71a5b391","Type":"ContainerDied","Data":"8ed47ee75a7f476f28ec63b662bbd8f12abd6dfd8ec20f60eb688893784a7f77"} Nov 22 08:33:00 crc kubenswrapper[4853]: I1122 08:33:00.129617 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qxl44" event={"ID":"cf945bbb-417d-4a70-b06f-569e71a5b391","Type":"ContainerStarted","Data":"b85ad9bd02d11c803764bed5f2f53ebcd494d16f4a590572a5848faaec531548"} Nov 22 08:33:02 crc kubenswrapper[4853]: I1122 08:33:02.155578 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qxl44" event={"ID":"cf945bbb-417d-4a70-b06f-569e71a5b391","Type":"ContainerStarted","Data":"60ef4acce3936b008e451cb72a2e231e4db718c6ed18c9fef44af02187a74d6b"} Nov 22 08:33:02 crc kubenswrapper[4853]: I1122 08:33:02.321018 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-j6bhv" Nov 22 08:33:02 crc kubenswrapper[4853]: I1122 08:33:02.321069 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-j6bhv" Nov 22 08:33:02 crc kubenswrapper[4853]: I1122 08:33:02.381064 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-j6bhv" Nov 22 08:33:03 crc kubenswrapper[4853]: I1122 08:33:03.895877 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-j6bhv" Nov 22 
08:33:04 crc kubenswrapper[4853]: I1122 08:33:04.551517 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-j6bhv"] Nov 22 08:33:05 crc kubenswrapper[4853]: I1122 08:33:05.187557 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-j6bhv" podUID="95cbec14-095a-4f81-a786-db7c2215a57e" containerName="registry-server" containerID="cri-o://4de0eea71b0d8176b88a63eb6ecb5ca7236c798740aa0558db047d088824b553" gracePeriod=2 Nov 22 08:33:06 crc kubenswrapper[4853]: I1122 08:33:06.203905 4853 generic.go:334] "Generic (PLEG): container finished" podID="cf945bbb-417d-4a70-b06f-569e71a5b391" containerID="60ef4acce3936b008e451cb72a2e231e4db718c6ed18c9fef44af02187a74d6b" exitCode=0 Nov 22 08:33:06 crc kubenswrapper[4853]: I1122 08:33:06.203989 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qxl44" event={"ID":"cf945bbb-417d-4a70-b06f-569e71a5b391","Type":"ContainerDied","Data":"60ef4acce3936b008e451cb72a2e231e4db718c6ed18c9fef44af02187a74d6b"} Nov 22 08:33:06 crc kubenswrapper[4853]: I1122 08:33:06.850361 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-j6bhv" Nov 22 08:33:06 crc kubenswrapper[4853]: I1122 08:33:06.922488 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ccgvd\" (UniqueName: \"kubernetes.io/projected/95cbec14-095a-4f81-a786-db7c2215a57e-kube-api-access-ccgvd\") pod \"95cbec14-095a-4f81-a786-db7c2215a57e\" (UID: \"95cbec14-095a-4f81-a786-db7c2215a57e\") " Nov 22 08:33:06 crc kubenswrapper[4853]: I1122 08:33:06.922687 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/95cbec14-095a-4f81-a786-db7c2215a57e-catalog-content\") pod \"95cbec14-095a-4f81-a786-db7c2215a57e\" (UID: \"95cbec14-095a-4f81-a786-db7c2215a57e\") " Nov 22 08:33:06 crc kubenswrapper[4853]: I1122 08:33:06.922758 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/95cbec14-095a-4f81-a786-db7c2215a57e-utilities\") pod \"95cbec14-095a-4f81-a786-db7c2215a57e\" (UID: \"95cbec14-095a-4f81-a786-db7c2215a57e\") " Nov 22 08:33:06 crc kubenswrapper[4853]: I1122 08:33:06.923631 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/95cbec14-095a-4f81-a786-db7c2215a57e-utilities" (OuterVolumeSpecName: "utilities") pod "95cbec14-095a-4f81-a786-db7c2215a57e" (UID: "95cbec14-095a-4f81-a786-db7c2215a57e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 08:33:06 crc kubenswrapper[4853]: I1122 08:33:06.925187 4853 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/95cbec14-095a-4f81-a786-db7c2215a57e-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 08:33:06 crc kubenswrapper[4853]: I1122 08:33:06.932150 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95cbec14-095a-4f81-a786-db7c2215a57e-kube-api-access-ccgvd" (OuterVolumeSpecName: "kube-api-access-ccgvd") pod "95cbec14-095a-4f81-a786-db7c2215a57e" (UID: "95cbec14-095a-4f81-a786-db7c2215a57e"). InnerVolumeSpecName "kube-api-access-ccgvd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 08:33:06 crc kubenswrapper[4853]: I1122 08:33:06.977212 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/95cbec14-095a-4f81-a786-db7c2215a57e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "95cbec14-095a-4f81-a786-db7c2215a57e" (UID: "95cbec14-095a-4f81-a786-db7c2215a57e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 08:33:07 crc kubenswrapper[4853]: I1122 08:33:07.027438 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ccgvd\" (UniqueName: \"kubernetes.io/projected/95cbec14-095a-4f81-a786-db7c2215a57e-kube-api-access-ccgvd\") on node \"crc\" DevicePath \"\"" Nov 22 08:33:07 crc kubenswrapper[4853]: I1122 08:33:07.027480 4853 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/95cbec14-095a-4f81-a786-db7c2215a57e-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 08:33:07 crc kubenswrapper[4853]: I1122 08:33:07.223952 4853 generic.go:334] "Generic (PLEG): container finished" podID="95cbec14-095a-4f81-a786-db7c2215a57e" containerID="4de0eea71b0d8176b88a63eb6ecb5ca7236c798740aa0558db047d088824b553" exitCode=0 Nov 22 08:33:07 crc kubenswrapper[4853]: I1122 08:33:07.224041 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-j6bhv" Nov 22 08:33:07 crc kubenswrapper[4853]: I1122 08:33:07.224078 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j6bhv" event={"ID":"95cbec14-095a-4f81-a786-db7c2215a57e","Type":"ContainerDied","Data":"4de0eea71b0d8176b88a63eb6ecb5ca7236c798740aa0558db047d088824b553"} Nov 22 08:33:07 crc kubenswrapper[4853]: I1122 08:33:07.224150 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j6bhv" event={"ID":"95cbec14-095a-4f81-a786-db7c2215a57e","Type":"ContainerDied","Data":"2789d3e6b52158d3e6753fec093c0e02e0d30e934b3db894b5af9005c6aeb4e4"} Nov 22 08:33:07 crc kubenswrapper[4853]: I1122 08:33:07.224186 4853 scope.go:117] "RemoveContainer" containerID="4de0eea71b0d8176b88a63eb6ecb5ca7236c798740aa0558db047d088824b553" Nov 22 08:33:07 crc kubenswrapper[4853]: I1122 08:33:07.229501 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qxl44" event={"ID":"cf945bbb-417d-4a70-b06f-569e71a5b391","Type":"ContainerStarted","Data":"c08594662ee98d39150895d61eb28d9109af3c828db79623ef4a34a99013257c"} Nov 22 08:33:07 crc kubenswrapper[4853]: I1122 08:33:07.257762 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-qxl44" podStartSLOduration=2.762989271 podStartE2EDuration="9.257737741s" podCreationTimestamp="2025-11-22 08:32:58 +0000 UTC" firstStartedPulling="2025-11-22 08:33:00.131206766 +0000 UTC m=+4978.971829392" lastFinishedPulling="2025-11-22 08:33:06.625955236 +0000 UTC m=+4985.466577862" observedRunningTime="2025-11-22 08:33:07.251871463 +0000 UTC m=+4986.092494099" watchObservedRunningTime="2025-11-22 08:33:07.257737741 +0000 UTC m=+4986.098360367" Nov 22 08:33:07 crc kubenswrapper[4853]: I1122 08:33:07.266617 4853 scope.go:117] "RemoveContainer" containerID="d931897f80b4b27fd7f00c68000c492d551562e6b3109b390c4b1da8ecf39cec" Nov 22 08:33:07 crc kubenswrapper[4853]: I1122 08:33:07.275294 4853 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-j6bhv"] Nov 22 08:33:07 crc kubenswrapper[4853]: I1122 08:33:07.285164 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-j6bhv"] Nov 22 08:33:07 crc kubenswrapper[4853]: I1122 08:33:07.293806 4853 scope.go:117] "RemoveContainer" containerID="bb152e42cd172d7dcafdde197b0284fcef34d6316befec79a79d97c08a7e51cb" Nov 22 08:33:07 crc kubenswrapper[4853]: I1122 08:33:07.353703 4853 scope.go:117] "RemoveContainer" containerID="4de0eea71b0d8176b88a63eb6ecb5ca7236c798740aa0558db047d088824b553" Nov 22 08:33:07 crc kubenswrapper[4853]: E1122 08:33:07.354324 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4de0eea71b0d8176b88a63eb6ecb5ca7236c798740aa0558db047d088824b553\": container with ID starting with 4de0eea71b0d8176b88a63eb6ecb5ca7236c798740aa0558db047d088824b553 not found: ID does not exist" containerID="4de0eea71b0d8176b88a63eb6ecb5ca7236c798740aa0558db047d088824b553" Nov 22 08:33:07 crc kubenswrapper[4853]: I1122 08:33:07.354399 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4de0eea71b0d8176b88a63eb6ecb5ca7236c798740aa0558db047d088824b553"} err="failed to get container status \"4de0eea71b0d8176b88a63eb6ecb5ca7236c798740aa0558db047d088824b553\": rpc error: code = NotFound desc = could not find container \"4de0eea71b0d8176b88a63eb6ecb5ca7236c798740aa0558db047d088824b553\": container with ID starting with 4de0eea71b0d8176b88a63eb6ecb5ca7236c798740aa0558db047d088824b553 not found: ID does not exist" Nov 22 08:33:07 crc kubenswrapper[4853]: I1122 08:33:07.354450 4853 scope.go:117] "RemoveContainer" containerID="d931897f80b4b27fd7f00c68000c492d551562e6b3109b390c4b1da8ecf39cec" Nov 22 08:33:07 crc kubenswrapper[4853]: E1122 08:33:07.354994 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d931897f80b4b27fd7f00c68000c492d551562e6b3109b390c4b1da8ecf39cec\": container with ID starting with d931897f80b4b27fd7f00c68000c492d551562e6b3109b390c4b1da8ecf39cec not found: ID does not exist" containerID="d931897f80b4b27fd7f00c68000c492d551562e6b3109b390c4b1da8ecf39cec" Nov 22 08:33:07 crc kubenswrapper[4853]: I1122 08:33:07.355032 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d931897f80b4b27fd7f00c68000c492d551562e6b3109b390c4b1da8ecf39cec"} err="failed to get container status \"d931897f80b4b27fd7f00c68000c492d551562e6b3109b390c4b1da8ecf39cec\": rpc error: code = NotFound desc = could not find container \"d931897f80b4b27fd7f00c68000c492d551562e6b3109b390c4b1da8ecf39cec\": container with ID starting with d931897f80b4b27fd7f00c68000c492d551562e6b3109b390c4b1da8ecf39cec not found: ID does not exist" Nov 22 08:33:07 crc kubenswrapper[4853]: I1122 08:33:07.355065 4853 scope.go:117] "RemoveContainer" containerID="bb152e42cd172d7dcafdde197b0284fcef34d6316befec79a79d97c08a7e51cb" Nov 22 08:33:07 crc kubenswrapper[4853]: E1122 08:33:07.355310 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bb152e42cd172d7dcafdde197b0284fcef34d6316befec79a79d97c08a7e51cb\": container with ID starting with bb152e42cd172d7dcafdde197b0284fcef34d6316befec79a79d97c08a7e51cb not found: ID does not exist" 
containerID="bb152e42cd172d7dcafdde197b0284fcef34d6316befec79a79d97c08a7e51cb" Nov 22 08:33:07 crc kubenswrapper[4853]: I1122 08:33:07.355330 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bb152e42cd172d7dcafdde197b0284fcef34d6316befec79a79d97c08a7e51cb"} err="failed to get container status \"bb152e42cd172d7dcafdde197b0284fcef34d6316befec79a79d97c08a7e51cb\": rpc error: code = NotFound desc = could not find container \"bb152e42cd172d7dcafdde197b0284fcef34d6316befec79a79d97c08a7e51cb\": container with ID starting with bb152e42cd172d7dcafdde197b0284fcef34d6316befec79a79d97c08a7e51cb not found: ID does not exist" Nov 22 08:33:07 crc kubenswrapper[4853]: I1122 08:33:07.762033 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="95cbec14-095a-4f81-a786-db7c2215a57e" path="/var/lib/kubelet/pods/95cbec14-095a-4f81-a786-db7c2215a57e/volumes" Nov 22 08:33:09 crc kubenswrapper[4853]: I1122 08:33:09.260895 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-qxl44" Nov 22 08:33:09 crc kubenswrapper[4853]: I1122 08:33:09.260962 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-qxl44" Nov 22 08:33:09 crc kubenswrapper[4853]: I1122 08:33:09.310685 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-qxl44" Nov 22 08:33:19 crc kubenswrapper[4853]: I1122 08:33:19.320385 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-qxl44" Nov 22 08:33:19 crc kubenswrapper[4853]: I1122 08:33:19.374607 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qxl44"] Nov 22 08:33:19 crc kubenswrapper[4853]: I1122 08:33:19.374858 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-qxl44" podUID="cf945bbb-417d-4a70-b06f-569e71a5b391" containerName="registry-server" containerID="cri-o://c08594662ee98d39150895d61eb28d9109af3c828db79623ef4a34a99013257c" gracePeriod=2 Nov 22 08:33:19 crc kubenswrapper[4853]: I1122 08:33:19.914785 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-qxl44" Nov 22 08:33:20 crc kubenswrapper[4853]: I1122 08:33:20.038997 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dfjck\" (UniqueName: \"kubernetes.io/projected/cf945bbb-417d-4a70-b06f-569e71a5b391-kube-api-access-dfjck\") pod \"cf945bbb-417d-4a70-b06f-569e71a5b391\" (UID: \"cf945bbb-417d-4a70-b06f-569e71a5b391\") " Nov 22 08:33:20 crc kubenswrapper[4853]: I1122 08:33:20.039089 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf945bbb-417d-4a70-b06f-569e71a5b391-utilities\") pod \"cf945bbb-417d-4a70-b06f-569e71a5b391\" (UID: \"cf945bbb-417d-4a70-b06f-569e71a5b391\") " Nov 22 08:33:20 crc kubenswrapper[4853]: I1122 08:33:20.039421 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf945bbb-417d-4a70-b06f-569e71a5b391-catalog-content\") pod \"cf945bbb-417d-4a70-b06f-569e71a5b391\" (UID: \"cf945bbb-417d-4a70-b06f-569e71a5b391\") " Nov 22 08:33:20 crc kubenswrapper[4853]: I1122 08:33:20.040043 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cf945bbb-417d-4a70-b06f-569e71a5b391-utilities" (OuterVolumeSpecName: "utilities") pod "cf945bbb-417d-4a70-b06f-569e71a5b391" (UID: "cf945bbb-417d-4a70-b06f-569e71a5b391"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 08:33:20 crc kubenswrapper[4853]: I1122 08:33:20.040372 4853 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf945bbb-417d-4a70-b06f-569e71a5b391-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 08:33:20 crc kubenswrapper[4853]: I1122 08:33:20.049195 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf945bbb-417d-4a70-b06f-569e71a5b391-kube-api-access-dfjck" (OuterVolumeSpecName: "kube-api-access-dfjck") pod "cf945bbb-417d-4a70-b06f-569e71a5b391" (UID: "cf945bbb-417d-4a70-b06f-569e71a5b391"). InnerVolumeSpecName "kube-api-access-dfjck". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 08:33:20 crc kubenswrapper[4853]: I1122 08:33:20.105846 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cf945bbb-417d-4a70-b06f-569e71a5b391-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cf945bbb-417d-4a70-b06f-569e71a5b391" (UID: "cf945bbb-417d-4a70-b06f-569e71a5b391"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 08:33:20 crc kubenswrapper[4853]: I1122 08:33:20.144563 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dfjck\" (UniqueName: \"kubernetes.io/projected/cf945bbb-417d-4a70-b06f-569e71a5b391-kube-api-access-dfjck\") on node \"crc\" DevicePath \"\"" Nov 22 08:33:20 crc kubenswrapper[4853]: I1122 08:33:20.144685 4853 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf945bbb-417d-4a70-b06f-569e71a5b391-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 08:33:20 crc kubenswrapper[4853]: I1122 08:33:20.389468 4853 generic.go:334] "Generic (PLEG): container finished" podID="cf945bbb-417d-4a70-b06f-569e71a5b391" containerID="c08594662ee98d39150895d61eb28d9109af3c828db79623ef4a34a99013257c" exitCode=0 Nov 22 08:33:20 crc kubenswrapper[4853]: I1122 08:33:20.389517 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qxl44" event={"ID":"cf945bbb-417d-4a70-b06f-569e71a5b391","Type":"ContainerDied","Data":"c08594662ee98d39150895d61eb28d9109af3c828db79623ef4a34a99013257c"} Nov 22 08:33:20 crc kubenswrapper[4853]: I1122 08:33:20.389859 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qxl44" event={"ID":"cf945bbb-417d-4a70-b06f-569e71a5b391","Type":"ContainerDied","Data":"b85ad9bd02d11c803764bed5f2f53ebcd494d16f4a590572a5848faaec531548"} Nov 22 08:33:20 crc kubenswrapper[4853]: I1122 08:33:20.389600 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qxl44" Nov 22 08:33:20 crc kubenswrapper[4853]: I1122 08:33:20.389889 4853 scope.go:117] "RemoveContainer" containerID="c08594662ee98d39150895d61eb28d9109af3c828db79623ef4a34a99013257c" Nov 22 08:33:20 crc kubenswrapper[4853]: I1122 08:33:20.421113 4853 scope.go:117] "RemoveContainer" containerID="60ef4acce3936b008e451cb72a2e231e4db718c6ed18c9fef44af02187a74d6b" Nov 22 08:33:20 crc kubenswrapper[4853]: I1122 08:33:20.426375 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qxl44"] Nov 22 08:33:20 crc kubenswrapper[4853]: I1122 08:33:20.437065 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-qxl44"] Nov 22 08:33:20 crc kubenswrapper[4853]: I1122 08:33:20.445717 4853 scope.go:117] "RemoveContainer" containerID="8ed47ee75a7f476f28ec63b662bbd8f12abd6dfd8ec20f60eb688893784a7f77" Nov 22 08:33:20 crc kubenswrapper[4853]: I1122 08:33:20.502813 4853 scope.go:117] "RemoveContainer" containerID="c08594662ee98d39150895d61eb28d9109af3c828db79623ef4a34a99013257c" Nov 22 08:33:20 crc kubenswrapper[4853]: E1122 08:33:20.503382 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c08594662ee98d39150895d61eb28d9109af3c828db79623ef4a34a99013257c\": container with ID starting with c08594662ee98d39150895d61eb28d9109af3c828db79623ef4a34a99013257c not found: ID does not exist" containerID="c08594662ee98d39150895d61eb28d9109af3c828db79623ef4a34a99013257c" Nov 22 08:33:20 crc kubenswrapper[4853]: I1122 08:33:20.503420 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c08594662ee98d39150895d61eb28d9109af3c828db79623ef4a34a99013257c"} err="failed to get container status 
\"c08594662ee98d39150895d61eb28d9109af3c828db79623ef4a34a99013257c\": rpc error: code = NotFound desc = could not find container \"c08594662ee98d39150895d61eb28d9109af3c828db79623ef4a34a99013257c\": container with ID starting with c08594662ee98d39150895d61eb28d9109af3c828db79623ef4a34a99013257c not found: ID does not exist" Nov 22 08:33:20 crc kubenswrapper[4853]: I1122 08:33:20.503446 4853 scope.go:117] "RemoveContainer" containerID="60ef4acce3936b008e451cb72a2e231e4db718c6ed18c9fef44af02187a74d6b" Nov 22 08:33:20 crc kubenswrapper[4853]: E1122 08:33:20.503777 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"60ef4acce3936b008e451cb72a2e231e4db718c6ed18c9fef44af02187a74d6b\": container with ID starting with 60ef4acce3936b008e451cb72a2e231e4db718c6ed18c9fef44af02187a74d6b not found: ID does not exist" containerID="60ef4acce3936b008e451cb72a2e231e4db718c6ed18c9fef44af02187a74d6b" Nov 22 08:33:20 crc kubenswrapper[4853]: I1122 08:33:20.503796 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"60ef4acce3936b008e451cb72a2e231e4db718c6ed18c9fef44af02187a74d6b"} err="failed to get container status \"60ef4acce3936b008e451cb72a2e231e4db718c6ed18c9fef44af02187a74d6b\": rpc error: code = NotFound desc = could not find container \"60ef4acce3936b008e451cb72a2e231e4db718c6ed18c9fef44af02187a74d6b\": container with ID starting with 60ef4acce3936b008e451cb72a2e231e4db718c6ed18c9fef44af02187a74d6b not found: ID does not exist" Nov 22 08:33:20 crc kubenswrapper[4853]: I1122 08:33:20.503811 4853 scope.go:117] "RemoveContainer" containerID="8ed47ee75a7f476f28ec63b662bbd8f12abd6dfd8ec20f60eb688893784a7f77" Nov 22 08:33:20 crc kubenswrapper[4853]: E1122 08:33:20.504132 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8ed47ee75a7f476f28ec63b662bbd8f12abd6dfd8ec20f60eb688893784a7f77\": container with ID starting with 8ed47ee75a7f476f28ec63b662bbd8f12abd6dfd8ec20f60eb688893784a7f77 not found: ID does not exist" containerID="8ed47ee75a7f476f28ec63b662bbd8f12abd6dfd8ec20f60eb688893784a7f77" Nov 22 08:33:20 crc kubenswrapper[4853]: I1122 08:33:20.504153 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8ed47ee75a7f476f28ec63b662bbd8f12abd6dfd8ec20f60eb688893784a7f77"} err="failed to get container status \"8ed47ee75a7f476f28ec63b662bbd8f12abd6dfd8ec20f60eb688893784a7f77\": rpc error: code = NotFound desc = could not find container \"8ed47ee75a7f476f28ec63b662bbd8f12abd6dfd8ec20f60eb688893784a7f77\": container with ID starting with 8ed47ee75a7f476f28ec63b662bbd8f12abd6dfd8ec20f60eb688893784a7f77 not found: ID does not exist" Nov 22 08:33:21 crc kubenswrapper[4853]: I1122 08:33:21.763907 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cf945bbb-417d-4a70-b06f-569e71a5b391" path="/var/lib/kubelet/pods/cf945bbb-417d-4a70-b06f-569e71a5b391/volumes" Nov 22 08:34:14 crc kubenswrapper[4853]: E1122 08:34:14.913647 4853 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.251:60696->38.102.83.251:37237: write tcp 38.102.83.251:60696->38.102.83.251:37237: write: broken pipe Nov 22 08:34:31 crc kubenswrapper[4853]: I1122 08:34:31.296922 4853 patch_prober.go:28] interesting pod/machine-config-daemon-fflvd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure 
output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 08:34:31 crc kubenswrapper[4853]: I1122 08:34:31.298363 4853 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 08:35:01 crc kubenswrapper[4853]: I1122 08:35:01.297438 4853 patch_prober.go:28] interesting pod/machine-config-daemon-fflvd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 08:35:01 crc kubenswrapper[4853]: I1122 08:35:01.298257 4853 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 08:35:31 crc kubenswrapper[4853]: I1122 08:35:31.297676 4853 patch_prober.go:28] interesting pod/machine-config-daemon-fflvd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 08:35:31 crc kubenswrapper[4853]: I1122 08:35:31.298342 4853 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 08:35:31 crc kubenswrapper[4853]: I1122 08:35:31.298407 4853 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" Nov 22 08:35:31 crc kubenswrapper[4853]: I1122 08:35:31.299611 4853 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8c3be09df9d116cf9965d5b368358b5325d22a227e0546a23cbd8f67078e5f0d"} pod="openshift-machine-config-operator/machine-config-daemon-fflvd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 22 08:35:31 crc kubenswrapper[4853]: I1122 08:35:31.299698 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" containerName="machine-config-daemon" containerID="cri-o://8c3be09df9d116cf9965d5b368358b5325d22a227e0546a23cbd8f67078e5f0d" gracePeriod=600 Nov 22 08:35:31 crc kubenswrapper[4853]: I1122 08:35:31.988340 4853 generic.go:334] "Generic (PLEG): container finished" podID="476c875a-2b87-419a-8042-0ba059620fd8" containerID="8c3be09df9d116cf9965d5b368358b5325d22a227e0546a23cbd8f67078e5f0d" exitCode=0 Nov 22 08:35:31 crc kubenswrapper[4853]: I1122 08:35:31.988570 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" 
event={"ID":"476c875a-2b87-419a-8042-0ba059620fd8","Type":"ContainerDied","Data":"8c3be09df9d116cf9965d5b368358b5325d22a227e0546a23cbd8f67078e5f0d"} Nov 22 08:35:31 crc kubenswrapper[4853]: I1122 08:35:31.988868 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" event={"ID":"476c875a-2b87-419a-8042-0ba059620fd8","Type":"ContainerStarted","Data":"ac53dccac3e4e33ec75240cd8fb4b48a597324a88a76775452fb92dfc45a4092"} Nov 22 08:35:31 crc kubenswrapper[4853]: I1122 08:35:31.988899 4853 scope.go:117] "RemoveContainer" containerID="308a9f775315b75fdc2fc710a50cb8a83149656c872038e11489a5167064fc08" Nov 22 08:37:31 crc kubenswrapper[4853]: I1122 08:37:31.297603 4853 patch_prober.go:28] interesting pod/machine-config-daemon-fflvd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 08:37:31 crc kubenswrapper[4853]: I1122 08:37:31.298316 4853 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 08:38:01 crc kubenswrapper[4853]: I1122 08:38:01.297486 4853 patch_prober.go:28] interesting pod/machine-config-daemon-fflvd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 08:38:01 crc kubenswrapper[4853]: I1122 08:38:01.298147 4853 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 08:38:11 crc kubenswrapper[4853]: E1122 08:38:11.760732 4853 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 38.102.83.251:33442->38.102.83.251:37237: read tcp 38.102.83.251:33442->38.102.83.251:37237: read: connection reset by peer Nov 22 08:38:31 crc kubenswrapper[4853]: I1122 08:38:31.297559 4853 patch_prober.go:28] interesting pod/machine-config-daemon-fflvd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 08:38:31 crc kubenswrapper[4853]: I1122 08:38:31.298632 4853 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 08:38:31 crc kubenswrapper[4853]: I1122 08:38:31.298726 4853 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" Nov 22 08:38:31 crc kubenswrapper[4853]: I1122 08:38:31.300328 4853 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"ac53dccac3e4e33ec75240cd8fb4b48a597324a88a76775452fb92dfc45a4092"} pod="openshift-machine-config-operator/machine-config-daemon-fflvd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 22 08:38:31 crc kubenswrapper[4853]: I1122 08:38:31.300444 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" containerName="machine-config-daemon" containerID="cri-o://ac53dccac3e4e33ec75240cd8fb4b48a597324a88a76775452fb92dfc45a4092" gracePeriod=600 Nov 22 08:38:31 crc kubenswrapper[4853]: E1122 08:38:31.429958 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" Nov 22 08:38:32 crc kubenswrapper[4853]: I1122 08:38:32.127544 4853 generic.go:334] "Generic (PLEG): container finished" podID="476c875a-2b87-419a-8042-0ba059620fd8" containerID="ac53dccac3e4e33ec75240cd8fb4b48a597324a88a76775452fb92dfc45a4092" exitCode=0 Nov 22 08:38:32 crc kubenswrapper[4853]: I1122 08:38:32.127615 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" event={"ID":"476c875a-2b87-419a-8042-0ba059620fd8","Type":"ContainerDied","Data":"ac53dccac3e4e33ec75240cd8fb4b48a597324a88a76775452fb92dfc45a4092"} Nov 22 08:38:32 crc kubenswrapper[4853]: I1122 08:38:32.128142 4853 scope.go:117] "RemoveContainer" containerID="8c3be09df9d116cf9965d5b368358b5325d22a227e0546a23cbd8f67078e5f0d" Nov 22 08:38:32 crc kubenswrapper[4853]: I1122 08:38:32.130797 4853 scope.go:117] "RemoveContainer" containerID="ac53dccac3e4e33ec75240cd8fb4b48a597324a88a76775452fb92dfc45a4092" Nov 22 08:38:32 crc kubenswrapper[4853]: E1122 08:38:32.131508 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" Nov 22 08:38:45 crc kubenswrapper[4853]: I1122 08:38:45.209786 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-j48nd"] Nov 22 08:38:45 crc kubenswrapper[4853]: E1122 08:38:45.211535 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95cbec14-095a-4f81-a786-db7c2215a57e" containerName="registry-server" Nov 22 08:38:45 crc kubenswrapper[4853]: I1122 08:38:45.211560 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="95cbec14-095a-4f81-a786-db7c2215a57e" containerName="registry-server" Nov 22 08:38:45 crc kubenswrapper[4853]: E1122 08:38:45.211574 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf945bbb-417d-4a70-b06f-569e71a5b391" containerName="extract-utilities" Nov 22 08:38:45 crc kubenswrapper[4853]: I1122 08:38:45.211583 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf945bbb-417d-4a70-b06f-569e71a5b391" 
containerName="extract-utilities" Nov 22 08:38:45 crc kubenswrapper[4853]: E1122 08:38:45.211608 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf945bbb-417d-4a70-b06f-569e71a5b391" containerName="extract-content" Nov 22 08:38:45 crc kubenswrapper[4853]: I1122 08:38:45.211617 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf945bbb-417d-4a70-b06f-569e71a5b391" containerName="extract-content" Nov 22 08:38:45 crc kubenswrapper[4853]: E1122 08:38:45.211665 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95cbec14-095a-4f81-a786-db7c2215a57e" containerName="extract-content" Nov 22 08:38:45 crc kubenswrapper[4853]: I1122 08:38:45.211674 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="95cbec14-095a-4f81-a786-db7c2215a57e" containerName="extract-content" Nov 22 08:38:45 crc kubenswrapper[4853]: E1122 08:38:45.211701 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95cbec14-095a-4f81-a786-db7c2215a57e" containerName="extract-utilities" Nov 22 08:38:45 crc kubenswrapper[4853]: I1122 08:38:45.211709 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="95cbec14-095a-4f81-a786-db7c2215a57e" containerName="extract-utilities" Nov 22 08:38:45 crc kubenswrapper[4853]: E1122 08:38:45.211719 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf945bbb-417d-4a70-b06f-569e71a5b391" containerName="registry-server" Nov 22 08:38:45 crc kubenswrapper[4853]: I1122 08:38:45.211727 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf945bbb-417d-4a70-b06f-569e71a5b391" containerName="registry-server" Nov 22 08:38:45 crc kubenswrapper[4853]: I1122 08:38:45.212113 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="95cbec14-095a-4f81-a786-db7c2215a57e" containerName="registry-server" Nov 22 08:38:45 crc kubenswrapper[4853]: I1122 08:38:45.212163 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf945bbb-417d-4a70-b06f-569e71a5b391" containerName="registry-server" Nov 22 08:38:45 crc kubenswrapper[4853]: I1122 08:38:45.215024 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j48nd" Nov 22 08:38:45 crc kubenswrapper[4853]: I1122 08:38:45.225588 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-j48nd"] Nov 22 08:38:45 crc kubenswrapper[4853]: I1122 08:38:45.293710 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9bt47\" (UniqueName: \"kubernetes.io/projected/f4123e7d-d877-4fb6-944b-612aec835ef7-kube-api-access-9bt47\") pod \"redhat-marketplace-j48nd\" (UID: \"f4123e7d-d877-4fb6-944b-612aec835ef7\") " pod="openshift-marketplace/redhat-marketplace-j48nd" Nov 22 08:38:45 crc kubenswrapper[4853]: I1122 08:38:45.293769 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4123e7d-d877-4fb6-944b-612aec835ef7-catalog-content\") pod \"redhat-marketplace-j48nd\" (UID: \"f4123e7d-d877-4fb6-944b-612aec835ef7\") " pod="openshift-marketplace/redhat-marketplace-j48nd" Nov 22 08:38:45 crc kubenswrapper[4853]: I1122 08:38:45.293834 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4123e7d-d877-4fb6-944b-612aec835ef7-utilities\") pod \"redhat-marketplace-j48nd\" (UID: \"f4123e7d-d877-4fb6-944b-612aec835ef7\") " pod="openshift-marketplace/redhat-marketplace-j48nd" Nov 22 08:38:45 crc kubenswrapper[4853]: I1122 08:38:45.395915 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4123e7d-d877-4fb6-944b-612aec835ef7-catalog-content\") pod \"redhat-marketplace-j48nd\" (UID: \"f4123e7d-d877-4fb6-944b-612aec835ef7\") " pod="openshift-marketplace/redhat-marketplace-j48nd" Nov 22 08:38:45 crc kubenswrapper[4853]: I1122 08:38:45.396301 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9bt47\" (UniqueName: \"kubernetes.io/projected/f4123e7d-d877-4fb6-944b-612aec835ef7-kube-api-access-9bt47\") pod \"redhat-marketplace-j48nd\" (UID: \"f4123e7d-d877-4fb6-944b-612aec835ef7\") " pod="openshift-marketplace/redhat-marketplace-j48nd" Nov 22 08:38:45 crc kubenswrapper[4853]: I1122 08:38:45.396383 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4123e7d-d877-4fb6-944b-612aec835ef7-catalog-content\") pod \"redhat-marketplace-j48nd\" (UID: \"f4123e7d-d877-4fb6-944b-612aec835ef7\") " pod="openshift-marketplace/redhat-marketplace-j48nd" Nov 22 08:38:45 crc kubenswrapper[4853]: I1122 08:38:45.396413 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4123e7d-d877-4fb6-944b-612aec835ef7-utilities\") pod \"redhat-marketplace-j48nd\" (UID: \"f4123e7d-d877-4fb6-944b-612aec835ef7\") " pod="openshift-marketplace/redhat-marketplace-j48nd" Nov 22 08:38:45 crc kubenswrapper[4853]: I1122 08:38:45.396622 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4123e7d-d877-4fb6-944b-612aec835ef7-utilities\") pod \"redhat-marketplace-j48nd\" (UID: \"f4123e7d-d877-4fb6-944b-612aec835ef7\") " pod="openshift-marketplace/redhat-marketplace-j48nd" Nov 22 08:38:45 crc kubenswrapper[4853]: I1122 08:38:45.428040 4853 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-9bt47\" (UniqueName: \"kubernetes.io/projected/f4123e7d-d877-4fb6-944b-612aec835ef7-kube-api-access-9bt47\") pod \"redhat-marketplace-j48nd\" (UID: \"f4123e7d-d877-4fb6-944b-612aec835ef7\") " pod="openshift-marketplace/redhat-marketplace-j48nd" Nov 22 08:38:45 crc kubenswrapper[4853]: I1122 08:38:45.540946 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j48nd" Nov 22 08:38:46 crc kubenswrapper[4853]: I1122 08:38:46.035960 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-j48nd"] Nov 22 08:38:46 crc kubenswrapper[4853]: I1122 08:38:46.295971 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j48nd" event={"ID":"f4123e7d-d877-4fb6-944b-612aec835ef7","Type":"ContainerStarted","Data":"3b5ab9614c239af0191e486c3f3e1d00f761562a81be48645bc13f7528f4d95b"} Nov 22 08:38:46 crc kubenswrapper[4853]: I1122 08:38:46.748647 4853 scope.go:117] "RemoveContainer" containerID="ac53dccac3e4e33ec75240cd8fb4b48a597324a88a76775452fb92dfc45a4092" Nov 22 08:38:46 crc kubenswrapper[4853]: E1122 08:38:46.749163 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" Nov 22 08:38:47 crc kubenswrapper[4853]: I1122 08:38:47.312996 4853 generic.go:334] "Generic (PLEG): container finished" podID="f4123e7d-d877-4fb6-944b-612aec835ef7" containerID="68c70671da02a58f148b3b6ba61e1ada3c9e9b7c7c90323ab2d5f5e2d1733c69" exitCode=0 Nov 22 08:38:47 crc kubenswrapper[4853]: I1122 08:38:47.313082 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j48nd" event={"ID":"f4123e7d-d877-4fb6-944b-612aec835ef7","Type":"ContainerDied","Data":"68c70671da02a58f148b3b6ba61e1ada3c9e9b7c7c90323ab2d5f5e2d1733c69"} Nov 22 08:38:47 crc kubenswrapper[4853]: I1122 08:38:47.317113 4853 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 22 08:38:49 crc kubenswrapper[4853]: I1122 08:38:49.357997 4853 generic.go:334] "Generic (PLEG): container finished" podID="f4123e7d-d877-4fb6-944b-612aec835ef7" containerID="e4078b22bc3334e97a8cf0dcc080770f995786bf11697d4de42d5b542e0989e2" exitCode=0 Nov 22 08:38:49 crc kubenswrapper[4853]: I1122 08:38:49.358118 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j48nd" event={"ID":"f4123e7d-d877-4fb6-944b-612aec835ef7","Type":"ContainerDied","Data":"e4078b22bc3334e97a8cf0dcc080770f995786bf11697d4de42d5b542e0989e2"} Nov 22 08:38:51 crc kubenswrapper[4853]: I1122 08:38:51.392652 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j48nd" event={"ID":"f4123e7d-d877-4fb6-944b-612aec835ef7","Type":"ContainerStarted","Data":"c2e7dcd797c2d13106efac963f3e0477dddaeb7094c226eb61d415088b524568"} Nov 22 08:38:51 crc kubenswrapper[4853]: I1122 08:38:51.421823 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-j48nd" podStartSLOduration=3.947287239 podStartE2EDuration="6.42180378s" 
podCreationTimestamp="2025-11-22 08:38:45 +0000 UTC" firstStartedPulling="2025-11-22 08:38:47.316843111 +0000 UTC m=+5326.157465737" lastFinishedPulling="2025-11-22 08:38:49.791359652 +0000 UTC m=+5328.631982278" observedRunningTime="2025-11-22 08:38:51.415085608 +0000 UTC m=+5330.255708244" watchObservedRunningTime="2025-11-22 08:38:51.42180378 +0000 UTC m=+5330.262426406" Nov 22 08:38:55 crc kubenswrapper[4853]: I1122 08:38:55.541346 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-j48nd" Nov 22 08:38:55 crc kubenswrapper[4853]: I1122 08:38:55.542018 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-j48nd" Nov 22 08:38:55 crc kubenswrapper[4853]: I1122 08:38:55.603972 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-j48nd" Nov 22 08:38:56 crc kubenswrapper[4853]: I1122 08:38:56.520662 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-j48nd" Nov 22 08:38:56 crc kubenswrapper[4853]: I1122 08:38:56.587030 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-j48nd"] Nov 22 08:38:58 crc kubenswrapper[4853]: I1122 08:38:58.469313 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-j48nd" podUID="f4123e7d-d877-4fb6-944b-612aec835ef7" containerName="registry-server" containerID="cri-o://c2e7dcd797c2d13106efac963f3e0477dddaeb7094c226eb61d415088b524568" gracePeriod=2 Nov 22 08:38:59 crc kubenswrapper[4853]: I1122 08:38:59.481661 4853 generic.go:334] "Generic (PLEG): container finished" podID="f4123e7d-d877-4fb6-944b-612aec835ef7" containerID="c2e7dcd797c2d13106efac963f3e0477dddaeb7094c226eb61d415088b524568" exitCode=0 Nov 22 08:38:59 crc kubenswrapper[4853]: I1122 08:38:59.481764 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j48nd" event={"ID":"f4123e7d-d877-4fb6-944b-612aec835ef7","Type":"ContainerDied","Data":"c2e7dcd797c2d13106efac963f3e0477dddaeb7094c226eb61d415088b524568"} Nov 22 08:38:59 crc kubenswrapper[4853]: I1122 08:38:59.482047 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j48nd" event={"ID":"f4123e7d-d877-4fb6-944b-612aec835ef7","Type":"ContainerDied","Data":"3b5ab9614c239af0191e486c3f3e1d00f761562a81be48645bc13f7528f4d95b"} Nov 22 08:38:59 crc kubenswrapper[4853]: I1122 08:38:59.482063 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3b5ab9614c239af0191e486c3f3e1d00f761562a81be48645bc13f7528f4d95b" Nov 22 08:38:59 crc kubenswrapper[4853]: I1122 08:38:59.582847 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j48nd" Nov 22 08:38:59 crc kubenswrapper[4853]: I1122 08:38:59.687842 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9bt47\" (UniqueName: \"kubernetes.io/projected/f4123e7d-d877-4fb6-944b-612aec835ef7-kube-api-access-9bt47\") pod \"f4123e7d-d877-4fb6-944b-612aec835ef7\" (UID: \"f4123e7d-d877-4fb6-944b-612aec835ef7\") " Nov 22 08:38:59 crc kubenswrapper[4853]: I1122 08:38:59.687985 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4123e7d-d877-4fb6-944b-612aec835ef7-catalog-content\") pod \"f4123e7d-d877-4fb6-944b-612aec835ef7\" (UID: \"f4123e7d-d877-4fb6-944b-612aec835ef7\") " Nov 22 08:38:59 crc kubenswrapper[4853]: I1122 08:38:59.688249 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4123e7d-d877-4fb6-944b-612aec835ef7-utilities\") pod \"f4123e7d-d877-4fb6-944b-612aec835ef7\" (UID: \"f4123e7d-d877-4fb6-944b-612aec835ef7\") " Nov 22 08:38:59 crc kubenswrapper[4853]: I1122 08:38:59.689183 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f4123e7d-d877-4fb6-944b-612aec835ef7-utilities" (OuterVolumeSpecName: "utilities") pod "f4123e7d-d877-4fb6-944b-612aec835ef7" (UID: "f4123e7d-d877-4fb6-944b-612aec835ef7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 08:38:59 crc kubenswrapper[4853]: I1122 08:38:59.693295 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4123e7d-d877-4fb6-944b-612aec835ef7-kube-api-access-9bt47" (OuterVolumeSpecName: "kube-api-access-9bt47") pod "f4123e7d-d877-4fb6-944b-612aec835ef7" (UID: "f4123e7d-d877-4fb6-944b-612aec835ef7"). InnerVolumeSpecName "kube-api-access-9bt47". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 08:38:59 crc kubenswrapper[4853]: I1122 08:38:59.705197 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f4123e7d-d877-4fb6-944b-612aec835ef7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f4123e7d-d877-4fb6-944b-612aec835ef7" (UID: "f4123e7d-d877-4fb6-944b-612aec835ef7"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 08:38:59 crc kubenswrapper[4853]: I1122 08:38:59.748252 4853 scope.go:117] "RemoveContainer" containerID="ac53dccac3e4e33ec75240cd8fb4b48a597324a88a76775452fb92dfc45a4092" Nov 22 08:38:59 crc kubenswrapper[4853]: E1122 08:38:59.748908 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" Nov 22 08:38:59 crc kubenswrapper[4853]: I1122 08:38:59.790961 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9bt47\" (UniqueName: \"kubernetes.io/projected/f4123e7d-d877-4fb6-944b-612aec835ef7-kube-api-access-9bt47\") on node \"crc\" DevicePath \"\"" Nov 22 08:38:59 crc kubenswrapper[4853]: I1122 08:38:59.791003 4853 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f4123e7d-d877-4fb6-944b-612aec835ef7-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 08:38:59 crc kubenswrapper[4853]: I1122 08:38:59.791013 4853 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f4123e7d-d877-4fb6-944b-612aec835ef7-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 08:39:00 crc kubenswrapper[4853]: I1122 08:39:00.495861 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j48nd" Nov 22 08:39:00 crc kubenswrapper[4853]: I1122 08:39:00.525390 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-j48nd"] Nov 22 08:39:00 crc kubenswrapper[4853]: I1122 08:39:00.538046 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-j48nd"] Nov 22 08:39:01 crc kubenswrapper[4853]: I1122 08:39:01.762941 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4123e7d-d877-4fb6-944b-612aec835ef7" path="/var/lib/kubelet/pods/f4123e7d-d877-4fb6-944b-612aec835ef7/volumes" Nov 22 08:39:12 crc kubenswrapper[4853]: I1122 08:39:12.748460 4853 scope.go:117] "RemoveContainer" containerID="ac53dccac3e4e33ec75240cd8fb4b48a597324a88a76775452fb92dfc45a4092" Nov 22 08:39:12 crc kubenswrapper[4853]: E1122 08:39:12.750458 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" Nov 22 08:39:23 crc kubenswrapper[4853]: I1122 08:39:23.748289 4853 scope.go:117] "RemoveContainer" containerID="ac53dccac3e4e33ec75240cd8fb4b48a597324a88a76775452fb92dfc45a4092" Nov 22 08:39:23 crc kubenswrapper[4853]: E1122 08:39:23.750070 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" Nov 22 08:39:37 crc kubenswrapper[4853]: I1122 08:39:37.748339 4853 scope.go:117] "RemoveContainer" containerID="ac53dccac3e4e33ec75240cd8fb4b48a597324a88a76775452fb92dfc45a4092" Nov 22 08:39:37 crc kubenswrapper[4853]: E1122 08:39:37.749415 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" Nov 22 08:39:50 crc kubenswrapper[4853]: I1122 08:39:50.748333 4853 scope.go:117] "RemoveContainer" containerID="ac53dccac3e4e33ec75240cd8fb4b48a597324a88a76775452fb92dfc45a4092" Nov 22 08:39:50 crc kubenswrapper[4853]: E1122 08:39:50.749508 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" Nov 22 08:40:02 crc kubenswrapper[4853]: I1122 08:40:02.747523 4853 scope.go:117] "RemoveContainer" containerID="ac53dccac3e4e33ec75240cd8fb4b48a597324a88a76775452fb92dfc45a4092" Nov 22 08:40:02 crc kubenswrapper[4853]: E1122 08:40:02.750945 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" Nov 22 08:40:16 crc kubenswrapper[4853]: I1122 08:40:16.748494 4853 scope.go:117] "RemoveContainer" containerID="ac53dccac3e4e33ec75240cd8fb4b48a597324a88a76775452fb92dfc45a4092" Nov 22 08:40:16 crc kubenswrapper[4853]: E1122 08:40:16.749476 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" Nov 22 08:40:30 crc kubenswrapper[4853]: I1122 08:40:30.748160 4853 scope.go:117] "RemoveContainer" containerID="ac53dccac3e4e33ec75240cd8fb4b48a597324a88a76775452fb92dfc45a4092" Nov 22 08:40:30 crc kubenswrapper[4853]: E1122 08:40:30.748931 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" 
podUID="476c875a-2b87-419a-8042-0ba059620fd8" Nov 22 08:40:45 crc kubenswrapper[4853]: I1122 08:40:45.756006 4853 scope.go:117] "RemoveContainer" containerID="ac53dccac3e4e33ec75240cd8fb4b48a597324a88a76775452fb92dfc45a4092" Nov 22 08:40:45 crc kubenswrapper[4853]: E1122 08:40:45.775888 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" Nov 22 08:40:56 crc kubenswrapper[4853]: I1122 08:40:56.747937 4853 scope.go:117] "RemoveContainer" containerID="ac53dccac3e4e33ec75240cd8fb4b48a597324a88a76775452fb92dfc45a4092" Nov 22 08:40:56 crc kubenswrapper[4853]: E1122 08:40:56.748776 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" Nov 22 08:41:07 crc kubenswrapper[4853]: I1122 08:41:07.748080 4853 scope.go:117] "RemoveContainer" containerID="ac53dccac3e4e33ec75240cd8fb4b48a597324a88a76775452fb92dfc45a4092" Nov 22 08:41:07 crc kubenswrapper[4853]: E1122 08:41:07.749320 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" Nov 22 08:41:18 crc kubenswrapper[4853]: I1122 08:41:18.748506 4853 scope.go:117] "RemoveContainer" containerID="ac53dccac3e4e33ec75240cd8fb4b48a597324a88a76775452fb92dfc45a4092" Nov 22 08:41:18 crc kubenswrapper[4853]: E1122 08:41:18.749678 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" Nov 22 08:41:32 crc kubenswrapper[4853]: I1122 08:41:32.748613 4853 scope.go:117] "RemoveContainer" containerID="ac53dccac3e4e33ec75240cd8fb4b48a597324a88a76775452fb92dfc45a4092" Nov 22 08:41:32 crc kubenswrapper[4853]: E1122 08:41:32.749615 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" Nov 22 08:41:44 crc kubenswrapper[4853]: I1122 08:41:44.748303 4853 scope.go:117] "RemoveContainer" 
containerID="ac53dccac3e4e33ec75240cd8fb4b48a597324a88a76775452fb92dfc45a4092" Nov 22 08:41:44 crc kubenswrapper[4853]: E1122 08:41:44.749139 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" Nov 22 08:41:55 crc kubenswrapper[4853]: I1122 08:41:55.756197 4853 scope.go:117] "RemoveContainer" containerID="ac53dccac3e4e33ec75240cd8fb4b48a597324a88a76775452fb92dfc45a4092" Nov 22 08:41:55 crc kubenswrapper[4853]: E1122 08:41:55.757147 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" Nov 22 08:42:09 crc kubenswrapper[4853]: I1122 08:42:09.748442 4853 scope.go:117] "RemoveContainer" containerID="ac53dccac3e4e33ec75240cd8fb4b48a597324a88a76775452fb92dfc45a4092" Nov 22 08:42:09 crc kubenswrapper[4853]: E1122 08:42:09.749255 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" Nov 22 08:42:23 crc kubenswrapper[4853]: I1122 08:42:23.748585 4853 scope.go:117] "RemoveContainer" containerID="ac53dccac3e4e33ec75240cd8fb4b48a597324a88a76775452fb92dfc45a4092" Nov 22 08:42:23 crc kubenswrapper[4853]: E1122 08:42:23.749416 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" Nov 22 08:42:36 crc kubenswrapper[4853]: I1122 08:42:36.749096 4853 scope.go:117] "RemoveContainer" containerID="ac53dccac3e4e33ec75240cd8fb4b48a597324a88a76775452fb92dfc45a4092" Nov 22 08:42:36 crc kubenswrapper[4853]: E1122 08:42:36.750613 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" Nov 22 08:42:48 crc kubenswrapper[4853]: I1122 08:42:48.748380 4853 scope.go:117] "RemoveContainer" containerID="ac53dccac3e4e33ec75240cd8fb4b48a597324a88a76775452fb92dfc45a4092" Nov 22 08:42:48 crc kubenswrapper[4853]: E1122 08:42:48.749484 4853 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" Nov 22 08:43:00 crc kubenswrapper[4853]: I1122 08:43:00.749010 4853 scope.go:117] "RemoveContainer" containerID="ac53dccac3e4e33ec75240cd8fb4b48a597324a88a76775452fb92dfc45a4092" Nov 22 08:43:00 crc kubenswrapper[4853]: E1122 08:43:00.750044 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" Nov 22 08:43:09 crc kubenswrapper[4853]: I1122 08:43:09.696981 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-qxxpv"] Nov 22 08:43:09 crc kubenswrapper[4853]: E1122 08:43:09.698441 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4123e7d-d877-4fb6-944b-612aec835ef7" containerName="extract-content" Nov 22 08:43:09 crc kubenswrapper[4853]: I1122 08:43:09.698460 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4123e7d-d877-4fb6-944b-612aec835ef7" containerName="extract-content" Nov 22 08:43:09 crc kubenswrapper[4853]: E1122 08:43:09.698485 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4123e7d-d877-4fb6-944b-612aec835ef7" containerName="extract-utilities" Nov 22 08:43:09 crc kubenswrapper[4853]: I1122 08:43:09.698495 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4123e7d-d877-4fb6-944b-612aec835ef7" containerName="extract-utilities" Nov 22 08:43:09 crc kubenswrapper[4853]: E1122 08:43:09.698541 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4123e7d-d877-4fb6-944b-612aec835ef7" containerName="registry-server" Nov 22 08:43:09 crc kubenswrapper[4853]: I1122 08:43:09.698548 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4123e7d-d877-4fb6-944b-612aec835ef7" containerName="registry-server" Nov 22 08:43:09 crc kubenswrapper[4853]: I1122 08:43:09.699918 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4123e7d-d877-4fb6-944b-612aec835ef7" containerName="registry-server" Nov 22 08:43:09 crc kubenswrapper[4853]: I1122 08:43:09.704534 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-qxxpv" Nov 22 08:43:09 crc kubenswrapper[4853]: I1122 08:43:09.714191 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qxxpv"] Nov 22 08:43:09 crc kubenswrapper[4853]: I1122 08:43:09.816807 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9aaefba4-b308-4466-9189-2681b0949cdd-catalog-content\") pod \"certified-operators-qxxpv\" (UID: \"9aaefba4-b308-4466-9189-2681b0949cdd\") " pod="openshift-marketplace/certified-operators-qxxpv" Nov 22 08:43:09 crc kubenswrapper[4853]: I1122 08:43:09.816945 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q7gqs\" (UniqueName: \"kubernetes.io/projected/9aaefba4-b308-4466-9189-2681b0949cdd-kube-api-access-q7gqs\") pod \"certified-operators-qxxpv\" (UID: \"9aaefba4-b308-4466-9189-2681b0949cdd\") " pod="openshift-marketplace/certified-operators-qxxpv" Nov 22 08:43:09 crc kubenswrapper[4853]: I1122 08:43:09.816971 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9aaefba4-b308-4466-9189-2681b0949cdd-utilities\") pod \"certified-operators-qxxpv\" (UID: \"9aaefba4-b308-4466-9189-2681b0949cdd\") " pod="openshift-marketplace/certified-operators-qxxpv" Nov 22 08:43:09 crc kubenswrapper[4853]: I1122 08:43:09.919535 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9aaefba4-b308-4466-9189-2681b0949cdd-catalog-content\") pod \"certified-operators-qxxpv\" (UID: \"9aaefba4-b308-4466-9189-2681b0949cdd\") " pod="openshift-marketplace/certified-operators-qxxpv" Nov 22 08:43:09 crc kubenswrapper[4853]: I1122 08:43:09.919628 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q7gqs\" (UniqueName: \"kubernetes.io/projected/9aaefba4-b308-4466-9189-2681b0949cdd-kube-api-access-q7gqs\") pod \"certified-operators-qxxpv\" (UID: \"9aaefba4-b308-4466-9189-2681b0949cdd\") " pod="openshift-marketplace/certified-operators-qxxpv" Nov 22 08:43:09 crc kubenswrapper[4853]: I1122 08:43:09.919649 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9aaefba4-b308-4466-9189-2681b0949cdd-utilities\") pod \"certified-operators-qxxpv\" (UID: \"9aaefba4-b308-4466-9189-2681b0949cdd\") " pod="openshift-marketplace/certified-operators-qxxpv" Nov 22 08:43:09 crc kubenswrapper[4853]: I1122 08:43:09.920137 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9aaefba4-b308-4466-9189-2681b0949cdd-catalog-content\") pod \"certified-operators-qxxpv\" (UID: \"9aaefba4-b308-4466-9189-2681b0949cdd\") " pod="openshift-marketplace/certified-operators-qxxpv" Nov 22 08:43:09 crc kubenswrapper[4853]: I1122 08:43:09.920303 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9aaefba4-b308-4466-9189-2681b0949cdd-utilities\") pod \"certified-operators-qxxpv\" (UID: \"9aaefba4-b308-4466-9189-2681b0949cdd\") " pod="openshift-marketplace/certified-operators-qxxpv" Nov 22 08:43:09 crc kubenswrapper[4853]: I1122 08:43:09.956812 4853 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-q7gqs\" (UniqueName: \"kubernetes.io/projected/9aaefba4-b308-4466-9189-2681b0949cdd-kube-api-access-q7gqs\") pod \"certified-operators-qxxpv\" (UID: \"9aaefba4-b308-4466-9189-2681b0949cdd\") " pod="openshift-marketplace/certified-operators-qxxpv" Nov 22 08:43:10 crc kubenswrapper[4853]: I1122 08:43:10.033437 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qxxpv" Nov 22 08:43:10 crc kubenswrapper[4853]: I1122 08:43:10.615655 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qxxpv"] Nov 22 08:43:11 crc kubenswrapper[4853]: I1122 08:43:11.498096 4853 generic.go:334] "Generic (PLEG): container finished" podID="9aaefba4-b308-4466-9189-2681b0949cdd" containerID="ada7acc258d4278addc8e28684fdb49236fc52b01a9337e13614f266964fd24a" exitCode=0 Nov 22 08:43:11 crc kubenswrapper[4853]: I1122 08:43:11.498220 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qxxpv" event={"ID":"9aaefba4-b308-4466-9189-2681b0949cdd","Type":"ContainerDied","Data":"ada7acc258d4278addc8e28684fdb49236fc52b01a9337e13614f266964fd24a"} Nov 22 08:43:11 crc kubenswrapper[4853]: I1122 08:43:11.498418 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qxxpv" event={"ID":"9aaefba4-b308-4466-9189-2681b0949cdd","Type":"ContainerStarted","Data":"3cd942fb055203d84068662930c5b608bc6860c0217fe9367bce610e57f29467"} Nov 22 08:43:13 crc kubenswrapper[4853]: I1122 08:43:13.522131 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qxxpv" event={"ID":"9aaefba4-b308-4466-9189-2681b0949cdd","Type":"ContainerStarted","Data":"b5dbcd7b7ad8ebc7c7fba2b93cace9a99c74b95efb6f705581bf4d756866687a"} Nov 22 08:43:13 crc kubenswrapper[4853]: I1122 08:43:13.749067 4853 scope.go:117] "RemoveContainer" containerID="ac53dccac3e4e33ec75240cd8fb4b48a597324a88a76775452fb92dfc45a4092" Nov 22 08:43:13 crc kubenswrapper[4853]: E1122 08:43:13.751267 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" Nov 22 08:43:14 crc kubenswrapper[4853]: I1122 08:43:14.538526 4853 generic.go:334] "Generic (PLEG): container finished" podID="9aaefba4-b308-4466-9189-2681b0949cdd" containerID="b5dbcd7b7ad8ebc7c7fba2b93cace9a99c74b95efb6f705581bf4d756866687a" exitCode=0 Nov 22 08:43:14 crc kubenswrapper[4853]: I1122 08:43:14.538639 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qxxpv" event={"ID":"9aaefba4-b308-4466-9189-2681b0949cdd","Type":"ContainerDied","Data":"b5dbcd7b7ad8ebc7c7fba2b93cace9a99c74b95efb6f705581bf4d756866687a"} Nov 22 08:43:15 crc kubenswrapper[4853]: I1122 08:43:15.551895 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qxxpv" event={"ID":"9aaefba4-b308-4466-9189-2681b0949cdd","Type":"ContainerStarted","Data":"7707aa29dd87b33e3dff6830f432f7d7656f0f1b0be4816f20f4834179e1a453"} Nov 22 08:43:15 crc kubenswrapper[4853]: I1122 08:43:15.581331 4853 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-qxxpv" podStartSLOduration=3.110049277 podStartE2EDuration="6.581313054s" podCreationTimestamp="2025-11-22 08:43:09 +0000 UTC" firstStartedPulling="2025-11-22 08:43:11.500290275 +0000 UTC m=+5590.340912901" lastFinishedPulling="2025-11-22 08:43:14.971554052 +0000 UTC m=+5593.812176678" observedRunningTime="2025-11-22 08:43:15.572851374 +0000 UTC m=+5594.413474000" watchObservedRunningTime="2025-11-22 08:43:15.581313054 +0000 UTC m=+5594.421935680" Nov 22 08:43:16 crc kubenswrapper[4853]: I1122 08:43:16.432108 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-7b4f2"] Nov 22 08:43:16 crc kubenswrapper[4853]: I1122 08:43:16.435481 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7b4f2" Nov 22 08:43:16 crc kubenswrapper[4853]: I1122 08:43:16.462650 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7b4f2"] Nov 22 08:43:16 crc kubenswrapper[4853]: I1122 08:43:16.493763 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrnrc\" (UniqueName: \"kubernetes.io/projected/2258cfdf-1983-41d3-95f2-e45ee84fa2bf-kube-api-access-zrnrc\") pod \"community-operators-7b4f2\" (UID: \"2258cfdf-1983-41d3-95f2-e45ee84fa2bf\") " pod="openshift-marketplace/community-operators-7b4f2" Nov 22 08:43:16 crc kubenswrapper[4853]: I1122 08:43:16.494111 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2258cfdf-1983-41d3-95f2-e45ee84fa2bf-utilities\") pod \"community-operators-7b4f2\" (UID: \"2258cfdf-1983-41d3-95f2-e45ee84fa2bf\") " pod="openshift-marketplace/community-operators-7b4f2" Nov 22 08:43:16 crc kubenswrapper[4853]: I1122 08:43:16.494141 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2258cfdf-1983-41d3-95f2-e45ee84fa2bf-catalog-content\") pod \"community-operators-7b4f2\" (UID: \"2258cfdf-1983-41d3-95f2-e45ee84fa2bf\") " pod="openshift-marketplace/community-operators-7b4f2" Nov 22 08:43:16 crc kubenswrapper[4853]: I1122 08:43:16.596986 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2258cfdf-1983-41d3-95f2-e45ee84fa2bf-utilities\") pod \"community-operators-7b4f2\" (UID: \"2258cfdf-1983-41d3-95f2-e45ee84fa2bf\") " pod="openshift-marketplace/community-operators-7b4f2" Nov 22 08:43:16 crc kubenswrapper[4853]: I1122 08:43:16.597060 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2258cfdf-1983-41d3-95f2-e45ee84fa2bf-catalog-content\") pod \"community-operators-7b4f2\" (UID: \"2258cfdf-1983-41d3-95f2-e45ee84fa2bf\") " pod="openshift-marketplace/community-operators-7b4f2" Nov 22 08:43:16 crc kubenswrapper[4853]: I1122 08:43:16.597493 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2258cfdf-1983-41d3-95f2-e45ee84fa2bf-utilities\") pod \"community-operators-7b4f2\" (UID: \"2258cfdf-1983-41d3-95f2-e45ee84fa2bf\") " pod="openshift-marketplace/community-operators-7b4f2" Nov 22 08:43:16 crc kubenswrapper[4853]: 
I1122 08:43:16.597609 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2258cfdf-1983-41d3-95f2-e45ee84fa2bf-catalog-content\") pod \"community-operators-7b4f2\" (UID: \"2258cfdf-1983-41d3-95f2-e45ee84fa2bf\") " pod="openshift-marketplace/community-operators-7b4f2" Nov 22 08:43:16 crc kubenswrapper[4853]: I1122 08:43:16.598275 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zrnrc\" (UniqueName: \"kubernetes.io/projected/2258cfdf-1983-41d3-95f2-e45ee84fa2bf-kube-api-access-zrnrc\") pod \"community-operators-7b4f2\" (UID: \"2258cfdf-1983-41d3-95f2-e45ee84fa2bf\") " pod="openshift-marketplace/community-operators-7b4f2" Nov 22 08:43:16 crc kubenswrapper[4853]: I1122 08:43:16.625948 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zrnrc\" (UniqueName: \"kubernetes.io/projected/2258cfdf-1983-41d3-95f2-e45ee84fa2bf-kube-api-access-zrnrc\") pod \"community-operators-7b4f2\" (UID: \"2258cfdf-1983-41d3-95f2-e45ee84fa2bf\") " pod="openshift-marketplace/community-operators-7b4f2" Nov 22 08:43:16 crc kubenswrapper[4853]: I1122 08:43:16.756044 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7b4f2" Nov 22 08:43:17 crc kubenswrapper[4853]: I1122 08:43:17.358130 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7b4f2"] Nov 22 08:43:17 crc kubenswrapper[4853]: I1122 08:43:17.574806 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7b4f2" event={"ID":"2258cfdf-1983-41d3-95f2-e45ee84fa2bf","Type":"ContainerStarted","Data":"3b61cce4c5df0a723cbab52986f6464ff3c767a2ba2a97530269154917945402"} Nov 22 08:43:18 crc kubenswrapper[4853]: I1122 08:43:18.588807 4853 generic.go:334] "Generic (PLEG): container finished" podID="2258cfdf-1983-41d3-95f2-e45ee84fa2bf" containerID="769147c8b753d4f6a014ffc8689dbb53b22e35b22becc23d59c86aa2a7c29bfa" exitCode=0 Nov 22 08:43:18 crc kubenswrapper[4853]: I1122 08:43:18.588912 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7b4f2" event={"ID":"2258cfdf-1983-41d3-95f2-e45ee84fa2bf","Type":"ContainerDied","Data":"769147c8b753d4f6a014ffc8689dbb53b22e35b22becc23d59c86aa2a7c29bfa"} Nov 22 08:43:20 crc kubenswrapper[4853]: I1122 08:43:20.034127 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-qxxpv" Nov 22 08:43:20 crc kubenswrapper[4853]: I1122 08:43:20.038423 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-qxxpv" Nov 22 08:43:20 crc kubenswrapper[4853]: I1122 08:43:20.099605 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-qxxpv" Nov 22 08:43:20 crc kubenswrapper[4853]: I1122 08:43:20.615289 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7b4f2" event={"ID":"2258cfdf-1983-41d3-95f2-e45ee84fa2bf","Type":"ContainerStarted","Data":"109ebf8e9da44658519c1497f692c669966d0a35ec01bad3fc1bcbc64bca2adf"} Nov 22 08:43:21 crc kubenswrapper[4853]: I1122 08:43:21.515846 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-qxxpv" Nov 22 08:43:22 crc kubenswrapper[4853]: 
I1122 08:43:22.638957 4853 generic.go:334] "Generic (PLEG): container finished" podID="2258cfdf-1983-41d3-95f2-e45ee84fa2bf" containerID="109ebf8e9da44658519c1497f692c669966d0a35ec01bad3fc1bcbc64bca2adf" exitCode=0 Nov 22 08:43:22 crc kubenswrapper[4853]: I1122 08:43:22.639029 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7b4f2" event={"ID":"2258cfdf-1983-41d3-95f2-e45ee84fa2bf","Type":"ContainerDied","Data":"109ebf8e9da44658519c1497f692c669966d0a35ec01bad3fc1bcbc64bca2adf"} Nov 22 08:43:23 crc kubenswrapper[4853]: I1122 08:43:23.625656 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-qxxpv"] Nov 22 08:43:23 crc kubenswrapper[4853]: I1122 08:43:23.650423 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-qxxpv" podUID="9aaefba4-b308-4466-9189-2681b0949cdd" containerName="registry-server" containerID="cri-o://7707aa29dd87b33e3dff6830f432f7d7656f0f1b0be4816f20f4834179e1a453" gracePeriod=2 Nov 22 08:43:24 crc kubenswrapper[4853]: I1122 08:43:24.236806 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qxxpv" Nov 22 08:43:24 crc kubenswrapper[4853]: I1122 08:43:24.322143 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9aaefba4-b308-4466-9189-2681b0949cdd-catalog-content\") pod \"9aaefba4-b308-4466-9189-2681b0949cdd\" (UID: \"9aaefba4-b308-4466-9189-2681b0949cdd\") " Nov 22 08:43:24 crc kubenswrapper[4853]: I1122 08:43:24.322202 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9aaefba4-b308-4466-9189-2681b0949cdd-utilities\") pod \"9aaefba4-b308-4466-9189-2681b0949cdd\" (UID: \"9aaefba4-b308-4466-9189-2681b0949cdd\") " Nov 22 08:43:24 crc kubenswrapper[4853]: I1122 08:43:24.322614 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q7gqs\" (UniqueName: \"kubernetes.io/projected/9aaefba4-b308-4466-9189-2681b0949cdd-kube-api-access-q7gqs\") pod \"9aaefba4-b308-4466-9189-2681b0949cdd\" (UID: \"9aaefba4-b308-4466-9189-2681b0949cdd\") " Nov 22 08:43:24 crc kubenswrapper[4853]: I1122 08:43:24.327475 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9aaefba4-b308-4466-9189-2681b0949cdd-utilities" (OuterVolumeSpecName: "utilities") pod "9aaefba4-b308-4466-9189-2681b0949cdd" (UID: "9aaefba4-b308-4466-9189-2681b0949cdd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 08:43:24 crc kubenswrapper[4853]: I1122 08:43:24.333383 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9aaefba4-b308-4466-9189-2681b0949cdd-kube-api-access-q7gqs" (OuterVolumeSpecName: "kube-api-access-q7gqs") pod "9aaefba4-b308-4466-9189-2681b0949cdd" (UID: "9aaefba4-b308-4466-9189-2681b0949cdd"). InnerVolumeSpecName "kube-api-access-q7gqs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 08:43:24 crc kubenswrapper[4853]: I1122 08:43:24.379360 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9aaefba4-b308-4466-9189-2681b0949cdd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9aaefba4-b308-4466-9189-2681b0949cdd" (UID: "9aaefba4-b308-4466-9189-2681b0949cdd"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 08:43:24 crc kubenswrapper[4853]: I1122 08:43:24.425895 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q7gqs\" (UniqueName: \"kubernetes.io/projected/9aaefba4-b308-4466-9189-2681b0949cdd-kube-api-access-q7gqs\") on node \"crc\" DevicePath \"\"" Nov 22 08:43:24 crc kubenswrapper[4853]: I1122 08:43:24.426163 4853 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9aaefba4-b308-4466-9189-2681b0949cdd-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 08:43:24 crc kubenswrapper[4853]: I1122 08:43:24.426231 4853 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9aaefba4-b308-4466-9189-2681b0949cdd-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 08:43:24 crc kubenswrapper[4853]: I1122 08:43:24.664923 4853 generic.go:334] "Generic (PLEG): container finished" podID="9aaefba4-b308-4466-9189-2681b0949cdd" containerID="7707aa29dd87b33e3dff6830f432f7d7656f0f1b0be4816f20f4834179e1a453" exitCode=0 Nov 22 08:43:24 crc kubenswrapper[4853]: I1122 08:43:24.665000 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qxxpv" Nov 22 08:43:24 crc kubenswrapper[4853]: I1122 08:43:24.665054 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qxxpv" event={"ID":"9aaefba4-b308-4466-9189-2681b0949cdd","Type":"ContainerDied","Data":"7707aa29dd87b33e3dff6830f432f7d7656f0f1b0be4816f20f4834179e1a453"} Nov 22 08:43:24 crc kubenswrapper[4853]: I1122 08:43:24.665469 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qxxpv" event={"ID":"9aaefba4-b308-4466-9189-2681b0949cdd","Type":"ContainerDied","Data":"3cd942fb055203d84068662930c5b608bc6860c0217fe9367bce610e57f29467"} Nov 22 08:43:24 crc kubenswrapper[4853]: I1122 08:43:24.665503 4853 scope.go:117] "RemoveContainer" containerID="7707aa29dd87b33e3dff6830f432f7d7656f0f1b0be4816f20f4834179e1a453" Nov 22 08:43:24 crc kubenswrapper[4853]: I1122 08:43:24.668782 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7b4f2" event={"ID":"2258cfdf-1983-41d3-95f2-e45ee84fa2bf","Type":"ContainerStarted","Data":"54ab2d9c973c6fb39eaff0647a9dbc303a386cbcea9999876c3fa6b5b100d3ec"} Nov 22 08:43:24 crc kubenswrapper[4853]: I1122 08:43:24.689614 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-7b4f2" podStartSLOduration=4.271318913 podStartE2EDuration="8.689592506s" podCreationTimestamp="2025-11-22 08:43:16 +0000 UTC" firstStartedPulling="2025-11-22 08:43:18.591370317 +0000 UTC m=+5597.431992943" lastFinishedPulling="2025-11-22 08:43:23.00964391 +0000 UTC m=+5601.850266536" observedRunningTime="2025-11-22 08:43:24.683528211 +0000 UTC m=+5603.524150867" watchObservedRunningTime="2025-11-22 08:43:24.689592506 +0000 UTC m=+5603.530215132" Nov 22 
08:43:24 crc kubenswrapper[4853]: I1122 08:43:24.691791 4853 scope.go:117] "RemoveContainer" containerID="b5dbcd7b7ad8ebc7c7fba2b93cace9a99c74b95efb6f705581bf4d756866687a" Nov 22 08:43:24 crc kubenswrapper[4853]: I1122 08:43:24.715554 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-qxxpv"] Nov 22 08:43:24 crc kubenswrapper[4853]: I1122 08:43:24.729163 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-qxxpv"] Nov 22 08:43:24 crc kubenswrapper[4853]: I1122 08:43:24.736143 4853 scope.go:117] "RemoveContainer" containerID="ada7acc258d4278addc8e28684fdb49236fc52b01a9337e13614f266964fd24a" Nov 22 08:43:24 crc kubenswrapper[4853]: I1122 08:43:24.825569 4853 scope.go:117] "RemoveContainer" containerID="7707aa29dd87b33e3dff6830f432f7d7656f0f1b0be4816f20f4834179e1a453" Nov 22 08:43:24 crc kubenswrapper[4853]: E1122 08:43:24.825975 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7707aa29dd87b33e3dff6830f432f7d7656f0f1b0be4816f20f4834179e1a453\": container with ID starting with 7707aa29dd87b33e3dff6830f432f7d7656f0f1b0be4816f20f4834179e1a453 not found: ID does not exist" containerID="7707aa29dd87b33e3dff6830f432f7d7656f0f1b0be4816f20f4834179e1a453" Nov 22 08:43:24 crc kubenswrapper[4853]: I1122 08:43:24.826015 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7707aa29dd87b33e3dff6830f432f7d7656f0f1b0be4816f20f4834179e1a453"} err="failed to get container status \"7707aa29dd87b33e3dff6830f432f7d7656f0f1b0be4816f20f4834179e1a453\": rpc error: code = NotFound desc = could not find container \"7707aa29dd87b33e3dff6830f432f7d7656f0f1b0be4816f20f4834179e1a453\": container with ID starting with 7707aa29dd87b33e3dff6830f432f7d7656f0f1b0be4816f20f4834179e1a453 not found: ID does not exist" Nov 22 08:43:24 crc kubenswrapper[4853]: I1122 08:43:24.826034 4853 scope.go:117] "RemoveContainer" containerID="b5dbcd7b7ad8ebc7c7fba2b93cace9a99c74b95efb6f705581bf4d756866687a" Nov 22 08:43:24 crc kubenswrapper[4853]: E1122 08:43:24.826424 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b5dbcd7b7ad8ebc7c7fba2b93cace9a99c74b95efb6f705581bf4d756866687a\": container with ID starting with b5dbcd7b7ad8ebc7c7fba2b93cace9a99c74b95efb6f705581bf4d756866687a not found: ID does not exist" containerID="b5dbcd7b7ad8ebc7c7fba2b93cace9a99c74b95efb6f705581bf4d756866687a" Nov 22 08:43:24 crc kubenswrapper[4853]: I1122 08:43:24.826450 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b5dbcd7b7ad8ebc7c7fba2b93cace9a99c74b95efb6f705581bf4d756866687a"} err="failed to get container status \"b5dbcd7b7ad8ebc7c7fba2b93cace9a99c74b95efb6f705581bf4d756866687a\": rpc error: code = NotFound desc = could not find container \"b5dbcd7b7ad8ebc7c7fba2b93cace9a99c74b95efb6f705581bf4d756866687a\": container with ID starting with b5dbcd7b7ad8ebc7c7fba2b93cace9a99c74b95efb6f705581bf4d756866687a not found: ID does not exist" Nov 22 08:43:24 crc kubenswrapper[4853]: I1122 08:43:24.826464 4853 scope.go:117] "RemoveContainer" containerID="ada7acc258d4278addc8e28684fdb49236fc52b01a9337e13614f266964fd24a" Nov 22 08:43:24 crc kubenswrapper[4853]: E1122 08:43:24.826829 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"ada7acc258d4278addc8e28684fdb49236fc52b01a9337e13614f266964fd24a\": container with ID starting with ada7acc258d4278addc8e28684fdb49236fc52b01a9337e13614f266964fd24a not found: ID does not exist" containerID="ada7acc258d4278addc8e28684fdb49236fc52b01a9337e13614f266964fd24a" Nov 22 08:43:24 crc kubenswrapper[4853]: I1122 08:43:24.826853 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ada7acc258d4278addc8e28684fdb49236fc52b01a9337e13614f266964fd24a"} err="failed to get container status \"ada7acc258d4278addc8e28684fdb49236fc52b01a9337e13614f266964fd24a\": rpc error: code = NotFound desc = could not find container \"ada7acc258d4278addc8e28684fdb49236fc52b01a9337e13614f266964fd24a\": container with ID starting with ada7acc258d4278addc8e28684fdb49236fc52b01a9337e13614f266964fd24a not found: ID does not exist" Nov 22 08:43:25 crc kubenswrapper[4853]: I1122 08:43:25.761649 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9aaefba4-b308-4466-9189-2681b0949cdd" path="/var/lib/kubelet/pods/9aaefba4-b308-4466-9189-2681b0949cdd/volumes" Nov 22 08:43:26 crc kubenswrapper[4853]: I1122 08:43:26.749079 4853 scope.go:117] "RemoveContainer" containerID="ac53dccac3e4e33ec75240cd8fb4b48a597324a88a76775452fb92dfc45a4092" Nov 22 08:43:26 crc kubenswrapper[4853]: E1122 08:43:26.749513 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" Nov 22 08:43:26 crc kubenswrapper[4853]: I1122 08:43:26.756911 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-7b4f2" Nov 22 08:43:26 crc kubenswrapper[4853]: I1122 08:43:26.756968 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-7b4f2" Nov 22 08:43:26 crc kubenswrapper[4853]: I1122 08:43:26.814584 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-7b4f2" Nov 22 08:43:29 crc kubenswrapper[4853]: I1122 08:43:29.815268 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-dw2lk"] Nov 22 08:43:29 crc kubenswrapper[4853]: E1122 08:43:29.816169 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9aaefba4-b308-4466-9189-2681b0949cdd" containerName="registry-server" Nov 22 08:43:29 crc kubenswrapper[4853]: I1122 08:43:29.816182 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="9aaefba4-b308-4466-9189-2681b0949cdd" containerName="registry-server" Nov 22 08:43:29 crc kubenswrapper[4853]: E1122 08:43:29.816210 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9aaefba4-b308-4466-9189-2681b0949cdd" containerName="extract-utilities" Nov 22 08:43:29 crc kubenswrapper[4853]: I1122 08:43:29.816216 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="9aaefba4-b308-4466-9189-2681b0949cdd" containerName="extract-utilities" Nov 22 08:43:29 crc kubenswrapper[4853]: E1122 08:43:29.816248 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9aaefba4-b308-4466-9189-2681b0949cdd" containerName="extract-content" Nov 22 08:43:29 crc 
kubenswrapper[4853]: I1122 08:43:29.816254 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="9aaefba4-b308-4466-9189-2681b0949cdd" containerName="extract-content" Nov 22 08:43:29 crc kubenswrapper[4853]: I1122 08:43:29.816492 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="9aaefba4-b308-4466-9189-2681b0949cdd" containerName="registry-server" Nov 22 08:43:29 crc kubenswrapper[4853]: I1122 08:43:29.818250 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dw2lk" Nov 22 08:43:29 crc kubenswrapper[4853]: I1122 08:43:29.826396 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dw2lk"] Nov 22 08:43:29 crc kubenswrapper[4853]: I1122 08:43:29.877450 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/806e347e-89bd-4d80-9ec6-8fc03f7ec454-utilities\") pod \"redhat-operators-dw2lk\" (UID: \"806e347e-89bd-4d80-9ec6-8fc03f7ec454\") " pod="openshift-marketplace/redhat-operators-dw2lk" Nov 22 08:43:29 crc kubenswrapper[4853]: I1122 08:43:29.878004 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/806e347e-89bd-4d80-9ec6-8fc03f7ec454-catalog-content\") pod \"redhat-operators-dw2lk\" (UID: \"806e347e-89bd-4d80-9ec6-8fc03f7ec454\") " pod="openshift-marketplace/redhat-operators-dw2lk" Nov 22 08:43:29 crc kubenswrapper[4853]: I1122 08:43:29.878043 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8n79\" (UniqueName: \"kubernetes.io/projected/806e347e-89bd-4d80-9ec6-8fc03f7ec454-kube-api-access-w8n79\") pod \"redhat-operators-dw2lk\" (UID: \"806e347e-89bd-4d80-9ec6-8fc03f7ec454\") " pod="openshift-marketplace/redhat-operators-dw2lk" Nov 22 08:43:29 crc kubenswrapper[4853]: I1122 08:43:29.980518 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/806e347e-89bd-4d80-9ec6-8fc03f7ec454-catalog-content\") pod \"redhat-operators-dw2lk\" (UID: \"806e347e-89bd-4d80-9ec6-8fc03f7ec454\") " pod="openshift-marketplace/redhat-operators-dw2lk" Nov 22 08:43:29 crc kubenswrapper[4853]: I1122 08:43:29.980603 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w8n79\" (UniqueName: \"kubernetes.io/projected/806e347e-89bd-4d80-9ec6-8fc03f7ec454-kube-api-access-w8n79\") pod \"redhat-operators-dw2lk\" (UID: \"806e347e-89bd-4d80-9ec6-8fc03f7ec454\") " pod="openshift-marketplace/redhat-operators-dw2lk" Nov 22 08:43:29 crc kubenswrapper[4853]: I1122 08:43:29.981017 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/806e347e-89bd-4d80-9ec6-8fc03f7ec454-utilities\") pod \"redhat-operators-dw2lk\" (UID: \"806e347e-89bd-4d80-9ec6-8fc03f7ec454\") " pod="openshift-marketplace/redhat-operators-dw2lk" Nov 22 08:43:29 crc kubenswrapper[4853]: I1122 08:43:29.981112 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/806e347e-89bd-4d80-9ec6-8fc03f7ec454-catalog-content\") pod \"redhat-operators-dw2lk\" (UID: \"806e347e-89bd-4d80-9ec6-8fc03f7ec454\") " pod="openshift-marketplace/redhat-operators-dw2lk" Nov 22 08:43:29 crc 
kubenswrapper[4853]: I1122 08:43:29.981421 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/806e347e-89bd-4d80-9ec6-8fc03f7ec454-utilities\") pod \"redhat-operators-dw2lk\" (UID: \"806e347e-89bd-4d80-9ec6-8fc03f7ec454\") " pod="openshift-marketplace/redhat-operators-dw2lk" Nov 22 08:43:30 crc kubenswrapper[4853]: I1122 08:43:30.001367 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w8n79\" (UniqueName: \"kubernetes.io/projected/806e347e-89bd-4d80-9ec6-8fc03f7ec454-kube-api-access-w8n79\") pod \"redhat-operators-dw2lk\" (UID: \"806e347e-89bd-4d80-9ec6-8fc03f7ec454\") " pod="openshift-marketplace/redhat-operators-dw2lk" Nov 22 08:43:30 crc kubenswrapper[4853]: I1122 08:43:30.142453 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dw2lk" Nov 22 08:43:30 crc kubenswrapper[4853]: I1122 08:43:30.628537 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dw2lk"] Nov 22 08:43:30 crc kubenswrapper[4853]: I1122 08:43:30.748437 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dw2lk" event={"ID":"806e347e-89bd-4d80-9ec6-8fc03f7ec454","Type":"ContainerStarted","Data":"e4774934ea72d2c7f24ed506735742c4d5ba98168c7f98ba1aa9315aa6614c8d"} Nov 22 08:43:31 crc kubenswrapper[4853]: I1122 08:43:31.763257 4853 generic.go:334] "Generic (PLEG): container finished" podID="806e347e-89bd-4d80-9ec6-8fc03f7ec454" containerID="3e09dfe949d1c60a20b838aaec683114c3f940d9785ff0111c7562405d5f0b5e" exitCode=0 Nov 22 08:43:31 crc kubenswrapper[4853]: I1122 08:43:31.763372 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dw2lk" event={"ID":"806e347e-89bd-4d80-9ec6-8fc03f7ec454","Type":"ContainerDied","Data":"3e09dfe949d1c60a20b838aaec683114c3f940d9785ff0111c7562405d5f0b5e"} Nov 22 08:43:33 crc kubenswrapper[4853]: I1122 08:43:33.790907 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dw2lk" event={"ID":"806e347e-89bd-4d80-9ec6-8fc03f7ec454","Type":"ContainerStarted","Data":"8047665eae86957d583624bdef178f9b3d691818bdf1573ccb025bebe5a44f46"} Nov 22 08:43:36 crc kubenswrapper[4853]: I1122 08:43:36.816794 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-7b4f2" Nov 22 08:43:36 crc kubenswrapper[4853]: I1122 08:43:36.883988 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7b4f2"] Nov 22 08:43:37 crc kubenswrapper[4853]: I1122 08:43:37.835974 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-7b4f2" podUID="2258cfdf-1983-41d3-95f2-e45ee84fa2bf" containerName="registry-server" containerID="cri-o://54ab2d9c973c6fb39eaff0647a9dbc303a386cbcea9999876c3fa6b5b100d3ec" gracePeriod=2 Nov 22 08:43:38 crc kubenswrapper[4853]: I1122 08:43:38.412117 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7b4f2" Nov 22 08:43:38 crc kubenswrapper[4853]: I1122 08:43:38.509628 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zrnrc\" (UniqueName: \"kubernetes.io/projected/2258cfdf-1983-41d3-95f2-e45ee84fa2bf-kube-api-access-zrnrc\") pod \"2258cfdf-1983-41d3-95f2-e45ee84fa2bf\" (UID: \"2258cfdf-1983-41d3-95f2-e45ee84fa2bf\") " Nov 22 08:43:38 crc kubenswrapper[4853]: I1122 08:43:38.509699 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2258cfdf-1983-41d3-95f2-e45ee84fa2bf-utilities\") pod \"2258cfdf-1983-41d3-95f2-e45ee84fa2bf\" (UID: \"2258cfdf-1983-41d3-95f2-e45ee84fa2bf\") " Nov 22 08:43:38 crc kubenswrapper[4853]: I1122 08:43:38.509957 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2258cfdf-1983-41d3-95f2-e45ee84fa2bf-catalog-content\") pod \"2258cfdf-1983-41d3-95f2-e45ee84fa2bf\" (UID: \"2258cfdf-1983-41d3-95f2-e45ee84fa2bf\") " Nov 22 08:43:38 crc kubenswrapper[4853]: I1122 08:43:38.510586 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2258cfdf-1983-41d3-95f2-e45ee84fa2bf-utilities" (OuterVolumeSpecName: "utilities") pod "2258cfdf-1983-41d3-95f2-e45ee84fa2bf" (UID: "2258cfdf-1983-41d3-95f2-e45ee84fa2bf"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 08:43:38 crc kubenswrapper[4853]: I1122 08:43:38.511231 4853 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2258cfdf-1983-41d3-95f2-e45ee84fa2bf-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 08:43:38 crc kubenswrapper[4853]: I1122 08:43:38.517859 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2258cfdf-1983-41d3-95f2-e45ee84fa2bf-kube-api-access-zrnrc" (OuterVolumeSpecName: "kube-api-access-zrnrc") pod "2258cfdf-1983-41d3-95f2-e45ee84fa2bf" (UID: "2258cfdf-1983-41d3-95f2-e45ee84fa2bf"). InnerVolumeSpecName "kube-api-access-zrnrc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 08:43:38 crc kubenswrapper[4853]: I1122 08:43:38.572076 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2258cfdf-1983-41d3-95f2-e45ee84fa2bf-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2258cfdf-1983-41d3-95f2-e45ee84fa2bf" (UID: "2258cfdf-1983-41d3-95f2-e45ee84fa2bf"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 08:43:38 crc kubenswrapper[4853]: I1122 08:43:38.613408 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zrnrc\" (UniqueName: \"kubernetes.io/projected/2258cfdf-1983-41d3-95f2-e45ee84fa2bf-kube-api-access-zrnrc\") on node \"crc\" DevicePath \"\"" Nov 22 08:43:38 crc kubenswrapper[4853]: I1122 08:43:38.613470 4853 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2258cfdf-1983-41d3-95f2-e45ee84fa2bf-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 08:43:38 crc kubenswrapper[4853]: I1122 08:43:38.855089 4853 generic.go:334] "Generic (PLEG): container finished" podID="2258cfdf-1983-41d3-95f2-e45ee84fa2bf" containerID="54ab2d9c973c6fb39eaff0647a9dbc303a386cbcea9999876c3fa6b5b100d3ec" exitCode=0 Nov 22 08:43:38 crc kubenswrapper[4853]: I1122 08:43:38.855156 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7b4f2" event={"ID":"2258cfdf-1983-41d3-95f2-e45ee84fa2bf","Type":"ContainerDied","Data":"54ab2d9c973c6fb39eaff0647a9dbc303a386cbcea9999876c3fa6b5b100d3ec"} Nov 22 08:43:38 crc kubenswrapper[4853]: I1122 08:43:38.855234 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7b4f2" event={"ID":"2258cfdf-1983-41d3-95f2-e45ee84fa2bf","Type":"ContainerDied","Data":"3b61cce4c5df0a723cbab52986f6464ff3c767a2ba2a97530269154917945402"} Nov 22 08:43:38 crc kubenswrapper[4853]: I1122 08:43:38.855229 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7b4f2" Nov 22 08:43:38 crc kubenswrapper[4853]: I1122 08:43:38.855252 4853 scope.go:117] "RemoveContainer" containerID="54ab2d9c973c6fb39eaff0647a9dbc303a386cbcea9999876c3fa6b5b100d3ec" Nov 22 08:43:38 crc kubenswrapper[4853]: I1122 08:43:38.885140 4853 scope.go:117] "RemoveContainer" containerID="109ebf8e9da44658519c1497f692c669966d0a35ec01bad3fc1bcbc64bca2adf" Nov 22 08:43:38 crc kubenswrapper[4853]: I1122 08:43:38.892364 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7b4f2"] Nov 22 08:43:38 crc kubenswrapper[4853]: I1122 08:43:38.904504 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-7b4f2"] Nov 22 08:43:38 crc kubenswrapper[4853]: I1122 08:43:38.917581 4853 scope.go:117] "RemoveContainer" containerID="769147c8b753d4f6a014ffc8689dbb53b22e35b22becc23d59c86aa2a7c29bfa" Nov 22 08:43:38 crc kubenswrapper[4853]: I1122 08:43:38.975472 4853 scope.go:117] "RemoveContainer" containerID="54ab2d9c973c6fb39eaff0647a9dbc303a386cbcea9999876c3fa6b5b100d3ec" Nov 22 08:43:38 crc kubenswrapper[4853]: E1122 08:43:38.975933 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"54ab2d9c973c6fb39eaff0647a9dbc303a386cbcea9999876c3fa6b5b100d3ec\": container with ID starting with 54ab2d9c973c6fb39eaff0647a9dbc303a386cbcea9999876c3fa6b5b100d3ec not found: ID does not exist" containerID="54ab2d9c973c6fb39eaff0647a9dbc303a386cbcea9999876c3fa6b5b100d3ec" Nov 22 08:43:38 crc kubenswrapper[4853]: I1122 08:43:38.975974 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"54ab2d9c973c6fb39eaff0647a9dbc303a386cbcea9999876c3fa6b5b100d3ec"} err="failed to get container status 
\"54ab2d9c973c6fb39eaff0647a9dbc303a386cbcea9999876c3fa6b5b100d3ec\": rpc error: code = NotFound desc = could not find container \"54ab2d9c973c6fb39eaff0647a9dbc303a386cbcea9999876c3fa6b5b100d3ec\": container with ID starting with 54ab2d9c973c6fb39eaff0647a9dbc303a386cbcea9999876c3fa6b5b100d3ec not found: ID does not exist" Nov 22 08:43:38 crc kubenswrapper[4853]: I1122 08:43:38.976013 4853 scope.go:117] "RemoveContainer" containerID="109ebf8e9da44658519c1497f692c669966d0a35ec01bad3fc1bcbc64bca2adf" Nov 22 08:43:38 crc kubenswrapper[4853]: E1122 08:43:38.976942 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"109ebf8e9da44658519c1497f692c669966d0a35ec01bad3fc1bcbc64bca2adf\": container with ID starting with 109ebf8e9da44658519c1497f692c669966d0a35ec01bad3fc1bcbc64bca2adf not found: ID does not exist" containerID="109ebf8e9da44658519c1497f692c669966d0a35ec01bad3fc1bcbc64bca2adf" Nov 22 08:43:38 crc kubenswrapper[4853]: I1122 08:43:38.976970 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"109ebf8e9da44658519c1497f692c669966d0a35ec01bad3fc1bcbc64bca2adf"} err="failed to get container status \"109ebf8e9da44658519c1497f692c669966d0a35ec01bad3fc1bcbc64bca2adf\": rpc error: code = NotFound desc = could not find container \"109ebf8e9da44658519c1497f692c669966d0a35ec01bad3fc1bcbc64bca2adf\": container with ID starting with 109ebf8e9da44658519c1497f692c669966d0a35ec01bad3fc1bcbc64bca2adf not found: ID does not exist" Nov 22 08:43:38 crc kubenswrapper[4853]: I1122 08:43:38.976987 4853 scope.go:117] "RemoveContainer" containerID="769147c8b753d4f6a014ffc8689dbb53b22e35b22becc23d59c86aa2a7c29bfa" Nov 22 08:43:38 crc kubenswrapper[4853]: E1122 08:43:38.977288 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"769147c8b753d4f6a014ffc8689dbb53b22e35b22becc23d59c86aa2a7c29bfa\": container with ID starting with 769147c8b753d4f6a014ffc8689dbb53b22e35b22becc23d59c86aa2a7c29bfa not found: ID does not exist" containerID="769147c8b753d4f6a014ffc8689dbb53b22e35b22becc23d59c86aa2a7c29bfa" Nov 22 08:43:38 crc kubenswrapper[4853]: I1122 08:43:38.977313 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"769147c8b753d4f6a014ffc8689dbb53b22e35b22becc23d59c86aa2a7c29bfa"} err="failed to get container status \"769147c8b753d4f6a014ffc8689dbb53b22e35b22becc23d59c86aa2a7c29bfa\": rpc error: code = NotFound desc = could not find container \"769147c8b753d4f6a014ffc8689dbb53b22e35b22becc23d59c86aa2a7c29bfa\": container with ID starting with 769147c8b753d4f6a014ffc8689dbb53b22e35b22becc23d59c86aa2a7c29bfa not found: ID does not exist" Nov 22 08:43:39 crc kubenswrapper[4853]: I1122 08:43:39.766931 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2258cfdf-1983-41d3-95f2-e45ee84fa2bf" path="/var/lib/kubelet/pods/2258cfdf-1983-41d3-95f2-e45ee84fa2bf/volumes" Nov 22 08:43:41 crc kubenswrapper[4853]: I1122 08:43:41.747945 4853 scope.go:117] "RemoveContainer" containerID="ac53dccac3e4e33ec75240cd8fb4b48a597324a88a76775452fb92dfc45a4092" Nov 22 08:43:42 crc kubenswrapper[4853]: I1122 08:43:42.910359 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" 
event={"ID":"476c875a-2b87-419a-8042-0ba059620fd8","Type":"ContainerStarted","Data":"7beb6a25b69c936fa0660f13cb2d535b9bdcbbfde9cf3aee153d121ad7dacedc"} Nov 22 08:43:42 crc kubenswrapper[4853]: I1122 08:43:42.914380 4853 generic.go:334] "Generic (PLEG): container finished" podID="806e347e-89bd-4d80-9ec6-8fc03f7ec454" containerID="8047665eae86957d583624bdef178f9b3d691818bdf1573ccb025bebe5a44f46" exitCode=0 Nov 22 08:43:42 crc kubenswrapper[4853]: I1122 08:43:42.914430 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dw2lk" event={"ID":"806e347e-89bd-4d80-9ec6-8fc03f7ec454","Type":"ContainerDied","Data":"8047665eae86957d583624bdef178f9b3d691818bdf1573ccb025bebe5a44f46"} Nov 22 08:43:43 crc kubenswrapper[4853]: I1122 08:43:43.933275 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dw2lk" event={"ID":"806e347e-89bd-4d80-9ec6-8fc03f7ec454","Type":"ContainerStarted","Data":"83258f6d4f5dd4d3c784085c33d65164c700d46ce0ece666f81046769f0f2253"} Nov 22 08:43:43 crc kubenswrapper[4853]: I1122 08:43:43.966843 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-dw2lk" podStartSLOduration=3.300273878 podStartE2EDuration="14.966815435s" podCreationTimestamp="2025-11-22 08:43:29 +0000 UTC" firstStartedPulling="2025-11-22 08:43:31.765293129 +0000 UTC m=+5610.605915755" lastFinishedPulling="2025-11-22 08:43:43.431834686 +0000 UTC m=+5622.272457312" observedRunningTime="2025-11-22 08:43:43.961921751 +0000 UTC m=+5622.802544387" watchObservedRunningTime="2025-11-22 08:43:43.966815435 +0000 UTC m=+5622.807438061" Nov 22 08:43:50 crc kubenswrapper[4853]: I1122 08:43:50.143325 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-dw2lk" Nov 22 08:43:50 crc kubenswrapper[4853]: I1122 08:43:50.143787 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-dw2lk" Nov 22 08:43:51 crc kubenswrapper[4853]: I1122 08:43:51.195716 4853 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-dw2lk" podUID="806e347e-89bd-4d80-9ec6-8fc03f7ec454" containerName="registry-server" probeResult="failure" output=< Nov 22 08:43:51 crc kubenswrapper[4853]: timeout: failed to connect service ":50051" within 1s Nov 22 08:43:51 crc kubenswrapper[4853]: > Nov 22 08:44:01 crc kubenswrapper[4853]: I1122 08:44:01.190589 4853 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-dw2lk" podUID="806e347e-89bd-4d80-9ec6-8fc03f7ec454" containerName="registry-server" probeResult="failure" output=< Nov 22 08:44:01 crc kubenswrapper[4853]: timeout: failed to connect service ":50051" within 1s Nov 22 08:44:01 crc kubenswrapper[4853]: > Nov 22 08:44:11 crc kubenswrapper[4853]: I1122 08:44:11.197011 4853 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-dw2lk" podUID="806e347e-89bd-4d80-9ec6-8fc03f7ec454" containerName="registry-server" probeResult="failure" output=< Nov 22 08:44:11 crc kubenswrapper[4853]: timeout: failed to connect service ":50051" within 1s Nov 22 08:44:11 crc kubenswrapper[4853]: > Nov 22 08:44:16 crc kubenswrapper[4853]: I1122 08:44:16.959834 4853 patch_prober.go:28] interesting pod/console-859d4ccd9f-mfkwx container/console namespace/openshift-console: Readiness probe status=failure output="Get 
\"https://10.217.0.138:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Nov 22 08:44:16 crc kubenswrapper[4853]: I1122 08:44:16.960350 4853 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/console-859d4ccd9f-mfkwx" podUID="9ef15139-fdad-4e4c-a3bf-e1050c5bf716" containerName="console" probeResult="failure" output="Get \"https://10.217.0.138:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Nov 22 08:44:17 crc kubenswrapper[4853]: I1122 08:44:17.759583 4853 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="58a7dcf9-4712-4ffe-90d1-ea827dc02982" containerName="ceilometer-central-agent" probeResult="failure" output="command timed out" Nov 22 08:44:21 crc kubenswrapper[4853]: I1122 08:44:21.200233 4853 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-dw2lk" podUID="806e347e-89bd-4d80-9ec6-8fc03f7ec454" containerName="registry-server" probeResult="failure" output=< Nov 22 08:44:21 crc kubenswrapper[4853]: timeout: failed to connect service ":50051" within 1s Nov 22 08:44:21 crc kubenswrapper[4853]: > Nov 22 08:44:23 crc kubenswrapper[4853]: I1122 08:44:23.763370 4853 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="58a7dcf9-4712-4ffe-90d1-ea827dc02982" containerName="ceilometer-central-agent" probeResult="failure" output="command timed out" Nov 22 08:44:24 crc kubenswrapper[4853]: I1122 08:44:24.658777 4853 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/kube-state-metrics-0" podUID="00c18e6e-23ef-45c1-b7ce-5efb6d47f001" containerName="kube-state-metrics" probeResult="failure" output="Get \"https://10.217.0.206:8080/livez\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 22 08:44:24 crc kubenswrapper[4853]: I1122 08:44:24.756413 4853 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/prometheus-k8s-0" podUID="aa25b342-38ae-4493-8129-710611d886fa" containerName="prometheus" probeResult="failure" output="command timed out" Nov 22 08:44:24 crc kubenswrapper[4853]: I1122 08:44:24.757035 4853 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-k8s-0" podUID="aa25b342-38ae-4493-8129-710611d886fa" containerName="prometheus" probeResult="failure" output="command timed out" Nov 22 08:44:25 crc kubenswrapper[4853]: I1122 08:44:25.959640 4853 patch_prober.go:28] interesting pod/console-859d4ccd9f-mfkwx container/console namespace/openshift-console: Liveness probe status=failure output="Get \"https://10.217.0.138:8443/health\": context deadline exceeded" start-of-body= Nov 22 08:44:25 crc kubenswrapper[4853]: I1122 08:44:25.959930 4853 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/console-859d4ccd9f-mfkwx" podUID="9ef15139-fdad-4e4c-a3bf-e1050c5bf716" containerName="console" probeResult="failure" output="Get \"https://10.217.0.138:8443/health\": context deadline exceeded" Nov 22 08:44:25 crc kubenswrapper[4853]: I1122 08:44:25.959985 4853 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/console-859d4ccd9f-mfkwx" Nov 22 08:44:25 crc kubenswrapper[4853]: I1122 08:44:25.960835 4853 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="console" 
containerStatusID={"Type":"cri-o","ID":"5175cf9c836352f97332a2c0c6db07457d64d6ceafeca1b01793f0c6de4f5982"} pod="openshift-console/console-859d4ccd9f-mfkwx" containerMessage="Container console failed liveness probe, will be restarted" Nov 22 08:44:26 crc kubenswrapper[4853]: I1122 08:44:26.173376 4853 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:6443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Nov 22 08:44:26 crc kubenswrapper[4853]: I1122 08:44:26.173461 4853 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.126.11:6443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Nov 22 08:44:27 crc kubenswrapper[4853]: I1122 08:44:27.264334 4853 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="58a7dcf9-4712-4ffe-90d1-ea827dc02982" containerName="ceilometer-central-agent" probeResult="failure" output=< Nov 22 08:44:27 crc kubenswrapper[4853]: Unkown error: Expecting value: line 1 column 1 (char 0) Nov 22 08:44:27 crc kubenswrapper[4853]: > Nov 22 08:44:27 crc kubenswrapper[4853]: I1122 08:44:27.264624 4853 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/ceilometer-0" Nov 22 08:44:27 crc kubenswrapper[4853]: I1122 08:44:27.265648 4853 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="ceilometer-central-agent" containerStatusID={"Type":"cri-o","ID":"2ba3c6e6d3f2f9e73e1bf4340dd1dd9ce1ae870dfb4e51a567cc99925348540c"} pod="openstack/ceilometer-0" containerMessage="Container ceilometer-central-agent failed liveness probe, will be restarted" Nov 22 08:44:27 crc kubenswrapper[4853]: I1122 08:44:27.265728 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="58a7dcf9-4712-4ffe-90d1-ea827dc02982" containerName="ceilometer-central-agent" containerID="cri-o://2ba3c6e6d3f2f9e73e1bf4340dd1dd9ce1ae870dfb4e51a567cc99925348540c" gracePeriod=30 Nov 22 08:44:28 crc kubenswrapper[4853]: I1122 08:44:28.472182 4853 patch_prober.go:28] interesting pod/loki-operator-controller-manager-5bb8bb4577-rspn5 container/manager namespace/openshift-operators-redhat: Readiness probe status=failure output="Get \"http://10.217.0.50:8081/readyz\": dial tcp 10.217.0.50:8081: connect: connection refused" start-of-body= Nov 22 08:44:28 crc kubenswrapper[4853]: I1122 08:44:28.473334 4853 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators-redhat/loki-operator-controller-manager-5bb8bb4577-rspn5" podUID="50b94c6e-d5b7-4720-af4c-8922035ca146" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.50:8081/readyz\": dial tcp 10.217.0.50:8081: connect: connection refused" Nov 22 08:44:29 crc kubenswrapper[4853]: I1122 08:44:29.447285 4853 generic.go:334] "Generic (PLEG): container finished" podID="b7cfa3a7-05d9-4822-9fda-8316c75ee9a4" containerID="8b72c9339106c5b3a5bc464e597d60e4d67cc93a99edeb25176c4b2bf7c2c646" exitCode=1 Nov 22 08:44:29 crc kubenswrapper[4853]: I1122 08:44:29.447353 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="metallb-system/metallb-operator-controller-manager-559f7d85b8-xtjfd" event={"ID":"b7cfa3a7-05d9-4822-9fda-8316c75ee9a4","Type":"ContainerDied","Data":"8b72c9339106c5b3a5bc464e597d60e4d67cc93a99edeb25176c4b2bf7c2c646"} Nov 22 08:44:29 crc kubenswrapper[4853]: I1122 08:44:29.448725 4853 scope.go:117] "RemoveContainer" containerID="8b72c9339106c5b3a5bc464e597d60e4d67cc93a99edeb25176c4b2bf7c2c646" Nov 22 08:44:30 crc kubenswrapper[4853]: I1122 08:44:30.459858 4853 generic.go:334] "Generic (PLEG): container finished" podID="58a7dcf9-4712-4ffe-90d1-ea827dc02982" containerID="2ba3c6e6d3f2f9e73e1bf4340dd1dd9ce1ae870dfb4e51a567cc99925348540c" exitCode=0 Nov 22 08:44:30 crc kubenswrapper[4853]: I1122 08:44:30.459960 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"58a7dcf9-4712-4ffe-90d1-ea827dc02982","Type":"ContainerDied","Data":"2ba3c6e6d3f2f9e73e1bf4340dd1dd9ce1ae870dfb4e51a567cc99925348540c"} Nov 22 08:44:30 crc kubenswrapper[4853]: I1122 08:44:30.462693 4853 generic.go:334] "Generic (PLEG): container finished" podID="50b94c6e-d5b7-4720-af4c-8922035ca146" containerID="b00343fac87512ce675bb259b8a1f1021e60aaaea9286d4b790e5c63858ee976" exitCode=1 Nov 22 08:44:30 crc kubenswrapper[4853]: I1122 08:44:30.462730 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-5bb8bb4577-rspn5" event={"ID":"50b94c6e-d5b7-4720-af4c-8922035ca146","Type":"ContainerDied","Data":"b00343fac87512ce675bb259b8a1f1021e60aaaea9286d4b790e5c63858ee976"} Nov 22 08:44:30 crc kubenswrapper[4853]: I1122 08:44:30.463698 4853 scope.go:117] "RemoveContainer" containerID="b00343fac87512ce675bb259b8a1f1021e60aaaea9286d4b790e5c63858ee976" Nov 22 08:44:30 crc kubenswrapper[4853]: I1122 08:44:30.824855 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-559f7d85b8-xtjfd" Nov 22 08:44:31 crc kubenswrapper[4853]: I1122 08:44:31.195430 4853 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-dw2lk" podUID="806e347e-89bd-4d80-9ec6-8fc03f7ec454" containerName="registry-server" probeResult="failure" output=< Nov 22 08:44:31 crc kubenswrapper[4853]: timeout: failed to connect service ":50051" within 1s Nov 22 08:44:31 crc kubenswrapper[4853]: > Nov 22 08:44:31 crc kubenswrapper[4853]: I1122 08:44:31.347686 4853 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 22 08:44:31 crc kubenswrapper[4853]: I1122 08:44:31.474384 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-559f7d85b8-xtjfd" event={"ID":"b7cfa3a7-05d9-4822-9fda-8316c75ee9a4","Type":"ContainerStarted","Data":"971fc995c32875bd190b57191745451c0becb3027e2e0847d2d6473d0d89cc70"} Nov 22 08:44:31 crc kubenswrapper[4853]: I1122 08:44:31.474486 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-559f7d85b8-xtjfd" Nov 22 08:44:32 crc kubenswrapper[4853]: I1122 08:44:32.487615 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-5bb8bb4577-rspn5" event={"ID":"50b94c6e-d5b7-4720-af4c-8922035ca146","Type":"ContainerStarted","Data":"cb67ae2db58af05dc81a47de9a6ec062d18fc3a21dc8cbc43dc21acb3006a5d7"} Nov 22 08:44:32 crc kubenswrapper[4853]: I1122 08:44:32.488423 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-operators-redhat/loki-operator-controller-manager-5bb8bb4577-rspn5" Nov 22 08:44:38 crc kubenswrapper[4853]: I1122 08:44:38.473652 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators-redhat/loki-operator-controller-manager-5bb8bb4577-rspn5" Nov 22 08:44:38 crc kubenswrapper[4853]: I1122 08:44:38.562320 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"58a7dcf9-4712-4ffe-90d1-ea827dc02982","Type":"ContainerStarted","Data":"7d86b591b1ab6674fed12805e7632df3d22b515de005412b81077ee40dc69cc2"} Nov 22 08:44:41 crc kubenswrapper[4853]: I1122 08:44:41.204190 4853 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-dw2lk" podUID="806e347e-89bd-4d80-9ec6-8fc03f7ec454" containerName="registry-server" probeResult="failure" output=< Nov 22 08:44:41 crc kubenswrapper[4853]: timeout: failed to connect service ":50051" within 1s Nov 22 08:44:41 crc kubenswrapper[4853]: > Nov 22 08:44:51 crc kubenswrapper[4853]: I1122 08:44:51.032673 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-859d4ccd9f-mfkwx" podUID="9ef15139-fdad-4e4c-a3bf-e1050c5bf716" containerName="console" containerID="cri-o://5175cf9c836352f97332a2c0c6db07457d64d6ceafeca1b01793f0c6de4f5982" gracePeriod=15 Nov 22 08:44:51 crc kubenswrapper[4853]: I1122 08:44:51.523029 4853 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-dw2lk" podUID="806e347e-89bd-4d80-9ec6-8fc03f7ec454" containerName="registry-server" probeResult="failure" output=< Nov 22 08:44:51 crc kubenswrapper[4853]: timeout: failed to connect service ":50051" within 1s Nov 22 08:44:51 crc kubenswrapper[4853]: > Nov 22 08:44:51 crc kubenswrapper[4853]: I1122 08:44:51.714288 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-859d4ccd9f-mfkwx_9ef15139-fdad-4e4c-a3bf-e1050c5bf716/console/0.log" Nov 22 08:44:51 crc kubenswrapper[4853]: I1122 08:44:51.714330 4853 generic.go:334] "Generic (PLEG): container finished" podID="9ef15139-fdad-4e4c-a3bf-e1050c5bf716" containerID="5175cf9c836352f97332a2c0c6db07457d64d6ceafeca1b01793f0c6de4f5982" exitCode=2 Nov 22 08:44:51 crc kubenswrapper[4853]: I1122 08:44:51.714361 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-859d4ccd9f-mfkwx" event={"ID":"9ef15139-fdad-4e4c-a3bf-e1050c5bf716","Type":"ContainerDied","Data":"5175cf9c836352f97332a2c0c6db07457d64d6ceafeca1b01793f0c6de4f5982"} Nov 22 08:44:52 crc kubenswrapper[4853]: I1122 08:44:52.727607 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-859d4ccd9f-mfkwx_9ef15139-fdad-4e4c-a3bf-e1050c5bf716/console/0.log" Nov 22 08:44:52 crc kubenswrapper[4853]: I1122 08:44:52.728259 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-859d4ccd9f-mfkwx" event={"ID":"9ef15139-fdad-4e4c-a3bf-e1050c5bf716","Type":"ContainerStarted","Data":"ddc0150e706a5eeba12dc6c8494d24b44ca32509d9644927e22bcb1b45c63f32"} Nov 22 08:44:55 crc kubenswrapper[4853]: I1122 08:44:55.958498 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-859d4ccd9f-mfkwx" Nov 22 08:44:55 crc kubenswrapper[4853]: I1122 08:44:55.959155 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-859d4ccd9f-mfkwx" Nov 22 08:44:55 crc kubenswrapper[4853]: I1122 
08:44:55.962202 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-859d4ccd9f-mfkwx" Nov 22 08:44:56 crc kubenswrapper[4853]: I1122 08:44:56.773276 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-859d4ccd9f-mfkwx" Nov 22 08:44:59 crc kubenswrapper[4853]: I1122 08:44:59.893875 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"] Nov 22 08:44:59 crc kubenswrapper[4853]: E1122 08:44:59.895004 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2258cfdf-1983-41d3-95f2-e45ee84fa2bf" containerName="extract-content" Nov 22 08:44:59 crc kubenswrapper[4853]: I1122 08:44:59.895021 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="2258cfdf-1983-41d3-95f2-e45ee84fa2bf" containerName="extract-content" Nov 22 08:44:59 crc kubenswrapper[4853]: E1122 08:44:59.895045 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2258cfdf-1983-41d3-95f2-e45ee84fa2bf" containerName="extract-utilities" Nov 22 08:44:59 crc kubenswrapper[4853]: I1122 08:44:59.895052 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="2258cfdf-1983-41d3-95f2-e45ee84fa2bf" containerName="extract-utilities" Nov 22 08:44:59 crc kubenswrapper[4853]: E1122 08:44:59.895101 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2258cfdf-1983-41d3-95f2-e45ee84fa2bf" containerName="registry-server" Nov 22 08:44:59 crc kubenswrapper[4853]: I1122 08:44:59.895137 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="2258cfdf-1983-41d3-95f2-e45ee84fa2bf" containerName="registry-server" Nov 22 08:44:59 crc kubenswrapper[4853]: I1122 08:44:59.895414 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="2258cfdf-1983-41d3-95f2-e45ee84fa2bf" containerName="registry-server" Nov 22 08:44:59 crc kubenswrapper[4853]: I1122 08:44:59.896378 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Nov 22 08:44:59 crc kubenswrapper[4853]: I1122 08:44:59.900296 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Nov 22 08:44:59 crc kubenswrapper[4853]: I1122 08:44:59.900485 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Nov 22 08:44:59 crc kubenswrapper[4853]: I1122 08:44:59.900586 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-24rpp" Nov 22 08:44:59 crc kubenswrapper[4853]: I1122 08:44:59.900827 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Nov 22 08:44:59 crc kubenswrapper[4853]: I1122 08:44:59.908525 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Nov 22 08:44:59 crc kubenswrapper[4853]: I1122 08:44:59.977155 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1f255ef5-a59e-42c4-9ac7-ff33562499f6-config-data\") pod \"tempest-tests-tempest\" (UID: \"1f255ef5-a59e-42c4-9ac7-ff33562499f6\") " pod="openstack/tempest-tests-tempest" Nov 22 08:44:59 crc kubenswrapper[4853]: I1122 08:44:59.977218 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/1f255ef5-a59e-42c4-9ac7-ff33562499f6-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"1f255ef5-a59e-42c4-9ac7-ff33562499f6\") " pod="openstack/tempest-tests-tempest" Nov 22 08:44:59 crc kubenswrapper[4853]: I1122 08:44:59.977487 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/1f255ef5-a59e-42c4-9ac7-ff33562499f6-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"1f255ef5-a59e-42c4-9ac7-ff33562499f6\") " pod="openstack/tempest-tests-tempest" Nov 22 08:45:00 crc kubenswrapper[4853]: I1122 08:45:00.080337 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1f255ef5-a59e-42c4-9ac7-ff33562499f6-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"1f255ef5-a59e-42c4-9ac7-ff33562499f6\") " pod="openstack/tempest-tests-tempest" Nov 22 08:45:00 crc kubenswrapper[4853]: I1122 08:45:00.080395 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/1f255ef5-a59e-42c4-9ac7-ff33562499f6-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"1f255ef5-a59e-42c4-9ac7-ff33562499f6\") " pod="openstack/tempest-tests-tempest" Nov 22 08:45:00 crc kubenswrapper[4853]: I1122 08:45:00.080445 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1f255ef5-a59e-42c4-9ac7-ff33562499f6-config-data\") pod \"tempest-tests-tempest\" (UID: \"1f255ef5-a59e-42c4-9ac7-ff33562499f6\") " pod="openstack/tempest-tests-tempest" Nov 22 08:45:00 crc kubenswrapper[4853]: I1122 08:45:00.080514 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/1f255ef5-a59e-42c4-9ac7-ff33562499f6-openstack-config-secret\") pod \"tempest-tests-tempest\" 
(UID: \"1f255ef5-a59e-42c4-9ac7-ff33562499f6\") " pod="openstack/tempest-tests-tempest" Nov 22 08:45:00 crc kubenswrapper[4853]: I1122 08:45:00.080604 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/1f255ef5-a59e-42c4-9ac7-ff33562499f6-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"1f255ef5-a59e-42c4-9ac7-ff33562499f6\") " pod="openstack/tempest-tests-tempest" Nov 22 08:45:00 crc kubenswrapper[4853]: I1122 08:45:00.080651 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/1f255ef5-a59e-42c4-9ac7-ff33562499f6-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"1f255ef5-a59e-42c4-9ac7-ff33562499f6\") " pod="openstack/tempest-tests-tempest" Nov 22 08:45:00 crc kubenswrapper[4853]: I1122 08:45:00.080726 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/1f255ef5-a59e-42c4-9ac7-ff33562499f6-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"1f255ef5-a59e-42c4-9ac7-ff33562499f6\") " pod="openstack/tempest-tests-tempest" Nov 22 08:45:00 crc kubenswrapper[4853]: I1122 08:45:00.080799 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"tempest-tests-tempest\" (UID: \"1f255ef5-a59e-42c4-9ac7-ff33562499f6\") " pod="openstack/tempest-tests-tempest" Nov 22 08:45:00 crc kubenswrapper[4853]: I1122 08:45:00.080886 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrb84\" (UniqueName: \"kubernetes.io/projected/1f255ef5-a59e-42c4-9ac7-ff33562499f6-kube-api-access-vrb84\") pod \"tempest-tests-tempest\" (UID: \"1f255ef5-a59e-42c4-9ac7-ff33562499f6\") " pod="openstack/tempest-tests-tempest" Nov 22 08:45:00 crc kubenswrapper[4853]: I1122 08:45:00.081801 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/1f255ef5-a59e-42c4-9ac7-ff33562499f6-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"1f255ef5-a59e-42c4-9ac7-ff33562499f6\") " pod="openstack/tempest-tests-tempest" Nov 22 08:45:00 crc kubenswrapper[4853]: I1122 08:45:00.099046 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1f255ef5-a59e-42c4-9ac7-ff33562499f6-config-data\") pod \"tempest-tests-tempest\" (UID: \"1f255ef5-a59e-42c4-9ac7-ff33562499f6\") " pod="openstack/tempest-tests-tempest" Nov 22 08:45:00 crc kubenswrapper[4853]: I1122 08:45:00.140456 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396685-86gss"] Nov 22 08:45:00 crc kubenswrapper[4853]: I1122 08:45:00.142526 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396685-86gss" Nov 22 08:45:00 crc kubenswrapper[4853]: I1122 08:45:00.146122 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 22 08:45:00 crc kubenswrapper[4853]: I1122 08:45:00.147512 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 22 08:45:00 crc kubenswrapper[4853]: I1122 08:45:00.153643 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396685-86gss"] Nov 22 08:45:00 crc kubenswrapper[4853]: I1122 08:45:00.182593 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vrb84\" (UniqueName: \"kubernetes.io/projected/1f255ef5-a59e-42c4-9ac7-ff33562499f6-kube-api-access-vrb84\") pod \"tempest-tests-tempest\" (UID: \"1f255ef5-a59e-42c4-9ac7-ff33562499f6\") " pod="openstack/tempest-tests-tempest" Nov 22 08:45:00 crc kubenswrapper[4853]: I1122 08:45:00.182644 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1f255ef5-a59e-42c4-9ac7-ff33562499f6-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"1f255ef5-a59e-42c4-9ac7-ff33562499f6\") " pod="openstack/tempest-tests-tempest" Nov 22 08:45:00 crc kubenswrapper[4853]: I1122 08:45:00.182668 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/1f255ef5-a59e-42c4-9ac7-ff33562499f6-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"1f255ef5-a59e-42c4-9ac7-ff33562499f6\") " pod="openstack/tempest-tests-tempest" Nov 22 08:45:00 crc kubenswrapper[4853]: I1122 08:45:00.183334 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/1f255ef5-a59e-42c4-9ac7-ff33562499f6-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"1f255ef5-a59e-42c4-9ac7-ff33562499f6\") " pod="openstack/tempest-tests-tempest" Nov 22 08:45:00 crc kubenswrapper[4853]: I1122 08:45:00.183415 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/1f255ef5-a59e-42c4-9ac7-ff33562499f6-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"1f255ef5-a59e-42c4-9ac7-ff33562499f6\") " pod="openstack/tempest-tests-tempest" Nov 22 08:45:00 crc kubenswrapper[4853]: I1122 08:45:00.183467 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"tempest-tests-tempest\" (UID: \"1f255ef5-a59e-42c4-9ac7-ff33562499f6\") " pod="openstack/tempest-tests-tempest" Nov 22 08:45:00 crc kubenswrapper[4853]: I1122 08:45:00.185046 4853 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"tempest-tests-tempest\" (UID: \"1f255ef5-a59e-42c4-9ac7-ff33562499f6\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/tempest-tests-tempest" Nov 22 08:45:00 crc kubenswrapper[4853]: I1122 08:45:00.185627 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: 
\"kubernetes.io/empty-dir/1f255ef5-a59e-42c4-9ac7-ff33562499f6-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"1f255ef5-a59e-42c4-9ac7-ff33562499f6\") " pod="openstack/tempest-tests-tempest" Nov 22 08:45:00 crc kubenswrapper[4853]: I1122 08:45:00.186599 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/1f255ef5-a59e-42c4-9ac7-ff33562499f6-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"1f255ef5-a59e-42c4-9ac7-ff33562499f6\") " pod="openstack/tempest-tests-tempest" Nov 22 08:45:00 crc kubenswrapper[4853]: I1122 08:45:00.204061 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-dw2lk" Nov 22 08:45:00 crc kubenswrapper[4853]: I1122 08:45:00.207077 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1f255ef5-a59e-42c4-9ac7-ff33562499f6-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"1f255ef5-a59e-42c4-9ac7-ff33562499f6\") " pod="openstack/tempest-tests-tempest" Nov 22 08:45:00 crc kubenswrapper[4853]: I1122 08:45:00.207981 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vrb84\" (UniqueName: \"kubernetes.io/projected/1f255ef5-a59e-42c4-9ac7-ff33562499f6-kube-api-access-vrb84\") pod \"tempest-tests-tempest\" (UID: \"1f255ef5-a59e-42c4-9ac7-ff33562499f6\") " pod="openstack/tempest-tests-tempest" Nov 22 08:45:00 crc kubenswrapper[4853]: I1122 08:45:00.208410 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/1f255ef5-a59e-42c4-9ac7-ff33562499f6-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"1f255ef5-a59e-42c4-9ac7-ff33562499f6\") " pod="openstack/tempest-tests-tempest" Nov 22 08:45:00 crc kubenswrapper[4853]: I1122 08:45:00.212693 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/1f255ef5-a59e-42c4-9ac7-ff33562499f6-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"1f255ef5-a59e-42c4-9ac7-ff33562499f6\") " pod="openstack/tempest-tests-tempest" Nov 22 08:45:00 crc kubenswrapper[4853]: I1122 08:45:00.229291 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"tempest-tests-tempest\" (UID: \"1f255ef5-a59e-42c4-9ac7-ff33562499f6\") " pod="openstack/tempest-tests-tempest" Nov 22 08:45:00 crc kubenswrapper[4853]: I1122 08:45:00.264068 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-dw2lk" Nov 22 08:45:00 crc kubenswrapper[4853]: I1122 08:45:00.288069 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c0aeaa3f-2649-4806-a7be-6c8d677b2122-config-volume\") pod \"collect-profiles-29396685-86gss\" (UID: \"c0aeaa3f-2649-4806-a7be-6c8d677b2122\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396685-86gss" Nov 22 08:45:00 crc kubenswrapper[4853]: I1122 08:45:00.288367 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c0aeaa3f-2649-4806-a7be-6c8d677b2122-secret-volume\") pod \"collect-profiles-29396685-86gss\" 
(UID: \"c0aeaa3f-2649-4806-a7be-6c8d677b2122\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396685-86gss" Nov 22 08:45:00 crc kubenswrapper[4853]: I1122 08:45:00.288421 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96msn\" (UniqueName: \"kubernetes.io/projected/c0aeaa3f-2649-4806-a7be-6c8d677b2122-kube-api-access-96msn\") pod \"collect-profiles-29396685-86gss\" (UID: \"c0aeaa3f-2649-4806-a7be-6c8d677b2122\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396685-86gss" Nov 22 08:45:00 crc kubenswrapper[4853]: I1122 08:45:00.390718 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c0aeaa3f-2649-4806-a7be-6c8d677b2122-config-volume\") pod \"collect-profiles-29396685-86gss\" (UID: \"c0aeaa3f-2649-4806-a7be-6c8d677b2122\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396685-86gss" Nov 22 08:45:00 crc kubenswrapper[4853]: I1122 08:45:00.390839 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c0aeaa3f-2649-4806-a7be-6c8d677b2122-secret-volume\") pod \"collect-profiles-29396685-86gss\" (UID: \"c0aeaa3f-2649-4806-a7be-6c8d677b2122\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396685-86gss" Nov 22 08:45:00 crc kubenswrapper[4853]: I1122 08:45:00.390860 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-96msn\" (UniqueName: \"kubernetes.io/projected/c0aeaa3f-2649-4806-a7be-6c8d677b2122-kube-api-access-96msn\") pod \"collect-profiles-29396685-86gss\" (UID: \"c0aeaa3f-2649-4806-a7be-6c8d677b2122\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396685-86gss" Nov 22 08:45:00 crc kubenswrapper[4853]: I1122 08:45:00.394057 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c0aeaa3f-2649-4806-a7be-6c8d677b2122-config-volume\") pod \"collect-profiles-29396685-86gss\" (UID: \"c0aeaa3f-2649-4806-a7be-6c8d677b2122\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396685-86gss" Nov 22 08:45:00 crc kubenswrapper[4853]: I1122 08:45:00.394662 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c0aeaa3f-2649-4806-a7be-6c8d677b2122-secret-volume\") pod \"collect-profiles-29396685-86gss\" (UID: \"c0aeaa3f-2649-4806-a7be-6c8d677b2122\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396685-86gss" Nov 22 08:45:00 crc kubenswrapper[4853]: I1122 08:45:00.407817 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-96msn\" (UniqueName: \"kubernetes.io/projected/c0aeaa3f-2649-4806-a7be-6c8d677b2122-kube-api-access-96msn\") pod \"collect-profiles-29396685-86gss\" (UID: \"c0aeaa3f-2649-4806-a7be-6c8d677b2122\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396685-86gss" Nov 22 08:45:00 crc kubenswrapper[4853]: I1122 08:45:00.444392 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dw2lk"] Nov 22 08:45:00 crc kubenswrapper[4853]: I1122 08:45:00.527256 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Nov 22 08:45:00 crc kubenswrapper[4853]: I1122 08:45:00.611059 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396685-86gss" Nov 22 08:45:00 crc kubenswrapper[4853]: I1122 08:45:00.830940 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-559f7d85b8-xtjfd" Nov 22 08:45:01 crc kubenswrapper[4853]: I1122 08:45:01.265230 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Nov 22 08:45:01 crc kubenswrapper[4853]: I1122 08:45:01.841812 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"1f255ef5-a59e-42c4-9ac7-ff33562499f6","Type":"ContainerStarted","Data":"72d5d2d14e5f6004a4a0ed9f5350e810594a75ec1399c480ef72a06baf7d1e42"} Nov 22 08:45:01 crc kubenswrapper[4853]: I1122 08:45:01.841975 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-dw2lk" podUID="806e347e-89bd-4d80-9ec6-8fc03f7ec454" containerName="registry-server" containerID="cri-o://83258f6d4f5dd4d3c784085c33d65164c700d46ce0ece666f81046769f0f2253" gracePeriod=2 Nov 22 08:45:03 crc kubenswrapper[4853]: I1122 08:45:02.434895 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396685-86gss"] Nov 22 08:45:03 crc kubenswrapper[4853]: W1122 08:45:02.509491 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc0aeaa3f_2649_4806_a7be_6c8d677b2122.slice/crio-2f2b0fc7fd68e6f96035356c6631a74eb5684e461180402b2dd7e933ee4138e7 WatchSource:0}: Error finding container 2f2b0fc7fd68e6f96035356c6631a74eb5684e461180402b2dd7e933ee4138e7: Status 404 returned error can't find the container with id 2f2b0fc7fd68e6f96035356c6631a74eb5684e461180402b2dd7e933ee4138e7 Nov 22 08:45:03 crc kubenswrapper[4853]: I1122 08:45:02.871891 4853 generic.go:334] "Generic (PLEG): container finished" podID="806e347e-89bd-4d80-9ec6-8fc03f7ec454" containerID="83258f6d4f5dd4d3c784085c33d65164c700d46ce0ece666f81046769f0f2253" exitCode=0 Nov 22 08:45:03 crc kubenswrapper[4853]: I1122 08:45:02.871983 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dw2lk" event={"ID":"806e347e-89bd-4d80-9ec6-8fc03f7ec454","Type":"ContainerDied","Data":"83258f6d4f5dd4d3c784085c33d65164c700d46ce0ece666f81046769f0f2253"} Nov 22 08:45:03 crc kubenswrapper[4853]: I1122 08:45:02.876622 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29396685-86gss" event={"ID":"c0aeaa3f-2649-4806-a7be-6c8d677b2122","Type":"ContainerStarted","Data":"2f2b0fc7fd68e6f96035356c6631a74eb5684e461180402b2dd7e933ee4138e7"} Nov 22 08:45:03 crc kubenswrapper[4853]: I1122 08:45:03.894793 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dw2lk" event={"ID":"806e347e-89bd-4d80-9ec6-8fc03f7ec454","Type":"ContainerDied","Data":"e4774934ea72d2c7f24ed506735742c4d5ba98168c7f98ba1aa9315aa6614c8d"} Nov 22 08:45:03 crc kubenswrapper[4853]: I1122 08:45:03.895423 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e4774934ea72d2c7f24ed506735742c4d5ba98168c7f98ba1aa9315aa6614c8d" Nov 22 08:45:03 crc kubenswrapper[4853]: 
I1122 08:45:03.897389 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29396685-86gss" event={"ID":"c0aeaa3f-2649-4806-a7be-6c8d677b2122","Type":"ContainerStarted","Data":"696ce928cf212f11602255a18c97d0f2e012e437e8665a30203d28b43c2bbb8d"} Nov 22 08:45:03 crc kubenswrapper[4853]: I1122 08:45:03.919550 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29396685-86gss" podStartSLOduration=3.919475565 podStartE2EDuration="3.919475565s" podCreationTimestamp="2025-11-22 08:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 08:45:03.916166635 +0000 UTC m=+5702.756789281" watchObservedRunningTime="2025-11-22 08:45:03.919475565 +0000 UTC m=+5702.760098191" Nov 22 08:45:03 crc kubenswrapper[4853]: I1122 08:45:03.982301 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dw2lk" Nov 22 08:45:04 crc kubenswrapper[4853]: I1122 08:45:04.096032 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/806e347e-89bd-4d80-9ec6-8fc03f7ec454-utilities\") pod \"806e347e-89bd-4d80-9ec6-8fc03f7ec454\" (UID: \"806e347e-89bd-4d80-9ec6-8fc03f7ec454\") " Nov 22 08:45:04 crc kubenswrapper[4853]: I1122 08:45:04.096502 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w8n79\" (UniqueName: \"kubernetes.io/projected/806e347e-89bd-4d80-9ec6-8fc03f7ec454-kube-api-access-w8n79\") pod \"806e347e-89bd-4d80-9ec6-8fc03f7ec454\" (UID: \"806e347e-89bd-4d80-9ec6-8fc03f7ec454\") " Nov 22 08:45:04 crc kubenswrapper[4853]: I1122 08:45:04.096568 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/806e347e-89bd-4d80-9ec6-8fc03f7ec454-catalog-content\") pod \"806e347e-89bd-4d80-9ec6-8fc03f7ec454\" (UID: \"806e347e-89bd-4d80-9ec6-8fc03f7ec454\") " Nov 22 08:45:04 crc kubenswrapper[4853]: I1122 08:45:04.099667 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/806e347e-89bd-4d80-9ec6-8fc03f7ec454-utilities" (OuterVolumeSpecName: "utilities") pod "806e347e-89bd-4d80-9ec6-8fc03f7ec454" (UID: "806e347e-89bd-4d80-9ec6-8fc03f7ec454"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 08:45:04 crc kubenswrapper[4853]: I1122 08:45:04.117031 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/806e347e-89bd-4d80-9ec6-8fc03f7ec454-kube-api-access-w8n79" (OuterVolumeSpecName: "kube-api-access-w8n79") pod "806e347e-89bd-4d80-9ec6-8fc03f7ec454" (UID: "806e347e-89bd-4d80-9ec6-8fc03f7ec454"). InnerVolumeSpecName "kube-api-access-w8n79". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 08:45:04 crc kubenswrapper[4853]: I1122 08:45:04.166700 4853 scope.go:117] "RemoveContainer" containerID="c2e7dcd797c2d13106efac963f3e0477dddaeb7094c226eb61d415088b524568" Nov 22 08:45:04 crc kubenswrapper[4853]: I1122 08:45:04.204004 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w8n79\" (UniqueName: \"kubernetes.io/projected/806e347e-89bd-4d80-9ec6-8fc03f7ec454-kube-api-access-w8n79\") on node \"crc\" DevicePath \"\"" Nov 22 08:45:04 crc kubenswrapper[4853]: I1122 08:45:04.204071 4853 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/806e347e-89bd-4d80-9ec6-8fc03f7ec454-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 08:45:04 crc kubenswrapper[4853]: I1122 08:45:04.205336 4853 scope.go:117] "RemoveContainer" containerID="68c70671da02a58f148b3b6ba61e1ada3c9e9b7c7c90323ab2d5f5e2d1733c69" Nov 22 08:45:04 crc kubenswrapper[4853]: I1122 08:45:04.230214 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/806e347e-89bd-4d80-9ec6-8fc03f7ec454-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "806e347e-89bd-4d80-9ec6-8fc03f7ec454" (UID: "806e347e-89bd-4d80-9ec6-8fc03f7ec454"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 08:45:04 crc kubenswrapper[4853]: I1122 08:45:04.244269 4853 scope.go:117] "RemoveContainer" containerID="e4078b22bc3334e97a8cf0dcc080770f995786bf11697d4de42d5b542e0989e2" Nov 22 08:45:04 crc kubenswrapper[4853]: I1122 08:45:04.307299 4853 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/806e347e-89bd-4d80-9ec6-8fc03f7ec454-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 08:45:04 crc kubenswrapper[4853]: I1122 08:45:04.913455 4853 generic.go:334] "Generic (PLEG): container finished" podID="c0aeaa3f-2649-4806-a7be-6c8d677b2122" containerID="696ce928cf212f11602255a18c97d0f2e012e437e8665a30203d28b43c2bbb8d" exitCode=0 Nov 22 08:45:04 crc kubenswrapper[4853]: I1122 08:45:04.913525 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29396685-86gss" event={"ID":"c0aeaa3f-2649-4806-a7be-6c8d677b2122","Type":"ContainerDied","Data":"696ce928cf212f11602255a18c97d0f2e012e437e8665a30203d28b43c2bbb8d"} Nov 22 08:45:04 crc kubenswrapper[4853]: I1122 08:45:04.913536 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dw2lk" Nov 22 08:45:04 crc kubenswrapper[4853]: I1122 08:45:04.967917 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dw2lk"] Nov 22 08:45:04 crc kubenswrapper[4853]: I1122 08:45:04.977859 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-dw2lk"] Nov 22 08:45:05 crc kubenswrapper[4853]: I1122 08:45:05.792162 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="806e347e-89bd-4d80-9ec6-8fc03f7ec454" path="/var/lib/kubelet/pods/806e347e-89bd-4d80-9ec6-8fc03f7ec454/volumes" Nov 22 08:45:14 crc kubenswrapper[4853]: I1122 08:45:14.390695 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396685-86gss" Nov 22 08:45:14 crc kubenswrapper[4853]: I1122 08:45:14.492735 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c0aeaa3f-2649-4806-a7be-6c8d677b2122-config-volume\") pod \"c0aeaa3f-2649-4806-a7be-6c8d677b2122\" (UID: \"c0aeaa3f-2649-4806-a7be-6c8d677b2122\") " Nov 22 08:45:14 crc kubenswrapper[4853]: I1122 08:45:14.492817 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c0aeaa3f-2649-4806-a7be-6c8d677b2122-secret-volume\") pod \"c0aeaa3f-2649-4806-a7be-6c8d677b2122\" (UID: \"c0aeaa3f-2649-4806-a7be-6c8d677b2122\") " Nov 22 08:45:14 crc kubenswrapper[4853]: I1122 08:45:14.492994 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-96msn\" (UniqueName: \"kubernetes.io/projected/c0aeaa3f-2649-4806-a7be-6c8d677b2122-kube-api-access-96msn\") pod \"c0aeaa3f-2649-4806-a7be-6c8d677b2122\" (UID: \"c0aeaa3f-2649-4806-a7be-6c8d677b2122\") " Nov 22 08:45:14 crc kubenswrapper[4853]: I1122 08:45:14.541421 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0aeaa3f-2649-4806-a7be-6c8d677b2122-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "c0aeaa3f-2649-4806-a7be-6c8d677b2122" (UID: "c0aeaa3f-2649-4806-a7be-6c8d677b2122"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:45:14 crc kubenswrapper[4853]: I1122 08:45:14.542359 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c0aeaa3f-2649-4806-a7be-6c8d677b2122-config-volume" (OuterVolumeSpecName: "config-volume") pod "c0aeaa3f-2649-4806-a7be-6c8d677b2122" (UID: "c0aeaa3f-2649-4806-a7be-6c8d677b2122"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 08:45:14 crc kubenswrapper[4853]: I1122 08:45:14.570264 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c0aeaa3f-2649-4806-a7be-6c8d677b2122-kube-api-access-96msn" (OuterVolumeSpecName: "kube-api-access-96msn") pod "c0aeaa3f-2649-4806-a7be-6c8d677b2122" (UID: "c0aeaa3f-2649-4806-a7be-6c8d677b2122"). InnerVolumeSpecName "kube-api-access-96msn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 08:45:14 crc kubenswrapper[4853]: I1122 08:45:14.597148 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-96msn\" (UniqueName: \"kubernetes.io/projected/c0aeaa3f-2649-4806-a7be-6c8d677b2122-kube-api-access-96msn\") on node \"crc\" DevicePath \"\"" Nov 22 08:45:14 crc kubenswrapper[4853]: I1122 08:45:14.597190 4853 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c0aeaa3f-2649-4806-a7be-6c8d677b2122-config-volume\") on node \"crc\" DevicePath \"\"" Nov 22 08:45:14 crc kubenswrapper[4853]: I1122 08:45:14.597201 4853 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c0aeaa3f-2649-4806-a7be-6c8d677b2122-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 22 08:45:15 crc kubenswrapper[4853]: I1122 08:45:15.059522 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29396685-86gss" event={"ID":"c0aeaa3f-2649-4806-a7be-6c8d677b2122","Type":"ContainerDied","Data":"2f2b0fc7fd68e6f96035356c6631a74eb5684e461180402b2dd7e933ee4138e7"} Nov 22 08:45:15 crc kubenswrapper[4853]: I1122 08:45:15.059601 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2f2b0fc7fd68e6f96035356c6631a74eb5684e461180402b2dd7e933ee4138e7" Nov 22 08:45:15 crc kubenswrapper[4853]: I1122 08:45:15.060205 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396685-86gss" Nov 22 08:45:15 crc kubenswrapper[4853]: I1122 08:45:15.468493 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396640-t6gcs"] Nov 22 08:45:15 crc kubenswrapper[4853]: I1122 08:45:15.479811 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396640-t6gcs"] Nov 22 08:45:16 crc kubenswrapper[4853]: I1122 08:45:16.008248 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5aafdf2-b9e2-4c2a-b418-2493c8352c40" path="/var/lib/kubelet/pods/c5aafdf2-b9e2-4c2a-b418-2493c8352c40/volumes" Nov 22 08:45:53 crc kubenswrapper[4853]: E1122 08:45:53.145341 4853 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified" Nov 22 08:45:53 crc kubenswrapper[4853]: E1122 08:45:53.147368 4853 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:tempest-tests-tempest-tests-runner,Image:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vrb84,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod tempest-tests-tempest_openstack(1f255ef5-a59e-42c4-9ac7-ff33562499f6): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 22 08:45:53 crc kubenswrapper[4853]: E1122 08:45:53.148619 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tempest-tests-tempest" 
podUID="1f255ef5-a59e-42c4-9ac7-ff33562499f6" Nov 22 08:45:53 crc kubenswrapper[4853]: E1122 08:45:53.547499 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified\\\"\"" pod="openstack/tempest-tests-tempest" podUID="1f255ef5-a59e-42c4-9ac7-ff33562499f6" Nov 22 08:46:01 crc kubenswrapper[4853]: I1122 08:46:01.297332 4853 patch_prober.go:28] interesting pod/machine-config-daemon-fflvd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 08:46:01 crc kubenswrapper[4853]: I1122 08:46:01.298740 4853 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 08:46:04 crc kubenswrapper[4853]: I1122 08:46:04.517593 4853 scope.go:117] "RemoveContainer" containerID="bc4347ff09ef41b0779c4a9b2e5ed2b5b08e7cab79a1bbfa90f1e173fd5d464b" Nov 22 08:46:08 crc kubenswrapper[4853]: I1122 08:46:08.309056 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Nov 22 08:46:10 crc kubenswrapper[4853]: I1122 08:46:10.738676 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"1f255ef5-a59e-42c4-9ac7-ff33562499f6","Type":"ContainerStarted","Data":"3f318f57cf513d5d41a7316de76dd2a62ca00827f327fe62a139e5ac93545688"} Nov 22 08:46:31 crc kubenswrapper[4853]: I1122 08:46:31.298392 4853 patch_prober.go:28] interesting pod/machine-config-daemon-fflvd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 08:46:31 crc kubenswrapper[4853]: I1122 08:46:31.299369 4853 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 08:47:01 crc kubenswrapper[4853]: I1122 08:47:01.297849 4853 patch_prober.go:28] interesting pod/machine-config-daemon-fflvd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 08:47:01 crc kubenswrapper[4853]: I1122 08:47:01.298932 4853 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 08:47:01 crc kubenswrapper[4853]: I1122 08:47:01.299015 4853 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openshift-machine-config-operator/machine-config-daemon-fflvd" Nov 22 08:47:01 crc kubenswrapper[4853]: I1122 08:47:01.300660 4853 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7beb6a25b69c936fa0660f13cb2d535b9bdcbbfde9cf3aee153d121ad7dacedc"} pod="openshift-machine-config-operator/machine-config-daemon-fflvd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 22 08:47:01 crc kubenswrapper[4853]: I1122 08:47:01.300878 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" containerName="machine-config-daemon" containerID="cri-o://7beb6a25b69c936fa0660f13cb2d535b9bdcbbfde9cf3aee153d121ad7dacedc" gracePeriod=600 Nov 22 08:47:02 crc kubenswrapper[4853]: I1122 08:47:02.310483 4853 generic.go:334] "Generic (PLEG): container finished" podID="476c875a-2b87-419a-8042-0ba059620fd8" containerID="7beb6a25b69c936fa0660f13cb2d535b9bdcbbfde9cf3aee153d121ad7dacedc" exitCode=0 Nov 22 08:47:02 crc kubenswrapper[4853]: I1122 08:47:02.311045 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" event={"ID":"476c875a-2b87-419a-8042-0ba059620fd8","Type":"ContainerDied","Data":"7beb6a25b69c936fa0660f13cb2d535b9bdcbbfde9cf3aee153d121ad7dacedc"} Nov 22 08:47:02 crc kubenswrapper[4853]: I1122 08:47:02.311097 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" event={"ID":"476c875a-2b87-419a-8042-0ba059620fd8","Type":"ContainerStarted","Data":"83a21d68415e9dbecbd7f6196cc5c89a7a751677adfbe72e130436d21e3a1f1a"} Nov 22 08:47:02 crc kubenswrapper[4853]: I1122 08:47:02.311115 4853 scope.go:117] "RemoveContainer" containerID="ac53dccac3e4e33ec75240cd8fb4b48a597324a88a76775452fb92dfc45a4092" Nov 22 08:47:02 crc kubenswrapper[4853]: I1122 08:47:02.336846 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=57.400272206 podStartE2EDuration="2m4.336823633s" podCreationTimestamp="2025-11-22 08:44:58 +0000 UTC" firstStartedPulling="2025-11-22 08:45:01.369632469 +0000 UTC m=+5700.210255095" lastFinishedPulling="2025-11-22 08:46:08.306183896 +0000 UTC m=+5767.146806522" observedRunningTime="2025-11-22 08:46:10.758402389 +0000 UTC m=+5769.599025025" watchObservedRunningTime="2025-11-22 08:47:02.336823633 +0000 UTC m=+5821.177446259" Nov 22 08:49:01 crc kubenswrapper[4853]: I1122 08:49:01.297502 4853 patch_prober.go:28] interesting pod/machine-config-daemon-fflvd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 08:49:01 crc kubenswrapper[4853]: I1122 08:49:01.298153 4853 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 08:49:31 crc kubenswrapper[4853]: I1122 08:49:31.297931 4853 patch_prober.go:28] interesting pod/machine-config-daemon-fflvd container/machine-config-daemon 
Nov 22 08:49:31 crc kubenswrapper[4853]: I1122 08:49:31.298615 4853 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 22 08:50:01 crc kubenswrapper[4853]: I1122 08:50:01.297771 4853 patch_prober.go:28] interesting pod/machine-config-daemon-fflvd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 22 08:50:01 crc kubenswrapper[4853]: I1122 08:50:01.298413 4853 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 22 08:50:01 crc kubenswrapper[4853]: I1122 08:50:01.298468 4853 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fflvd"
Nov 22 08:50:01 crc kubenswrapper[4853]: I1122 08:50:01.300325 4853 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"83a21d68415e9dbecbd7f6196cc5c89a7a751677adfbe72e130436d21e3a1f1a"} pod="openshift-machine-config-operator/machine-config-daemon-fflvd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Nov 22 08:50:01 crc kubenswrapper[4853]: I1122 08:50:01.301718 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" containerName="machine-config-daemon" containerID="cri-o://83a21d68415e9dbecbd7f6196cc5c89a7a751677adfbe72e130436d21e3a1f1a" gracePeriod=600
Nov 22 08:50:01 crc kubenswrapper[4853]: E1122 08:50:01.438800 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8"
Nov 22 08:50:02 crc kubenswrapper[4853]: I1122 08:50:02.346599 4853 generic.go:334] "Generic (PLEG): container finished" podID="476c875a-2b87-419a-8042-0ba059620fd8" containerID="83a21d68415e9dbecbd7f6196cc5c89a7a751677adfbe72e130436d21e3a1f1a" exitCode=0
Nov 22 08:50:02 crc kubenswrapper[4853]: I1122 08:50:02.347639 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" event={"ID":"476c875a-2b87-419a-8042-0ba059620fd8","Type":"ContainerDied","Data":"83a21d68415e9dbecbd7f6196cc5c89a7a751677adfbe72e130436d21e3a1f1a"}
Nov 22 08:50:02 crc kubenswrapper[4853]: I1122 08:50:02.351358 4853 scope.go:117] "RemoveContainer" containerID="7beb6a25b69c936fa0660f13cb2d535b9bdcbbfde9cf3aee153d121ad7dacedc"
Nov 22 08:50:02 crc kubenswrapper[4853]: I1122 08:50:02.352087 4853 scope.go:117] "RemoveContainer" containerID="83a21d68415e9dbecbd7f6196cc5c89a7a751677adfbe72e130436d21e3a1f1a"
Nov 22 08:50:02 crc kubenswrapper[4853]: E1122 08:50:02.353055 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8"
Nov 22 08:50:04 crc kubenswrapper[4853]: I1122 08:50:04.967126 4853 scope.go:117] "RemoveContainer" containerID="8047665eae86957d583624bdef178f9b3d691818bdf1573ccb025bebe5a44f46"
Nov 22 08:50:07 crc kubenswrapper[4853]: I1122 08:50:07.411097 4853 scope.go:117] "RemoveContainer" containerID="3e09dfe949d1c60a20b838aaec683114c3f940d9785ff0111c7562405d5f0b5e"
Nov 22 08:50:07 crc kubenswrapper[4853]: I1122 08:50:07.450915 4853 scope.go:117] "RemoveContainer" containerID="83258f6d4f5dd4d3c784085c33d65164c700d46ce0ece666f81046769f0f2253"
Nov 22 08:50:12 crc kubenswrapper[4853]: I1122 08:50:12.747490 4853 scope.go:117] "RemoveContainer" containerID="83a21d68415e9dbecbd7f6196cc5c89a7a751677adfbe72e130436d21e3a1f1a"
Nov 22 08:50:12 crc kubenswrapper[4853]: E1122 08:50:12.748448 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8"
Nov 22 08:50:28 crc kubenswrapper[4853]: I1122 08:50:28.507295 4853 trace.go:236] Trace[1189389017]: "Calculate volume metrics of utilities for pod openshift-marketplace/redhat-marketplace-4mqss" (22-Nov-2025 08:50:14.182) (total time: 14242ms):
Nov 22 08:50:28 crc kubenswrapper[4853]: Trace[1189389017]: [14.242581469s] [14.242581469s] END
Nov 22 08:50:28 crc kubenswrapper[4853]: I1122 08:50:28.549584 4853 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 13.300055554s: [/var/lib/containers/storage/overlay/8ff620669daef9c40b201cb5e05a66dc6902d56bdc4d73508c294e91a525f1d1/diff /var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log]; will not log again for this container unless duration exceeds 2s
Nov 22 08:50:28 crc kubenswrapper[4853]: I1122 08:50:28.507259 4853 trace.go:236] Trace[85949309]: "Calculate volume metrics of metrics-client-ca for pod openshift-monitoring/thanos-querier-57797c7b65-9s8jq" (22-Nov-2025 08:50:13.823) (total time: 14562ms):
Nov 22 08:50:28 crc kubenswrapper[4853]: Trace[85949309]: [14.562018049s] [14.562018049s] END
Nov 22 08:50:28 crc kubenswrapper[4853]: I1122 08:50:28.773593 4853 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 11.496167571s: [/var/lib/containers/storage/overlay/4819b97132502767d0d2469848febc1b06a04143a933df075e6169d8df12e5a6/diff /var/log/pods/openstack_nova-scheduler-0_91458107-9648-4958-ae6c-54457f8744f6/nova-scheduler-scheduler/0.log]; will not log again for this container unless duration exceeds 2s
Nov 22 08:50:28 crc kubenswrapper[4853]: I1122 08:50:28.774525 4853 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 11.188778426s: [/var/lib/containers/storage/overlay/c1e007fdd3768abeb0a7f2ad7965604820c42154db206cb495327f05b51b74b3/diff /var/log/pods/openstack_nova-metadata-0_9292105c-7a7d-42cf-a8a1-6074ebebc6f4/nova-metadata-metadata/0.log]; will not log again for this container unless duration exceeds 2s
Nov 22 08:50:28 crc kubenswrapper[4853]: I1122 08:50:28.807899 4853 trace.go:236] Trace[1802105533]: "iptables ChainExists" (22-Nov-2025 08:50:15.750) (total time: 13057ms):
Nov 22 08:50:28 crc kubenswrapper[4853]: Trace[1802105533]: [13.05731742s] [13.05731742s] END
Nov 22 08:50:28 crc kubenswrapper[4853]: I1122 08:50:28.860213 4853 trace.go:236] Trace[539491430]: "iptables ChainExists" (22-Nov-2025 08:50:15.754) (total time: 13105ms):
Nov 22 08:50:28 crc kubenswrapper[4853]: Trace[539491430]: [13.105369956s] [13.105369956s] END
Nov 22 08:50:29 crc kubenswrapper[4853]: I1122 08:50:28.999109 4853 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 16.15326832s: [/var/lib/containers/storage/overlay/370e7bfbc4e335e93fc206eb1ccd2fea2a29086434fd7dd736832fd7d1f76091/diff /var/log/pods/openstack-operators_openstack-operator-controller-manager-88b7b5d44-zjv7m_b41bf5e6-516e-40b8-9628-bb2f056af5ad/kube-rbac-proxy/0.log]; will not log again for this container unless duration exceeds 2s
Nov 22 08:50:29 crc kubenswrapper[4853]: I1122 08:50:29.037307 4853 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 11.718499623s: [/var/lib/containers/storage/overlay/f632f0992953a47f4828ad0544f31972fce03171aab38996ac08a50498ed0d7f/diff /var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-5f97d8c699-fdt65_131c2522-8c48-4c18-9a39-99a66b87b9ed/operator/0.log]; will not log again for this container unless duration exceeds 2s
Nov 22 08:50:29 crc kubenswrapper[4853]: I1122 08:50:29.038235 4853 trace.go:236] Trace[727507625]: "Calculate volume metrics of metrics-client-ca for pod openshift-monitoring/openshift-state-metrics-566fddb674-949qz" (22-Nov-2025 08:50:17.131) (total time: 11907ms):
Nov 22 08:50:29 crc kubenswrapper[4853]: Trace[727507625]: [11.907006305s] [11.907006305s] END
Nov 22 08:50:29 crc kubenswrapper[4853]: I1122 08:50:29.053524 4853 patch_prober.go:28] interesting pod/logging-loki-gateway-76bd965446-l8bwp container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.77:8083/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Nov 22 08:50:29 crc kubenswrapper[4853]: I1122 08:50:29.053585 4853 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-76bd965446-l8bwp" podUID="5729c668-8833-48b4-9e48-bcf753621ff7" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.77:8083/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Nov 22 08:50:29 crc kubenswrapper[4853]: I1122 08:50:29.058881 4853 patch_prober.go:28] interesting pod/authentication-operator-69f744f599-dbd5p container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.217.0.11:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Nov 22 08:50:29 crc kubenswrapper[4853]: I1122 08:50:29.073816 4853 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-69f744f599-dbd5p" podUID="90eeaa0a-6939-40a5-821c-82579c812f3b" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Nov 22 08:50:29 crc kubenswrapper[4853]: I1122 08:50:29.224597 4853 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 12.75788808s: [/var/lib/containers/storage/overlay/3450ce3e58e3f8b6c916c8b1ed471e6c35154ed6c43814b5c13870ac06459e39/diff /var/log/pods/openshift-authentication-operator_authentication-operator-69f744f599-dbd5p_90eeaa0a-6939-40a5-821c-82579c812f3b/authentication-operator/0.log]; will not log again for this container unless duration exceeds 2s
Nov 22 08:50:29 crc kubenswrapper[4853]: I1122 08:50:29.238266 4853 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 12.283226416s: [/var/lib/containers/storage/overlay/e9f6c9034946cc6d7fdc2998410c57e43be3aec3d07af00d6d24bfa864bae654/diff /var/log/pods/openstack_neutron-7c78d4ccd7-pvf4q_47723ce1-f48e-4d1d-a0a8-4f49dfce7070/neutron-api/0.log]; will not log again for this container unless duration exceeds 2s
Nov 22 08:50:29 crc kubenswrapper[4853]: I1122 08:50:29.317972 4853 scope.go:117] "RemoveContainer" containerID="83a21d68415e9dbecbd7f6196cc5c89a7a751677adfbe72e130436d21e3a1f1a"
Nov 22 08:50:29 crc kubenswrapper[4853]: E1122 08:50:29.318242 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8"
Nov 22 08:50:29 crc kubenswrapper[4853]: I1122 08:50:29.473957 4853 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 12.980278195s: [/var/lib/containers/storage/overlay/1d81c00ed84c119064644a738883366722a055f000e87eb79bfe766e10c5d632/diff /var/log/pods/openshift-image-registry_cluster-image-registry-operator-dc59b4c8b-wzbj5_2715796f-e4b0-4400-a02c-a485171a9858/cluster-image-registry-operator/0.log]; will not log again for this container unless duration exceeds 2s
Nov 22 08:50:29 crc kubenswrapper[4853]: I1122 08:50:29.480369 4853 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 12.145803252s: [/var/lib/containers/storage/overlay/e3c3cfa9b625d01cfb558e9f84685e4da08d976caae14f883a5a7df0f5f7be77/diff /var/log/pods/openstack_glance-default-external-api-0_5554d3b5-8219-4dc0-9f3e-cb1ee319ef72/glance-httpd/0.log]; will not log again for this container unless duration exceeds 2s
Nov 22 08:50:29 crc kubenswrapper[4853]: I1122 08:50:29.498389 4853 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 12.790339115s: [/var/lib/containers/storage/overlay/449e163a10a93cac79aebdbd95649390d5d1ed65c99a024954ef49de5d6fa1ef/diff /var/log/pods/openshift-console_downloads-7954f5f757-hpb7j_bcd72804-cd09-4ec3-ae4a-f539958eb90c/download-server/0.log]; will not log again for this container unless duration exceeds 2s
Nov 22 08:50:30 crc kubenswrapper[4853]: I1122 08:50:30.208692 4853 trace.go:236] Trace[1260605735]: "Calculate volume metrics of catalog-content for pod openshift-marketplace/redhat-operators-m28tt" (22-Nov-2025 08:50:28.664) (total time: 1544ms):
Nov 22 08:50:30 crc kubenswrapper[4853]: Trace[1260605735]: [1.544383098s] [1.544383098s] END
Nov 22 08:50:31 crc kubenswrapper[4853]: I1122 08:50:31.141627 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-5bb8bb4577-rspn5" event={"ID":"50b94c6e-d5b7-4720-af4c-8922035ca146","Type":"ContainerDied","Data":"cb67ae2db58af05dc81a47de9a6ec062d18fc3a21dc8cbc43dc21acb3006a5d7"}
Nov 22 08:50:31 crc kubenswrapper[4853]: I1122 08:50:31.142004 4853 scope.go:117] "RemoveContainer" containerID="b00343fac87512ce675bb259b8a1f1021e60aaaea9286d4b790e5c63858ee976"
Nov 22 08:50:31 crc kubenswrapper[4853]: I1122 08:50:31.141565 4853 generic.go:334] "Generic (PLEG): container finished" podID="50b94c6e-d5b7-4720-af4c-8922035ca146" containerID="cb67ae2db58af05dc81a47de9a6ec062d18fc3a21dc8cbc43dc21acb3006a5d7" exitCode=1
Nov 22 08:50:31 crc kubenswrapper[4853]: I1122 08:50:31.143113 4853 scope.go:117] "RemoveContainer" containerID="cb67ae2db58af05dc81a47de9a6ec062d18fc3a21dc8cbc43dc21acb3006a5d7"
Nov 22 08:50:31 crc kubenswrapper[4853]: E1122 08:50:31.143506 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=loki-operator-controller-manager-5bb8bb4577-rspn5_openshift-operators-redhat(50b94c6e-d5b7-4720-af4c-8922035ca146)\"" pod="openshift-operators-redhat/loki-operator-controller-manager-5bb8bb4577-rspn5" podUID="50b94c6e-d5b7-4720-af4c-8922035ca146"
Nov 22 08:50:38 crc kubenswrapper[4853]: I1122 08:50:38.471824 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators-redhat/loki-operator-controller-manager-5bb8bb4577-rspn5"
Nov 22 08:50:38 crc kubenswrapper[4853]: I1122 08:50:38.472563 4853 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-operators-redhat/loki-operator-controller-manager-5bb8bb4577-rspn5"
Nov 22 08:50:38 crc kubenswrapper[4853]: I1122 08:50:38.473713 4853 scope.go:117] "RemoveContainer" containerID="cb67ae2db58af05dc81a47de9a6ec062d18fc3a21dc8cbc43dc21acb3006a5d7"
Nov 22 08:50:38 crc kubenswrapper[4853]: E1122 08:50:38.474385 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=loki-operator-controller-manager-5bb8bb4577-rspn5_openshift-operators-redhat(50b94c6e-d5b7-4720-af4c-8922035ca146)\"" pod="openshift-operators-redhat/loki-operator-controller-manager-5bb8bb4577-rspn5" podUID="50b94c6e-d5b7-4720-af4c-8922035ca146"
Nov 22 08:50:40 crc kubenswrapper[4853]: I1122 08:50:40.750198 4853 scope.go:117] "RemoveContainer" containerID="83a21d68415e9dbecbd7f6196cc5c89a7a751677adfbe72e130436d21e3a1f1a"
Nov 22 08:50:40 crc kubenswrapper[4853]: E1122 08:50:40.750990 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8"
Nov 22 08:50:52 crc kubenswrapper[4853]: I1122 08:50:52.748397 4853 scope.go:117] "RemoveContainer" containerID="cb67ae2db58af05dc81a47de9a6ec062d18fc3a21dc8cbc43dc21acb3006a5d7"
Nov 22 08:50:53 crc kubenswrapper[4853]: I1122 08:50:53.436334 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-5bb8bb4577-rspn5" event={"ID":"50b94c6e-d5b7-4720-af4c-8922035ca146","Type":"ContainerStarted","Data":"b93c62828857d3eda143bbc5f1a4052f22ab933b86d3150487eb95d046729897"}
Nov 22 08:50:53 crc kubenswrapper[4853]: I1122 08:50:53.436886 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators-redhat/loki-operator-controller-manager-5bb8bb4577-rspn5"
Nov 22 08:50:55 crc kubenswrapper[4853]: I1122 08:50:55.756197 4853 scope.go:117] "RemoveContainer" containerID="83a21d68415e9dbecbd7f6196cc5c89a7a751677adfbe72e130436d21e3a1f1a"
Nov 22 08:50:55 crc kubenswrapper[4853]: E1122 08:50:55.757678 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8"
Nov 22 08:50:58 crc kubenswrapper[4853]: I1122 08:50:58.473469 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators-redhat/loki-operator-controller-manager-5bb8bb4577-rspn5"
Nov 22 08:51:08 crc kubenswrapper[4853]: I1122 08:51:08.748613 4853 scope.go:117] "RemoveContainer" containerID="83a21d68415e9dbecbd7f6196cc5c89a7a751677adfbe72e130436d21e3a1f1a"
Nov 22 08:51:08 crc kubenswrapper[4853]: E1122 08:51:08.749509 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8"
Nov 22 08:51:22 crc kubenswrapper[4853]: I1122 08:51:22.747503 4853 scope.go:117] "RemoveContainer" containerID="83a21d68415e9dbecbd7f6196cc5c89a7a751677adfbe72e130436d21e3a1f1a"
Nov 22 08:51:22 crc kubenswrapper[4853]: E1122 08:51:22.748459 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8"
Nov 22 08:51:34 crc kubenswrapper[4853]: I1122 08:51:34.748593 4853 scope.go:117] "RemoveContainer" containerID="83a21d68415e9dbecbd7f6196cc5c89a7a751677adfbe72e130436d21e3a1f1a"
Nov 22 08:51:34 crc kubenswrapper[4853]: E1122 08:51:34.749638 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8"
Nov 22 08:51:45 crc kubenswrapper[4853]: I1122 08:51:45.758285 4853 scope.go:117] "RemoveContainer" containerID="83a21d68415e9dbecbd7f6196cc5c89a7a751677adfbe72e130436d21e3a1f1a"
Nov 22 08:51:45 crc kubenswrapper[4853]: E1122 08:51:45.759231 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8"
Nov 22 08:51:56 crc kubenswrapper[4853]: I1122 08:51:56.747619 4853 scope.go:117] "RemoveContainer" containerID="83a21d68415e9dbecbd7f6196cc5c89a7a751677adfbe72e130436d21e3a1f1a"
Nov 22 08:51:56 crc kubenswrapper[4853]: E1122 08:51:56.748654 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8"
Nov 22 08:52:10 crc kubenswrapper[4853]: I1122 08:52:10.748404 4853 scope.go:117] "RemoveContainer" containerID="83a21d68415e9dbecbd7f6196cc5c89a7a751677adfbe72e130436d21e3a1f1a"
Nov 22 08:52:10 crc kubenswrapper[4853]: E1122 08:52:10.749326 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8"
Nov 22 08:52:25 crc kubenswrapper[4853]: I1122 08:52:25.758682 4853 scope.go:117] "RemoveContainer" containerID="83a21d68415e9dbecbd7f6196cc5c89a7a751677adfbe72e130436d21e3a1f1a"
Nov 22 08:52:25 crc kubenswrapper[4853]: E1122 08:52:25.759489 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8"
Nov 22 08:52:39 crc kubenswrapper[4853]: I1122 08:52:39.748482 4853 scope.go:117] "RemoveContainer" containerID="83a21d68415e9dbecbd7f6196cc5c89a7a751677adfbe72e130436d21e3a1f1a"
Nov 22 08:52:39 crc kubenswrapper[4853]: E1122 08:52:39.749498 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8"
Nov 22 08:52:52 crc kubenswrapper[4853]: I1122 08:52:52.747344 4853 scope.go:117] "RemoveContainer" containerID="83a21d68415e9dbecbd7f6196cc5c89a7a751677adfbe72e130436d21e3a1f1a"
Nov 22 08:52:52 crc kubenswrapper[4853]: E1122 08:52:52.748347 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8"
Nov 22 08:53:07 crc kubenswrapper[4853]: I1122 08:53:07.748808 4853 scope.go:117] "RemoveContainer" containerID="83a21d68415e9dbecbd7f6196cc5c89a7a751677adfbe72e130436d21e3a1f1a"
Nov 22 08:53:07 crc kubenswrapper[4853]: E1122 08:53:07.749951 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8"
Nov 22 08:53:19 crc kubenswrapper[4853]: I1122 08:53:19.747689 4853 scope.go:117] "RemoveContainer" containerID="83a21d68415e9dbecbd7f6196cc5c89a7a751677adfbe72e130436d21e3a1f1a"
Nov 22 08:53:19 crc kubenswrapper[4853]: E1122 08:53:19.748616 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8"
Nov 22 08:53:34 crc kubenswrapper[4853]: I1122 08:53:34.748294 4853 scope.go:117] "RemoveContainer" containerID="83a21d68415e9dbecbd7f6196cc5c89a7a751677adfbe72e130436d21e3a1f1a"
Nov 22 08:53:34 crc kubenswrapper[4853]: E1122 08:53:34.749306 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8"
containerName="registry-server" Nov 22 08:53:39 crc kubenswrapper[4853]: I1122 08:53:39.834666 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="806e347e-89bd-4d80-9ec6-8fc03f7ec454" containerName="registry-server" Nov 22 08:53:39 crc kubenswrapper[4853]: E1122 08:53:39.834688 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="806e347e-89bd-4d80-9ec6-8fc03f7ec454" containerName="extract-content" Nov 22 08:53:39 crc kubenswrapper[4853]: I1122 08:53:39.834693 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="806e347e-89bd-4d80-9ec6-8fc03f7ec454" containerName="extract-content" Nov 22 08:53:39 crc kubenswrapper[4853]: I1122 08:53:39.836163 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0aeaa3f-2649-4806-a7be-6c8d677b2122" containerName="collect-profiles" Nov 22 08:53:39 crc kubenswrapper[4853]: I1122 08:53:39.836241 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="806e347e-89bd-4d80-9ec6-8fc03f7ec454" containerName="registry-server" Nov 22 08:53:39 crc kubenswrapper[4853]: I1122 08:53:39.859080 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6pckm" Nov 22 08:53:39 crc kubenswrapper[4853]: I1122 08:53:39.889960 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6pckm"] Nov 22 08:53:39 crc kubenswrapper[4853]: I1122 08:53:39.931711 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da1b47df-2ab1-4656-a5c7-2362f30cdd75-catalog-content\") pod \"certified-operators-6pckm\" (UID: \"da1b47df-2ab1-4656-a5c7-2362f30cdd75\") " pod="openshift-marketplace/certified-operators-6pckm" Nov 22 08:53:39 crc kubenswrapper[4853]: I1122 08:53:39.931794 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jjzfb\" (UniqueName: \"kubernetes.io/projected/da1b47df-2ab1-4656-a5c7-2362f30cdd75-kube-api-access-jjzfb\") pod \"certified-operators-6pckm\" (UID: \"da1b47df-2ab1-4656-a5c7-2362f30cdd75\") " pod="openshift-marketplace/certified-operators-6pckm" Nov 22 08:53:39 crc kubenswrapper[4853]: I1122 08:53:39.931850 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da1b47df-2ab1-4656-a5c7-2362f30cdd75-utilities\") pod \"certified-operators-6pckm\" (UID: \"da1b47df-2ab1-4656-a5c7-2362f30cdd75\") " pod="openshift-marketplace/certified-operators-6pckm" Nov 22 08:53:40 crc kubenswrapper[4853]: I1122 08:53:40.034571 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da1b47df-2ab1-4656-a5c7-2362f30cdd75-catalog-content\") pod \"certified-operators-6pckm\" (UID: \"da1b47df-2ab1-4656-a5c7-2362f30cdd75\") " pod="openshift-marketplace/certified-operators-6pckm" Nov 22 08:53:40 crc kubenswrapper[4853]: I1122 08:53:40.034633 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jjzfb\" (UniqueName: \"kubernetes.io/projected/da1b47df-2ab1-4656-a5c7-2362f30cdd75-kube-api-access-jjzfb\") pod \"certified-operators-6pckm\" (UID: \"da1b47df-2ab1-4656-a5c7-2362f30cdd75\") " pod="openshift-marketplace/certified-operators-6pckm" Nov 22 08:53:40 crc kubenswrapper[4853]: I1122 08:53:40.034673 4853 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da1b47df-2ab1-4656-a5c7-2362f30cdd75-utilities\") pod \"certified-operators-6pckm\" (UID: \"da1b47df-2ab1-4656-a5c7-2362f30cdd75\") " pod="openshift-marketplace/certified-operators-6pckm" Nov 22 08:53:40 crc kubenswrapper[4853]: I1122 08:53:40.035322 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da1b47df-2ab1-4656-a5c7-2362f30cdd75-catalog-content\") pod \"certified-operators-6pckm\" (UID: \"da1b47df-2ab1-4656-a5c7-2362f30cdd75\") " pod="openshift-marketplace/certified-operators-6pckm" Nov 22 08:53:40 crc kubenswrapper[4853]: I1122 08:53:40.035895 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da1b47df-2ab1-4656-a5c7-2362f30cdd75-utilities\") pod \"certified-operators-6pckm\" (UID: \"da1b47df-2ab1-4656-a5c7-2362f30cdd75\") " pod="openshift-marketplace/certified-operators-6pckm" Nov 22 08:53:40 crc kubenswrapper[4853]: I1122 08:53:40.073640 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jjzfb\" (UniqueName: \"kubernetes.io/projected/da1b47df-2ab1-4656-a5c7-2362f30cdd75-kube-api-access-jjzfb\") pod \"certified-operators-6pckm\" (UID: \"da1b47df-2ab1-4656-a5c7-2362f30cdd75\") " pod="openshift-marketplace/certified-operators-6pckm" Nov 22 08:53:40 crc kubenswrapper[4853]: I1122 08:53:40.188384 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6pckm" Nov 22 08:53:41 crc kubenswrapper[4853]: I1122 08:53:41.786867 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6pckm"] Nov 22 08:53:42 crc kubenswrapper[4853]: I1122 08:53:42.310787 4853 generic.go:334] "Generic (PLEG): container finished" podID="da1b47df-2ab1-4656-a5c7-2362f30cdd75" containerID="74198e511a522b227ecef18618d8a1c865df49184f989d31c57b0e3b591c9f74" exitCode=0 Nov 22 08:53:42 crc kubenswrapper[4853]: I1122 08:53:42.310844 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6pckm" event={"ID":"da1b47df-2ab1-4656-a5c7-2362f30cdd75","Type":"ContainerDied","Data":"74198e511a522b227ecef18618d8a1c865df49184f989d31c57b0e3b591c9f74"} Nov 22 08:53:42 crc kubenswrapper[4853]: I1122 08:53:42.311415 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6pckm" event={"ID":"da1b47df-2ab1-4656-a5c7-2362f30cdd75","Type":"ContainerStarted","Data":"acda9a0b6018eb5042694420ca93f6f3bf9b14c943b86251e45c8d69861a12cb"} Nov 22 08:53:42 crc kubenswrapper[4853]: I1122 08:53:42.323987 4853 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 22 08:53:44 crc kubenswrapper[4853]: I1122 08:53:44.343652 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6pckm" event={"ID":"da1b47df-2ab1-4656-a5c7-2362f30cdd75","Type":"ContainerStarted","Data":"6f315086fa5686d17dab2e13684cccef3ed73cd14bd322a2459709a3dac6d65b"} Nov 22 08:53:45 crc kubenswrapper[4853]: I1122 08:53:45.360311 4853 generic.go:334] "Generic (PLEG): container finished" podID="da1b47df-2ab1-4656-a5c7-2362f30cdd75" containerID="6f315086fa5686d17dab2e13684cccef3ed73cd14bd322a2459709a3dac6d65b" exitCode=0 Nov 22 08:53:45 crc kubenswrapper[4853]: I1122 08:53:45.360356 4853 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6pckm" event={"ID":"da1b47df-2ab1-4656-a5c7-2362f30cdd75","Type":"ContainerDied","Data":"6f315086fa5686d17dab2e13684cccef3ed73cd14bd322a2459709a3dac6d65b"} Nov 22 08:53:46 crc kubenswrapper[4853]: I1122 08:53:46.376875 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6pckm" event={"ID":"da1b47df-2ab1-4656-a5c7-2362f30cdd75","Type":"ContainerStarted","Data":"63485718d3d681dc8cc9f8a913c9d3a49f29f4e851ee3995869d9bda6cdda35a"} Nov 22 08:53:46 crc kubenswrapper[4853]: I1122 08:53:46.401853 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-6pckm" podStartSLOduration=3.870906074 podStartE2EDuration="7.400879572s" podCreationTimestamp="2025-11-22 08:53:39 +0000 UTC" firstStartedPulling="2025-11-22 08:53:42.312976235 +0000 UTC m=+6221.153598861" lastFinishedPulling="2025-11-22 08:53:45.842949733 +0000 UTC m=+6224.683572359" observedRunningTime="2025-11-22 08:53:46.392533827 +0000 UTC m=+6225.233156453" watchObservedRunningTime="2025-11-22 08:53:46.400879572 +0000 UTC m=+6225.241502188" Nov 22 08:53:48 crc kubenswrapper[4853]: I1122 08:53:48.747630 4853 scope.go:117] "RemoveContainer" containerID="83a21d68415e9dbecbd7f6196cc5c89a7a751677adfbe72e130436d21e3a1f1a" Nov 22 08:53:48 crc kubenswrapper[4853]: E1122 08:53:48.748266 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" Nov 22 08:53:50 crc kubenswrapper[4853]: I1122 08:53:50.189198 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-6pckm" Nov 22 08:53:50 crc kubenswrapper[4853]: I1122 08:53:50.189538 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-6pckm" Nov 22 08:53:50 crc kubenswrapper[4853]: I1122 08:53:50.245123 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-6pckm" Nov 22 08:54:00 crc kubenswrapper[4853]: I1122 08:54:00.253597 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-6pckm" Nov 22 08:54:00 crc kubenswrapper[4853]: I1122 08:54:00.310177 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6pckm"] Nov 22 08:54:00 crc kubenswrapper[4853]: I1122 08:54:00.534796 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-6pckm" podUID="da1b47df-2ab1-4656-a5c7-2362f30cdd75" containerName="registry-server" containerID="cri-o://63485718d3d681dc8cc9f8a913c9d3a49f29f4e851ee3995869d9bda6cdda35a" gracePeriod=2 Nov 22 08:54:01 crc kubenswrapper[4853]: I1122 08:54:01.103124 4853 util.go:48] "No ready sandbox for pod can be found. 
Nov 22 08:53:48 crc kubenswrapper[4853]: I1122 08:53:48.747630 4853 scope.go:117] "RemoveContainer" containerID="83a21d68415e9dbecbd7f6196cc5c89a7a751677adfbe72e130436d21e3a1f1a"
Nov 22 08:53:48 crc kubenswrapper[4853]: E1122 08:53:48.748266 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8"
Nov 22 08:53:50 crc kubenswrapper[4853]: I1122 08:53:50.189198 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-6pckm"
Nov 22 08:53:50 crc kubenswrapper[4853]: I1122 08:53:50.189538 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-6pckm"
Nov 22 08:53:50 crc kubenswrapper[4853]: I1122 08:53:50.245123 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-6pckm"
Nov 22 08:54:00 crc kubenswrapper[4853]: I1122 08:54:00.253597 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-6pckm"
Nov 22 08:54:00 crc kubenswrapper[4853]: I1122 08:54:00.310177 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6pckm"]
Nov 22 08:54:00 crc kubenswrapper[4853]: I1122 08:54:00.534796 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-6pckm" podUID="da1b47df-2ab1-4656-a5c7-2362f30cdd75" containerName="registry-server" containerID="cri-o://63485718d3d681dc8cc9f8a913c9d3a49f29f4e851ee3995869d9bda6cdda35a" gracePeriod=2
Nov 22 08:54:01 crc kubenswrapper[4853]: I1122 08:54:01.103124 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6pckm"
Nov 22 08:54:01 crc kubenswrapper[4853]: I1122 08:54:01.182209 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da1b47df-2ab1-4656-a5c7-2362f30cdd75-catalog-content\") pod \"da1b47df-2ab1-4656-a5c7-2362f30cdd75\" (UID: \"da1b47df-2ab1-4656-a5c7-2362f30cdd75\") "
Nov 22 08:54:01 crc kubenswrapper[4853]: I1122 08:54:01.182270 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jjzfb\" (UniqueName: \"kubernetes.io/projected/da1b47df-2ab1-4656-a5c7-2362f30cdd75-kube-api-access-jjzfb\") pod \"da1b47df-2ab1-4656-a5c7-2362f30cdd75\" (UID: \"da1b47df-2ab1-4656-a5c7-2362f30cdd75\") "
Nov 22 08:54:01 crc kubenswrapper[4853]: I1122 08:54:01.182373 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da1b47df-2ab1-4656-a5c7-2362f30cdd75-utilities\") pod \"da1b47df-2ab1-4656-a5c7-2362f30cdd75\" (UID: \"da1b47df-2ab1-4656-a5c7-2362f30cdd75\") "
Nov 22 08:54:01 crc kubenswrapper[4853]: I1122 08:54:01.186760 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/da1b47df-2ab1-4656-a5c7-2362f30cdd75-utilities" (OuterVolumeSpecName: "utilities") pod "da1b47df-2ab1-4656-a5c7-2362f30cdd75" (UID: "da1b47df-2ab1-4656-a5c7-2362f30cdd75"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 22 08:54:01 crc kubenswrapper[4853]: I1122 08:54:01.196890 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da1b47df-2ab1-4656-a5c7-2362f30cdd75-kube-api-access-jjzfb" (OuterVolumeSpecName: "kube-api-access-jjzfb") pod "da1b47df-2ab1-4656-a5c7-2362f30cdd75" (UID: "da1b47df-2ab1-4656-a5c7-2362f30cdd75"). InnerVolumeSpecName "kube-api-access-jjzfb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 22 08:54:01 crc kubenswrapper[4853]: I1122 08:54:01.261372 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/da1b47df-2ab1-4656-a5c7-2362f30cdd75-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "da1b47df-2ab1-4656-a5c7-2362f30cdd75" (UID: "da1b47df-2ab1-4656-a5c7-2362f30cdd75"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 22 08:54:01 crc kubenswrapper[4853]: I1122 08:54:01.286035 4853 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da1b47df-2ab1-4656-a5c7-2362f30cdd75-utilities\") on node \"crc\" DevicePath \"\""
Nov 22 08:54:01 crc kubenswrapper[4853]: I1122 08:54:01.286083 4853 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da1b47df-2ab1-4656-a5c7-2362f30cdd75-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 22 08:54:01 crc kubenswrapper[4853]: I1122 08:54:01.286099 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jjzfb\" (UniqueName: \"kubernetes.io/projected/da1b47df-2ab1-4656-a5c7-2362f30cdd75-kube-api-access-jjzfb\") on node \"crc\" DevicePath \"\""
Nov 22 08:54:01 crc kubenswrapper[4853]: I1122 08:54:01.548965 4853 generic.go:334] "Generic (PLEG): container finished" podID="da1b47df-2ab1-4656-a5c7-2362f30cdd75" containerID="63485718d3d681dc8cc9f8a913c9d3a49f29f4e851ee3995869d9bda6cdda35a" exitCode=0
Nov 22 08:54:01 crc kubenswrapper[4853]: I1122 08:54:01.549040 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6pckm" event={"ID":"da1b47df-2ab1-4656-a5c7-2362f30cdd75","Type":"ContainerDied","Data":"63485718d3d681dc8cc9f8a913c9d3a49f29f4e851ee3995869d9bda6cdda35a"}
Nov 22 08:54:01 crc kubenswrapper[4853]: I1122 08:54:01.549087 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6pckm"
Nov 22 08:54:01 crc kubenswrapper[4853]: I1122 08:54:01.549102 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6pckm" event={"ID":"da1b47df-2ab1-4656-a5c7-2362f30cdd75","Type":"ContainerDied","Data":"acda9a0b6018eb5042694420ca93f6f3bf9b14c943b86251e45c8d69861a12cb"}
Nov 22 08:54:01 crc kubenswrapper[4853]: I1122 08:54:01.549128 4853 scope.go:117] "RemoveContainer" containerID="63485718d3d681dc8cc9f8a913c9d3a49f29f4e851ee3995869d9bda6cdda35a"
Nov 22 08:54:01 crc kubenswrapper[4853]: I1122 08:54:01.573662 4853 scope.go:117] "RemoveContainer" containerID="6f315086fa5686d17dab2e13684cccef3ed73cd14bd322a2459709a3dac6d65b"
Nov 22 08:54:01 crc kubenswrapper[4853]: I1122 08:54:01.601915 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6pckm"]
Nov 22 08:54:01 crc kubenswrapper[4853]: I1122 08:54:01.612617 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-6pckm"]
Nov 22 08:54:01 crc kubenswrapper[4853]: I1122 08:54:01.626083 4853 scope.go:117] "RemoveContainer" containerID="74198e511a522b227ecef18618d8a1c865df49184f989d31c57b0e3b591c9f74"
Nov 22 08:54:01 crc kubenswrapper[4853]: I1122 08:54:01.675496 4853 scope.go:117] "RemoveContainer" containerID="63485718d3d681dc8cc9f8a913c9d3a49f29f4e851ee3995869d9bda6cdda35a"
Nov 22 08:54:01 crc kubenswrapper[4853]: E1122 08:54:01.680105 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"63485718d3d681dc8cc9f8a913c9d3a49f29f4e851ee3995869d9bda6cdda35a\": container with ID starting with 63485718d3d681dc8cc9f8a913c9d3a49f29f4e851ee3995869d9bda6cdda35a not found: ID does not exist" containerID="63485718d3d681dc8cc9f8a913c9d3a49f29f4e851ee3995869d9bda6cdda35a"
Nov 22 08:54:01 crc kubenswrapper[4853]: I1122 08:54:01.681080 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"63485718d3d681dc8cc9f8a913c9d3a49f29f4e851ee3995869d9bda6cdda35a"} err="failed to get container status \"63485718d3d681dc8cc9f8a913c9d3a49f29f4e851ee3995869d9bda6cdda35a\": rpc error: code = NotFound desc = could not find container \"63485718d3d681dc8cc9f8a913c9d3a49f29f4e851ee3995869d9bda6cdda35a\": container with ID starting with 63485718d3d681dc8cc9f8a913c9d3a49f29f4e851ee3995869d9bda6cdda35a not found: ID does not exist"
Nov 22 08:54:01 crc kubenswrapper[4853]: I1122 08:54:01.681191 4853 scope.go:117] "RemoveContainer" containerID="6f315086fa5686d17dab2e13684cccef3ed73cd14bd322a2459709a3dac6d65b"
Nov 22 08:54:01 crc kubenswrapper[4853]: E1122 08:54:01.682061 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6f315086fa5686d17dab2e13684cccef3ed73cd14bd322a2459709a3dac6d65b\": container with ID starting with 6f315086fa5686d17dab2e13684cccef3ed73cd14bd322a2459709a3dac6d65b not found: ID does not exist" containerID="6f315086fa5686d17dab2e13684cccef3ed73cd14bd322a2459709a3dac6d65b"
Nov 22 08:54:01 crc kubenswrapper[4853]: I1122 08:54:01.682085 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6f315086fa5686d17dab2e13684cccef3ed73cd14bd322a2459709a3dac6d65b"} err="failed to get container status \"6f315086fa5686d17dab2e13684cccef3ed73cd14bd322a2459709a3dac6d65b\": rpc error: code = NotFound desc = could not find container \"6f315086fa5686d17dab2e13684cccef3ed73cd14bd322a2459709a3dac6d65b\": container with ID starting with 6f315086fa5686d17dab2e13684cccef3ed73cd14bd322a2459709a3dac6d65b not found: ID does not exist"
Nov 22 08:54:01 crc kubenswrapper[4853]: I1122 08:54:01.682113 4853 scope.go:117] "RemoveContainer" containerID="74198e511a522b227ecef18618d8a1c865df49184f989d31c57b0e3b591c9f74"
Nov 22 08:54:01 crc kubenswrapper[4853]: E1122 08:54:01.683005 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"74198e511a522b227ecef18618d8a1c865df49184f989d31c57b0e3b591c9f74\": container with ID starting with 74198e511a522b227ecef18618d8a1c865df49184f989d31c57b0e3b591c9f74 not found: ID does not exist" containerID="74198e511a522b227ecef18618d8a1c865df49184f989d31c57b0e3b591c9f74"
Nov 22 08:54:01 crc kubenswrapper[4853]: I1122 08:54:01.683028 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"74198e511a522b227ecef18618d8a1c865df49184f989d31c57b0e3b591c9f74"} err="failed to get container status \"74198e511a522b227ecef18618d8a1c865df49184f989d31c57b0e3b591c9f74\": rpc error: code = NotFound desc = could not find container \"74198e511a522b227ecef18618d8a1c865df49184f989d31c57b0e3b591c9f74\": container with ID starting with 74198e511a522b227ecef18618d8a1c865df49184f989d31c57b0e3b591c9f74 not found: ID does not exist"
Nov 22 08:54:01 crc kubenswrapper[4853]: I1122 08:54:01.762452 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="da1b47df-2ab1-4656-a5c7-2362f30cdd75" path="/var/lib/kubelet/pods/da1b47df-2ab1-4656-a5c7-2362f30cdd75/volumes"
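The "ContainerStatus from runtime service failed ... NotFound" errors at 08:54:01 are benign: the RemoveContainer calls raced with CRI-O's own cleanup, so the status lookup for the already-deleted registry-server containers fails and pod_container_deletor logs the error but treats the container as gone. The usual pattern when wrapping CRI-style gRPC calls is to swallow NotFound during deletion; a sketch with grpc status codes (ignoreNotFound is a hypothetical helper, not kubelet code):

    // notfound.go - treat gRPC NotFound as success during cleanup (sketch; ignoreNotFound is hypothetical).
    package main

    import (
        "fmt"

        "google.golang.org/grpc/codes"
        "google.golang.org/grpc/status"
    )

    // ignoreNotFound turns a gRPC NotFound into success: an already-removed
    // container is exactly the state a delete pass wants to reach.
    func ignoreNotFound(err error) error {
        if err == nil || status.Code(err) == codes.NotFound {
            return nil
        }
        return err
    }

    func main() {
        // Simulated CRI response for a container CRI-O has already removed.
        err := status.Error(codes.NotFound, "could not find container")
        if ignoreNotFound(err) == nil {
            fmt.Println("container already gone; nothing left to do")
        }
    }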
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" Nov 22 08:54:14 crc kubenswrapper[4853]: I1122 08:54:14.748474 4853 scope.go:117] "RemoveContainer" containerID="83a21d68415e9dbecbd7f6196cc5c89a7a751677adfbe72e130436d21e3a1f1a" Nov 22 08:54:14 crc kubenswrapper[4853]: E1122 08:54:14.749333 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" Nov 22 08:54:27 crc kubenswrapper[4853]: I1122 08:54:27.748866 4853 scope.go:117] "RemoveContainer" containerID="83a21d68415e9dbecbd7f6196cc5c89a7a751677adfbe72e130436d21e3a1f1a" Nov 22 08:54:27 crc kubenswrapper[4853]: E1122 08:54:27.749958 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" Nov 22 08:54:40 crc kubenswrapper[4853]: I1122 08:54:40.748155 4853 scope.go:117] "RemoveContainer" containerID="83a21d68415e9dbecbd7f6196cc5c89a7a751677adfbe72e130436d21e3a1f1a" Nov 22 08:54:40 crc kubenswrapper[4853]: E1122 08:54:40.748986 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" Nov 22 08:54:51 crc kubenswrapper[4853]: I1122 08:54:51.748946 4853 scope.go:117] "RemoveContainer" containerID="83a21d68415e9dbecbd7f6196cc5c89a7a751677adfbe72e130436d21e3a1f1a" Nov 22 08:54:51 crc kubenswrapper[4853]: E1122 08:54:51.749847 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" Nov 22 08:55:03 crc kubenswrapper[4853]: I1122 08:55:03.748518 4853 scope.go:117] "RemoveContainer" containerID="83a21d68415e9dbecbd7f6196cc5c89a7a751677adfbe72e130436d21e3a1f1a" Nov 22 08:55:04 crc kubenswrapper[4853]: I1122 08:55:04.259385 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" 
event={"ID":"476c875a-2b87-419a-8042-0ba059620fd8","Type":"ContainerStarted","Data":"7477ef73a284fc33c9a8389e37eb4cd47cf6034a6c94ea7cfd506ed50e508c00"} Nov 22 08:56:24 crc kubenswrapper[4853]: I1122 08:56:24.062626 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-rx6ds"] Nov 22 08:56:24 crc kubenswrapper[4853]: E1122 08:56:24.064654 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da1b47df-2ab1-4656-a5c7-2362f30cdd75" containerName="extract-content" Nov 22 08:56:24 crc kubenswrapper[4853]: I1122 08:56:24.064683 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="da1b47df-2ab1-4656-a5c7-2362f30cdd75" containerName="extract-content" Nov 22 08:56:24 crc kubenswrapper[4853]: E1122 08:56:24.064724 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da1b47df-2ab1-4656-a5c7-2362f30cdd75" containerName="extract-utilities" Nov 22 08:56:24 crc kubenswrapper[4853]: I1122 08:56:24.064732 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="da1b47df-2ab1-4656-a5c7-2362f30cdd75" containerName="extract-utilities" Nov 22 08:56:24 crc kubenswrapper[4853]: E1122 08:56:24.064770 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da1b47df-2ab1-4656-a5c7-2362f30cdd75" containerName="registry-server" Nov 22 08:56:24 crc kubenswrapper[4853]: I1122 08:56:24.064776 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="da1b47df-2ab1-4656-a5c7-2362f30cdd75" containerName="registry-server" Nov 22 08:56:24 crc kubenswrapper[4853]: I1122 08:56:24.065073 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="da1b47df-2ab1-4656-a5c7-2362f30cdd75" containerName="registry-server" Nov 22 08:56:24 crc kubenswrapper[4853]: I1122 08:56:24.070639 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-rx6ds" Nov 22 08:56:24 crc kubenswrapper[4853]: I1122 08:56:24.083161 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rx6ds"] Nov 22 08:56:24 crc kubenswrapper[4853]: I1122 08:56:24.187979 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5e577cb-0c98-4add-aa84-805a76b20439-catalog-content\") pod \"redhat-operators-rx6ds\" (UID: \"b5e577cb-0c98-4add-aa84-805a76b20439\") " pod="openshift-marketplace/redhat-operators-rx6ds" Nov 22 08:56:24 crc kubenswrapper[4853]: I1122 08:56:24.188035 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5e577cb-0c98-4add-aa84-805a76b20439-utilities\") pod \"redhat-operators-rx6ds\" (UID: \"b5e577cb-0c98-4add-aa84-805a76b20439\") " pod="openshift-marketplace/redhat-operators-rx6ds" Nov 22 08:56:24 crc kubenswrapper[4853]: I1122 08:56:24.188226 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kt5nh\" (UniqueName: \"kubernetes.io/projected/b5e577cb-0c98-4add-aa84-805a76b20439-kube-api-access-kt5nh\") pod \"redhat-operators-rx6ds\" (UID: \"b5e577cb-0c98-4add-aa84-805a76b20439\") " pod="openshift-marketplace/redhat-operators-rx6ds" Nov 22 08:56:24 crc kubenswrapper[4853]: I1122 08:56:24.290894 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5e577cb-0c98-4add-aa84-805a76b20439-catalog-content\") pod \"redhat-operators-rx6ds\" (UID: \"b5e577cb-0c98-4add-aa84-805a76b20439\") " pod="openshift-marketplace/redhat-operators-rx6ds" Nov 22 08:56:24 crc kubenswrapper[4853]: I1122 08:56:24.290976 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5e577cb-0c98-4add-aa84-805a76b20439-utilities\") pod \"redhat-operators-rx6ds\" (UID: \"b5e577cb-0c98-4add-aa84-805a76b20439\") " pod="openshift-marketplace/redhat-operators-rx6ds" Nov 22 08:56:24 crc kubenswrapper[4853]: I1122 08:56:24.291451 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5e577cb-0c98-4add-aa84-805a76b20439-catalog-content\") pod \"redhat-operators-rx6ds\" (UID: \"b5e577cb-0c98-4add-aa84-805a76b20439\") " pod="openshift-marketplace/redhat-operators-rx6ds" Nov 22 08:56:24 crc kubenswrapper[4853]: I1122 08:56:24.291698 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5e577cb-0c98-4add-aa84-805a76b20439-utilities\") pod \"redhat-operators-rx6ds\" (UID: \"b5e577cb-0c98-4add-aa84-805a76b20439\") " pod="openshift-marketplace/redhat-operators-rx6ds" Nov 22 08:56:24 crc kubenswrapper[4853]: I1122 08:56:24.291972 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kt5nh\" (UniqueName: \"kubernetes.io/projected/b5e577cb-0c98-4add-aa84-805a76b20439-kube-api-access-kt5nh\") pod \"redhat-operators-rx6ds\" (UID: \"b5e577cb-0c98-4add-aa84-805a76b20439\") " pod="openshift-marketplace/redhat-operators-rx6ds" Nov 22 08:56:24 crc kubenswrapper[4853]: I1122 08:56:24.315316 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-kt5nh\" (UniqueName: \"kubernetes.io/projected/b5e577cb-0c98-4add-aa84-805a76b20439-kube-api-access-kt5nh\") pod \"redhat-operators-rx6ds\" (UID: \"b5e577cb-0c98-4add-aa84-805a76b20439\") " pod="openshift-marketplace/redhat-operators-rx6ds" Nov 22 08:56:24 crc kubenswrapper[4853]: I1122 08:56:24.402660 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rx6ds" Nov 22 08:56:24 crc kubenswrapper[4853]: I1122 08:56:24.916066 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rx6ds"] Nov 22 08:56:25 crc kubenswrapper[4853]: I1122 08:56:25.451846 4853 generic.go:334] "Generic (PLEG): container finished" podID="b5e577cb-0c98-4add-aa84-805a76b20439" containerID="2b23551a5099c8b06b1dbfbe9c7516955213e886264e7a54248560979689f265" exitCode=0 Nov 22 08:56:25 crc kubenswrapper[4853]: I1122 08:56:25.451971 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rx6ds" event={"ID":"b5e577cb-0c98-4add-aa84-805a76b20439","Type":"ContainerDied","Data":"2b23551a5099c8b06b1dbfbe9c7516955213e886264e7a54248560979689f265"} Nov 22 08:56:25 crc kubenswrapper[4853]: I1122 08:56:25.452226 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rx6ds" event={"ID":"b5e577cb-0c98-4add-aa84-805a76b20439","Type":"ContainerStarted","Data":"2111cd7e6b680c636e434d3878e2ce42b3932836ba079502e0197f214db2e9fc"} Nov 22 08:56:26 crc kubenswrapper[4853]: I1122 08:56:26.467365 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rx6ds" event={"ID":"b5e577cb-0c98-4add-aa84-805a76b20439","Type":"ContainerStarted","Data":"3a5df4fd14edd3d9d1101cc83b7b2d45012f3ae88039c1e2a2f1d59d133eb663"} Nov 22 08:56:31 crc kubenswrapper[4853]: I1122 08:56:31.523840 4853 generic.go:334] "Generic (PLEG): container finished" podID="b5e577cb-0c98-4add-aa84-805a76b20439" containerID="3a5df4fd14edd3d9d1101cc83b7b2d45012f3ae88039c1e2a2f1d59d133eb663" exitCode=0 Nov 22 08:56:31 crc kubenswrapper[4853]: I1122 08:56:31.524114 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rx6ds" event={"ID":"b5e577cb-0c98-4add-aa84-805a76b20439","Type":"ContainerDied","Data":"3a5df4fd14edd3d9d1101cc83b7b2d45012f3ae88039c1e2a2f1d59d133eb663"} Nov 22 08:56:32 crc kubenswrapper[4853]: I1122 08:56:32.537077 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rx6ds" event={"ID":"b5e577cb-0c98-4add-aa84-805a76b20439","Type":"ContainerStarted","Data":"3422b18940df30318cf84f762f3eb78bd3772351d81bfeb9ad09788096ec622f"} Nov 22 08:56:32 crc kubenswrapper[4853]: I1122 08:56:32.560609 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-rx6ds" podStartSLOduration=1.943776993 podStartE2EDuration="8.560591155s" podCreationTimestamp="2025-11-22 08:56:24 +0000 UTC" firstStartedPulling="2025-11-22 08:56:25.453887248 +0000 UTC m=+6384.294509874" lastFinishedPulling="2025-11-22 08:56:32.07070141 +0000 UTC m=+6390.911324036" observedRunningTime="2025-11-22 08:56:32.552616579 +0000 UTC m=+6391.393239205" watchObservedRunningTime="2025-11-22 08:56:32.560591155 +0000 UTC m=+6391.401213781" Nov 22 08:56:34 crc kubenswrapper[4853]: I1122 08:56:34.403801 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-rx6ds" Nov 22 
08:56:34 crc kubenswrapper[4853]: I1122 08:56:34.404337 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-rx6ds" Nov 22 08:56:35 crc kubenswrapper[4853]: I1122 08:56:35.463213 4853 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-rx6ds" podUID="b5e577cb-0c98-4add-aa84-805a76b20439" containerName="registry-server" probeResult="failure" output=< Nov 22 08:56:35 crc kubenswrapper[4853]: timeout: failed to connect service ":50051" within 1s Nov 22 08:56:35 crc kubenswrapper[4853]: > Nov 22 08:56:45 crc kubenswrapper[4853]: I1122 08:56:45.453059 4853 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-rx6ds" podUID="b5e577cb-0c98-4add-aa84-805a76b20439" containerName="registry-server" probeResult="failure" output=< Nov 22 08:56:45 crc kubenswrapper[4853]: timeout: failed to connect service ":50051" within 1s Nov 22 08:56:45 crc kubenswrapper[4853]: > Nov 22 08:56:55 crc kubenswrapper[4853]: I1122 08:56:55.463924 4853 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-rx6ds" podUID="b5e577cb-0c98-4add-aa84-805a76b20439" containerName="registry-server" probeResult="failure" output=< Nov 22 08:56:55 crc kubenswrapper[4853]: timeout: failed to connect service ":50051" within 1s Nov 22 08:56:55 crc kubenswrapper[4853]: > Nov 22 08:57:04 crc kubenswrapper[4853]: I1122 08:57:04.454898 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-rx6ds" Nov 22 08:57:04 crc kubenswrapper[4853]: I1122 08:57:04.512656 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-rx6ds" Nov 22 08:57:04 crc kubenswrapper[4853]: I1122 08:57:04.698544 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rx6ds"] Nov 22 08:57:05 crc kubenswrapper[4853]: I1122 08:57:05.882622 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-rx6ds" podUID="b5e577cb-0c98-4add-aa84-805a76b20439" containerName="registry-server" containerID="cri-o://3422b18940df30318cf84f762f3eb78bd3772351d81bfeb9ad09788096ec622f" gracePeriod=2 Nov 22 08:57:06 crc kubenswrapper[4853]: I1122 08:57:06.413729 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-rx6ds" Nov 22 08:57:06 crc kubenswrapper[4853]: I1122 08:57:06.511643 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5e577cb-0c98-4add-aa84-805a76b20439-utilities\") pod \"b5e577cb-0c98-4add-aa84-805a76b20439\" (UID: \"b5e577cb-0c98-4add-aa84-805a76b20439\") " Nov 22 08:57:06 crc kubenswrapper[4853]: I1122 08:57:06.511805 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kt5nh\" (UniqueName: \"kubernetes.io/projected/b5e577cb-0c98-4add-aa84-805a76b20439-kube-api-access-kt5nh\") pod \"b5e577cb-0c98-4add-aa84-805a76b20439\" (UID: \"b5e577cb-0c98-4add-aa84-805a76b20439\") " Nov 22 08:57:06 crc kubenswrapper[4853]: I1122 08:57:06.511920 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5e577cb-0c98-4add-aa84-805a76b20439-catalog-content\") pod \"b5e577cb-0c98-4add-aa84-805a76b20439\" (UID: \"b5e577cb-0c98-4add-aa84-805a76b20439\") " Nov 22 08:57:06 crc kubenswrapper[4853]: I1122 08:57:06.512351 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b5e577cb-0c98-4add-aa84-805a76b20439-utilities" (OuterVolumeSpecName: "utilities") pod "b5e577cb-0c98-4add-aa84-805a76b20439" (UID: "b5e577cb-0c98-4add-aa84-805a76b20439"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 08:57:06 crc kubenswrapper[4853]: I1122 08:57:06.512868 4853 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5e577cb-0c98-4add-aa84-805a76b20439-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 08:57:06 crc kubenswrapper[4853]: I1122 08:57:06.517799 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5e577cb-0c98-4add-aa84-805a76b20439-kube-api-access-kt5nh" (OuterVolumeSpecName: "kube-api-access-kt5nh") pod "b5e577cb-0c98-4add-aa84-805a76b20439" (UID: "b5e577cb-0c98-4add-aa84-805a76b20439"). InnerVolumeSpecName "kube-api-access-kt5nh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 08:57:06 crc kubenswrapper[4853]: I1122 08:57:06.590616 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b5e577cb-0c98-4add-aa84-805a76b20439-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b5e577cb-0c98-4add-aa84-805a76b20439" (UID: "b5e577cb-0c98-4add-aa84-805a76b20439"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 08:57:06 crc kubenswrapper[4853]: I1122 08:57:06.615347 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kt5nh\" (UniqueName: \"kubernetes.io/projected/b5e577cb-0c98-4add-aa84-805a76b20439-kube-api-access-kt5nh\") on node \"crc\" DevicePath \"\"" Nov 22 08:57:06 crc kubenswrapper[4853]: I1122 08:57:06.615385 4853 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5e577cb-0c98-4add-aa84-805a76b20439-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 08:57:06 crc kubenswrapper[4853]: I1122 08:57:06.895311 4853 generic.go:334] "Generic (PLEG): container finished" podID="b5e577cb-0c98-4add-aa84-805a76b20439" containerID="3422b18940df30318cf84f762f3eb78bd3772351d81bfeb9ad09788096ec622f" exitCode=0 Nov 22 08:57:06 crc kubenswrapper[4853]: I1122 08:57:06.895357 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rx6ds" event={"ID":"b5e577cb-0c98-4add-aa84-805a76b20439","Type":"ContainerDied","Data":"3422b18940df30318cf84f762f3eb78bd3772351d81bfeb9ad09788096ec622f"} Nov 22 08:57:06 crc kubenswrapper[4853]: I1122 08:57:06.895376 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rx6ds" Nov 22 08:57:06 crc kubenswrapper[4853]: I1122 08:57:06.895414 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rx6ds" event={"ID":"b5e577cb-0c98-4add-aa84-805a76b20439","Type":"ContainerDied","Data":"2111cd7e6b680c636e434d3878e2ce42b3932836ba079502e0197f214db2e9fc"} Nov 22 08:57:06 crc kubenswrapper[4853]: I1122 08:57:06.895443 4853 scope.go:117] "RemoveContainer" containerID="3422b18940df30318cf84f762f3eb78bd3772351d81bfeb9ad09788096ec622f" Nov 22 08:57:06 crc kubenswrapper[4853]: I1122 08:57:06.941985 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rx6ds"] Nov 22 08:57:06 crc kubenswrapper[4853]: I1122 08:57:06.944129 4853 scope.go:117] "RemoveContainer" containerID="3a5df4fd14edd3d9d1101cc83b7b2d45012f3ae88039c1e2a2f1d59d133eb663" Nov 22 08:57:06 crc kubenswrapper[4853]: I1122 08:57:06.954238 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-rx6ds"] Nov 22 08:57:06 crc kubenswrapper[4853]: I1122 08:57:06.969297 4853 scope.go:117] "RemoveContainer" containerID="2b23551a5099c8b06b1dbfbe9c7516955213e886264e7a54248560979689f265" Nov 22 08:57:07 crc kubenswrapper[4853]: I1122 08:57:07.023913 4853 scope.go:117] "RemoveContainer" containerID="3422b18940df30318cf84f762f3eb78bd3772351d81bfeb9ad09788096ec622f" Nov 22 08:57:07 crc kubenswrapper[4853]: E1122 08:57:07.024609 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3422b18940df30318cf84f762f3eb78bd3772351d81bfeb9ad09788096ec622f\": container with ID starting with 3422b18940df30318cf84f762f3eb78bd3772351d81bfeb9ad09788096ec622f not found: ID does not exist" containerID="3422b18940df30318cf84f762f3eb78bd3772351d81bfeb9ad09788096ec622f" Nov 22 08:57:07 crc kubenswrapper[4853]: I1122 08:57:07.024680 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3422b18940df30318cf84f762f3eb78bd3772351d81bfeb9ad09788096ec622f"} err="failed to get container status \"3422b18940df30318cf84f762f3eb78bd3772351d81bfeb9ad09788096ec622f\": 
rpc error: code = NotFound desc = could not find container \"3422b18940df30318cf84f762f3eb78bd3772351d81bfeb9ad09788096ec622f\": container with ID starting with 3422b18940df30318cf84f762f3eb78bd3772351d81bfeb9ad09788096ec622f not found: ID does not exist" Nov 22 08:57:07 crc kubenswrapper[4853]: I1122 08:57:07.024713 4853 scope.go:117] "RemoveContainer" containerID="3a5df4fd14edd3d9d1101cc83b7b2d45012f3ae88039c1e2a2f1d59d133eb663" Nov 22 08:57:07 crc kubenswrapper[4853]: E1122 08:57:07.025165 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3a5df4fd14edd3d9d1101cc83b7b2d45012f3ae88039c1e2a2f1d59d133eb663\": container with ID starting with 3a5df4fd14edd3d9d1101cc83b7b2d45012f3ae88039c1e2a2f1d59d133eb663 not found: ID does not exist" containerID="3a5df4fd14edd3d9d1101cc83b7b2d45012f3ae88039c1e2a2f1d59d133eb663" Nov 22 08:57:07 crc kubenswrapper[4853]: I1122 08:57:07.025267 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a5df4fd14edd3d9d1101cc83b7b2d45012f3ae88039c1e2a2f1d59d133eb663"} err="failed to get container status \"3a5df4fd14edd3d9d1101cc83b7b2d45012f3ae88039c1e2a2f1d59d133eb663\": rpc error: code = NotFound desc = could not find container \"3a5df4fd14edd3d9d1101cc83b7b2d45012f3ae88039c1e2a2f1d59d133eb663\": container with ID starting with 3a5df4fd14edd3d9d1101cc83b7b2d45012f3ae88039c1e2a2f1d59d133eb663 not found: ID does not exist" Nov 22 08:57:07 crc kubenswrapper[4853]: I1122 08:57:07.025337 4853 scope.go:117] "RemoveContainer" containerID="2b23551a5099c8b06b1dbfbe9c7516955213e886264e7a54248560979689f265" Nov 22 08:57:07 crc kubenswrapper[4853]: E1122 08:57:07.025727 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2b23551a5099c8b06b1dbfbe9c7516955213e886264e7a54248560979689f265\": container with ID starting with 2b23551a5099c8b06b1dbfbe9c7516955213e886264e7a54248560979689f265 not found: ID does not exist" containerID="2b23551a5099c8b06b1dbfbe9c7516955213e886264e7a54248560979689f265" Nov 22 08:57:07 crc kubenswrapper[4853]: I1122 08:57:07.025802 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2b23551a5099c8b06b1dbfbe9c7516955213e886264e7a54248560979689f265"} err="failed to get container status \"2b23551a5099c8b06b1dbfbe9c7516955213e886264e7a54248560979689f265\": rpc error: code = NotFound desc = could not find container \"2b23551a5099c8b06b1dbfbe9c7516955213e886264e7a54248560979689f265\": container with ID starting with 2b23551a5099c8b06b1dbfbe9c7516955213e886264e7a54248560979689f265 not found: ID does not exist" Nov 22 08:57:07 crc kubenswrapper[4853]: I1122 08:57:07.761835 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b5e577cb-0c98-4add-aa84-805a76b20439" path="/var/lib/kubelet/pods/b5e577cb-0c98-4add-aa84-805a76b20439/volumes" Nov 22 08:57:19 crc kubenswrapper[4853]: I1122 08:57:19.736999 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-z5k6s"] Nov 22 08:57:19 crc kubenswrapper[4853]: E1122 08:57:19.738030 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5e577cb-0c98-4add-aa84-805a76b20439" containerName="extract-content" Nov 22 08:57:19 crc kubenswrapper[4853]: I1122 08:57:19.738047 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5e577cb-0c98-4add-aa84-805a76b20439" containerName="extract-content" Nov 22 
08:57:19 crc kubenswrapper[4853]: E1122 08:57:19.738064 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5e577cb-0c98-4add-aa84-805a76b20439" containerName="registry-server" Nov 22 08:57:19 crc kubenswrapper[4853]: I1122 08:57:19.738071 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5e577cb-0c98-4add-aa84-805a76b20439" containerName="registry-server" Nov 22 08:57:19 crc kubenswrapper[4853]: E1122 08:57:19.738093 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5e577cb-0c98-4add-aa84-805a76b20439" containerName="extract-utilities" Nov 22 08:57:19 crc kubenswrapper[4853]: I1122 08:57:19.738103 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5e577cb-0c98-4add-aa84-805a76b20439" containerName="extract-utilities" Nov 22 08:57:19 crc kubenswrapper[4853]: I1122 08:57:19.738411 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5e577cb-0c98-4add-aa84-805a76b20439" containerName="registry-server" Nov 22 08:57:19 crc kubenswrapper[4853]: I1122 08:57:19.741488 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-z5k6s" Nov 22 08:57:19 crc kubenswrapper[4853]: I1122 08:57:19.762528 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-z5k6s"] Nov 22 08:57:19 crc kubenswrapper[4853]: I1122 08:57:19.835385 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gx4vr\" (UniqueName: \"kubernetes.io/projected/8e586714-7e54-40d3-b951-f66b087ab4f3-kube-api-access-gx4vr\") pod \"redhat-marketplace-z5k6s\" (UID: \"8e586714-7e54-40d3-b951-f66b087ab4f3\") " pod="openshift-marketplace/redhat-marketplace-z5k6s" Nov 22 08:57:19 crc kubenswrapper[4853]: I1122 08:57:19.835551 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e586714-7e54-40d3-b951-f66b087ab4f3-utilities\") pod \"redhat-marketplace-z5k6s\" (UID: \"8e586714-7e54-40d3-b951-f66b087ab4f3\") " pod="openshift-marketplace/redhat-marketplace-z5k6s" Nov 22 08:57:19 crc kubenswrapper[4853]: I1122 08:57:19.835597 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e586714-7e54-40d3-b951-f66b087ab4f3-catalog-content\") pod \"redhat-marketplace-z5k6s\" (UID: \"8e586714-7e54-40d3-b951-f66b087ab4f3\") " pod="openshift-marketplace/redhat-marketplace-z5k6s" Nov 22 08:57:19 crc kubenswrapper[4853]: I1122 08:57:19.937559 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gx4vr\" (UniqueName: \"kubernetes.io/projected/8e586714-7e54-40d3-b951-f66b087ab4f3-kube-api-access-gx4vr\") pod \"redhat-marketplace-z5k6s\" (UID: \"8e586714-7e54-40d3-b951-f66b087ab4f3\") " pod="openshift-marketplace/redhat-marketplace-z5k6s" Nov 22 08:57:19 crc kubenswrapper[4853]: I1122 08:57:19.937679 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e586714-7e54-40d3-b951-f66b087ab4f3-utilities\") pod \"redhat-marketplace-z5k6s\" (UID: \"8e586714-7e54-40d3-b951-f66b087ab4f3\") " pod="openshift-marketplace/redhat-marketplace-z5k6s" Nov 22 08:57:19 crc kubenswrapper[4853]: I1122 08:57:19.937719 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" 
(UniqueName: \"kubernetes.io/empty-dir/8e586714-7e54-40d3-b951-f66b087ab4f3-catalog-content\") pod \"redhat-marketplace-z5k6s\" (UID: \"8e586714-7e54-40d3-b951-f66b087ab4f3\") " pod="openshift-marketplace/redhat-marketplace-z5k6s" Nov 22 08:57:19 crc kubenswrapper[4853]: I1122 08:57:19.938284 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e586714-7e54-40d3-b951-f66b087ab4f3-utilities\") pod \"redhat-marketplace-z5k6s\" (UID: \"8e586714-7e54-40d3-b951-f66b087ab4f3\") " pod="openshift-marketplace/redhat-marketplace-z5k6s" Nov 22 08:57:19 crc kubenswrapper[4853]: I1122 08:57:19.938337 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e586714-7e54-40d3-b951-f66b087ab4f3-catalog-content\") pod \"redhat-marketplace-z5k6s\" (UID: \"8e586714-7e54-40d3-b951-f66b087ab4f3\") " pod="openshift-marketplace/redhat-marketplace-z5k6s" Nov 22 08:57:19 crc kubenswrapper[4853]: I1122 08:57:19.960195 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gx4vr\" (UniqueName: \"kubernetes.io/projected/8e586714-7e54-40d3-b951-f66b087ab4f3-kube-api-access-gx4vr\") pod \"redhat-marketplace-z5k6s\" (UID: \"8e586714-7e54-40d3-b951-f66b087ab4f3\") " pod="openshift-marketplace/redhat-marketplace-z5k6s" Nov 22 08:57:20 crc kubenswrapper[4853]: I1122 08:57:20.072415 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-z5k6s" Nov 22 08:57:20 crc kubenswrapper[4853]: I1122 08:57:20.345773 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-5qvlw"] Nov 22 08:57:20 crc kubenswrapper[4853]: I1122 08:57:20.349409 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5qvlw" Nov 22 08:57:20 crc kubenswrapper[4853]: I1122 08:57:20.362450 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5qvlw"] Nov 22 08:57:20 crc kubenswrapper[4853]: I1122 08:57:20.450210 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e9099a6-ee86-4e04-b3a7-04574b5e5c10-utilities\") pod \"community-operators-5qvlw\" (UID: \"0e9099a6-ee86-4e04-b3a7-04574b5e5c10\") " pod="openshift-marketplace/community-operators-5qvlw" Nov 22 08:57:20 crc kubenswrapper[4853]: I1122 08:57:20.451095 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9txcw\" (UniqueName: \"kubernetes.io/projected/0e9099a6-ee86-4e04-b3a7-04574b5e5c10-kube-api-access-9txcw\") pod \"community-operators-5qvlw\" (UID: \"0e9099a6-ee86-4e04-b3a7-04574b5e5c10\") " pod="openshift-marketplace/community-operators-5qvlw" Nov 22 08:57:20 crc kubenswrapper[4853]: I1122 08:57:20.451358 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e9099a6-ee86-4e04-b3a7-04574b5e5c10-catalog-content\") pod \"community-operators-5qvlw\" (UID: \"0e9099a6-ee86-4e04-b3a7-04574b5e5c10\") " pod="openshift-marketplace/community-operators-5qvlw" Nov 22 08:57:20 crc kubenswrapper[4853]: I1122 08:57:20.554223 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9txcw\" (UniqueName: \"kubernetes.io/projected/0e9099a6-ee86-4e04-b3a7-04574b5e5c10-kube-api-access-9txcw\") pod \"community-operators-5qvlw\" (UID: \"0e9099a6-ee86-4e04-b3a7-04574b5e5c10\") " pod="openshift-marketplace/community-operators-5qvlw" Nov 22 08:57:20 crc kubenswrapper[4853]: I1122 08:57:20.554306 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e9099a6-ee86-4e04-b3a7-04574b5e5c10-catalog-content\") pod \"community-operators-5qvlw\" (UID: \"0e9099a6-ee86-4e04-b3a7-04574b5e5c10\") " pod="openshift-marketplace/community-operators-5qvlw" Nov 22 08:57:20 crc kubenswrapper[4853]: I1122 08:57:20.554371 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e9099a6-ee86-4e04-b3a7-04574b5e5c10-utilities\") pod \"community-operators-5qvlw\" (UID: \"0e9099a6-ee86-4e04-b3a7-04574b5e5c10\") " pod="openshift-marketplace/community-operators-5qvlw" Nov 22 08:57:20 crc kubenswrapper[4853]: I1122 08:57:20.555085 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e9099a6-ee86-4e04-b3a7-04574b5e5c10-utilities\") pod \"community-operators-5qvlw\" (UID: \"0e9099a6-ee86-4e04-b3a7-04574b5e5c10\") " pod="openshift-marketplace/community-operators-5qvlw" Nov 22 08:57:20 crc kubenswrapper[4853]: I1122 08:57:20.555315 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e9099a6-ee86-4e04-b3a7-04574b5e5c10-catalog-content\") pod \"community-operators-5qvlw\" (UID: \"0e9099a6-ee86-4e04-b3a7-04574b5e5c10\") " pod="openshift-marketplace/community-operators-5qvlw" Nov 22 08:57:20 crc kubenswrapper[4853]: I1122 08:57:20.578287 4853 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-9txcw\" (UniqueName: \"kubernetes.io/projected/0e9099a6-ee86-4e04-b3a7-04574b5e5c10-kube-api-access-9txcw\") pod \"community-operators-5qvlw\" (UID: \"0e9099a6-ee86-4e04-b3a7-04574b5e5c10\") " pod="openshift-marketplace/community-operators-5qvlw" Nov 22 08:57:20 crc kubenswrapper[4853]: I1122 08:57:20.584839 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-z5k6s"] Nov 22 08:57:20 crc kubenswrapper[4853]: I1122 08:57:20.685559 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5qvlw" Nov 22 08:57:21 crc kubenswrapper[4853]: I1122 08:57:21.052266 4853 generic.go:334] "Generic (PLEG): container finished" podID="8e586714-7e54-40d3-b951-f66b087ab4f3" containerID="1f412b53a75aa4db000d648b044c0b5f6c395ee17b25a293c49f48a80dfd88a2" exitCode=0 Nov 22 08:57:21 crc kubenswrapper[4853]: I1122 08:57:21.052312 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z5k6s" event={"ID":"8e586714-7e54-40d3-b951-f66b087ab4f3","Type":"ContainerDied","Data":"1f412b53a75aa4db000d648b044c0b5f6c395ee17b25a293c49f48a80dfd88a2"} Nov 22 08:57:21 crc kubenswrapper[4853]: I1122 08:57:21.052602 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z5k6s" event={"ID":"8e586714-7e54-40d3-b951-f66b087ab4f3","Type":"ContainerStarted","Data":"3e76cd0eebb03e6c359abed3441330531e5bcbd20df38da3c2fef7732c8c2185"} Nov 22 08:57:21 crc kubenswrapper[4853]: I1122 08:57:21.215882 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5qvlw"] Nov 22 08:57:22 crc kubenswrapper[4853]: I1122 08:57:22.065647 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z5k6s" event={"ID":"8e586714-7e54-40d3-b951-f66b087ab4f3","Type":"ContainerStarted","Data":"b8d2a9d3a7affb115b89a08cae94d446ab70c0d8769e949c6cd8c005c566235a"} Nov 22 08:57:22 crc kubenswrapper[4853]: I1122 08:57:22.068127 4853 generic.go:334] "Generic (PLEG): container finished" podID="0e9099a6-ee86-4e04-b3a7-04574b5e5c10" containerID="4c99215f3c7f83f1832be86e63a7d31caaacbbbcc8d98acbcd5bfbec70928096" exitCode=0 Nov 22 08:57:22 crc kubenswrapper[4853]: I1122 08:57:22.068175 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5qvlw" event={"ID":"0e9099a6-ee86-4e04-b3a7-04574b5e5c10","Type":"ContainerDied","Data":"4c99215f3c7f83f1832be86e63a7d31caaacbbbcc8d98acbcd5bfbec70928096"} Nov 22 08:57:22 crc kubenswrapper[4853]: I1122 08:57:22.068199 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5qvlw" event={"ID":"0e9099a6-ee86-4e04-b3a7-04574b5e5c10","Type":"ContainerStarted","Data":"a146e287b7e60507cbed62ce2220a77df9a216d1b61f4f57ff310f5612376db9"} Nov 22 08:57:23 crc kubenswrapper[4853]: I1122 08:57:23.083624 4853 generic.go:334] "Generic (PLEG): container finished" podID="8e586714-7e54-40d3-b951-f66b087ab4f3" containerID="b8d2a9d3a7affb115b89a08cae94d446ab70c0d8769e949c6cd8c005c566235a" exitCode=0 Nov 22 08:57:23 crc kubenswrapper[4853]: I1122 08:57:23.084034 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z5k6s" event={"ID":"8e586714-7e54-40d3-b951-f66b087ab4f3","Type":"ContainerDied","Data":"b8d2a9d3a7affb115b89a08cae94d446ab70c0d8769e949c6cd8c005c566235a"} Nov 22 
08:57:24 crc kubenswrapper[4853]: I1122 08:57:24.098250 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5qvlw" event={"ID":"0e9099a6-ee86-4e04-b3a7-04574b5e5c10","Type":"ContainerStarted","Data":"66c9617b98fe44c1515a981e24750a64b3c298c8ea92137347ddf5de4651a1d2"} Nov 22 08:57:24 crc kubenswrapper[4853]: I1122 08:57:24.101496 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z5k6s" event={"ID":"8e586714-7e54-40d3-b951-f66b087ab4f3","Type":"ContainerStarted","Data":"18a21732aecaac519334462e7c7e2c649b0f94e53e3f871662a4717bd2623002"} Nov 22 08:57:24 crc kubenswrapper[4853]: I1122 08:57:24.140807 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-z5k6s" podStartSLOduration=2.670269581 podStartE2EDuration="5.140786132s" podCreationTimestamp="2025-11-22 08:57:19 +0000 UTC" firstStartedPulling="2025-11-22 08:57:21.055667065 +0000 UTC m=+6439.896289691" lastFinishedPulling="2025-11-22 08:57:23.526183616 +0000 UTC m=+6442.366806242" observedRunningTime="2025-11-22 08:57:24.136552768 +0000 UTC m=+6442.977175404" watchObservedRunningTime="2025-11-22 08:57:24.140786132 +0000 UTC m=+6442.981408758" Nov 22 08:57:27 crc kubenswrapper[4853]: I1122 08:57:27.136466 4853 generic.go:334] "Generic (PLEG): container finished" podID="0e9099a6-ee86-4e04-b3a7-04574b5e5c10" containerID="66c9617b98fe44c1515a981e24750a64b3c298c8ea92137347ddf5de4651a1d2" exitCode=0 Nov 22 08:57:27 crc kubenswrapper[4853]: I1122 08:57:27.136538 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5qvlw" event={"ID":"0e9099a6-ee86-4e04-b3a7-04574b5e5c10","Type":"ContainerDied","Data":"66c9617b98fe44c1515a981e24750a64b3c298c8ea92137347ddf5de4651a1d2"} Nov 22 08:57:28 crc kubenswrapper[4853]: I1122 08:57:28.150822 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5qvlw" event={"ID":"0e9099a6-ee86-4e04-b3a7-04574b5e5c10","Type":"ContainerStarted","Data":"0765cd57afc5b2bf8d40c9303245df84cc280f5925734f09504cafbc2048b719"} Nov 22 08:57:28 crc kubenswrapper[4853]: I1122 08:57:28.186496 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-5qvlw" podStartSLOduration=2.725894145 podStartE2EDuration="8.186472692s" podCreationTimestamp="2025-11-22 08:57:20 +0000 UTC" firstStartedPulling="2025-11-22 08:57:22.069651856 +0000 UTC m=+6440.910274482" lastFinishedPulling="2025-11-22 08:57:27.530230403 +0000 UTC m=+6446.370853029" observedRunningTime="2025-11-22 08:57:28.176456721 +0000 UTC m=+6447.017079347" watchObservedRunningTime="2025-11-22 08:57:28.186472692 +0000 UTC m=+6447.027095328" Nov 22 08:57:30 crc kubenswrapper[4853]: I1122 08:57:30.073099 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-z5k6s" Nov 22 08:57:30 crc kubenswrapper[4853]: I1122 08:57:30.073785 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-z5k6s" Nov 22 08:57:30 crc kubenswrapper[4853]: I1122 08:57:30.127006 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-z5k6s" Nov 22 08:57:30 crc kubenswrapper[4853]: I1122 08:57:30.223522 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/redhat-marketplace-z5k6s" Nov 22 08:57:30 crc kubenswrapper[4853]: I1122 08:57:30.685701 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-5qvlw" Nov 22 08:57:30 crc kubenswrapper[4853]: I1122 08:57:30.686528 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-5qvlw" Nov 22 08:57:31 crc kubenswrapper[4853]: I1122 08:57:31.297814 4853 patch_prober.go:28] interesting pod/machine-config-daemon-fflvd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 08:57:31 crc kubenswrapper[4853]: I1122 08:57:31.297889 4853 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 08:57:31 crc kubenswrapper[4853]: I1122 08:57:31.734434 4853 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-5qvlw" podUID="0e9099a6-ee86-4e04-b3a7-04574b5e5c10" containerName="registry-server" probeResult="failure" output=< Nov 22 08:57:31 crc kubenswrapper[4853]: timeout: failed to connect service ":50051" within 1s Nov 22 08:57:31 crc kubenswrapper[4853]: > Nov 22 08:57:32 crc kubenswrapper[4853]: I1122 08:57:32.325143 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-z5k6s"] Nov 22 08:57:32 crc kubenswrapper[4853]: I1122 08:57:32.325442 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-z5k6s" podUID="8e586714-7e54-40d3-b951-f66b087ab4f3" containerName="registry-server" containerID="cri-o://18a21732aecaac519334462e7c7e2c649b0f94e53e3f871662a4717bd2623002" gracePeriod=2 Nov 22 08:57:32 crc kubenswrapper[4853]: I1122 08:57:32.855841 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-z5k6s" Nov 22 08:57:33 crc kubenswrapper[4853]: I1122 08:57:33.047555 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e586714-7e54-40d3-b951-f66b087ab4f3-catalog-content\") pod \"8e586714-7e54-40d3-b951-f66b087ab4f3\" (UID: \"8e586714-7e54-40d3-b951-f66b087ab4f3\") " Nov 22 08:57:33 crc kubenswrapper[4853]: I1122 08:57:33.047671 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e586714-7e54-40d3-b951-f66b087ab4f3-utilities\") pod \"8e586714-7e54-40d3-b951-f66b087ab4f3\" (UID: \"8e586714-7e54-40d3-b951-f66b087ab4f3\") " Nov 22 08:57:33 crc kubenswrapper[4853]: I1122 08:57:33.047975 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gx4vr\" (UniqueName: \"kubernetes.io/projected/8e586714-7e54-40d3-b951-f66b087ab4f3-kube-api-access-gx4vr\") pod \"8e586714-7e54-40d3-b951-f66b087ab4f3\" (UID: \"8e586714-7e54-40d3-b951-f66b087ab4f3\") " Nov 22 08:57:33 crc kubenswrapper[4853]: I1122 08:57:33.048549 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8e586714-7e54-40d3-b951-f66b087ab4f3-utilities" (OuterVolumeSpecName: "utilities") pod "8e586714-7e54-40d3-b951-f66b087ab4f3" (UID: "8e586714-7e54-40d3-b951-f66b087ab4f3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 08:57:33 crc kubenswrapper[4853]: I1122 08:57:33.049055 4853 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e586714-7e54-40d3-b951-f66b087ab4f3-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 08:57:33 crc kubenswrapper[4853]: I1122 08:57:33.056520 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e586714-7e54-40d3-b951-f66b087ab4f3-kube-api-access-gx4vr" (OuterVolumeSpecName: "kube-api-access-gx4vr") pod "8e586714-7e54-40d3-b951-f66b087ab4f3" (UID: "8e586714-7e54-40d3-b951-f66b087ab4f3"). InnerVolumeSpecName "kube-api-access-gx4vr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 08:57:33 crc kubenswrapper[4853]: I1122 08:57:33.063856 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8e586714-7e54-40d3-b951-f66b087ab4f3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8e586714-7e54-40d3-b951-f66b087ab4f3" (UID: "8e586714-7e54-40d3-b951-f66b087ab4f3"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 08:57:33 crc kubenswrapper[4853]: I1122 08:57:33.151462 4853 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e586714-7e54-40d3-b951-f66b087ab4f3-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 08:57:33 crc kubenswrapper[4853]: I1122 08:57:33.151521 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gx4vr\" (UniqueName: \"kubernetes.io/projected/8e586714-7e54-40d3-b951-f66b087ab4f3-kube-api-access-gx4vr\") on node \"crc\" DevicePath \"\"" Nov 22 08:57:33 crc kubenswrapper[4853]: I1122 08:57:33.212078 4853 generic.go:334] "Generic (PLEG): container finished" podID="8e586714-7e54-40d3-b951-f66b087ab4f3" containerID="18a21732aecaac519334462e7c7e2c649b0f94e53e3f871662a4717bd2623002" exitCode=0 Nov 22 08:57:33 crc kubenswrapper[4853]: I1122 08:57:33.212154 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-z5k6s" Nov 22 08:57:33 crc kubenswrapper[4853]: I1122 08:57:33.212165 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z5k6s" event={"ID":"8e586714-7e54-40d3-b951-f66b087ab4f3","Type":"ContainerDied","Data":"18a21732aecaac519334462e7c7e2c649b0f94e53e3f871662a4717bd2623002"} Nov 22 08:57:33 crc kubenswrapper[4853]: I1122 08:57:33.212267 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z5k6s" event={"ID":"8e586714-7e54-40d3-b951-f66b087ab4f3","Type":"ContainerDied","Data":"3e76cd0eebb03e6c359abed3441330531e5bcbd20df38da3c2fef7732c8c2185"} Nov 22 08:57:33 crc kubenswrapper[4853]: I1122 08:57:33.212340 4853 scope.go:117] "RemoveContainer" containerID="18a21732aecaac519334462e7c7e2c649b0f94e53e3f871662a4717bd2623002" Nov 22 08:57:33 crc kubenswrapper[4853]: I1122 08:57:33.264060 4853 scope.go:117] "RemoveContainer" containerID="b8d2a9d3a7affb115b89a08cae94d446ab70c0d8769e949c6cd8c005c566235a" Nov 22 08:57:33 crc kubenswrapper[4853]: I1122 08:57:33.270025 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-z5k6s"] Nov 22 08:57:33 crc kubenswrapper[4853]: I1122 08:57:33.285969 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-z5k6s"] Nov 22 08:57:33 crc kubenswrapper[4853]: I1122 08:57:33.303198 4853 scope.go:117] "RemoveContainer" containerID="1f412b53a75aa4db000d648b044c0b5f6c395ee17b25a293c49f48a80dfd88a2" Nov 22 08:57:33 crc kubenswrapper[4853]: I1122 08:57:33.354624 4853 scope.go:117] "RemoveContainer" containerID="18a21732aecaac519334462e7c7e2c649b0f94e53e3f871662a4717bd2623002" Nov 22 08:57:33 crc kubenswrapper[4853]: E1122 08:57:33.355467 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"18a21732aecaac519334462e7c7e2c649b0f94e53e3f871662a4717bd2623002\": container with ID starting with 18a21732aecaac519334462e7c7e2c649b0f94e53e3f871662a4717bd2623002 not found: ID does not exist" containerID="18a21732aecaac519334462e7c7e2c649b0f94e53e3f871662a4717bd2623002" Nov 22 08:57:33 crc kubenswrapper[4853]: I1122 08:57:33.355501 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"18a21732aecaac519334462e7c7e2c649b0f94e53e3f871662a4717bd2623002"} err="failed to get container status 
\"18a21732aecaac519334462e7c7e2c649b0f94e53e3f871662a4717bd2623002\": rpc error: code = NotFound desc = could not find container \"18a21732aecaac519334462e7c7e2c649b0f94e53e3f871662a4717bd2623002\": container with ID starting with 18a21732aecaac519334462e7c7e2c649b0f94e53e3f871662a4717bd2623002 not found: ID does not exist" Nov 22 08:57:33 crc kubenswrapper[4853]: I1122 08:57:33.355526 4853 scope.go:117] "RemoveContainer" containerID="b8d2a9d3a7affb115b89a08cae94d446ab70c0d8769e949c6cd8c005c566235a" Nov 22 08:57:33 crc kubenswrapper[4853]: E1122 08:57:33.356008 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b8d2a9d3a7affb115b89a08cae94d446ab70c0d8769e949c6cd8c005c566235a\": container with ID starting with b8d2a9d3a7affb115b89a08cae94d446ab70c0d8769e949c6cd8c005c566235a not found: ID does not exist" containerID="b8d2a9d3a7affb115b89a08cae94d446ab70c0d8769e949c6cd8c005c566235a" Nov 22 08:57:33 crc kubenswrapper[4853]: I1122 08:57:33.356054 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b8d2a9d3a7affb115b89a08cae94d446ab70c0d8769e949c6cd8c005c566235a"} err="failed to get container status \"b8d2a9d3a7affb115b89a08cae94d446ab70c0d8769e949c6cd8c005c566235a\": rpc error: code = NotFound desc = could not find container \"b8d2a9d3a7affb115b89a08cae94d446ab70c0d8769e949c6cd8c005c566235a\": container with ID starting with b8d2a9d3a7affb115b89a08cae94d446ab70c0d8769e949c6cd8c005c566235a not found: ID does not exist" Nov 22 08:57:33 crc kubenswrapper[4853]: I1122 08:57:33.356082 4853 scope.go:117] "RemoveContainer" containerID="1f412b53a75aa4db000d648b044c0b5f6c395ee17b25a293c49f48a80dfd88a2" Nov 22 08:57:33 crc kubenswrapper[4853]: E1122 08:57:33.356550 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1f412b53a75aa4db000d648b044c0b5f6c395ee17b25a293c49f48a80dfd88a2\": container with ID starting with 1f412b53a75aa4db000d648b044c0b5f6c395ee17b25a293c49f48a80dfd88a2 not found: ID does not exist" containerID="1f412b53a75aa4db000d648b044c0b5f6c395ee17b25a293c49f48a80dfd88a2" Nov 22 08:57:33 crc kubenswrapper[4853]: I1122 08:57:33.356608 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1f412b53a75aa4db000d648b044c0b5f6c395ee17b25a293c49f48a80dfd88a2"} err="failed to get container status \"1f412b53a75aa4db000d648b044c0b5f6c395ee17b25a293c49f48a80dfd88a2\": rpc error: code = NotFound desc = could not find container \"1f412b53a75aa4db000d648b044c0b5f6c395ee17b25a293c49f48a80dfd88a2\": container with ID starting with 1f412b53a75aa4db000d648b044c0b5f6c395ee17b25a293c49f48a80dfd88a2 not found: ID does not exist" Nov 22 08:57:33 crc kubenswrapper[4853]: I1122 08:57:33.763100 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e586714-7e54-40d3-b951-f66b087ab4f3" path="/var/lib/kubelet/pods/8e586714-7e54-40d3-b951-f66b087ab4f3/volumes" Nov 22 08:57:41 crc kubenswrapper[4853]: I1122 08:57:41.737901 4853 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-5qvlw" podUID="0e9099a6-ee86-4e04-b3a7-04574b5e5c10" containerName="registry-server" probeResult="failure" output=< Nov 22 08:57:41 crc kubenswrapper[4853]: timeout: failed to connect service ":50051" within 1s Nov 22 08:57:41 crc kubenswrapper[4853]: > Nov 22 08:57:50 crc kubenswrapper[4853]: I1122 08:57:50.744344 4853 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-5qvlw" Nov 22 08:57:50 crc kubenswrapper[4853]: I1122 08:57:50.800274 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-5qvlw" Nov 22 08:57:53 crc kubenswrapper[4853]: I1122 08:57:53.232614 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5qvlw"] Nov 22 08:57:53 crc kubenswrapper[4853]: I1122 08:57:53.233523 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-5qvlw" podUID="0e9099a6-ee86-4e04-b3a7-04574b5e5c10" containerName="registry-server" containerID="cri-o://0765cd57afc5b2bf8d40c9303245df84cc280f5925734f09504cafbc2048b719" gracePeriod=2 Nov 22 08:57:53 crc kubenswrapper[4853]: I1122 08:57:53.458552 4853 generic.go:334] "Generic (PLEG): container finished" podID="0e9099a6-ee86-4e04-b3a7-04574b5e5c10" containerID="0765cd57afc5b2bf8d40c9303245df84cc280f5925734f09504cafbc2048b719" exitCode=0 Nov 22 08:57:53 crc kubenswrapper[4853]: I1122 08:57:53.458596 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5qvlw" event={"ID":"0e9099a6-ee86-4e04-b3a7-04574b5e5c10","Type":"ContainerDied","Data":"0765cd57afc5b2bf8d40c9303245df84cc280f5925734f09504cafbc2048b719"} Nov 22 08:57:54 crc kubenswrapper[4853]: I1122 08:57:54.111621 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5qvlw" Nov 22 08:57:54 crc kubenswrapper[4853]: I1122 08:57:54.270033 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e9099a6-ee86-4e04-b3a7-04574b5e5c10-utilities\") pod \"0e9099a6-ee86-4e04-b3a7-04574b5e5c10\" (UID: \"0e9099a6-ee86-4e04-b3a7-04574b5e5c10\") " Nov 22 08:57:54 crc kubenswrapper[4853]: I1122 08:57:54.270117 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e9099a6-ee86-4e04-b3a7-04574b5e5c10-catalog-content\") pod \"0e9099a6-ee86-4e04-b3a7-04574b5e5c10\" (UID: \"0e9099a6-ee86-4e04-b3a7-04574b5e5c10\") " Nov 22 08:57:54 crc kubenswrapper[4853]: I1122 08:57:54.270542 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9txcw\" (UniqueName: \"kubernetes.io/projected/0e9099a6-ee86-4e04-b3a7-04574b5e5c10-kube-api-access-9txcw\") pod \"0e9099a6-ee86-4e04-b3a7-04574b5e5c10\" (UID: \"0e9099a6-ee86-4e04-b3a7-04574b5e5c10\") " Nov 22 08:57:54 crc kubenswrapper[4853]: I1122 08:57:54.270906 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0e9099a6-ee86-4e04-b3a7-04574b5e5c10-utilities" (OuterVolumeSpecName: "utilities") pod "0e9099a6-ee86-4e04-b3a7-04574b5e5c10" (UID: "0e9099a6-ee86-4e04-b3a7-04574b5e5c10"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 08:57:54 crc kubenswrapper[4853]: I1122 08:57:54.272213 4853 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e9099a6-ee86-4e04-b3a7-04574b5e5c10-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 08:57:54 crc kubenswrapper[4853]: I1122 08:57:54.279495 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e9099a6-ee86-4e04-b3a7-04574b5e5c10-kube-api-access-9txcw" (OuterVolumeSpecName: "kube-api-access-9txcw") pod "0e9099a6-ee86-4e04-b3a7-04574b5e5c10" (UID: "0e9099a6-ee86-4e04-b3a7-04574b5e5c10"). InnerVolumeSpecName "kube-api-access-9txcw". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 08:57:54 crc kubenswrapper[4853]: I1122 08:57:54.346251 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0e9099a6-ee86-4e04-b3a7-04574b5e5c10-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0e9099a6-ee86-4e04-b3a7-04574b5e5c10" (UID: "0e9099a6-ee86-4e04-b3a7-04574b5e5c10"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 08:57:54 crc kubenswrapper[4853]: I1122 08:57:54.375354 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9txcw\" (UniqueName: \"kubernetes.io/projected/0e9099a6-ee86-4e04-b3a7-04574b5e5c10-kube-api-access-9txcw\") on node \"crc\" DevicePath \"\"" Nov 22 08:57:54 crc kubenswrapper[4853]: I1122 08:57:54.375395 4853 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e9099a6-ee86-4e04-b3a7-04574b5e5c10-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 08:57:54 crc kubenswrapper[4853]: I1122 08:57:54.471854 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5qvlw" event={"ID":"0e9099a6-ee86-4e04-b3a7-04574b5e5c10","Type":"ContainerDied","Data":"a146e287b7e60507cbed62ce2220a77df9a216d1b61f4f57ff310f5612376db9"} Nov 22 08:57:54 crc kubenswrapper[4853]: I1122 08:57:54.471923 4853 scope.go:117] "RemoveContainer" containerID="0765cd57afc5b2bf8d40c9303245df84cc280f5925734f09504cafbc2048b719" Nov 22 08:57:54 crc kubenswrapper[4853]: I1122 08:57:54.472112 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5qvlw" Nov 22 08:57:54 crc kubenswrapper[4853]: I1122 08:57:54.505553 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5qvlw"] Nov 22 08:57:54 crc kubenswrapper[4853]: I1122 08:57:54.510056 4853 scope.go:117] "RemoveContainer" containerID="66c9617b98fe44c1515a981e24750a64b3c298c8ea92137347ddf5de4651a1d2" Nov 22 08:57:54 crc kubenswrapper[4853]: I1122 08:57:54.516890 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-5qvlw"] Nov 22 08:57:54 crc kubenswrapper[4853]: I1122 08:57:54.536221 4853 scope.go:117] "RemoveContainer" containerID="4c99215f3c7f83f1832be86e63a7d31caaacbbbcc8d98acbcd5bfbec70928096" Nov 22 08:57:55 crc kubenswrapper[4853]: I1122 08:57:55.760042 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0e9099a6-ee86-4e04-b3a7-04574b5e5c10" path="/var/lib/kubelet/pods/0e9099a6-ee86-4e04-b3a7-04574b5e5c10/volumes" Nov 22 08:58:01 crc kubenswrapper[4853]: I1122 08:58:01.297425 4853 patch_prober.go:28] interesting pod/machine-config-daemon-fflvd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 08:58:01 crc kubenswrapper[4853]: I1122 08:58:01.297989 4853 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 08:58:31 crc kubenswrapper[4853]: I1122 08:58:31.297969 4853 patch_prober.go:28] interesting pod/machine-config-daemon-fflvd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 08:58:31 crc kubenswrapper[4853]: I1122 08:58:31.298850 4853 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 08:58:31 crc kubenswrapper[4853]: I1122 08:58:31.298933 4853 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" Nov 22 08:58:31 crc kubenswrapper[4853]: I1122 08:58:31.300488 4853 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7477ef73a284fc33c9a8389e37eb4cd47cf6034a6c94ea7cfd506ed50e508c00"} pod="openshift-machine-config-operator/machine-config-daemon-fflvd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 22 08:58:31 crc kubenswrapper[4853]: I1122 08:58:31.300600 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" containerName="machine-config-daemon" containerID="cri-o://7477ef73a284fc33c9a8389e37eb4cd47cf6034a6c94ea7cfd506ed50e508c00" 
gracePeriod=600 Nov 22 08:58:31 crc kubenswrapper[4853]: I1122 08:58:31.936133 4853 generic.go:334] "Generic (PLEG): container finished" podID="476c875a-2b87-419a-8042-0ba059620fd8" containerID="7477ef73a284fc33c9a8389e37eb4cd47cf6034a6c94ea7cfd506ed50e508c00" exitCode=0 Nov 22 08:58:31 crc kubenswrapper[4853]: I1122 08:58:31.936232 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" event={"ID":"476c875a-2b87-419a-8042-0ba059620fd8","Type":"ContainerDied","Data":"7477ef73a284fc33c9a8389e37eb4cd47cf6034a6c94ea7cfd506ed50e508c00"} Nov 22 08:58:31 crc kubenswrapper[4853]: I1122 08:58:31.936833 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" event={"ID":"476c875a-2b87-419a-8042-0ba059620fd8","Type":"ContainerStarted","Data":"bd95507de2f2b5abe45ab28fc6bce2e85c9617c68ccece2fdf55102971a72bd4"} Nov 22 08:58:31 crc kubenswrapper[4853]: I1122 08:58:31.936869 4853 scope.go:117] "RemoveContainer" containerID="83a21d68415e9dbecbd7f6196cc5c89a7a751677adfbe72e130436d21e3a1f1a" Nov 22 08:58:47 crc kubenswrapper[4853]: I1122 08:58:47.482387 4853 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 2.211561791s: [/var/lib/containers/storage/overlay/7bc8d85700bb7356ded5c83f408d9d25a0ab074c827ea9b97cff45bdae2c51b7/diff /var/log/pods/openstack_openstackclient_fa95ca8f-6cef-4cbc-bd08-f693a09770dc/openstackclient/0.log]; will not log again for this container unless duration exceeds 2s Nov 22 09:00:00 crc kubenswrapper[4853]: I1122 09:00:00.265316 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396700-4v5rc"] Nov 22 09:00:00 crc kubenswrapper[4853]: E1122 09:00:00.267708 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e9099a6-ee86-4e04-b3a7-04574b5e5c10" containerName="extract-utilities" Nov 22 09:00:00 crc kubenswrapper[4853]: I1122 09:00:00.267848 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e9099a6-ee86-4e04-b3a7-04574b5e5c10" containerName="extract-utilities" Nov 22 09:00:00 crc kubenswrapper[4853]: E1122 09:00:00.267978 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e586714-7e54-40d3-b951-f66b087ab4f3" containerName="registry-server" Nov 22 09:00:00 crc kubenswrapper[4853]: I1122 09:00:00.268056 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e586714-7e54-40d3-b951-f66b087ab4f3" containerName="registry-server" Nov 22 09:00:00 crc kubenswrapper[4853]: E1122 09:00:00.268133 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e586714-7e54-40d3-b951-f66b087ab4f3" containerName="extract-content" Nov 22 09:00:00 crc kubenswrapper[4853]: I1122 09:00:00.268251 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e586714-7e54-40d3-b951-f66b087ab4f3" containerName="extract-content" Nov 22 09:00:00 crc kubenswrapper[4853]: E1122 09:00:00.268348 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e9099a6-ee86-4e04-b3a7-04574b5e5c10" containerName="extract-content" Nov 22 09:00:00 crc kubenswrapper[4853]: I1122 09:00:00.268420 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e9099a6-ee86-4e04-b3a7-04574b5e5c10" containerName="extract-content" Nov 22 09:00:00 crc kubenswrapper[4853]: E1122 09:00:00.268506 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e586714-7e54-40d3-b951-f66b087ab4f3" containerName="extract-utilities" Nov 22 
09:00:00 crc kubenswrapper[4853]: I1122 09:00:00.268581 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e586714-7e54-40d3-b951-f66b087ab4f3" containerName="extract-utilities" Nov 22 09:00:00 crc kubenswrapper[4853]: E1122 09:00:00.268671 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e9099a6-ee86-4e04-b3a7-04574b5e5c10" containerName="registry-server" Nov 22 09:00:00 crc kubenswrapper[4853]: I1122 09:00:00.268746 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e9099a6-ee86-4e04-b3a7-04574b5e5c10" containerName="registry-server" Nov 22 09:00:00 crc kubenswrapper[4853]: I1122 09:00:00.269174 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e586714-7e54-40d3-b951-f66b087ab4f3" containerName="registry-server" Nov 22 09:00:00 crc kubenswrapper[4853]: I1122 09:00:00.269299 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e9099a6-ee86-4e04-b3a7-04574b5e5c10" containerName="registry-server" Nov 22 09:00:00 crc kubenswrapper[4853]: I1122 09:00:00.270402 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396700-4v5rc" Nov 22 09:00:00 crc kubenswrapper[4853]: I1122 09:00:00.282004 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396700-4v5rc"] Nov 22 09:00:00 crc kubenswrapper[4853]: I1122 09:00:00.282377 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 22 09:00:00 crc kubenswrapper[4853]: I1122 09:00:00.282377 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 22 09:00:00 crc kubenswrapper[4853]: I1122 09:00:00.385884 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d7be51e0-1894-4f0d-888a-d20e060b18a4-config-volume\") pod \"collect-profiles-29396700-4v5rc\" (UID: \"d7be51e0-1894-4f0d-888a-d20e060b18a4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396700-4v5rc" Nov 22 09:00:00 crc kubenswrapper[4853]: I1122 09:00:00.387044 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxf9n\" (UniqueName: \"kubernetes.io/projected/d7be51e0-1894-4f0d-888a-d20e060b18a4-kube-api-access-qxf9n\") pod \"collect-profiles-29396700-4v5rc\" (UID: \"d7be51e0-1894-4f0d-888a-d20e060b18a4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396700-4v5rc" Nov 22 09:00:00 crc kubenswrapper[4853]: I1122 09:00:00.387232 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d7be51e0-1894-4f0d-888a-d20e060b18a4-secret-volume\") pod \"collect-profiles-29396700-4v5rc\" (UID: \"d7be51e0-1894-4f0d-888a-d20e060b18a4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396700-4v5rc" Nov 22 09:00:00 crc kubenswrapper[4853]: I1122 09:00:00.490105 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d7be51e0-1894-4f0d-888a-d20e060b18a4-config-volume\") pod \"collect-profiles-29396700-4v5rc\" (UID: \"d7be51e0-1894-4f0d-888a-d20e060b18a4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396700-4v5rc" Nov 
22 09:00:00 crc kubenswrapper[4853]: I1122 09:00:00.490202 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qxf9n\" (UniqueName: \"kubernetes.io/projected/d7be51e0-1894-4f0d-888a-d20e060b18a4-kube-api-access-qxf9n\") pod \"collect-profiles-29396700-4v5rc\" (UID: \"d7be51e0-1894-4f0d-888a-d20e060b18a4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396700-4v5rc" Nov 22 09:00:00 crc kubenswrapper[4853]: I1122 09:00:00.490985 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d7be51e0-1894-4f0d-888a-d20e060b18a4-config-volume\") pod \"collect-profiles-29396700-4v5rc\" (UID: \"d7be51e0-1894-4f0d-888a-d20e060b18a4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396700-4v5rc" Nov 22 09:00:00 crc kubenswrapper[4853]: I1122 09:00:00.490321 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d7be51e0-1894-4f0d-888a-d20e060b18a4-secret-volume\") pod \"collect-profiles-29396700-4v5rc\" (UID: \"d7be51e0-1894-4f0d-888a-d20e060b18a4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396700-4v5rc" Nov 22 09:00:00 crc kubenswrapper[4853]: I1122 09:00:00.503881 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d7be51e0-1894-4f0d-888a-d20e060b18a4-secret-volume\") pod \"collect-profiles-29396700-4v5rc\" (UID: \"d7be51e0-1894-4f0d-888a-d20e060b18a4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396700-4v5rc" Nov 22 09:00:00 crc kubenswrapper[4853]: I1122 09:00:00.508331 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qxf9n\" (UniqueName: \"kubernetes.io/projected/d7be51e0-1894-4f0d-888a-d20e060b18a4-kube-api-access-qxf9n\") pod \"collect-profiles-29396700-4v5rc\" (UID: \"d7be51e0-1894-4f0d-888a-d20e060b18a4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396700-4v5rc" Nov 22 09:00:00 crc kubenswrapper[4853]: I1122 09:00:00.605552 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396700-4v5rc" Nov 22 09:00:01 crc kubenswrapper[4853]: I1122 09:00:01.068551 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396700-4v5rc"] Nov 22 09:00:02 crc kubenswrapper[4853]: I1122 09:00:02.003562 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29396700-4v5rc" event={"ID":"d7be51e0-1894-4f0d-888a-d20e060b18a4","Type":"ContainerStarted","Data":"b537109d6edcd6a3fd829173a5a43dbec752c66bacd2ffea6d419894a1ee490e"} Nov 22 09:00:02 crc kubenswrapper[4853]: I1122 09:00:02.003938 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29396700-4v5rc" event={"ID":"d7be51e0-1894-4f0d-888a-d20e060b18a4","Type":"ContainerStarted","Data":"9097fe59721f1f3203d3d555e1c5217eb01543e9464eef9fd1f406a450962915"} Nov 22 09:00:02 crc kubenswrapper[4853]: I1122 09:00:02.031220 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29396700-4v5rc" podStartSLOduration=2.031200512 podStartE2EDuration="2.031200512s" podCreationTimestamp="2025-11-22 09:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 09:00:02.020360041 +0000 UTC m=+6600.860982687" watchObservedRunningTime="2025-11-22 09:00:02.031200512 +0000 UTC m=+6600.871823138" Nov 22 09:00:04 crc kubenswrapper[4853]: I1122 09:00:04.056098 4853 generic.go:334] "Generic (PLEG): container finished" podID="d7be51e0-1894-4f0d-888a-d20e060b18a4" containerID="b537109d6edcd6a3fd829173a5a43dbec752c66bacd2ffea6d419894a1ee490e" exitCode=0 Nov 22 09:00:04 crc kubenswrapper[4853]: I1122 09:00:04.056546 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29396700-4v5rc" event={"ID":"d7be51e0-1894-4f0d-888a-d20e060b18a4","Type":"ContainerDied","Data":"b537109d6edcd6a3fd829173a5a43dbec752c66bacd2ffea6d419894a1ee490e"} Nov 22 09:00:06 crc kubenswrapper[4853]: I1122 09:00:05.932248 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396700-4v5rc" Nov 22 09:00:06 crc kubenswrapper[4853]: I1122 09:00:06.079905 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29396700-4v5rc" event={"ID":"d7be51e0-1894-4f0d-888a-d20e060b18a4","Type":"ContainerDied","Data":"9097fe59721f1f3203d3d555e1c5217eb01543e9464eef9fd1f406a450962915"} Nov 22 09:00:06 crc kubenswrapper[4853]: I1122 09:00:06.079942 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9097fe59721f1f3203d3d555e1c5217eb01543e9464eef9fd1f406a450962915" Nov 22 09:00:06 crc kubenswrapper[4853]: I1122 09:00:06.079997 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396700-4v5rc" Nov 22 09:00:06 crc kubenswrapper[4853]: I1122 09:00:06.119950 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d7be51e0-1894-4f0d-888a-d20e060b18a4-config-volume\") pod \"d7be51e0-1894-4f0d-888a-d20e060b18a4\" (UID: \"d7be51e0-1894-4f0d-888a-d20e060b18a4\") " Nov 22 09:00:06 crc kubenswrapper[4853]: I1122 09:00:06.120298 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qxf9n\" (UniqueName: \"kubernetes.io/projected/d7be51e0-1894-4f0d-888a-d20e060b18a4-kube-api-access-qxf9n\") pod \"d7be51e0-1894-4f0d-888a-d20e060b18a4\" (UID: \"d7be51e0-1894-4f0d-888a-d20e060b18a4\") " Nov 22 09:00:06 crc kubenswrapper[4853]: I1122 09:00:06.120435 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d7be51e0-1894-4f0d-888a-d20e060b18a4-secret-volume\") pod \"d7be51e0-1894-4f0d-888a-d20e060b18a4\" (UID: \"d7be51e0-1894-4f0d-888a-d20e060b18a4\") " Nov 22 09:00:06 crc kubenswrapper[4853]: I1122 09:00:06.120876 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7be51e0-1894-4f0d-888a-d20e060b18a4-config-volume" (OuterVolumeSpecName: "config-volume") pod "d7be51e0-1894-4f0d-888a-d20e060b18a4" (UID: "d7be51e0-1894-4f0d-888a-d20e060b18a4"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 09:00:06 crc kubenswrapper[4853]: I1122 09:00:06.121294 4853 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d7be51e0-1894-4f0d-888a-d20e060b18a4-config-volume\") on node \"crc\" DevicePath \"\"" Nov 22 09:00:06 crc kubenswrapper[4853]: I1122 09:00:06.127077 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7be51e0-1894-4f0d-888a-d20e060b18a4-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "d7be51e0-1894-4f0d-888a-d20e060b18a4" (UID: "d7be51e0-1894-4f0d-888a-d20e060b18a4"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:00:06 crc kubenswrapper[4853]: I1122 09:00:06.127412 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7be51e0-1894-4f0d-888a-d20e060b18a4-kube-api-access-qxf9n" (OuterVolumeSpecName: "kube-api-access-qxf9n") pod "d7be51e0-1894-4f0d-888a-d20e060b18a4" (UID: "d7be51e0-1894-4f0d-888a-d20e060b18a4"). InnerVolumeSpecName "kube-api-access-qxf9n". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:00:06 crc kubenswrapper[4853]: I1122 09:00:06.223212 4853 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d7be51e0-1894-4f0d-888a-d20e060b18a4-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 22 09:00:06 crc kubenswrapper[4853]: I1122 09:00:06.223238 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qxf9n\" (UniqueName: \"kubernetes.io/projected/d7be51e0-1894-4f0d-888a-d20e060b18a4-kube-api-access-qxf9n\") on node \"crc\" DevicePath \"\"" Nov 22 09:00:06 crc kubenswrapper[4853]: I1122 09:00:06.410395 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396655-5bddx"] Nov 22 09:00:06 crc kubenswrapper[4853]: I1122 09:00:06.421783 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396655-5bddx"] Nov 22 09:00:07 crc kubenswrapper[4853]: I1122 09:00:07.762328 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b338e654-d135-4701-86a0-7d543b9fed30" path="/var/lib/kubelet/pods/b338e654-d135-4701-86a0-7d543b9fed30/volumes" Nov 22 09:00:08 crc kubenswrapper[4853]: I1122 09:00:08.562547 4853 scope.go:117] "RemoveContainer" containerID="3f2780b0f8a4b86b22ec0161a79ffb691be2e6637cd1ceeb6719bd311eb7f6a7" Nov 22 09:00:31 crc kubenswrapper[4853]: I1122 09:00:31.297698 4853 patch_prober.go:28] interesting pod/machine-config-daemon-fflvd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 09:00:31 crc kubenswrapper[4853]: I1122 09:00:31.298140 4853 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 09:01:00 crc kubenswrapper[4853]: I1122 09:01:00.195332 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29396701-4ldcl"] Nov 22 09:01:00 crc kubenswrapper[4853]: E1122 09:01:00.196561 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7be51e0-1894-4f0d-888a-d20e060b18a4" containerName="collect-profiles" Nov 22 09:01:00 crc kubenswrapper[4853]: I1122 09:01:00.196583 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7be51e0-1894-4f0d-888a-d20e060b18a4" containerName="collect-profiles" Nov 22 09:01:00 crc kubenswrapper[4853]: I1122 09:01:00.196959 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7be51e0-1894-4f0d-888a-d20e060b18a4" containerName="collect-profiles" Nov 22 09:01:00 crc kubenswrapper[4853]: I1122 09:01:00.198024 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29396701-4ldcl" Nov 22 09:01:00 crc kubenswrapper[4853]: I1122 09:01:00.222315 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29396701-4ldcl"] Nov 22 09:01:00 crc kubenswrapper[4853]: I1122 09:01:00.251278 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7e0bbfc-0e09-4e3c-b337-df9e727db1db-combined-ca-bundle\") pod \"keystone-cron-29396701-4ldcl\" (UID: \"b7e0bbfc-0e09-4e3c-b337-df9e727db1db\") " pod="openstack/keystone-cron-29396701-4ldcl" Nov 22 09:01:00 crc kubenswrapper[4853]: I1122 09:01:00.251375 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58lx6\" (UniqueName: \"kubernetes.io/projected/b7e0bbfc-0e09-4e3c-b337-df9e727db1db-kube-api-access-58lx6\") pod \"keystone-cron-29396701-4ldcl\" (UID: \"b7e0bbfc-0e09-4e3c-b337-df9e727db1db\") " pod="openstack/keystone-cron-29396701-4ldcl" Nov 22 09:01:00 crc kubenswrapper[4853]: I1122 09:01:00.251510 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b7e0bbfc-0e09-4e3c-b337-df9e727db1db-fernet-keys\") pod \"keystone-cron-29396701-4ldcl\" (UID: \"b7e0bbfc-0e09-4e3c-b337-df9e727db1db\") " pod="openstack/keystone-cron-29396701-4ldcl" Nov 22 09:01:00 crc kubenswrapper[4853]: I1122 09:01:00.251541 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b7e0bbfc-0e09-4e3c-b337-df9e727db1db-config-data\") pod \"keystone-cron-29396701-4ldcl\" (UID: \"b7e0bbfc-0e09-4e3c-b337-df9e727db1db\") " pod="openstack/keystone-cron-29396701-4ldcl" Nov 22 09:01:00 crc kubenswrapper[4853]: I1122 09:01:00.355038 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-58lx6\" (UniqueName: \"kubernetes.io/projected/b7e0bbfc-0e09-4e3c-b337-df9e727db1db-kube-api-access-58lx6\") pod \"keystone-cron-29396701-4ldcl\" (UID: \"b7e0bbfc-0e09-4e3c-b337-df9e727db1db\") " pod="openstack/keystone-cron-29396701-4ldcl" Nov 22 09:01:00 crc kubenswrapper[4853]: I1122 09:01:00.355533 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b7e0bbfc-0e09-4e3c-b337-df9e727db1db-fernet-keys\") pod \"keystone-cron-29396701-4ldcl\" (UID: \"b7e0bbfc-0e09-4e3c-b337-df9e727db1db\") " pod="openstack/keystone-cron-29396701-4ldcl" Nov 22 09:01:00 crc kubenswrapper[4853]: I1122 09:01:00.355617 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b7e0bbfc-0e09-4e3c-b337-df9e727db1db-config-data\") pod \"keystone-cron-29396701-4ldcl\" (UID: \"b7e0bbfc-0e09-4e3c-b337-df9e727db1db\") " pod="openstack/keystone-cron-29396701-4ldcl" Nov 22 09:01:00 crc kubenswrapper[4853]: I1122 09:01:00.355812 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7e0bbfc-0e09-4e3c-b337-df9e727db1db-combined-ca-bundle\") pod \"keystone-cron-29396701-4ldcl\" (UID: \"b7e0bbfc-0e09-4e3c-b337-df9e727db1db\") " pod="openstack/keystone-cron-29396701-4ldcl" Nov 22 09:01:00 crc kubenswrapper[4853]: I1122 09:01:00.365237 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b7e0bbfc-0e09-4e3c-b337-df9e727db1db-fernet-keys\") pod \"keystone-cron-29396701-4ldcl\" (UID: \"b7e0bbfc-0e09-4e3c-b337-df9e727db1db\") " pod="openstack/keystone-cron-29396701-4ldcl" Nov 22 09:01:00 crc kubenswrapper[4853]: I1122 09:01:00.365333 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b7e0bbfc-0e09-4e3c-b337-df9e727db1db-config-data\") pod \"keystone-cron-29396701-4ldcl\" (UID: \"b7e0bbfc-0e09-4e3c-b337-df9e727db1db\") " pod="openstack/keystone-cron-29396701-4ldcl" Nov 22 09:01:00 crc kubenswrapper[4853]: I1122 09:01:00.370877 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7e0bbfc-0e09-4e3c-b337-df9e727db1db-combined-ca-bundle\") pod \"keystone-cron-29396701-4ldcl\" (UID: \"b7e0bbfc-0e09-4e3c-b337-df9e727db1db\") " pod="openstack/keystone-cron-29396701-4ldcl" Nov 22 09:01:00 crc kubenswrapper[4853]: I1122 09:01:00.379862 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-58lx6\" (UniqueName: \"kubernetes.io/projected/b7e0bbfc-0e09-4e3c-b337-df9e727db1db-kube-api-access-58lx6\") pod \"keystone-cron-29396701-4ldcl\" (UID: \"b7e0bbfc-0e09-4e3c-b337-df9e727db1db\") " pod="openstack/keystone-cron-29396701-4ldcl" Nov 22 09:01:00 crc kubenswrapper[4853]: I1122 09:01:00.519957 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29396701-4ldcl" Nov 22 09:01:00 crc kubenswrapper[4853]: I1122 09:01:00.987391 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29396701-4ldcl"] Nov 22 09:01:01 crc kubenswrapper[4853]: I1122 09:01:01.297883 4853 patch_prober.go:28] interesting pod/machine-config-daemon-fflvd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 09:01:01 crc kubenswrapper[4853]: I1122 09:01:01.298513 4853 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 09:01:01 crc kubenswrapper[4853]: I1122 09:01:01.674483 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29396701-4ldcl" event={"ID":"b7e0bbfc-0e09-4e3c-b337-df9e727db1db","Type":"ContainerStarted","Data":"ff3260e9e9bd316acc78f468b1fe56b069d8212cd15bed33e41b2e24d5b5451c"} Nov 22 09:01:01 crc kubenswrapper[4853]: I1122 09:01:01.674878 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29396701-4ldcl" event={"ID":"b7e0bbfc-0e09-4e3c-b337-df9e727db1db","Type":"ContainerStarted","Data":"1195fffa078cf01bc12596bb7dc7b7219d8ec341c7d72e5199f4178ef7264de6"} Nov 22 09:01:01 crc kubenswrapper[4853]: I1122 09:01:01.706670 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29396701-4ldcl" podStartSLOduration=1.706626073 podStartE2EDuration="1.706626073s" podCreationTimestamp="2025-11-22 09:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-11-22 09:01:01.695151684 +0000 UTC m=+6660.535774320" watchObservedRunningTime="2025-11-22 09:01:01.706626073 +0000 UTC m=+6660.547248709" Nov 22 09:01:07 crc kubenswrapper[4853]: I1122 09:01:07.738480 4853 generic.go:334] "Generic (PLEG): container finished" podID="b7e0bbfc-0e09-4e3c-b337-df9e727db1db" containerID="ff3260e9e9bd316acc78f468b1fe56b069d8212cd15bed33e41b2e24d5b5451c" exitCode=0 Nov 22 09:01:07 crc kubenswrapper[4853]: I1122 09:01:07.738584 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29396701-4ldcl" event={"ID":"b7e0bbfc-0e09-4e3c-b337-df9e727db1db","Type":"ContainerDied","Data":"ff3260e9e9bd316acc78f468b1fe56b069d8212cd15bed33e41b2e24d5b5451c"} Nov 22 09:01:09 crc kubenswrapper[4853]: I1122 09:01:09.144238 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29396701-4ldcl" Nov 22 09:01:09 crc kubenswrapper[4853]: I1122 09:01:09.178365 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b7e0bbfc-0e09-4e3c-b337-df9e727db1db-config-data\") pod \"b7e0bbfc-0e09-4e3c-b337-df9e727db1db\" (UID: \"b7e0bbfc-0e09-4e3c-b337-df9e727db1db\") " Nov 22 09:01:09 crc kubenswrapper[4853]: I1122 09:01:09.178499 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7e0bbfc-0e09-4e3c-b337-df9e727db1db-combined-ca-bundle\") pod \"b7e0bbfc-0e09-4e3c-b337-df9e727db1db\" (UID: \"b7e0bbfc-0e09-4e3c-b337-df9e727db1db\") " Nov 22 09:01:09 crc kubenswrapper[4853]: I1122 09:01:09.178848 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b7e0bbfc-0e09-4e3c-b337-df9e727db1db-fernet-keys\") pod \"b7e0bbfc-0e09-4e3c-b337-df9e727db1db\" (UID: \"b7e0bbfc-0e09-4e3c-b337-df9e727db1db\") " Nov 22 09:01:09 crc kubenswrapper[4853]: I1122 09:01:09.179036 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-58lx6\" (UniqueName: \"kubernetes.io/projected/b7e0bbfc-0e09-4e3c-b337-df9e727db1db-kube-api-access-58lx6\") pod \"b7e0bbfc-0e09-4e3c-b337-df9e727db1db\" (UID: \"b7e0bbfc-0e09-4e3c-b337-df9e727db1db\") " Nov 22 09:01:09 crc kubenswrapper[4853]: I1122 09:01:09.185355 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b7e0bbfc-0e09-4e3c-b337-df9e727db1db-kube-api-access-58lx6" (OuterVolumeSpecName: "kube-api-access-58lx6") pod "b7e0bbfc-0e09-4e3c-b337-df9e727db1db" (UID: "b7e0bbfc-0e09-4e3c-b337-df9e727db1db"). InnerVolumeSpecName "kube-api-access-58lx6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:01:09 crc kubenswrapper[4853]: I1122 09:01:09.185667 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7e0bbfc-0e09-4e3c-b337-df9e727db1db-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "b7e0bbfc-0e09-4e3c-b337-df9e727db1db" (UID: "b7e0bbfc-0e09-4e3c-b337-df9e727db1db"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:01:09 crc kubenswrapper[4853]: I1122 09:01:09.240232 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7e0bbfc-0e09-4e3c-b337-df9e727db1db-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b7e0bbfc-0e09-4e3c-b337-df9e727db1db" (UID: "b7e0bbfc-0e09-4e3c-b337-df9e727db1db"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:01:09 crc kubenswrapper[4853]: I1122 09:01:09.271922 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7e0bbfc-0e09-4e3c-b337-df9e727db1db-config-data" (OuterVolumeSpecName: "config-data") pod "b7e0bbfc-0e09-4e3c-b337-df9e727db1db" (UID: "b7e0bbfc-0e09-4e3c-b337-df9e727db1db"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:01:09 crc kubenswrapper[4853]: I1122 09:01:09.281242 4853 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b7e0bbfc-0e09-4e3c-b337-df9e727db1db-fernet-keys\") on node \"crc\" DevicePath \"\"" Nov 22 09:01:09 crc kubenswrapper[4853]: I1122 09:01:09.281274 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-58lx6\" (UniqueName: \"kubernetes.io/projected/b7e0bbfc-0e09-4e3c-b337-df9e727db1db-kube-api-access-58lx6\") on node \"crc\" DevicePath \"\"" Nov 22 09:01:09 crc kubenswrapper[4853]: I1122 09:01:09.281285 4853 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b7e0bbfc-0e09-4e3c-b337-df9e727db1db-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 09:01:09 crc kubenswrapper[4853]: I1122 09:01:09.281294 4853 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7e0bbfc-0e09-4e3c-b337-df9e727db1db-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 09:01:09 crc kubenswrapper[4853]: I1122 09:01:09.760832 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29396701-4ldcl" Nov 22 09:01:09 crc kubenswrapper[4853]: I1122 09:01:09.762517 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29396701-4ldcl" event={"ID":"b7e0bbfc-0e09-4e3c-b337-df9e727db1db","Type":"ContainerDied","Data":"1195fffa078cf01bc12596bb7dc7b7219d8ec341c7d72e5199f4178ef7264de6"} Nov 22 09:01:09 crc kubenswrapper[4853]: I1122 09:01:09.762573 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1195fffa078cf01bc12596bb7dc7b7219d8ec341c7d72e5199f4178ef7264de6" Nov 22 09:01:31 crc kubenswrapper[4853]: I1122 09:01:31.297740 4853 patch_prober.go:28] interesting pod/machine-config-daemon-fflvd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 09:01:31 crc kubenswrapper[4853]: I1122 09:01:31.298314 4853 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 09:01:31 crc kubenswrapper[4853]: I1122 09:01:31.298373 4853 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" Nov 22 09:01:31 crc kubenswrapper[4853]: I1122 09:01:31.299383 4853 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"bd95507de2f2b5abe45ab28fc6bce2e85c9617c68ccece2fdf55102971a72bd4"} pod="openshift-machine-config-operator/machine-config-daemon-fflvd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 22 09:01:31 crc kubenswrapper[4853]: I1122 09:01:31.299452 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" containerName="machine-config-daemon" containerID="cri-o://bd95507de2f2b5abe45ab28fc6bce2e85c9617c68ccece2fdf55102971a72bd4" gracePeriod=600 Nov 22 09:01:31 crc kubenswrapper[4853]: E1122 09:01:31.947826 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" Nov 22 09:01:32 crc kubenswrapper[4853]: I1122 09:01:32.007425 4853 generic.go:334] "Generic (PLEG): container finished" podID="476c875a-2b87-419a-8042-0ba059620fd8" containerID="bd95507de2f2b5abe45ab28fc6bce2e85c9617c68ccece2fdf55102971a72bd4" exitCode=0 Nov 22 09:01:32 crc kubenswrapper[4853]: I1122 09:01:32.007475 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" event={"ID":"476c875a-2b87-419a-8042-0ba059620fd8","Type":"ContainerDied","Data":"bd95507de2f2b5abe45ab28fc6bce2e85c9617c68ccece2fdf55102971a72bd4"} Nov 22 09:01:32 crc kubenswrapper[4853]: I1122 09:01:32.007520 4853 scope.go:117] 
"RemoveContainer" containerID="7477ef73a284fc33c9a8389e37eb4cd47cf6034a6c94ea7cfd506ed50e508c00" Nov 22 09:01:32 crc kubenswrapper[4853]: I1122 09:01:32.008497 4853 scope.go:117] "RemoveContainer" containerID="bd95507de2f2b5abe45ab28fc6bce2e85c9617c68ccece2fdf55102971a72bd4" Nov 22 09:01:32 crc kubenswrapper[4853]: E1122 09:01:32.008981 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" Nov 22 09:01:45 crc kubenswrapper[4853]: I1122 09:01:45.755960 4853 scope.go:117] "RemoveContainer" containerID="bd95507de2f2b5abe45ab28fc6bce2e85c9617c68ccece2fdf55102971a72bd4" Nov 22 09:01:45 crc kubenswrapper[4853]: E1122 09:01:45.757822 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" Nov 22 09:02:00 crc kubenswrapper[4853]: I1122 09:02:00.747498 4853 scope.go:117] "RemoveContainer" containerID="bd95507de2f2b5abe45ab28fc6bce2e85c9617c68ccece2fdf55102971a72bd4" Nov 22 09:02:00 crc kubenswrapper[4853]: E1122 09:02:00.748388 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" Nov 22 09:02:15 crc kubenswrapper[4853]: I1122 09:02:15.757249 4853 scope.go:117] "RemoveContainer" containerID="bd95507de2f2b5abe45ab28fc6bce2e85c9617c68ccece2fdf55102971a72bd4" Nov 22 09:02:15 crc kubenswrapper[4853]: E1122 09:02:15.759001 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" Nov 22 09:02:30 crc kubenswrapper[4853]: I1122 09:02:30.748305 4853 scope.go:117] "RemoveContainer" containerID="bd95507de2f2b5abe45ab28fc6bce2e85c9617c68ccece2fdf55102971a72bd4" Nov 22 09:02:30 crc kubenswrapper[4853]: E1122 09:02:30.749355 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" Nov 22 09:02:45 crc kubenswrapper[4853]: I1122 09:02:45.757401 4853 scope.go:117] 
"RemoveContainer" containerID="bd95507de2f2b5abe45ab28fc6bce2e85c9617c68ccece2fdf55102971a72bd4" Nov 22 09:02:45 crc kubenswrapper[4853]: E1122 09:02:45.758255 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" Nov 22 09:02:57 crc kubenswrapper[4853]: I1122 09:02:57.747787 4853 scope.go:117] "RemoveContainer" containerID="bd95507de2f2b5abe45ab28fc6bce2e85c9617c68ccece2fdf55102971a72bd4" Nov 22 09:02:57 crc kubenswrapper[4853]: E1122 09:02:57.748556 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" Nov 22 09:03:12 crc kubenswrapper[4853]: I1122 09:03:12.749073 4853 scope.go:117] "RemoveContainer" containerID="bd95507de2f2b5abe45ab28fc6bce2e85c9617c68ccece2fdf55102971a72bd4" Nov 22 09:03:12 crc kubenswrapper[4853]: E1122 09:03:12.750287 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" Nov 22 09:03:24 crc kubenswrapper[4853]: I1122 09:03:24.748230 4853 scope.go:117] "RemoveContainer" containerID="bd95507de2f2b5abe45ab28fc6bce2e85c9617c68ccece2fdf55102971a72bd4" Nov 22 09:03:24 crc kubenswrapper[4853]: E1122 09:03:24.749369 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" Nov 22 09:03:36 crc kubenswrapper[4853]: I1122 09:03:36.748458 4853 scope.go:117] "RemoveContainer" containerID="bd95507de2f2b5abe45ab28fc6bce2e85c9617c68ccece2fdf55102971a72bd4" Nov 22 09:03:36 crc kubenswrapper[4853]: E1122 09:03:36.749311 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" Nov 22 09:03:48 crc kubenswrapper[4853]: I1122 09:03:48.748277 4853 scope.go:117] "RemoveContainer" containerID="bd95507de2f2b5abe45ab28fc6bce2e85c9617c68ccece2fdf55102971a72bd4" Nov 22 09:03:48 crc kubenswrapper[4853]: E1122 09:03:48.749110 4853 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" Nov 22 09:03:57 crc kubenswrapper[4853]: I1122 09:03:57.319593 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-q77zn"] Nov 22 09:03:57 crc kubenswrapper[4853]: E1122 09:03:57.320734 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7e0bbfc-0e09-4e3c-b337-df9e727db1db" containerName="keystone-cron" Nov 22 09:03:57 crc kubenswrapper[4853]: I1122 09:03:57.320784 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7e0bbfc-0e09-4e3c-b337-df9e727db1db" containerName="keystone-cron" Nov 22 09:03:57 crc kubenswrapper[4853]: I1122 09:03:57.321098 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="b7e0bbfc-0e09-4e3c-b337-df9e727db1db" containerName="keystone-cron" Nov 22 09:03:57 crc kubenswrapper[4853]: I1122 09:03:57.322894 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-q77zn" Nov 22 09:03:57 crc kubenswrapper[4853]: I1122 09:03:57.331202 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-q77zn"] Nov 22 09:03:57 crc kubenswrapper[4853]: I1122 09:03:57.380815 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8tk5r\" (UniqueName: \"kubernetes.io/projected/1487ee14-a6e3-45e0-8d4d-91ca273ec943-kube-api-access-8tk5r\") pod \"certified-operators-q77zn\" (UID: \"1487ee14-a6e3-45e0-8d4d-91ca273ec943\") " pod="openshift-marketplace/certified-operators-q77zn" Nov 22 09:03:57 crc kubenswrapper[4853]: I1122 09:03:57.381024 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1487ee14-a6e3-45e0-8d4d-91ca273ec943-catalog-content\") pod \"certified-operators-q77zn\" (UID: \"1487ee14-a6e3-45e0-8d4d-91ca273ec943\") " pod="openshift-marketplace/certified-operators-q77zn" Nov 22 09:03:57 crc kubenswrapper[4853]: I1122 09:03:57.381064 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1487ee14-a6e3-45e0-8d4d-91ca273ec943-utilities\") pod \"certified-operators-q77zn\" (UID: \"1487ee14-a6e3-45e0-8d4d-91ca273ec943\") " pod="openshift-marketplace/certified-operators-q77zn" Nov 22 09:03:57 crc kubenswrapper[4853]: I1122 09:03:57.483213 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8tk5r\" (UniqueName: \"kubernetes.io/projected/1487ee14-a6e3-45e0-8d4d-91ca273ec943-kube-api-access-8tk5r\") pod \"certified-operators-q77zn\" (UID: \"1487ee14-a6e3-45e0-8d4d-91ca273ec943\") " pod="openshift-marketplace/certified-operators-q77zn" Nov 22 09:03:57 crc kubenswrapper[4853]: I1122 09:03:57.483377 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1487ee14-a6e3-45e0-8d4d-91ca273ec943-catalog-content\") pod \"certified-operators-q77zn\" (UID: \"1487ee14-a6e3-45e0-8d4d-91ca273ec943\") " 
pod="openshift-marketplace/certified-operators-q77zn" Nov 22 09:03:57 crc kubenswrapper[4853]: I1122 09:03:57.483410 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1487ee14-a6e3-45e0-8d4d-91ca273ec943-utilities\") pod \"certified-operators-q77zn\" (UID: \"1487ee14-a6e3-45e0-8d4d-91ca273ec943\") " pod="openshift-marketplace/certified-operators-q77zn" Nov 22 09:03:57 crc kubenswrapper[4853]: I1122 09:03:57.483996 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1487ee14-a6e3-45e0-8d4d-91ca273ec943-utilities\") pod \"certified-operators-q77zn\" (UID: \"1487ee14-a6e3-45e0-8d4d-91ca273ec943\") " pod="openshift-marketplace/certified-operators-q77zn" Nov 22 09:03:57 crc kubenswrapper[4853]: I1122 09:03:57.484053 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1487ee14-a6e3-45e0-8d4d-91ca273ec943-catalog-content\") pod \"certified-operators-q77zn\" (UID: \"1487ee14-a6e3-45e0-8d4d-91ca273ec943\") " pod="openshift-marketplace/certified-operators-q77zn" Nov 22 09:03:57 crc kubenswrapper[4853]: I1122 09:03:57.502892 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8tk5r\" (UniqueName: \"kubernetes.io/projected/1487ee14-a6e3-45e0-8d4d-91ca273ec943-kube-api-access-8tk5r\") pod \"certified-operators-q77zn\" (UID: \"1487ee14-a6e3-45e0-8d4d-91ca273ec943\") " pod="openshift-marketplace/certified-operators-q77zn" Nov 22 09:03:57 crc kubenswrapper[4853]: I1122 09:03:57.642936 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-q77zn" Nov 22 09:03:58 crc kubenswrapper[4853]: I1122 09:03:58.252563 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-q77zn"] Nov 22 09:03:58 crc kubenswrapper[4853]: I1122 09:03:58.615627 4853 generic.go:334] "Generic (PLEG): container finished" podID="1487ee14-a6e3-45e0-8d4d-91ca273ec943" containerID="ac0d8ec12a26d131ef9b8d69f4fad4b50347153a33f8d99cd2209c21b14e3354" exitCode=0 Nov 22 09:03:58 crc kubenswrapper[4853]: I1122 09:03:58.615682 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q77zn" event={"ID":"1487ee14-a6e3-45e0-8d4d-91ca273ec943","Type":"ContainerDied","Data":"ac0d8ec12a26d131ef9b8d69f4fad4b50347153a33f8d99cd2209c21b14e3354"} Nov 22 09:03:58 crc kubenswrapper[4853]: I1122 09:03:58.615711 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q77zn" event={"ID":"1487ee14-a6e3-45e0-8d4d-91ca273ec943","Type":"ContainerStarted","Data":"f9d6c40d16dea58a7aa1855cbdac52d0603b626cbaa3f94f6edb33e1e30d066b"} Nov 22 09:03:58 crc kubenswrapper[4853]: I1122 09:03:58.617897 4853 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 22 09:03:59 crc kubenswrapper[4853]: I1122 09:03:59.629025 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q77zn" event={"ID":"1487ee14-a6e3-45e0-8d4d-91ca273ec943","Type":"ContainerStarted","Data":"b73cb761d45b9d122ac636f5937e21c8b7358cb4c6ead7c87db0c0ce8f4e4be5"} Nov 22 09:04:00 crc kubenswrapper[4853]: I1122 09:04:00.749040 4853 scope.go:117] "RemoveContainer" containerID="bd95507de2f2b5abe45ab28fc6bce2e85c9617c68ccece2fdf55102971a72bd4" Nov 22 
09:04:00 crc kubenswrapper[4853]: E1122 09:04:00.749517 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" Nov 22 09:04:01 crc kubenswrapper[4853]: I1122 09:04:01.656609 4853 generic.go:334] "Generic (PLEG): container finished" podID="1487ee14-a6e3-45e0-8d4d-91ca273ec943" containerID="b73cb761d45b9d122ac636f5937e21c8b7358cb4c6ead7c87db0c0ce8f4e4be5" exitCode=0 Nov 22 09:04:01 crc kubenswrapper[4853]: I1122 09:04:01.656695 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q77zn" event={"ID":"1487ee14-a6e3-45e0-8d4d-91ca273ec943","Type":"ContainerDied","Data":"b73cb761d45b9d122ac636f5937e21c8b7358cb4c6ead7c87db0c0ce8f4e4be5"} Nov 22 09:04:02 crc kubenswrapper[4853]: I1122 09:04:02.670028 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q77zn" event={"ID":"1487ee14-a6e3-45e0-8d4d-91ca273ec943","Type":"ContainerStarted","Data":"43865ec7ee7beacfbb1a0f57f8dfdd6f9bcc1842faba42daa831e1133deb1411"} Nov 22 09:04:02 crc kubenswrapper[4853]: I1122 09:04:02.699440 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-q77zn" podStartSLOduration=2.2387707040000002 podStartE2EDuration="5.699411793s" podCreationTimestamp="2025-11-22 09:03:57 +0000 UTC" firstStartedPulling="2025-11-22 09:03:58.617408016 +0000 UTC m=+6837.458030642" lastFinishedPulling="2025-11-22 09:04:02.078049105 +0000 UTC m=+6840.918671731" observedRunningTime="2025-11-22 09:04:02.688924301 +0000 UTC m=+6841.529546947" watchObservedRunningTime="2025-11-22 09:04:02.699411793 +0000 UTC m=+6841.540034439" Nov 22 09:04:07 crc kubenswrapper[4853]: I1122 09:04:07.644184 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-q77zn" Nov 22 09:04:07 crc kubenswrapper[4853]: I1122 09:04:07.644840 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-q77zn" Nov 22 09:04:08 crc kubenswrapper[4853]: I1122 09:04:08.698008 4853 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-q77zn" podUID="1487ee14-a6e3-45e0-8d4d-91ca273ec943" containerName="registry-server" probeResult="failure" output=< Nov 22 09:04:08 crc kubenswrapper[4853]: timeout: failed to connect service ":50051" within 1s Nov 22 09:04:08 crc kubenswrapper[4853]: > Nov 22 09:04:12 crc kubenswrapper[4853]: I1122 09:04:12.748100 4853 scope.go:117] "RemoveContainer" containerID="bd95507de2f2b5abe45ab28fc6bce2e85c9617c68ccece2fdf55102971a72bd4" Nov 22 09:04:12 crc kubenswrapper[4853]: E1122 09:04:12.748961 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" Nov 22 09:04:17 crc kubenswrapper[4853]: I1122 
09:04:17.692984 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-q77zn" Nov 22 09:04:17 crc kubenswrapper[4853]: I1122 09:04:17.747287 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-q77zn" Nov 22 09:04:17 crc kubenswrapper[4853]: I1122 09:04:17.939369 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-q77zn"] Nov 22 09:04:18 crc kubenswrapper[4853]: I1122 09:04:18.850323 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-q77zn" podUID="1487ee14-a6e3-45e0-8d4d-91ca273ec943" containerName="registry-server" containerID="cri-o://43865ec7ee7beacfbb1a0f57f8dfdd6f9bcc1842faba42daa831e1133deb1411" gracePeriod=2 Nov 22 09:04:19 crc kubenswrapper[4853]: I1122 09:04:19.389644 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-q77zn" Nov 22 09:04:19 crc kubenswrapper[4853]: I1122 09:04:19.449157 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1487ee14-a6e3-45e0-8d4d-91ca273ec943-catalog-content\") pod \"1487ee14-a6e3-45e0-8d4d-91ca273ec943\" (UID: \"1487ee14-a6e3-45e0-8d4d-91ca273ec943\") " Nov 22 09:04:19 crc kubenswrapper[4853]: I1122 09:04:19.449226 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1487ee14-a6e3-45e0-8d4d-91ca273ec943-utilities\") pod \"1487ee14-a6e3-45e0-8d4d-91ca273ec943\" (UID: \"1487ee14-a6e3-45e0-8d4d-91ca273ec943\") " Nov 22 09:04:19 crc kubenswrapper[4853]: I1122 09:04:19.449378 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tk5r\" (UniqueName: \"kubernetes.io/projected/1487ee14-a6e3-45e0-8d4d-91ca273ec943-kube-api-access-8tk5r\") pod \"1487ee14-a6e3-45e0-8d4d-91ca273ec943\" (UID: \"1487ee14-a6e3-45e0-8d4d-91ca273ec943\") " Nov 22 09:04:19 crc kubenswrapper[4853]: I1122 09:04:19.453270 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1487ee14-a6e3-45e0-8d4d-91ca273ec943-utilities" (OuterVolumeSpecName: "utilities") pod "1487ee14-a6e3-45e0-8d4d-91ca273ec943" (UID: "1487ee14-a6e3-45e0-8d4d-91ca273ec943"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 09:04:19 crc kubenswrapper[4853]: I1122 09:04:19.461422 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1487ee14-a6e3-45e0-8d4d-91ca273ec943-kube-api-access-8tk5r" (OuterVolumeSpecName: "kube-api-access-8tk5r") pod "1487ee14-a6e3-45e0-8d4d-91ca273ec943" (UID: "1487ee14-a6e3-45e0-8d4d-91ca273ec943"). InnerVolumeSpecName "kube-api-access-8tk5r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:04:19 crc kubenswrapper[4853]: I1122 09:04:19.552467 4853 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1487ee14-a6e3-45e0-8d4d-91ca273ec943-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 09:04:19 crc kubenswrapper[4853]: I1122 09:04:19.552499 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tk5r\" (UniqueName: \"kubernetes.io/projected/1487ee14-a6e3-45e0-8d4d-91ca273ec943-kube-api-access-8tk5r\") on node \"crc\" DevicePath \"\"" Nov 22 09:04:19 crc kubenswrapper[4853]: I1122 09:04:19.628972 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1487ee14-a6e3-45e0-8d4d-91ca273ec943-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1487ee14-a6e3-45e0-8d4d-91ca273ec943" (UID: "1487ee14-a6e3-45e0-8d4d-91ca273ec943"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 09:04:19 crc kubenswrapper[4853]: I1122 09:04:19.655320 4853 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1487ee14-a6e3-45e0-8d4d-91ca273ec943-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 09:04:19 crc kubenswrapper[4853]: I1122 09:04:19.871662 4853 generic.go:334] "Generic (PLEG): container finished" podID="1487ee14-a6e3-45e0-8d4d-91ca273ec943" containerID="43865ec7ee7beacfbb1a0f57f8dfdd6f9bcc1842faba42daa831e1133deb1411" exitCode=0 Nov 22 09:04:19 crc kubenswrapper[4853]: I1122 09:04:19.871715 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-q77zn" Nov 22 09:04:19 crc kubenswrapper[4853]: I1122 09:04:19.872268 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q77zn" event={"ID":"1487ee14-a6e3-45e0-8d4d-91ca273ec943","Type":"ContainerDied","Data":"43865ec7ee7beacfbb1a0f57f8dfdd6f9bcc1842faba42daa831e1133deb1411"} Nov 22 09:04:19 crc kubenswrapper[4853]: I1122 09:04:19.872352 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q77zn" event={"ID":"1487ee14-a6e3-45e0-8d4d-91ca273ec943","Type":"ContainerDied","Data":"f9d6c40d16dea58a7aa1855cbdac52d0603b626cbaa3f94f6edb33e1e30d066b"} Nov 22 09:04:19 crc kubenswrapper[4853]: I1122 09:04:19.872416 4853 scope.go:117] "RemoveContainer" containerID="43865ec7ee7beacfbb1a0f57f8dfdd6f9bcc1842faba42daa831e1133deb1411" Nov 22 09:04:19 crc kubenswrapper[4853]: I1122 09:04:19.902152 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-q77zn"] Nov 22 09:04:19 crc kubenswrapper[4853]: I1122 09:04:19.911076 4853 scope.go:117] "RemoveContainer" containerID="b73cb761d45b9d122ac636f5937e21c8b7358cb4c6ead7c87db0c0ce8f4e4be5" Nov 22 09:04:19 crc kubenswrapper[4853]: I1122 09:04:19.912777 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-q77zn"] Nov 22 09:04:19 crc kubenswrapper[4853]: I1122 09:04:19.934113 4853 scope.go:117] "RemoveContainer" containerID="ac0d8ec12a26d131ef9b8d69f4fad4b50347153a33f8d99cd2209c21b14e3354" Nov 22 09:04:20 crc kubenswrapper[4853]: I1122 09:04:20.004248 4853 scope.go:117] "RemoveContainer" containerID="43865ec7ee7beacfbb1a0f57f8dfdd6f9bcc1842faba42daa831e1133deb1411" Nov 22 09:04:20 crc kubenswrapper[4853]: E1122 09:04:20.005228 4853 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"43865ec7ee7beacfbb1a0f57f8dfdd6f9bcc1842faba42daa831e1133deb1411\": container with ID starting with 43865ec7ee7beacfbb1a0f57f8dfdd6f9bcc1842faba42daa831e1133deb1411 not found: ID does not exist" containerID="43865ec7ee7beacfbb1a0f57f8dfdd6f9bcc1842faba42daa831e1133deb1411" Nov 22 09:04:20 crc kubenswrapper[4853]: I1122 09:04:20.005280 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"43865ec7ee7beacfbb1a0f57f8dfdd6f9bcc1842faba42daa831e1133deb1411"} err="failed to get container status \"43865ec7ee7beacfbb1a0f57f8dfdd6f9bcc1842faba42daa831e1133deb1411\": rpc error: code = NotFound desc = could not find container \"43865ec7ee7beacfbb1a0f57f8dfdd6f9bcc1842faba42daa831e1133deb1411\": container with ID starting with 43865ec7ee7beacfbb1a0f57f8dfdd6f9bcc1842faba42daa831e1133deb1411 not found: ID does not exist" Nov 22 09:04:20 crc kubenswrapper[4853]: I1122 09:04:20.005309 4853 scope.go:117] "RemoveContainer" containerID="b73cb761d45b9d122ac636f5937e21c8b7358cb4c6ead7c87db0c0ce8f4e4be5" Nov 22 09:04:20 crc kubenswrapper[4853]: E1122 09:04:20.005663 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b73cb761d45b9d122ac636f5937e21c8b7358cb4c6ead7c87db0c0ce8f4e4be5\": container with ID starting with b73cb761d45b9d122ac636f5937e21c8b7358cb4c6ead7c87db0c0ce8f4e4be5 not found: ID does not exist" containerID="b73cb761d45b9d122ac636f5937e21c8b7358cb4c6ead7c87db0c0ce8f4e4be5" Nov 22 09:04:20 crc kubenswrapper[4853]: I1122 09:04:20.005721 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b73cb761d45b9d122ac636f5937e21c8b7358cb4c6ead7c87db0c0ce8f4e4be5"} err="failed to get container status \"b73cb761d45b9d122ac636f5937e21c8b7358cb4c6ead7c87db0c0ce8f4e4be5\": rpc error: code = NotFound desc = could not find container \"b73cb761d45b9d122ac636f5937e21c8b7358cb4c6ead7c87db0c0ce8f4e4be5\": container with ID starting with b73cb761d45b9d122ac636f5937e21c8b7358cb4c6ead7c87db0c0ce8f4e4be5 not found: ID does not exist" Nov 22 09:04:20 crc kubenswrapper[4853]: I1122 09:04:20.005778 4853 scope.go:117] "RemoveContainer" containerID="ac0d8ec12a26d131ef9b8d69f4fad4b50347153a33f8d99cd2209c21b14e3354" Nov 22 09:04:20 crc kubenswrapper[4853]: E1122 09:04:20.006198 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ac0d8ec12a26d131ef9b8d69f4fad4b50347153a33f8d99cd2209c21b14e3354\": container with ID starting with ac0d8ec12a26d131ef9b8d69f4fad4b50347153a33f8d99cd2209c21b14e3354 not found: ID does not exist" containerID="ac0d8ec12a26d131ef9b8d69f4fad4b50347153a33f8d99cd2209c21b14e3354" Nov 22 09:04:20 crc kubenswrapper[4853]: I1122 09:04:20.006232 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ac0d8ec12a26d131ef9b8d69f4fad4b50347153a33f8d99cd2209c21b14e3354"} err="failed to get container status \"ac0d8ec12a26d131ef9b8d69f4fad4b50347153a33f8d99cd2209c21b14e3354\": rpc error: code = NotFound desc = could not find container \"ac0d8ec12a26d131ef9b8d69f4fad4b50347153a33f8d99cd2209c21b14e3354\": container with ID starting with ac0d8ec12a26d131ef9b8d69f4fad4b50347153a33f8d99cd2209c21b14e3354 not found: ID does not exist" Nov 22 09:04:21 crc kubenswrapper[4853]: I1122 09:04:21.760990 4853 kubelet_volumes.go:163] "Cleaned 
up orphaned pod volumes dir" podUID="1487ee14-a6e3-45e0-8d4d-91ca273ec943" path="/var/lib/kubelet/pods/1487ee14-a6e3-45e0-8d4d-91ca273ec943/volumes" Nov 22 09:04:24 crc kubenswrapper[4853]: I1122 09:04:24.748576 4853 scope.go:117] "RemoveContainer" containerID="bd95507de2f2b5abe45ab28fc6bce2e85c9617c68ccece2fdf55102971a72bd4" Nov 22 09:04:24 crc kubenswrapper[4853]: E1122 09:04:24.749635 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" Nov 22 09:04:39 crc kubenswrapper[4853]: I1122 09:04:39.748634 4853 scope.go:117] "RemoveContainer" containerID="bd95507de2f2b5abe45ab28fc6bce2e85c9617c68ccece2fdf55102971a72bd4" Nov 22 09:04:39 crc kubenswrapper[4853]: E1122 09:04:39.749486 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" Nov 22 09:04:54 crc kubenswrapper[4853]: I1122 09:04:54.748448 4853 scope.go:117] "RemoveContainer" containerID="bd95507de2f2b5abe45ab28fc6bce2e85c9617c68ccece2fdf55102971a72bd4" Nov 22 09:04:54 crc kubenswrapper[4853]: E1122 09:04:54.749266 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" Nov 22 09:05:08 crc kubenswrapper[4853]: I1122 09:05:08.748975 4853 scope.go:117] "RemoveContainer" containerID="bd95507de2f2b5abe45ab28fc6bce2e85c9617c68ccece2fdf55102971a72bd4" Nov 22 09:05:08 crc kubenswrapper[4853]: E1122 09:05:08.749971 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" Nov 22 09:05:22 crc kubenswrapper[4853]: I1122 09:05:22.748676 4853 scope.go:117] "RemoveContainer" containerID="bd95507de2f2b5abe45ab28fc6bce2e85c9617c68ccece2fdf55102971a72bd4" Nov 22 09:05:22 crc kubenswrapper[4853]: E1122 09:05:22.749682 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" Nov 22 09:05:33 crc kubenswrapper[4853]: 
I1122 09:05:33.747816 4853 scope.go:117] "RemoveContainer" containerID="bd95507de2f2b5abe45ab28fc6bce2e85c9617c68ccece2fdf55102971a72bd4" Nov 22 09:05:33 crc kubenswrapper[4853]: E1122 09:05:33.748564 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" Nov 22 09:05:46 crc kubenswrapper[4853]: I1122 09:05:46.748236 4853 scope.go:117] "RemoveContainer" containerID="bd95507de2f2b5abe45ab28fc6bce2e85c9617c68ccece2fdf55102971a72bd4" Nov 22 09:05:46 crc kubenswrapper[4853]: E1122 09:05:46.749377 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" Nov 22 09:06:01 crc kubenswrapper[4853]: I1122 09:06:01.747855 4853 scope.go:117] "RemoveContainer" containerID="bd95507de2f2b5abe45ab28fc6bce2e85c9617c68ccece2fdf55102971a72bd4" Nov 22 09:06:01 crc kubenswrapper[4853]: E1122 09:06:01.748796 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" Nov 22 09:06:16 crc kubenswrapper[4853]: I1122 09:06:16.749405 4853 scope.go:117] "RemoveContainer" containerID="bd95507de2f2b5abe45ab28fc6bce2e85c9617c68ccece2fdf55102971a72bd4" Nov 22 09:06:16 crc kubenswrapper[4853]: E1122 09:06:16.750311 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" Nov 22 09:06:28 crc kubenswrapper[4853]: I1122 09:06:28.748644 4853 scope.go:117] "RemoveContainer" containerID="bd95507de2f2b5abe45ab28fc6bce2e85c9617c68ccece2fdf55102971a72bd4" Nov 22 09:06:28 crc kubenswrapper[4853]: E1122 09:06:28.749819 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" Nov 22 09:06:40 crc kubenswrapper[4853]: I1122 09:06:40.748538 4853 scope.go:117] "RemoveContainer" containerID="bd95507de2f2b5abe45ab28fc6bce2e85c9617c68ccece2fdf55102971a72bd4" Nov 22 09:06:41 crc kubenswrapper[4853]: I1122 
09:06:41.484561 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" event={"ID":"476c875a-2b87-419a-8042-0ba059620fd8","Type":"ContainerStarted","Data":"b56f5f8bbee1802342bd2faf1f016affa55e29963b282748e5c6267465ea9957"} Nov 22 09:07:17 crc kubenswrapper[4853]: I1122 09:07:17.720965 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-lpbwr"] Nov 22 09:07:17 crc kubenswrapper[4853]: E1122 09:07:17.722203 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1487ee14-a6e3-45e0-8d4d-91ca273ec943" containerName="extract-utilities" Nov 22 09:07:17 crc kubenswrapper[4853]: I1122 09:07:17.722224 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="1487ee14-a6e3-45e0-8d4d-91ca273ec943" containerName="extract-utilities" Nov 22 09:07:17 crc kubenswrapper[4853]: E1122 09:07:17.722248 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1487ee14-a6e3-45e0-8d4d-91ca273ec943" containerName="extract-content" Nov 22 09:07:17 crc kubenswrapper[4853]: I1122 09:07:17.722256 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="1487ee14-a6e3-45e0-8d4d-91ca273ec943" containerName="extract-content" Nov 22 09:07:17 crc kubenswrapper[4853]: E1122 09:07:17.722272 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1487ee14-a6e3-45e0-8d4d-91ca273ec943" containerName="registry-server" Nov 22 09:07:17 crc kubenswrapper[4853]: I1122 09:07:17.722280 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="1487ee14-a6e3-45e0-8d4d-91ca273ec943" containerName="registry-server" Nov 22 09:07:17 crc kubenswrapper[4853]: I1122 09:07:17.722614 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="1487ee14-a6e3-45e0-8d4d-91ca273ec943" containerName="registry-server" Nov 22 09:07:17 crc kubenswrapper[4853]: I1122 09:07:17.726643 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-lpbwr" Nov 22 09:07:17 crc kubenswrapper[4853]: I1122 09:07:17.733431 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lpbwr"] Nov 22 09:07:17 crc kubenswrapper[4853]: I1122 09:07:17.889841 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c416f397-780e-4c64-8e84-0745d1d6ec4c-catalog-content\") pod \"redhat-operators-lpbwr\" (UID: \"c416f397-780e-4c64-8e84-0745d1d6ec4c\") " pod="openshift-marketplace/redhat-operators-lpbwr" Nov 22 09:07:17 crc kubenswrapper[4853]: I1122 09:07:17.890726 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c416f397-780e-4c64-8e84-0745d1d6ec4c-utilities\") pod \"redhat-operators-lpbwr\" (UID: \"c416f397-780e-4c64-8e84-0745d1d6ec4c\") " pod="openshift-marketplace/redhat-operators-lpbwr" Nov 22 09:07:17 crc kubenswrapper[4853]: I1122 09:07:17.890983 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rp4tl\" (UniqueName: \"kubernetes.io/projected/c416f397-780e-4c64-8e84-0745d1d6ec4c-kube-api-access-rp4tl\") pod \"redhat-operators-lpbwr\" (UID: \"c416f397-780e-4c64-8e84-0745d1d6ec4c\") " pod="openshift-marketplace/redhat-operators-lpbwr" Nov 22 09:07:17 crc kubenswrapper[4853]: I1122 09:07:17.994906 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c416f397-780e-4c64-8e84-0745d1d6ec4c-catalog-content\") pod \"redhat-operators-lpbwr\" (UID: \"c416f397-780e-4c64-8e84-0745d1d6ec4c\") " pod="openshift-marketplace/redhat-operators-lpbwr" Nov 22 09:07:17 crc kubenswrapper[4853]: I1122 09:07:17.994972 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c416f397-780e-4c64-8e84-0745d1d6ec4c-utilities\") pod \"redhat-operators-lpbwr\" (UID: \"c416f397-780e-4c64-8e84-0745d1d6ec4c\") " pod="openshift-marketplace/redhat-operators-lpbwr" Nov 22 09:07:17 crc kubenswrapper[4853]: I1122 09:07:17.995103 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rp4tl\" (UniqueName: \"kubernetes.io/projected/c416f397-780e-4c64-8e84-0745d1d6ec4c-kube-api-access-rp4tl\") pod \"redhat-operators-lpbwr\" (UID: \"c416f397-780e-4c64-8e84-0745d1d6ec4c\") " pod="openshift-marketplace/redhat-operators-lpbwr" Nov 22 09:07:17 crc kubenswrapper[4853]: I1122 09:07:17.995476 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c416f397-780e-4c64-8e84-0745d1d6ec4c-catalog-content\") pod \"redhat-operators-lpbwr\" (UID: \"c416f397-780e-4c64-8e84-0745d1d6ec4c\") " pod="openshift-marketplace/redhat-operators-lpbwr" Nov 22 09:07:17 crc kubenswrapper[4853]: I1122 09:07:17.995499 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c416f397-780e-4c64-8e84-0745d1d6ec4c-utilities\") pod \"redhat-operators-lpbwr\" (UID: \"c416f397-780e-4c64-8e84-0745d1d6ec4c\") " pod="openshift-marketplace/redhat-operators-lpbwr" Nov 22 09:07:18 crc kubenswrapper[4853]: I1122 09:07:18.030895 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-rp4tl\" (UniqueName: \"kubernetes.io/projected/c416f397-780e-4c64-8e84-0745d1d6ec4c-kube-api-access-rp4tl\") pod \"redhat-operators-lpbwr\" (UID: \"c416f397-780e-4c64-8e84-0745d1d6ec4c\") " pod="openshift-marketplace/redhat-operators-lpbwr" Nov 22 09:07:18 crc kubenswrapper[4853]: I1122 09:07:18.048790 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lpbwr" Nov 22 09:07:18 crc kubenswrapper[4853]: I1122 09:07:18.577926 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lpbwr"] Nov 22 09:07:18 crc kubenswrapper[4853]: I1122 09:07:18.889723 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lpbwr" event={"ID":"c416f397-780e-4c64-8e84-0745d1d6ec4c","Type":"ContainerStarted","Data":"c48b3a4d37869e97d96977f44939ead5160adab4e24d341c449ae4ad6b3d9457"} Nov 22 09:07:18 crc kubenswrapper[4853]: I1122 09:07:18.890270 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lpbwr" event={"ID":"c416f397-780e-4c64-8e84-0745d1d6ec4c","Type":"ContainerStarted","Data":"780a22c2f96822f3eeab499f7625dd0037ca4257e22cf967329a02c4977a8b62"} Nov 22 09:07:19 crc kubenswrapper[4853]: I1122 09:07:19.904812 4853 generic.go:334] "Generic (PLEG): container finished" podID="c416f397-780e-4c64-8e84-0745d1d6ec4c" containerID="c48b3a4d37869e97d96977f44939ead5160adab4e24d341c449ae4ad6b3d9457" exitCode=0 Nov 22 09:07:19 crc kubenswrapper[4853]: I1122 09:07:19.904882 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lpbwr" event={"ID":"c416f397-780e-4c64-8e84-0745d1d6ec4c","Type":"ContainerDied","Data":"c48b3a4d37869e97d96977f44939ead5160adab4e24d341c449ae4ad6b3d9457"} Nov 22 09:07:21 crc kubenswrapper[4853]: I1122 09:07:21.932222 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lpbwr" event={"ID":"c416f397-780e-4c64-8e84-0745d1d6ec4c","Type":"ContainerStarted","Data":"ab8888d08688c17d787ddaa6154ff81621e44813f3367bd480331f8c55f97ba5"} Nov 22 09:07:32 crc kubenswrapper[4853]: I1122 09:07:32.047352 4853 generic.go:334] "Generic (PLEG): container finished" podID="c416f397-780e-4c64-8e84-0745d1d6ec4c" containerID="ab8888d08688c17d787ddaa6154ff81621e44813f3367bd480331f8c55f97ba5" exitCode=0 Nov 22 09:07:32 crc kubenswrapper[4853]: I1122 09:07:32.047663 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lpbwr" event={"ID":"c416f397-780e-4c64-8e84-0745d1d6ec4c","Type":"ContainerDied","Data":"ab8888d08688c17d787ddaa6154ff81621e44813f3367bd480331f8c55f97ba5"} Nov 22 09:07:34 crc kubenswrapper[4853]: I1122 09:07:34.074394 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lpbwr" event={"ID":"c416f397-780e-4c64-8e84-0745d1d6ec4c","Type":"ContainerStarted","Data":"c9d52530e66f83d235e347229d6b997604efba916d02888fda138ca9af4886dc"} Nov 22 09:07:34 crc kubenswrapper[4853]: I1122 09:07:34.108319 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-lpbwr" podStartSLOduration=3.859473455 podStartE2EDuration="17.108260107s" podCreationTimestamp="2025-11-22 09:07:17 +0000 UTC" firstStartedPulling="2025-11-22 09:07:19.907936203 +0000 UTC m=+7038.748558829" lastFinishedPulling="2025-11-22 09:07:33.156722855 +0000 UTC m=+7051.997345481" observedRunningTime="2025-11-22 
09:07:34.096894032 +0000 UTC m=+7052.937516658" watchObservedRunningTime="2025-11-22 09:07:34.108260107 +0000 UTC m=+7052.948882733" Nov 22 09:07:34 crc kubenswrapper[4853]: I1122 09:07:34.498703 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-r2bx5"] Nov 22 09:07:34 crc kubenswrapper[4853]: I1122 09:07:34.501292 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-r2bx5" Nov 22 09:07:34 crc kubenswrapper[4853]: I1122 09:07:34.514515 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-r2bx5"] Nov 22 09:07:34 crc kubenswrapper[4853]: I1122 09:07:34.600034 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9ndhw\" (UniqueName: \"kubernetes.io/projected/6ab098d0-da7a-4cd4-82c8-f3f6c7e81033-kube-api-access-9ndhw\") pod \"redhat-marketplace-r2bx5\" (UID: \"6ab098d0-da7a-4cd4-82c8-f3f6c7e81033\") " pod="openshift-marketplace/redhat-marketplace-r2bx5" Nov 22 09:07:34 crc kubenswrapper[4853]: I1122 09:07:34.600347 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ab098d0-da7a-4cd4-82c8-f3f6c7e81033-catalog-content\") pod \"redhat-marketplace-r2bx5\" (UID: \"6ab098d0-da7a-4cd4-82c8-f3f6c7e81033\") " pod="openshift-marketplace/redhat-marketplace-r2bx5" Nov 22 09:07:34 crc kubenswrapper[4853]: I1122 09:07:34.600627 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ab098d0-da7a-4cd4-82c8-f3f6c7e81033-utilities\") pod \"redhat-marketplace-r2bx5\" (UID: \"6ab098d0-da7a-4cd4-82c8-f3f6c7e81033\") " pod="openshift-marketplace/redhat-marketplace-r2bx5" Nov 22 09:07:34 crc kubenswrapper[4853]: I1122 09:07:34.702635 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ab098d0-da7a-4cd4-82c8-f3f6c7e81033-catalog-content\") pod \"redhat-marketplace-r2bx5\" (UID: \"6ab098d0-da7a-4cd4-82c8-f3f6c7e81033\") " pod="openshift-marketplace/redhat-marketplace-r2bx5" Nov 22 09:07:34 crc kubenswrapper[4853]: I1122 09:07:34.702835 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ab098d0-da7a-4cd4-82c8-f3f6c7e81033-utilities\") pod \"redhat-marketplace-r2bx5\" (UID: \"6ab098d0-da7a-4cd4-82c8-f3f6c7e81033\") " pod="openshift-marketplace/redhat-marketplace-r2bx5" Nov 22 09:07:34 crc kubenswrapper[4853]: I1122 09:07:34.702910 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9ndhw\" (UniqueName: \"kubernetes.io/projected/6ab098d0-da7a-4cd4-82c8-f3f6c7e81033-kube-api-access-9ndhw\") pod \"redhat-marketplace-r2bx5\" (UID: \"6ab098d0-da7a-4cd4-82c8-f3f6c7e81033\") " pod="openshift-marketplace/redhat-marketplace-r2bx5" Nov 22 09:07:34 crc kubenswrapper[4853]: I1122 09:07:34.703410 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ab098d0-da7a-4cd4-82c8-f3f6c7e81033-catalog-content\") pod \"redhat-marketplace-r2bx5\" (UID: \"6ab098d0-da7a-4cd4-82c8-f3f6c7e81033\") " pod="openshift-marketplace/redhat-marketplace-r2bx5" Nov 22 09:07:34 crc kubenswrapper[4853]: I1122 09:07:34.703701 4853 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ab098d0-da7a-4cd4-82c8-f3f6c7e81033-utilities\") pod \"redhat-marketplace-r2bx5\" (UID: \"6ab098d0-da7a-4cd4-82c8-f3f6c7e81033\") " pod="openshift-marketplace/redhat-marketplace-r2bx5" Nov 22 09:07:34 crc kubenswrapper[4853]: I1122 09:07:34.772873 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9ndhw\" (UniqueName: \"kubernetes.io/projected/6ab098d0-da7a-4cd4-82c8-f3f6c7e81033-kube-api-access-9ndhw\") pod \"redhat-marketplace-r2bx5\" (UID: \"6ab098d0-da7a-4cd4-82c8-f3f6c7e81033\") " pod="openshift-marketplace/redhat-marketplace-r2bx5" Nov 22 09:07:34 crc kubenswrapper[4853]: I1122 09:07:34.821474 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-r2bx5" Nov 22 09:07:35 crc kubenswrapper[4853]: I1122 09:07:35.861475 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-r2bx5"] Nov 22 09:07:36 crc kubenswrapper[4853]: I1122 09:07:36.096213 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r2bx5" event={"ID":"6ab098d0-da7a-4cd4-82c8-f3f6c7e81033","Type":"ContainerStarted","Data":"989b3b8882dd29ddda03f8339132e1b8dfbdbd7048fba937f6cbc7e7a10626eb"} Nov 22 09:07:37 crc kubenswrapper[4853]: I1122 09:07:37.108888 4853 generic.go:334] "Generic (PLEG): container finished" podID="6ab098d0-da7a-4cd4-82c8-f3f6c7e81033" containerID="c3eb1e749059ab0ebcd97a22735b50ebc63f1e3655c44d20a0885b6a23cb6cab" exitCode=0 Nov 22 09:07:37 crc kubenswrapper[4853]: I1122 09:07:37.108956 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r2bx5" event={"ID":"6ab098d0-da7a-4cd4-82c8-f3f6c7e81033","Type":"ContainerDied","Data":"c3eb1e749059ab0ebcd97a22735b50ebc63f1e3655c44d20a0885b6a23cb6cab"} Nov 22 09:07:38 crc kubenswrapper[4853]: I1122 09:07:38.049970 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-lpbwr" Nov 22 09:07:38 crc kubenswrapper[4853]: I1122 09:07:38.050314 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-lpbwr" Nov 22 09:07:39 crc kubenswrapper[4853]: I1122 09:07:39.110524 4853 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-lpbwr" podUID="c416f397-780e-4c64-8e84-0745d1d6ec4c" containerName="registry-server" probeResult="failure" output=< Nov 22 09:07:39 crc kubenswrapper[4853]: timeout: failed to connect service ":50051" within 1s Nov 22 09:07:39 crc kubenswrapper[4853]: > Nov 22 09:07:39 crc kubenswrapper[4853]: I1122 09:07:39.132615 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r2bx5" event={"ID":"6ab098d0-da7a-4cd4-82c8-f3f6c7e81033","Type":"ContainerStarted","Data":"baf7f274b8e963d123b4f872e3148ce894997363936a4476c313954156ba3dff"} Nov 22 09:07:41 crc kubenswrapper[4853]: I1122 09:07:41.153316 4853 generic.go:334] "Generic (PLEG): container finished" podID="6ab098d0-da7a-4cd4-82c8-f3f6c7e81033" containerID="baf7f274b8e963d123b4f872e3148ce894997363936a4476c313954156ba3dff" exitCode=0 Nov 22 09:07:41 crc kubenswrapper[4853]: I1122 09:07:41.153505 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r2bx5" 
event={"ID":"6ab098d0-da7a-4cd4-82c8-f3f6c7e81033","Type":"ContainerDied","Data":"baf7f274b8e963d123b4f872e3148ce894997363936a4476c313954156ba3dff"} Nov 22 09:07:42 crc kubenswrapper[4853]: I1122 09:07:42.167307 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r2bx5" event={"ID":"6ab098d0-da7a-4cd4-82c8-f3f6c7e81033","Type":"ContainerStarted","Data":"b0925d68c91582877ba2504c18e40d8b5119d5a1933b6acdde9ede33db7652e7"} Nov 22 09:07:42 crc kubenswrapper[4853]: I1122 09:07:42.193297 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-r2bx5" podStartSLOduration=3.72015695 podStartE2EDuration="8.193274384s" podCreationTimestamp="2025-11-22 09:07:34 +0000 UTC" firstStartedPulling="2025-11-22 09:07:37.110585121 +0000 UTC m=+7055.951207747" lastFinishedPulling="2025-11-22 09:07:41.583702555 +0000 UTC m=+7060.424325181" observedRunningTime="2025-11-22 09:07:42.185102044 +0000 UTC m=+7061.025724680" watchObservedRunningTime="2025-11-22 09:07:42.193274384 +0000 UTC m=+7061.033897020" Nov 22 09:07:44 crc kubenswrapper[4853]: I1122 09:07:44.822897 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-r2bx5" Nov 22 09:07:44 crc kubenswrapper[4853]: I1122 09:07:44.823541 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-r2bx5" Nov 22 09:07:45 crc kubenswrapper[4853]: I1122 09:07:45.880153 4853 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-r2bx5" podUID="6ab098d0-da7a-4cd4-82c8-f3f6c7e81033" containerName="registry-server" probeResult="failure" output=< Nov 22 09:07:45 crc kubenswrapper[4853]: timeout: failed to connect service ":50051" within 1s Nov 22 09:07:45 crc kubenswrapper[4853]: > Nov 22 09:07:49 crc kubenswrapper[4853]: I1122 09:07:49.106359 4853 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-lpbwr" podUID="c416f397-780e-4c64-8e84-0745d1d6ec4c" containerName="registry-server" probeResult="failure" output=< Nov 22 09:07:49 crc kubenswrapper[4853]: timeout: failed to connect service ":50051" within 1s Nov 22 09:07:49 crc kubenswrapper[4853]: > Nov 22 09:07:54 crc kubenswrapper[4853]: I1122 09:07:54.869167 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-r2bx5" Nov 22 09:07:54 crc kubenswrapper[4853]: I1122 09:07:54.926977 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-r2bx5" Nov 22 09:07:55 crc kubenswrapper[4853]: I1122 09:07:55.109028 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-r2bx5"] Nov 22 09:07:56 crc kubenswrapper[4853]: I1122 09:07:56.308477 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-r2bx5" podUID="6ab098d0-da7a-4cd4-82c8-f3f6c7e81033" containerName="registry-server" containerID="cri-o://b0925d68c91582877ba2504c18e40d8b5119d5a1933b6acdde9ede33db7652e7" gracePeriod=2 Nov 22 09:07:56 crc kubenswrapper[4853]: I1122 09:07:56.886129 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-r2bx5" Nov 22 09:07:56 crc kubenswrapper[4853]: I1122 09:07:56.951904 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ab098d0-da7a-4cd4-82c8-f3f6c7e81033-catalog-content\") pod \"6ab098d0-da7a-4cd4-82c8-f3f6c7e81033\" (UID: \"6ab098d0-da7a-4cd4-82c8-f3f6c7e81033\") " Nov 22 09:07:56 crc kubenswrapper[4853]: I1122 09:07:56.952128 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9ndhw\" (UniqueName: \"kubernetes.io/projected/6ab098d0-da7a-4cd4-82c8-f3f6c7e81033-kube-api-access-9ndhw\") pod \"6ab098d0-da7a-4cd4-82c8-f3f6c7e81033\" (UID: \"6ab098d0-da7a-4cd4-82c8-f3f6c7e81033\") " Nov 22 09:07:56 crc kubenswrapper[4853]: I1122 09:07:56.952411 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ab098d0-da7a-4cd4-82c8-f3f6c7e81033-utilities\") pod \"6ab098d0-da7a-4cd4-82c8-f3f6c7e81033\" (UID: \"6ab098d0-da7a-4cd4-82c8-f3f6c7e81033\") " Nov 22 09:07:56 crc kubenswrapper[4853]: I1122 09:07:56.953028 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6ab098d0-da7a-4cd4-82c8-f3f6c7e81033-utilities" (OuterVolumeSpecName: "utilities") pod "6ab098d0-da7a-4cd4-82c8-f3f6c7e81033" (UID: "6ab098d0-da7a-4cd4-82c8-f3f6c7e81033"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 09:07:56 crc kubenswrapper[4853]: I1122 09:07:56.953566 4853 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ab098d0-da7a-4cd4-82c8-f3f6c7e81033-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 09:07:56 crc kubenswrapper[4853]: I1122 09:07:56.960488 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ab098d0-da7a-4cd4-82c8-f3f6c7e81033-kube-api-access-9ndhw" (OuterVolumeSpecName: "kube-api-access-9ndhw") pod "6ab098d0-da7a-4cd4-82c8-f3f6c7e81033" (UID: "6ab098d0-da7a-4cd4-82c8-f3f6c7e81033"). InnerVolumeSpecName "kube-api-access-9ndhw". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:07:56 crc kubenswrapper[4853]: I1122 09:07:56.972731 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6ab098d0-da7a-4cd4-82c8-f3f6c7e81033-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6ab098d0-da7a-4cd4-82c8-f3f6c7e81033" (UID: "6ab098d0-da7a-4cd4-82c8-f3f6c7e81033"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 09:07:57 crc kubenswrapper[4853]: I1122 09:07:57.056262 4853 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ab098d0-da7a-4cd4-82c8-f3f6c7e81033-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 09:07:57 crc kubenswrapper[4853]: I1122 09:07:57.056308 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9ndhw\" (UniqueName: \"kubernetes.io/projected/6ab098d0-da7a-4cd4-82c8-f3f6c7e81033-kube-api-access-9ndhw\") on node \"crc\" DevicePath \"\"" Nov 22 09:07:57 crc kubenswrapper[4853]: I1122 09:07:57.321641 4853 generic.go:334] "Generic (PLEG): container finished" podID="6ab098d0-da7a-4cd4-82c8-f3f6c7e81033" containerID="b0925d68c91582877ba2504c18e40d8b5119d5a1933b6acdde9ede33db7652e7" exitCode=0 Nov 22 09:07:57 crc kubenswrapper[4853]: I1122 09:07:57.321692 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-r2bx5" Nov 22 09:07:57 crc kubenswrapper[4853]: I1122 09:07:57.321693 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r2bx5" event={"ID":"6ab098d0-da7a-4cd4-82c8-f3f6c7e81033","Type":"ContainerDied","Data":"b0925d68c91582877ba2504c18e40d8b5119d5a1933b6acdde9ede33db7652e7"} Nov 22 09:07:57 crc kubenswrapper[4853]: I1122 09:07:57.322148 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r2bx5" event={"ID":"6ab098d0-da7a-4cd4-82c8-f3f6c7e81033","Type":"ContainerDied","Data":"989b3b8882dd29ddda03f8339132e1b8dfbdbd7048fba937f6cbc7e7a10626eb"} Nov 22 09:07:57 crc kubenswrapper[4853]: I1122 09:07:57.322188 4853 scope.go:117] "RemoveContainer" containerID="b0925d68c91582877ba2504c18e40d8b5119d5a1933b6acdde9ede33db7652e7" Nov 22 09:07:57 crc kubenswrapper[4853]: I1122 09:07:57.358032 4853 scope.go:117] "RemoveContainer" containerID="baf7f274b8e963d123b4f872e3148ce894997363936a4476c313954156ba3dff" Nov 22 09:07:57 crc kubenswrapper[4853]: I1122 09:07:57.368922 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-r2bx5"] Nov 22 09:07:57 crc kubenswrapper[4853]: I1122 09:07:57.382672 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-r2bx5"] Nov 22 09:07:57 crc kubenswrapper[4853]: I1122 09:07:57.398908 4853 scope.go:117] "RemoveContainer" containerID="c3eb1e749059ab0ebcd97a22735b50ebc63f1e3655c44d20a0885b6a23cb6cab" Nov 22 09:07:57 crc kubenswrapper[4853]: I1122 09:07:57.453492 4853 scope.go:117] "RemoveContainer" containerID="b0925d68c91582877ba2504c18e40d8b5119d5a1933b6acdde9ede33db7652e7" Nov 22 09:07:57 crc kubenswrapper[4853]: E1122 09:07:57.454222 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b0925d68c91582877ba2504c18e40d8b5119d5a1933b6acdde9ede33db7652e7\": container with ID starting with b0925d68c91582877ba2504c18e40d8b5119d5a1933b6acdde9ede33db7652e7 not found: ID does not exist" containerID="b0925d68c91582877ba2504c18e40d8b5119d5a1933b6acdde9ede33db7652e7" Nov 22 09:07:57 crc kubenswrapper[4853]: I1122 09:07:57.454361 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b0925d68c91582877ba2504c18e40d8b5119d5a1933b6acdde9ede33db7652e7"} err="failed to get container status 
\"b0925d68c91582877ba2504c18e40d8b5119d5a1933b6acdde9ede33db7652e7\": rpc error: code = NotFound desc = could not find container \"b0925d68c91582877ba2504c18e40d8b5119d5a1933b6acdde9ede33db7652e7\": container with ID starting with b0925d68c91582877ba2504c18e40d8b5119d5a1933b6acdde9ede33db7652e7 not found: ID does not exist" Nov 22 09:07:57 crc kubenswrapper[4853]: I1122 09:07:57.454462 4853 scope.go:117] "RemoveContainer" containerID="baf7f274b8e963d123b4f872e3148ce894997363936a4476c313954156ba3dff" Nov 22 09:07:57 crc kubenswrapper[4853]: E1122 09:07:57.455088 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"baf7f274b8e963d123b4f872e3148ce894997363936a4476c313954156ba3dff\": container with ID starting with baf7f274b8e963d123b4f872e3148ce894997363936a4476c313954156ba3dff not found: ID does not exist" containerID="baf7f274b8e963d123b4f872e3148ce894997363936a4476c313954156ba3dff" Nov 22 09:07:57 crc kubenswrapper[4853]: I1122 09:07:57.455138 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"baf7f274b8e963d123b4f872e3148ce894997363936a4476c313954156ba3dff"} err="failed to get container status \"baf7f274b8e963d123b4f872e3148ce894997363936a4476c313954156ba3dff\": rpc error: code = NotFound desc = could not find container \"baf7f274b8e963d123b4f872e3148ce894997363936a4476c313954156ba3dff\": container with ID starting with baf7f274b8e963d123b4f872e3148ce894997363936a4476c313954156ba3dff not found: ID does not exist" Nov 22 09:07:57 crc kubenswrapper[4853]: I1122 09:07:57.455167 4853 scope.go:117] "RemoveContainer" containerID="c3eb1e749059ab0ebcd97a22735b50ebc63f1e3655c44d20a0885b6a23cb6cab" Nov 22 09:07:57 crc kubenswrapper[4853]: E1122 09:07:57.455792 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c3eb1e749059ab0ebcd97a22735b50ebc63f1e3655c44d20a0885b6a23cb6cab\": container with ID starting with c3eb1e749059ab0ebcd97a22735b50ebc63f1e3655c44d20a0885b6a23cb6cab not found: ID does not exist" containerID="c3eb1e749059ab0ebcd97a22735b50ebc63f1e3655c44d20a0885b6a23cb6cab" Nov 22 09:07:57 crc kubenswrapper[4853]: I1122 09:07:57.455816 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c3eb1e749059ab0ebcd97a22735b50ebc63f1e3655c44d20a0885b6a23cb6cab"} err="failed to get container status \"c3eb1e749059ab0ebcd97a22735b50ebc63f1e3655c44d20a0885b6a23cb6cab\": rpc error: code = NotFound desc = could not find container \"c3eb1e749059ab0ebcd97a22735b50ebc63f1e3655c44d20a0885b6a23cb6cab\": container with ID starting with c3eb1e749059ab0ebcd97a22735b50ebc63f1e3655c44d20a0885b6a23cb6cab not found: ID does not exist" Nov 22 09:07:57 crc kubenswrapper[4853]: I1122 09:07:57.764587 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ab098d0-da7a-4cd4-82c8-f3f6c7e81033" path="/var/lib/kubelet/pods/6ab098d0-da7a-4cd4-82c8-f3f6c7e81033/volumes" Nov 22 09:07:59 crc kubenswrapper[4853]: I1122 09:07:59.106659 4853 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-lpbwr" podUID="c416f397-780e-4c64-8e84-0745d1d6ec4c" containerName="registry-server" probeResult="failure" output=< Nov 22 09:07:59 crc kubenswrapper[4853]: timeout: failed to connect service ":50051" within 1s Nov 22 09:07:59 crc kubenswrapper[4853]: > Nov 22 09:08:04 crc kubenswrapper[4853]: I1122 09:08:04.708452 4853 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-z4wqj"] Nov 22 09:08:04 crc kubenswrapper[4853]: E1122 09:08:04.709481 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ab098d0-da7a-4cd4-82c8-f3f6c7e81033" containerName="extract-content" Nov 22 09:08:04 crc kubenswrapper[4853]: I1122 09:08:04.709496 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ab098d0-da7a-4cd4-82c8-f3f6c7e81033" containerName="extract-content" Nov 22 09:08:04 crc kubenswrapper[4853]: E1122 09:08:04.709508 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ab098d0-da7a-4cd4-82c8-f3f6c7e81033" containerName="registry-server" Nov 22 09:08:04 crc kubenswrapper[4853]: I1122 09:08:04.709514 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ab098d0-da7a-4cd4-82c8-f3f6c7e81033" containerName="registry-server" Nov 22 09:08:04 crc kubenswrapper[4853]: E1122 09:08:04.709564 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ab098d0-da7a-4cd4-82c8-f3f6c7e81033" containerName="extract-utilities" Nov 22 09:08:04 crc kubenswrapper[4853]: I1122 09:08:04.709572 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ab098d0-da7a-4cd4-82c8-f3f6c7e81033" containerName="extract-utilities" Nov 22 09:08:04 crc kubenswrapper[4853]: I1122 09:08:04.709849 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ab098d0-da7a-4cd4-82c8-f3f6c7e81033" containerName="registry-server" Nov 22 09:08:04 crc kubenswrapper[4853]: I1122 09:08:04.711686 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-z4wqj" Nov 22 09:08:04 crc kubenswrapper[4853]: I1122 09:08:04.724694 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-z4wqj"] Nov 22 09:08:04 crc kubenswrapper[4853]: I1122 09:08:04.742119 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14c5a26e-16f8-403e-8b68-76f9937f5482-utilities\") pod \"community-operators-z4wqj\" (UID: \"14c5a26e-16f8-403e-8b68-76f9937f5482\") " pod="openshift-marketplace/community-operators-z4wqj" Nov 22 09:08:04 crc kubenswrapper[4853]: I1122 09:08:04.742390 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14c5a26e-16f8-403e-8b68-76f9937f5482-catalog-content\") pod \"community-operators-z4wqj\" (UID: \"14c5a26e-16f8-403e-8b68-76f9937f5482\") " pod="openshift-marketplace/community-operators-z4wqj" Nov 22 09:08:04 crc kubenswrapper[4853]: I1122 09:08:04.742544 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjxgf\" (UniqueName: \"kubernetes.io/projected/14c5a26e-16f8-403e-8b68-76f9937f5482-kube-api-access-sjxgf\") pod \"community-operators-z4wqj\" (UID: \"14c5a26e-16f8-403e-8b68-76f9937f5482\") " pod="openshift-marketplace/community-operators-z4wqj" Nov 22 09:08:04 crc kubenswrapper[4853]: I1122 09:08:04.845092 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sjxgf\" (UniqueName: \"kubernetes.io/projected/14c5a26e-16f8-403e-8b68-76f9937f5482-kube-api-access-sjxgf\") pod \"community-operators-z4wqj\" (UID: \"14c5a26e-16f8-403e-8b68-76f9937f5482\") " pod="openshift-marketplace/community-operators-z4wqj" Nov 22 09:08:04 crc kubenswrapper[4853]: I1122 
09:08:04.845176 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14c5a26e-16f8-403e-8b68-76f9937f5482-utilities\") pod \"community-operators-z4wqj\" (UID: \"14c5a26e-16f8-403e-8b68-76f9937f5482\") " pod="openshift-marketplace/community-operators-z4wqj" Nov 22 09:08:04 crc kubenswrapper[4853]: I1122 09:08:04.845332 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14c5a26e-16f8-403e-8b68-76f9937f5482-catalog-content\") pod \"community-operators-z4wqj\" (UID: \"14c5a26e-16f8-403e-8b68-76f9937f5482\") " pod="openshift-marketplace/community-operators-z4wqj" Nov 22 09:08:04 crc kubenswrapper[4853]: I1122 09:08:04.845898 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14c5a26e-16f8-403e-8b68-76f9937f5482-catalog-content\") pod \"community-operators-z4wqj\" (UID: \"14c5a26e-16f8-403e-8b68-76f9937f5482\") " pod="openshift-marketplace/community-operators-z4wqj" Nov 22 09:08:04 crc kubenswrapper[4853]: I1122 09:08:04.846304 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14c5a26e-16f8-403e-8b68-76f9937f5482-utilities\") pod \"community-operators-z4wqj\" (UID: \"14c5a26e-16f8-403e-8b68-76f9937f5482\") " pod="openshift-marketplace/community-operators-z4wqj" Nov 22 09:08:04 crc kubenswrapper[4853]: I1122 09:08:04.871214 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sjxgf\" (UniqueName: \"kubernetes.io/projected/14c5a26e-16f8-403e-8b68-76f9937f5482-kube-api-access-sjxgf\") pod \"community-operators-z4wqj\" (UID: \"14c5a26e-16f8-403e-8b68-76f9937f5482\") " pod="openshift-marketplace/community-operators-z4wqj" Nov 22 09:08:05 crc kubenswrapper[4853]: I1122 09:08:05.030879 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-z4wqj" Nov 22 09:08:05 crc kubenswrapper[4853]: I1122 09:08:05.615951 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-z4wqj"] Nov 22 09:08:06 crc kubenswrapper[4853]: I1122 09:08:06.433173 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z4wqj" event={"ID":"14c5a26e-16f8-403e-8b68-76f9937f5482","Type":"ContainerStarted","Data":"9ce07059dbd7e2d4898cf6c1ce06fcda96f8e284fc97932513b551c6585609a1"} Nov 22 09:08:07 crc kubenswrapper[4853]: I1122 09:08:07.451547 4853 generic.go:334] "Generic (PLEG): container finished" podID="14c5a26e-16f8-403e-8b68-76f9937f5482" containerID="abbfa284d3aede1caa006e706e4f5c28d68640a94f7360c2344e7259b453c67f" exitCode=0 Nov 22 09:08:07 crc kubenswrapper[4853]: I1122 09:08:07.451605 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z4wqj" event={"ID":"14c5a26e-16f8-403e-8b68-76f9937f5482","Type":"ContainerDied","Data":"abbfa284d3aede1caa006e706e4f5c28d68640a94f7360c2344e7259b453c67f"} Nov 22 09:08:09 crc kubenswrapper[4853]: I1122 09:08:09.261170 4853 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-lpbwr" podUID="c416f397-780e-4c64-8e84-0745d1d6ec4c" containerName="registry-server" probeResult="failure" output=< Nov 22 09:08:09 crc kubenswrapper[4853]: timeout: failed to connect service ":50051" within 1s Nov 22 09:08:09 crc kubenswrapper[4853]: > Nov 22 09:08:09 crc kubenswrapper[4853]: I1122 09:08:09.474200 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z4wqj" event={"ID":"14c5a26e-16f8-403e-8b68-76f9937f5482","Type":"ContainerStarted","Data":"8d3d87c318ac8a92b20a535421756fcbbf6ef6573fddeb3ee59389f3d78aba18"} Nov 22 09:08:15 crc kubenswrapper[4853]: I1122 09:08:15.554216 4853 generic.go:334] "Generic (PLEG): container finished" podID="14c5a26e-16f8-403e-8b68-76f9937f5482" containerID="8d3d87c318ac8a92b20a535421756fcbbf6ef6573fddeb3ee59389f3d78aba18" exitCode=0 Nov 22 09:08:15 crc kubenswrapper[4853]: I1122 09:08:15.554686 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z4wqj" event={"ID":"14c5a26e-16f8-403e-8b68-76f9937f5482","Type":"ContainerDied","Data":"8d3d87c318ac8a92b20a535421756fcbbf6ef6573fddeb3ee59389f3d78aba18"} Nov 22 09:08:17 crc kubenswrapper[4853]: I1122 09:08:17.584016 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z4wqj" event={"ID":"14c5a26e-16f8-403e-8b68-76f9937f5482","Type":"ContainerStarted","Data":"3331785d1b8bec6533372d07df7dadc2298b3fb2c39bc40ac0c1fff370c0a314"} Nov 22 09:08:17 crc kubenswrapper[4853]: I1122 09:08:17.612087 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-z4wqj" podStartSLOduration=4.388790325 podStartE2EDuration="13.61207001s" podCreationTimestamp="2025-11-22 09:08:04 +0000 UTC" firstStartedPulling="2025-11-22 09:08:07.455683628 +0000 UTC m=+7086.296306254" lastFinishedPulling="2025-11-22 09:08:16.678963313 +0000 UTC m=+7095.519585939" observedRunningTime="2025-11-22 09:08:17.600371495 +0000 UTC m=+7096.440994141" watchObservedRunningTime="2025-11-22 09:08:17.61207001 +0000 UTC m=+7096.452692636" Nov 22 09:08:19 crc kubenswrapper[4853]: I1122 09:08:19.106197 4853 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-marketplace/redhat-operators-lpbwr" podUID="c416f397-780e-4c64-8e84-0745d1d6ec4c" containerName="registry-server" probeResult="failure" output=< Nov 22 09:08:19 crc kubenswrapper[4853]: timeout: failed to connect service ":50051" within 1s Nov 22 09:08:19 crc kubenswrapper[4853]: > Nov 22 09:08:25 crc kubenswrapper[4853]: I1122 09:08:25.031576 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-z4wqj" Nov 22 09:08:25 crc kubenswrapper[4853]: I1122 09:08:25.032166 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-z4wqj" Nov 22 09:08:26 crc kubenswrapper[4853]: I1122 09:08:26.081633 4853 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-z4wqj" podUID="14c5a26e-16f8-403e-8b68-76f9937f5482" containerName="registry-server" probeResult="failure" output=< Nov 22 09:08:26 crc kubenswrapper[4853]: timeout: failed to connect service ":50051" within 1s Nov 22 09:08:26 crc kubenswrapper[4853]: > Nov 22 09:08:29 crc kubenswrapper[4853]: I1122 09:08:29.106438 4853 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-lpbwr" podUID="c416f397-780e-4c64-8e84-0745d1d6ec4c" containerName="registry-server" probeResult="failure" output=< Nov 22 09:08:29 crc kubenswrapper[4853]: timeout: failed to connect service ":50051" within 1s Nov 22 09:08:29 crc kubenswrapper[4853]: > Nov 22 09:08:36 crc kubenswrapper[4853]: I1122 09:08:36.084926 4853 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-z4wqj" podUID="14c5a26e-16f8-403e-8b68-76f9937f5482" containerName="registry-server" probeResult="failure" output=< Nov 22 09:08:36 crc kubenswrapper[4853]: timeout: failed to connect service ":50051" within 1s Nov 22 09:08:36 crc kubenswrapper[4853]: > Nov 22 09:08:39 crc kubenswrapper[4853]: I1122 09:08:39.096936 4853 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-lpbwr" podUID="c416f397-780e-4c64-8e84-0745d1d6ec4c" containerName="registry-server" probeResult="failure" output=< Nov 22 09:08:39 crc kubenswrapper[4853]: timeout: failed to connect service ":50051" within 1s Nov 22 09:08:39 crc kubenswrapper[4853]: > Nov 22 09:08:46 crc kubenswrapper[4853]: I1122 09:08:46.089091 4853 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-z4wqj" podUID="14c5a26e-16f8-403e-8b68-76f9937f5482" containerName="registry-server" probeResult="failure" output=< Nov 22 09:08:46 crc kubenswrapper[4853]: timeout: failed to connect service ":50051" within 1s Nov 22 09:08:46 crc kubenswrapper[4853]: > Nov 22 09:08:49 crc kubenswrapper[4853]: I1122 09:08:49.094874 4853 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-lpbwr" podUID="c416f397-780e-4c64-8e84-0745d1d6ec4c" containerName="registry-server" probeResult="failure" output=< Nov 22 09:08:49 crc kubenswrapper[4853]: timeout: failed to connect service ":50051" within 1s Nov 22 09:08:49 crc kubenswrapper[4853]: > Nov 22 09:08:56 crc kubenswrapper[4853]: I1122 09:08:56.086486 4853 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-z4wqj" podUID="14c5a26e-16f8-403e-8b68-76f9937f5482" containerName="registry-server" probeResult="failure" output=< Nov 22 09:08:56 crc kubenswrapper[4853]: timeout: failed to connect service ":50051" within 
1s Nov 22 09:08:56 crc kubenswrapper[4853]: > Nov 22 09:08:59 crc kubenswrapper[4853]: I1122 09:08:59.101026 4853 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-lpbwr" podUID="c416f397-780e-4c64-8e84-0745d1d6ec4c" containerName="registry-server" probeResult="failure" output=< Nov 22 09:08:59 crc kubenswrapper[4853]: timeout: failed to connect service ":50051" within 1s Nov 22 09:08:59 crc kubenswrapper[4853]: > Nov 22 09:09:01 crc kubenswrapper[4853]: I1122 09:09:01.297686 4853 patch_prober.go:28] interesting pod/machine-config-daemon-fflvd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 09:09:01 crc kubenswrapper[4853]: I1122 09:09:01.298005 4853 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 09:09:05 crc kubenswrapper[4853]: I1122 09:09:05.082906 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-z4wqj" Nov 22 09:09:05 crc kubenswrapper[4853]: I1122 09:09:05.156146 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-z4wqj" Nov 22 09:09:05 crc kubenswrapper[4853]: I1122 09:09:05.928936 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-z4wqj"] Nov 22 09:09:06 crc kubenswrapper[4853]: I1122 09:09:06.204892 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-z4wqj" podUID="14c5a26e-16f8-403e-8b68-76f9937f5482" containerName="registry-server" containerID="cri-o://3331785d1b8bec6533372d07df7dadc2298b3fb2c39bc40ac0c1fff370c0a314" gracePeriod=2 Nov 22 09:09:07 crc kubenswrapper[4853]: I1122 09:09:07.219088 4853 generic.go:334] "Generic (PLEG): container finished" podID="14c5a26e-16f8-403e-8b68-76f9937f5482" containerID="3331785d1b8bec6533372d07df7dadc2298b3fb2c39bc40ac0c1fff370c0a314" exitCode=0 Nov 22 09:09:07 crc kubenswrapper[4853]: I1122 09:09:07.219128 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z4wqj" event={"ID":"14c5a26e-16f8-403e-8b68-76f9937f5482","Type":"ContainerDied","Data":"3331785d1b8bec6533372d07df7dadc2298b3fb2c39bc40ac0c1fff370c0a314"} Nov 22 09:09:07 crc kubenswrapper[4853]: I1122 09:09:07.516208 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-z4wqj" Nov 22 09:09:07 crc kubenswrapper[4853]: I1122 09:09:07.630377 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14c5a26e-16f8-403e-8b68-76f9937f5482-catalog-content\") pod \"14c5a26e-16f8-403e-8b68-76f9937f5482\" (UID: \"14c5a26e-16f8-403e-8b68-76f9937f5482\") " Nov 22 09:09:07 crc kubenswrapper[4853]: I1122 09:09:07.630477 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14c5a26e-16f8-403e-8b68-76f9937f5482-utilities\") pod \"14c5a26e-16f8-403e-8b68-76f9937f5482\" (UID: \"14c5a26e-16f8-403e-8b68-76f9937f5482\") " Nov 22 09:09:07 crc kubenswrapper[4853]: I1122 09:09:07.630538 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sjxgf\" (UniqueName: \"kubernetes.io/projected/14c5a26e-16f8-403e-8b68-76f9937f5482-kube-api-access-sjxgf\") pod \"14c5a26e-16f8-403e-8b68-76f9937f5482\" (UID: \"14c5a26e-16f8-403e-8b68-76f9937f5482\") " Nov 22 09:09:07 crc kubenswrapper[4853]: I1122 09:09:07.631361 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/14c5a26e-16f8-403e-8b68-76f9937f5482-utilities" (OuterVolumeSpecName: "utilities") pod "14c5a26e-16f8-403e-8b68-76f9937f5482" (UID: "14c5a26e-16f8-403e-8b68-76f9937f5482"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 09:09:07 crc kubenswrapper[4853]: I1122 09:09:07.703026 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14c5a26e-16f8-403e-8b68-76f9937f5482-kube-api-access-sjxgf" (OuterVolumeSpecName: "kube-api-access-sjxgf") pod "14c5a26e-16f8-403e-8b68-76f9937f5482" (UID: "14c5a26e-16f8-403e-8b68-76f9937f5482"). InnerVolumeSpecName "kube-api-access-sjxgf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:09:07 crc kubenswrapper[4853]: I1122 09:09:07.733183 4853 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14c5a26e-16f8-403e-8b68-76f9937f5482-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 09:09:07 crc kubenswrapper[4853]: I1122 09:09:07.733472 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sjxgf\" (UniqueName: \"kubernetes.io/projected/14c5a26e-16f8-403e-8b68-76f9937f5482-kube-api-access-sjxgf\") on node \"crc\" DevicePath \"\"" Nov 22 09:09:07 crc kubenswrapper[4853]: I1122 09:09:07.817716 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/14c5a26e-16f8-403e-8b68-76f9937f5482-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "14c5a26e-16f8-403e-8b68-76f9937f5482" (UID: "14c5a26e-16f8-403e-8b68-76f9937f5482"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 09:09:07 crc kubenswrapper[4853]: I1122 09:09:07.836359 4853 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14c5a26e-16f8-403e-8b68-76f9937f5482-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 09:09:08 crc kubenswrapper[4853]: I1122 09:09:08.230464 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z4wqj" event={"ID":"14c5a26e-16f8-403e-8b68-76f9937f5482","Type":"ContainerDied","Data":"9ce07059dbd7e2d4898cf6c1ce06fcda96f8e284fc97932513b551c6585609a1"} Nov 22 09:09:08 crc kubenswrapper[4853]: I1122 09:09:08.230529 4853 scope.go:117] "RemoveContainer" containerID="3331785d1b8bec6533372d07df7dadc2298b3fb2c39bc40ac0c1fff370c0a314" Nov 22 09:09:08 crc kubenswrapper[4853]: I1122 09:09:08.231999 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-z4wqj" Nov 22 09:09:08 crc kubenswrapper[4853]: I1122 09:09:08.272243 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-z4wqj"] Nov 22 09:09:08 crc kubenswrapper[4853]: I1122 09:09:08.283654 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-z4wqj"] Nov 22 09:09:08 crc kubenswrapper[4853]: I1122 09:09:08.296707 4853 scope.go:117] "RemoveContainer" containerID="8d3d87c318ac8a92b20a535421756fcbbf6ef6573fddeb3ee59389f3d78aba18" Nov 22 09:09:08 crc kubenswrapper[4853]: I1122 09:09:08.329065 4853 scope.go:117] "RemoveContainer" containerID="abbfa284d3aede1caa006e706e4f5c28d68640a94f7360c2344e7259b453c67f" Nov 22 09:09:09 crc kubenswrapper[4853]: I1122 09:09:09.095917 4853 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-lpbwr" podUID="c416f397-780e-4c64-8e84-0745d1d6ec4c" containerName="registry-server" probeResult="failure" output=< Nov 22 09:09:09 crc kubenswrapper[4853]: timeout: failed to connect service ":50051" within 1s Nov 22 09:09:09 crc kubenswrapper[4853]: > Nov 22 09:09:09 crc kubenswrapper[4853]: I1122 09:09:09.096208 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-lpbwr" Nov 22 09:09:09 crc kubenswrapper[4853]: I1122 09:09:09.098126 4853 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="registry-server" containerStatusID={"Type":"cri-o","ID":"c9d52530e66f83d235e347229d6b997604efba916d02888fda138ca9af4886dc"} pod="openshift-marketplace/redhat-operators-lpbwr" containerMessage="Container registry-server failed startup probe, will be restarted" Nov 22 09:09:09 crc kubenswrapper[4853]: I1122 09:09:09.101101 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-lpbwr" podUID="c416f397-780e-4c64-8e84-0745d1d6ec4c" containerName="registry-server" containerID="cri-o://c9d52530e66f83d235e347229d6b997604efba916d02888fda138ca9af4886dc" gracePeriod=30 Nov 22 09:09:09 crc kubenswrapper[4853]: I1122 09:09:09.764448 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="14c5a26e-16f8-403e-8b68-76f9937f5482" path="/var/lib/kubelet/pods/14c5a26e-16f8-403e-8b68-76f9937f5482/volumes" Nov 22 09:09:12 crc kubenswrapper[4853]: I1122 09:09:12.915419 4853 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 22 09:09:13 crc 
kubenswrapper[4853]: I1122 09:09:13.284890 4853 generic.go:334] "Generic (PLEG): container finished" podID="c416f397-780e-4c64-8e84-0745d1d6ec4c" containerID="c9d52530e66f83d235e347229d6b997604efba916d02888fda138ca9af4886dc" exitCode=0 Nov 22 09:09:13 crc kubenswrapper[4853]: I1122 09:09:13.284930 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lpbwr" event={"ID":"c416f397-780e-4c64-8e84-0745d1d6ec4c","Type":"ContainerDied","Data":"c9d52530e66f83d235e347229d6b997604efba916d02888fda138ca9af4886dc"} Nov 22 09:09:15 crc kubenswrapper[4853]: I1122 09:09:15.310995 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lpbwr" event={"ID":"c416f397-780e-4c64-8e84-0745d1d6ec4c","Type":"ContainerStarted","Data":"550e39fdbc44a692eb62a9d94e3f96c605a8f206519e1d948e8043b6585c600b"} Nov 22 09:09:18 crc kubenswrapper[4853]: I1122 09:09:18.050312 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-lpbwr" Nov 22 09:09:18 crc kubenswrapper[4853]: I1122 09:09:18.050795 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-lpbwr" Nov 22 09:09:19 crc kubenswrapper[4853]: I1122 09:09:19.123774 4853 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-lpbwr" podUID="c416f397-780e-4c64-8e84-0745d1d6ec4c" containerName="registry-server" probeResult="failure" output=< Nov 22 09:09:19 crc kubenswrapper[4853]: timeout: failed to connect service ":50051" within 1s Nov 22 09:09:19 crc kubenswrapper[4853]: > Nov 22 09:09:28 crc kubenswrapper[4853]: I1122 09:09:28.103960 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-lpbwr" Nov 22 09:09:28 crc kubenswrapper[4853]: I1122 09:09:28.156902 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-lpbwr" Nov 22 09:09:28 crc kubenswrapper[4853]: I1122 09:09:28.347565 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-lpbwr"] Nov 22 09:09:29 crc kubenswrapper[4853]: I1122 09:09:29.466532 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-lpbwr" podUID="c416f397-780e-4c64-8e84-0745d1d6ec4c" containerName="registry-server" containerID="cri-o://550e39fdbc44a692eb62a9d94e3f96c605a8f206519e1d948e8043b6585c600b" gracePeriod=2 Nov 22 09:09:30 crc kubenswrapper[4853]: I1122 09:09:30.482018 4853 generic.go:334] "Generic (PLEG): container finished" podID="c416f397-780e-4c64-8e84-0745d1d6ec4c" containerID="550e39fdbc44a692eb62a9d94e3f96c605a8f206519e1d948e8043b6585c600b" exitCode=0 Nov 22 09:09:30 crc kubenswrapper[4853]: I1122 09:09:30.482104 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lpbwr" event={"ID":"c416f397-780e-4c64-8e84-0745d1d6ec4c","Type":"ContainerDied","Data":"550e39fdbc44a692eb62a9d94e3f96c605a8f206519e1d948e8043b6585c600b"} Nov 22 09:09:30 crc kubenswrapper[4853]: I1122 09:09:30.482458 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lpbwr" event={"ID":"c416f397-780e-4c64-8e84-0745d1d6ec4c","Type":"ContainerDied","Data":"780a22c2f96822f3eeab499f7625dd0037ca4257e22cf967329a02c4977a8b62"} Nov 22 09:09:30 crc kubenswrapper[4853]: I1122 09:09:30.482482 4853 
Nov 22 09:09:30 crc kubenswrapper[4853]: I1122 09:09:30.482502 4853 scope.go:117] "RemoveContainer" containerID="c9d52530e66f83d235e347229d6b997604efba916d02888fda138ca9af4886dc"
Nov 22 09:09:30 crc kubenswrapper[4853]: I1122 09:09:30.542053 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lpbwr"
Nov 22 09:09:30 crc kubenswrapper[4853]: I1122 09:09:30.620902 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rp4tl\" (UniqueName: \"kubernetes.io/projected/c416f397-780e-4c64-8e84-0745d1d6ec4c-kube-api-access-rp4tl\") pod \"c416f397-780e-4c64-8e84-0745d1d6ec4c\" (UID: \"c416f397-780e-4c64-8e84-0745d1d6ec4c\") "
Nov 22 09:09:30 crc kubenswrapper[4853]: I1122 09:09:30.621160 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c416f397-780e-4c64-8e84-0745d1d6ec4c-catalog-content\") pod \"c416f397-780e-4c64-8e84-0745d1d6ec4c\" (UID: \"c416f397-780e-4c64-8e84-0745d1d6ec4c\") "
Nov 22 09:09:30 crc kubenswrapper[4853]: I1122 09:09:30.621288 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c416f397-780e-4c64-8e84-0745d1d6ec4c-utilities\") pod \"c416f397-780e-4c64-8e84-0745d1d6ec4c\" (UID: \"c416f397-780e-4c64-8e84-0745d1d6ec4c\") "
Nov 22 09:09:30 crc kubenswrapper[4853]: I1122 09:09:30.621690 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c416f397-780e-4c64-8e84-0745d1d6ec4c-utilities" (OuterVolumeSpecName: "utilities") pod "c416f397-780e-4c64-8e84-0745d1d6ec4c" (UID: "c416f397-780e-4c64-8e84-0745d1d6ec4c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 22 09:09:30 crc kubenswrapper[4853]: I1122 09:09:30.622336 4853 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c416f397-780e-4c64-8e84-0745d1d6ec4c-utilities\") on node \"crc\" DevicePath \"\""
Nov 22 09:09:30 crc kubenswrapper[4853]: I1122 09:09:30.628225 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c416f397-780e-4c64-8e84-0745d1d6ec4c-kube-api-access-rp4tl" (OuterVolumeSpecName: "kube-api-access-rp4tl") pod "c416f397-780e-4c64-8e84-0745d1d6ec4c" (UID: "c416f397-780e-4c64-8e84-0745d1d6ec4c"). InnerVolumeSpecName "kube-api-access-rp4tl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 22 09:09:30 crc kubenswrapper[4853]: I1122 09:09:30.724254 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rp4tl\" (UniqueName: \"kubernetes.io/projected/c416f397-780e-4c64-8e84-0745d1d6ec4c-kube-api-access-rp4tl\") on node \"crc\" DevicePath \"\""
Nov 22 09:09:30 crc kubenswrapper[4853]: I1122 09:09:30.729235 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c416f397-780e-4c64-8e84-0745d1d6ec4c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c416f397-780e-4c64-8e84-0745d1d6ec4c" (UID: "c416f397-780e-4c64-8e84-0745d1d6ec4c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 22 09:09:30 crc kubenswrapper[4853]: I1122 09:09:30.826449 4853 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c416f397-780e-4c64-8e84-0745d1d6ec4c-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 22 09:09:31 crc kubenswrapper[4853]: I1122 09:09:31.297268 4853 patch_prober.go:28] interesting pod/machine-config-daemon-fflvd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 22 09:09:31 crc kubenswrapper[4853]: I1122 09:09:31.297594 4853 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 22 09:09:31 crc kubenswrapper[4853]: I1122 09:09:31.499446 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lpbwr"
Nov 22 09:09:31 crc kubenswrapper[4853]: I1122 09:09:31.544727 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-lpbwr"]
Nov 22 09:09:31 crc kubenswrapper[4853]: I1122 09:09:31.554692 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-lpbwr"]
Nov 22 09:09:31 crc kubenswrapper[4853]: I1122 09:09:31.766457 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c416f397-780e-4c64-8e84-0745d1d6ec4c" path="/var/lib/kubelet/pods/c416f397-780e-4c64-8e84-0745d1d6ec4c/volumes"
Nov 22 09:10:01 crc kubenswrapper[4853]: I1122 09:10:01.297573 4853 patch_prober.go:28] interesting pod/machine-config-daemon-fflvd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 22 09:10:01 crc kubenswrapper[4853]: I1122 09:10:01.298285 4853 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 22 09:10:01 crc kubenswrapper[4853]: I1122 09:10:01.298338 4853 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fflvd"
Nov 22 09:10:01 crc kubenswrapper[4853]: I1122 09:10:01.299350 4853 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b56f5f8bbee1802342bd2faf1f016affa55e29963b282748e5c6267465ea9957"} pod="openshift-machine-config-operator/machine-config-daemon-fflvd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Nov 22 09:10:01 crc kubenswrapper[4853]: I1122 09:10:01.299418 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" containerName="machine-config-daemon" containerID="cri-o://b56f5f8bbee1802342bd2faf1f016affa55e29963b282748e5c6267465ea9957" gracePeriod=600
containerID="cri-o://b56f5f8bbee1802342bd2faf1f016affa55e29963b282748e5c6267465ea9957" gracePeriod=600 Nov 22 09:10:01 crc kubenswrapper[4853]: I1122 09:10:01.834677 4853 generic.go:334] "Generic (PLEG): container finished" podID="476c875a-2b87-419a-8042-0ba059620fd8" containerID="b56f5f8bbee1802342bd2faf1f016affa55e29963b282748e5c6267465ea9957" exitCode=0 Nov 22 09:10:01 crc kubenswrapper[4853]: I1122 09:10:01.834720 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" event={"ID":"476c875a-2b87-419a-8042-0ba059620fd8","Type":"ContainerDied","Data":"b56f5f8bbee1802342bd2faf1f016affa55e29963b282748e5c6267465ea9957"} Nov 22 09:10:01 crc kubenswrapper[4853]: I1122 09:10:01.835049 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" event={"ID":"476c875a-2b87-419a-8042-0ba059620fd8","Type":"ContainerStarted","Data":"29225cba0575a3d4b630f8271793742f602b91f03b413d4d79e223b8328f134c"} Nov 22 09:10:01 crc kubenswrapper[4853]: I1122 09:10:01.835072 4853 scope.go:117] "RemoveContainer" containerID="bd95507de2f2b5abe45ab28fc6bce2e85c9617c68ccece2fdf55102971a72bd4" Nov 22 09:10:05 crc kubenswrapper[4853]: I1122 09:10:05.884407 4853 generic.go:334] "Generic (PLEG): container finished" podID="1f255ef5-a59e-42c4-9ac7-ff33562499f6" containerID="3f318f57cf513d5d41a7316de76dd2a62ca00827f327fe62a139e5ac93545688" exitCode=0 Nov 22 09:10:05 crc kubenswrapper[4853]: I1122 09:10:05.884527 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"1f255ef5-a59e-42c4-9ac7-ff33562499f6","Type":"ContainerDied","Data":"3f318f57cf513d5d41a7316de76dd2a62ca00827f327fe62a139e5ac93545688"} Nov 22 09:10:07 crc kubenswrapper[4853]: I1122 09:10:07.318029 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Nov 22 09:10:07 crc kubenswrapper[4853]: I1122 09:10:07.435044 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/1f255ef5-a59e-42c4-9ac7-ff33562499f6-test-operator-ephemeral-workdir\") pod \"1f255ef5-a59e-42c4-9ac7-ff33562499f6\" (UID: \"1f255ef5-a59e-42c4-9ac7-ff33562499f6\") " Nov 22 09:10:07 crc kubenswrapper[4853]: I1122 09:10:07.435643 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"1f255ef5-a59e-42c4-9ac7-ff33562499f6\" (UID: \"1f255ef5-a59e-42c4-9ac7-ff33562499f6\") " Nov 22 09:10:07 crc kubenswrapper[4853]: I1122 09:10:07.435718 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/1f255ef5-a59e-42c4-9ac7-ff33562499f6-ca-certs\") pod \"1f255ef5-a59e-42c4-9ac7-ff33562499f6\" (UID: \"1f255ef5-a59e-42c4-9ac7-ff33562499f6\") " Nov 22 09:10:07 crc kubenswrapper[4853]: I1122 09:10:07.435802 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1f255ef5-a59e-42c4-9ac7-ff33562499f6-config-data\") pod \"1f255ef5-a59e-42c4-9ac7-ff33562499f6\" (UID: \"1f255ef5-a59e-42c4-9ac7-ff33562499f6\") " Nov 22 09:10:07 crc kubenswrapper[4853]: I1122 09:10:07.435922 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/1f255ef5-a59e-42c4-9ac7-ff33562499f6-openstack-config\") pod \"1f255ef5-a59e-42c4-9ac7-ff33562499f6\" (UID: \"1f255ef5-a59e-42c4-9ac7-ff33562499f6\") " Nov 22 09:10:07 crc kubenswrapper[4853]: I1122 09:10:07.435987 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vrb84\" (UniqueName: \"kubernetes.io/projected/1f255ef5-a59e-42c4-9ac7-ff33562499f6-kube-api-access-vrb84\") pod \"1f255ef5-a59e-42c4-9ac7-ff33562499f6\" (UID: \"1f255ef5-a59e-42c4-9ac7-ff33562499f6\") " Nov 22 09:10:07 crc kubenswrapper[4853]: I1122 09:10:07.436018 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/1f255ef5-a59e-42c4-9ac7-ff33562499f6-openstack-config-secret\") pod \"1f255ef5-a59e-42c4-9ac7-ff33562499f6\" (UID: \"1f255ef5-a59e-42c4-9ac7-ff33562499f6\") " Nov 22 09:10:07 crc kubenswrapper[4853]: I1122 09:10:07.436104 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/1f255ef5-a59e-42c4-9ac7-ff33562499f6-test-operator-ephemeral-temporary\") pod \"1f255ef5-a59e-42c4-9ac7-ff33562499f6\" (UID: \"1f255ef5-a59e-42c4-9ac7-ff33562499f6\") " Nov 22 09:10:07 crc kubenswrapper[4853]: I1122 09:10:07.436132 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1f255ef5-a59e-42c4-9ac7-ff33562499f6-ssh-key\") pod \"1f255ef5-a59e-42c4-9ac7-ff33562499f6\" (UID: \"1f255ef5-a59e-42c4-9ac7-ff33562499f6\") " Nov 22 09:10:07 crc kubenswrapper[4853]: I1122 09:10:07.437023 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f255ef5-a59e-42c4-9ac7-ff33562499f6-test-operator-ephemeral-temporary" (OuterVolumeSpecName: 
"test-operator-ephemeral-temporary") pod "1f255ef5-a59e-42c4-9ac7-ff33562499f6" (UID: "1f255ef5-a59e-42c4-9ac7-ff33562499f6"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 09:10:07 crc kubenswrapper[4853]: I1122 09:10:07.438932 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1f255ef5-a59e-42c4-9ac7-ff33562499f6-config-data" (OuterVolumeSpecName: "config-data") pod "1f255ef5-a59e-42c4-9ac7-ff33562499f6" (UID: "1f255ef5-a59e-42c4-9ac7-ff33562499f6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 09:10:07 crc kubenswrapper[4853]: I1122 09:10:07.443436 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f255ef5-a59e-42c4-9ac7-ff33562499f6-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "1f255ef5-a59e-42c4-9ac7-ff33562499f6" (UID: "1f255ef5-a59e-42c4-9ac7-ff33562499f6"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 09:10:07 crc kubenswrapper[4853]: I1122 09:10:07.446330 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f255ef5-a59e-42c4-9ac7-ff33562499f6-kube-api-access-vrb84" (OuterVolumeSpecName: "kube-api-access-vrb84") pod "1f255ef5-a59e-42c4-9ac7-ff33562499f6" (UID: "1f255ef5-a59e-42c4-9ac7-ff33562499f6"). InnerVolumeSpecName "kube-api-access-vrb84". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:10:07 crc kubenswrapper[4853]: I1122 09:10:07.446476 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "test-operator-logs") pod "1f255ef5-a59e-42c4-9ac7-ff33562499f6" (UID: "1f255ef5-a59e-42c4-9ac7-ff33562499f6"). InnerVolumeSpecName "local-storage11-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 22 09:10:07 crc kubenswrapper[4853]: I1122 09:10:07.474680 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f255ef5-a59e-42c4-9ac7-ff33562499f6-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "1f255ef5-a59e-42c4-9ac7-ff33562499f6" (UID: "1f255ef5-a59e-42c4-9ac7-ff33562499f6"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:10:07 crc kubenswrapper[4853]: I1122 09:10:07.474760 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f255ef5-a59e-42c4-9ac7-ff33562499f6-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "1f255ef5-a59e-42c4-9ac7-ff33562499f6" (UID: "1f255ef5-a59e-42c4-9ac7-ff33562499f6"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:10:07 crc kubenswrapper[4853]: I1122 09:10:07.476925 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f255ef5-a59e-42c4-9ac7-ff33562499f6-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "1f255ef5-a59e-42c4-9ac7-ff33562499f6" (UID: "1f255ef5-a59e-42c4-9ac7-ff33562499f6"). InnerVolumeSpecName "openstack-config-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:10:07 crc kubenswrapper[4853]: I1122 09:10:07.505438 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1f255ef5-a59e-42c4-9ac7-ff33562499f6-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "1f255ef5-a59e-42c4-9ac7-ff33562499f6" (UID: "1f255ef5-a59e-42c4-9ac7-ff33562499f6"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 09:10:07 crc kubenswrapper[4853]: I1122 09:10:07.539359 4853 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/1f255ef5-a59e-42c4-9ac7-ff33562499f6-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Nov 22 09:10:07 crc kubenswrapper[4853]: I1122 09:10:07.539410 4853 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1f255ef5-a59e-42c4-9ac7-ff33562499f6-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 22 09:10:07 crc kubenswrapper[4853]: I1122 09:10:07.539423 4853 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/1f255ef5-a59e-42c4-9ac7-ff33562499f6-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Nov 22 09:10:07 crc kubenswrapper[4853]: I1122 09:10:07.541002 4853 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Nov 22 09:10:07 crc kubenswrapper[4853]: I1122 09:10:07.541027 4853 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/1f255ef5-a59e-42c4-9ac7-ff33562499f6-ca-certs\") on node \"crc\" DevicePath \"\"" Nov 22 09:10:07 crc kubenswrapper[4853]: I1122 09:10:07.541040 4853 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1f255ef5-a59e-42c4-9ac7-ff33562499f6-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 09:10:07 crc kubenswrapper[4853]: I1122 09:10:07.541052 4853 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/1f255ef5-a59e-42c4-9ac7-ff33562499f6-openstack-config\") on node \"crc\" DevicePath \"\"" Nov 22 09:10:07 crc kubenswrapper[4853]: I1122 09:10:07.541066 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vrb84\" (UniqueName: \"kubernetes.io/projected/1f255ef5-a59e-42c4-9ac7-ff33562499f6-kube-api-access-vrb84\") on node \"crc\" DevicePath \"\"" Nov 22 09:10:07 crc kubenswrapper[4853]: I1122 09:10:07.541077 4853 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/1f255ef5-a59e-42c4-9ac7-ff33562499f6-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Nov 22 09:10:07 crc kubenswrapper[4853]: I1122 09:10:07.578325 4853 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Nov 22 09:10:07 crc kubenswrapper[4853]: I1122 09:10:07.642992 4853 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Nov 22 09:10:07 crc kubenswrapper[4853]: I1122 09:10:07.910119 4853 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/tempest-tests-tempest" event={"ID":"1f255ef5-a59e-42c4-9ac7-ff33562499f6","Type":"ContainerDied","Data":"72d5d2d14e5f6004a4a0ed9f5350e810594a75ec1399c480ef72a06baf7d1e42"} Nov 22 09:10:07 crc kubenswrapper[4853]: I1122 09:10:07.910160 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="72d5d2d14e5f6004a4a0ed9f5350e810594a75ec1399c480ef72a06baf7d1e42" Nov 22 09:10:07 crc kubenswrapper[4853]: I1122 09:10:07.910190 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Nov 22 09:10:15 crc kubenswrapper[4853]: I1122 09:10:15.062167 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Nov 22 09:10:15 crc kubenswrapper[4853]: E1122 09:10:15.063169 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c416f397-780e-4c64-8e84-0745d1d6ec4c" containerName="registry-server" Nov 22 09:10:15 crc kubenswrapper[4853]: I1122 09:10:15.063186 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="c416f397-780e-4c64-8e84-0745d1d6ec4c" containerName="registry-server" Nov 22 09:10:15 crc kubenswrapper[4853]: E1122 09:10:15.063207 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c416f397-780e-4c64-8e84-0745d1d6ec4c" containerName="extract-utilities" Nov 22 09:10:15 crc kubenswrapper[4853]: I1122 09:10:15.063216 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="c416f397-780e-4c64-8e84-0745d1d6ec4c" containerName="extract-utilities" Nov 22 09:10:15 crc kubenswrapper[4853]: E1122 09:10:15.063247 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c416f397-780e-4c64-8e84-0745d1d6ec4c" containerName="registry-server" Nov 22 09:10:15 crc kubenswrapper[4853]: I1122 09:10:15.063255 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="c416f397-780e-4c64-8e84-0745d1d6ec4c" containerName="registry-server" Nov 22 09:10:15 crc kubenswrapper[4853]: E1122 09:10:15.063267 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14c5a26e-16f8-403e-8b68-76f9937f5482" containerName="extract-content" Nov 22 09:10:15 crc kubenswrapper[4853]: I1122 09:10:15.063275 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="14c5a26e-16f8-403e-8b68-76f9937f5482" containerName="extract-content" Nov 22 09:10:15 crc kubenswrapper[4853]: E1122 09:10:15.063297 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c416f397-780e-4c64-8e84-0745d1d6ec4c" containerName="extract-content" Nov 22 09:10:15 crc kubenswrapper[4853]: I1122 09:10:15.063304 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="c416f397-780e-4c64-8e84-0745d1d6ec4c" containerName="extract-content" Nov 22 09:10:15 crc kubenswrapper[4853]: E1122 09:10:15.063325 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14c5a26e-16f8-403e-8b68-76f9937f5482" containerName="registry-server" Nov 22 09:10:15 crc kubenswrapper[4853]: I1122 09:10:15.063332 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="14c5a26e-16f8-403e-8b68-76f9937f5482" containerName="registry-server" Nov 22 09:10:15 crc kubenswrapper[4853]: E1122 09:10:15.063361 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f255ef5-a59e-42c4-9ac7-ff33562499f6" containerName="tempest-tests-tempest-tests-runner" Nov 22 09:10:15 crc kubenswrapper[4853]: I1122 09:10:15.063369 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f255ef5-a59e-42c4-9ac7-ff33562499f6" 
containerName="tempest-tests-tempest-tests-runner" Nov 22 09:10:15 crc kubenswrapper[4853]: E1122 09:10:15.063403 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14c5a26e-16f8-403e-8b68-76f9937f5482" containerName="extract-utilities" Nov 22 09:10:15 crc kubenswrapper[4853]: I1122 09:10:15.063412 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="14c5a26e-16f8-403e-8b68-76f9937f5482" containerName="extract-utilities" Nov 22 09:10:15 crc kubenswrapper[4853]: I1122 09:10:15.063713 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f255ef5-a59e-42c4-9ac7-ff33562499f6" containerName="tempest-tests-tempest-tests-runner" Nov 22 09:10:15 crc kubenswrapper[4853]: I1122 09:10:15.063738 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="14c5a26e-16f8-403e-8b68-76f9937f5482" containerName="registry-server" Nov 22 09:10:15 crc kubenswrapper[4853]: I1122 09:10:15.063781 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="c416f397-780e-4c64-8e84-0745d1d6ec4c" containerName="registry-server" Nov 22 09:10:15 crc kubenswrapper[4853]: I1122 09:10:15.064788 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 22 09:10:15 crc kubenswrapper[4853]: I1122 09:10:15.066596 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-24rpp" Nov 22 09:10:15 crc kubenswrapper[4853]: I1122 09:10:15.073988 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Nov 22 09:10:15 crc kubenswrapper[4853]: I1122 09:10:15.122661 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-smpkb\" (UniqueName: \"kubernetes.io/projected/e203f149-c6fd-489f-b75a-d1dcded1fbdb-kube-api-access-smpkb\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"e203f149-c6fd-489f-b75a-d1dcded1fbdb\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 22 09:10:15 crc kubenswrapper[4853]: I1122 09:10:15.122867 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"e203f149-c6fd-489f-b75a-d1dcded1fbdb\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 22 09:10:15 crc kubenswrapper[4853]: I1122 09:10:15.225030 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-smpkb\" (UniqueName: \"kubernetes.io/projected/e203f149-c6fd-489f-b75a-d1dcded1fbdb-kube-api-access-smpkb\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"e203f149-c6fd-489f-b75a-d1dcded1fbdb\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 22 09:10:15 crc kubenswrapper[4853]: I1122 09:10:15.225181 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"e203f149-c6fd-489f-b75a-d1dcded1fbdb\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 22 09:10:15 crc kubenswrapper[4853]: I1122 09:10:15.227493 4853 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume 
\"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"e203f149-c6fd-489f-b75a-d1dcded1fbdb\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 22 09:10:15 crc kubenswrapper[4853]: I1122 09:10:15.249685 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-smpkb\" (UniqueName: \"kubernetes.io/projected/e203f149-c6fd-489f-b75a-d1dcded1fbdb-kube-api-access-smpkb\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"e203f149-c6fd-489f-b75a-d1dcded1fbdb\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 22 09:10:15 crc kubenswrapper[4853]: I1122 09:10:15.266871 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"e203f149-c6fd-489f-b75a-d1dcded1fbdb\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 22 09:10:15 crc kubenswrapper[4853]: I1122 09:10:15.383860 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 22 09:10:15 crc kubenswrapper[4853]: I1122 09:10:15.882528 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Nov 22 09:10:15 crc kubenswrapper[4853]: I1122 09:10:15.997990 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"e203f149-c6fd-489f-b75a-d1dcded1fbdb","Type":"ContainerStarted","Data":"9aec13f80b6e326267cdcd3f3221140a7bb4038a9350f38ea614a0c2a2cbd43a"} Nov 22 09:10:19 crc kubenswrapper[4853]: I1122 09:10:19.050183 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"e203f149-c6fd-489f-b75a-d1dcded1fbdb","Type":"ContainerStarted","Data":"2dc81e053642feafc40391ee49f44c251bc6d94e5a3819351e5657f56d28560d"} Nov 22 09:10:19 crc kubenswrapper[4853]: I1122 09:10:19.076705 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=1.446650889 podStartE2EDuration="4.076683052s" podCreationTimestamp="2025-11-22 09:10:15 +0000 UTC" firstStartedPulling="2025-11-22 09:10:15.910963609 +0000 UTC m=+7214.751586235" lastFinishedPulling="2025-11-22 09:10:18.540995742 +0000 UTC m=+7217.381618398" observedRunningTime="2025-11-22 09:10:19.074905353 +0000 UTC m=+7217.915527979" watchObservedRunningTime="2025-11-22 09:10:19.076683052 +0000 UTC m=+7217.917305678" Nov 22 09:11:07 crc kubenswrapper[4853]: I1122 09:11:07.022787 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-nq2xj/must-gather-vff7c"] Nov 22 09:11:07 crc kubenswrapper[4853]: I1122 09:11:07.024109 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="c416f397-780e-4c64-8e84-0745d1d6ec4c" containerName="registry-server" Nov 22 09:11:07 crc kubenswrapper[4853]: I1122 09:11:07.025660 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-nq2xj/must-gather-vff7c" Nov 22 09:11:07 crc kubenswrapper[4853]: I1122 09:11:07.028587 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-nq2xj"/"kube-root-ca.crt" Nov 22 09:11:07 crc kubenswrapper[4853]: I1122 09:11:07.029176 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-nq2xj"/"default-dockercfg-52v7m" Nov 22 09:11:07 crc kubenswrapper[4853]: I1122 09:11:07.035165 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-nq2xj"/"openshift-service-ca.crt" Nov 22 09:11:07 crc kubenswrapper[4853]: I1122 09:11:07.035992 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-nq2xj/must-gather-vff7c"] Nov 22 09:11:07 crc kubenswrapper[4853]: I1122 09:11:07.158563 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wkqmv\" (UniqueName: \"kubernetes.io/projected/0fa2dc9e-4884-499d-921c-ac6656e3d300-kube-api-access-wkqmv\") pod \"must-gather-vff7c\" (UID: \"0fa2dc9e-4884-499d-921c-ac6656e3d300\") " pod="openshift-must-gather-nq2xj/must-gather-vff7c" Nov 22 09:11:07 crc kubenswrapper[4853]: I1122 09:11:07.158967 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/0fa2dc9e-4884-499d-921c-ac6656e3d300-must-gather-output\") pod \"must-gather-vff7c\" (UID: \"0fa2dc9e-4884-499d-921c-ac6656e3d300\") " pod="openshift-must-gather-nq2xj/must-gather-vff7c" Nov 22 09:11:07 crc kubenswrapper[4853]: I1122 09:11:07.261468 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wkqmv\" (UniqueName: \"kubernetes.io/projected/0fa2dc9e-4884-499d-921c-ac6656e3d300-kube-api-access-wkqmv\") pod \"must-gather-vff7c\" (UID: \"0fa2dc9e-4884-499d-921c-ac6656e3d300\") " pod="openshift-must-gather-nq2xj/must-gather-vff7c" Nov 22 09:11:07 crc kubenswrapper[4853]: I1122 09:11:07.261622 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/0fa2dc9e-4884-499d-921c-ac6656e3d300-must-gather-output\") pod \"must-gather-vff7c\" (UID: \"0fa2dc9e-4884-499d-921c-ac6656e3d300\") " pod="openshift-must-gather-nq2xj/must-gather-vff7c" Nov 22 09:11:07 crc kubenswrapper[4853]: I1122 09:11:07.262137 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/0fa2dc9e-4884-499d-921c-ac6656e3d300-must-gather-output\") pod \"must-gather-vff7c\" (UID: \"0fa2dc9e-4884-499d-921c-ac6656e3d300\") " pod="openshift-must-gather-nq2xj/must-gather-vff7c" Nov 22 09:11:07 crc kubenswrapper[4853]: I1122 09:11:07.283937 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wkqmv\" (UniqueName: \"kubernetes.io/projected/0fa2dc9e-4884-499d-921c-ac6656e3d300-kube-api-access-wkqmv\") pod \"must-gather-vff7c\" (UID: \"0fa2dc9e-4884-499d-921c-ac6656e3d300\") " pod="openshift-must-gather-nq2xj/must-gather-vff7c" Nov 22 09:11:07 crc kubenswrapper[4853]: I1122 09:11:07.355880 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-nq2xj/must-gather-vff7c" Nov 22 09:11:07 crc kubenswrapper[4853]: I1122 09:11:07.846443 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-nq2xj/must-gather-vff7c"] Nov 22 09:11:07 crc kubenswrapper[4853]: W1122 09:11:07.854601 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0fa2dc9e_4884_499d_921c_ac6656e3d300.slice/crio-611438703bbcbaf7f4944bf3b47d131b22cda30eaf0f664e167fce85a7f43bc2 WatchSource:0}: Error finding container 611438703bbcbaf7f4944bf3b47d131b22cda30eaf0f664e167fce85a7f43bc2: Status 404 returned error can't find the container with id 611438703bbcbaf7f4944bf3b47d131b22cda30eaf0f664e167fce85a7f43bc2 Nov 22 09:11:08 crc kubenswrapper[4853]: I1122 09:11:08.812154 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-nq2xj/must-gather-vff7c" event={"ID":"0fa2dc9e-4884-499d-921c-ac6656e3d300","Type":"ContainerStarted","Data":"611438703bbcbaf7f4944bf3b47d131b22cda30eaf0f664e167fce85a7f43bc2"} Nov 22 09:11:17 crc kubenswrapper[4853]: I1122 09:11:17.929134 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-nq2xj/must-gather-vff7c" event={"ID":"0fa2dc9e-4884-499d-921c-ac6656e3d300","Type":"ContainerStarted","Data":"38893a5c5533845f5d723578a8677f657f0131dd3dc20fb9bc5c684aedb6b761"} Nov 22 09:11:17 crc kubenswrapper[4853]: I1122 09:11:17.930816 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-nq2xj/must-gather-vff7c" event={"ID":"0fa2dc9e-4884-499d-921c-ac6656e3d300","Type":"ContainerStarted","Data":"d108c542705a6ea728fe159c191713898223ef44b216f8eb03c739079541c756"} Nov 22 09:11:17 crc kubenswrapper[4853]: I1122 09:11:17.948854 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-nq2xj/must-gather-vff7c" podStartSLOduration=1.962888701 podStartE2EDuration="10.948830828s" podCreationTimestamp="2025-11-22 09:11:07 +0000 UTC" firstStartedPulling="2025-11-22 09:11:07.860742985 +0000 UTC m=+7266.701365611" lastFinishedPulling="2025-11-22 09:11:16.846685112 +0000 UTC m=+7275.687307738" observedRunningTime="2025-11-22 09:11:17.943769402 +0000 UTC m=+7276.784392048" watchObservedRunningTime="2025-11-22 09:11:17.948830828 +0000 UTC m=+7276.789453454" Nov 22 09:11:23 crc kubenswrapper[4853]: E1122 09:11:23.932165 4853 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 38.102.83.251:60296->38.102.83.251:37237: read tcp 38.102.83.251:60296->38.102.83.251:37237: read: connection reset by peer Nov 22 09:11:24 crc kubenswrapper[4853]: I1122 09:11:24.739138 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-nq2xj/crc-debug-xzr2w"] Nov 22 09:11:24 crc kubenswrapper[4853]: I1122 09:11:24.741041 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-nq2xj/crc-debug-xzr2w" Nov 22 09:11:24 crc kubenswrapper[4853]: I1122 09:11:24.822687 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a936bae1-a05f-4d90-8e05-51ba7b13d271-host\") pod \"crc-debug-xzr2w\" (UID: \"a936bae1-a05f-4d90-8e05-51ba7b13d271\") " pod="openshift-must-gather-nq2xj/crc-debug-xzr2w" Nov 22 09:11:24 crc kubenswrapper[4853]: I1122 09:11:24.822876 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g2b66\" (UniqueName: \"kubernetes.io/projected/a936bae1-a05f-4d90-8e05-51ba7b13d271-kube-api-access-g2b66\") pod \"crc-debug-xzr2w\" (UID: \"a936bae1-a05f-4d90-8e05-51ba7b13d271\") " pod="openshift-must-gather-nq2xj/crc-debug-xzr2w" Nov 22 09:11:24 crc kubenswrapper[4853]: I1122 09:11:24.925043 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a936bae1-a05f-4d90-8e05-51ba7b13d271-host\") pod \"crc-debug-xzr2w\" (UID: \"a936bae1-a05f-4d90-8e05-51ba7b13d271\") " pod="openshift-must-gather-nq2xj/crc-debug-xzr2w" Nov 22 09:11:24 crc kubenswrapper[4853]: I1122 09:11:24.925195 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g2b66\" (UniqueName: \"kubernetes.io/projected/a936bae1-a05f-4d90-8e05-51ba7b13d271-kube-api-access-g2b66\") pod \"crc-debug-xzr2w\" (UID: \"a936bae1-a05f-4d90-8e05-51ba7b13d271\") " pod="openshift-must-gather-nq2xj/crc-debug-xzr2w" Nov 22 09:11:24 crc kubenswrapper[4853]: I1122 09:11:24.925233 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a936bae1-a05f-4d90-8e05-51ba7b13d271-host\") pod \"crc-debug-xzr2w\" (UID: \"a936bae1-a05f-4d90-8e05-51ba7b13d271\") " pod="openshift-must-gather-nq2xj/crc-debug-xzr2w" Nov 22 09:11:24 crc kubenswrapper[4853]: I1122 09:11:24.950836 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g2b66\" (UniqueName: \"kubernetes.io/projected/a936bae1-a05f-4d90-8e05-51ba7b13d271-kube-api-access-g2b66\") pod \"crc-debug-xzr2w\" (UID: \"a936bae1-a05f-4d90-8e05-51ba7b13d271\") " pod="openshift-must-gather-nq2xj/crc-debug-xzr2w" Nov 22 09:11:25 crc kubenswrapper[4853]: I1122 09:11:25.059978 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-nq2xj/crc-debug-xzr2w" Nov 22 09:11:26 crc kubenswrapper[4853]: I1122 09:11:26.047316 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-nq2xj/crc-debug-xzr2w" event={"ID":"a936bae1-a05f-4d90-8e05-51ba7b13d271","Type":"ContainerStarted","Data":"209f7a1713831f5667ca7d90bfef5beee76cd317f37b902b4210ea160c3fe27e"} Nov 22 09:11:38 crc kubenswrapper[4853]: I1122 09:11:38.219925 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-nq2xj/crc-debug-xzr2w" event={"ID":"a936bae1-a05f-4d90-8e05-51ba7b13d271","Type":"ContainerStarted","Data":"597227cfc86e95bbae46855fb155d433f593957571b26cb120e198c1fa6baed4"} Nov 22 09:11:38 crc kubenswrapper[4853]: I1122 09:11:38.242035 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-nq2xj/crc-debug-xzr2w" podStartSLOduration=1.7409847520000001 podStartE2EDuration="14.242013885s" podCreationTimestamp="2025-11-22 09:11:24 +0000 UTC" firstStartedPulling="2025-11-22 09:11:25.110539283 +0000 UTC m=+7283.951161909" lastFinishedPulling="2025-11-22 09:11:37.611568416 +0000 UTC m=+7296.452191042" observedRunningTime="2025-11-22 09:11:38.232565942 +0000 UTC m=+7297.073188568" watchObservedRunningTime="2025-11-22 09:11:38.242013885 +0000 UTC m=+7297.082636511" Nov 22 09:12:01 crc kubenswrapper[4853]: I1122 09:12:01.297524 4853 patch_prober.go:28] interesting pod/machine-config-daemon-fflvd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 09:12:01 crc kubenswrapper[4853]: I1122 09:12:01.298148 4853 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 09:12:31 crc kubenswrapper[4853]: I1122 09:12:31.297587 4853 patch_prober.go:28] interesting pod/machine-config-daemon-fflvd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 09:12:31 crc kubenswrapper[4853]: I1122 09:12:31.298193 4853 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 09:12:34 crc kubenswrapper[4853]: I1122 09:12:34.913214 4853 generic.go:334] "Generic (PLEG): container finished" podID="a936bae1-a05f-4d90-8e05-51ba7b13d271" containerID="597227cfc86e95bbae46855fb155d433f593957571b26cb120e198c1fa6baed4" exitCode=0 Nov 22 09:12:34 crc kubenswrapper[4853]: I1122 09:12:34.913306 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-nq2xj/crc-debug-xzr2w" event={"ID":"a936bae1-a05f-4d90-8e05-51ba7b13d271","Type":"ContainerDied","Data":"597227cfc86e95bbae46855fb155d433f593957571b26cb120e198c1fa6baed4"} Nov 22 09:12:36 crc kubenswrapper[4853]: I1122 09:12:36.086165 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-nq2xj/crc-debug-xzr2w" Nov 22 09:12:36 crc kubenswrapper[4853]: I1122 09:12:36.123536 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-nq2xj/crc-debug-xzr2w"] Nov 22 09:12:36 crc kubenswrapper[4853]: I1122 09:12:36.133704 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-nq2xj/crc-debug-xzr2w"] Nov 22 09:12:36 crc kubenswrapper[4853]: I1122 09:12:36.146263 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g2b66\" (UniqueName: \"kubernetes.io/projected/a936bae1-a05f-4d90-8e05-51ba7b13d271-kube-api-access-g2b66\") pod \"a936bae1-a05f-4d90-8e05-51ba7b13d271\" (UID: \"a936bae1-a05f-4d90-8e05-51ba7b13d271\") " Nov 22 09:12:36 crc kubenswrapper[4853]: I1122 09:12:36.146687 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a936bae1-a05f-4d90-8e05-51ba7b13d271-host\") pod \"a936bae1-a05f-4d90-8e05-51ba7b13d271\" (UID: \"a936bae1-a05f-4d90-8e05-51ba7b13d271\") " Nov 22 09:12:36 crc kubenswrapper[4853]: I1122 09:12:36.147062 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a936bae1-a05f-4d90-8e05-51ba7b13d271-host" (OuterVolumeSpecName: "host") pod "a936bae1-a05f-4d90-8e05-51ba7b13d271" (UID: "a936bae1-a05f-4d90-8e05-51ba7b13d271"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 22 09:12:36 crc kubenswrapper[4853]: I1122 09:12:36.147577 4853 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a936bae1-a05f-4d90-8e05-51ba7b13d271-host\") on node \"crc\" DevicePath \"\"" Nov 22 09:12:36 crc kubenswrapper[4853]: I1122 09:12:36.153249 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a936bae1-a05f-4d90-8e05-51ba7b13d271-kube-api-access-g2b66" (OuterVolumeSpecName: "kube-api-access-g2b66") pod "a936bae1-a05f-4d90-8e05-51ba7b13d271" (UID: "a936bae1-a05f-4d90-8e05-51ba7b13d271"). InnerVolumeSpecName "kube-api-access-g2b66". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:12:36 crc kubenswrapper[4853]: I1122 09:12:36.250252 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g2b66\" (UniqueName: \"kubernetes.io/projected/a936bae1-a05f-4d90-8e05-51ba7b13d271-kube-api-access-g2b66\") on node \"crc\" DevicePath \"\"" Nov 22 09:12:36 crc kubenswrapper[4853]: I1122 09:12:36.936529 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="209f7a1713831f5667ca7d90bfef5beee76cd317f37b902b4210ea160c3fe27e" Nov 22 09:12:36 crc kubenswrapper[4853]: I1122 09:12:36.936586 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-nq2xj/crc-debug-xzr2w" Nov 22 09:12:37 crc kubenswrapper[4853]: I1122 09:12:37.306570 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-nq2xj/crc-debug-gslsb"] Nov 22 09:12:37 crc kubenswrapper[4853]: E1122 09:12:37.307351 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a936bae1-a05f-4d90-8e05-51ba7b13d271" containerName="container-00" Nov 22 09:12:37 crc kubenswrapper[4853]: I1122 09:12:37.307364 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="a936bae1-a05f-4d90-8e05-51ba7b13d271" containerName="container-00" Nov 22 09:12:37 crc kubenswrapper[4853]: I1122 09:12:37.307604 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="a936bae1-a05f-4d90-8e05-51ba7b13d271" containerName="container-00" Nov 22 09:12:37 crc kubenswrapper[4853]: I1122 09:12:37.308429 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-nq2xj/crc-debug-gslsb" Nov 22 09:12:37 crc kubenswrapper[4853]: I1122 09:12:37.378657 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1b98f51c-dbc3-496c-a8ac-dee61fb5d1bf-host\") pod \"crc-debug-gslsb\" (UID: \"1b98f51c-dbc3-496c-a8ac-dee61fb5d1bf\") " pod="openshift-must-gather-nq2xj/crc-debug-gslsb" Nov 22 09:12:37 crc kubenswrapper[4853]: I1122 09:12:37.378716 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xt2cv\" (UniqueName: \"kubernetes.io/projected/1b98f51c-dbc3-496c-a8ac-dee61fb5d1bf-kube-api-access-xt2cv\") pod \"crc-debug-gslsb\" (UID: \"1b98f51c-dbc3-496c-a8ac-dee61fb5d1bf\") " pod="openshift-must-gather-nq2xj/crc-debug-gslsb" Nov 22 09:12:37 crc kubenswrapper[4853]: I1122 09:12:37.481165 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1b98f51c-dbc3-496c-a8ac-dee61fb5d1bf-host\") pod \"crc-debug-gslsb\" (UID: \"1b98f51c-dbc3-496c-a8ac-dee61fb5d1bf\") " pod="openshift-must-gather-nq2xj/crc-debug-gslsb" Nov 22 09:12:37 crc kubenswrapper[4853]: I1122 09:12:37.481240 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xt2cv\" (UniqueName: \"kubernetes.io/projected/1b98f51c-dbc3-496c-a8ac-dee61fb5d1bf-kube-api-access-xt2cv\") pod \"crc-debug-gslsb\" (UID: \"1b98f51c-dbc3-496c-a8ac-dee61fb5d1bf\") " pod="openshift-must-gather-nq2xj/crc-debug-gslsb" Nov 22 09:12:37 crc kubenswrapper[4853]: I1122 09:12:37.481788 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1b98f51c-dbc3-496c-a8ac-dee61fb5d1bf-host\") pod \"crc-debug-gslsb\" (UID: \"1b98f51c-dbc3-496c-a8ac-dee61fb5d1bf\") " pod="openshift-must-gather-nq2xj/crc-debug-gslsb" Nov 22 09:12:37 crc kubenswrapper[4853]: I1122 09:12:37.503558 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xt2cv\" (UniqueName: \"kubernetes.io/projected/1b98f51c-dbc3-496c-a8ac-dee61fb5d1bf-kube-api-access-xt2cv\") pod \"crc-debug-gslsb\" (UID: \"1b98f51c-dbc3-496c-a8ac-dee61fb5d1bf\") " pod="openshift-must-gather-nq2xj/crc-debug-gslsb" Nov 22 09:12:37 crc kubenswrapper[4853]: I1122 09:12:37.628937 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-nq2xj/crc-debug-gslsb" Nov 22 09:12:37 crc kubenswrapper[4853]: I1122 09:12:37.764381 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a936bae1-a05f-4d90-8e05-51ba7b13d271" path="/var/lib/kubelet/pods/a936bae1-a05f-4d90-8e05-51ba7b13d271/volumes" Nov 22 09:12:37 crc kubenswrapper[4853]: I1122 09:12:37.953424 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-nq2xj/crc-debug-gslsb" event={"ID":"1b98f51c-dbc3-496c-a8ac-dee61fb5d1bf","Type":"ContainerStarted","Data":"8c93798b00ae8f321a164f5bdd90591abb57e2fa17f80ec1e4c72b9860589789"} Nov 22 09:12:38 crc kubenswrapper[4853]: I1122 09:12:38.967144 4853 generic.go:334] "Generic (PLEG): container finished" podID="1b98f51c-dbc3-496c-a8ac-dee61fb5d1bf" containerID="9c6aed72818c029ec439bfbd9a8e694361168497429f5a6d63162e987293a5f1" exitCode=0 Nov 22 09:12:38 crc kubenswrapper[4853]: I1122 09:12:38.967184 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-nq2xj/crc-debug-gslsb" event={"ID":"1b98f51c-dbc3-496c-a8ac-dee61fb5d1bf","Type":"ContainerDied","Data":"9c6aed72818c029ec439bfbd9a8e694361168497429f5a6d63162e987293a5f1"} Nov 22 09:12:40 crc kubenswrapper[4853]: I1122 09:12:40.126312 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-nq2xj/crc-debug-gslsb" Nov 22 09:12:40 crc kubenswrapper[4853]: I1122 09:12:40.238292 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xt2cv\" (UniqueName: \"kubernetes.io/projected/1b98f51c-dbc3-496c-a8ac-dee61fb5d1bf-kube-api-access-xt2cv\") pod \"1b98f51c-dbc3-496c-a8ac-dee61fb5d1bf\" (UID: \"1b98f51c-dbc3-496c-a8ac-dee61fb5d1bf\") " Nov 22 09:12:40 crc kubenswrapper[4853]: I1122 09:12:40.238445 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1b98f51c-dbc3-496c-a8ac-dee61fb5d1bf-host\") pod \"1b98f51c-dbc3-496c-a8ac-dee61fb5d1bf\" (UID: \"1b98f51c-dbc3-496c-a8ac-dee61fb5d1bf\") " Nov 22 09:12:40 crc kubenswrapper[4853]: I1122 09:12:40.238500 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1b98f51c-dbc3-496c-a8ac-dee61fb5d1bf-host" (OuterVolumeSpecName: "host") pod "1b98f51c-dbc3-496c-a8ac-dee61fb5d1bf" (UID: "1b98f51c-dbc3-496c-a8ac-dee61fb5d1bf"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 22 09:12:40 crc kubenswrapper[4853]: I1122 09:12:40.239531 4853 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1b98f51c-dbc3-496c-a8ac-dee61fb5d1bf-host\") on node \"crc\" DevicePath \"\"" Nov 22 09:12:40 crc kubenswrapper[4853]: I1122 09:12:40.244385 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b98f51c-dbc3-496c-a8ac-dee61fb5d1bf-kube-api-access-xt2cv" (OuterVolumeSpecName: "kube-api-access-xt2cv") pod "1b98f51c-dbc3-496c-a8ac-dee61fb5d1bf" (UID: "1b98f51c-dbc3-496c-a8ac-dee61fb5d1bf"). InnerVolumeSpecName "kube-api-access-xt2cv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:12:40 crc kubenswrapper[4853]: I1122 09:12:40.352848 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xt2cv\" (UniqueName: \"kubernetes.io/projected/1b98f51c-dbc3-496c-a8ac-dee61fb5d1bf-kube-api-access-xt2cv\") on node \"crc\" DevicePath \"\"" Nov 22 09:12:40 crc kubenswrapper[4853]: I1122 09:12:40.994453 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-nq2xj/crc-debug-gslsb" event={"ID":"1b98f51c-dbc3-496c-a8ac-dee61fb5d1bf","Type":"ContainerDied","Data":"8c93798b00ae8f321a164f5bdd90591abb57e2fa17f80ec1e4c72b9860589789"} Nov 22 09:12:40 crc kubenswrapper[4853]: I1122 09:12:40.994871 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8c93798b00ae8f321a164f5bdd90591abb57e2fa17f80ec1e4c72b9860589789" Nov 22 09:12:40 crc kubenswrapper[4853]: I1122 09:12:40.994553 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-nq2xj/crc-debug-gslsb" Nov 22 09:12:41 crc kubenswrapper[4853]: I1122 09:12:41.460977 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-nq2xj/crc-debug-gslsb"] Nov 22 09:12:41 crc kubenswrapper[4853]: I1122 09:12:41.479957 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-nq2xj/crc-debug-gslsb"] Nov 22 09:12:41 crc kubenswrapper[4853]: I1122 09:12:41.773323 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1b98f51c-dbc3-496c-a8ac-dee61fb5d1bf" path="/var/lib/kubelet/pods/1b98f51c-dbc3-496c-a8ac-dee61fb5d1bf/volumes" Nov 22 09:12:42 crc kubenswrapper[4853]: I1122 09:12:42.666629 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-nq2xj/crc-debug-n5nn7"] Nov 22 09:12:42 crc kubenswrapper[4853]: E1122 09:12:42.667202 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b98f51c-dbc3-496c-a8ac-dee61fb5d1bf" containerName="container-00" Nov 22 09:12:42 crc kubenswrapper[4853]: I1122 09:12:42.667217 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b98f51c-dbc3-496c-a8ac-dee61fb5d1bf" containerName="container-00" Nov 22 09:12:42 crc kubenswrapper[4853]: I1122 09:12:42.667441 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b98f51c-dbc3-496c-a8ac-dee61fb5d1bf" containerName="container-00" Nov 22 09:12:42 crc kubenswrapper[4853]: I1122 09:12:42.668324 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-nq2xj/crc-debug-n5nn7" Nov 22 09:12:42 crc kubenswrapper[4853]: I1122 09:12:42.720517 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txf5x\" (UniqueName: \"kubernetes.io/projected/b52b3f34-285a-4b0c-955b-a3525ec0c6d2-kube-api-access-txf5x\") pod \"crc-debug-n5nn7\" (UID: \"b52b3f34-285a-4b0c-955b-a3525ec0c6d2\") " pod="openshift-must-gather-nq2xj/crc-debug-n5nn7" Nov 22 09:12:42 crc kubenswrapper[4853]: I1122 09:12:42.720955 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b52b3f34-285a-4b0c-955b-a3525ec0c6d2-host\") pod \"crc-debug-n5nn7\" (UID: \"b52b3f34-285a-4b0c-955b-a3525ec0c6d2\") " pod="openshift-must-gather-nq2xj/crc-debug-n5nn7" Nov 22 09:12:42 crc kubenswrapper[4853]: I1122 09:12:42.823443 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b52b3f34-285a-4b0c-955b-a3525ec0c6d2-host\") pod \"crc-debug-n5nn7\" (UID: \"b52b3f34-285a-4b0c-955b-a3525ec0c6d2\") " pod="openshift-must-gather-nq2xj/crc-debug-n5nn7" Nov 22 09:12:42 crc kubenswrapper[4853]: I1122 09:12:42.823568 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b52b3f34-285a-4b0c-955b-a3525ec0c6d2-host\") pod \"crc-debug-n5nn7\" (UID: \"b52b3f34-285a-4b0c-955b-a3525ec0c6d2\") " pod="openshift-must-gather-nq2xj/crc-debug-n5nn7" Nov 22 09:12:42 crc kubenswrapper[4853]: I1122 09:12:42.825023 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-txf5x\" (UniqueName: \"kubernetes.io/projected/b52b3f34-285a-4b0c-955b-a3525ec0c6d2-kube-api-access-txf5x\") pod \"crc-debug-n5nn7\" (UID: \"b52b3f34-285a-4b0c-955b-a3525ec0c6d2\") " pod="openshift-must-gather-nq2xj/crc-debug-n5nn7" Nov 22 09:12:42 crc kubenswrapper[4853]: I1122 09:12:42.849299 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-txf5x\" (UniqueName: \"kubernetes.io/projected/b52b3f34-285a-4b0c-955b-a3525ec0c6d2-kube-api-access-txf5x\") pod \"crc-debug-n5nn7\" (UID: \"b52b3f34-285a-4b0c-955b-a3525ec0c6d2\") " pod="openshift-must-gather-nq2xj/crc-debug-n5nn7" Nov 22 09:12:42 crc kubenswrapper[4853]: I1122 09:12:42.989395 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-nq2xj/crc-debug-n5nn7" Nov 22 09:12:44 crc kubenswrapper[4853]: I1122 09:12:44.026548 4853 generic.go:334] "Generic (PLEG): container finished" podID="b52b3f34-285a-4b0c-955b-a3525ec0c6d2" containerID="f3c3f8a59cf1655545df02685b1b91991f61a89c7d57312e690cef3caa7dc98d" exitCode=0 Nov 22 09:12:44 crc kubenswrapper[4853]: I1122 09:12:44.026869 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-nq2xj/crc-debug-n5nn7" event={"ID":"b52b3f34-285a-4b0c-955b-a3525ec0c6d2","Type":"ContainerDied","Data":"f3c3f8a59cf1655545df02685b1b91991f61a89c7d57312e690cef3caa7dc98d"} Nov 22 09:12:44 crc kubenswrapper[4853]: I1122 09:12:44.026894 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-nq2xj/crc-debug-n5nn7" event={"ID":"b52b3f34-285a-4b0c-955b-a3525ec0c6d2","Type":"ContainerStarted","Data":"7b5cb9ed0f476d73011482d75400a5143525631d9b964b0e65a589e13eb42b77"} Nov 22 09:12:44 crc kubenswrapper[4853]: I1122 09:12:44.070969 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-nq2xj/crc-debug-n5nn7"] Nov 22 09:12:44 crc kubenswrapper[4853]: I1122 09:12:44.080510 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-nq2xj/crc-debug-n5nn7"] Nov 22 09:12:45 crc kubenswrapper[4853]: I1122 09:12:45.170462 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-nq2xj/crc-debug-n5nn7" Nov 22 09:12:45 crc kubenswrapper[4853]: I1122 09:12:45.286578 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-txf5x\" (UniqueName: \"kubernetes.io/projected/b52b3f34-285a-4b0c-955b-a3525ec0c6d2-kube-api-access-txf5x\") pod \"b52b3f34-285a-4b0c-955b-a3525ec0c6d2\" (UID: \"b52b3f34-285a-4b0c-955b-a3525ec0c6d2\") " Nov 22 09:12:45 crc kubenswrapper[4853]: I1122 09:12:45.287152 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b52b3f34-285a-4b0c-955b-a3525ec0c6d2-host\") pod \"b52b3f34-285a-4b0c-955b-a3525ec0c6d2\" (UID: \"b52b3f34-285a-4b0c-955b-a3525ec0c6d2\") " Nov 22 09:12:45 crc kubenswrapper[4853]: I1122 09:12:45.287704 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b52b3f34-285a-4b0c-955b-a3525ec0c6d2-host" (OuterVolumeSpecName: "host") pod "b52b3f34-285a-4b0c-955b-a3525ec0c6d2" (UID: "b52b3f34-285a-4b0c-955b-a3525ec0c6d2"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 22 09:12:45 crc kubenswrapper[4853]: I1122 09:12:45.292611 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b52b3f34-285a-4b0c-955b-a3525ec0c6d2-kube-api-access-txf5x" (OuterVolumeSpecName: "kube-api-access-txf5x") pod "b52b3f34-285a-4b0c-955b-a3525ec0c6d2" (UID: "b52b3f34-285a-4b0c-955b-a3525ec0c6d2"). InnerVolumeSpecName "kube-api-access-txf5x". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:12:45 crc kubenswrapper[4853]: I1122 09:12:45.390457 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-txf5x\" (UniqueName: \"kubernetes.io/projected/b52b3f34-285a-4b0c-955b-a3525ec0c6d2-kube-api-access-txf5x\") on node \"crc\" DevicePath \"\"" Nov 22 09:12:45 crc kubenswrapper[4853]: I1122 09:12:45.390503 4853 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b52b3f34-285a-4b0c-955b-a3525ec0c6d2-host\") on node \"crc\" DevicePath \"\"" Nov 22 09:12:45 crc kubenswrapper[4853]: I1122 09:12:45.762115 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b52b3f34-285a-4b0c-955b-a3525ec0c6d2" path="/var/lib/kubelet/pods/b52b3f34-285a-4b0c-955b-a3525ec0c6d2/volumes" Nov 22 09:12:46 crc kubenswrapper[4853]: I1122 09:12:46.049066 4853 scope.go:117] "RemoveContainer" containerID="f3c3f8a59cf1655545df02685b1b91991f61a89c7d57312e690cef3caa7dc98d" Nov 22 09:12:46 crc kubenswrapper[4853]: I1122 09:12:46.049139 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-nq2xj/crc-debug-n5nn7" Nov 22 09:13:01 crc kubenswrapper[4853]: I1122 09:13:01.297409 4853 patch_prober.go:28] interesting pod/machine-config-daemon-fflvd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 09:13:01 crc kubenswrapper[4853]: I1122 09:13:01.297941 4853 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 09:13:01 crc kubenswrapper[4853]: I1122 09:13:01.297993 4853 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" Nov 22 09:13:01 crc kubenswrapper[4853]: I1122 09:13:01.298949 4853 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"29225cba0575a3d4b630f8271793742f602b91f03b413d4d79e223b8328f134c"} pod="openshift-machine-config-operator/machine-config-daemon-fflvd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 22 09:13:01 crc kubenswrapper[4853]: I1122 09:13:01.299014 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" containerName="machine-config-daemon" containerID="cri-o://29225cba0575a3d4b630f8271793742f602b91f03b413d4d79e223b8328f134c" gracePeriod=600 Nov 22 09:13:01 crc kubenswrapper[4853]: E1122 09:13:01.440403 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" Nov 22 09:13:02 crc kubenswrapper[4853]: I1122 09:13:02.457422 4853 
generic.go:334] "Generic (PLEG): container finished" podID="476c875a-2b87-419a-8042-0ba059620fd8" containerID="29225cba0575a3d4b630f8271793742f602b91f03b413d4d79e223b8328f134c" exitCode=0 Nov 22 09:13:02 crc kubenswrapper[4853]: I1122 09:13:02.457493 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" event={"ID":"476c875a-2b87-419a-8042-0ba059620fd8","Type":"ContainerDied","Data":"29225cba0575a3d4b630f8271793742f602b91f03b413d4d79e223b8328f134c"} Nov 22 09:13:02 crc kubenswrapper[4853]: I1122 09:13:02.457528 4853 scope.go:117] "RemoveContainer" containerID="b56f5f8bbee1802342bd2faf1f016affa55e29963b282748e5c6267465ea9957" Nov 22 09:13:02 crc kubenswrapper[4853]: I1122 09:13:02.458364 4853 scope.go:117] "RemoveContainer" containerID="29225cba0575a3d4b630f8271793742f602b91f03b413d4d79e223b8328f134c" Nov 22 09:13:02 crc kubenswrapper[4853]: E1122 09:13:02.458826 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" Nov 22 09:13:10 crc kubenswrapper[4853]: I1122 09:13:10.528877 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_01a252c2-19bf-4c3d-83d6-685e0c49606d/aodh-api/0.log" Nov 22 09:13:10 crc kubenswrapper[4853]: I1122 09:13:10.728344 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_01a252c2-19bf-4c3d-83d6-685e0c49606d/aodh-listener/0.log" Nov 22 09:13:10 crc kubenswrapper[4853]: I1122 09:13:10.756277 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_01a252c2-19bf-4c3d-83d6-685e0c49606d/aodh-evaluator/0.log" Nov 22 09:13:10 crc kubenswrapper[4853]: I1122 09:13:10.874687 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_01a252c2-19bf-4c3d-83d6-685e0c49606d/aodh-notifier/0.log" Nov 22 09:13:10 crc kubenswrapper[4853]: I1122 09:13:10.933995 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-d58855874-6hg9r_1ea82711-6541-4717-8711-16a13f6ce28c/barbican-api/0.log" Nov 22 09:13:10 crc kubenswrapper[4853]: I1122 09:13:10.987797 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-d58855874-6hg9r_1ea82711-6541-4717-8711-16a13f6ce28c/barbican-api-log/0.log" Nov 22 09:13:11 crc kubenswrapper[4853]: I1122 09:13:11.150094 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-57684d7498-b46f9_0afdca33-fd60-4480-b1f7-29ec0199998e/barbican-keystone-listener/0.log" Nov 22 09:13:11 crc kubenswrapper[4853]: I1122 09:13:11.297151 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-57684d7498-b46f9_0afdca33-fd60-4480-b1f7-29ec0199998e/barbican-keystone-listener-log/0.log" Nov 22 09:13:11 crc kubenswrapper[4853]: I1122 09:13:11.402995 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-5f67678855-5vc2g_b9146abf-7a18-4ae8-a1e8-df3456597edf/barbican-worker-log/0.log" Nov 22 09:13:11 crc kubenswrapper[4853]: I1122 09:13:11.440933 4853 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_barbican-worker-5f67678855-5vc2g_b9146abf-7a18-4ae8-a1e8-df3456597edf/barbican-worker/0.log" Nov 22 09:13:11 crc kubenswrapper[4853]: I1122 09:13:11.592015 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-bdlzq_134d3ebf-3b18-46f5-b30e-7856a1a6bc6a/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Nov 22 09:13:11 crc kubenswrapper[4853]: I1122 09:13:11.956691 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_58a7dcf9-4712-4ffe-90d1-ea827dc02982/ceilometer-central-agent/1.log" Nov 22 09:13:12 crc kubenswrapper[4853]: I1122 09:13:12.128909 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_58a7dcf9-4712-4ffe-90d1-ea827dc02982/ceilometer-notification-agent/0.log" Nov 22 09:13:12 crc kubenswrapper[4853]: I1122 09:13:12.175014 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_58a7dcf9-4712-4ffe-90d1-ea827dc02982/proxy-httpd/0.log" Nov 22 09:13:12 crc kubenswrapper[4853]: I1122 09:13:12.200663 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_58a7dcf9-4712-4ffe-90d1-ea827dc02982/ceilometer-central-agent/0.log" Nov 22 09:13:12 crc kubenswrapper[4853]: I1122 09:13:12.224787 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_58a7dcf9-4712-4ffe-90d1-ea827dc02982/sg-core/0.log" Nov 22 09:13:12 crc kubenswrapper[4853]: I1122 09:13:12.519221 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_efb7e269-fe2e-45b4-949e-8f862ef94e3c/cinder-api-log/0.log" Nov 22 09:13:12 crc kubenswrapper[4853]: I1122 09:13:12.526312 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_efb7e269-fe2e-45b4-949e-8f862ef94e3c/cinder-api/0.log" Nov 22 09:13:12 crc kubenswrapper[4853]: I1122 09:13:12.655990 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_2ab28049-d7dd-41b2-ae06-95c5a283266a/cinder-scheduler/0.log" Nov 22 09:13:12 crc kubenswrapper[4853]: I1122 09:13:12.795246 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_2ab28049-d7dd-41b2-ae06-95c5a283266a/probe/0.log" Nov 22 09:13:12 crc kubenswrapper[4853]: I1122 09:13:12.848847 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-b44f2_35333b8f-d62c-4e4d-b8c1-1f0add8b1ec6/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Nov 22 09:13:13 crc kubenswrapper[4853]: I1122 09:13:13.020970 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-t6wjk_9a33fbd8-6d28-4cc6-b1f1-5d90c247f992/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 22 09:13:13 crc kubenswrapper[4853]: I1122 09:13:13.106905 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-5596c69fcc-jz5vh_c6e17800-8b9e-4fdd-9b49-a2b48b8cb5a2/init/0.log" Nov 22 09:13:13 crc kubenswrapper[4853]: I1122 09:13:13.269381 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-5596c69fcc-jz5vh_c6e17800-8b9e-4fdd-9b49-a2b48b8cb5a2/init/0.log" Nov 22 09:13:13 crc kubenswrapper[4853]: I1122 09:13:13.323879 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-5596c69fcc-jz5vh_c6e17800-8b9e-4fdd-9b49-a2b48b8cb5a2/dnsmasq-dns/0.log" Nov 22 
09:13:13 crc kubenswrapper[4853]: I1122 09:13:13.349136 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-gspf4_eb7c2a78-a864-4f26-ae10-e2f64ff95b0d/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Nov 22 09:13:13 crc kubenswrapper[4853]: I1122 09:13:13.599500 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_5554d3b5-8219-4dc0-9f3e-cb1ee319ef72/glance-log/0.log" Nov 22 09:13:13 crc kubenswrapper[4853]: I1122 09:13:13.634094 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_5554d3b5-8219-4dc0-9f3e-cb1ee319ef72/glance-httpd/0.log" Nov 22 09:13:13 crc kubenswrapper[4853]: I1122 09:13:13.764420 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_92c28892-dc0c-4bf5-bd5f-1ed4b702852f/glance-httpd/0.log" Nov 22 09:13:13 crc kubenswrapper[4853]: I1122 09:13:13.830333 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_92c28892-dc0c-4bf5-bd5f-1ed4b702852f/glance-log/0.log" Nov 22 09:13:14 crc kubenswrapper[4853]: I1122 09:13:14.541885 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-engine-697c44f7b5-9vpfm_f5b4c3b6-9c73-4976-b412-341704301db3/heat-engine/0.log" Nov 22 09:13:14 crc kubenswrapper[4853]: I1122 09:13:14.562980 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-api-6d654b9979-5pkjs_cb5c6ce8-f8af-4ad3-a004-04c188ba6c92/heat-api/0.log" Nov 22 09:13:14 crc kubenswrapper[4853]: I1122 09:13:14.794151 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-mg62g_48806bf3-8709-441a-bf45-7a89c6ce9b32/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Nov 22 09:13:14 crc kubenswrapper[4853]: I1122 09:13:14.820705 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-cfnapi-c66cc79fb-w5kgp_0bdc440c-227d-43dd-9e9d-500ba10fc239/heat-cfnapi/0.log" Nov 22 09:13:14 crc kubenswrapper[4853]: I1122 09:13:14.933652 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-6n78r_77023bdf-69ac-4065-b6de-af12e3477fd9/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 22 09:13:15 crc kubenswrapper[4853]: I1122 09:13:15.024731 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29396641-szmsx_12bcd8e0-a04b-49b7-a158-46e8da15bc48/keystone-cron/0.log" Nov 22 09:13:15 crc kubenswrapper[4853]: I1122 09:13:15.079798 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29396701-4ldcl_b7e0bbfc-0e09-4e3c-b337-df9e727db1db/keystone-cron/0.log" Nov 22 09:13:15 crc kubenswrapper[4853]: I1122 09:13:15.316450 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_00c18e6e-23ef-45c1-b7ce-5efb6d47f001/kube-state-metrics/0.log" Nov 22 09:13:15 crc kubenswrapper[4853]: I1122 09:13:15.454625 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-h8gmz_c7e358cd-fbd6-411d-9231-73e533bbda3b/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Nov 22 09:13:15 crc kubenswrapper[4853]: I1122 09:13:15.613807 4853 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_logging-edpm-deployment-openstack-edpm-ipam-wb9bl_f6cbf49f-1ec5-4c85-8220-8b569c9aaa83/logging-edpm-deployment-openstack-edpm-ipam/0.log" Nov 22 09:13:15 crc kubenswrapper[4853]: I1122 09:13:15.761237 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-fb8dfc99b-xcccg_15b318bb-8168-4613-8172-f352705a5de1/keystone-api/0.log" Nov 22 09:13:15 crc kubenswrapper[4853]: I1122 09:13:15.913097 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mysqld-exporter-0_b06a0baf-5cef-4893-b81c-55aa5930bdf0/mysqld-exporter/0.log" Nov 22 09:13:16 crc kubenswrapper[4853]: I1122 09:13:16.524028 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-7c78d4ccd7-pvf4q_47723ce1-f48e-4d1d-a0a8-4f49dfce7070/neutron-httpd/0.log" Nov 22 09:13:16 crc kubenswrapper[4853]: I1122 09:13:16.588186 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-h6qf2_988b3ef5-b991-4375-870a-67b6f2beaeac/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Nov 22 09:13:16 crc kubenswrapper[4853]: I1122 09:13:16.649208 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-7c78d4ccd7-pvf4q_47723ce1-f48e-4d1d-a0a8-4f49dfce7070/neutron-api/0.log" Nov 22 09:13:16 crc kubenswrapper[4853]: I1122 09:13:16.747692 4853 scope.go:117] "RemoveContainer" containerID="29225cba0575a3d4b630f8271793742f602b91f03b413d4d79e223b8328f134c" Nov 22 09:13:16 crc kubenswrapper[4853]: E1122 09:13:16.748005 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" Nov 22 09:13:17 crc kubenswrapper[4853]: I1122 09:13:17.162769 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_a136d57d-e1f7-46e6-a75e-67bdc93f93ee/nova-cell0-conductor-conductor/0.log" Nov 22 09:13:17 crc kubenswrapper[4853]: I1122 09:13:17.489165 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_db91ab74-937e-4283-816a-1e31d662bc52/nova-cell1-conductor-conductor/0.log" Nov 22 09:13:17 crc kubenswrapper[4853]: I1122 09:13:17.531581 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_bfa0c19f-e6cf-4db2-a88c-76388997551c/nova-api-log/0.log" Nov 22 09:13:17 crc kubenswrapper[4853]: I1122 09:13:17.829508 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_85f3f206-5a15-4b98-8af2-4ef0a1ca123a/nova-cell1-novncproxy-novncproxy/0.log" Nov 22 09:13:17 crc kubenswrapper[4853]: I1122 09:13:17.868555 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-2fbqm_5ec783e2-47c2-4362-84ac-cdaa7f0b75e5/nova-edpm-deployment-openstack-edpm-ipam/0.log" Nov 22 09:13:18 crc kubenswrapper[4853]: I1122 09:13:18.032454 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_bfa0c19f-e6cf-4db2-a88c-76388997551c/nova-api-api/0.log" Nov 22 09:13:18 crc kubenswrapper[4853]: I1122 09:13:18.172676 4853 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-metadata-0_9292105c-7a7d-42cf-a8a1-6074ebebc6f4/nova-metadata-log/0.log" Nov 22 09:13:18 crc kubenswrapper[4853]: I1122 09:13:18.542541 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_91458107-9648-4958-ae6c-54457f8744f6/nova-scheduler-scheduler/0.log" Nov 22 09:13:18 crc kubenswrapper[4853]: I1122 09:13:18.615570 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_fd5f90cd-e8e9-489e-b7fd-fde9fd9c342d/mysql-bootstrap/0.log" Nov 22 09:13:18 crc kubenswrapper[4853]: I1122 09:13:18.843401 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_fd5f90cd-e8e9-489e-b7fd-fde9fd9c342d/galera/0.log" Nov 22 09:13:18 crc kubenswrapper[4853]: I1122 09:13:18.866247 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_fd5f90cd-e8e9-489e-b7fd-fde9fd9c342d/mysql-bootstrap/0.log" Nov 22 09:13:19 crc kubenswrapper[4853]: I1122 09:13:19.133277 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_410e418b-aee9-40c9-96ed-0f8c5c882148/mysql-bootstrap/0.log" Nov 22 09:13:19 crc kubenswrapper[4853]: I1122 09:13:19.285800 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_410e418b-aee9-40c9-96ed-0f8c5c882148/mysql-bootstrap/0.log" Nov 22 09:13:19 crc kubenswrapper[4853]: I1122 09:13:19.443486 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_410e418b-aee9-40c9-96ed-0f8c5c882148/galera/0.log" Nov 22 09:13:19 crc kubenswrapper[4853]: I1122 09:13:19.651111 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_fa95ca8f-6cef-4cbc-bd08-f693a09770dc/openstackclient/0.log" Nov 22 09:13:19 crc kubenswrapper[4853]: I1122 09:13:19.732884 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-gcfs8_2d4565ad-c87f-4e82-bd22-0218b0598651/openstack-network-exporter/0.log" Nov 22 09:13:19 crc kubenswrapper[4853]: I1122 09:13:19.958054 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-nhs2x_05c9113f-59ff-46cc-b704-eb9c8553ad37/ovn-controller/0.log" Nov 22 09:13:20 crc kubenswrapper[4853]: I1122 09:13:20.502843 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-k99wz_e573b0f6-8f5e-45a9-b00e-410826a9a36d/ovsdb-server-init/0.log" Nov 22 09:13:20 crc kubenswrapper[4853]: I1122 09:13:20.697053 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-k99wz_e573b0f6-8f5e-45a9-b00e-410826a9a36d/ovsdb-server-init/0.log" Nov 22 09:13:20 crc kubenswrapper[4853]: I1122 09:13:20.725399 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-k99wz_e573b0f6-8f5e-45a9-b00e-410826a9a36d/ovs-vswitchd/0.log" Nov 22 09:13:20 crc kubenswrapper[4853]: I1122 09:13:20.730138 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-k99wz_e573b0f6-8f5e-45a9-b00e-410826a9a36d/ovsdb-server/0.log" Nov 22 09:13:20 crc kubenswrapper[4853]: I1122 09:13:20.944204 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_9292105c-7a7d-42cf-a8a1-6074ebebc6f4/nova-metadata-metadata/0.log" Nov 22 09:13:21 crc kubenswrapper[4853]: I1122 09:13:21.016026 4853 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-8hs7r_27dca404-f54c-4f96-9ae3-e517c2de3033/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Nov 22 09:13:21 crc kubenswrapper[4853]: I1122 09:13:21.210854 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_4a68a565-fd46-4cac-a300-2e7489e20c4c/openstack-network-exporter/0.log" Nov 22 09:13:21 crc kubenswrapper[4853]: I1122 09:13:21.325728 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_ef57a60a-7a73-45c6-8760-7e215eedd374/openstack-network-exporter/0.log" Nov 22 09:13:21 crc kubenswrapper[4853]: I1122 09:13:21.328814 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_4a68a565-fd46-4cac-a300-2e7489e20c4c/ovn-northd/0.log" Nov 22 09:13:21 crc kubenswrapper[4853]: I1122 09:13:21.494236 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_ef57a60a-7a73-45c6-8760-7e215eedd374/ovsdbserver-nb/0.log" Nov 22 09:13:21 crc kubenswrapper[4853]: I1122 09:13:21.578799 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_a9dc9521-7d6a-4622-9a63-9c761ff0721c/openstack-network-exporter/0.log" Nov 22 09:13:21 crc kubenswrapper[4853]: I1122 09:13:21.607742 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_a9dc9521-7d6a-4622-9a63-9c761ff0721c/ovsdbserver-sb/0.log" Nov 22 09:13:22 crc kubenswrapper[4853]: I1122 09:13:22.030469 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-8478cc79fb-ggl8b_a9d809e7-9dbc-4c65-96e3-f8d025e97dc4/placement-api/0.log" Nov 22 09:13:22 crc kubenswrapper[4853]: I1122 09:13:22.059509 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-8478cc79fb-ggl8b_a9d809e7-9dbc-4c65-96e3-f8d025e97dc4/placement-log/0.log" Nov 22 09:13:22 crc kubenswrapper[4853]: I1122 09:13:22.147438 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_78a8c429-b429-44e1-be5e-3eb355ae4d54/init-config-reloader/0.log" Nov 22 09:13:22 crc kubenswrapper[4853]: I1122 09:13:22.338687 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_78a8c429-b429-44e1-be5e-3eb355ae4d54/init-config-reloader/0.log" Nov 22 09:13:22 crc kubenswrapper[4853]: I1122 09:13:22.378374 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_78a8c429-b429-44e1-be5e-3eb355ae4d54/thanos-sidecar/0.log" Nov 22 09:13:22 crc kubenswrapper[4853]: I1122 09:13:22.404422 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_78a8c429-b429-44e1-be5e-3eb355ae4d54/prometheus/0.log" Nov 22 09:13:22 crc kubenswrapper[4853]: I1122 09:13:22.439089 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_78a8c429-b429-44e1-be5e-3eb355ae4d54/config-reloader/0.log" Nov 22 09:13:22 crc kubenswrapper[4853]: I1122 09:13:22.760293 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_2db00bbf-b98a-40ab-b648-5acdcc430bad/setup-container/0.log" Nov 22 09:13:22 crc kubenswrapper[4853]: I1122 09:13:22.984598 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_2db00bbf-b98a-40ab-b648-5acdcc430bad/setup-container/0.log" Nov 22 09:13:23 crc kubenswrapper[4853]: I1122 09:13:23.115949 4853 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_8897740c-fa9f-4ecb-83ae-4dc74489745d/setup-container/0.log" Nov 22 09:13:23 crc kubenswrapper[4853]: I1122 09:13:23.161537 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_2db00bbf-b98a-40ab-b648-5acdcc430bad/rabbitmq/0.log" Nov 22 09:13:23 crc kubenswrapper[4853]: I1122 09:13:23.541924 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_8897740c-fa9f-4ecb-83ae-4dc74489745d/setup-container/0.log" Nov 22 09:13:23 crc kubenswrapper[4853]: I1122 09:13:23.685076 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_8897740c-fa9f-4ecb-83ae-4dc74489745d/rabbitmq/0.log" Nov 22 09:13:23 crc kubenswrapper[4853]: I1122 09:13:23.737226 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-wbvm4_bc8cf0db-b7d8-49ca-9936-6a31e37bdcf3/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 22 09:13:24 crc kubenswrapper[4853]: I1122 09:13:24.015251 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-hh6z8_34172255-8ec0-4d57-97ab-0ec632e7ae64/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Nov 22 09:13:24 crc kubenswrapper[4853]: I1122 09:13:24.136168 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-2xqrx_35498c08-898b-477d-88eb-3cf82e3696e7/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Nov 22 09:13:24 crc kubenswrapper[4853]: I1122 09:13:24.285703 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-zc5md_1ae307fc-7b82-4d0c-8bdf-1af3c349634b/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 22 09:13:24 crc kubenswrapper[4853]: I1122 09:13:24.937608 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-84bxn_e0ae265d-0731-4195-9f31-7bf77627fadd/ssh-known-hosts-edpm-deployment/0.log" Nov 22 09:13:25 crc kubenswrapper[4853]: I1122 09:13:25.152455 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-6f4b8c7cc5-lxts4_759aa807-9e0a-4af1-bfec-8a04df8a8928/proxy-server/0.log" Nov 22 09:13:25 crc kubenswrapper[4853]: I1122 09:13:25.284522 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-6f4b8c7cc5-lxts4_759aa807-9e0a-4af1-bfec-8a04df8a8928/proxy-httpd/0.log" Nov 22 09:13:25 crc kubenswrapper[4853]: I1122 09:13:25.321508 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-b8h4v_7268d91f-27a0-45a1-8239-b6bdc8736b4b/swift-ring-rebalance/0.log" Nov 22 09:13:26 crc kubenswrapper[4853]: I1122 09:13:26.204073 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_d4427668-9ef6-4594-ae35-ff983a6af324/account-auditor/0.log" Nov 22 09:13:26 crc kubenswrapper[4853]: I1122 09:13:26.325895 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_d4427668-9ef6-4594-ae35-ff983a6af324/account-reaper/0.log" Nov 22 09:13:26 crc kubenswrapper[4853]: I1122 09:13:26.343171 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_d4427668-9ef6-4594-ae35-ff983a6af324/account-replicator/0.log" Nov 22 09:13:27 crc kubenswrapper[4853]: I1122 09:13:27.088645 4853 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openstack_swift-storage-0_d4427668-9ef6-4594-ae35-ff983a6af324/account-server/0.log" Nov 22 09:13:27 crc kubenswrapper[4853]: I1122 09:13:27.297667 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_d4427668-9ef6-4594-ae35-ff983a6af324/container-server/0.log" Nov 22 09:13:27 crc kubenswrapper[4853]: I1122 09:13:27.323188 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_d4427668-9ef6-4594-ae35-ff983a6af324/container-replicator/0.log" Nov 22 09:13:27 crc kubenswrapper[4853]: I1122 09:13:27.393019 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_d4427668-9ef6-4594-ae35-ff983a6af324/container-auditor/0.log" Nov 22 09:13:27 crc kubenswrapper[4853]: I1122 09:13:27.418794 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_d4427668-9ef6-4594-ae35-ff983a6af324/container-updater/0.log" Nov 22 09:13:27 crc kubenswrapper[4853]: I1122 09:13:27.583400 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_d4427668-9ef6-4594-ae35-ff983a6af324/object-auditor/0.log" Nov 22 09:13:27 crc kubenswrapper[4853]: I1122 09:13:27.648451 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_d4427668-9ef6-4594-ae35-ff983a6af324/object-server/0.log" Nov 22 09:13:27 crc kubenswrapper[4853]: I1122 09:13:27.680071 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_d4427668-9ef6-4594-ae35-ff983a6af324/object-expirer/0.log" Nov 22 09:13:27 crc kubenswrapper[4853]: I1122 09:13:27.713289 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_d4427668-9ef6-4594-ae35-ff983a6af324/object-replicator/0.log" Nov 22 09:13:28 crc kubenswrapper[4853]: I1122 09:13:28.469848 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_d4427668-9ef6-4594-ae35-ff983a6af324/object-updater/0.log" Nov 22 09:13:28 crc kubenswrapper[4853]: I1122 09:13:28.611208 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_d4427668-9ef6-4594-ae35-ff983a6af324/swift-recon-cron/0.log" Nov 22 09:13:28 crc kubenswrapper[4853]: I1122 09:13:28.670489 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_d4427668-9ef6-4594-ae35-ff983a6af324/rsync/0.log" Nov 22 09:13:28 crc kubenswrapper[4853]: I1122 09:13:28.964926 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-gg5c9_b9d13e92-cc8c-45a2-a122-0af7c97fe7e6/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Nov 22 09:13:29 crc kubenswrapper[4853]: I1122 09:13:29.155309 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-power-monitoring-edpm-deployment-openstack-edpm-mhmgx_e68d04f1-7a40-4197-b65b-2be6e53f9ff3/telemetry-power-monitoring-edpm-deployment-openstack-edpm-ipam/0.log" Nov 22 09:13:29 crc kubenswrapper[4853]: I1122 09:13:29.480727 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_e203f149-c6fd-489f-b75a-d1dcded1fbdb/test-operator-logs-container/0.log" Nov 22 09:13:29 crc kubenswrapper[4853]: I1122 09:13:29.678338 4853 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-vg8lx_04ca2a66-41e4-4a7b-8df7-8fcf34adeb8b/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Nov 22 09:13:29 crc kubenswrapper[4853]: I1122 09:13:29.747702 4853 scope.go:117] "RemoveContainer" containerID="29225cba0575a3d4b630f8271793742f602b91f03b413d4d79e223b8328f134c" Nov 22 09:13:29 crc kubenswrapper[4853]: E1122 09:13:29.748255 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" Nov 22 09:13:30 crc kubenswrapper[4853]: I1122 09:13:30.609697 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_1f255ef5-a59e-42c4-9ac7-ff33562499f6/tempest-tests-tempest-tests-runner/0.log" Nov 22 09:13:35 crc kubenswrapper[4853]: I1122 09:13:35.821919 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_b64e6703-1b51-477a-8898-3646dbf7b00c/memcached/0.log" Nov 22 09:13:41 crc kubenswrapper[4853]: I1122 09:13:41.748382 4853 scope.go:117] "RemoveContainer" containerID="29225cba0575a3d4b630f8271793742f602b91f03b413d4d79e223b8328f134c" Nov 22 09:13:41 crc kubenswrapper[4853]: E1122 09:13:41.749291 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" Nov 22 09:13:53 crc kubenswrapper[4853]: I1122 09:13:53.748612 4853 scope.go:117] "RemoveContainer" containerID="29225cba0575a3d4b630f8271793742f602b91f03b413d4d79e223b8328f134c" Nov 22 09:13:53 crc kubenswrapper[4853]: E1122 09:13:53.749490 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" Nov 22 09:13:55 crc kubenswrapper[4853]: I1122 09:13:55.067771 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_973bb835abccb9fbfc9859ca03efea68630f1af489fab59cfe5ef270d9hnfdh_c34b4fe8-797b-4fa9-8fb0-b17426dbcb1a/util/0.log" Nov 22 09:13:55 crc kubenswrapper[4853]: I1122 09:13:55.260010 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_973bb835abccb9fbfc9859ca03efea68630f1af489fab59cfe5ef270d9hnfdh_c34b4fe8-797b-4fa9-8fb0-b17426dbcb1a/util/0.log" Nov 22 09:13:55 crc kubenswrapper[4853]: I1122 09:13:55.295820 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_973bb835abccb9fbfc9859ca03efea68630f1af489fab59cfe5ef270d9hnfdh_c34b4fe8-797b-4fa9-8fb0-b17426dbcb1a/pull/0.log" Nov 22 09:13:55 crc kubenswrapper[4853]: I1122 09:13:55.301070 4853 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_973bb835abccb9fbfc9859ca03efea68630f1af489fab59cfe5ef270d9hnfdh_c34b4fe8-797b-4fa9-8fb0-b17426dbcb1a/pull/0.log" Nov 22 09:13:55 crc kubenswrapper[4853]: I1122 09:13:55.419653 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_973bb835abccb9fbfc9859ca03efea68630f1af489fab59cfe5ef270d9hnfdh_c34b4fe8-797b-4fa9-8fb0-b17426dbcb1a/util/0.log" Nov 22 09:13:55 crc kubenswrapper[4853]: I1122 09:13:55.453246 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_973bb835abccb9fbfc9859ca03efea68630f1af489fab59cfe5ef270d9hnfdh_c34b4fe8-797b-4fa9-8fb0-b17426dbcb1a/pull/0.log" Nov 22 09:13:55 crc kubenswrapper[4853]: I1122 09:13:55.493127 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_973bb835abccb9fbfc9859ca03efea68630f1af489fab59cfe5ef270d9hnfdh_c34b4fe8-797b-4fa9-8fb0-b17426dbcb1a/extract/0.log" Nov 22 09:13:55 crc kubenswrapper[4853]: I1122 09:13:55.607738 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-75fb479bcc-nf2bz_f0fa8b73-0604-41c5-9dfd-ea2f3ca36c43/kube-rbac-proxy/0.log" Nov 22 09:13:55 crc kubenswrapper[4853]: I1122 09:13:55.738582 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-75fb479bcc-nf2bz_f0fa8b73-0604-41c5-9dfd-ea2f3ca36c43/manager/0.log" Nov 22 09:13:55 crc kubenswrapper[4853]: I1122 09:13:55.743271 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-6498cbf48f-cjqxx_74c3e58c-6a8f-462f-a595-28db25f9e2c5/kube-rbac-proxy/0.log" Nov 22 09:13:55 crc kubenswrapper[4853]: I1122 09:13:55.891472 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-6498cbf48f-cjqxx_74c3e58c-6a8f-462f-a595-28db25f9e2c5/manager/0.log" Nov 22 09:13:55 crc kubenswrapper[4853]: I1122 09:13:55.977931 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-767ccfd65f-pfmkd_7ed40441-44d2-497f-93e7-d85116790d61/manager/0.log" Nov 22 09:13:56 crc kubenswrapper[4853]: I1122 09:13:56.001101 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-767ccfd65f-pfmkd_7ed40441-44d2-497f-93e7-d85116790d61/kube-rbac-proxy/0.log" Nov 22 09:13:56 crc kubenswrapper[4853]: I1122 09:13:56.165731 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-7969689c84-8kl4r_9a6ac321-fea5-4011-9112-60695ec2d996/kube-rbac-proxy/0.log" Nov 22 09:13:56 crc kubenswrapper[4853]: I1122 09:13:56.257246 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-7969689c84-8kl4r_9a6ac321-fea5-4011-9112-60695ec2d996/manager/0.log" Nov 22 09:13:56 crc kubenswrapper[4853]: I1122 09:13:56.318733 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-56f54d6746-ww42j_59095c24-fa32-4f44-b7d0-593b1291cf56/kube-rbac-proxy/0.log" Nov 22 09:13:56 crc kubenswrapper[4853]: I1122 09:13:56.483015 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-56f54d6746-ww42j_59095c24-fa32-4f44-b7d0-593b1291cf56/manager/0.log" Nov 22 09:13:56 crc kubenswrapper[4853]: I1122 09:13:56.520892 
4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-598f69df5d-l654j_511dcee7-13c9-45ca-b12f-3330fb1b14bc/kube-rbac-proxy/0.log" Nov 22 09:13:56 crc kubenswrapper[4853]: I1122 09:13:56.574765 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-598f69df5d-l654j_511dcee7-13c9-45ca-b12f-3330fb1b14bc/manager/0.log" Nov 22 09:13:56 crc kubenswrapper[4853]: I1122 09:13:56.713613 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-6dd8864d7c-4wjmn_674f240d-b9b1-488a-b6bf-d6231529cf4d/kube-rbac-proxy/0.log" Nov 22 09:13:56 crc kubenswrapper[4853]: I1122 09:13:56.893462 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-6dd8864d7c-4wjmn_674f240d-b9b1-488a-b6bf-d6231529cf4d/manager/0.log" Nov 22 09:13:56 crc kubenswrapper[4853]: I1122 09:13:56.938264 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-99b499f4-km4bs_8a902288-c5fa-4106-89dc-dad1ed8fff47/kube-rbac-proxy/0.log" Nov 22 09:13:56 crc kubenswrapper[4853]: I1122 09:13:56.954427 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-99b499f4-km4bs_8a902288-c5fa-4106-89dc-dad1ed8fff47/manager/0.log" Nov 22 09:13:57 crc kubenswrapper[4853]: I1122 09:13:57.080888 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7454b96578-h5674_524f1308-44b0-4603-b612-eb02450cd46d/kube-rbac-proxy/0.log" Nov 22 09:13:57 crc kubenswrapper[4853]: I1122 09:13:57.230362 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7454b96578-h5674_524f1308-44b0-4603-b612-eb02450cd46d/manager/0.log" Nov 22 09:13:57 crc kubenswrapper[4853]: I1122 09:13:57.270637 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-58f887965d-f4bvx_798dacb1-9a2f-4f77-a55e-1f005447a5ec/manager/0.log" Nov 22 09:13:57 crc kubenswrapper[4853]: I1122 09:13:57.296419 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-58f887965d-f4bvx_798dacb1-9a2f-4f77-a55e-1f005447a5ec/kube-rbac-proxy/0.log" Nov 22 09:13:57 crc kubenswrapper[4853]: I1122 09:13:57.430626 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-54b5986bb8-hbqk8_d1a5f3b8-6d7d-4955-9973-c743f0b16dc5/kube-rbac-proxy/0.log" Nov 22 09:13:57 crc kubenswrapper[4853]: I1122 09:13:57.545520 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-54b5986bb8-hbqk8_d1a5f3b8-6d7d-4955-9973-c743f0b16dc5/manager/0.log" Nov 22 09:13:57 crc kubenswrapper[4853]: I1122 09:13:57.603848 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-78bd47f458-mxwrm_05971821-7368-4352-8955-bd9432958c9b/kube-rbac-proxy/0.log" Nov 22 09:13:57 crc kubenswrapper[4853]: I1122 09:13:57.709408 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-78bd47f458-mxwrm_05971821-7368-4352-8955-bd9432958c9b/manager/0.log" Nov 22 09:13:57 crc kubenswrapper[4853]: 
I1122 09:13:57.800084 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-cfbb9c588-gwqp2_51d7517d-674b-4d91-bb05-89e11ce77ee8/kube-rbac-proxy/0.log"
Nov 22 09:13:57 crc kubenswrapper[4853]: I1122 09:13:57.952621 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-cfbb9c588-gwqp2_51d7517d-674b-4d91-bb05-89e11ce77ee8/manager/0.log"
Nov 22 09:13:58 crc kubenswrapper[4853]: I1122 09:13:58.062624 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-54cfbf4c7d-vsftr_6e35d4a2-bb72-4396-83e0-4a9ba4d9274b/kube-rbac-proxy/0.log"
Nov 22 09:13:58 crc kubenswrapper[4853]: I1122 09:13:58.070769 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-54cfbf4c7d-vsftr_6e35d4a2-bb72-4396-83e0-4a9ba4d9274b/manager/0.log"
Nov 22 09:13:58 crc kubenswrapper[4853]: I1122 09:13:58.198546 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-8c7444f48-dgc8d_8774b599-7d20-4c58-9441-821beca48884/kube-rbac-proxy/0.log"
Nov 22 09:13:58 crc kubenswrapper[4853]: I1122 09:13:58.278648 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-8c7444f48-dgc8d_8774b599-7d20-4c58-9441-821beca48884/manager/0.log"
Nov 22 09:13:58 crc kubenswrapper[4853]: I1122 09:13:58.413921 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-88b7b5d44-zjv7m_b41bf5e6-516e-40b8-9628-bb2f056af5ad/kube-rbac-proxy/0.log"
Nov 22 09:13:58 crc kubenswrapper[4853]: I1122 09:13:58.594077 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-5b84778f4-fdshc_f0d8fc3e-45fa-4672-8641-d88a56c44708/kube-rbac-proxy/0.log"
Nov 22 09:13:58 crc kubenswrapper[4853]: I1122 09:13:58.798774 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-5b84778f4-fdshc_f0d8fc3e-45fa-4672-8641-d88a56c44708/operator/0.log"
Nov 22 09:13:58 crc kubenswrapper[4853]: I1122 09:13:58.908183 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-5zz9t_6f38e035-c5c5-49a3-a3a3-b592747e7948/registry-server/0.log"
Nov 22 09:13:59 crc kubenswrapper[4853]: I1122 09:13:59.089707 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-54fc5f65b7-cm2jj_ac1b2ef9-7ff0-4a11-b8c6-89f6ed7c0dd4/kube-rbac-proxy/0.log"
Nov 22 09:13:59 crc kubenswrapper[4853]: I1122 09:13:59.236254 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-54fc5f65b7-cm2jj_ac1b2ef9-7ff0-4a11-b8c6-89f6ed7c0dd4/manager/0.log"
Nov 22 09:13:59 crc kubenswrapper[4853]: I1122 09:13:59.315728 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5b797b8dff-5lw59_242375a1-78b5-4540-9e93-ad4ef21b67c8/kube-rbac-proxy/0.log"
Nov 22 09:13:59 crc kubenswrapper[4853]: I1122 09:13:59.482795 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5b797b8dff-5lw59_242375a1-78b5-4540-9e93-ad4ef21b67c8/manager/0.log"
Nov 22 09:13:59 crc kubenswrapper[4853]: I1122 09:13:59.608703 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-5f97d8c699-fdt65_131c2522-8c48-4c18-9a39-99a66b87b9ed/operator/0.log"
Nov 22 09:13:59 crc kubenswrapper[4853]: I1122 09:13:59.785821 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-d656998f4-fgtmg_c82379b6-72f2-4474-8714-64f9e6ea7bf7/kube-rbac-proxy/0.log"
Nov 22 09:13:59 crc kubenswrapper[4853]: I1122 09:13:59.820589 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-d656998f4-fgtmg_c82379b6-72f2-4474-8714-64f9e6ea7bf7/manager/0.log"
Nov 22 09:13:59 crc kubenswrapper[4853]: I1122 09:13:59.872083 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-88b7b5d44-zjv7m_b41bf5e6-516e-40b8-9628-bb2f056af5ad/manager/0.log"
Nov 22 09:13:59 crc kubenswrapper[4853]: I1122 09:13:59.882428 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-b477b5977-7gkdk_e1bbcb38-bfc4-4b92-9fa2-5bb3cebfcd5e/kube-rbac-proxy/0.log"
Nov 22 09:14:00 crc kubenswrapper[4853]: I1122 09:14:00.142321 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-b4c496f69-wmm95_46e379f1-feb8-460a-8448-066bb8f54330/kube-rbac-proxy/0.log"
Nov 22 09:14:00 crc kubenswrapper[4853]: I1122 09:14:00.168357 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-b4c496f69-wmm95_46e379f1-feb8-460a-8448-066bb8f54330/manager/0.log"
Nov 22 09:14:00 crc kubenswrapper[4853]: I1122 09:14:00.331218 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-b477b5977-7gkdk_e1bbcb38-bfc4-4b92-9fa2-5bb3cebfcd5e/manager/0.log"
Nov 22 09:14:00 crc kubenswrapper[4853]: I1122 09:14:00.381493 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-8c6448b9f-fcl7j_67499981-fc7e-4b6d-ab2b-46b528a165a5/manager/0.log"
Nov 22 09:14:00 crc kubenswrapper[4853]: I1122 09:14:00.413019 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-8c6448b9f-fcl7j_67499981-fc7e-4b6d-ab2b-46b528a165a5/kube-rbac-proxy/0.log"
Nov 22 09:14:07 crc kubenswrapper[4853]: I1122 09:14:07.749241 4853 scope.go:117] "RemoveContainer" containerID="29225cba0575a3d4b630f8271793742f602b91f03b413d4d79e223b8328f134c"
Nov 22 09:14:07 crc kubenswrapper[4853]: E1122 09:14:07.750390 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8"
Nov 22 09:14:09 crc kubenswrapper[4853]: I1122 09:14:09.326844 4853 scope.go:117] "RemoveContainer" containerID="ab8888d08688c17d787ddaa6154ff81621e44813f3367bd480331f8c55f97ba5"
Nov 22 09:14:09 crc kubenswrapper[4853]: I1122 09:14:09.372100 4853 scope.go:117] "RemoveContainer" containerID="c48b3a4d37869e97d96977f44939ead5160adab4e24d341c449ae4ad6b3d9457"
Nov 22 09:14:16 crc kubenswrapper[4853]: I1122 09:14:16.455436 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-hktm5_2eb41230-c219-4968-a240-36db37f3d772/control-plane-machine-set-operator/0.log"
Nov 22 09:14:16 crc kubenswrapper[4853]: I1122 09:14:16.662837 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-rh6fb_065d5bdc-7e13-4a03-aa2d-5b7dd3b3938c/kube-rbac-proxy/0.log"
Nov 22 09:14:16 crc kubenswrapper[4853]: I1122 09:14:16.747369 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-rh6fb_065d5bdc-7e13-4a03-aa2d-5b7dd3b3938c/machine-api-operator/0.log"
Nov 22 09:14:22 crc kubenswrapper[4853]: I1122 09:14:22.748526 4853 scope.go:117] "RemoveContainer" containerID="29225cba0575a3d4b630f8271793742f602b91f03b413d4d79e223b8328f134c"
Nov 22 09:14:22 crc kubenswrapper[4853]: E1122 09:14:22.749627 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8"
Nov 22 09:14:28 crc kubenswrapper[4853]: I1122 09:14:28.102356 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-5b446d88c5-k2kt8_accf8a72-f739-4535-b3a9-1303923fe009/cert-manager-controller/0.log"
Nov 22 09:14:28 crc kubenswrapper[4853]: I1122 09:14:28.267875 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-5655c58dd6-f7gf2_8e6fca36-41ef-436e-a917-ed8f248db72f/cert-manager-webhook/0.log"
Nov 22 09:14:28 crc kubenswrapper[4853]: I1122 09:14:28.287328 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-7f985d654d-fzgb6_edcc88ca-0ffa-4e1a-83b2-97df4f92a493/cert-manager-cainjector/0.log"
Nov 22 09:14:36 crc kubenswrapper[4853]: I1122 09:14:36.748729 4853 scope.go:117] "RemoveContainer" containerID="29225cba0575a3d4b630f8271793742f602b91f03b413d4d79e223b8328f134c"
Nov 22 09:14:36 crc kubenswrapper[4853]: E1122 09:14:36.749471 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8"
Nov 22 09:14:37 crc kubenswrapper[4853]: I1122 09:14:37.344798 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-m6qjw"]
Nov 22 09:14:37 crc kubenswrapper[4853]: E1122 09:14:37.347561 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b52b3f34-285a-4b0c-955b-a3525ec0c6d2" containerName="container-00"
Nov 22 09:14:37 crc kubenswrapper[4853]: I1122 09:14:37.347581 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="b52b3f34-285a-4b0c-955b-a3525ec0c6d2" containerName="container-00"
Nov 22 09:14:37 crc kubenswrapper[4853]: I1122 09:14:37.347839 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="b52b3f34-285a-4b0c-955b-a3525ec0c6d2" containerName="container-00"
Nov 22 09:14:37 crc kubenswrapper[4853]: I1122 09:14:37.350335 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-m6qjw"
Nov 22 09:14:37 crc kubenswrapper[4853]: I1122 09:14:37.358400 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-m6qjw"]
Nov 22 09:14:37 crc kubenswrapper[4853]: I1122 09:14:37.418893 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h88vq\" (UniqueName: \"kubernetes.io/projected/9d53c459-1130-48c2-a70c-4e7601c340ba-kube-api-access-h88vq\") pod \"certified-operators-m6qjw\" (UID: \"9d53c459-1130-48c2-a70c-4e7601c340ba\") " pod="openshift-marketplace/certified-operators-m6qjw"
Nov 22 09:14:37 crc kubenswrapper[4853]: I1122 09:14:37.419089 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d53c459-1130-48c2-a70c-4e7601c340ba-catalog-content\") pod \"certified-operators-m6qjw\" (UID: \"9d53c459-1130-48c2-a70c-4e7601c340ba\") " pod="openshift-marketplace/certified-operators-m6qjw"
Nov 22 09:14:37 crc kubenswrapper[4853]: I1122 09:14:37.419341 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d53c459-1130-48c2-a70c-4e7601c340ba-utilities\") pod \"certified-operators-m6qjw\" (UID: \"9d53c459-1130-48c2-a70c-4e7601c340ba\") " pod="openshift-marketplace/certified-operators-m6qjw"
Nov 22 09:14:37 crc kubenswrapper[4853]: I1122 09:14:37.521094 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d53c459-1130-48c2-a70c-4e7601c340ba-catalog-content\") pod \"certified-operators-m6qjw\" (UID: \"9d53c459-1130-48c2-a70c-4e7601c340ba\") " pod="openshift-marketplace/certified-operators-m6qjw"
Nov 22 09:14:37 crc kubenswrapper[4853]: I1122 09:14:37.521245 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d53c459-1130-48c2-a70c-4e7601c340ba-utilities\") pod \"certified-operators-m6qjw\" (UID: \"9d53c459-1130-48c2-a70c-4e7601c340ba\") " pod="openshift-marketplace/certified-operators-m6qjw"
Nov 22 09:14:37 crc kubenswrapper[4853]: I1122 09:14:37.521360 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h88vq\" (UniqueName: \"kubernetes.io/projected/9d53c459-1130-48c2-a70c-4e7601c340ba-kube-api-access-h88vq\") pod \"certified-operators-m6qjw\" (UID: \"9d53c459-1130-48c2-a70c-4e7601c340ba\") " pod="openshift-marketplace/certified-operators-m6qjw"
Nov 22 09:14:37 crc kubenswrapper[4853]: I1122 09:14:37.521823 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d53c459-1130-48c2-a70c-4e7601c340ba-utilities\") pod \"certified-operators-m6qjw\" (UID: \"9d53c459-1130-48c2-a70c-4e7601c340ba\") " pod="openshift-marketplace/certified-operators-m6qjw"
Nov 22 09:14:37 crc kubenswrapper[4853]: I1122 09:14:37.521823 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d53c459-1130-48c2-a70c-4e7601c340ba-catalog-content\") pod \"certified-operators-m6qjw\" (UID: \"9d53c459-1130-48c2-a70c-4e7601c340ba\") " pod="openshift-marketplace/certified-operators-m6qjw"
Nov 22 09:14:37 crc kubenswrapper[4853]: I1122 09:14:37.549457 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h88vq\" (UniqueName: \"kubernetes.io/projected/9d53c459-1130-48c2-a70c-4e7601c340ba-kube-api-access-h88vq\") pod \"certified-operators-m6qjw\" (UID: \"9d53c459-1130-48c2-a70c-4e7601c340ba\") " pod="openshift-marketplace/certified-operators-m6qjw"
Nov 22 09:14:37 crc kubenswrapper[4853]: I1122 09:14:37.676929 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-m6qjw"
Nov 22 09:14:39 crc kubenswrapper[4853]: I1122 09:14:39.030895 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-m6qjw"]
Nov 22 09:14:39 crc kubenswrapper[4853]: I1122 09:14:39.655341 4853 generic.go:334] "Generic (PLEG): container finished" podID="9d53c459-1130-48c2-a70c-4e7601c340ba" containerID="305cc3d6f783cfe6a7bc9073a52a97cb7232340fc9e50095aad42f366a369682" exitCode=0
Nov 22 09:14:39 crc kubenswrapper[4853]: I1122 09:14:39.655461 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m6qjw" event={"ID":"9d53c459-1130-48c2-a70c-4e7601c340ba","Type":"ContainerDied","Data":"305cc3d6f783cfe6a7bc9073a52a97cb7232340fc9e50095aad42f366a369682"}
Nov 22 09:14:39 crc kubenswrapper[4853]: I1122 09:14:39.655662 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m6qjw" event={"ID":"9d53c459-1130-48c2-a70c-4e7601c340ba","Type":"ContainerStarted","Data":"a2e9a64ccd7b61b58fb157d78db2fac06cf74d979b70b410b9eba130acdcb6e6"}
Nov 22 09:14:39 crc kubenswrapper[4853]: I1122 09:14:39.705302 4853 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Nov 22 09:14:40 crc kubenswrapper[4853]: I1122 09:14:40.147526 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-5874bd7bc5-ng6rq_cd806fb8-97b3-4c27-95d5-0366665151db/nmstate-console-plugin/0.log"
Nov 22 09:14:40 crc kubenswrapper[4853]: I1122 09:14:40.359196 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-fm7n6_8c632bb5-f4e5-43b2-b6ce-6a7bede629f8/nmstate-handler/0.log"
Nov 22 09:14:40 crc kubenswrapper[4853]: I1122 09:14:40.392458 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-5dcf9c57c5-tbl28_5aed800e-6ddc-444e-b9d9-2440106297c3/kube-rbac-proxy/0.log"
Nov 22 09:14:40 crc kubenswrapper[4853]: I1122 09:14:40.392722 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-5dcf9c57c5-tbl28_5aed800e-6ddc-444e-b9d9-2440106297c3/nmstate-metrics/0.log"
Nov 22 09:14:40 crc kubenswrapper[4853]: I1122 09:14:40.581590 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-557fdffb88-6xs5t_e7bd6d34-77b5-4daf-b00f-6f7101d0ebbe/nmstate-operator/0.log"
Nov 22 09:14:40 crc kubenswrapper[4853]: I1122 09:14:40.680927 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-6b89b748d8-h59d2_325ff591-591f-4b66-adbb-fdc7e20a553d/nmstate-webhook/0.log"
Nov 22 09:14:41 crc kubenswrapper[4853]: I1122 09:14:41.676869 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m6qjw" event={"ID":"9d53c459-1130-48c2-a70c-4e7601c340ba","Type":"ContainerStarted","Data":"da00729c618fde10c28ec76d57e40beae38ada66afebb5b78df988d0999575e9"}
Nov 22 09:14:43 crc kubenswrapper[4853]: I1122 09:14:43.699700 4853 generic.go:334] "Generic (PLEG): container finished" podID="9d53c459-1130-48c2-a70c-4e7601c340ba" containerID="da00729c618fde10c28ec76d57e40beae38ada66afebb5b78df988d0999575e9" exitCode=0
Nov 22 09:14:43 crc kubenswrapper[4853]: I1122 09:14:43.699809 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m6qjw" event={"ID":"9d53c459-1130-48c2-a70c-4e7601c340ba","Type":"ContainerDied","Data":"da00729c618fde10c28ec76d57e40beae38ada66afebb5b78df988d0999575e9"}
Nov 22 09:14:44 crc kubenswrapper[4853]: I1122 09:14:44.715085 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m6qjw" event={"ID":"9d53c459-1130-48c2-a70c-4e7601c340ba","Type":"ContainerStarted","Data":"dded8a4a8fddc588d0490cff397e86f878821bbfd82af1d15e1670a5b8511438"}
Nov 22 09:14:44 crc kubenswrapper[4853]: I1122 09:14:44.734915 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-m6qjw" podStartSLOduration=3.28392284 podStartE2EDuration="7.734892528s" podCreationTimestamp="2025-11-22 09:14:37 +0000 UTC" firstStartedPulling="2025-11-22 09:14:39.657956761 +0000 UTC m=+7478.498579377" lastFinishedPulling="2025-11-22 09:14:44.108926439 +0000 UTC m=+7482.949549065" observedRunningTime="2025-11-22 09:14:44.732256547 +0000 UTC m=+7483.572879183" watchObservedRunningTime="2025-11-22 09:14:44.734892528 +0000 UTC m=+7483.575515144"
Nov 22 09:14:47 crc kubenswrapper[4853]: I1122 09:14:47.677858 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-m6qjw"
Nov 22 09:14:47 crc kubenswrapper[4853]: I1122 09:14:47.678500 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-m6qjw"
Nov 22 09:14:47 crc kubenswrapper[4853]: I1122 09:14:47.729075 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-m6qjw"
Nov 22 09:14:47 crc kubenswrapper[4853]: I1122 09:14:47.748570 4853 scope.go:117] "RemoveContainer" containerID="29225cba0575a3d4b630f8271793742f602b91f03b413d4d79e223b8328f134c"
Nov 22 09:14:47 crc kubenswrapper[4853]: E1122 09:14:47.748946 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8"
Nov 22 09:14:52 crc kubenswrapper[4853]: I1122 09:14:52.594335 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-5bb8bb4577-rspn5_50b94c6e-d5b7-4720-af4c-8922035ca146/manager/2.log"
Nov 22 09:14:52 crc kubenswrapper[4853]: I1122 09:14:52.636210 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-5bb8bb4577-rspn5_50b94c6e-d5b7-4720-af4c-8922035ca146/kube-rbac-proxy/0.log"
Nov 22 09:14:52 crc kubenswrapper[4853]: I1122 09:14:52.773275 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-5bb8bb4577-rspn5_50b94c6e-d5b7-4720-af4c-8922035ca146/manager/1.log" Nov 22 09:14:57 crc kubenswrapper[4853]: I1122 09:14:57.724408 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-m6qjw" Nov 22 09:14:57 crc kubenswrapper[4853]: I1122 09:14:57.780008 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-m6qjw"] Nov 22 09:14:57 crc kubenswrapper[4853]: I1122 09:14:57.860352 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-m6qjw" podUID="9d53c459-1130-48c2-a70c-4e7601c340ba" containerName="registry-server" containerID="cri-o://dded8a4a8fddc588d0490cff397e86f878821bbfd82af1d15e1670a5b8511438" gracePeriod=2 Nov 22 09:14:58 crc kubenswrapper[4853]: I1122 09:14:58.414580 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-m6qjw" Nov 22 09:14:58 crc kubenswrapper[4853]: I1122 09:14:58.465807 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d53c459-1130-48c2-a70c-4e7601c340ba-catalog-content\") pod \"9d53c459-1130-48c2-a70c-4e7601c340ba\" (UID: \"9d53c459-1130-48c2-a70c-4e7601c340ba\") " Nov 22 09:14:58 crc kubenswrapper[4853]: I1122 09:14:58.466035 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d53c459-1130-48c2-a70c-4e7601c340ba-utilities\") pod \"9d53c459-1130-48c2-a70c-4e7601c340ba\" (UID: \"9d53c459-1130-48c2-a70c-4e7601c340ba\") " Nov 22 09:14:58 crc kubenswrapper[4853]: I1122 09:14:58.466102 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h88vq\" (UniqueName: \"kubernetes.io/projected/9d53c459-1130-48c2-a70c-4e7601c340ba-kube-api-access-h88vq\") pod \"9d53c459-1130-48c2-a70c-4e7601c340ba\" (UID: \"9d53c459-1130-48c2-a70c-4e7601c340ba\") " Nov 22 09:14:58 crc kubenswrapper[4853]: I1122 09:14:58.467108 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9d53c459-1130-48c2-a70c-4e7601c340ba-utilities" (OuterVolumeSpecName: "utilities") pod "9d53c459-1130-48c2-a70c-4e7601c340ba" (UID: "9d53c459-1130-48c2-a70c-4e7601c340ba"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 09:14:58 crc kubenswrapper[4853]: I1122 09:14:58.496547 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d53c459-1130-48c2-a70c-4e7601c340ba-kube-api-access-h88vq" (OuterVolumeSpecName: "kube-api-access-h88vq") pod "9d53c459-1130-48c2-a70c-4e7601c340ba" (UID: "9d53c459-1130-48c2-a70c-4e7601c340ba"). InnerVolumeSpecName "kube-api-access-h88vq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:14:58 crc kubenswrapper[4853]: I1122 09:14:58.568687 4853 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d53c459-1130-48c2-a70c-4e7601c340ba-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 09:14:58 crc kubenswrapper[4853]: I1122 09:14:58.568733 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h88vq\" (UniqueName: \"kubernetes.io/projected/9d53c459-1130-48c2-a70c-4e7601c340ba-kube-api-access-h88vq\") on node \"crc\" DevicePath \"\"" Nov 22 09:14:58 crc kubenswrapper[4853]: I1122 09:14:58.592971 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9d53c459-1130-48c2-a70c-4e7601c340ba-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9d53c459-1130-48c2-a70c-4e7601c340ba" (UID: "9d53c459-1130-48c2-a70c-4e7601c340ba"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 09:14:58 crc kubenswrapper[4853]: I1122 09:14:58.671833 4853 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d53c459-1130-48c2-a70c-4e7601c340ba-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 09:14:58 crc kubenswrapper[4853]: I1122 09:14:58.882601 4853 generic.go:334] "Generic (PLEG): container finished" podID="9d53c459-1130-48c2-a70c-4e7601c340ba" containerID="dded8a4a8fddc588d0490cff397e86f878821bbfd82af1d15e1670a5b8511438" exitCode=0 Nov 22 09:14:58 crc kubenswrapper[4853]: I1122 09:14:58.882661 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m6qjw" event={"ID":"9d53c459-1130-48c2-a70c-4e7601c340ba","Type":"ContainerDied","Data":"dded8a4a8fddc588d0490cff397e86f878821bbfd82af1d15e1670a5b8511438"} Nov 22 09:14:58 crc kubenswrapper[4853]: I1122 09:14:58.883905 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m6qjw" event={"ID":"9d53c459-1130-48c2-a70c-4e7601c340ba","Type":"ContainerDied","Data":"a2e9a64ccd7b61b58fb157d78db2fac06cf74d979b70b410b9eba130acdcb6e6"} Nov 22 09:14:58 crc kubenswrapper[4853]: I1122 09:14:58.884061 4853 scope.go:117] "RemoveContainer" containerID="dded8a4a8fddc588d0490cff397e86f878821bbfd82af1d15e1670a5b8511438" Nov 22 09:14:58 crc kubenswrapper[4853]: I1122 09:14:58.882678 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-m6qjw" Nov 22 09:14:58 crc kubenswrapper[4853]: I1122 09:14:58.907939 4853 scope.go:117] "RemoveContainer" containerID="da00729c618fde10c28ec76d57e40beae38ada66afebb5b78df988d0999575e9" Nov 22 09:14:58 crc kubenswrapper[4853]: I1122 09:14:58.932046 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-m6qjw"] Nov 22 09:14:58 crc kubenswrapper[4853]: I1122 09:14:58.946615 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-m6qjw"] Nov 22 09:14:58 crc kubenswrapper[4853]: I1122 09:14:58.947161 4853 scope.go:117] "RemoveContainer" containerID="305cc3d6f783cfe6a7bc9073a52a97cb7232340fc9e50095aad42f366a369682" Nov 22 09:14:59 crc kubenswrapper[4853]: I1122 09:14:59.001865 4853 scope.go:117] "RemoveContainer" containerID="dded8a4a8fddc588d0490cff397e86f878821bbfd82af1d15e1670a5b8511438" Nov 22 09:14:59 crc kubenswrapper[4853]: E1122 09:14:59.004146 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dded8a4a8fddc588d0490cff397e86f878821bbfd82af1d15e1670a5b8511438\": container with ID starting with dded8a4a8fddc588d0490cff397e86f878821bbfd82af1d15e1670a5b8511438 not found: ID does not exist" containerID="dded8a4a8fddc588d0490cff397e86f878821bbfd82af1d15e1670a5b8511438" Nov 22 09:14:59 crc kubenswrapper[4853]: I1122 09:14:59.004190 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dded8a4a8fddc588d0490cff397e86f878821bbfd82af1d15e1670a5b8511438"} err="failed to get container status \"dded8a4a8fddc588d0490cff397e86f878821bbfd82af1d15e1670a5b8511438\": rpc error: code = NotFound desc = could not find container \"dded8a4a8fddc588d0490cff397e86f878821bbfd82af1d15e1670a5b8511438\": container with ID starting with dded8a4a8fddc588d0490cff397e86f878821bbfd82af1d15e1670a5b8511438 not found: ID does not exist" Nov 22 09:14:59 crc kubenswrapper[4853]: I1122 09:14:59.004214 4853 scope.go:117] "RemoveContainer" containerID="da00729c618fde10c28ec76d57e40beae38ada66afebb5b78df988d0999575e9" Nov 22 09:14:59 crc kubenswrapper[4853]: E1122 09:14:59.004775 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"da00729c618fde10c28ec76d57e40beae38ada66afebb5b78df988d0999575e9\": container with ID starting with da00729c618fde10c28ec76d57e40beae38ada66afebb5b78df988d0999575e9 not found: ID does not exist" containerID="da00729c618fde10c28ec76d57e40beae38ada66afebb5b78df988d0999575e9" Nov 22 09:14:59 crc kubenswrapper[4853]: I1122 09:14:59.004808 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da00729c618fde10c28ec76d57e40beae38ada66afebb5b78df988d0999575e9"} err="failed to get container status \"da00729c618fde10c28ec76d57e40beae38ada66afebb5b78df988d0999575e9\": rpc error: code = NotFound desc = could not find container \"da00729c618fde10c28ec76d57e40beae38ada66afebb5b78df988d0999575e9\": container with ID starting with da00729c618fde10c28ec76d57e40beae38ada66afebb5b78df988d0999575e9 not found: ID does not exist" Nov 22 09:14:59 crc kubenswrapper[4853]: I1122 09:14:59.004848 4853 scope.go:117] "RemoveContainer" containerID="305cc3d6f783cfe6a7bc9073a52a97cb7232340fc9e50095aad42f366a369682" Nov 22 09:14:59 crc kubenswrapper[4853]: E1122 09:14:59.005179 4853 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"305cc3d6f783cfe6a7bc9073a52a97cb7232340fc9e50095aad42f366a369682\": container with ID starting with 305cc3d6f783cfe6a7bc9073a52a97cb7232340fc9e50095aad42f366a369682 not found: ID does not exist" containerID="305cc3d6f783cfe6a7bc9073a52a97cb7232340fc9e50095aad42f366a369682" Nov 22 09:14:59 crc kubenswrapper[4853]: I1122 09:14:59.005204 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"305cc3d6f783cfe6a7bc9073a52a97cb7232340fc9e50095aad42f366a369682"} err="failed to get container status \"305cc3d6f783cfe6a7bc9073a52a97cb7232340fc9e50095aad42f366a369682\": rpc error: code = NotFound desc = could not find container \"305cc3d6f783cfe6a7bc9073a52a97cb7232340fc9e50095aad42f366a369682\": container with ID starting with 305cc3d6f783cfe6a7bc9073a52a97cb7232340fc9e50095aad42f366a369682 not found: ID does not exist" Nov 22 09:14:59 crc kubenswrapper[4853]: I1122 09:14:59.763803 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d53c459-1130-48c2-a70c-4e7601c340ba" path="/var/lib/kubelet/pods/9d53c459-1130-48c2-a70c-4e7601c340ba/volumes" Nov 22 09:15:00 crc kubenswrapper[4853]: I1122 09:15:00.400378 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396715-46f67"] Nov 22 09:15:00 crc kubenswrapper[4853]: E1122 09:15:00.429712 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d53c459-1130-48c2-a70c-4e7601c340ba" containerName="extract-content" Nov 22 09:15:00 crc kubenswrapper[4853]: I1122 09:15:00.429769 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d53c459-1130-48c2-a70c-4e7601c340ba" containerName="extract-content" Nov 22 09:15:00 crc kubenswrapper[4853]: E1122 09:15:00.429798 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d53c459-1130-48c2-a70c-4e7601c340ba" containerName="registry-server" Nov 22 09:15:00 crc kubenswrapper[4853]: I1122 09:15:00.429806 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d53c459-1130-48c2-a70c-4e7601c340ba" containerName="registry-server" Nov 22 09:15:00 crc kubenswrapper[4853]: E1122 09:15:00.429824 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d53c459-1130-48c2-a70c-4e7601c340ba" containerName="extract-utilities" Nov 22 09:15:00 crc kubenswrapper[4853]: I1122 09:15:00.429831 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d53c459-1130-48c2-a70c-4e7601c340ba" containerName="extract-utilities" Nov 22 09:15:00 crc kubenswrapper[4853]: I1122 09:15:00.430619 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d53c459-1130-48c2-a70c-4e7601c340ba" containerName="registry-server" Nov 22 09:15:00 crc kubenswrapper[4853]: I1122 09:15:00.443412 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396715-46f67" Nov 22 09:15:00 crc kubenswrapper[4853]: I1122 09:15:00.453968 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396715-46f67"] Nov 22 09:15:00 crc kubenswrapper[4853]: I1122 09:15:00.479917 4853 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 22 09:15:00 crc kubenswrapper[4853]: I1122 09:15:00.479995 4853 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 22 09:15:00 crc kubenswrapper[4853]: I1122 09:15:00.527587 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2bcd7\" (UniqueName: \"kubernetes.io/projected/04bb15d6-7194-4e24-9514-d9745bbbefdc-kube-api-access-2bcd7\") pod \"collect-profiles-29396715-46f67\" (UID: \"04bb15d6-7194-4e24-9514-d9745bbbefdc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396715-46f67" Nov 22 09:15:00 crc kubenswrapper[4853]: I1122 09:15:00.527672 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/04bb15d6-7194-4e24-9514-d9745bbbefdc-config-volume\") pod \"collect-profiles-29396715-46f67\" (UID: \"04bb15d6-7194-4e24-9514-d9745bbbefdc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396715-46f67" Nov 22 09:15:00 crc kubenswrapper[4853]: I1122 09:15:00.527945 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/04bb15d6-7194-4e24-9514-d9745bbbefdc-secret-volume\") pod \"collect-profiles-29396715-46f67\" (UID: \"04bb15d6-7194-4e24-9514-d9745bbbefdc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396715-46f67" Nov 22 09:15:00 crc kubenswrapper[4853]: I1122 09:15:00.629354 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2bcd7\" (UniqueName: \"kubernetes.io/projected/04bb15d6-7194-4e24-9514-d9745bbbefdc-kube-api-access-2bcd7\") pod \"collect-profiles-29396715-46f67\" (UID: \"04bb15d6-7194-4e24-9514-d9745bbbefdc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396715-46f67" Nov 22 09:15:00 crc kubenswrapper[4853]: I1122 09:15:00.629431 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/04bb15d6-7194-4e24-9514-d9745bbbefdc-config-volume\") pod \"collect-profiles-29396715-46f67\" (UID: \"04bb15d6-7194-4e24-9514-d9745bbbefdc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396715-46f67" Nov 22 09:15:00 crc kubenswrapper[4853]: I1122 09:15:00.629485 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/04bb15d6-7194-4e24-9514-d9745bbbefdc-secret-volume\") pod \"collect-profiles-29396715-46f67\" (UID: \"04bb15d6-7194-4e24-9514-d9745bbbefdc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396715-46f67" Nov 22 09:15:00 crc kubenswrapper[4853]: I1122 09:15:00.631061 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/04bb15d6-7194-4e24-9514-d9745bbbefdc-config-volume\") pod 
\"collect-profiles-29396715-46f67\" (UID: \"04bb15d6-7194-4e24-9514-d9745bbbefdc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396715-46f67" Nov 22 09:15:00 crc kubenswrapper[4853]: I1122 09:15:00.635415 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/04bb15d6-7194-4e24-9514-d9745bbbefdc-secret-volume\") pod \"collect-profiles-29396715-46f67\" (UID: \"04bb15d6-7194-4e24-9514-d9745bbbefdc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396715-46f67" Nov 22 09:15:00 crc kubenswrapper[4853]: I1122 09:15:00.647929 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2bcd7\" (UniqueName: \"kubernetes.io/projected/04bb15d6-7194-4e24-9514-d9745bbbefdc-kube-api-access-2bcd7\") pod \"collect-profiles-29396715-46f67\" (UID: \"04bb15d6-7194-4e24-9514-d9745bbbefdc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396715-46f67" Nov 22 09:15:00 crc kubenswrapper[4853]: I1122 09:15:00.788151 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396715-46f67" Nov 22 09:15:01 crc kubenswrapper[4853]: I1122 09:15:01.265226 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396715-46f67"] Nov 22 09:15:01 crc kubenswrapper[4853]: I1122 09:15:01.749041 4853 scope.go:117] "RemoveContainer" containerID="29225cba0575a3d4b630f8271793742f602b91f03b413d4d79e223b8328f134c" Nov 22 09:15:01 crc kubenswrapper[4853]: E1122 09:15:01.749652 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" Nov 22 09:15:01 crc kubenswrapper[4853]: I1122 09:15:01.920502 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29396715-46f67" event={"ID":"04bb15d6-7194-4e24-9514-d9745bbbefdc","Type":"ContainerStarted","Data":"b125da271e610ce5b0146b07c3bdee3d0ded7140c437a121d40767172bac4349"} Nov 22 09:15:01 crc kubenswrapper[4853]: I1122 09:15:01.921257 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29396715-46f67" event={"ID":"04bb15d6-7194-4e24-9514-d9745bbbefdc","Type":"ContainerStarted","Data":"1f6a521efc5fa1d5b4fc0b786e4c05c5e76aa3ac7ed36ca01f5cdd46d1f7f0b7"} Nov 22 09:15:01 crc kubenswrapper[4853]: I1122 09:15:01.944528 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29396715-46f67" podStartSLOduration=1.9445108439999998 podStartE2EDuration="1.944510844s" podCreationTimestamp="2025-11-22 09:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 09:15:01.940840576 +0000 UTC m=+7500.781463212" watchObservedRunningTime="2025-11-22 09:15:01.944510844 +0000 UTC m=+7500.785133470" Nov 22 09:15:03 crc kubenswrapper[4853]: I1122 09:15:03.955132 4853 generic.go:334] "Generic (PLEG): container finished" podID="04bb15d6-7194-4e24-9514-d9745bbbefdc" 
containerID="b125da271e610ce5b0146b07c3bdee3d0ded7140c437a121d40767172bac4349" exitCode=0 Nov 22 09:15:03 crc kubenswrapper[4853]: I1122 09:15:03.955250 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29396715-46f67" event={"ID":"04bb15d6-7194-4e24-9514-d9745bbbefdc","Type":"ContainerDied","Data":"b125da271e610ce5b0146b07c3bdee3d0ded7140c437a121d40767172bac4349"} Nov 22 09:15:05 crc kubenswrapper[4853]: I1122 09:15:05.373113 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396715-46f67" Nov 22 09:15:05 crc kubenswrapper[4853]: I1122 09:15:05.570568 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/04bb15d6-7194-4e24-9514-d9745bbbefdc-config-volume\") pod \"04bb15d6-7194-4e24-9514-d9745bbbefdc\" (UID: \"04bb15d6-7194-4e24-9514-d9745bbbefdc\") " Nov 22 09:15:05 crc kubenswrapper[4853]: I1122 09:15:05.570892 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/04bb15d6-7194-4e24-9514-d9745bbbefdc-secret-volume\") pod \"04bb15d6-7194-4e24-9514-d9745bbbefdc\" (UID: \"04bb15d6-7194-4e24-9514-d9745bbbefdc\") " Nov 22 09:15:05 crc kubenswrapper[4853]: I1122 09:15:05.570966 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2bcd7\" (UniqueName: \"kubernetes.io/projected/04bb15d6-7194-4e24-9514-d9745bbbefdc-kube-api-access-2bcd7\") pod \"04bb15d6-7194-4e24-9514-d9745bbbefdc\" (UID: \"04bb15d6-7194-4e24-9514-d9745bbbefdc\") " Nov 22 09:15:05 crc kubenswrapper[4853]: I1122 09:15:05.571490 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/04bb15d6-7194-4e24-9514-d9745bbbefdc-config-volume" (OuterVolumeSpecName: "config-volume") pod "04bb15d6-7194-4e24-9514-d9745bbbefdc" (UID: "04bb15d6-7194-4e24-9514-d9745bbbefdc"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 09:15:05 crc kubenswrapper[4853]: I1122 09:15:05.572051 4853 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/04bb15d6-7194-4e24-9514-d9745bbbefdc-config-volume\") on node \"crc\" DevicePath \"\"" Nov 22 09:15:05 crc kubenswrapper[4853]: I1122 09:15:05.577607 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04bb15d6-7194-4e24-9514-d9745bbbefdc-kube-api-access-2bcd7" (OuterVolumeSpecName: "kube-api-access-2bcd7") pod "04bb15d6-7194-4e24-9514-d9745bbbefdc" (UID: "04bb15d6-7194-4e24-9514-d9745bbbefdc"). InnerVolumeSpecName "kube-api-access-2bcd7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:15:05 crc kubenswrapper[4853]: I1122 09:15:05.578528 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04bb15d6-7194-4e24-9514-d9745bbbefdc-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "04bb15d6-7194-4e24-9514-d9745bbbefdc" (UID: "04bb15d6-7194-4e24-9514-d9745bbbefdc"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:15:05 crc kubenswrapper[4853]: I1122 09:15:05.673904 4853 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/04bb15d6-7194-4e24-9514-d9745bbbefdc-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 22 09:15:05 crc kubenswrapper[4853]: I1122 09:15:05.673955 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2bcd7\" (UniqueName: \"kubernetes.io/projected/04bb15d6-7194-4e24-9514-d9745bbbefdc-kube-api-access-2bcd7\") on node \"crc\" DevicePath \"\"" Nov 22 09:15:06 crc kubenswrapper[4853]: I1122 09:15:06.003381 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29396715-46f67" event={"ID":"04bb15d6-7194-4e24-9514-d9745bbbefdc","Type":"ContainerDied","Data":"1f6a521efc5fa1d5b4fc0b786e4c05c5e76aa3ac7ed36ca01f5cdd46d1f7f0b7"} Nov 22 09:15:06 crc kubenswrapper[4853]: I1122 09:15:06.003433 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1f6a521efc5fa1d5b4fc0b786e4c05c5e76aa3ac7ed36ca01f5cdd46d1f7f0b7" Nov 22 09:15:06 crc kubenswrapper[4853]: I1122 09:15:06.003504 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396715-46f67" Nov 22 09:15:06 crc kubenswrapper[4853]: I1122 09:15:06.675494 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396670-s6dwg"] Nov 22 09:15:06 crc kubenswrapper[4853]: I1122 09:15:06.693051 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396670-s6dwg"] Nov 22 09:15:07 crc kubenswrapper[4853]: I1122 09:15:07.879841 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7f5019d7-51a0-456b-8e17-4ce585ac6bb9" path="/var/lib/kubelet/pods/7f5019d7-51a0-456b-8e17-4ce585ac6bb9/volumes" Nov 22 09:15:07 crc kubenswrapper[4853]: I1122 09:15:07.882611 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_cluster-logging-operator-ff9846bd-j4wf4_fca85b6a-849a-4786-baa9-102f9651efb7/cluster-logging-operator/0.log" Nov 22 09:15:08 crc kubenswrapper[4853]: I1122 09:15:08.082151 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_collector-ls955_69d2b2f1-e056-4d20-b7a1-8d9ae5492cfd/collector/0.log" Nov 22 09:15:08 crc kubenswrapper[4853]: I1122 09:15:08.168417 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-compactor-0_b876ce6f-c01a-4d2e-813c-abb6fff4a4e2/loki-compactor/0.log" Nov 22 09:15:08 crc kubenswrapper[4853]: I1122 09:15:08.355085 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-distributor-76cc67bf56-bfp4g_2eba451f-bb08-4f70-ad59-aa64a216f265/loki-distributor/0.log" Nov 22 09:15:08 crc kubenswrapper[4853]: I1122 09:15:08.446964 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-76bd965446-l8bwp_5729c668-8833-48b4-9e48-bcf753621ff7/gateway/0.log" Nov 22 09:15:08 crc kubenswrapper[4853]: I1122 09:15:08.473572 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-76bd965446-l8bwp_5729c668-8833-48b4-9e48-bcf753621ff7/opa/0.log" Nov 22 09:15:08 crc kubenswrapper[4853]: I1122 09:15:08.659205 4853 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-logging_logging-loki-gateway-76bd965446-ndqqx_f680143e-738e-4726-bd8e-1f14bf3f4eaa/gateway/0.log" Nov 22 09:15:08 crc kubenswrapper[4853]: I1122 09:15:08.663629 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-76bd965446-ndqqx_f680143e-738e-4726-bd8e-1f14bf3f4eaa/opa/0.log" Nov 22 09:15:08 crc kubenswrapper[4853]: I1122 09:15:08.783322 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-index-gateway-0_e7f66807-e021-4fa5-bf10-9b2107788d3d/loki-index-gateway/0.log" Nov 22 09:15:08 crc kubenswrapper[4853]: I1122 09:15:08.988728 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-ingester-0_f6d37108-c1bc-4250-ba04-924fe0dabff3/loki-ingester/0.log" Nov 22 09:15:09 crc kubenswrapper[4853]: I1122 09:15:09.040216 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-querier-5895d59bb8-rs9nq_13c77518-7ced-4c03-a300-d00ec52fa068/loki-querier/0.log" Nov 22 09:15:09 crc kubenswrapper[4853]: I1122 09:15:09.202425 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-query-frontend-84558f7c9f-fq9dp_b5a31aa4-7663-4b33-9a60-9b7bb676419a/loki-query-frontend/0.log" Nov 22 09:15:09 crc kubenswrapper[4853]: I1122 09:15:09.462558 4853 scope.go:117] "RemoveContainer" containerID="c021c2a3e9ca9f9178d5f98ac78f9c18729eff1bca26f9d4f6ac4e0e7162d7a0" Nov 22 09:15:13 crc kubenswrapper[4853]: I1122 09:15:13.748108 4853 scope.go:117] "RemoveContainer" containerID="29225cba0575a3d4b630f8271793742f602b91f03b413d4d79e223b8328f134c" Nov 22 09:15:13 crc kubenswrapper[4853]: E1122 09:15:13.748813 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" Nov 22 09:15:23 crc kubenswrapper[4853]: I1122 09:15:23.323413 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6c7b4b5f48-lnt7f_5e6933c1-fd3f-45a0-819f-1794ed7fc6b4/kube-rbac-proxy/0.log" Nov 22 09:15:23 crc kubenswrapper[4853]: I1122 09:15:23.519108 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6c7b4b5f48-lnt7f_5e6933c1-fd3f-45a0-819f-1794ed7fc6b4/controller/0.log" Nov 22 09:15:23 crc kubenswrapper[4853]: I1122 09:15:23.649484 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bt2b5_a5bf1d2e-4694-4ec6-a2de-e35821a73625/cp-frr-files/0.log" Nov 22 09:15:23 crc kubenswrapper[4853]: I1122 09:15:23.776310 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bt2b5_a5bf1d2e-4694-4ec6-a2de-e35821a73625/cp-frr-files/0.log" Nov 22 09:15:23 crc kubenswrapper[4853]: I1122 09:15:23.808038 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bt2b5_a5bf1d2e-4694-4ec6-a2de-e35821a73625/cp-reloader/0.log" Nov 22 09:15:23 crc kubenswrapper[4853]: I1122 09:15:23.820821 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bt2b5_a5bf1d2e-4694-4ec6-a2de-e35821a73625/cp-metrics/0.log" Nov 22 09:15:23 crc kubenswrapper[4853]: I1122 09:15:23.900118 4853 log.go:25] 
"Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bt2b5_a5bf1d2e-4694-4ec6-a2de-e35821a73625/cp-reloader/0.log" Nov 22 09:15:24 crc kubenswrapper[4853]: I1122 09:15:24.104483 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bt2b5_a5bf1d2e-4694-4ec6-a2de-e35821a73625/cp-frr-files/0.log" Nov 22 09:15:24 crc kubenswrapper[4853]: I1122 09:15:24.113248 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bt2b5_a5bf1d2e-4694-4ec6-a2de-e35821a73625/cp-reloader/0.log" Nov 22 09:15:24 crc kubenswrapper[4853]: I1122 09:15:24.122187 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bt2b5_a5bf1d2e-4694-4ec6-a2de-e35821a73625/cp-metrics/0.log" Nov 22 09:15:24 crc kubenswrapper[4853]: I1122 09:15:24.175259 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bt2b5_a5bf1d2e-4694-4ec6-a2de-e35821a73625/cp-metrics/0.log" Nov 22 09:15:24 crc kubenswrapper[4853]: I1122 09:15:24.411913 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bt2b5_a5bf1d2e-4694-4ec6-a2de-e35821a73625/cp-reloader/0.log" Nov 22 09:15:24 crc kubenswrapper[4853]: I1122 09:15:24.426260 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bt2b5_a5bf1d2e-4694-4ec6-a2de-e35821a73625/cp-frr-files/0.log" Nov 22 09:15:24 crc kubenswrapper[4853]: I1122 09:15:24.447969 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bt2b5_a5bf1d2e-4694-4ec6-a2de-e35821a73625/cp-metrics/0.log" Nov 22 09:15:24 crc kubenswrapper[4853]: I1122 09:15:24.453712 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bt2b5_a5bf1d2e-4694-4ec6-a2de-e35821a73625/controller/0.log" Nov 22 09:15:24 crc kubenswrapper[4853]: I1122 09:15:24.668154 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bt2b5_a5bf1d2e-4694-4ec6-a2de-e35821a73625/kube-rbac-proxy/0.log" Nov 22 09:15:24 crc kubenswrapper[4853]: I1122 09:15:24.683503 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bt2b5_a5bf1d2e-4694-4ec6-a2de-e35821a73625/frr-metrics/0.log" Nov 22 09:15:24 crc kubenswrapper[4853]: I1122 09:15:24.695207 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bt2b5_a5bf1d2e-4694-4ec6-a2de-e35821a73625/kube-rbac-proxy-frr/0.log" Nov 22 09:15:24 crc kubenswrapper[4853]: I1122 09:15:24.904665 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bt2b5_a5bf1d2e-4694-4ec6-a2de-e35821a73625/reloader/0.log" Nov 22 09:15:25 crc kubenswrapper[4853]: I1122 09:15:25.063888 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-6998585d5-n8cs5_f1c2a1c6-4546-4933-8569-ca5e7180cd85/frr-k8s-webhook-server/0.log" Nov 22 09:15:25 crc kubenswrapper[4853]: I1122 09:15:25.230345 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-559f7d85b8-xtjfd_b7cfa3a7-05d9-4822-9fda-8316c75ee9a4/manager/1.log" Nov 22 09:15:25 crc kubenswrapper[4853]: I1122 09:15:25.396138 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-559f7d85b8-xtjfd_b7cfa3a7-05d9-4822-9fda-8316c75ee9a4/manager/0.log" Nov 22 09:15:25 crc kubenswrapper[4853]: I1122 09:15:25.604211 4853 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_metallb-operator-webhook-server-6c89cb79d4-kkj69_03b1c43b-94f9-4df8-9e17-12b1fbc5a544/webhook-server/0.log" Nov 22 09:15:25 crc kubenswrapper[4853]: I1122 09:15:25.817685 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-w8tbx_a0e4024e-8048-4b57-becd-2866e3409a4b/kube-rbac-proxy/0.log" Nov 22 09:15:27 crc kubenswrapper[4853]: I1122 09:15:27.119540 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bt2b5_a5bf1d2e-4694-4ec6-a2de-e35821a73625/frr/0.log" Nov 22 09:15:27 crc kubenswrapper[4853]: I1122 09:15:27.748103 4853 scope.go:117] "RemoveContainer" containerID="29225cba0575a3d4b630f8271793742f602b91f03b413d4d79e223b8328f134c" Nov 22 09:15:27 crc kubenswrapper[4853]: E1122 09:15:27.748713 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" Nov 22 09:15:28 crc kubenswrapper[4853]: I1122 09:15:28.116043 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-w8tbx_a0e4024e-8048-4b57-becd-2866e3409a4b/speaker/0.log" Nov 22 09:15:39 crc kubenswrapper[4853]: I1122 09:15:39.569052 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8npjkh_1f6ecf03-88ce-4001-8a9f-2cf202d8d6a0/util/0.log" Nov 22 09:15:39 crc kubenswrapper[4853]: I1122 09:15:39.748433 4853 scope.go:117] "RemoveContainer" containerID="29225cba0575a3d4b630f8271793742f602b91f03b413d4d79e223b8328f134c" Nov 22 09:15:39 crc kubenswrapper[4853]: E1122 09:15:39.748808 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" Nov 22 09:15:39 crc kubenswrapper[4853]: I1122 09:15:39.771988 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8npjkh_1f6ecf03-88ce-4001-8a9f-2cf202d8d6a0/pull/0.log" Nov 22 09:15:39 crc kubenswrapper[4853]: I1122 09:15:39.823604 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8npjkh_1f6ecf03-88ce-4001-8a9f-2cf202d8d6a0/pull/0.log" Nov 22 09:15:39 crc kubenswrapper[4853]: I1122 09:15:39.933610 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8npjkh_1f6ecf03-88ce-4001-8a9f-2cf202d8d6a0/util/0.log" Nov 22 09:15:40 crc kubenswrapper[4853]: I1122 09:15:40.009979 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8npjkh_1f6ecf03-88ce-4001-8a9f-2cf202d8d6a0/util/0.log" Nov 22 09:15:40 crc kubenswrapper[4853]: I1122 09:15:40.053179 4853 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8npjkh_1f6ecf03-88ce-4001-8a9f-2cf202d8d6a0/extract/0.log" Nov 22 09:15:40 crc kubenswrapper[4853]: I1122 09:15:40.106309 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8npjkh_1f6ecf03-88ce-4001-8a9f-2cf202d8d6a0/pull/0.log" Nov 22 09:15:40 crc kubenswrapper[4853]: I1122 09:15:40.219738 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ehg4v5_73d5ee3a-0ffe-4c7d-abe8-28a6d07aad64/util/0.log" Nov 22 09:15:40 crc kubenswrapper[4853]: I1122 09:15:40.392789 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ehg4v5_73d5ee3a-0ffe-4c7d-abe8-28a6d07aad64/pull/0.log" Nov 22 09:15:40 crc kubenswrapper[4853]: I1122 09:15:40.426739 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ehg4v5_73d5ee3a-0ffe-4c7d-abe8-28a6d07aad64/util/0.log" Nov 22 09:15:40 crc kubenswrapper[4853]: I1122 09:15:40.434276 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ehg4v5_73d5ee3a-0ffe-4c7d-abe8-28a6d07aad64/pull/0.log" Nov 22 09:15:40 crc kubenswrapper[4853]: I1122 09:15:40.639433 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ehg4v5_73d5ee3a-0ffe-4c7d-abe8-28a6d07aad64/util/0.log" Nov 22 09:15:40 crc kubenswrapper[4853]: I1122 09:15:40.688295 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ehg4v5_73d5ee3a-0ffe-4c7d-abe8-28a6d07aad64/extract/0.log" Nov 22 09:15:40 crc kubenswrapper[4853]: I1122 09:15:40.765691 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ehg4v5_73d5ee3a-0ffe-4c7d-abe8-28a6d07aad64/pull/0.log" Nov 22 09:15:40 crc kubenswrapper[4853]: I1122 09:15:40.837469 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92107q65h_683f3f0d-d7fe-42b9-8deb-2358f0f8d572/util/0.log" Nov 22 09:15:41 crc kubenswrapper[4853]: I1122 09:15:41.067537 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92107q65h_683f3f0d-d7fe-42b9-8deb-2358f0f8d572/pull/0.log" Nov 22 09:15:41 crc kubenswrapper[4853]: I1122 09:15:41.095287 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92107q65h_683f3f0d-d7fe-42b9-8deb-2358f0f8d572/pull/0.log" Nov 22 09:15:41 crc kubenswrapper[4853]: I1122 09:15:41.130211 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92107q65h_683f3f0d-d7fe-42b9-8deb-2358f0f8d572/util/0.log" Nov 22 09:15:41 crc kubenswrapper[4853]: I1122 09:15:41.236474 4853 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92107q65h_683f3f0d-d7fe-42b9-8deb-2358f0f8d572/util/0.log" Nov 22 09:15:41 crc kubenswrapper[4853]: I1122 09:15:41.276893 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92107q65h_683f3f0d-d7fe-42b9-8deb-2358f0f8d572/pull/0.log" Nov 22 09:15:41 crc kubenswrapper[4853]: I1122 09:15:41.296660 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92107q65h_683f3f0d-d7fe-42b9-8deb-2358f0f8d572/extract/0.log" Nov 22 09:15:41 crc kubenswrapper[4853]: I1122 09:15:41.454560 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fc92vc_2f822997-9c6e-4132-b606-11e336e2f4af/util/0.log" Nov 22 09:15:41 crc kubenswrapper[4853]: I1122 09:15:41.879669 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fc92vc_2f822997-9c6e-4132-b606-11e336e2f4af/pull/0.log" Nov 22 09:15:41 crc kubenswrapper[4853]: I1122 09:15:41.892227 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fc92vc_2f822997-9c6e-4132-b606-11e336e2f4af/util/0.log" Nov 22 09:15:41 crc kubenswrapper[4853]: I1122 09:15:41.899311 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fc92vc_2f822997-9c6e-4132-b606-11e336e2f4af/pull/0.log" Nov 22 09:15:42 crc kubenswrapper[4853]: I1122 09:15:42.070928 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fc92vc_2f822997-9c6e-4132-b606-11e336e2f4af/util/0.log" Nov 22 09:15:42 crc kubenswrapper[4853]: I1122 09:15:42.163286 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fc92vc_2f822997-9c6e-4132-b606-11e336e2f4af/extract/0.log" Nov 22 09:15:42 crc kubenswrapper[4853]: I1122 09:15:42.203644 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fc92vc_2f822997-9c6e-4132-b606-11e336e2f4af/pull/0.log" Nov 22 09:15:42 crc kubenswrapper[4853]: I1122 09:15:42.262578 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-8dxcs_cdc57f0c-a9c1-4b48-9a08-209f3a27727f/extract-utilities/0.log" Nov 22 09:15:42 crc kubenswrapper[4853]: I1122 09:15:42.464298 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-8dxcs_cdc57f0c-a9c1-4b48-9a08-209f3a27727f/extract-content/0.log" Nov 22 09:15:42 crc kubenswrapper[4853]: I1122 09:15:42.469300 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-8dxcs_cdc57f0c-a9c1-4b48-9a08-209f3a27727f/extract-utilities/0.log" Nov 22 09:15:42 crc kubenswrapper[4853]: I1122 09:15:42.485384 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-8dxcs_cdc57f0c-a9c1-4b48-9a08-209f3a27727f/extract-content/0.log" Nov 22 09:15:42 crc kubenswrapper[4853]: I1122 09:15:42.703689 4853 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-8dxcs_cdc57f0c-a9c1-4b48-9a08-209f3a27727f/extract-utilities/0.log" Nov 22 09:15:42 crc kubenswrapper[4853]: I1122 09:15:42.765669 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-8dxcs_cdc57f0c-a9c1-4b48-9a08-209f3a27727f/extract-content/0.log" Nov 22 09:15:42 crc kubenswrapper[4853]: I1122 09:15:42.975776 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-tv8h9_cae818e5-34d5-43c7-95af-e82e21309758/extract-utilities/0.log" Nov 22 09:15:43 crc kubenswrapper[4853]: I1122 09:15:43.184846 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-tv8h9_cae818e5-34d5-43c7-95af-e82e21309758/extract-content/0.log" Nov 22 09:15:43 crc kubenswrapper[4853]: I1122 09:15:43.231086 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-tv8h9_cae818e5-34d5-43c7-95af-e82e21309758/extract-utilities/0.log" Nov 22 09:15:43 crc kubenswrapper[4853]: I1122 09:15:43.252732 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-tv8h9_cae818e5-34d5-43c7-95af-e82e21309758/extract-content/0.log" Nov 22 09:15:43 crc kubenswrapper[4853]: I1122 09:15:43.501726 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-tv8h9_cae818e5-34d5-43c7-95af-e82e21309758/extract-utilities/0.log" Nov 22 09:15:43 crc kubenswrapper[4853]: I1122 09:15:43.502520 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-tv8h9_cae818e5-34d5-43c7-95af-e82e21309758/extract-content/0.log" Nov 22 09:15:43 crc kubenswrapper[4853]: I1122 09:15:43.770573 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6st98x_3f7e0026-3c37-470d-b2b7-cf742c742854/util/0.log" Nov 22 09:15:43 crc kubenswrapper[4853]: I1122 09:15:43.885022 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6st98x_3f7e0026-3c37-470d-b2b7-cf742c742854/util/0.log" Nov 22 09:15:43 crc kubenswrapper[4853]: I1122 09:15:43.949555 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6st98x_3f7e0026-3c37-470d-b2b7-cf742c742854/pull/0.log" Nov 22 09:15:44 crc kubenswrapper[4853]: I1122 09:15:44.074363 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6st98x_3f7e0026-3c37-470d-b2b7-cf742c742854/pull/0.log" Nov 22 09:15:44 crc kubenswrapper[4853]: I1122 09:15:44.293094 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6st98x_3f7e0026-3c37-470d-b2b7-cf742c742854/extract/0.log" Nov 22 09:15:44 crc kubenswrapper[4853]: I1122 09:15:44.347912 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6st98x_3f7e0026-3c37-470d-b2b7-cf742c742854/pull/0.log" Nov 22 09:15:44 crc kubenswrapper[4853]: I1122 09:15:44.381704 4853 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6st98x_3f7e0026-3c37-470d-b2b7-cf742c742854/util/0.log" Nov 22 09:15:44 crc kubenswrapper[4853]: I1122 09:15:44.589077 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-8dxcs_cdc57f0c-a9c1-4b48-9a08-209f3a27727f/registry-server/0.log" Nov 22 09:15:44 crc kubenswrapper[4853]: I1122 09:15:44.703290 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-nr2sr_c54d72ed-4fd1-4c17-a3ac-ba1e743e2307/marketplace-operator/0.log" Nov 22 09:15:44 crc kubenswrapper[4853]: I1122 09:15:44.733273 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-4mqss_dd697e52-9abd-4be8-a245-625d1dde804e/extract-utilities/0.log" Nov 22 09:15:44 crc kubenswrapper[4853]: I1122 09:15:44.981065 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-4mqss_dd697e52-9abd-4be8-a245-625d1dde804e/extract-utilities/0.log" Nov 22 09:15:44 crc kubenswrapper[4853]: I1122 09:15:44.981111 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-4mqss_dd697e52-9abd-4be8-a245-625d1dde804e/extract-content/0.log" Nov 22 09:15:45 crc kubenswrapper[4853]: I1122 09:15:45.036538 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-4mqss_dd697e52-9abd-4be8-a245-625d1dde804e/extract-content/0.log" Nov 22 09:15:45 crc kubenswrapper[4853]: I1122 09:15:45.159654 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-tv8h9_cae818e5-34d5-43c7-95af-e82e21309758/registry-server/0.log" Nov 22 09:15:45 crc kubenswrapper[4853]: I1122 09:15:45.264321 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-4mqss_dd697e52-9abd-4be8-a245-625d1dde804e/extract-utilities/0.log" Nov 22 09:15:45 crc kubenswrapper[4853]: I1122 09:15:45.266783 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-4mqss_dd697e52-9abd-4be8-a245-625d1dde804e/extract-content/0.log" Nov 22 09:15:45 crc kubenswrapper[4853]: I1122 09:15:45.391325 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-m28tt_c1265d82-d3bb-4d83-bb9e-05cbb5960004/extract-utilities/0.log" Nov 22 09:15:45 crc kubenswrapper[4853]: I1122 09:15:45.565346 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-4mqss_dd697e52-9abd-4be8-a245-625d1dde804e/registry-server/0.log" Nov 22 09:15:45 crc kubenswrapper[4853]: I1122 09:15:45.610102 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-m28tt_c1265d82-d3bb-4d83-bb9e-05cbb5960004/extract-content/0.log" Nov 22 09:15:45 crc kubenswrapper[4853]: I1122 09:15:45.631583 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-m28tt_c1265d82-d3bb-4d83-bb9e-05cbb5960004/extract-utilities/0.log" Nov 22 09:15:45 crc kubenswrapper[4853]: I1122 09:15:45.631998 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-m28tt_c1265d82-d3bb-4d83-bb9e-05cbb5960004/extract-content/0.log" Nov 22 09:15:45 crc kubenswrapper[4853]: I1122 09:15:45.800566 4853 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openshift-marketplace_redhat-operators-m28tt_c1265d82-d3bb-4d83-bb9e-05cbb5960004/extract-content/0.log" Nov 22 09:15:45 crc kubenswrapper[4853]: I1122 09:15:45.836634 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-m28tt_c1265d82-d3bb-4d83-bb9e-05cbb5960004/extract-utilities/0.log" Nov 22 09:15:46 crc kubenswrapper[4853]: I1122 09:15:46.857537 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-m28tt_c1265d82-d3bb-4d83-bb9e-05cbb5960004/registry-server/0.log" Nov 22 09:15:52 crc kubenswrapper[4853]: I1122 09:15:52.748032 4853 scope.go:117] "RemoveContainer" containerID="29225cba0575a3d4b630f8271793742f602b91f03b413d4d79e223b8328f134c" Nov 22 09:15:52 crc kubenswrapper[4853]: E1122 09:15:52.748927 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" Nov 22 09:15:58 crc kubenswrapper[4853]: I1122 09:15:58.697303 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-668cf9dfbb-p974c_f95bfaef-313c-4412-a8ce-ab9e8bd2d244/prometheus-operator/0.log" Nov 22 09:15:58 crc kubenswrapper[4853]: I1122 09:15:58.892224 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-75686ff6c7-b7jq4_6204a708-d77f-4350-806f-25ef39e98551/prometheus-operator-admission-webhook/0.log" Nov 22 09:15:58 crc kubenswrapper[4853]: I1122 09:15:58.976159 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-75686ff6c7-dppxx_988cd804-b3e5-4b0f-aec4-cc7186845189/prometheus-operator-admission-webhook/0.log" Nov 22 09:15:59 crc kubenswrapper[4853]: I1122 09:15:59.134626 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-d8bb48f5d-6mnv6_838479bf-7b77-403c-915a-ed8b62d9c970/operator/0.log" Nov 22 09:15:59 crc kubenswrapper[4853]: I1122 09:15:59.203882 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-ui-dashboards-7d5fb4cbfb-jlcqj_1dcce692-834e-48e2-bcfd-7c0f05480fb4/observability-ui-dashboards/0.log" Nov 22 09:15:59 crc kubenswrapper[4853]: I1122 09:15:59.370182 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5446b9c989-56f68_0bea3315-6c33-4754-95a6-e465983de5b7/perses-operator/0.log" Nov 22 09:16:05 crc kubenswrapper[4853]: I1122 09:16:05.749688 4853 scope.go:117] "RemoveContainer" containerID="29225cba0575a3d4b630f8271793742f602b91f03b413d4d79e223b8328f134c" Nov 22 09:16:05 crc kubenswrapper[4853]: E1122 09:16:05.750676 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" Nov 22 09:16:09 crc kubenswrapper[4853]: 
I1122 09:16:09.554846 4853 scope.go:117] "RemoveContainer" containerID="550e39fdbc44a692eb62a9d94e3f96c605a8f206519e1d948e8043b6585c600b" Nov 22 09:16:11 crc kubenswrapper[4853]: I1122 09:16:11.697705 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-5bb8bb4577-rspn5_50b94c6e-d5b7-4720-af4c-8922035ca146/kube-rbac-proxy/0.log" Nov 22 09:16:11 crc kubenswrapper[4853]: I1122 09:16:11.744717 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-5bb8bb4577-rspn5_50b94c6e-d5b7-4720-af4c-8922035ca146/manager/2.log" Nov 22 09:16:11 crc kubenswrapper[4853]: I1122 09:16:11.763170 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-5bb8bb4577-rspn5_50b94c6e-d5b7-4720-af4c-8922035ca146/manager/1.log" Nov 22 09:16:16 crc kubenswrapper[4853]: I1122 09:16:16.748123 4853 scope.go:117] "RemoveContainer" containerID="29225cba0575a3d4b630f8271793742f602b91f03b413d4d79e223b8328f134c" Nov 22 09:16:16 crc kubenswrapper[4853]: E1122 09:16:16.748979 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" Nov 22 09:16:31 crc kubenswrapper[4853]: I1122 09:16:31.748143 4853 scope.go:117] "RemoveContainer" containerID="29225cba0575a3d4b630f8271793742f602b91f03b413d4d79e223b8328f134c" Nov 22 09:16:31 crc kubenswrapper[4853]: E1122 09:16:31.749089 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" Nov 22 09:16:35 crc kubenswrapper[4853]: E1122 09:16:35.885147 4853 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 38.102.83.251:46020->38.102.83.251:37237: read tcp 38.102.83.251:46020->38.102.83.251:37237: read: connection reset by peer Nov 22 09:16:46 crc kubenswrapper[4853]: I1122 09:16:46.749124 4853 scope.go:117] "RemoveContainer" containerID="29225cba0575a3d4b630f8271793742f602b91f03b413d4d79e223b8328f134c" Nov 22 09:16:46 crc kubenswrapper[4853]: E1122 09:16:46.750513 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" Nov 22 09:16:59 crc kubenswrapper[4853]: I1122 09:16:59.748343 4853 scope.go:117] "RemoveContainer" containerID="29225cba0575a3d4b630f8271793742f602b91f03b413d4d79e223b8328f134c" Nov 22 09:16:59 crc kubenswrapper[4853]: E1122 09:16:59.749455 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" Nov 22 09:17:12 crc kubenswrapper[4853]: I1122 09:17:12.748611 4853 scope.go:117] "RemoveContainer" containerID="29225cba0575a3d4b630f8271793742f602b91f03b413d4d79e223b8328f134c" Nov 22 09:17:12 crc kubenswrapper[4853]: E1122 09:17:12.749767 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" Nov 22 09:17:24 crc kubenswrapper[4853]: I1122 09:17:24.747656 4853 scope.go:117] "RemoveContainer" containerID="29225cba0575a3d4b630f8271793742f602b91f03b413d4d79e223b8328f134c" Nov 22 09:17:24 crc kubenswrapper[4853]: E1122 09:17:24.748467 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" Nov 22 09:17:35 crc kubenswrapper[4853]: I1122 09:17:35.761810 4853 scope.go:117] "RemoveContainer" containerID="29225cba0575a3d4b630f8271793742f602b91f03b413d4d79e223b8328f134c" Nov 22 09:17:35 crc kubenswrapper[4853]: E1122 09:17:35.762472 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" Nov 22 09:17:49 crc kubenswrapper[4853]: I1122 09:17:49.748538 4853 scope.go:117] "RemoveContainer" containerID="29225cba0575a3d4b630f8271793742f602b91f03b413d4d79e223b8328f134c" Nov 22 09:17:49 crc kubenswrapper[4853]: E1122 09:17:49.749423 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" Nov 22 09:18:01 crc kubenswrapper[4853]: I1122 09:18:01.748285 4853 scope.go:117] "RemoveContainer" containerID="29225cba0575a3d4b630f8271793742f602b91f03b413d4d79e223b8328f134c" Nov 22 09:18:02 crc kubenswrapper[4853]: I1122 09:18:02.081388 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" event={"ID":"476c875a-2b87-419a-8042-0ba059620fd8","Type":"ContainerStarted","Data":"9565704f2aede7f57e782b8c5835deae633e79252927251bdaa026f251bed3e7"} Nov 22 
Nov 22 09:18:08 crc kubenswrapper[4853]: I1122 09:18:08.189791 4853 generic.go:334] "Generic (PLEG): container finished" podID="0fa2dc9e-4884-499d-921c-ac6656e3d300" containerID="d108c542705a6ea728fe159c191713898223ef44b216f8eb03c739079541c756" exitCode=0
Nov 22 09:18:08 crc kubenswrapper[4853]: I1122 09:18:08.189881 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-nq2xj/must-gather-vff7c" event={"ID":"0fa2dc9e-4884-499d-921c-ac6656e3d300","Type":"ContainerDied","Data":"d108c542705a6ea728fe159c191713898223ef44b216f8eb03c739079541c756"}
Nov 22 09:18:08 crc kubenswrapper[4853]: I1122 09:18:08.191390 4853 scope.go:117] "RemoveContainer" containerID="d108c542705a6ea728fe159c191713898223ef44b216f8eb03c739079541c756"
Nov 22 09:18:09 crc kubenswrapper[4853]: I1122 09:18:09.112890 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-nq2xj_must-gather-vff7c_0fa2dc9e-4884-499d-921c-ac6656e3d300/gather/0.log"
Nov 22 09:18:09 crc kubenswrapper[4853]: I1122 09:18:09.677537 4853 scope.go:117] "RemoveContainer" containerID="597227cfc86e95bbae46855fb155d433f593957571b26cb120e198c1fa6baed4"
Nov 22 09:18:17 crc kubenswrapper[4853]: I1122 09:18:17.798274 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-nq2xj/must-gather-vff7c"]
Nov 22 09:18:17 crc kubenswrapper[4853]: I1122 09:18:17.799258 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-nq2xj/must-gather-vff7c" podUID="0fa2dc9e-4884-499d-921c-ac6656e3d300" containerName="copy" containerID="cri-o://38893a5c5533845f5d723578a8677f657f0131dd3dc20fb9bc5c684aedb6b761" gracePeriod=2
Nov 22 09:18:17 crc kubenswrapper[4853]: I1122 09:18:17.814566 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-nq2xj/must-gather-vff7c"]
Nov 22 09:18:18 crc kubenswrapper[4853]: I1122 09:18:18.297422 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-nq2xj_must-gather-vff7c_0fa2dc9e-4884-499d-921c-ac6656e3d300/copy/0.log"
Nov 22 09:18:18 crc kubenswrapper[4853]: I1122 09:18:18.298112 4853 generic.go:334] "Generic (PLEG): container finished" podID="0fa2dc9e-4884-499d-921c-ac6656e3d300" containerID="38893a5c5533845f5d723578a8677f657f0131dd3dc20fb9bc5c684aedb6b761" exitCode=143
Nov 22 09:18:18 crc kubenswrapper[4853]: I1122 09:18:18.298169 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="611438703bbcbaf7f4944bf3b47d131b22cda30eaf0f664e167fce85a7f43bc2"
Nov 22 09:18:18 crc kubenswrapper[4853]: I1122 09:18:18.301337 4853 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-nq2xj_must-gather-vff7c_0fa2dc9e-4884-499d-921c-ac6656e3d300/copy/0.log"
Nov 22 09:18:18 crc kubenswrapper[4853]: I1122 09:18:18.301657 4853 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-must-gather-nq2xj/must-gather-vff7c" Nov 22 09:18:18 crc kubenswrapper[4853]: I1122 09:18:18.411581 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/0fa2dc9e-4884-499d-921c-ac6656e3d300-must-gather-output\") pod \"0fa2dc9e-4884-499d-921c-ac6656e3d300\" (UID: \"0fa2dc9e-4884-499d-921c-ac6656e3d300\") " Nov 22 09:18:18 crc kubenswrapper[4853]: I1122 09:18:18.411825 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wkqmv\" (UniqueName: \"kubernetes.io/projected/0fa2dc9e-4884-499d-921c-ac6656e3d300-kube-api-access-wkqmv\") pod \"0fa2dc9e-4884-499d-921c-ac6656e3d300\" (UID: \"0fa2dc9e-4884-499d-921c-ac6656e3d300\") " Nov 22 09:18:18 crc kubenswrapper[4853]: I1122 09:18:18.419486 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0fa2dc9e-4884-499d-921c-ac6656e3d300-kube-api-access-wkqmv" (OuterVolumeSpecName: "kube-api-access-wkqmv") pod "0fa2dc9e-4884-499d-921c-ac6656e3d300" (UID: "0fa2dc9e-4884-499d-921c-ac6656e3d300"). InnerVolumeSpecName "kube-api-access-wkqmv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:18:18 crc kubenswrapper[4853]: I1122 09:18:18.515401 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wkqmv\" (UniqueName: \"kubernetes.io/projected/0fa2dc9e-4884-499d-921c-ac6656e3d300-kube-api-access-wkqmv\") on node \"crc\" DevicePath \"\"" Nov 22 09:18:18 crc kubenswrapper[4853]: I1122 09:18:18.601405 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0fa2dc9e-4884-499d-921c-ac6656e3d300-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "0fa2dc9e-4884-499d-921c-ac6656e3d300" (UID: "0fa2dc9e-4884-499d-921c-ac6656e3d300"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 09:18:18 crc kubenswrapper[4853]: I1122 09:18:18.619636 4853 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/0fa2dc9e-4884-499d-921c-ac6656e3d300-must-gather-output\") on node \"crc\" DevicePath \"\"" Nov 22 09:18:19 crc kubenswrapper[4853]: I1122 09:18:19.331107 4853 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-nq2xj/must-gather-vff7c" Nov 22 09:18:19 crc kubenswrapper[4853]: I1122 09:18:19.761471 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0fa2dc9e-4884-499d-921c-ac6656e3d300" path="/var/lib/kubelet/pods/0fa2dc9e-4884-499d-921c-ac6656e3d300/volumes" Nov 22 09:18:48 crc kubenswrapper[4853]: I1122 09:18:48.653261 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-vwqg4"] Nov 22 09:18:48 crc kubenswrapper[4853]: E1122 09:18:48.654242 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04bb15d6-7194-4e24-9514-d9745bbbefdc" containerName="collect-profiles" Nov 22 09:18:48 crc kubenswrapper[4853]: I1122 09:18:48.654255 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="04bb15d6-7194-4e24-9514-d9745bbbefdc" containerName="collect-profiles" Nov 22 09:18:48 crc kubenswrapper[4853]: E1122 09:18:48.654298 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0fa2dc9e-4884-499d-921c-ac6656e3d300" containerName="gather" Nov 22 09:18:48 crc kubenswrapper[4853]: I1122 09:18:48.654304 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fa2dc9e-4884-499d-921c-ac6656e3d300" containerName="gather" Nov 22 09:18:48 crc kubenswrapper[4853]: E1122 09:18:48.654312 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0fa2dc9e-4884-499d-921c-ac6656e3d300" containerName="copy" Nov 22 09:18:48 crc kubenswrapper[4853]: I1122 09:18:48.654318 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fa2dc9e-4884-499d-921c-ac6656e3d300" containerName="copy" Nov 22 09:18:48 crc kubenswrapper[4853]: I1122 09:18:48.654526 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="0fa2dc9e-4884-499d-921c-ac6656e3d300" containerName="gather" Nov 22 09:18:48 crc kubenswrapper[4853]: I1122 09:18:48.654545 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="04bb15d6-7194-4e24-9514-d9745bbbefdc" containerName="collect-profiles" Nov 22 09:18:48 crc kubenswrapper[4853]: I1122 09:18:48.654560 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="0fa2dc9e-4884-499d-921c-ac6656e3d300" containerName="copy" Nov 22 09:18:48 crc kubenswrapper[4853]: I1122 09:18:48.656253 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vwqg4" Nov 22 09:18:48 crc kubenswrapper[4853]: I1122 09:18:48.663989 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vwqg4"] Nov 22 09:18:48 crc kubenswrapper[4853]: I1122 09:18:48.759217 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d14f7005-580c-4915-ac60-b521854915b7-catalog-content\") pod \"redhat-marketplace-vwqg4\" (UID: \"d14f7005-580c-4915-ac60-b521854915b7\") " pod="openshift-marketplace/redhat-marketplace-vwqg4" Nov 22 09:18:48 crc kubenswrapper[4853]: I1122 09:18:48.759284 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d14f7005-580c-4915-ac60-b521854915b7-utilities\") pod \"redhat-marketplace-vwqg4\" (UID: \"d14f7005-580c-4915-ac60-b521854915b7\") " pod="openshift-marketplace/redhat-marketplace-vwqg4" Nov 22 09:18:48 crc kubenswrapper[4853]: I1122 09:18:48.759386 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4s22n\" (UniqueName: \"kubernetes.io/projected/d14f7005-580c-4915-ac60-b521854915b7-kube-api-access-4s22n\") pod \"redhat-marketplace-vwqg4\" (UID: \"d14f7005-580c-4915-ac60-b521854915b7\") " pod="openshift-marketplace/redhat-marketplace-vwqg4" Nov 22 09:18:48 crc kubenswrapper[4853]: I1122 09:18:48.862359 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d14f7005-580c-4915-ac60-b521854915b7-catalog-content\") pod \"redhat-marketplace-vwqg4\" (UID: \"d14f7005-580c-4915-ac60-b521854915b7\") " pod="openshift-marketplace/redhat-marketplace-vwqg4" Nov 22 09:18:48 crc kubenswrapper[4853]: I1122 09:18:48.862730 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d14f7005-580c-4915-ac60-b521854915b7-utilities\") pod \"redhat-marketplace-vwqg4\" (UID: \"d14f7005-580c-4915-ac60-b521854915b7\") " pod="openshift-marketplace/redhat-marketplace-vwqg4" Nov 22 09:18:48 crc kubenswrapper[4853]: I1122 09:18:48.862778 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4s22n\" (UniqueName: \"kubernetes.io/projected/d14f7005-580c-4915-ac60-b521854915b7-kube-api-access-4s22n\") pod \"redhat-marketplace-vwqg4\" (UID: \"d14f7005-580c-4915-ac60-b521854915b7\") " pod="openshift-marketplace/redhat-marketplace-vwqg4" Nov 22 09:18:48 crc kubenswrapper[4853]: I1122 09:18:48.863861 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d14f7005-580c-4915-ac60-b521854915b7-catalog-content\") pod \"redhat-marketplace-vwqg4\" (UID: \"d14f7005-580c-4915-ac60-b521854915b7\") " pod="openshift-marketplace/redhat-marketplace-vwqg4" Nov 22 09:18:48 crc kubenswrapper[4853]: I1122 09:18:48.864052 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d14f7005-580c-4915-ac60-b521854915b7-utilities\") pod \"redhat-marketplace-vwqg4\" (UID: \"d14f7005-580c-4915-ac60-b521854915b7\") " pod="openshift-marketplace/redhat-marketplace-vwqg4" Nov 22 09:18:48 crc kubenswrapper[4853]: I1122 09:18:48.886619 4853 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-4s22n\" (UniqueName: \"kubernetes.io/projected/d14f7005-580c-4915-ac60-b521854915b7-kube-api-access-4s22n\") pod \"redhat-marketplace-vwqg4\" (UID: \"d14f7005-580c-4915-ac60-b521854915b7\") " pod="openshift-marketplace/redhat-marketplace-vwqg4"
Nov 22 09:18:48 crc kubenswrapper[4853]: I1122 09:18:48.980052 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vwqg4"
Nov 22 09:18:49 crc kubenswrapper[4853]: I1122 09:18:49.463168 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vwqg4"]
Nov 22 09:18:49 crc kubenswrapper[4853]: I1122 09:18:49.662397 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vwqg4" event={"ID":"d14f7005-580c-4915-ac60-b521854915b7","Type":"ContainerStarted","Data":"68d6d6d5da1f44a9361e70168863ab3502177d7a401c503289719518e3ca0f4e"}
Nov 22 09:18:50 crc kubenswrapper[4853]: I1122 09:18:50.675263 4853 generic.go:334] "Generic (PLEG): container finished" podID="d14f7005-580c-4915-ac60-b521854915b7" containerID="a918d5442827375d585e58d8e09c87a3b52e1dc0e97fda4e8dcfe1a5b8f23929" exitCode=0
Nov 22 09:18:50 crc kubenswrapper[4853]: I1122 09:18:50.675340 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vwqg4" event={"ID":"d14f7005-580c-4915-ac60-b521854915b7","Type":"ContainerDied","Data":"a918d5442827375d585e58d8e09c87a3b52e1dc0e97fda4e8dcfe1a5b8f23929"}
Nov 22 09:18:51 crc kubenswrapper[4853]: I1122 09:18:51.689318 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vwqg4" event={"ID":"d14f7005-580c-4915-ac60-b521854915b7","Type":"ContainerStarted","Data":"109f86d791017c6b534847cb61ed440b6ea94d9cb22461fbdbb088543e3b1df3"}
Nov 22 09:18:52 crc kubenswrapper[4853]: I1122 09:18:52.702882 4853 generic.go:334] "Generic (PLEG): container finished" podID="d14f7005-580c-4915-ac60-b521854915b7" containerID="109f86d791017c6b534847cb61ed440b6ea94d9cb22461fbdbb088543e3b1df3" exitCode=0
Nov 22 09:18:52 crc kubenswrapper[4853]: I1122 09:18:52.702930 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vwqg4" event={"ID":"d14f7005-580c-4915-ac60-b521854915b7","Type":"ContainerDied","Data":"109f86d791017c6b534847cb61ed440b6ea94d9cb22461fbdbb088543e3b1df3"}
Nov 22 09:18:53 crc kubenswrapper[4853]: I1122 09:18:53.715739 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vwqg4" event={"ID":"d14f7005-580c-4915-ac60-b521854915b7","Type":"ContainerStarted","Data":"f48b1c4d075d87f1dfc7e22627eafa2e749bd40e4425f8854c119fa4969f7683"}
Nov 22 09:18:53 crc kubenswrapper[4853]: I1122 09:18:53.737088 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-vwqg4" podStartSLOduration=3.264236273 podStartE2EDuration="5.737065994s" podCreationTimestamp="2025-11-22 09:18:48 +0000 UTC" firstStartedPulling="2025-11-22 09:18:50.677494149 +0000 UTC m=+7729.518116775" lastFinishedPulling="2025-11-22 09:18:53.15032387 +0000 UTC m=+7731.990946496" observedRunningTime="2025-11-22 09:18:53.728682338 +0000 UTC m=+7732.569304964" watchObservedRunningTime="2025-11-22 09:18:53.737065994 +0000 UTC m=+7732.577688630"
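The pod_startup_latency_tracker entry above is internally consistent: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration appears to be that E2E figure with the image-pull window (lastFinishedPulling minus firstStartedPulling) subtracted. A quick check of the arithmetic under that reading, with the timestamps truncated to microseconds for strptime:

```python
from datetime import datetime

# Values copied from the "Observed pod startup duration" entry above,
# truncated to microsecond precision so %f can parse them.
FMT = "%Y-%m-%d %H:%M:%S.%f"
created    = datetime.strptime("2025-11-22 09:18:48.000000", FMT)
first_pull = datetime.strptime("2025-11-22 09:18:50.677494", FMT)
last_pull  = datetime.strptime("2025-11-22 09:18:53.150323", FMT)
watch_seen = datetime.strptime("2025-11-22 09:18:53.737065", FMT)

pull = (last_pull - first_pull).total_seconds()  # ~2.472830s spent pulling images
e2e  = (watch_seen - created).total_seconds()    # ~5.737065s, podStartE2EDuration
print(f"pull={pull:.6f}s  e2e={e2e:.6f}s  slo={e2e - pull:.6f}s")
# -> slo ~ 3.264236s, matching podStartSLOduration=3.264236273
```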
pod="openshift-marketplace/redhat-marketplace-vwqg4" Nov 22 09:18:58 crc kubenswrapper[4853]: I1122 09:18:58.981153 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-vwqg4" Nov 22 09:18:59 crc kubenswrapper[4853]: I1122 09:18:59.029630 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-vwqg4" Nov 22 09:18:59 crc kubenswrapper[4853]: I1122 09:18:59.843036 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-vwqg4" Nov 22 09:18:59 crc kubenswrapper[4853]: I1122 09:18:59.891411 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vwqg4"] Nov 22 09:19:01 crc kubenswrapper[4853]: I1122 09:19:01.804403 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-vwqg4" podUID="d14f7005-580c-4915-ac60-b521854915b7" containerName="registry-server" containerID="cri-o://f48b1c4d075d87f1dfc7e22627eafa2e749bd40e4425f8854c119fa4969f7683" gracePeriod=2 Nov 22 09:19:02 crc kubenswrapper[4853]: I1122 09:19:02.300558 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vwqg4" Nov 22 09:19:02 crc kubenswrapper[4853]: I1122 09:19:02.396475 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d14f7005-580c-4915-ac60-b521854915b7-catalog-content\") pod \"d14f7005-580c-4915-ac60-b521854915b7\" (UID: \"d14f7005-580c-4915-ac60-b521854915b7\") " Nov 22 09:19:02 crc kubenswrapper[4853]: I1122 09:19:02.396821 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d14f7005-580c-4915-ac60-b521854915b7-utilities\") pod \"d14f7005-580c-4915-ac60-b521854915b7\" (UID: \"d14f7005-580c-4915-ac60-b521854915b7\") " Nov 22 09:19:02 crc kubenswrapper[4853]: I1122 09:19:02.396986 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4s22n\" (UniqueName: \"kubernetes.io/projected/d14f7005-580c-4915-ac60-b521854915b7-kube-api-access-4s22n\") pod \"d14f7005-580c-4915-ac60-b521854915b7\" (UID: \"d14f7005-580c-4915-ac60-b521854915b7\") " Nov 22 09:19:02 crc kubenswrapper[4853]: I1122 09:19:02.397709 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d14f7005-580c-4915-ac60-b521854915b7-utilities" (OuterVolumeSpecName: "utilities") pod "d14f7005-580c-4915-ac60-b521854915b7" (UID: "d14f7005-580c-4915-ac60-b521854915b7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 09:19:02 crc kubenswrapper[4853]: I1122 09:19:02.399164 4853 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d14f7005-580c-4915-ac60-b521854915b7-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 09:19:02 crc kubenswrapper[4853]: I1122 09:19:02.415071 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d14f7005-580c-4915-ac60-b521854915b7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d14f7005-580c-4915-ac60-b521854915b7" (UID: "d14f7005-580c-4915-ac60-b521854915b7"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 09:19:02 crc kubenswrapper[4853]: I1122 09:19:02.502096 4853 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d14f7005-580c-4915-ac60-b521854915b7-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 09:19:02 crc kubenswrapper[4853]: I1122 09:19:02.659227 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d14f7005-580c-4915-ac60-b521854915b7-kube-api-access-4s22n" (OuterVolumeSpecName: "kube-api-access-4s22n") pod "d14f7005-580c-4915-ac60-b521854915b7" (UID: "d14f7005-580c-4915-ac60-b521854915b7"). InnerVolumeSpecName "kube-api-access-4s22n". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:19:02 crc kubenswrapper[4853]: I1122 09:19:02.706468 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4s22n\" (UniqueName: \"kubernetes.io/projected/d14f7005-580c-4915-ac60-b521854915b7-kube-api-access-4s22n\") on node \"crc\" DevicePath \"\"" Nov 22 09:19:02 crc kubenswrapper[4853]: I1122 09:19:02.819437 4853 generic.go:334] "Generic (PLEG): container finished" podID="d14f7005-580c-4915-ac60-b521854915b7" containerID="f48b1c4d075d87f1dfc7e22627eafa2e749bd40e4425f8854c119fa4969f7683" exitCode=0 Nov 22 09:19:02 crc kubenswrapper[4853]: I1122 09:19:02.819489 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vwqg4" event={"ID":"d14f7005-580c-4915-ac60-b521854915b7","Type":"ContainerDied","Data":"f48b1c4d075d87f1dfc7e22627eafa2e749bd40e4425f8854c119fa4969f7683"} Nov 22 09:19:02 crc kubenswrapper[4853]: I1122 09:19:02.819521 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vwqg4" Nov 22 09:19:02 crc kubenswrapper[4853]: I1122 09:19:02.819547 4853 scope.go:117] "RemoveContainer" containerID="f48b1c4d075d87f1dfc7e22627eafa2e749bd40e4425f8854c119fa4969f7683" Nov 22 09:19:02 crc kubenswrapper[4853]: I1122 09:19:02.819531 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vwqg4" event={"ID":"d14f7005-580c-4915-ac60-b521854915b7","Type":"ContainerDied","Data":"68d6d6d5da1f44a9361e70168863ab3502177d7a401c503289719518e3ca0f4e"} Nov 22 09:19:02 crc kubenswrapper[4853]: I1122 09:19:02.842466 4853 scope.go:117] "RemoveContainer" containerID="109f86d791017c6b534847cb61ed440b6ea94d9cb22461fbdbb088543e3b1df3" Nov 22 09:19:02 crc kubenswrapper[4853]: I1122 09:19:02.859527 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vwqg4"] Nov 22 09:19:02 crc kubenswrapper[4853]: I1122 09:19:02.873309 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-vwqg4"] Nov 22 09:19:02 crc kubenswrapper[4853]: I1122 09:19:02.893139 4853 scope.go:117] "RemoveContainer" containerID="a918d5442827375d585e58d8e09c87a3b52e1dc0e97fda4e8dcfe1a5b8f23929" Nov 22 09:19:02 crc kubenswrapper[4853]: I1122 09:19:02.950328 4853 scope.go:117] "RemoveContainer" containerID="f48b1c4d075d87f1dfc7e22627eafa2e749bd40e4425f8854c119fa4969f7683" Nov 22 09:19:02 crc kubenswrapper[4853]: E1122 09:19:02.951191 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f48b1c4d075d87f1dfc7e22627eafa2e749bd40e4425f8854c119fa4969f7683\": container with ID starting with 
f48b1c4d075d87f1dfc7e22627eafa2e749bd40e4425f8854c119fa4969f7683 not found: ID does not exist" containerID="f48b1c4d075d87f1dfc7e22627eafa2e749bd40e4425f8854c119fa4969f7683" Nov 22 09:19:02 crc kubenswrapper[4853]: I1122 09:19:02.951230 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f48b1c4d075d87f1dfc7e22627eafa2e749bd40e4425f8854c119fa4969f7683"} err="failed to get container status \"f48b1c4d075d87f1dfc7e22627eafa2e749bd40e4425f8854c119fa4969f7683\": rpc error: code = NotFound desc = could not find container \"f48b1c4d075d87f1dfc7e22627eafa2e749bd40e4425f8854c119fa4969f7683\": container with ID starting with f48b1c4d075d87f1dfc7e22627eafa2e749bd40e4425f8854c119fa4969f7683 not found: ID does not exist" Nov 22 09:19:02 crc kubenswrapper[4853]: I1122 09:19:02.951250 4853 scope.go:117] "RemoveContainer" containerID="109f86d791017c6b534847cb61ed440b6ea94d9cb22461fbdbb088543e3b1df3" Nov 22 09:19:02 crc kubenswrapper[4853]: E1122 09:19:02.953662 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"109f86d791017c6b534847cb61ed440b6ea94d9cb22461fbdbb088543e3b1df3\": container with ID starting with 109f86d791017c6b534847cb61ed440b6ea94d9cb22461fbdbb088543e3b1df3 not found: ID does not exist" containerID="109f86d791017c6b534847cb61ed440b6ea94d9cb22461fbdbb088543e3b1df3" Nov 22 09:19:02 crc kubenswrapper[4853]: I1122 09:19:02.953706 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"109f86d791017c6b534847cb61ed440b6ea94d9cb22461fbdbb088543e3b1df3"} err="failed to get container status \"109f86d791017c6b534847cb61ed440b6ea94d9cb22461fbdbb088543e3b1df3\": rpc error: code = NotFound desc = could not find container \"109f86d791017c6b534847cb61ed440b6ea94d9cb22461fbdbb088543e3b1df3\": container with ID starting with 109f86d791017c6b534847cb61ed440b6ea94d9cb22461fbdbb088543e3b1df3 not found: ID does not exist" Nov 22 09:19:02 crc kubenswrapper[4853]: I1122 09:19:02.953732 4853 scope.go:117] "RemoveContainer" containerID="a918d5442827375d585e58d8e09c87a3b52e1dc0e97fda4e8dcfe1a5b8f23929" Nov 22 09:19:02 crc kubenswrapper[4853]: E1122 09:19:02.954584 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a918d5442827375d585e58d8e09c87a3b52e1dc0e97fda4e8dcfe1a5b8f23929\": container with ID starting with a918d5442827375d585e58d8e09c87a3b52e1dc0e97fda4e8dcfe1a5b8f23929 not found: ID does not exist" containerID="a918d5442827375d585e58d8e09c87a3b52e1dc0e97fda4e8dcfe1a5b8f23929" Nov 22 09:19:02 crc kubenswrapper[4853]: I1122 09:19:02.954640 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a918d5442827375d585e58d8e09c87a3b52e1dc0e97fda4e8dcfe1a5b8f23929"} err="failed to get container status \"a918d5442827375d585e58d8e09c87a3b52e1dc0e97fda4e8dcfe1a5b8f23929\": rpc error: code = NotFound desc = could not find container \"a918d5442827375d585e58d8e09c87a3b52e1dc0e97fda4e8dcfe1a5b8f23929\": container with ID starting with a918d5442827375d585e58d8e09c87a3b52e1dc0e97fda4e8dcfe1a5b8f23929 not found: ID does not exist" Nov 22 09:19:03 crc kubenswrapper[4853]: I1122 09:19:03.762228 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d14f7005-580c-4915-ac60-b521854915b7" path="/var/lib/kubelet/pods/d14f7005-580c-4915-ac60-b521854915b7/volumes" Nov 22 09:19:06 crc kubenswrapper[4853]: I1122 09:19:06.821795 
4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-6hk2v"] Nov 22 09:19:06 crc kubenswrapper[4853]: E1122 09:19:06.823051 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d14f7005-580c-4915-ac60-b521854915b7" containerName="registry-server" Nov 22 09:19:06 crc kubenswrapper[4853]: I1122 09:19:06.823072 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="d14f7005-580c-4915-ac60-b521854915b7" containerName="registry-server" Nov 22 09:19:06 crc kubenswrapper[4853]: E1122 09:19:06.823119 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d14f7005-580c-4915-ac60-b521854915b7" containerName="extract-content" Nov 22 09:19:06 crc kubenswrapper[4853]: I1122 09:19:06.823128 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="d14f7005-580c-4915-ac60-b521854915b7" containerName="extract-content" Nov 22 09:19:06 crc kubenswrapper[4853]: E1122 09:19:06.823163 4853 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d14f7005-580c-4915-ac60-b521854915b7" containerName="extract-utilities" Nov 22 09:19:06 crc kubenswrapper[4853]: I1122 09:19:06.823174 4853 state_mem.go:107] "Deleted CPUSet assignment" podUID="d14f7005-580c-4915-ac60-b521854915b7" containerName="extract-utilities" Nov 22 09:19:06 crc kubenswrapper[4853]: I1122 09:19:06.823529 4853 memory_manager.go:354] "RemoveStaleState removing state" podUID="d14f7005-580c-4915-ac60-b521854915b7" containerName="registry-server" Nov 22 09:19:06 crc kubenswrapper[4853]: I1122 09:19:06.826216 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6hk2v" Nov 22 09:19:06 crc kubenswrapper[4853]: I1122 09:19:06.834452 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6hk2v"] Nov 22 09:19:06 crc kubenswrapper[4853]: I1122 09:19:06.909807 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rdchp\" (UniqueName: \"kubernetes.io/projected/b222b380-0fca-4527-9b05-ce205a8ba0e2-kube-api-access-rdchp\") pod \"community-operators-6hk2v\" (UID: \"b222b380-0fca-4527-9b05-ce205a8ba0e2\") " pod="openshift-marketplace/community-operators-6hk2v" Nov 22 09:19:06 crc kubenswrapper[4853]: I1122 09:19:06.910224 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b222b380-0fca-4527-9b05-ce205a8ba0e2-catalog-content\") pod \"community-operators-6hk2v\" (UID: \"b222b380-0fca-4527-9b05-ce205a8ba0e2\") " pod="openshift-marketplace/community-operators-6hk2v" Nov 22 09:19:06 crc kubenswrapper[4853]: I1122 09:19:06.910347 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b222b380-0fca-4527-9b05-ce205a8ba0e2-utilities\") pod \"community-operators-6hk2v\" (UID: \"b222b380-0fca-4527-9b05-ce205a8ba0e2\") " pod="openshift-marketplace/community-operators-6hk2v" Nov 22 09:19:07 crc kubenswrapper[4853]: I1122 09:19:07.013297 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdchp\" (UniqueName: \"kubernetes.io/projected/b222b380-0fca-4527-9b05-ce205a8ba0e2-kube-api-access-rdchp\") pod \"community-operators-6hk2v\" (UID: \"b222b380-0fca-4527-9b05-ce205a8ba0e2\") " pod="openshift-marketplace/community-operators-6hk2v" Nov 22 09:19:07 crc 
kubenswrapper[4853]: I1122 09:19:07.013419 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b222b380-0fca-4527-9b05-ce205a8ba0e2-catalog-content\") pod \"community-operators-6hk2v\" (UID: \"b222b380-0fca-4527-9b05-ce205a8ba0e2\") " pod="openshift-marketplace/community-operators-6hk2v" Nov 22 09:19:07 crc kubenswrapper[4853]: I1122 09:19:07.013454 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b222b380-0fca-4527-9b05-ce205a8ba0e2-utilities\") pod \"community-operators-6hk2v\" (UID: \"b222b380-0fca-4527-9b05-ce205a8ba0e2\") " pod="openshift-marketplace/community-operators-6hk2v" Nov 22 09:19:07 crc kubenswrapper[4853]: I1122 09:19:07.014061 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b222b380-0fca-4527-9b05-ce205a8ba0e2-catalog-content\") pod \"community-operators-6hk2v\" (UID: \"b222b380-0fca-4527-9b05-ce205a8ba0e2\") " pod="openshift-marketplace/community-operators-6hk2v" Nov 22 09:19:07 crc kubenswrapper[4853]: I1122 09:19:07.014092 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b222b380-0fca-4527-9b05-ce205a8ba0e2-utilities\") pod \"community-operators-6hk2v\" (UID: \"b222b380-0fca-4527-9b05-ce205a8ba0e2\") " pod="openshift-marketplace/community-operators-6hk2v" Nov 22 09:19:07 crc kubenswrapper[4853]: I1122 09:19:07.031194 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdchp\" (UniqueName: \"kubernetes.io/projected/b222b380-0fca-4527-9b05-ce205a8ba0e2-kube-api-access-rdchp\") pod \"community-operators-6hk2v\" (UID: \"b222b380-0fca-4527-9b05-ce205a8ba0e2\") " pod="openshift-marketplace/community-operators-6hk2v" Nov 22 09:19:07 crc kubenswrapper[4853]: I1122 09:19:07.164302 4853 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-6hk2v" Nov 22 09:19:07 crc kubenswrapper[4853]: I1122 09:19:07.619988 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6hk2v"] Nov 22 09:19:07 crc kubenswrapper[4853]: W1122 09:19:07.628545 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb222b380_0fca_4527_9b05_ce205a8ba0e2.slice/crio-ec2a65c047aa77f9ae89648cea9856d82a07d6bfb59c3668388b9552bd18041a WatchSource:0}: Error finding container ec2a65c047aa77f9ae89648cea9856d82a07d6bfb59c3668388b9552bd18041a: Status 404 returned error can't find the container with id ec2a65c047aa77f9ae89648cea9856d82a07d6bfb59c3668388b9552bd18041a Nov 22 09:19:07 crc kubenswrapper[4853]: I1122 09:19:07.892892 4853 generic.go:334] "Generic (PLEG): container finished" podID="b222b380-0fca-4527-9b05-ce205a8ba0e2" containerID="1bc92c19a1c3dda16c7d70e141b6b0644afb3122d6ae96f04975d20e72218f8a" exitCode=0 Nov 22 09:19:07 crc kubenswrapper[4853]: I1122 09:19:07.892969 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6hk2v" event={"ID":"b222b380-0fca-4527-9b05-ce205a8ba0e2","Type":"ContainerDied","Data":"1bc92c19a1c3dda16c7d70e141b6b0644afb3122d6ae96f04975d20e72218f8a"} Nov 22 09:19:07 crc kubenswrapper[4853]: I1122 09:19:07.893000 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6hk2v" event={"ID":"b222b380-0fca-4527-9b05-ce205a8ba0e2","Type":"ContainerStarted","Data":"ec2a65c047aa77f9ae89648cea9856d82a07d6bfb59c3668388b9552bd18041a"} Nov 22 09:19:08 crc kubenswrapper[4853]: I1122 09:19:08.904719 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6hk2v" event={"ID":"b222b380-0fca-4527-9b05-ce205a8ba0e2","Type":"ContainerStarted","Data":"9cdbb148ebb918dc8e239d29a1e736c69ad2afb1c1766d8969c513abeb94e2e3"} Nov 22 09:19:09 crc kubenswrapper[4853]: I1122 09:19:09.765459 4853 scope.go:117] "RemoveContainer" containerID="9c6aed72818c029ec439bfbd9a8e694361168497429f5a6d63162e987293a5f1" Nov 22 09:19:09 crc kubenswrapper[4853]: I1122 09:19:09.817426 4853 scope.go:117] "RemoveContainer" containerID="38893a5c5533845f5d723578a8677f657f0131dd3dc20fb9bc5c684aedb6b761" Nov 22 09:19:09 crc kubenswrapper[4853]: I1122 09:19:09.862075 4853 scope.go:117] "RemoveContainer" containerID="d108c542705a6ea728fe159c191713898223ef44b216f8eb03c739079541c756" Nov 22 09:19:10 crc kubenswrapper[4853]: I1122 09:19:10.936247 4853 generic.go:334] "Generic (PLEG): container finished" podID="b222b380-0fca-4527-9b05-ce205a8ba0e2" containerID="9cdbb148ebb918dc8e239d29a1e736c69ad2afb1c1766d8969c513abeb94e2e3" exitCode=0 Nov 22 09:19:10 crc kubenswrapper[4853]: I1122 09:19:10.936607 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6hk2v" event={"ID":"b222b380-0fca-4527-9b05-ce205a8ba0e2","Type":"ContainerDied","Data":"9cdbb148ebb918dc8e239d29a1e736c69ad2afb1c1766d8969c513abeb94e2e3"} Nov 22 09:19:11 crc kubenswrapper[4853]: I1122 09:19:11.952123 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6hk2v" event={"ID":"b222b380-0fca-4527-9b05-ce205a8ba0e2","Type":"ContainerStarted","Data":"6b12e0f65ddee3a6c45a8274cb8639680eb412ab9824cd89d0deef22b833009c"} Nov 22 09:19:11 crc kubenswrapper[4853]: I1122 09:19:11.981012 4853 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-6hk2v" podStartSLOduration=2.521432298 podStartE2EDuration="5.98099044s" podCreationTimestamp="2025-11-22 09:19:06 +0000 UTC" firstStartedPulling="2025-11-22 09:19:07.89664464 +0000 UTC m=+7746.737267266" lastFinishedPulling="2025-11-22 09:19:11.356202782 +0000 UTC m=+7750.196825408" observedRunningTime="2025-11-22 09:19:11.977999349 +0000 UTC m=+7750.818621975" watchObservedRunningTime="2025-11-22 09:19:11.98099044 +0000 UTC m=+7750.821613066" Nov 22 09:19:16 crc kubenswrapper[4853]: I1122 09:19:16.384520 4853 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-mkqhj"] Nov 22 09:19:16 crc kubenswrapper[4853]: I1122 09:19:16.387907 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mkqhj" Nov 22 09:19:16 crc kubenswrapper[4853]: I1122 09:19:16.420948 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mkqhj"] Nov 22 09:19:16 crc kubenswrapper[4853]: I1122 09:19:16.448337 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/574b712c-7fbd-4822-ab9f-f76f0833c50c-catalog-content\") pod \"redhat-operators-mkqhj\" (UID: \"574b712c-7fbd-4822-ab9f-f76f0833c50c\") " pod="openshift-marketplace/redhat-operators-mkqhj" Nov 22 09:19:16 crc kubenswrapper[4853]: I1122 09:19:16.448817 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/574b712c-7fbd-4822-ab9f-f76f0833c50c-utilities\") pod \"redhat-operators-mkqhj\" (UID: \"574b712c-7fbd-4822-ab9f-f76f0833c50c\") " pod="openshift-marketplace/redhat-operators-mkqhj" Nov 22 09:19:16 crc kubenswrapper[4853]: I1122 09:19:16.449210 4853 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2mfgg\" (UniqueName: \"kubernetes.io/projected/574b712c-7fbd-4822-ab9f-f76f0833c50c-kube-api-access-2mfgg\") pod \"redhat-operators-mkqhj\" (UID: \"574b712c-7fbd-4822-ab9f-f76f0833c50c\") " pod="openshift-marketplace/redhat-operators-mkqhj" Nov 22 09:19:16 crc kubenswrapper[4853]: I1122 09:19:16.551890 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2mfgg\" (UniqueName: \"kubernetes.io/projected/574b712c-7fbd-4822-ab9f-f76f0833c50c-kube-api-access-2mfgg\") pod \"redhat-operators-mkqhj\" (UID: \"574b712c-7fbd-4822-ab9f-f76f0833c50c\") " pod="openshift-marketplace/redhat-operators-mkqhj" Nov 22 09:19:16 crc kubenswrapper[4853]: I1122 09:19:16.552585 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/574b712c-7fbd-4822-ab9f-f76f0833c50c-catalog-content\") pod \"redhat-operators-mkqhj\" (UID: \"574b712c-7fbd-4822-ab9f-f76f0833c50c\") " pod="openshift-marketplace/redhat-operators-mkqhj" Nov 22 09:19:16 crc kubenswrapper[4853]: I1122 09:19:16.552656 4853 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/574b712c-7fbd-4822-ab9f-f76f0833c50c-utilities\") pod \"redhat-operators-mkqhj\" (UID: \"574b712c-7fbd-4822-ab9f-f76f0833c50c\") " pod="openshift-marketplace/redhat-operators-mkqhj" Nov 22 09:19:16 crc kubenswrapper[4853]: I1122 
09:19:16.553167 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/574b712c-7fbd-4822-ab9f-f76f0833c50c-catalog-content\") pod \"redhat-operators-mkqhj\" (UID: \"574b712c-7fbd-4822-ab9f-f76f0833c50c\") " pod="openshift-marketplace/redhat-operators-mkqhj" Nov 22 09:19:16 crc kubenswrapper[4853]: I1122 09:19:16.553258 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/574b712c-7fbd-4822-ab9f-f76f0833c50c-utilities\") pod \"redhat-operators-mkqhj\" (UID: \"574b712c-7fbd-4822-ab9f-f76f0833c50c\") " pod="openshift-marketplace/redhat-operators-mkqhj" Nov 22 09:19:16 crc kubenswrapper[4853]: I1122 09:19:16.587691 4853 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2mfgg\" (UniqueName: \"kubernetes.io/projected/574b712c-7fbd-4822-ab9f-f76f0833c50c-kube-api-access-2mfgg\") pod \"redhat-operators-mkqhj\" (UID: \"574b712c-7fbd-4822-ab9f-f76f0833c50c\") " pod="openshift-marketplace/redhat-operators-mkqhj" Nov 22 09:19:16 crc kubenswrapper[4853]: I1122 09:19:16.731079 4853 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mkqhj" Nov 22 09:19:17 crc kubenswrapper[4853]: I1122 09:19:17.164682 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-6hk2v" Nov 22 09:19:17 crc kubenswrapper[4853]: I1122 09:19:17.165285 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-6hk2v" Nov 22 09:19:17 crc kubenswrapper[4853]: I1122 09:19:17.196819 4853 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mkqhj"] Nov 22 09:19:17 crc kubenswrapper[4853]: W1122 09:19:17.201357 4853 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod574b712c_7fbd_4822_ab9f_f76f0833c50c.slice/crio-3c03effe2422baff869da3b5ceb12bbb414d8cf84ad59d67ff6dd46294f50578 WatchSource:0}: Error finding container 3c03effe2422baff869da3b5ceb12bbb414d8cf84ad59d67ff6dd46294f50578: Status 404 returned error can't find the container with id 3c03effe2422baff869da3b5ceb12bbb414d8cf84ad59d67ff6dd46294f50578 Nov 22 09:19:18 crc kubenswrapper[4853]: I1122 09:19:18.019955 4853 generic.go:334] "Generic (PLEG): container finished" podID="574b712c-7fbd-4822-ab9f-f76f0833c50c" containerID="b25f247553961adf9d3948dbcc76d0096d2f43222f1e262c6312a070022fe9c0" exitCode=0 Nov 22 09:19:18 crc kubenswrapper[4853]: I1122 09:19:18.020115 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mkqhj" event={"ID":"574b712c-7fbd-4822-ab9f-f76f0833c50c","Type":"ContainerDied","Data":"b25f247553961adf9d3948dbcc76d0096d2f43222f1e262c6312a070022fe9c0"} Nov 22 09:19:18 crc kubenswrapper[4853]: I1122 09:19:18.020381 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mkqhj" event={"ID":"574b712c-7fbd-4822-ab9f-f76f0833c50c","Type":"ContainerStarted","Data":"3c03effe2422baff869da3b5ceb12bbb414d8cf84ad59d67ff6dd46294f50578"} Nov 22 09:19:18 crc kubenswrapper[4853]: I1122 09:19:18.221546 4853 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-6hk2v" podUID="b222b380-0fca-4527-9b05-ce205a8ba0e2" containerName="registry-server" probeResult="failure" 
Nov 22 09:19:18 crc kubenswrapper[4853]: I1122 09:19:18.221546 4853 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-6hk2v" podUID="b222b380-0fca-4527-9b05-ce205a8ba0e2" containerName="registry-server" probeResult="failure" output=<
Nov 22 09:19:18 crc kubenswrapper[4853]: timeout: failed to connect service ":50051" within 1s
Nov 22 09:19:18 crc kubenswrapper[4853]: >
Nov 22 09:19:19 crc kubenswrapper[4853]: I1122 09:19:19.037577 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mkqhj" event={"ID":"574b712c-7fbd-4822-ab9f-f76f0833c50c","Type":"ContainerStarted","Data":"94f340c390cafe5f04a546479bb5b2d850c0ea33296c0460f2ab4800e3993de6"}
Nov 22 09:19:24 crc kubenswrapper[4853]: I1122 09:19:24.094357 4853 generic.go:334] "Generic (PLEG): container finished" podID="574b712c-7fbd-4822-ab9f-f76f0833c50c" containerID="94f340c390cafe5f04a546479bb5b2d850c0ea33296c0460f2ab4800e3993de6" exitCode=0
Nov 22 09:19:24 crc kubenswrapper[4853]: I1122 09:19:24.094455 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mkqhj" event={"ID":"574b712c-7fbd-4822-ab9f-f76f0833c50c","Type":"ContainerDied","Data":"94f340c390cafe5f04a546479bb5b2d850c0ea33296c0460f2ab4800e3993de6"}
Nov 22 09:19:25 crc kubenswrapper[4853]: I1122 09:19:25.108552 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mkqhj" event={"ID":"574b712c-7fbd-4822-ab9f-f76f0833c50c","Type":"ContainerStarted","Data":"9be6ba5c6630ee6cf40af32cff70add7110047161421a5a53f7276029e6a2a57"}
Nov 22 09:19:25 crc kubenswrapper[4853]: I1122 09:19:25.137405 4853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-mkqhj" podStartSLOduration=2.671519571 podStartE2EDuration="9.137377434s" podCreationTimestamp="2025-11-22 09:19:16 +0000 UTC" firstStartedPulling="2025-11-22 09:19:18.022577703 +0000 UTC m=+7756.863200329" lastFinishedPulling="2025-11-22 09:19:24.488435556 +0000 UTC m=+7763.329058192" observedRunningTime="2025-11-22 09:19:25.124506508 +0000 UTC m=+7763.965129134" watchObservedRunningTime="2025-11-22 09:19:25.137377434 +0000 UTC m=+7763.978000060"
Nov 22 09:19:26 crc kubenswrapper[4853]: I1122 09:19:26.731198 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-mkqhj"
Nov 22 09:19:26 crc kubenswrapper[4853]: I1122 09:19:26.731534 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-mkqhj"
Nov 22 09:19:27 crc kubenswrapper[4853]: I1122 09:19:27.215380 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-6hk2v"
Nov 22 09:19:27 crc kubenswrapper[4853]: I1122 09:19:27.268298 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-6hk2v"
Nov 22 09:19:27 crc kubenswrapper[4853]: I1122 09:19:27.459795 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6hk2v"]
Nov 22 09:19:27 crc kubenswrapper[4853]: I1122 09:19:27.781275 4853 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-mkqhj" podUID="574b712c-7fbd-4822-ab9f-f76f0833c50c" containerName="registry-server" probeResult="failure" output=<
Nov 22 09:19:27 crc kubenswrapper[4853]: timeout: failed to connect service ":50051" within 1s
Nov 22 09:19:27 crc kubenswrapper[4853]: >
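
Both "Probe failed" blocks above come from the registry-server startup probe, which checks the pod's gRPC endpoint on port 50051 with a 1s budget. A minimal client-side equivalent using grpc-go's standard health service (grpc.health.v1), assuming the registry server exposes it, which is how these catalog servers conventionally report health:

    package main

    import (
        "context"
        "fmt"
        "time"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        healthpb "google.golang.org/grpc/health/grpc_health_v1"
    )

    func main() {
        // 1s budget, matching the "within 1s" timeout in the probe output.
        ctx, cancel := context.WithTimeout(context.Background(), time.Second)
        defer cancel()

        conn, err := grpc.NewClient("localhost:50051",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            fmt.Println("dial error:", err)
            return
        }
        defer conn.Close()

        resp, err := healthpb.NewHealthClient(conn).Check(ctx, &healthpb.HealthCheckRequest{})
        if err != nil {
            // The kubelet reports this case as probeResult="failure".
            fmt.Println("health check failed:", err)
            return
        }
        fmt.Println("serving status:", resp.GetStatus())
    }
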
Nov 22 09:19:29 crc kubenswrapper[4853]: I1122 09:19:29.152250 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-6hk2v" podUID="b222b380-0fca-4527-9b05-ce205a8ba0e2" containerName="registry-server" containerID="cri-o://6b12e0f65ddee3a6c45a8274cb8639680eb412ab9824cd89d0deef22b833009c" gracePeriod=2
Nov 22 09:19:30 crc kubenswrapper[4853]: I1122 09:19:30.166365 4853 generic.go:334] "Generic (PLEG): container finished" podID="b222b380-0fca-4527-9b05-ce205a8ba0e2" containerID="6b12e0f65ddee3a6c45a8274cb8639680eb412ab9824cd89d0deef22b833009c" exitCode=0
Nov 22 09:19:30 crc kubenswrapper[4853]: I1122 09:19:30.166426 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6hk2v" event={"ID":"b222b380-0fca-4527-9b05-ce205a8ba0e2","Type":"ContainerDied","Data":"6b12e0f65ddee3a6c45a8274cb8639680eb412ab9824cd89d0deef22b833009c"}
Nov 22 09:19:30 crc kubenswrapper[4853]: I1122 09:19:30.167071 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6hk2v" event={"ID":"b222b380-0fca-4527-9b05-ce205a8ba0e2","Type":"ContainerDied","Data":"ec2a65c047aa77f9ae89648cea9856d82a07d6bfb59c3668388b9552bd18041a"}
Nov 22 09:19:30 crc kubenswrapper[4853]: I1122 09:19:30.167088 4853 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ec2a65c047aa77f9ae89648cea9856d82a07d6bfb59c3668388b9552bd18041a"
Nov 22 09:19:30 crc kubenswrapper[4853]: I1122 09:19:30.222411 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6hk2v"
Nov 22 09:19:30 crc kubenswrapper[4853]: I1122 09:19:30.297407 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b222b380-0fca-4527-9b05-ce205a8ba0e2-utilities\") pod \"b222b380-0fca-4527-9b05-ce205a8ba0e2\" (UID: \"b222b380-0fca-4527-9b05-ce205a8ba0e2\") "
Nov 22 09:19:30 crc kubenswrapper[4853]: I1122 09:19:30.297485 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rdchp\" (UniqueName: \"kubernetes.io/projected/b222b380-0fca-4527-9b05-ce205a8ba0e2-kube-api-access-rdchp\") pod \"b222b380-0fca-4527-9b05-ce205a8ba0e2\" (UID: \"b222b380-0fca-4527-9b05-ce205a8ba0e2\") "
Nov 22 09:19:30 crc kubenswrapper[4853]: I1122 09:19:30.297530 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b222b380-0fca-4527-9b05-ce205a8ba0e2-catalog-content\") pod \"b222b380-0fca-4527-9b05-ce205a8ba0e2\" (UID: \"b222b380-0fca-4527-9b05-ce205a8ba0e2\") "
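
Sequence note: the API DELETE for community-operators-6hk2v lands at 09:19:27.459795, and the kubelet kills registry-server at 09:19:29 with gracePeriod=2. The 2s value presumably reflects the pod's terminationGracePeriodSeconds (the manifest itself is not in this log); the subsequent ContainerDied with exitCode=0 shows the server shut down cleanly on SIGTERM inside that window, so no SIGKILL was needed.
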
Nov 22 09:19:30 crc kubenswrapper[4853]: I1122 09:19:30.298072 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b222b380-0fca-4527-9b05-ce205a8ba0e2-utilities" (OuterVolumeSpecName: "utilities") pod "b222b380-0fca-4527-9b05-ce205a8ba0e2" (UID: "b222b380-0fca-4527-9b05-ce205a8ba0e2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 22 09:19:30 crc kubenswrapper[4853]: I1122 09:19:30.298842 4853 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b222b380-0fca-4527-9b05-ce205a8ba0e2-utilities\") on node \"crc\" DevicePath \"\""
Nov 22 09:19:30 crc kubenswrapper[4853]: I1122 09:19:30.313473 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b222b380-0fca-4527-9b05-ce205a8ba0e2-kube-api-access-rdchp" (OuterVolumeSpecName: "kube-api-access-rdchp") pod "b222b380-0fca-4527-9b05-ce205a8ba0e2" (UID: "b222b380-0fca-4527-9b05-ce205a8ba0e2"). InnerVolumeSpecName "kube-api-access-rdchp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 22 09:19:30 crc kubenswrapper[4853]: I1122 09:19:30.349568 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b222b380-0fca-4527-9b05-ce205a8ba0e2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b222b380-0fca-4527-9b05-ce205a8ba0e2" (UID: "b222b380-0fca-4527-9b05-ce205a8ba0e2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 22 09:19:30 crc kubenswrapper[4853]: I1122 09:19:30.400955 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rdchp\" (UniqueName: \"kubernetes.io/projected/b222b380-0fca-4527-9b05-ce205a8ba0e2-kube-api-access-rdchp\") on node \"crc\" DevicePath \"\""
Nov 22 09:19:30 crc kubenswrapper[4853]: I1122 09:19:30.400991 4853 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b222b380-0fca-4527-9b05-ce205a8ba0e2-catalog-content\") on node \"crc\" DevicePath \"\""
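
Teardown above follows the kubelet's standard order: the reconciler starts UnmountVolume for each volume, operation_generator performs TearDown, and the volume is then reported detached with an empty DevicePath. The empty DevicePath is expected for emptyDir and projected volumes, since both are node-local directories with no block device behind them; the emptyDir contents (the unpacked catalog) are deleted along with the pod.
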
Nov 22 09:19:31 crc kubenswrapper[4853]: I1122 09:19:31.181372 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6hk2v"
Nov 22 09:19:31 crc kubenswrapper[4853]: I1122 09:19:31.235797 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6hk2v"]
Nov 22 09:19:31 crc kubenswrapper[4853]: I1122 09:19:31.248844 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-6hk2v"]
Nov 22 09:19:31 crc kubenswrapper[4853]: I1122 09:19:31.764165 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b222b380-0fca-4527-9b05-ce205a8ba0e2" path="/var/lib/kubelet/pods/b222b380-0fca-4527-9b05-ce205a8ba0e2/volumes"
Nov 22 09:19:37 crc kubenswrapper[4853]: I1122 09:19:37.779938 4853 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-mkqhj" podUID="574b712c-7fbd-4822-ab9f-f76f0833c50c" containerName="registry-server" probeResult="failure" output=<
Nov 22 09:19:37 crc kubenswrapper[4853]: timeout: failed to connect service ":50051" within 1s
Nov 22 09:19:37 crc kubenswrapper[4853]: >
Nov 22 09:19:47 crc kubenswrapper[4853]: I1122 09:19:47.776176 4853 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-mkqhj" podUID="574b712c-7fbd-4822-ab9f-f76f0833c50c" containerName="registry-server" probeResult="failure" output=<
Nov 22 09:19:47 crc kubenswrapper[4853]: timeout: failed to connect service ":50051" within 1s
Nov 22 09:19:47 crc kubenswrapper[4853]: >
Nov 22 09:19:56 crc kubenswrapper[4853]: I1122 09:19:56.779303 4853 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-mkqhj"
Nov 22 09:19:56 crc kubenswrapper[4853]: I1122 09:19:56.846630 4853 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-mkqhj"
Nov 22 09:19:57 crc kubenswrapper[4853]: I1122 09:19:57.022423 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-mkqhj"]
Nov 22 09:19:58 crc kubenswrapper[4853]: I1122 09:19:58.459924 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-mkqhj" podUID="574b712c-7fbd-4822-ab9f-f76f0833c50c" containerName="registry-server" containerID="cri-o://9be6ba5c6630ee6cf40af32cff70add7110047161421a5a53f7276029e6a2a57" gracePeriod=2
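
The redhat-operators-mkqhj startup probe fails at 09:19:27, 09:19:37 and 09:19:47, then reports started by 09:19:56: a 10s period with a 1s timeout. A probe spec consistent with that cadence, sketched with k8s.io/api; the exact command and failureThreshold are assumptions inferred from the log, not read from the actual CatalogSource pod:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        // Hypothetical startup probe matching the observed cadence: exec
        // a gRPC health check against :50051 every 10s with a 1s timeout.
        probe := corev1.Probe{
            ProbeHandler: corev1.ProbeHandler{
                Exec: &corev1.ExecAction{
                    Command: []string{"grpc_health_probe", "-addr=:50051"},
                },
            },
            PeriodSeconds:    10,
            TimeoutSeconds:   1,
            FailureThreshold: 15,
        }
        fmt.Printf("%+v\n", probe)
    }
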
Nov 22 09:19:59 crc kubenswrapper[4853]: I1122 09:19:59.264945 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mkqhj"
Nov 22 09:19:59 crc kubenswrapper[4853]: I1122 09:19:59.308930 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/574b712c-7fbd-4822-ab9f-f76f0833c50c-catalog-content\") pod \"574b712c-7fbd-4822-ab9f-f76f0833c50c\" (UID: \"574b712c-7fbd-4822-ab9f-f76f0833c50c\") "
Nov 22 09:19:59 crc kubenswrapper[4853]: I1122 09:19:59.309040 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/574b712c-7fbd-4822-ab9f-f76f0833c50c-utilities\") pod \"574b712c-7fbd-4822-ab9f-f76f0833c50c\" (UID: \"574b712c-7fbd-4822-ab9f-f76f0833c50c\") "
Nov 22 09:19:59 crc kubenswrapper[4853]: I1122 09:19:59.309196 4853 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2mfgg\" (UniqueName: \"kubernetes.io/projected/574b712c-7fbd-4822-ab9f-f76f0833c50c-kube-api-access-2mfgg\") pod \"574b712c-7fbd-4822-ab9f-f76f0833c50c\" (UID: \"574b712c-7fbd-4822-ab9f-f76f0833c50c\") "
Nov 22 09:19:59 crc kubenswrapper[4853]: I1122 09:19:59.309849 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/574b712c-7fbd-4822-ab9f-f76f0833c50c-utilities" (OuterVolumeSpecName: "utilities") pod "574b712c-7fbd-4822-ab9f-f76f0833c50c" (UID: "574b712c-7fbd-4822-ab9f-f76f0833c50c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 22 09:19:59 crc kubenswrapper[4853]: I1122 09:19:59.325394 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/574b712c-7fbd-4822-ab9f-f76f0833c50c-kube-api-access-2mfgg" (OuterVolumeSpecName: "kube-api-access-2mfgg") pod "574b712c-7fbd-4822-ab9f-f76f0833c50c" (UID: "574b712c-7fbd-4822-ab9f-f76f0833c50c"). InnerVolumeSpecName "kube-api-access-2mfgg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 22 09:19:59 crc kubenswrapper[4853]: I1122 09:19:59.401118 4853 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/574b712c-7fbd-4822-ab9f-f76f0833c50c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "574b712c-7fbd-4822-ab9f-f76f0833c50c" (UID: "574b712c-7fbd-4822-ab9f-f76f0833c50c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 22 09:19:59 crc kubenswrapper[4853]: I1122 09:19:59.412536 4853 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/574b712c-7fbd-4822-ab9f-f76f0833c50c-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 22 09:19:59 crc kubenswrapper[4853]: I1122 09:19:59.412588 4853 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/574b712c-7fbd-4822-ab9f-f76f0833c50c-utilities\") on node \"crc\" DevicePath \"\""
Nov 22 09:19:59 crc kubenswrapper[4853]: I1122 09:19:59.412599 4853 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2mfgg\" (UniqueName: \"kubernetes.io/projected/574b712c-7fbd-4822-ab9f-f76f0833c50c-kube-api-access-2mfgg\") on node \"crc\" DevicePath \"\""
Nov 22 09:19:59 crc kubenswrapper[4853]: I1122 09:19:59.476006 4853 generic.go:334] "Generic (PLEG): container finished" podID="574b712c-7fbd-4822-ab9f-f76f0833c50c" containerID="9be6ba5c6630ee6cf40af32cff70add7110047161421a5a53f7276029e6a2a57" exitCode=0
Nov 22 09:19:59 crc kubenswrapper[4853]: I1122 09:19:59.476090 4853 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mkqhj"
Nov 22 09:19:59 crc kubenswrapper[4853]: I1122 09:19:59.476079 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mkqhj" event={"ID":"574b712c-7fbd-4822-ab9f-f76f0833c50c","Type":"ContainerDied","Data":"9be6ba5c6630ee6cf40af32cff70add7110047161421a5a53f7276029e6a2a57"}
Nov 22 09:19:59 crc kubenswrapper[4853]: I1122 09:19:59.476255 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mkqhj" event={"ID":"574b712c-7fbd-4822-ab9f-f76f0833c50c","Type":"ContainerDied","Data":"3c03effe2422baff869da3b5ceb12bbb414d8cf84ad59d67ff6dd46294f50578"}
Nov 22 09:19:59 crc kubenswrapper[4853]: I1122 09:19:59.476318 4853 scope.go:117] "RemoveContainer" containerID="9be6ba5c6630ee6cf40af32cff70add7110047161421a5a53f7276029e6a2a57"
Nov 22 09:19:59 crc kubenswrapper[4853]: I1122 09:19:59.505010 4853 scope.go:117] "RemoveContainer" containerID="94f340c390cafe5f04a546479bb5b2d850c0ea33296c0460f2ab4800e3993de6"
Nov 22 09:19:59 crc kubenswrapper[4853]: I1122 09:19:59.520874 4853 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-mkqhj"]
Nov 22 09:19:59 crc kubenswrapper[4853]: I1122 09:19:59.536039 4853 scope.go:117] "RemoveContainer" containerID="b25f247553961adf9d3948dbcc76d0096d2f43222f1e262c6312a070022fe9c0"
Nov 22 09:19:59 crc kubenswrapper[4853]: I1122 09:19:59.536354 4853 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-mkqhj"]
Nov 22 09:19:59 crc kubenswrapper[4853]: I1122 09:19:59.597347 4853 scope.go:117] "RemoveContainer" containerID="9be6ba5c6630ee6cf40af32cff70add7110047161421a5a53f7276029e6a2a57"
Nov 22 09:19:59 crc kubenswrapper[4853]: E1122 09:19:59.597688 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9be6ba5c6630ee6cf40af32cff70add7110047161421a5a53f7276029e6a2a57\": container with ID starting with 9be6ba5c6630ee6cf40af32cff70add7110047161421a5a53f7276029e6a2a57 not found: ID does not exist" containerID="9be6ba5c6630ee6cf40af32cff70add7110047161421a5a53f7276029e6a2a57"
Nov 22 09:19:59 crc kubenswrapper[4853]: I1122 09:19:59.597737 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9be6ba5c6630ee6cf40af32cff70add7110047161421a5a53f7276029e6a2a57"} err="failed to get container status \"9be6ba5c6630ee6cf40af32cff70add7110047161421a5a53f7276029e6a2a57\": rpc error: code = NotFound desc = could not find container \"9be6ba5c6630ee6cf40af32cff70add7110047161421a5a53f7276029e6a2a57\": container with ID starting with 9be6ba5c6630ee6cf40af32cff70add7110047161421a5a53f7276029e6a2a57 not found: ID does not exist"
Nov 22 09:19:59 crc kubenswrapper[4853]: I1122 09:19:59.597810 4853 scope.go:117] "RemoveContainer" containerID="94f340c390cafe5f04a546479bb5b2d850c0ea33296c0460f2ab4800e3993de6"
Nov 22 09:19:59 crc kubenswrapper[4853]: E1122 09:19:59.598194 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"94f340c390cafe5f04a546479bb5b2d850c0ea33296c0460f2ab4800e3993de6\": container with ID starting with 94f340c390cafe5f04a546479bb5b2d850c0ea33296c0460f2ab4800e3993de6 not found: ID does not exist" containerID="94f340c390cafe5f04a546479bb5b2d850c0ea33296c0460f2ab4800e3993de6"
Nov 22 09:19:59 crc kubenswrapper[4853]: I1122 09:19:59.598227 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"94f340c390cafe5f04a546479bb5b2d850c0ea33296c0460f2ab4800e3993de6"} err="failed to get container status \"94f340c390cafe5f04a546479bb5b2d850c0ea33296c0460f2ab4800e3993de6\": rpc error: code = NotFound desc = could not find container \"94f340c390cafe5f04a546479bb5b2d850c0ea33296c0460f2ab4800e3993de6\": container with ID starting with 94f340c390cafe5f04a546479bb5b2d850c0ea33296c0460f2ab4800e3993de6 not found: ID does not exist"
Nov 22 09:19:59 crc kubenswrapper[4853]: I1122 09:19:59.598244 4853 scope.go:117] "RemoveContainer" containerID="b25f247553961adf9d3948dbcc76d0096d2f43222f1e262c6312a070022fe9c0"
Nov 22 09:19:59 crc kubenswrapper[4853]: E1122 09:19:59.598945 4853 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b25f247553961adf9d3948dbcc76d0096d2f43222f1e262c6312a070022fe9c0\": container with ID starting with b25f247553961adf9d3948dbcc76d0096d2f43222f1e262c6312a070022fe9c0 not found: ID does not exist" containerID="b25f247553961adf9d3948dbcc76d0096d2f43222f1e262c6312a070022fe9c0"
Nov 22 09:19:59 crc kubenswrapper[4853]: I1122 09:19:59.599037 4853 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b25f247553961adf9d3948dbcc76d0096d2f43222f1e262c6312a070022fe9c0"} err="failed to get container status \"b25f247553961adf9d3948dbcc76d0096d2f43222f1e262c6312a070022fe9c0\": rpc error: code = NotFound desc = could not find container \"b25f247553961adf9d3948dbcc76d0096d2f43222f1e262c6312a070022fe9c0\": container with ID starting with b25f247553961adf9d3948dbcc76d0096d2f43222f1e262c6312a070022fe9c0 not found: ID does not exist"
Nov 22 09:19:59 crc kubenswrapper[4853]: I1122 09:19:59.769100 4853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="574b712c-7fbd-4822-ab9f-f76f0833c50c" path="/var/lib/kubelet/pods/574b712c-7fbd-4822-ab9f-f76f0833c50c/volumes"
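
The three ContainerStatus/DeleteContainer error pairs above are benign: the containers had already been removed by the time the deletor re-checked them (the pod was being torn down concurrently), so the runtime answers NotFound and the kubelet logs the error and moves on. The usual pattern for keeping such deletes idempotent against a gRPC runtime API is to treat codes.NotFound as success; a small sketch with a hypothetical deletion callback:

    package main

    import (
        "fmt"

        "google.golang.org/grpc/codes"
        "google.golang.org/grpc/status"
    )

    // removeIfPresent makes container deletion idempotent: a gRPC NotFound
    // from the runtime means the container is already gone, which is the
    // outcome we wanted anyway. remove is a hypothetical callback.
    func removeIfPresent(remove func(id string) error, id string) error {
        err := remove(id)
        if status.Code(err) == codes.NotFound {
            fmt.Printf("container %s already gone; treating as removed\n", id)
            return nil
        }
        return err
    }

    func main() {
        alreadyGone := func(id string) error {
            return status.Error(codes.NotFound, "could not find container "+id)
        }
        if err := removeIfPresent(alreadyGone, "9be6ba5c6630"); err != nil {
            fmt.Println("unexpected:", err)
        }
    }
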
Nov 22 09:20:01 crc kubenswrapper[4853]: I1122 09:20:01.297608 4853 patch_prober.go:28] interesting pod/machine-config-daemon-fflvd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 22 09:20:01 crc kubenswrapper[4853]: I1122 09:20:01.297975 4853 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 22 09:20:31 crc kubenswrapper[4853]: I1122 09:20:31.297433 4853 patch_prober.go:28] interesting pod/machine-config-daemon-fflvd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 22 09:20:31 crc kubenswrapper[4853]: I1122 09:20:31.298064 4853 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 22 09:21:01 crc kubenswrapper[4853]: I1122 09:21:01.297959 4853 patch_prober.go:28] interesting pod/machine-config-daemon-fflvd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 22 09:21:01 crc kubenswrapper[4853]: I1122 09:21:01.298591 4853 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 22 09:21:01 crc kubenswrapper[4853]: I1122 09:21:01.298636 4853 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fflvd"
Nov 22 09:21:01 crc kubenswrapper[4853]: I1122 09:21:01.300120 4853 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9565704f2aede7f57e782b8c5835deae633e79252927251bdaa026f251bed3e7"} pod="openshift-machine-config-operator/machine-config-daemon-fflvd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Nov 22 09:21:01 crc kubenswrapper[4853]: I1122 09:21:01.300217 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" containerName="machine-config-daemon" containerID="cri-o://9565704f2aede7f57e782b8c5835deae633e79252927251bdaa026f251bed3e7" gracePeriod=600
Nov 22 09:21:01 crc kubenswrapper[4853]: I1122 09:21:01.500652 4853 generic.go:334] "Generic (PLEG): container finished" podID="476c875a-2b87-419a-8042-0ba059620fd8" containerID="9565704f2aede7f57e782b8c5835deae633e79252927251bdaa026f251bed3e7" exitCode=0
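
The machine-config-daemon liveness probe is a plain HTTP GET against 127.0.0.1:8798/health; "connection refused" means nothing is listening on the port at all, rather than the handler returning an error. A minimal client-side equivalent of what the prober does:

    package main

    import (
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{Timeout: time.Second}
        // Same endpoint the kubelet probes in the entries above.
        resp, err := client.Get("http://127.0.0.1:8798/health")
        if err != nil {
            // "connect: connection refused" lands here when no listener is up.
            fmt.Println("probe failed:", err)
            return
        }
        defer resp.Body.Close()
        fmt.Println("probe status:", resp.Status)
    }
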
Nov 22 09:21:01 crc kubenswrapper[4853]: I1122 09:21:01.500726 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" event={"ID":"476c875a-2b87-419a-8042-0ba059620fd8","Type":"ContainerDied","Data":"9565704f2aede7f57e782b8c5835deae633e79252927251bdaa026f251bed3e7"}
Nov 22 09:21:01 crc kubenswrapper[4853]: I1122 09:21:01.501045 4853 scope.go:117] "RemoveContainer" containerID="29225cba0575a3d4b630f8271793742f602b91f03b413d4d79e223b8328f134c"
Nov 22 09:21:02 crc kubenswrapper[4853]: I1122 09:21:02.512815 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" event={"ID":"476c875a-2b87-419a-8042-0ba059620fd8","Type":"ContainerStarted","Data":"4300f372989cca3b317ce99c1c43b8692a686c270fd663955138f977fb6a846e"}
Nov 22 09:23:01 crc kubenswrapper[4853]: I1122 09:23:01.297848 4853 patch_prober.go:28] interesting pod/machine-config-daemon-fflvd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 22 09:23:01 crc kubenswrapper[4853]: I1122 09:23:01.298738 4853 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 22 09:23:31 crc kubenswrapper[4853]: I1122 09:23:31.297369 4853 patch_prober.go:28] interesting pod/machine-config-daemon-fflvd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 22 09:23:31 crc kubenswrapper[4853]: I1122 09:23:31.298042 4853 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 22 09:24:01 crc kubenswrapper[4853]: I1122 09:24:01.298038 4853 patch_prober.go:28] interesting pod/machine-config-daemon-fflvd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 22 09:24:01 crc kubenswrapper[4853]: I1122 09:24:01.298502 4853 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 22 09:24:01 crc kubenswrapper[4853]: I1122 09:24:01.298551 4853 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fflvd"
Nov 22 09:24:01 crc kubenswrapper[4853]: I1122 09:24:01.299433 4853 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4300f372989cca3b317ce99c1c43b8692a686c270fd663955138f977fb6a846e"} pod="openshift-machine-config-operator/machine-config-daemon-fflvd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Nov 22 09:24:01 crc kubenswrapper[4853]: I1122 09:24:01.299486 4853 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8" containerName="machine-config-daemon" containerID="cri-o://4300f372989cca3b317ce99c1c43b8692a686c270fd663955138f977fb6a846e" gracePeriod=600
Nov 22 09:24:01 crc kubenswrapper[4853]: E1122 09:24:01.421908 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8"
Nov 22 09:24:02 crc kubenswrapper[4853]: I1122 09:24:02.438329 4853 generic.go:334] "Generic (PLEG): container finished" podID="476c875a-2b87-419a-8042-0ba059620fd8" containerID="4300f372989cca3b317ce99c1c43b8692a686c270fd663955138f977fb6a846e" exitCode=0
Nov 22 09:24:02 crc kubenswrapper[4853]: I1122 09:24:02.438403 4853 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" event={"ID":"476c875a-2b87-419a-8042-0ba059620fd8","Type":"ContainerDied","Data":"4300f372989cca3b317ce99c1c43b8692a686c270fd663955138f977fb6a846e"}
Nov 22 09:24:02 crc kubenswrapper[4853]: I1122 09:24:02.438484 4853 scope.go:117] "RemoveContainer" containerID="9565704f2aede7f57e782b8c5835deae633e79252927251bdaa026f251bed3e7"
Nov 22 09:24:02 crc kubenswrapper[4853]: I1122 09:24:02.439894 4853 scope.go:117] "RemoveContainer" containerID="4300f372989cca3b317ce99c1c43b8692a686c270fd663955138f977fb6a846e"
Nov 22 09:24:02 crc kubenswrapper[4853]: E1122 09:24:02.440356 4853 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fflvd_openshift-machine-config-operator(476c875a-2b87-419a-8042-0ba059620fd8)\"" pod="openshift-machine-config-operator/machine-config-daemon-fflvd" podUID="476c875a-2b87-419a-8042-0ba059620fd8"
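
The "back-off 5m0s" in the two pod_workers errors above is the kubelet's container-restart backoff at its documented cap: it starts at 10s, doubles on each crash, tops out at 5m, and resets after a container has run for 10 minutes. A sketch of that schedule:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Kubelet restart backoff as documented: start at 10s, double per
        // crash, cap at 5m (the "back-off 5m0s" seen above). The counter
        // resets once a container has run cleanly for 10 minutes.
        const maxBackoff = 5 * time.Minute
        delay := 10 * time.Second
        for restart := 1; restart <= 7; restart++ {
            fmt.Printf("restart %d: wait %v\n", restart, delay)
            delay *= 2
            if delay > maxBackoff {
                delay = maxBackoff
            }
        }
    }
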